EU AI Act 2025 Risk Report

Sep 16, 2025, by Larkspur International

As of September 2025, the EU Artificial Intelligence Act is no longer a distant regulation: it is binding law, with major obligations already in effect and others rapidly approaching. The Act introduces the world’s first comprehensive, risk-based framework for AI, aimed at protecting safety, fundamental rights, and trust while setting a global precedent for governance.

Several milestones have already reshaped the compliance landscape. Since February 2025, organisations have been prohibited from using so-called “unacceptable” AI practices, such as social scoring, manipulative techniques that cause significant harm, or biometric categorisation that infers sensitive traits. At the same time, companies became responsible for ensuring their staff have a basic level of AI literacy to handle these technologies responsibly. From August 2025, obligations for providers of general-purpose AI (GPAI) models took effect, including requirements for transparency, technical documentation, copyright compliance, and summaries of training content, with additional duties for models posing systemic risk. Crucially, non-compliance can now trigger penalties of up to €35 million or seven percent of global annual turnover, whichever is higher, placing the AI Act's financial exposure above even the GDPR's maximum fines.
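To make that penalty structure concrete, the short Python sketch below computes the fine ceiling as the greater of the flat €35 million cap and seven percent of turnover; the €2 billion turnover figure is a hypothetical example, not drawn from this report.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on AI Act fines for the most serious violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical firm with EUR 2 billion in global annual turnover:
# 7% of turnover (EUR 140 million) exceeds the flat cap, so it applies.
print(f"Maximum exposure: EUR {max_fine_eur(2_000_000_000):,.0f}")
```

For any firm with global turnover above €500 million, the seven-percent prong dominates, which is why the exposure is comparable to, and in fact exceeds, GDPR-scale risk.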

Looking ahead, August 2026 will mark the entry into application of the full regime for high-risk AI systems. These include applications in credit scoring, employment, border control, healthcare, and other sensitive domains. Providers and deployers in these areas will need to implement comprehensive risk-management systems, rigorous data governance, human oversight mechanisms, and ongoing monitoring. By August 2027, the transition period for older, pre-existing high-risk systems will expire, making compliance unavoidable across the board.
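Because these obligations phase in on fixed dates, the duties applicable at any point are a simple function of the calendar. The sketch below encodes the milestones described above as a lookup; the day-level dates (the 2nd of each month) follow the Act's published schedule, and the function itself is illustrative rather than legal advice.

```python
from datetime import date

# Application dates for the AI Act milestones discussed in this report.
MILESTONES = [
    (date(2025, 2, 2), "Prohibited-practice ban and AI literacy duties"),
    (date(2025, 8, 2), "Obligations for general-purpose AI model providers"),
    (date(2026, 8, 2), "Full regime for high-risk AI systems"),
    (date(2027, 8, 2), "End of transition for pre-existing high-risk systems"),
]

def obligations_in_effect(today: date) -> list[str]:
    """Return the milestone obligations already binding on a given date."""
    return [label for deadline, label in MILESTONES if today >= deadline]

# As of this report's publication date, the first two milestones apply.
for item in obligations_in_effect(date(2025, 9, 16)):
    print(item)
```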

The risks for organisations go beyond fines. Misclassifying systems, failing to maintain adequate documentation, or underestimating the resources needed for compliance can result in both legal and reputational harm. Companies also face pressure from high compliance costs, skills shortages, and lingering uncertainties around enforcement practices as national authorities scale up their capabilities.

To mitigate these risks, organisations should act now. Mapping all AI systems and assessing whether they fall under general-purpose or high-risk categories is the first step. Strengthening documentation and data provenance practices, investing in staff training, and embedding human oversight into critical decision-making processes are equally important. Proactively engaging with regulators and considering voluntary adoption of the GPAI Code of Practice can also provide strategic advantages.
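As a starting point for that mapping exercise, the sketch below shows one hypothetical way to structure an AI system inventory around the Act's risk categories; the system names, owners, and category assignments are illustrative only, since real classification requires case-by-case legal assessment.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "unacceptable practice (banned)"
    HIGH_RISK = "high-risk system"
    GPAI = "general-purpose AI model"
    MINIMAL = "minimal or limited risk"

@dataclass
class AISystemRecord:
    name: str                 # internal system identifier (hypothetical)
    purpose: str              # what the system does
    category: RiskCategory    # provisional classification, pending legal review
    documentation_owner: str  # team maintaining the technical file

# Illustrative inventory entries for a hypothetical organisation.
inventory = [
    AISystemRecord("cv-screener", "ranks job applicants", RiskCategory.HIGH_RISK, "HR compliance"),
    AISystemRecord("support-chatbot", "answers customer FAQs", RiskCategory.MINIMAL, "IT"),
    AISystemRecord("in-house-llm", "fine-tuned general-purpose model", RiskCategory.GPAI, "ML platform"),
]

# Systems that will need the full high-risk controls by August 2026.
high_risk = [s.name for s in inventory if s.category is RiskCategory.HIGH_RISK]
print("High-risk systems requiring full controls:", high_risk)
```

Keeping such a register current, with documentation ownership assigned per system, directly supports the documentation and oversight duties described above.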

In conclusion, the EU AI Act is not simply a compliance challenge: it is a framework for trustworthy AI. Early movers that align with its requirements will not only reduce exposure to regulatory and reputational risks but also gain a competitive edge in building safe, transparent, and reliable AI systems that the market can trust.