When the European Union’s AI Act became the world’s first comprehensive AI law in August 2024, it set a brisk timetable. General Purpose AI model providers — the Anthropics, OpenAIs, Googles and Mistrals of the world — had exactly twelve months to get their houses in order. That deadline passed in August 2025. Now, eight months into live enforcement, the EU AI Office is beginning to show its teeth, and the compliance picture is messy — a picture that British firms serving European customers are watching with unusual care.
The Paper Mountain No One Saw Coming
The GPAI provisions sound straightforward on paper: providers of general-purpose AI models must publish technical documentation, implement copyright compliance policies for training data, and — if their model clears the 10²⁵ FLOP training compute threshold for “systemic risk” classification — submit to mandatory safety evaluations, adversarial testing, and incident reporting obligations.
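To make the compute threshold concrete, here is a minimal sketch using the widely cited ~6·N·D approximation for dense transformer training compute (N parameters, D training tokens). The heuristic, the function names, and the example figures are illustrative assumptions; the Act does not prescribe a particular estimation method.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP systemic-risk
# threshold, using the common ~6 * N * D heuristic for dense transformer
# training compute. An illustration, not the Act's own methodology.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flop(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Hypothetical example: a 400B-parameter model trained on 15T tokens.
flop = estimated_training_flop(400e9, 15e12)
print(f"{flop:.1e} FLOP -> systemic-risk tier: {crosses_threshold(400e9, 15e12)}")
# 3.6e+25 FLOP -> systemic-risk tier: True
```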
In practice, the technical documentation requirement alone has consumed legal, engineering, and policy teams at every major AI lab. The regulation demands model cards detailing training data provenance, evaluation benchmarks, known failure modes, and energy consumption figures across the full training run. For frontier models trained over months on clusters spanning hundreds of thousands of chips, assembling that documentation retroactively has proved genuinely difficult.
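For a sense of what such an artefact looks like when it does exist in structured form, here is a minimal sketch of a model-card record covering the categories the regulation names. The dataclass and field names are hypothetical, not an official EU schema.

```python
from dataclasses import dataclass, field

# Hypothetical model-card record covering the documentation categories
# the regulation names; field names are illustrative, not an EU schema.

@dataclass
class DataSource:
    name: str               # e.g. a crawled corpus or a licensed dataset
    legal_basis: str        # licence or exception relied on for training use
    opt_out_honoured: bool  # whether registered TDM opt-outs were respected

@dataclass
class ModelCard:
    model_name: str
    provenance: list[DataSource] = field(default_factory=list)
    benchmarks: dict[str, float] = field(default_factory=dict)  # benchmark -> score
    known_failure_modes: list[str] = field(default_factory=list)
    training_energy_kwh: float = 0.0  # aggregate energy across the full run
```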
“The regulation was written by lawyers and policy experts, not ML engineers,” one compliance officer at a major US AI lab told industry analysts in March. “Some of what it asks for simply doesn’t exist as a structured artefact you can hand over.”
The EU AI Office has placed more than 80 companies under preliminary review for potential GPAI violations, though formal enforcement proceedings, which can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher, have so far been reserved for the most egregious cases.
Copyright: The Fault Line Nobody Resolved
If technical documentation is the compliance headache, training data copyright is the migraine. The AI Act requires GPAI providers to implement a “state of the art” policy for honouring copyright opt-outs under EU law, including the text and data mining (TDM) rights reservation in Article 4 of the EU Copyright Directive (2019/790). Publishers, news organisations, and other rights-holders who have registered opt-outs must have those reservations respected.
The problem: there is no agreed standard for what “state of the art” means in this context. The EU AI Office issued draft guidance in November 2025, but it remains contested. Meanwhile, major AI providers have taken divergent approaches — some publishing exhaustive lists of crawled domains and their opt-out status, others relying on blanket contractual representations — creating an uneven compliance landscape that frustrated regulators are still trying to map.
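One widely deployed machine-readable opt-out signal is robots.txt, though whether honouring it alone clears the “state of the art” bar is precisely what the draft guidance leaves open. A minimal sketch of a crawl-manifest audit against that signal, assuming a hypothetical crawler user-agent:

```python
from urllib import robotparser

# Audits a crawl manifest against robots.txt, one common machine-readable
# opt-out signal. The user-agent below is a hypothetical example.

def may_crawl_for_training(domain: str, user_agent: str = "ExampleTrainingBot") -> bool:
    rp = robotparser.RobotFileParser()
    rp.set_url(f"https://{domain}/robots.txt")
    try:
        rp.read()  # network fetch of the live robots.txt
    except OSError:
        return False  # conservative default: unknown opt-out status is not consent
    return rp.can_fetch(user_agent, f"https://{domain}/")

# Placeholder domains standing in for a real crawl manifest.
for domain in ["example.com", "example.org"]:
    status = "crawlable" if may_crawl_for_training(domain) else "opted out / unknown"
    print(domain, "->", status)
```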
Europe’s publishing industry, which has been lobbying aggressively since the Act passed, estimates that fewer than 40% of GPAI models in commercial use today meet the spirit of the TDM opt-out rules. The AI Office has not confirmed that figure, but its public communications have grown noticeably sharper in tone since January. British publishers — including the Publishers Association and the News Media Association — have pressed Westminster for an equivalent domestic regime, citing a growing divergence between UK and EU rights protections post-Brexit.
Systemic Risk: The Tier That Changes Everything
For models classified as posing systemic risk — currently a small group that includes the most powerful frontier models from OpenAI, Google DeepMind, Anthropic, and Meta — obligations escalate sharply. These providers must conduct adversarial testing (“red-teaming”) using both internal teams and third-party evaluators, report serious incidents to the EU AI Office within 72 hours, and maintain ongoing monitoring of model behaviour in deployment.
The 72-hour incident reporting window has already produced friction. Several labs have argued that defining what constitutes a “serious incident” in an AI context is far more ambiguous than in, say, aviation or pharmaceuticals. Is a model generating misleading political content at scale a serious incident? What about a jailbreak that elicits synthesis instructions for dangerous chemicals? The AI Office’s interpretive guidance has not kept pace with the questions practitioners are raising in real deployments.
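The deadline arithmetic itself is trivial; the contested part is the classification step that starts the clock. A hypothetical triage helper, with the severity taxonomy invented purely for illustration:

```python
from datetime import datetime, timedelta, timezone
from enum import Enum

# Hypothetical internal triage helper. The 72-hour clock follows the
# reporting window described above; the severity taxonomy is invented,
# since the definition of a "serious incident" remains contested.

REPORTING_WINDOW = timedelta(hours=72)

class Severity(Enum):
    ROUTINE = "routine"  # logged internally, no regulator filing
    SERIOUS = "serious"  # starts the 72-hour reporting clock

def reporting_deadline(detected_at: datetime, severity: Severity) -> datetime | None:
    """Return the filing deadline for a serious incident, else None."""
    if severity is Severity.SERIOUS:
        return detected_at + REPORTING_WINDOW
    return None

detected = datetime(2026, 4, 10, 9, 30, tzinfo=timezone.utc)
print(reporting_deadline(detected, Severity.SERIOUS))  # 2026-04-13 09:30:00+00:00
```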
Despite these frictions, the systemic risk framework has driven meaningful investment in safety infrastructure. Multiple frontier labs have stood up dedicated EU compliance teams numbering in the dozens, and third-party AI evaluation firms, a category that barely existed in 2023, now compete for contracts worth tens of millions of euros. The UK’s AI Security Institute (established as the AI Safety Institute in 2023 and renamed in 2025) has emerged as one of the better-regarded third-party evaluators, and several frontier labs have routed evaluation work through London as part of their broader compliance posture.
The €4.5 Billion Question
Industry analysts at Oliver Wyman estimated in late 2025 that aggregate compliance costs for European companies under the AI Act — including both the GPAI provisions and the approaching August 2026 deadline for high-risk AI systems — will reach approximately €4.5 billion over the first three years of enforcement. The figure covers legal counsel, technical documentation, safety testing, and organisational restructuring.
For smaller European AI companies and startups, those costs represent an existential challenge. The European AI startup ecosystem, still smaller than its US and Chinese counterparts, faces a compliance burden designed with large, well-capitalised players in mind. Several vocal founders have argued that the Act’s requirements effectively create a moat for incumbents. The EU AI Office’s response — a simplified compliance pathway for low-risk models and a network of national AI regulatory sandboxes — has been welcomed in principle but criticised as underfunded in practice.
For the UK, the asymmetry creates an interesting strategic question. British AI firms selling into the EU are subject to the full Act regardless of post-Brexit regulatory sovereignty, while domestic-only UK deployments fall under the lighter-touch, sector-led approach favoured by the Department for Science, Innovation and Technology. A handful of founders have quietly welcomed the divergence as a competitive edge; others worry that London is slowly ceding the rule-setting role to Brussels by default.
What Comes Next
The next major milestone arrives in August 2026: the deadline for high-risk AI systems under Annex III of the Act. This covers AI used in employment decisions, credit scoring, educational assessment, biometric categorisation, and critical infrastructure management — a far larger universe of products than the GPAI provisions alone. The EU AI Office is already signalling that it will not grant extensions.
For now, the GPAI enforcement period is proving what many predicted but few wanted to say aloud: the world’s most ambitious AI regulation is colliding with the practical realities of how AI systems are actually built and deployed. The law is not wrong to ask hard questions. The challenge is building the institutional capacity — on both sides of the regulator-industry relationship — to answer them.
Sources: EU AI Office public communications; Oliver Wyman industry analysis (2025); company disclosure filings; interviews with AI policy practitioners.