Nine months after the EU AI Act’s General-Purpose AI (GPAI) provisions took formal effect in August 2025, European regulators are signaling that the grace period is over. The AI Office in Brussels has begun issuing formal requests for transparency documentation from major model providers — and the scramble to comply is exposing structural gaps in how the industry has approached governance.
What the GPAI Rules Actually Require
The AI Act’s GPAI chapter applies to general-purpose models made available in the EU. Providers are required to maintain detailed technical documentation, publish a sufficiently detailed summary of the content used for training (including copyright-relevant material), and implement policies to detect and prevent the generation of illegal content.
Models trained with more than 10^25 FLOPs of cumulative compute are presumed to pose “systemic risk,” a threshold that captures virtually every commercially significant frontier model on the market today. The classification is aimed broadly at capabilities that could affect critical infrastructure, democratic processes, or public safety at scale, and it carries an additional tier of obligations: adversarial testing (red-teaming), incident reporting to the AI Office within 72 hours of serious incidents, and annual third-party audits.
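For a rough sense of how that threshold is assessed in practice, engineers commonly estimate total training compute with the scaling-law rule of thumb FLOPs ≈ 6 × N × D, where N is the parameter count and D is the number of training tokens. The sketch below is a minimal illustration of that arithmetic; the model sizes and token counts are hypothetical placeholders, not figures from any provider.

```python
# Rough training-compute estimate using the common "6ND" approximation:
# FLOPs ~= 6 * N (parameters) * D (training tokens).
# All model figures below are illustrative placeholders, not real disclosures.

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption threshold under the AI Act

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6ND rule of thumb."""
    return 6 * params * tokens

examples = {
    "70B params, 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "400B params, 15T tokens": training_flops(400e9, 15e12),  # ~3.6e25
}

for label, flops in examples.items():
    status = "above" if flops > SYSTEMIC_RISK_THRESHOLD else "below"
    print(f"{label}: ~{flops:.1e} FLOPs ({status} the 1e25 threshold)")
```

The second hypothetical configuration lands above the presumption line while the first stays under it, which is exactly the kind of headroom calculation described later in this piece.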
According to documentation published by the EU AI Office in March 2026, at least 17 GPAI providers have been formally identified as falling under systemic-risk classification. The list includes models from OpenAI, Anthropic, Google DeepMind, Meta, Mistral, and xAI, among others.
The Compliance Gap
Industry observers and legal analysts estimate that fewer than half of affected providers have submitted complete technical documentation packages as required under Article 53. “What we’re seeing is a lot of summary documents that satisfy the letter of the requirement without the substance,” said one EU AI Office official speaking on background. “We expected this. The formal review process was always going to be the real test.”
OpenAI and Google have both published public-facing model cards and safety frameworks, but critics argue these fall short of the Act’s documentation standards, which require information about training data provenance, compute infrastructure, and known limitations at a level of granularity that most companies treat as proprietary. Anthropic has been notably more forthcoming, publishing its model specification and safety reasoning frameworks — though regulators are reportedly still seeking additional technical detail.
The stakes are substantial. Non-compliance carries fines of up to 3% of global annual turnover or €15 million, whichever is higher, a ceiling that also applies to failures to cooperate with investigations. For a company with OpenAI’s projected $12 billion in 2026 revenue, that exposure works out to $360 million.
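To make that arithmetic explicit, here is a minimal worked example of the fine ceiling. The revenue figure is the projection cited above; the euro-dollar exchange rate is an assumption for illustration only.

```python
# Worked example of the GPAI fine ceiling: up to 3% of total worldwide
# annual turnover or EUR 15 million, whichever is higher.
# Revenue is the projection cited in the article; the EUR/USD rate is
# an illustrative assumption.

ANNUAL_REVENUE_USD = 12e9  # projected 2026 revenue
EUR_USD = 1.08             # assumed exchange rate

turnover_cap = 0.03 * ANNUAL_REVENUE_USD  # 3% of turnover
floor_usd = 15e6 * EUR_USD                # EUR 15M floor, in USD

max_exposure = max(turnover_cap, floor_usd)
print(f"Maximum exposure: ${max_exposure:,.0f}")  # -> $360,000,000
```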
The Brussels Effect in Practice
The GPAI enforcement wave is already reshaping product decisions. Several mid-tier model providers have quietly withdrawn EU market access for their most capable models rather than face documentation requirements they cannot meet. One European AI startup told Clarqo it had restructured its model training pipeline specifically to stay below the 10^25 FLOP threshold — a calculation now baked into product roadmaps across the sector.
Larger players are taking a different approach. Microsoft has embedded a dedicated AI compliance team within its Brussels operations, with headcount now exceeding 40. Google DeepMind recently appointed its first Chief AI Compliance Officer, a role that did not exist 18 months ago. These investments signal that the industry has accepted the regime as permanent, even as lobbying efforts continue to push for technical clarifications in secondary legislation.
What Comes Next
The AI Office is expected to publish its first formal non-compliance findings in Q3 2026, with financial penalties likely to follow in early 2027 once appeal processes are exhausted. The more immediate pressure point may be the systemic-risk audit requirement: third-party auditors capable of assessing frontier AI systems are in short supply, and the AI Office has not yet published an approved auditor list — a gap that is creating genuine legal uncertainty for providers trying to comply in good faith.
For the AI industry, the EU AI Act is no longer a distant regulatory horizon. It is an operational reality, and the companies that treated compliance as a legal formality rather than a technical and organizational discipline are now facing the consequences. The first enforcement wave is unlikely to be the last.