The countdown is running. On August 2, 2026, the European Union’s AI Act will become fully applicable — making it the world’s first comprehensive legal framework governing artificial intelligence across all risk categories. With fewer than 110 days to go, compliance pressure is mounting for enterprises operating in the EU, while the United States government has sharpened its criticism of the European approach, warning that overregulation risks what US officials have called “civilizational erasure.”
What Full Enforcement Means
The AI Act has been phasing in since its passage. Prohibitions on AI practices the law deems an unacceptable risk — such as social scoring by governments and real-time remote biometric identification in publicly accessible spaces — entered into force in February 2025. (The Act reserves the term "high-risk" for a separate, permitted-but-regulated tier.) Governance rules for general-purpose AI models, covering systems like large language models, applied in August 2025.
From August 2026, the remaining provisions kick in simultaneously. The most consequential for businesses: every AI system used in recruitment, task allocation, or employee performance monitoring will be classified as "high-risk." That classification triggers mandatory obligations — risk assessments, technical documentation, bias testing, human oversight mechanisms, transparency disclosures, and continuous post-deployment monitoring. Non-compliance carries fines of up to €15 million or 3% of global annual turnover, whichever is higher.
The EU estimates that hundreds of thousands of AI deployments across member states fall into the high-risk category. Many organizations — particularly mid-sized enterprises and HR technology vendors — are behind on readiness.
The “Digital Omnibus” Controversy
What should be a moment of regulatory clarity is complicated by the EU’s own internal maneuvering. Last November, the European Commission unveiled the so-called “Digital Omnibus” — a sweeping package of proposed amendments to major digital laws, including the AI Act and GDPR. Framed as a simplification measure aimed at reducing compliance burden for businesses, the proposal has drawn sharp criticism.
Amnesty International and a coalition of digital rights organizations published a joint statement this month arguing that the Omnibus would “systematically weaken the protections that make the AI Act meaningful.” Specific concerns include proposed rollbacks of transparency obligations for certain AI systems and narrowed definitions of what constitutes high-risk deployment. Critics argue the revisions were drafted with industry input at the expense of civil society.
The Omnibus remains subject to legislative process and is not expected to alter August’s enforcement deadline, but its trajectory will shape how the AI Act is interpreted and enforced in practice.
Transatlantic Friction Sharpens
The geopolitical dimension has become impossible to ignore. US trade officials and technology policy advisors have publicly pressured the EU to align its AI governance approach with the American model — which relies on sector-specific guidance rather than comprehensive legislation. One senior US official characterized European AI regulation as a mechanism for “protectionist overreach” that benefits incumbents and stifles startup formation.
The EU has pushed back. Several member states, led by France and Germany, have argued that the AI Act creates predictability and long-term trust that will attract investment rather than repel it. The American counter-argument: the US semiconductor and cloud industries, operating under lighter regulatory conditions, have generated the overwhelming majority of the world's frontier AI systems.
Meanwhile, US states are moving to fill the federal vacuum. California’s AI Transparency Act, Colorado’s AI Act, and Texas’s Responsible AI Governance Act are all shaping enforcement expectations at the state level — creating a fragmented domestic landscape that ironically mirrors some of the complexity US officials criticize in the EU.
The Stakes
How the EU AI Act performs in its full enforcement phase will have consequences well beyond Brussels. If the framework drives meaningful accountability without crippling innovation, it becomes a template for other jurisdictions. If it produces compliance theater while real AI development continues elsewhere, it may accelerate the regulatory divergence already underway. Either outcome will define the governance architecture of AI for the next decade.