EU AI Act’s High-Risk Compliance Clock Is Running

Four months from now, on August 2, 2026, the EU AI Act’s Article 6 provisions become fully enforceable — bringing mandatory conformity assessments, registration in the EU’s new AI database, and fines that can reach €35 million or 7% of global annual turnover for the most serious breaches. For thousands of European and multinational companies deploying AI in regulated contexts, the window to get compliant is closing fast.
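For scale, that penalty ceiling is whichever of the two figures is greater. A minimal sketch of the arithmetic (illustrative only; the Act's actual penalty regime in Article 99 has several tiers below this maximum):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on an AI Act fine for the most serious breaches:
    the greater of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the higher figure applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

In other words, the €35 million figure is a floor on the maximum, not a cap: for any company with global turnover above €500 million, the 7% figure is the larger of the two.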

A survey published this week by the European AI Alliance estimated that fewer than one in five organisations subject to high-risk obligations have completed a conformity assessment (European AI Alliance, April 2026). Notified bodies — the independent auditors authorised to certify AI systems in categories such as biometrics, credit scoring, and employment screening — are already reporting booking queues stretching into Q3.

What “High-Risk” Actually Means in Practice

The Act identifies eight domains where AI systems are presumed high-risk under Annex III: biometric identification, critical infrastructure management, education and vocational training, employment and HR tools, access to essential private and public services, law enforcement, migration and asylum management, and administration of justice.

The scope is broader than many compliance teams initially anticipated. An HR platform that uses AI to rank job applicants, a bank’s credit decisioning model, or a diagnostic tool embedded in a medical device: each falls into Article 6 territory regardless of whether the system was built in-house or procured from a vendor.

Obligations include:

- a documented, continuously maintained risk management system (Article 9)
- data governance and quality controls for training, validation, and testing data (Article 10)
- technical documentation and automatic event logging (Articles 11–12)
- transparency and instructions for use for deployers (Article 13)
- human oversight measures (Article 14)
- accuracy, robustness, and cybersecurity requirements (Article 15)

Who Is Scrambling — and Why

The compliance burden falls disproportionately on mid-market software vendors that embedded AI features after August 2024 without tracking regulatory exposure. Enterprise buyers — large banks, insurers, hospital networks — are now inserting AI Act liability clauses into vendor contracts, forcing smaller suppliers to either certify or lose customers.

“We’re seeing a two-tier market emerge,” said one Brussels-based counsel advising on AI compliance. “Hyperscalers have armies of lawyers and can absorb the cost. A 40-person HR-tech startup cannot.” Analysts at Gartner projected last month that compliance costs for a typical high-risk AI system — including documentation, notified body fees, and legal review — will average €280,000 to €650,000 per product, depending on complexity.

The EU AI Office, which went live in May 2025 to oversee general-purpose AI model providers, has signalled it will begin supervisory visits to notified bodies in Q3 2026 and expects member-state market surveillance authorities to issue the first formal enforcement actions before year-end.

The August Deadline Is Not the Finish Line

Companies are also navigating a fragmented implementation landscape. France, Germany, and the Netherlands have each designated national supervisory authorities and published their own interpretive guidance — guidance that does not always align. The European Standardisation Organisations (CEN/CENELEC) are still finalising harmonised technical standards that will create a presumption of conformity, meaning some organisations are completing assessments against draft standards that could be revised before the August cutoff.

Notably, the Act’s transition relief is narrow. A high-risk system already on the market before August 2, 2026 escapes the new obligations only for as long as its design remains unchanged; any significant modification after that date triggers full compliance, and anything placed on the market from that date onward must be compliant from day one.

For compliance officers who haven’t started, the arithmetic is unforgiving: assessments take three to five months, notified bodies are backlogged, and the deadline is months away, not years. The companies that will fare best are the ones that treated the regulation not as a checkbox exercise but as an engineering and governance question from the moment they deployed a model into a consequential workflow.

Lois Vance

Contributing writer at Clarqo, covering technology, AI, and the digital economy.