The European Union’s AI Act — the world’s first comprehensive legal framework for artificial intelligence — reaches its first major enforcement milestone in eight days. By May 2, 2026, any organization operating a high-risk AI system within the EU must register that system in the official EU AI Database, a centralized public registry maintained by the European AI Office in Brussels.

Fewer than 40% of affected organizations have completed the process, according to a survey of 620 EU compliance officers conducted by law firm Bird & Bird in April 2026. The remaining 60% cite fragmented internal AI inventories, unclear product categorization, and insufficient legal guidance as the primary barriers.

What Counts as High-Risk

Under Annex III of the AI Act, high-risk systems span eight domains: biometric identification, critical infrastructure management, education and vocational training, employment and HR screening, access to essential services (credit, insurance), law enforcement, migration and asylum, and the administration of justice. The breadth of this list sweeps in many enterprise software products whose developers never thought of them as AI at all.

A bank’s credit-scoring engine qualifies. So does an HR vendor’s CV-screening tool, a hospital’s diagnostic image classifier, and any municipal CCTV system with automated behavior detection. The European AI Office estimates 22,000 to 28,000 individual high-risk system registrations are required across the bloc.
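For compliance teams triaging an internal AI inventory, the Annex III domains above reduce to a membership check. The sketch below is purely illustrative: the domain labels and inventory entries are hypothetical shorthand, not an official taxonomy from the Act.

```python
# Hypothetical triage helper: flags systems whose domain falls under one of
# the eight Annex III high-risk categories. Labels are illustrative only.
ANNEX_III_DOMAINS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_vocational_training",
    "employment_hr_screening",
    "essential_services_access",   # e.g. credit scoring, insurance pricing
    "law_enforcement",
    "migration_asylum",
    "administration_of_justice",
}

def needs_registration(system: dict) -> bool:
    """Return True if the system's domain is an Annex III high-risk domain."""
    return system["domain"] in ANNEX_III_DOMAINS

# A toy inventory: two high-risk systems and one out of scope.
inventory = [
    {"name": "credit-scoring-engine", "domain": "essential_services_access"},
    {"name": "cv-screening-tool", "domain": "employment_hr_screening"},
    {"name": "marketing-recommender", "domain": "advertising"},
]

to_register = [s["name"] for s in inventory if needs_registration(s)]
print(to_register)  # ['credit-scoring-engine', 'cv-screening-tool']
```

In real inventories the hard part is assigning the domain in the first place, which is exactly the "unclear product categorization" barrier the survey respondents describe.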

As of April 24, 2026, roughly 8,600 registrations have been submitted — approximately 35% of the projected total. The shortfall is concentrated in SMEs, which lack the legal and technical teams to navigate what critics describe as a complex, multi-step registration workflow.

The Registration Process and Penalties

Registration requires submitting a standardized technical dossier through the EU AI Database portal (ai-database.eu), including a description of the system’s intended purpose, risk management approach, training data documentation, human oversight mechanisms, and conformity assessment results. For systems that already carry CE marking (common in medical devices and industrial machinery), existing notified-body certification can partially substitute.
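In practice, assembling the dossier fields named above usually starts as a structured internal document long before anything reaches the portal. A minimal sketch, assuming a team serializes those fields to JSON for internal review; the field names here are assumptions for illustration, and the actual submission schema is defined by the EU AI Database, not by this example:

```python
import json

def build_dossier(name, purpose, risk_mgmt, training_data_doc,
                  oversight, conformity_result, ce_marked=False):
    """Assemble the dossier fields described in the registration process
    as a JSON string. Field names are illustrative, not the portal schema."""
    dossier = {
        "system_name": name,
        "intended_purpose": purpose,
        "risk_management_approach": risk_mgmt,
        "training_data_documentation": training_data_doc,
        "human_oversight_mechanisms": oversight,
        "conformity_assessment": conformity_result,
        # Existing CE marking / notified-body certification can partially
        # substitute for parts of the conformity assessment.
        "ce_marking": ce_marked,
    }
    return json.dumps(dossier, indent=2)

print(build_dossier(
    name="credit-scoring-engine",
    purpose="Consumer creditworthiness assessment",
    risk_mgmt="Documented risk management process with periodic review",
    training_data_doc="Datasheet covering sources, coverage, and known gaps",
    oversight="Human review of all adverse decisions",
    conformity_result="Internal conformity assessment completed",
))
```

Keeping the dossier as structured data rather than prose makes it easier to re-submit across multiple member states, the multi-jurisdiction problem Siemens describes below.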

Non-compliance carries substantial penalties. Deploying an unregistered high-risk AI system in the EU after May 2 exposes operators to fines of up to €15 million or 3% of global annual turnover, whichever is higher, under the Act's penalty tier for breaches of high-risk obligations. Providers of general-purpose AI models with systemic risk (those trained with more than 10^25 FLOPs of compute) face separate obligations, including mandatory incident reporting and model evaluation requirements, which took effect in February 2026.
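For context on the systemic-risk threshold, a common way to estimate whether a model clears 10^25 FLOPs is the rough approximation of about 6 FLOPs per parameter per training token for dense transformers. That heuristic is not part of the Act, which counts cumulative training compute; the figures below are a back-of-the-envelope sketch only.

```python
# Back-of-the-envelope check against the 1e25 FLOP systemic-risk threshold,
# using the common ~6 * params * tokens estimate for dense transformer
# training compute. This is an approximation, not a regulatory formula.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

def has_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the 1e25 FLOP threshold."""
    return training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 70B-parameter model on 15T tokens: ~6.3e24 FLOPs, under the threshold.
print(has_systemic_risk(70e9, 15e12))   # False

# A 400B-parameter model on 15T tokens: ~3.6e25 FLOPs, over the threshold.
print(has_systemic_risk(400e9, 15e12))  # True
```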

“The May deadline is real and the penalties are real,” said Dr. Hanna Müller, head of AI regulatory affairs at Siemens, in a statement to Reuters. “We have 47 systems to register across 14 EU member states. The portal works, but it is not fast.”

Big Tech Response

Microsoft, Google, and Meta have each published compliance roadmaps. Microsoft disclosed in its April 22 earnings call that Azure AI services and Copilot products operating in regulated EU sectors have been pre-assessed for high-risk classification, with 31 systems identified for mandatory registration. The company said it expects to complete submissions by April 30.

Google has retained external auditors Bureau Veritas to conduct conformity assessments on Vertex AI products deployed by EU healthcare and financial services customers. Meta, whose AI systems have a narrower EU enterprise footprint, reported nine systems requiring registration.

For companies that miss the deadline, the EU AI Office has signaled it will initially focus enforcement on sectors with the highest societal impact — healthcare, law enforcement, and financial services — rather than blanket prosecutions across all industries. A three-month grace period for “good faith registrations in progress” is under discussion but has not been formally adopted.

The May 2 deadline marks only the first act. Stricter provisions — including requirements for Fundamental Rights Impact Assessments and expanded third-party audit obligations — come into force progressively through 2027, giving compliance teams little time to rest after the registration sprint.

Lois Vance

Contributing writer at Clarqo, covering technology, AI, and the digital economy.