For three years, the EU AI Act was a policy document on the horizon. As of this month, it has teeth. The regulation’s provisions governing high-risk AI systems entered their full enforcement phase in April 2026, marking the most significant legal intervention in artificial intelligence since GDPR reshaped data privacy globally in 2018. The first compliance audits are underway, the first penalties are being calculated, and the first legal challenges are beginning to form.

What Just Became Mandatory

The EU AI Act operates on a risk-tiered framework. Systems classified as “high-risk” — covering AI deployed in employment screening, credit scoring, biometric identification, critical infrastructure, and several healthcare applications — are now subject to mandatory conformity assessments, technical documentation requirements, human oversight protocols, and registration in the EU’s central AI database.
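For teams triaging their own deployments, the framework reduces to a lookup from use case to obligations. Here is a minimal sketch in Python that mirrors the summary above rather than the act’s full Annex III text; the names and the mapping are illustrative:

    # Illustrative triage against the high-risk categories and obligations
    # summarized above; not the act's full legal text.
    HIGH_RISK_USE_CASES = {
        "employment_screening", "credit_scoring", "biometric_identification",
        "critical_infrastructure", "healthcare",
    }

    HIGH_RISK_OBLIGATIONS = [
        "conformity assessment before deployment",
        "technical documentation",
        "human oversight protocols",
        "registration in the EU AI database",
    ]

    def obligations_for(use_case: str) -> list[str]:
        """Return the mandatory obligations if the use case is high-risk."""
        return HIGH_RISK_OBLIGATIONS if use_case in HIGH_RISK_USE_CASES else []

    print(obligations_for("credit_scoring"))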

For providers placing high-risk AI systems on the European market, the obligations are substantial. Conformity assessments must be completed before deployment, and for certain system categories, that means third-party audits by notified bodies — the same accreditation structure used for medical devices and industrial safety equipment. The European AI Office, established under the act, has authority to conduct market surveillance, demand documentation on 15 days’ notice, and impose fines.

The penalty structure is the sharpest tool in the act’s arsenal: up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices; up to €15 million or 3% for high-risk system violations; and up to €7.5 million or 1.5% for providing incorrect information to regulators. For context, a company with €10 billion in global annual turnover could face a fine of up to €700 million for engaging in a prohibited AI practice.
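For teams modeling their exposure, each tier’s ceiling is simply the higher of a fixed cap and a share of turnover. A minimal sketch in Python; the tier labels are ours, the figures are the act’s as listed above:

    # Fine ceilings per tier as described above: the higher of a fixed cap
    # and a percentage of global annual turnover. Tier labels are illustrative.
    FINE_TIERS = {
        "prohibited_practice": (35_000_000, 0.07),    # €35M or 7%
        "high_risk_violation": (15_000_000, 0.03),    # €15M or 3%
        "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5%
    }

    def max_fine(tier: str, global_turnover_eur: float) -> float:
        """Return the ceiling: whichever is higher, the cap or the turnover share."""
        cap, pct = FINE_TIERS[tier]
        return max(cap, pct * global_turnover_eur)

    # €10 billion turnover, prohibited-practice tier -> €700,000,000
    print(f"€{max_fine('prohibited_practice', 10e9):,.0f}")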

Who Is Exposed

The financial services sector faces the most concentrated near-term risk. Credit decisioning algorithms, insurance pricing models, and fraud detection systems all fall within the high-risk category — and many were built before the act’s technical standards were finalized. European Banking Authority guidance published in February estimated that approximately 40% of financial institutions operating AI-driven credit tools in the EU had not yet completed the required conformity documentation as of Q1 2026.

Recruiters and HR technology vendors are equally exposed. Any AI system that influences hiring decisions — resume screening, interview scoring, candidate ranking — qualifies as high-risk under Annex III of the act. Several major HR-tech SaaS vendors have already issued compliance advisories to their European customers, and at least two have quietly restricted EU access to certain product features pending audit completion.

Big Tech’s exposure is more complex. Under the act’s general-purpose AI (GPAI) provisions, which also came into force this year, frontier model providers whose training runs exceed 10^25 FLOPs of compute must publish model evaluations, red-teaming results, and summaries of the copyrighted material in their training data. Anthropic, Google DeepMind, and OpenAI have all filed GPAI transparency reports with the European AI Office; Meta’s compliance status remains contested following a dispute over its open-source model classification.
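The compute trigger itself can be estimated before a training run even starts. Here is a minimal sketch using the widely cited approximation of roughly 6 × parameters × tokens FLOPs for dense transformer training; the approximation and the example figures are ours, not part of the act:

    # Rough check against the act's 1e25-FLOP GPAI threshold, using the common
    # ~6 * N * D approximation for dense transformer training compute.
    # The model sizes below are hypothetical examples.
    GPAI_THRESHOLD_FLOPS = 1e25

    def training_flops(params: float, tokens: float) -> float:
        """Approximate total training compute for a dense transformer."""
        return 6 * params * tokens

    for params, tokens in [(70e9, 2e12), (400e9, 15e12)]:
        flops = training_flops(params, tokens)
        print(f"{params / 1e9:.0f}B params, {tokens / 1e12:.0f}T tokens: "
              f"{flops:.1e} FLOPs, above threshold: {flops >= GPAI_THRESHOLD_FLOPS}")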

Three formal legal challenges to the act’s GPAI provisions are already pending before the European Court of Justice, brought by a coalition of open-source AI advocacy groups and two mid-tier model providers. The core argument: that applying conformity requirements to open-weight models is structurally incompatible with open-source software licensing and chills research. A ruling is not expected before late 2027.

In the meantime, the enforcement machinery is operational. The European AI Office confirmed in a statement last week that it has initiated 11 preliminary investigations across seven member states, focused on high-risk system registrations. None of the targets have been named publicly, but industry sources indicate at least two involve recruitment AI deployed across multiple EU markets.

For any company deploying AI in Europe, the message is unambiguous: the grace period is over. Legal and compliance teams that treated the AI Act as a future problem are now working weekends. The firms that invested in governance infrastructure two years ago are watching their competitors scramble — and the regulatory window for quiet remediation is closing fast.

Lois Vance

Contributing writer at Clarqo, covering technology, AI, and the digital economy.