Insights

High-risk AI under Annex III: classification, intended purpose, and CE marking path

Deep dive: Annex III high-risk AI categories, intended purpose, the presumption of high risk, exceptions, and the link to conformity assessment, written for compliance teams and AI-search discoverability.

Annex III EU AI Act, high-risk AI systems list, intended purpose AI system, presumption high risk AI, EU AI Act biometrics employment

Annex III of the EU AI Act enumerates domains where AI systems are presumed to be high-risk when they affect health, safety, or fundamental rights in specified ways — for example biometrics, critical infrastructure, education, vocational training, employment, access to essential services, law enforcement, migration, and justice. Your compliance task is to connect the product you actually ship to those descriptions with traceable evidence, then document exceptions only where the law truly allows.

Intended purpose drives classification

Regulators assess AI systems against the intended purpose defined by the provider, including in the provider’s marketing materials and technical documentation. A general-purpose foundation model embedded in your app is still evaluated in the concrete deployment you enable: which prompts, which user flows, which data flows, and which decisions affect people.

Common Annex III touchpoints for software companies

  • Biometric identification and categorisation — remote biometric identification faces extra scrutiny; emotion inference in workplaces and educational institutions is prohibited in defined forms under Article 5.
  • Education and vocational training — systems that determine access or outcomes (e.g. admissions, grading that affects progression).
  • Employment, workers’ management, and access to self-employment — recruitment and selection, promotion, termination, task allocation, monitoring.
  • Essential private and public services — creditworthiness evaluation and credit scoring, risk assessment and pricing in life and health insurance, triage and dispatch of emergency services.
  • Law enforcement, migration, justice — high sensitivity; often overlaps with fundamental rights impact assessments.
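Mapping the product you ship to the touchpoints above can be sketched as a first-pass screening helper. This is an illustrative sketch only: the category names and trigger keywords are simplified assumptions, not the statutory wording, and any hit is a prompt for legal review, never a classification.

```python
# Illustrative Annex III screening sketch. The keyword lists are
# assumptions for demonstration, not the legal text.
ANNEX_III_TOUCHPOINTS = {
    "biometrics": ["biometric", "face recognition", "emotion inference"],
    "education": ["admission", "grading", "exam proctoring"],
    "employment": ["recruitment", "promotion", "termination", "monitoring"],
    "essential_services": ["creditworthiness", "credit scoring", "insurance pricing"],
}

def screen_intended_purpose(description: str) -> list[str]:
    """Return candidate Annex III categories whose trigger keywords
    appear in the provider's intended-purpose description
    (case-insensitive substring match)."""
    text = description.lower()
    return [
        category
        for category, keywords in ANNEX_III_TOUCHPOINTS.items()
        if any(keyword in text for keyword in keywords)
    ]

hits = screen_intended_purpose(
    "CV ranking tool used in recruitment and promotion decisions"
)
print(hits)  # ['employment']
```

A real screen would work from the classification memo's documented facts rather than free text, but the shape is the same: every candidate category becomes a line item your legal team accepts or rejects with reasons.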

From high-risk classification to CE marking (product-safety style AI)

Where an AI system is high-risk and no exclusion applies, providers must implement the requirements of Articles 8–15 and prepare Annex IV technical documentation. Depending on the conformity-assessment route, a notified body may be involved before the CE marking is affixed; the exact module mirrors product-safety logic familiar from machinery or medical devices, adapted for AI.

Operational takeaway

Maintain a classification memo per AI system: facts, legal theory, dissenting views, and sign-off. Agent Mai accelerates the technical side — gap lists, remediation drafts, and exportable JSON — while your legal team owns the final legal position.
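A classification memo with the four elements above lends itself to a small, exportable record. The schema below is an illustrative assumption (no format is mandated); the field names simply mirror the takeaway: facts, legal theory, dissenting views, and sign-off.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ClassificationMemo:
    """One memo per AI system. Hypothetical schema for illustration;
    field names follow the operational takeaway, not a mandated format."""
    system_name: str
    facts: list[str]
    legal_theory: str
    dissenting_views: list[str] = field(default_factory=list)
    signed_off_by: str = ""

memo = ClassificationMemo(
    system_name="cv-ranker",
    facts=["Ranks job applicants", "Output used by HR in hiring decisions"],
    legal_theory="Annex III employment touchpoint; presumed high-risk",
    dissenting_views=["Arguably preparatory to a human decision"],
    signed_off_by="Head of Legal",
)
print(json.dumps(asdict(memo), indent=2))  # exportable JSON record
```

Keeping the memo as structured data makes it easy to diff across releases and to export alongside the technical gap lists, while the legal position itself stays with counsel.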

Educational content only — not legal advice. Verify obligations with qualified counsel.