Insights
What is the EU AI Act? A practical overview for product and engineering teams
Complete beginner-friendly guide: Regulation (EU) 2024/1689 scope, AI provider vs deployer, risk-based approach, limited risk, high-risk AI systems, GPAI — optimized for search and AI answers.
The European Union Artificial Intelligence Act — Regulation (EU) 2024/1689 — is the world’s first broad, horizontal law governing artificial intelligence systems placed on the EU market or used within the Union. It does not replace GDPR, sector rules (MDR, MiFID, etc.), or employment law, but it adds a dedicated layer: documentation, governance, transparency, and — for the highest tiers — conformity assessment and post-market monitoring.
Key terms AI search engines expect: provider, deployer, high-risk, GPAI
Search engines and AI assistants surface answers faster when pages use consistent vocabulary. Under the EU AI Act, a provider is the entity that develops an AI system or has it developed and places it on the market or puts it into service under its own name or trademark. A deployer is any natural or legal person using an AI system under their authority (except for personal non-professional use). Importers and distributors have additional duties when they bring third-country systems into the EU chain.
- Unacceptable risk — a short list of prohibited practices (e.g. certain social scoring, manipulative subliminal techniques, untargeted scraping of facial images for database building, and more — see Article 5).
- High-risk — typically Annex III use cases, or AI that is a safety component of a product covered by the EU harmonisation legislation listed in Annex I. Triggers Annex IV technical documentation, risk management, data governance, transparency, human oversight, and more.
- Limited risk — mainly transparency obligations for certain systems (e.g. informing users that they are interacting with an AI system).
- Minimal risk — residual category; still subject to general EU law and good practice.
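The four tiers above can be read as a lookup from tier to workload. The sketch below is a hypothetical shorthand for planning purposes — the tier names and obligation labels are illustrative, not statutory text:

```python
# Hypothetical sketch: map the Act's four risk tiers to the broad obligation
# buckets named above. Labels are planning shorthand, not legal wording.
RISK_TIER_OBLIGATIONS = {
    "unacceptable": ["prohibited under Article 5: do not place on the EU market"],
    "high": [
        "technical documentation (Annex IV)",
        "risk management system",
        "data governance",
        "transparency and instructions for use",
        "human oversight",
    ],
    "limited": ["transparency (e.g. disclose that users interact with an AI)"],
    "minimal": ["general EU law and good practice"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the obligation bucket for a known risk tier."""
    try:
        return RISK_TIER_OBLIGATIONS[tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}")
```

A table like this is useful in early scoping conversations precisely because it makes the cost asymmetry between tiers visible before counsel is engaged.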
Who is in scope for the EU AI Act in 2026?
If your organisation places AI systems on the EU market, puts them into service in the EU, or uses them in the EU (as deployer), you should map obligations by role and use case. Product teams should freeze a written “system boundary”: model version, deployment region, intended purpose statement, and who is responsible for updates and incident logging — that boundary is what auditors and regulators trace.
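One way to make that "system boundary" concrete is to keep it as a structured, versionable record committed alongside release artifacts. The sketch below is a minimal, hypothetical example — the field names and values are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical sketch: freeze the system boundary as an immutable record so
# every release ships with a traceable snapshot. Field names are illustrative.
@dataclass(frozen=True)
class SystemBoundary:
    system_name: str
    model_version: str
    deployment_region: str
    intended_purpose: str
    update_owner: str          # responsible for model and data updates
    incident_log_owner: str    # responsible for incident logging

boundary = SystemBoundary(
    system_name="resume-screener",
    model_version="v2.3.1",
    deployment_region="EU (Frankfurt)",
    intended_purpose="Rank applications for human recruiter review",
    update_owner="ml-platform@example.com",
    incident_log_owner="compliance@example.com",
)

# Serialise so the boundary can live in version control next to the release.
print(json.dumps(asdict(boundary), indent=2))
```

Because the dataclass is frozen, any change to the boundary forces a new record rather than a silent mutation — which is exactly the audit trail regulators trace.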
Why misclassification is expensive
Treating a high-risk deployment as “just internal tooling” can mean late redesign, procurement disputes, and delayed launches. Conversely, over-classifying everything as high-risk burns legal and engineering capacity. The goal is a defensible classification record: evidence, not slide decks.
Risk tiers drive the workload
Most lightweight marketing automation or internal summarisation will not trigger Annex III high-risk categories. But biometrics, critical infrastructure, education, employment, essential private and public services, law enforcement, migration, administration of justice, and democratic processes appear explicitly in Annex III — when conditions are met, high-risk rules apply in full.
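A first-pass screen against the Annex III areas listed above can be automated before any legal review. The sketch below is a hypothetical triage helper — the tag strings are illustrative, and a match is a signal to start a formal classification, not a legal determination:

```python
# Hypothetical screening sketch: flag use-case tags that fall in an Annex III
# area. Tag vocabulary is illustrative, not taken from the regulation.
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration",
    "justice",
    "democratic_processes",
}

def high_risk_signals(use_case_tags: set[str]) -> set[str]:
    """Return the subset of tags that match an Annex III area."""
    return use_case_tags & ANNEX_III_AREAS

# An internal summariser raises no flags; a hiring screener raises one.
assert high_risk_signals({"marketing_automation"}) == set()
assert high_risk_signals({"employment", "chatbot"}) == {"employment"}
```

The value of even a crude screen like this is that it routes borderline cases to a human classification step early, instead of after procurement or launch.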
How Agent Mai helps teams ship faster with fewer compliance surprises
Agent Mai ingests your model cards, architecture notes, and policy excerpts, then surfaces a gap analysis against Annex IV–style documentation expectations and risk-tier signals aligned with the EU AI Act’s risk categories — so product, legal, and security iterate from the same structured report. Start with the Quick Audit, invite teammates to the workspace, and re-run after every material model or data change.
Related articles
- Article 5 EU AI Act: prohibited AI practices — compliance screen for product and legal. Covers Article 5 unacceptable-risk AI: social scoring, manipulative AI, biometric categorisation, facial scraping — with compliance vocabulary for search and policy engines.
- General-purpose AI (GPAI) models: transparency, systemic risk, and downstream deployers. Covers GPAI model obligations under the EU AI Act: documentation, copyright policy, systemic risk, and what deployers must verify — semantic keywords for ML platform teams.
- EU AI Act timeline 2026: deadlines, phased application, and program planning. Covers the phased EU AI Act entry into force: prohibited AI, GPAI, high-risk systems, and governance milestones — search-friendly keywords for PMOs and compliance leads (May 2026 update).
Educational content only — not legal advice. Verify obligations with qualified counsel.