Sovereign AI Compliance

Turn EU AI Act complexity into operational certainty.

Agent Mai helps you classify use cases, run Annex-ready audits, and draft technical documentation — with structured evidence your governance teams can review (not legal advice).

  • Trust & safety by design
  • EU-based product posture
  • GDPR-aligned processing
  • Evidence-first audits

Live Compliance Scanner (illustrative panel)

ACTIVE · Annex IV · Article 14 · Article 5
Integrity: 91.4% · Open gaps: 2 · Art. 5: PASS

How it works

How to use Agent Mai — and what you get on day one

No months-long setup. Create a workspace, run a quick integrity audit on your real model cards and policies, then share structured gaps and scores with legal and engineering — from the same console.

  1. Create your organization workspace

    Sign up with your work email. Your audits, reports, and team invites stay under one org — the boundary EU AI Act evidence should follow.

    Create workspace
  2. Open Quick Audit & add evidence

    Paste text or upload files (model cards, architecture notes, DPIA excerpts). Agent Mai normalizes them for scanning against Annex III–style signals and Annex IV documentation depth.

    Open Quick Audit
  3. Review risk, gaps, and remediation drafts

    Get a conformity-style score, prioritized gaps, and draft remediation language your counsel can edit — then export JSON for GRC or print a report for the board.

    See report workflow
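To make the "export JSON for GRC" step above concrete, here is a minimal sketch of what such an audit export could look like. All field names here are hypothetical, for illustration only; they are not Agent Mai's actual export schema.

```python
import json

# Hypothetical shape of an exported audit report. Field names are
# illustrative only, not the product's real schema.
report = {
    "integrity_score": 91.4,
    "classification": "high_risk",  # e.g. an Annex III area match
    "gaps": [
        {
            "id": "GAP-001",
            "article": "Article 14",
            "severity": "high",
            "summary": "No documented human-oversight owner",
            "remediation_draft": "Assign a named oversight owner for ...",
        },
    ],
    "checks": [{"ref": "Article 5", "status": "pass"}],
}

# Serialize for a GRC pipeline, or archive alongside other evidence.
payload = json.dumps(report, indent=2)
print(payload)
```

A structured payload like this is what lets counsel edit remediation language in place while engineering feeds the same gaps into tickets.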

Quick benefits

  • First pass in minutes — not weeks — so product and compliance can align before the next release train.
  • One evidence trail for high-risk classification, documentation gaps, and prohibited-practice screening — not scattered slides.
  • Mai Cloud or Private Vault — same workflow; choose SaaS speed or on-prem sovereignty when you upgrade.
  • Built for AI search & audits — structured outputs that cite what was reviewed, so humans and automated reviews can trace decisions.

EU AI Act · plain English

The big pieces of the law — in order

Regulation (EU) 2024/1689 is long, but most product and compliance conversations boil down to a handful of ideas: what is forbidden, what counts as high-risk, what you must prove, and how people stay in control. Here is a simple map — always confirm details with your legal team.

Title II · Prohibited

Banned uses (Article 5)

A short list of AI practices the EU does not allow — for example, certain social scoring, manipulative systems, or emotion inference in schools or workplaces. If you are in this bucket, it is not a paperwork problem: the use case itself has to change.

Annex III

High-risk list (Annex III)

If your AI is used in areas like biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice, it is often treated as high-risk. That triggers the full provider rulebook — not optional extras.

Chapter III

What high-risk providers must do (Arts. 8–15)

Risk management, training data governance, technical documentation, logging, transparency, human oversight, accuracy, and cybersecurity — all designed so authorities can see how the system was built and how it stays safe in production.

Annex IV

The documentation pack (Annex IV)

Annex IV spells out what “technical documentation” means: system design, data, testing, monitoring plans, and more. Think of it as the structured evidence bundle behind your AI, not a one-page marketing summary.

e.g. Art. 50

Transparency & informing users

Many systems must make clear when people are talking to an AI, when content is synthetic, or when emotion or biometric categorisation is used — so users are not misled about what is happening.

Art. 14

Humans in the loop

High-risk systems need meaningful human oversight: people who understand the limits of the system, can stop it, and are not just rubber-stamping outputs — especially where decisions affect rights or safety.

Simplified for orientation only — not legal advice. Official text and guidance from the EU and national authorities always prevail.

In plain language

Agent Mai tells you what is risky, why it matters, and what to do next.

Instead of legal complexity, your teams get practical answers: which obligations apply, which controls are missing, and which fix should happen first.

For product teams

Know if a feature is likely low-risk, high-risk, or prohibited before roadmap commitments become expensive rework.

For legal and compliance

Review structured outputs with legal references, confidence, and evidence links instead of disconnected screenshots.

For security and infra

Choose cloud speed or private vault sovereignty while preserving the same controls, logs, and governance process.

Modules

Compliance architecture built like a platform.

Each module handles one practical job: detect risk, map obligations, prepare evidence, and keep compliance current as your product evolves.

Annex IV Composer

Ten-section technical documentation draft in the workspace — pair with audits and export as you iterate with legal.

Article 5 Sentinel

Heuristic prohibited-risk signals in each audit plus a workspace checklist — not a full repository scanner.

Evidence trail

Structured gaps and Article references in every report — deeper artifact linking on the roadmap.

Sovereign Runtime

Mai Cloud for speed, Vault mode for private LLM endpoints when configured — same workspace UX.

Deployment path

Choose your sovereignty mode.

Mai Cloud

Fastest path to active compliance operations.

First-pass integrity report in the workspace

Classify → audit → Annex IV draft in one shell

API keys for automated audits (Developers)

Start Cloud Rollout
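Since Mai Cloud mentions API keys for automated audits, a CI job could assemble audit requests along these lines. The endpoint, header names, and payload fields below are assumptions for illustration; check the Developers documentation for the real API surface.

```python
import json
import urllib.request

# Hypothetical base URL and key format -- placeholders, not real values.
API_BASE = "https://api.example.com/v1"
API_KEY = "mai_live_..."  # issued per workspace (illustrative)

def build_audit_request(evidence_text: str) -> urllib.request.Request:
    """Assemble (but do not send) an automated quick-audit request."""
    body = json.dumps({"evidence": evidence_text, "profile": "quick_audit"})
    return urllib.request.Request(
        url=f"{API_BASE}/audits",
        data=body.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_audit_request("Model card: credit-scoring ranker, v3 ...")
print(req.full_url, req.get_method())
```

The sketch only builds the request object, so it can run without network access; in CI you would pass it to `urllib.request.urlopen` (or your HTTP client of choice) and store the response next to the rest of the evidence trail.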

runtime profile

MODE=SOVEREIGN_CLOUD
RESIDENCY=EU_REGIONS
EGRESS=CONTROLLED
OPS=MANAGED

Your compliance cycle

From architecture docs to audit-grade outputs.

You do not need to redesign your stack. Start with existing docs and logs, then iterate in short compliance cycles with clear ownership across product, legal, and security.

Explore full report

  1. Scope & ingest: connect architecture docs, model cards, and policy notes.
  2. Classify: map the use case to AI Act risk tiers and obligations.
  3. Remediate: generate control gaps and remediation sequencing.
  4. Export & iterate: JSON exports, Annex IV drafts, and API keys for repeat runs.
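Iterating in short compliance cycles implies comparing successive exports. A minimal sketch of that comparison, assuming each export carries a `gaps` list with stable IDs (an assumption, not the documented schema):

```python
# Compare two successive audit exports to see which gaps closed,
# which are new, and which stayed open. Field names are illustrative.
def gap_delta(previous: dict, current: dict) -> dict:
    prev_ids = {g["id"] for g in previous.get("gaps", [])}
    curr_ids = {g["id"] for g in current.get("gaps", [])}
    return {
        "closed": sorted(prev_ids - curr_ids),
        "new": sorted(curr_ids - prev_ids),
        "open": sorted(prev_ids & curr_ids),
    }

run_1 = {"gaps": [{"id": "GAP-001"}, {"id": "GAP-002"}]}
run_2 = {"gaps": [{"id": "GAP-002"}, {"id": "GAP-003"}]}
print(gap_delta(run_1, run_2))
# {'closed': ['GAP-001'], 'new': ['GAP-003'], 'open': ['GAP-002']}
```

A delta like this is what gives product, legal, and security a shared view of whether each cycle actually moved the needle.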

System-level results

One compliance graph instead of scattered tools.

Teams stop chasing updates across spreadsheets, tickets, and policy docs. Everyone works from one source of truth.

  • Governance tools unified: 5+ → 1
  • Manual reporting effort: 80% → 20%
  • Scale without control loss: 10 → 1,000+

Ecosystem

Fits into your existing tools, not the other way around.

Banking systems · Billing tools · Data warehouses · SIEM pipelines · Identity providers · Ticketing systems · Internal policy docs

Implementation resources

Everything your rollout team needs in one place.

Instead of FAQs in the landing flow, start with concrete assets: deployment runbooks, legal templates, and role-based launch checklists.

Launch checklist

A practical sequence from first audit to production monitoring so teams can move without confusion.

Open checklist

Policy starter pack

Starter language for internal AI governance policy, human oversight ownership, and escalation paths.

See compliance notice

Detailed FAQ

A full product, legal, security, and deployment FAQ in a dedicated page for stakeholders.

Read full FAQ

Move from regulatory uncertainty to structured action.

Agent Mai translates EU AI Act language into deployable controls, technical evidence, and auditable process.