Support Center

Answers built for cross-functional rollout

Product, legal, security, and leadership get the same vocabulary: what Agent Mai automates, what still needs human judgment, and how to phase audits alongside releases — without drowning in generic AI hype.

Evidence-first

Structured exports for review

Plain language

No black-box scores

Sovereign-ready

Cloud or Vault paths

Living policy

Re-run after changes

Rollout sequence

Discover → Assess → Align → Deploy

Why teams bookmark this page

These themes come up in every enterprise pilot — from first demo to production sign-off.

Clarity without oversimplifying

We separate automated drafting from legal conclusions so your counsel knows exactly what to verify.

Operational FAQs

Cadence, ownership, and escalation patterns you can paste into an internal wiki or onboarding docs.

Human support path

When an answer depends on your jurisdiction or stack, we point you to audit and infrastructure guides.

Ask better questions

Each section nudges you toward the next artifact to collect — model card gaps, DPIA triggers, or Annex IV depth.

Product and Workflow

Curated for stakeholders who need depth, not marketing fluff.

What does Agent Mai actually do in plain language?

Agent Mai ingests your AI system documentation, estimates likely EU AI Act risk positioning, surfaces gaps against Annex IV–style expectations, and turns that into a prioritized remediation backlog with ownership hints. It is built for cross-functional teams: product sees scope, engineering sees evidence gaps, and compliance sees what still needs sign-off.
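As an illustration of the "prioritized remediation backlog with ownership hints" described above, here is a minimal sketch. All field and function names are our own assumptions for this example, not Agent Mai's actual schema or API:

```python
from dataclasses import dataclass

@dataclass
class RemediationItem:
    # One gap surfaced against Annex IV-style expectations.
    gap: str              # e.g. "model card missing intended-use section"
    severity: int         # 1 = highest priority
    owner_hint: str       # suggested owning team: product, engineering, compliance
    evidence_needed: str  # artifact that would close the gap

def prioritized_backlog(items: list[RemediationItem]) -> list[RemediationItem]:
    """Sort gaps so the highest-severity items surface first."""
    return sorted(items, key=lambda item: item.severity)

backlog = prioritized_backlog([
    RemediationItem("model card lacks limitations section", 2,
                    "engineering", "updated model card"),
    RemediationItem("no DPIA on record", 1,
                    "compliance", "completed DPIA"),
])
print([item.gap for item in backlog])
```

The point of the sketch is the shape of the output: each gap pairs a concrete missing artifact with a suggested owner, so product, engineering, and compliance each see their slice of the same list.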

How quickly can we get a first result?

Most teams see a first-pass assessment in minutes after connecting sources or uploading documents. The output is deliberately actionable: top risks, missing artifacts, and suggested next steps — not a wall of generic text — so you can convene a review the same day.

Do we need perfect documentation before using it?

No. Partial documentation is the norm. Agent Mai highlights what is missing and in what order to fix it, so you make progress while you iterate on model cards, data sheets, and governance records. The goal is continuous improvement, not a documentation freeze.

Legal and Governance

Does Agent Mai replace lawyers or notified bodies?

No. It accelerates technical readiness, traceability, and consistency of evidence. Legal interpretation, regulatory strategy, and formal conformity decisions stay with qualified counsel and authorities. Think of Agent Mai as structured preparation — not a substitute for professional judgment.

Can outputs be used in board or legal reviews?

Yes. Exports are structured for governance forums: clear assumptions, cited inputs, open gaps, and remediation paths. Legal and leadership can see what was automated versus what still needs human attestation, which shortens review cycles.

How often should we run audits?

After meaningful changes to models, training data governance, safety policies, or product scope. Many teams keep a monthly health check and run a focused audit before high-impact releases, procurement renewals, or regulator interactions.
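The cadence above can be written down as a reviewable policy. This is a hypothetical encoding, not an Agent Mai feature; the trigger names and the 30-day interval are assumptions chosen to match the answer:

```python
from datetime import date, timedelta

# Illustrative audit-cadence policy: re-audit on meaningful changes,
# plus a standing monthly health check.
AUDIT_TRIGGERS = {
    "model_change",
    "training_data_governance_change",
    "safety_policy_change",
    "product_scope_change",
    "high_impact_release",
}
HEALTH_CHECK_INTERVAL = timedelta(days=30)

def audit_due(last_audit: date, today: date, events: set[str]) -> bool:
    """An audit is due if any trigger fired or the monthly check lapsed."""
    if events & AUDIT_TRIGGERS:
        return True
    return today - last_audit >= HEALTH_CHECK_INTERVAL

print(audit_due(date(2024, 1, 1), date(2024, 1, 15), {"model_change"}))
```

Keeping the trigger list explicit means a governance forum can debate and version it like any other policy document.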

Security and Deployment

Can we keep data in our own infrastructure?

Yes. Private Vault and customer-controlled deployment patterns exist for organizations that need strict residency, egress control, or separation from multi-tenant SaaS. Your security team can align network boundaries with internal policy.

What is the difference between Cloud and Vault?

Cloud is the fastest path: managed upgrades, elastic capacity, and standard security baselines. Vault adds deeper infrastructure control — tighter sovereignty boundaries, custom integration patterns, and an operational responsibility split that maps to regulated-industry requirements.

What teams should be involved in rollout?

Product, engineering, legal/compliance, and security should share one evidence workflow. Agent Mai works best when ownership is explicit: who attests to model behavior, who owns data processing records, and who signs off on residual risk.

Still need a tailored answer for your team?

Use the audit workspace to test your exact scenario, then review outputs with legal and security stakeholders. Bring the export to your next governance forum with gaps and owners already labeled.

Run tailored assessment

Go deeper with a guided implementation review

Pair technical validation with policy alignment: run an audit, compare deployment paths, and walk stakeholders through the same evidence package.

Bring product, legal, and security into one room

Use Agent Mai outputs as the shared agenda: gaps, owners, and timelines — then track remediation like any other release train.