Run IMDA's AI Verify on your model — without standing up the toolkit yourself.
We host AI Verify Toolkit 2.0 and Project Moonshot, run the technical tests, draft the process checklists with AI, and route them to your reviewer. You get a testing report aligned to Singapore's national AI governance method, delivered in days, not months.
AI Verify + Moonshot, hosted and orchestrated.
We don't invent a methodology. We run the two pieces the AI Verify Foundation already publishes — AI Verify for technical tests and Moonshot for LLM red-teaming — and we wrap them in the operational layer most teams don't have time to build.
IMDA's open-source testing framework. 11 governance principles, stock plugins for fairness, explainability, robustness. Apache-2.0.
AI Verify Foundation's red-team and benchmark engine for LLM applications. Cookbooks for bias, toxicity, jailbreak, hallucination, data disclosure.
Hosted runtime, evidence schema, AI-assisted process checklist drafting, reviewer workflow, signed report delivery.
The same 11 principles IMDA tests against.
Every report covers all eleven AI Verify principles. Three carry automated technical tests; eight are answered through process checklists with documentary evidence.
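The three-versus-eight split above can be sketched in code. This is an illustrative grouping only: the three technical principles follow the stock plugins named elsewhere on this page (fairness, explainability, robustness), and the process-principle names are paraphrased, not the framework's exact wording.

```python
# Illustrative split of the 11 AI Verify principles by evidence type.
# Principle names are paraphrased; consult the framework for exact wording.
TECHNICAL = ["fairness", "explainability", "robustness"]  # automated tests
PROCESS = [
    "transparency",
    "repeatability and reproducibility",
    "safety",
    "security",
    "accountability",
    "data governance",
    "human agency and oversight",
    "inclusive growth and well-being",
]  # answered via checklists with documentary evidence

# Three technical + eight process = all eleven principles.
assert len(TECHNICAL) + len(PROCESS) == 11
```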
Four phases, two to four weeks.
A typical first run takes two to four weeks. After that, every release ships with a fresh report — most of the work is already wired in.
We catalogue your model, prompts, and governance documents. 3–5 days.
AI Verify stock plugins for technical tests; Moonshot cookbooks for LLM red-teaming. 5–10 days.
AI drafts the 11 process checklists from your documents; your reviewer approves or revises. 3–5 days.
IMDA-aligned summary HTML + PDF, scoped to your AI system. Re-runnable on every release.
The deliverable.
IMDA-aligned summary covering all 11 principles, with metric tables, evidence references, and per-control verdicts.
Every metric is bound to a deterministic run hash. Re-run on every release; diff the report. No manual rework.
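"Bound to a deterministic run hash" means the same inputs always produce the same identifier, so two reports can be diffed run-to-run. A minimal sketch of how such a hash could be derived (hypothetical function names; a production system would also pin plugin versions and input checksums):

```python
import hashlib
import json

def run_hash(model_id: str, dataset_id: str, config: dict) -> str:
    """Derive a deterministic hash identifying one test run.

    Canonical JSON (sorted keys, fixed separators) guarantees the
    same inputs always serialize to the same bytes, hence the same hash.
    """
    payload = json.dumps(
        {"model": model_id, "dataset": dataset_id, "config": config},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Identical inputs yield identical hashes, so a re-run on a new release
# can be matched against the prior report by hash and diffed.
h1 = run_hash("credit-model-v3", "fairness-eval", {"threshold": 0.8})
h2 = run_hash("credit-model-v3", "fairness-eval", {"threshold": 0.8})
assert h1 == h2
```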
Every process checklist answer is signed, timestamped, and stored append-only. Defensible under a regulator inspection.
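One common way to make a log of signed, timestamped answers tamper-evident is a hash chain: each record's signature covers the previous record's signature, so editing or deleting an earlier entry invalidates everything after it. A minimal sketch under assumed names (`append_entry`, a demo HMAC key; a real deployment would sign with a managed key and real timestamps):

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; a real system would use an HSM/KMS key

def append_entry(log: list, answer: dict) -> dict:
    """Append a checklist answer as a tamper-evident, signed record."""
    prev = log[-1]["sig"] if log else ""
    record = {
        "answer": answer,
        "ts": 1700000000,  # fixed timestamp for the example
        "prev": prev,      # chain to the previous entry's signature
    }
    body = json.dumps(record, sort_keys=True).encode("utf-8")
    record["sig"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    log.append(record)
    return record

log = []
append_entry(log, {"principle": "transparency", "verdict": "pass"})
append_entry(log, {"principle": "fairness", "verdict": "pass"})

# Each entry chains to the one before it, so the log is append-only:
# altering entry 0 breaks the signature check on entry 1 and onward.
assert log[1]["prev"] == log[0]["sig"]
```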
Clear language on what was tested, what wasn't, and what the report does and does not certify. No regulatory overreach.
What this is, and what it isn't.
Built on AI Verify Toolkit — an open-source project of the AI Verify Foundation, a subsidiary of Singapore's Infocomm Media Development Authority (IMDA). Licensed under Apache-2.0.
This is a testing report, not a certificate. The AI Verify Foundation does not issue certificates for individual AI systems; the framework is a testing methodology. AgentSure is not currently an accredited AI Verify Testing Partner; the "AI Verify" name and logo remain the property of the AI Verify Foundation.
Get the report before your regulator asks for it.
Quantification is the front door to risk reduction and to insurance. Start with a 60-minute scoping call.