AgentSure
01 / Quantification

Run IMDA's AI Verify on your model — without standing up the toolkit yourself.

We host AI Verify Toolkit 2.0 and Project Moonshot, run the technical tests, draft the process checklists with AI, and route them to your reviewer. You get a testing report aligned to Singapore's national AI governance method, delivered in days, not months.

Book a quantification scoping →
See how findings get fixed
The harness

AI Verify + Moonshot, hosted and orchestrated.

We don't invent a methodology. We run the two pieces the AI Verify Foundation already publishes — AI Verify for technical tests and Moonshot for LLM red-teaming — and we wrap them in the operational layer most teams don't have time to build.

AI Verify Toolkit 2.0

IMDA's open-source testing framework. 11 governance principles, stock plugins for fairness, explainability, robustness. Apache-2.0.

Project Moonshot

AI Verify Foundation's red-team and benchmark engine for LLM applications. Cookbooks for bias, toxicity, jailbreak, hallucination, data disclosure.

AgentSure layer

Hosted runtime, evidence schema, AI-assisted process checklist drafting, reviewer workflow, signed report delivery.
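
The evidence schema is the glue between the two toolkits and the reviewer workflow: every plugin output, cookbook result, and document citation lands in one record shape. A minimal sketch of what such a record might look like, with hypothetical field names (none of these are published AgentSure or AI Verify identifiers):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EvidenceRecord:
    """One unit of evidence feeding a principle verdict (hypothetical schema)."""
    principle: str            # e.g. "P7 Fairness"
    source: str               # "aiverify-plugin" | "moonshot-cookbook" | "document"
    artifact_uri: str         # where the raw output or document lives
    metric: str | None = None     # metric name, for technical tests
    value: float | None = None    # metric value, if numeric
    collected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a fairness metric produced by an AI Verify stock plugin run
record = EvidenceRecord(
    principle="P7 Fairness",
    source="aiverify-plugin",
    artifact_uri="s3://evidence/run-42/fairness.json",
    metric="demographic_parity_gap",
    value=0.031,
)
```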

The 11 principles

The same 11 principles IMDA tests against.

Every report covers all eleven AI Verify principles. Three carry automated technical tests; eight are answered through process checklists with documentary evidence.

#      Principle                   Test mode       Area
P1     Transparency                Process Check   Use of AI
P2     Explainability              Tech Tests      Decisions
P3     Reproducibility             Process Check   Decisions
P4     Safety                      Process Check   Safety & Resilience
P5     Security                    Process Check   Safety & Resilience
P6     Robustness                  Tech Tests      Safety & Resilience
P7     Fairness                    Tech Tests      Fairness
P8     Data Governance             Process Check   Fairness
P9     Accountability              Process Check   Management
P10    Human Agency & Oversight    Process Check   Management
P11    Inclusive Growth            Process Check   Management
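
In code, the split above is just a lookup that drives report assembly. A sketch using the principle names and areas from the table (the dict layout is illustrative, not a published schema):

```python
# Test mode per principle: 3 technical, 8 process checks (illustrative layout)
PRINCIPLES = {
    "P1":  ("Transparency",             "process", "Use of AI"),
    "P2":  ("Explainability",           "tech",    "Decisions"),
    "P3":  ("Reproducibility",          "process", "Decisions"),
    "P4":  ("Safety",                   "process", "Safety & Resilience"),
    "P5":  ("Security",                 "process", "Safety & Resilience"),
    "P6":  ("Robustness",               "tech",    "Safety & Resilience"),
    "P7":  ("Fairness",                 "tech",    "Fairness"),
    "P8":  ("Data Governance",          "process", "Fairness"),
    "P9":  ("Accountability",           "process", "Management"),
    "P10": ("Human Agency & Oversight", "process", "Management"),
    "P11": ("Inclusive Growth",         "process", "Management"),
}

tech = [p for p, (_, mode, _) in PRINCIPLES.items() if mode == "tech"]
assert tech == ["P2", "P6", "P7"]  # the three automated principles
```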
The engagement

Four phases, two to four weeks.

A typical first run takes two to four weeks. After that, every release ships with a fresh report — most of the work is already wired in.

PHASE 01
Scope & ingest

We catalogue your model, prompts, and governance documents. 3–5 days.

PHASE 02
Run the harness

AI Verify stock plugins for technical tests; Moonshot cookbooks for LLM red-teaming. 5–10 days.
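
Both toolkits ship their own runners, so the orchestration layer is mostly sequencing and evidence capture. The sketch below shows its shape; `agentsure-runner` and the two wrapper functions are stand-ins for our hosted entry point, not actual AI Verify or Moonshot commands:

```python
import subprocess
from pathlib import Path

def run_aiverify_plugin(plugin: str, model_uri: str, out_dir: Path) -> Path:
    """Hypothetical wrapper: shells out to a hosted AI Verify test run."""
    out = out_dir / f"{plugin}.json"
    subprocess.run(
        ["agentsure-runner", "aiverify", plugin,
         "--model", model_uri, "--out", str(out)],
        check=True,
    )
    return out

def run_moonshot_cookbook(cookbook: str, endpoint: str, out_dir: Path) -> Path:
    """Hypothetical wrapper: shells out to a hosted Moonshot cookbook run."""
    out = out_dir / f"{cookbook}.json"
    subprocess.run(
        ["agentsure-runner", "moonshot", cookbook,
         "--endpoint", endpoint, "--out", str(out)],
        check=True,
    )
    return out

# Phase 02 end to end: technical tests first, then red-team cookbooks
out_dir = Path("evidence/run-42")
out_dir.mkdir(parents=True, exist_ok=True)
for plugin in ["fairness", "explainability", "robustness"]:
    run_aiverify_plugin(plugin, "s3://models/candidate-v3", out_dir)
for cookbook in ["bias", "toxicity", "jailbreak", "hallucination"]:
    run_moonshot_cookbook(cookbook, "https://api.example.com/v1/chat", out_dir)
```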

PHASE 03
Draft & review

AI drafts the 11 process checklists from your documents; your reviewer approves or revises. 3–5 days.
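
The drafting step never ships an answer on its own; every draft passes through a human gate. A sketch of that gate, where `draft_with_llm` is a placeholder for the retrieval-plus-generation call over the documents ingested in Phase 01:

```python
from dataclasses import dataclass

@dataclass
class ChecklistAnswer:
    principle: str
    question: str
    draft: str                  # AI-drafted from ingested documents
    citations: list[str]        # document refs the draft leans on
    status: str = "pending"     # pending -> approved | revised

def draft_with_llm(question: str, documents: list[str]) -> tuple[str, list[str]]:
    """Placeholder for the AI drafting step: returns (draft, citations)."""
    raise NotImplementedError

def review(answer: ChecklistAnswer, approved: bool,
           revision: str | None = None) -> None:
    """Reviewer gate: every answer is approved or revised, never auto-shipped."""
    if approved:
        answer.status = "approved"
    else:
        answer.draft = revision or answer.draft
        answer.status = "revised"
```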

PHASE 04
Deliver report

IMDA-aligned summary HTML + PDF, scoped to your AI system. Re-runnable on every release.

What you get

The deliverable.

Testing report (HTML + PDF)

IMDA-aligned summary covering all 11 principles, with metric tables, evidence references, and per-control verdicts.

Reproducible test runs

Every metric is bound to a deterministic run hash. Re-run on every release; diff the report. No manual rework.
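
A deterministic run hash can be as simple as hashing everything that could change a metric. A sketch assuming the hash covers model version, test config, and toolkit versions (the exact field set is illustrative):

```python
import hashlib
import json

def run_hash(model_version: str, config: dict, toolkit_versions: dict) -> str:
    """Hash every input that can change a metric; same inputs, same hash."""
    payload = json.dumps(
        {"model": model_version, "config": config, "toolkits": toolkit_versions},
        sort_keys=True,  # canonical key ordering keeps the hash stable
    ).encode()
    return hashlib.sha256(payload).hexdigest()

h1 = run_hash("candidate-v3", {"seed": 7, "plugins": ["fairness"]},
              {"aiverify": "2.0", "moonshot": "0.x"})
h2 = run_hash("candidate-v3", {"seed": 7, "plugins": ["fairness"]},
              {"aiverify": "2.0", "moonshot": "0.x"})
assert h1 == h2  # re-running the same release reproduces the same hash
```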

Reviewer audit trail

Every process checklist answer is signed, timestamped, and stored append-only. Defensible under regulatory inspection.
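
Append-only plus signatures is the standard hash-chain pattern: each entry commits to the one before it, so history cannot be quietly rewritten. A minimal sketch using HMAC in place of a real per-reviewer signing key, with an illustrative entry layout:

```python
import hashlib
import hmac
import json
import time

LOG: list[dict] = []           # append-only in memory; a WORM store in practice
SIGNING_KEY = b"reviewer-key"  # stand-in for a real per-reviewer key

def append_answer(reviewer: str, principle: str, verdict: str) -> dict:
    """Chain each entry to the previous one, then sign and timestamp it."""
    prev = LOG[-1]["entry_hash"] if LOG else "genesis"
    body = {"reviewer": reviewer, "principle": principle,
            "verdict": verdict, "ts": time.time(), "prev": prev}
    payload = json.dumps(body, sort_keys=True).encode()
    body["entry_hash"] = hashlib.sha256(payload).hexdigest()
    body["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    LOG.append(body)
    return body

append_answer("a.tan", "P9 Accountability", "approved")
append_answer("a.tan", "P10 Human Agency & Oversight", "revised")
```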

Scoped attestation

Clear language on what was tested, what wasn't, and what the report does and does not certify. No regulatory overreach.

Attestation

What this is, and what it isn't.

Built on AI Verify Toolkit — an open-source project of the AI Verify Foundation, a subsidiary of Singapore's Infocomm Media Development Authority (IMDA). Licensed under Apache-2.0.

This is a testing report, not a certificate. The AI Verify Foundation does not issue certificates for individual AI systems; the framework is a testing methodology. AgentSure is not currently an accredited AI Verify Testing Partner, and the “AI Verify” name and logo remain the property of the AI Verify Foundation.

Get the report before your regulator asks for it.

Quantification is the front door to risk reduction and to insurance. Start with a 60-minute scoping call.

Book scoping call →
See risk reduction