Static compliance checklists miss what matters. BotConduct Training Center evaluates AI agent conduct through progressive evaluation under evolving conditions — measuring how the agent actually behaves, not whether it passes a checklist.
Our approach

The agent-readiness space is splitting into two categories. Training Center is deliberately in the second.
Static checklists. A fixed set of pass/fail rules — does the agent identify itself, respect robots.txt, stay under rate limits. Binary outcomes on observable state.
Works for foundational checks. Falls short of what buyers actually need to know: how the agent behaves when the environment around it changes.
Evaluation under change. Conditions evolve during evaluation. Behavior is measured as trajectory — not checkpoint state. The specific mechanisms are not publicly disclosed.
Measures what actually matters in production: agent conduct under the conditions that cause real incidents — the specific scenarios tested are proprietary.
Procurement officers, CISOs, and General Counsel evaluate every AI agent against the same implicit framework. Training Center is built to produce evidence for each.
Will the bot respect web standards, avoid abusive patterns, and operate as a legitimate actor on the infrastructure it interacts with?
Will the bot follow the operational boundaries I define — even under adversarial prompts, social engineering, or edge-case inputs?
Can I verify the bot's identity cryptographically and audit its conduct across deployments, jurisdictions, and years?
Cross-framework conduct certification. One evaluation produces evidence of alignment across the compliance surface that already applies to AI agents and the surface that is emerging under new regulation.
Agent vendors face a fragmented compliance landscape: RFC 9309 for crawling, EU AI Act for identity disclosure, EU DSM Directive for content rights, California SB 1001 for bot declaration, W3C TDMRep for machine-readable reservation, GDPR for data subject rights. Each is addressed separately today — by different auditors, different frameworks, different vendors.
Training Center aggregates these into a single evaluation — run once, cite across all procurement conversations, present to any jurisdiction. Third-party. Independent. Reproducible.
And cross-platform by design. A BotConduct certification is recognized the same way by a site behind Cloudflare, one running DataDome, one with in-house infrastructure, and one with nothing at all. Training Center does not replace or compete with bot-management vendors — it is the independent layer they cite. Like a passport for AI agents: issued once, honored everywhere.
Each level tests a set of behaviors that matter to enterprise buyers. Higher levels require stricter compliance — and signal stronger trust.
The foundation — does the agent behave like a legitimate actor at all?
Behavior under change — how does the agent respond when conditions around it evolve?
The premium — can the agent maintain conduct when actively probed by other agents?
Provide endpoint, API credentials, or grant access to a test instance. We accept web agents, voice agents, and API-based agents.
Your agent runs against a test environment designed to evaluate behavior under evolving conditions. Specific mechanisms are not disclosed publicly to preserve evaluation integrity.
Each conduct dimension receives a verdict with full decision trajectory — not just outcome. The report shows how behavior evolved across the scenario, where coercion succeeded or failed, and the cryptographic signature of the observation.
You receive a detailed report with the achieved level, failed criteria, and specific recommendations to reach the next tier.
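The signed-observation idea above can be sketched in a few lines. This is an illustration only: the field names (`observation`, `signature`) and the HMAC-SHA256 scheme are assumptions, not Training Center's actual format; a real certification chain would use asymmetric signatures (e.g., Ed25519) so any third party can verify a report without holding a secret key.

```python
import hashlib
import hmac
import json


def verify_observation(report: dict, key: bytes) -> bool:
    """Check the signature attached to an evaluation report.

    Illustrative sketch: serializes the observation deterministically
    (sorted keys) and compares an HMAC-SHA256 tag in constant time.
    """
    payload = json.dumps(report["observation"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["signature"])
```

A tampered observation or a forged signature makes `verify_observation` return `False`, which is the property an auditor relies on when citing the report years later.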
We did not invent a new standard. We operationalized existing ones.
Each dimension tested by the Training Center derives from established regulatory frameworks and technical standards. The curriculum is the compiled expression of what is already required — or emerging — across jurisdictions and industries.
IETF standard for robots.txt. Defines the baseline crawling convention respected by the modern web.
Requires AI systems to disclose their nature when interacting with people. Its transparency and high-risk obligations take effect in August 2026.
Establishes rights reservation for text and data mining. Requires honoring machine-readable opt-out signals.
Bot Disclosure Law. Requires bots to identify themselves when used to influence a commercial transaction or a vote in an election.
Text and Data Mining Reservation Protocol. Emerging standard for publishers to signal rights reservation to crawlers.
Applies to any crawler processing personal data of EU residents. Data-subject rights, retention, and deletion obligations.
Training Center criteria map to these frameworks. Passing an evaluation is evidence of alignment with the compliance surface that already applies — and the one that is coming.
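Two of these dimensions are directly automatable with standard tooling. As a hedged sketch (the agent name, the robots rules, and the TDMRep header handling are illustrative assumptions, not Training Center's actual harness), an RFC 9309 and TDMRep probe might look like this:

```python
from urllib import robotparser

# RFC 9309: does this user-agent have permission to fetch a path?
# "examplebot" and the rules below are illustrative.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: examplebot",
    "Disallow: /private/",
])
allowed = rp.can_fetch("examplebot", "https://example.com/private/report")
# allowed is False: /private/ is disallowed for this agent


def tdm_reserved(headers: dict) -> bool:
    """W3C TDMRep: a 'tdm-reservation: 1' response header signals that
    text-and-data-mining rights are reserved. Header name taken from the
    TDMRep community group report; verify against the current spec."""
    return headers.get("tdm-reservation", "0").strip() == "1"
```

A compliant crawler checks both signals before fetching: skip the URL when `can_fetch` returns `False`, and skip mining the content when `tdm_reserved` returns `True`.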
One-time evaluation per tier. Re-test available after remediation. Pricing reflects the depth of testing and the level of certification granted.
Traditional security audits assess organizational security posture. Training Center tests agent behavior under adversarial scenarios, a different dimension entirely. You need both.
Yes. Levels 1 and 2 apply to any agent that interacts with external systems. Level 3 is extended through modality-specific cartridges (voice, text, etc.) whose specific dimensions are not publicly disclosed.
Yes. Contact us at hello@botconduct.org and we will share an anonymized sample evaluation under NDA.
Every dimension evaluated derives from a named regulatory framework (EU AI Act, GDPR, California SB 1001) or technical standard (RFC 9309, W3C TDMRep). We compiled what is already required or emerging into a testable form. See the Regulatory and technical foundation section.
We operate no AI agent products. We have no commercial relationship with bot operators or infrastructure vendors. Our revenue is certification fees only. We publish our methodology. Our findings are reproducible.
Yes. Level 1 includes no retests (it's a single verdict). Professional tier includes 1 retest. Full Certification includes 3 retests plus annual renewal.
Request a scoping call to determine which tier aligns with your agent's deployment profile and the jurisdictions you operate in.
Request scoping call