How do you trust
the AI agent
visiting your site?

AI agents visit your site every day. They scrape your data, call your APIs, interact with your systems. Most don't tell you who they are or what they're doing. You find out when something breaks — or you never find out at all.

I run a site
Install free middleware. See every agent. File complaints automatically.
I build agents
Declare intent. Build reputation. Stop getting blocked.
I need to check an agent
Look up any agent's history. Complaints. Clean sessions. Cross-market reputation.
What's happening right now

You're probably being visited
by agents you know nothing about.

72%
of bot traffic is unidentified
86%
of agents fall to adversarial attacks
0
tools exist for the long tail

This already happened

Vercel — April 2026. AI agents accelerated a supply chain attack through Context.ai. The CEO publicly disclosed the breach.
Lovable — April 2026. Credentials and user AI chats exposed via IDOR vulnerability. $6.6B valuation. An agent accessed what no agent should have.
Perplexity — De-listed by Cloudflare for using stealth crawlers that rotated IPs and faked browser identity. 88% of publishers now block AI crawlers.
OpenClaw — 9 CVEs in 4 days. 135,000 instances publicly exposed. 341 malicious skills in the marketplace stealing credentials.

In every case, the agent operated without a signed commitment of what it was authorized to do. No contract. No declaration. No record. No accountability.

Data extraction you don't see

Agents scrape your pricing, your content, your customer-facing data. They rotate IPs, fake their identity, and operate 24/7. Your logs show traffic. They don't show intent.

API abuse you can't trace

Your API gets hit by agents making thousands of requests. Some are legitimate integrations. Some are competitors. Some are hostile. You can't tell them apart.

No accountability when things go wrong

An agent accesses something it shouldn't. Exfiltrates data. Overloads your system. There's no signed record of what it promised, what it did, or who's responsible.

Enterprise tools don't cover you

Cloudflare Bot Management costs thousands. AIUC certifies ten enterprise vendors. Your site faces hundreds of unknown agents and has zero visibility.

What BotConduct solves

Before an agent touches your site,
it has to say what it's going to do.
In writing. Signed. On the record.

That's it. That's the entire idea. Everything else follows from this.

Visibility

You see every agent that visits. What it declared. What it actually did. In real time.

Accountability

If it lied, there's a signed complaint with evidence. Not your word against theirs — cryptographic proof.

Reputation

Every site that reports builds a public record. Good agents are distinguishable from bad ones. For the first time.

You don't trust us.
You verify.

BotConduct is open source. The middleware runs on your server, not ours. The cryptography is standard Ed25519 — auditable by anyone. Every line of verification code is public. You don't send us your traffic. You don't depend on our uptime. You don't take our word for anything.

The agent's declaration is signed with its own key. Your middleware verifies the signature locally. No call home. No cloud dependency. No black box.

Middleware runs on YOUR infrastructure

Verification is offline — no calls to BotConduct

All code is auditable on GitHub

Standard cryptography (Ed25519 signatures, JWT)

No vendor lock-in — fork it anytime

Works alongside Cloudflare, DataDome, or nothing

How it works

The agent has to declare
what it's going to do. And sign it.

If it lies, your middleware catches it and files a signed complaint to the public registry. The next site that agent visits sees the record.

01

You publish what you allow

The middleware publishes a machine-readable contract on your site: what resources are available, at what rate, for what purposes. Choose a template, adjust if needed. Done once.
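A contract might look roughly like this. The field names below are illustrative assumptions, not the published schema:

```python
import json

# Illustrative sketch of a site contract: resources, allowed methods,
# rate limits, and accepted purposes. Field names are assumptions.
contract = {
    "version": "1.0",
    "resources": [
        {
            "path": "/api/products",
            "methods": ["GET"],
            "rate_limit": "10/min",
            "purposes": ["price-research", "integration"],
        },
        # An empty methods list marks a resource as off-limits to agents.
        {"path": "/api/orders", "methods": [], "purposes": []},
    ],
    "complaints": "enabled",
}

print(json.dumps(contract, indent=2))
```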

02

The agent reads your rules and declares its intent

Before operating, the agent reads your contract, picks a scope, and signs a cryptographic declaration: "I will access /api/products at 10 requests/min for price research." The signature is verifiable by you without calling anyone.
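In Python, signing such a declaration with Ed25519 looks roughly like this. This is a sketch using the `cryptography` package; the declaration fields are illustrative, not the wire format:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agent's identity key (illustrative; a real agent loads a stored key).
agent_key = Ed25519PrivateKey.generate()

# The declaration: which resource, at what rate, for what purpose.
declaration = json.dumps(
    {"resource": "/api/products", "rate": "10/min", "purpose": "price-research"},
    sort_keys=True,
).encode()

signature = agent_key.sign(declaration)

# Anyone holding the agent's public key can verify this offline.
# verify() raises InvalidSignature if the declaration was tampered with.
agent_key.public_key().verify(signature, declaration)
```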

03

Your middleware verifies every request

Each request carries the signed declaration. Your middleware checks: valid signature? Within declared scope? All offline. No network round-trip. No external dependency.
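The scope check itself is plain bookkeeping. A minimal sketch, where the declaration fields and limits are assumptions for illustration:

```python
import time
from collections import deque

def within_scope(request_method, request_path, declaration, recent_requests, now=None):
    """Check one request against the agent's signed declaration.

    `declaration` is a dict like {"methods": ["GET"], "path_prefix": "/api/products",
    "rate_per_min": 10}; `recent_requests` is a deque of prior request timestamps.
    """
    now = now if now is not None else time.time()
    if request_method not in declaration["methods"]:
        return False
    if not request_path.startswith(declaration["path_prefix"]):
        return False
    # Drop timestamps older than 60 seconds, then enforce the declared rate.
    while recent_requests and now - recent_requests[0] > 60:
        recent_requests.popleft()
    if len(recent_requests) >= declaration["rate_per_min"]:
        return False
    recent_requests.append(now)
    return True

decl = {"methods": ["GET"], "path_prefix": "/api/products", "rate_per_min": 10}
history = deque()
print(within_scope("GET", "/api/products/42", decl, history))   # True: in scope
print(within_scope("POST", "/api/products/42", decl, history))  # False: method not declared
```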

04

If the agent breaks its promise, you file a complaint

The agent said GET only but tried POST. Said 10/min but hit 50/min. Your middleware generates a signed complaint with evidence and sends it to the public registry. Automatic.
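What the complaint carries, roughly. This is an illustrative shape, not the registry's actual schema:

```python
import hashlib
import json
from datetime import datetime, timezone

# Evidence: the signed declaration and the observed request that violated it.
declared = {"methods": ["GET"], "rate_per_min": 10}
observed = {"method": "POST", "path": "/api/products", "count_last_min": 50}

complaint = {
    "agent_pubkey": "<agent public key>",   # identifies the agent, not the site's visitors
    "violation": "method-not-declared",
    "declared": declared,
    "observed": observed,
    "timestamp": datetime.now(timezone.utc).isoformat(),
    # The hash binds the complaint to its evidence; the site then signs
    # the whole record before sending it to the registry.
    "evidence_hash": hashlib.sha256(
        json.dumps([declared, observed], sort_keys=True).encode()
    ).hexdigest(),
}

print(json.dumps(complaint, indent=2))
```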

05

Every site checks the registry

Before letting an agent in, any site can check its history: how many complaints, from how many sites, for what reasons. Reputation built by the market, not by a vendor.

Get started

Two lines. Your server. Free.

Python · Flask / Django
pip install botconduct-middleware

from botconduct_middleware import BCSMiddleware
app = BCSMiddleware(app, template="government-api")
Node.js · Express / Vercel
npm install @botconduct/middleware

app.use(require('@botconduct/middleware')({ template: 'ecommerce' }))

Templates: government-api · ecommerce · fintech · publisher · open-api

Runs on your server. Publishes your contract. Verifies declarations. Files complaints. All automatic.

For developers building agents

pip install botconduct

# One import, one changed call:
import botconduct as bcs
response = bcs.get("https://api.example.com/data", purpose="research")

Your agent reads the site's contract, signs a declaration, and includes it in every request. Build reputation automatically by operating honestly.

For agent developers

Test your agent before it goes live

The adversarial stress test simulates real attacks against your agent. Find vulnerabilities before your users do. Free for your first 3 evaluations.

curl -X POST https://botconduct.org/api/v3/training-center/start \
  -H "Content-Type: application/json" \
  -d '{"bot_name":"MyAgent","operator":"me","scenarios":["C1","C3"]}'

Go to Playground →

How the network works

Your data makes the network
valuable for everyone.

The free middleware shares aggregated bot behavioral data with the public registry. This is what allows cross-market reputation — patterns only visible when multiple sites report. No personal data of your human visitors is collected. IPs are hashed. UAs are truncated.
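The hashing and truncation described above can be sketched as follows. The salt scheme and 64-character cap are assumptions for illustration; the middleware's actual parameters may differ:

```python
import hashlib

SITE_SALT = "rotate-me-regularly"  # illustrative per-site secret

def pseudonymize_ip(ip: str) -> str:
    """Salted hash: repeat visits correlate without the registry learning the IP."""
    return hashlib.sha256((SITE_SALT + ip).encode()).hexdigest()

def truncate_ua(user_agent: str, limit: int = 64) -> str:
    """Keep enough of the UA to classify the client, drop the fingerprintable tail."""
    return user_agent[:limit]

print(pseudonymize_ip("203.0.113.7"))
print(truncate_ua("Mozilla/5.0 (compatible; ExampleAgent/1.0; +https://example.com/bot)"))
```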

Network mode (default)

Bot behavioral data contributes to the public registry. You see your traffic. The network sees aggregate patterns. Better detection for all.

Local-only mode

No data leaves your server. Full privacy. You lose cross-market reputation but retain complete control. Set data-mode="local".
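For the Python middleware that might look like the snippet below. This is a sketch assuming the `data-mode` setting is exposed as a keyword argument; check the docs for the exact spelling:

```python
from botconduct_middleware import BCSMiddleware

# Local-only: verify declarations on your server, send nothing to the registry.
app = BCSMiddleware(app, template="ecommerce", data_mode="local")
```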

Full data policy and terms →

Pricing

Free to protect your site.
Pay when you need the full picture.

FREE
$0

Know what's hitting your site

Open source middleware. See every agent. Verify their declarations. File complaints when they lie. Enough to go from blind to aware.

PRO · $99–199/MO
$99/mo

Know if you can TRUST them

Free tells you who visits. Pro tells you their history across hundreds of other sites. Cross-market reputation. Alerts when a known bad actor arrives. The difference between seeing traffic and understanding it.

BUSINESS · $499–1,999/MO
$499/mo

Prove it to regulators and auditors

Historical audit trail. Legal-grade evidence exports. Compliance reports against EU AI Act, GDPR, local regulations. When a regulator asks "what governance do you have over agent traffic?", you have the answer.

INSTITUTIONAL
Custom

Run your own registry

For governments, regulators, and sector consortiums. Operate a registry under the BotConduct protocol for your jurisdiction or industry. Sovereign. Federated. Your data, your rules. Learn more →

Know what's visiting your site.
Make agents accountable.

Free middleware. Open source. Runs on your server. Takes two minutes to install.

Install now — free