Over two weeks we observed every bot that touched botconduct.org and scored each one with our proprietary behavioral engine. The data is unflattering for most — and it's why ImportSignals just became the first production site integrating the Bot Conduct Standard API.
We don't judge intent. We score observed behavior against a proprietary rubric that spans multiple behavioral dimensions and is refined continuously as new bot behaviors emerge.
Bots with low scores land in the hostile category: the site would be well within its rights to block them. Bots at the top earn acceptable or exemplary ratings.
The rubric and weights are not public. What is public is the verdict for any given bot, which is what our API returns to sites that integrate BCS.
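Since the rubric and weights aren't public, here is only a minimal sketch of how a consuming site might map a returned score to the three public verdict tiers. The threshold values below are assumptions for illustration, not the real cutoffs.

```python
from enum import Enum

class Verdict(Enum):
    HOSTILE = "hostile"        # the site would be within its rights to block
    ACCEPTABLE = "acceptable"
    EXEMPLARY = "exemplary"

def verdict_from_score(score: int) -> Verdict:
    """Map a 0-100 behavioral score to a public verdict tier.
    The cutoffs here are assumed; the real rubric is proprietary."""
    if score < 40:             # assumed hostile cutoff
        return Verdict.HOSTILE
    if score < 90:             # assumed acceptable band
        return Verdict.ACCEPTABLE
    return Verdict.EXEMPLARY
```

The API itself returns only the verdict, so integrators never see the underlying weights.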
Twelve named operators hit a perfect or near-perfect score by behaving the way the standard expects:
| Bot | Operator | Score |
|---|---|---|
| GPTBot | OpenAI | 100 |
| ClaudeBot | Anthropic | 100 |
| Bingbot | Microsoft | 100 |
| Bytespider | ByteDance | 100 |
| Baiduspider | Baidu | 100 |
| YandexBot | Yandex | 100 |
| | Meta | 100 |
| redditbot | Reddit | 100 |
| AwarioSmartBot | Awario | 100 |
| toolhub-bot | toolhub24.ru | 100 |
| Googlebot | Google | 92 |
| AhrefsBot | Ahrefs | 90 |
These are the reference implementations. If your bot can't match what Googlebot does, that's a product problem.
On the other extreme, we found 40 bots landing in the hostile category. Most fall into three archetypes:
- **Mislabeled scanners.** L9Explore (LeakIX) scored 0 with 118 requests, hammering dozens of security-relevant paths in a short window. LeakIX is a legitimate security research company, but this specific scanner makes no attempt to identify itself as research rather than attack.
- **Distributed probers.** We logged 60+ distributed crawlers exhibiting hostile behavioral patterns consistent with scanning and credential probing. One bot from a single cloud IP sent 2,562 requests in a single day.
- **Infrastructure hunters.** A set of operators targeted endpoints that legitimate crawlers have no reason to visit — the kind of paths attackers hit when looking for unsecured infrastructure. Any bot probing those is, by definition, not a legitimate crawler. Twelve distinct operators triggered this category over two weeks.
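These archetypes can be approximated with simple request-log heuristics. The sketch below is illustrative only: the actual behavioral engine is proprietary, and the probe-path prefixes and thresholds are assumptions.

```python
# Assumed examples of security-relevant paths that legitimate
# crawlers have no reason to visit; the engine's real list is private.
PROBE_PREFIXES = ("/.env", "/.git", "/wp-login.php", "/admin")

def looks_like_scanner(paths: list[str], window_seconds: float) -> bool:
    """Flag a request sample as scanner-like: security-relevant paths
    hit repeatedly, or a high overall request rate in a short window.
    Thresholds are assumed for demonstration."""
    probes = sum(1 for p in paths if p.startswith(PROBE_PREFIXES))
    if probes == 0:
        return False
    rate = len(paths) / max(window_seconds, 1.0)
    return probes >= 5 or rate > 1.0
```

A burst like L9Explore's 118 requests against dozens of such paths would trip either condition.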
> "The hostile bots aren't 'AI agents gone wrong.' They're scanners. And the line between 'legitimate scraper' and 'scanner' is exactly what this standard measures."
ImportSignals — an e-commerce intelligence platform for importers — gets hit by hundreds of automated requests a day. Most are harmless; some are not. As of today, every request to api.importsignals.com/api/* is classified in real time against the BCS registry:
Each response carries an `X-BCS-Score` header, and operators can certify at `/get-certified`. Today the middleware runs in observe mode: we're measuring, not blocking. In one week we move to warn mode; in two weeks, enforce mode. Operators have plenty of time to certify their bots before anything breaks.
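The rollout can be sketched in middleware terms. The `X-BCS-Score` header comes from the integration above; the registry lookup, the `X-BCS-Warning` header, and the hostile cutoff are assumptions for illustration, not the production implementation.

```python
from enum import Enum

class Mode(Enum):
    OBSERVE = "observe"   # measure only, never block
    WARN = "warn"         # attach a warning header to hostile bots
    ENFORCE = "enforce"   # block hostile bots outright

# Hypothetical in-memory stand-in; the real middleware queries the BCS registry.
REGISTRY = {"GPTBot": 100, "ClaudeBot": 100, "L9Explore": 0}

def handle(user_agent: str, mode: Mode) -> tuple[int, dict]:
    """Classify one request and return (status, response headers),
    mirroring the observe -> warn -> enforce rollout."""
    score = REGISTRY.get(user_agent)
    headers = {"X-BCS-Score": str(score) if score is not None else "unscored"}
    hostile = score is not None and score < 40   # assumed hostile cutoff
    if mode is Mode.ENFORCE and hostile:
        return 403, headers                       # blocked
    if mode is Mode.WARN and hostile:
        headers["X-BCS-Warning"] = "hostile; enforcement pending"
    return 200, headers
```

In observe mode every request passes through with its score attached, which is what makes the two-week grace period measurable rather than guesswork.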
Level 1 certification takes 30 seconds. Or run the full behavioral test and earn a real score.
We're looking for three things in the next 30 days:
The web needs a shared vocabulary for "this bot is polite" vs "this bot is trying to break in." BCS is one attempt. We'd rather be proven wrong than be ignored.