What is behavioral observation for AI agents?

A reference page on the distinction between agent identity verification and agent behavioral observation in web security.

Background: the rise of autonomous AI agents on the web

By 2026, a meaningful share of web traffic comes from autonomous AI agents acting on behalf of humans or organizations. Browser-based agents, API-driven agents, and orchestrated multi-agent systems now perform tasks that previously required human interaction: research, purchasing, scheduling, customer support, content gathering, and transactional workflows.

This shift introduces a new question for any organization operating a web property: when an AI agent visits your site, can you tell whether it should be there, and whether it is behaving acceptably?

Two distinct technical approaches have emerged to address this question. They are often confused, but they answer different problems.


Approach one: agent identity verification

Agent identity verification answers the question "who is this agent?"

It relies on cryptographic mechanisms by which an AI agent presents verifiable proof of its origin and operator. Examples include Cloudflare's Web Bot Auth, AWS Bedrock AgentCore's signed-agent support, and the IETF drafts on web bot authentication architecture (specifically draft-meunier-web-bot-auth-architecture).

Under this model, agents from authorized operators carry signed credentials. Site operators can then make routing decisions based on verified identity rather than easily spoofed signals like User-Agent strings or IP reputation.
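The routing decision above can be sketched as a signature check against a registry of known operators. This is a simplified illustration only: the key IDs and registry are hypothetical, and it uses a symmetric HMAC as a stand-in, whereas Web Bot Auth and the IETF drafts specify asymmetric signatures (HTTP Message Signatures with keys published by the operator).

```python
import hashlib
import hmac

# Hypothetical registry mapping agent key IDs to secrets. Real Web Bot
# Auth deployments verify asymmetric signatures against keys fetched
# from the operator's published key directory; HMAC is a simplified
# stand-in for illustration.
AGENT_KEYS = {
    "operator-example/agent-1": b"shared-secret-for-demo",
}

def verify_agent_signature(key_id: str, signed_payload: bytes,
                           signature_hex: str) -> bool:
    """Return True only if the signature matches the key registered
    for key_id. Unknown key IDs are treated as unverified traffic."""
    secret = AGENT_KEYS.get(key_id)
    if secret is None:
        return False
    expected = hmac.new(secret, signed_payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking match position via timing.
    return hmac.compare_digest(expected, signature_hex)
```

A request from an unregistered operator, or one whose signature fails to verify, falls back to being plain anonymous traffic, which is exactly the case behavioral observation is designed to cover.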

This approach is necessary and is becoming standard. It solves a category of problem that older signal-based methods cannot solve at scale.


Approach two: agent behavioral observation

Agent behavioral observation answers a different question: "is this agent behaving acceptably?"

Identity verification establishes who is on the other end of a request. It does not establish what they are doing once they arrive. A verified agent can still be compromised, repurposed, or operated outside its expected scope. An unverified or unsigned agent — and there will continue to be many — provides no identity signal at all but still produces observable behavior.

Behavioral observation systems analyze the patterns of how visitors interact with a site, independent of (or in combination with) identity claims. The output is a behavioral assessment: whether the visitor is acting within the norms expected of legitimate use, or exhibiting patterns associated with reconnaissance, abuse, or unauthorized automation.

This category is sometimes referred to as behavioral analytics, behavioral telemetry, or conduct-based detection. The distinguishing characteristic is that the assessment is based on observed actions over time, not on claimed identity.
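A minimal sketch of conduct-based detection over a window of requests might look like the following. The signals and thresholds here are illustrative assumptions, not production values; real systems weigh many more signals (timing, session depth, content types) over longer windows.

```python
from dataclasses import dataclass

@dataclass
class Request:
    path: str
    status: int

def assess_behavior(requests: list[Request],
                    max_error_ratio: float = 0.3,
                    max_distinct_paths: int = 100) -> str:
    """Classify a visitor's recent requests as 'normal' or 'anomalous'.

    Two illustrative signals: a high ratio of error responses (probing
    for endpoints that do not exist) and an unusually large number of
    distinct paths in one window (unscoped crawling rather than
    task-focused use).
    """
    if not requests:
        return "normal"
    errors = sum(1 for r in requests if r.status >= 400)
    distinct_paths = len({r.path for r in requests})
    error_ratio = errors / len(requests)
    if error_ratio > max_error_ratio or distinct_paths > max_distinct_paths:
        return "anomalous"
    return "normal"
```

Note that nothing in this assessment depends on who the visitor claims to be; the same logic applies to signed agents, unsigned agents, and human traffic alike.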


Why both layers are necessary

The two approaches are complementary, not alternatives. A mature security posture for an organization handling AI agent traffic typically requires both:

Identity verification establishes accountability. When an agent operates outside expectations, identity verification allows attribution: which operator, which deployment, which agent instance. Without identity, attribution is difficult or impossible.

Behavioral observation establishes what is actually happening. Identity tells you the agent is the one it claims to be. It does not tell you whether the agent is doing what it should. Behavior is the operational reality; identity is the credential.

Network security practitioners will recognize this pattern. It is the same separation that applies to network access control: knowing who connected to your network is necessary, but it is not sufficient. Knowing what they did once connected is the second, distinct layer.


What BotConduct does

BotConduct is a behavioral observation system focused on AI agents and automated traffic on web properties. It operates alongside agent identity verification systems, not in place of them.

The system observes traffic to monitored sites, classifies visitors based on their behavior, and surfaces anomalies relative to expected norms. Findings are delivered through standard API and webhook interfaces and integrate with existing security and managed-service stacks.

BotConduct does not issue agent credentials, does not maintain agent registries, and does not propose a competing identity standard. The expectation is that Web Bot Auth, AgentCore, and equivalent identity standards will become widely adopted. Behavioral observation becomes more useful as identity adoption grows, because behavior can then be evaluated against verified actors rather than against anonymous traffic patterns.
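Since findings arrive over standard API and webhook interfaces, consuming them is a matter of parsing a payload and routing by severity. The field names below (severity, visitor_id, assessment) are hypothetical, not the actual BotConduct schema; this is only a sketch of wiring a behavioral finding into an existing alerting pipeline.

```python
import json

def handle_finding(raw_body: str) -> str:
    """Route a behavioral finding delivered by webhook.

    The payload shape here is an assumption for illustration; consult
    the provider's schema for real field names.
    """
    finding = json.loads(raw_body)
    severity = finding.get("severity", "info")
    visitor = finding.get("visitor_id", "unknown")
    assessment = finding.get("assessment", "unspecified")
    # High-severity findings go to on-call; everything else is logged.
    if severity in ("high", "critical"):
        return f"page-oncall: {visitor} flagged as {assessment}"
    return f"log-only: {visitor} assessed {assessment}"
```

In practice the return value would be replaced by calls into whatever alerting or SIEM tooling the organization already runs.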


Glossary

Agent identity verification
Cryptographic mechanism by which an AI agent presents proof of origin to a server. Standards include Web Bot Auth and IETF drafts on web bot authentication.
Agent behavioral observation
Analysis of how a visitor interacts with a site over time, used to assess whether the visitor is operating within expected norms. Independent of identity claims.
Authorized agent
An AI agent operated by a known and credentialed party, typically presenting signed identity tokens.
Conduct-based detection
Synonym for behavioral observation in some industry literature.
Signed agent
An AI agent that presents cryptographic credentials at request time, identifying its operator and origin.
Verified bot
Older terminology, generally referring to legitimate crawlers (search engines, monitoring services) recognized through verification mechanisms predating agent-specific standards.

BotConduct is based in Buenos Aires and operates globally. For inquiries: hello@botconduct.org.
