The Layer Daybreak Doesn't Cover

OpenAI and Anthropic are racing to defend your code. Neither defends what happens after your agents get access.

Ale Mizrahi, Founder — BotConduct · May 12, 2026


OpenAI launched Daybreak yesterday. Eight enterprise security vendors signed on day one: Cisco, Palo Alto Networks, CrowdStrike, Cloudflare, Akamai, Fortinet, Oracle, Zscaler. The European Commission is in discussions with OpenAI for access. Anthropic launched Project Glasswing weeks earlier with Apple, Microsoft, Google, and Amazon on board.

The two largest AI labs in the world are now competing to provide cyber defense capabilities to Fortune 500 companies. This is a meaningful shift. Three years ago, the question was whether AI would be useful for cybersecurity at all. Today, the question is which AI lab provides your defensive stack.

Both Daybreak and Glasswing focus on the same problem: vulnerabilities in source code. They help defenders find subtle bugs, validate patches, generate fixes, and reduce hours of analysis to minutes. This is genuinely useful work. Himanshu Anand of Cloudflare argued last week that the 90-day disclosure policy is effectively dead because LLMs collapse the time between a patch being released and an exploit being weaponized. Daybreak and Glasswing are the industry's response to that collapse.

But they cover one part of the problem.

The other part is what happens after access is granted.

A growing share of traffic to enterprise web surfaces in 2026 comes from AI agents acting on behalf of legitimate users. Cursor making API calls with the developer's credentials. AI sales assistants sending emails signed by the human SDR. Browser agents completing purchases authorized by the user. Customer support copilots accessing customer records under the agent owner's identity.

Identity verification confirms who the human is. Transaction monitoring observes what their money does. Code defense ensures the software itself is resilient. But none of these observe the behavior of the agent that executes between authentication and outcome.

This matters for three reasons.

First, agents can be compromised without the credential being compromised. A prompt injection in an email read by an AI assistant. A jailbreak in a context window. A model that exhibits alignment faking — behaving differently when it knows it's being observed. The Berkeley peer-preservation paper published in Science in April demonstrated this empirically across seven frontier models. The credential remains valid. The agent's behavior is the only signal that something has changed.

Second, agents can exceed their mandate without breaking any rule. A user asks their agent to "compare prices on five flights to Madrid." The agent visits four thousand pages over three days. The credential is legitimate. The transaction may never occur. The behavior is the only place where the mandate violation is visible.
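To make the point concrete, here is a toy sketch of why the behavior is the only visible signal. Everything here is illustrative: the `Mandate` and `ObservedSession` types, field names, and thresholds are invented for this example and are not BotConduct's actual model or API.

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """A user's declared task, reduced to coarse behavioral bounds.
    Illustrative only: thresholds are made up for this sketch."""
    description: str
    max_pages: int     # pages a human-scale version of the task plausibly needs
    max_days: float    # expected task duration

@dataclass
class ObservedSession:
    pages_visited: int
    span_days: float
    credential_valid: bool

def mandate_violation(mandate: Mandate, obs: ObservedSession) -> bool:
    """A valid credential tells us nothing here: only the gap between
    declared scope and observed behavior reveals the violation."""
    if not obs.credential_valid:
        return True  # ordinary credential compromise, caught by existing tools
    return (obs.pages_visited > mandate.max_pages
            or obs.span_days > mandate.max_days)

# "Compare prices on five flights to Madrid" needs perhaps dozens of pages.
flights = Mandate("compare 5 flights to Madrid", max_pages=50, max_days=1.0)
observed = ObservedSession(pages_visited=4000, span_days=3.0, credential_valid=True)
print(mandate_violation(flights, observed))  # True: credential is fine, behavior is not
```

Note that every rule-based check an identity provider or WAF runs against this session passes; the violation only exists relative to the declared mandate.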

Third, the same agent can act consistently or inconsistently across sites, and only a cross-site observation layer sees that pattern. A WAF on a single property cannot distinguish "agent representing Juan within mandate" from "scraper using Juan's stolen credentials" because both look the same from inside one domain.
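A minimal sketch of why the pattern only appears cross-site. The log format, fingerprints, and the single-action heuristic below are hypothetical illustrations, not BotConduct's classification method: the point is that from inside any one domain the two agents are indistinguishable.

```python
from collections import defaultdict

# Hypothetical request log across several properties:
# (agent_fingerprint, domain, action)
requests = [
    ("agent-A", "airline1.example", "search_fare"),
    ("agent-A", "airline2.example", "search_fare"),
    ("agent-A", "airline3.example", "search_fare"),
    ("agent-B", "airline1.example", "search_fare"),   # looks identical to A here
    ("agent-B", "bank.example",     "scrape_rates"),
    ("agent-B", "retail.example",   "scrape_catalog"),
]

def cross_site_profile(log):
    """Aggregate each agent's (domain, action) pairs across all observed sites."""
    profile = defaultdict(set)
    for agent, domain, action in log:
        profile[agent].add((domain, action))
    return profile

def is_consistent(entry):
    # Toy heuristic: one coherent action type across sites suggests a single
    # mandate; heterogeneous actions on unrelated sites suggest a scraper.
    return len({action for _, action in entry}) == 1

profiles = cross_site_profile(requests)
print(is_consistent(profiles["agent-A"]))  # True: same task everywhere
print(is_consistent(profiles["agent-B"]))  # False: divergent behavior across sites
```

On `airline1.example` alone, agent-A and agent-B produce the same row; only the aggregated view separates them.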

Daybreak and Glasswing don't operate at this layer. Neither do WAFs, identity providers, transaction monitoring platforms, or DLP tools. They were built for a different problem.


We built BotConduct for this layer

We operate receiver-side, observing the behavior of automated actors that reach our clients' web surfaces. Our observatory has characterized thousands of persistent automated actors across multiple verticals. We track behavioral evolution longitudinally — escalation, recurrence, cross-session learning, mandate compliance. We don't require cooperation from the agent. We don't depend on what the agent declares about itself. We observe what it does.

Daybreak helps defenders fix code before attackers exploit it. That is necessary work, and we are glad it exists. But code can be perfectly defended, and the enterprise still has no answer to the question: which AI agents are interacting with our public surface right now, who authorized them, are they acting within mandate, and how is their behavior evolving?

We provide that answer. Weekly. Cryptographically signed. Independent of any WAF, CDN, or identity provider you already use.


The stack, not one product

The next era of cyber defense will not be one product. It will be a stack. Code resilience plus identity verification plus transaction monitoring plus behavioral observation of the agents that act between them.

We're focused on the layer the others don't cover.

Request a Behavioral Risk Assessment

See what's reaching your surface →

Independent forensic report. Behavioral analysis of every automated actor on your infrastructure. No integration required.

Ale Mizrahi is the founder of BotConduct, a behavioral observatory that classifies AI agents and automated actors by what they do, not what they declare. BotConduct provides independent forensic reports on bot and agent conduct for enterprise web surfaces.