Unaccountable Agents
We share some thoughts on the Cyberhaven 2026 AI Adoption & Risk Report, viewed through the lens of AI Accountability.
Cooperating with AI
Usage is not cooperation. Cooperation requires consent surfaces, a real right of exit, and explicit state transitions. Intelligence should advance by making offers others can refuse.
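To make "explicit state transitions" and a "real right of exit" concrete, here is a minimal sketch. Everything in it (the `State` names, the `Cooperation` class) is a hypothetical illustration, not code from the post:

```python
from enum import Enum, auto

class State(Enum):
    PROPOSED = auto()   # the agent has made an offer
    ACCEPTED = auto()   # the counterparty explicitly consented
    ACTIVE = auto()     # work in progress
    EXITED = auto()     # the counterparty exercised the right of exit

class Cooperation:
    """Every transition is explicit, and exit is reachable from any
    non-terminal state -- the 'real right of exit'."""

    _ALLOWED = {
        State.PROPOSED: {State.ACCEPTED, State.EXITED},
        State.ACCEPTED: {State.ACTIVE, State.EXITED},
        State.ACTIVE:   {State.EXITED},
        State.EXITED:   set(),
    }

    def __init__(self) -> None:
        self.state = State.PROPOSED  # an offer that can be refused

    def transition(self, target: State) -> None:
        if target not in self._ALLOWED[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {target}")
        self.state = target
```

The point of the sketch: consent is a transition the counterparty performs, never a default, and no state traps them.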
AI Pilots Burn Cash
The State of Agents (Part 4/6) shifts the conversation from the technical (Architecture/Safety) to the financial (ROI/Business Value), directly addressing the "Pilot Purgatory" that most enterprises are currently experiencing.
The Agent Management Trap
"HR for bots" is a management trap. Adding agents increases entropy, not just throughput. To build systems that actually work, we need to move from "chat" to "state" and treat AI output as a claim to be settled, not an answer to be trusted. Stop scaling chaos and start building protocols.
Agents Need Boundaries
Trust (in AI agents) must be engineered, not implied. This is what we refer to as "Agent Physics".
The Signal in the Noise: Why the Openclaw Mess Matters
We spent the last few days staring at the car crash. And rightly so. Openclaw was a security nightmare, a credential sieve, and a live demo of what happens when you let code execute other code based on "vibes" and bad engineering practices.
The "AI Overseeing AI" Trap.
The State of Agents (Part 3/6) shifts the focus from what agents do (Skills/Agency) to how we govern them. We argue that the current trend of "AI Supervisors" doesn't make sense, and explain why "Physics for Agents" is the superior safety model.