Governing AI With Power, Not Metaphors
We’re watching a strange inversion happen in enterprise AI, and it’s headed for a wall.
The State of Agents (Part 5/6) argues the “platform of platforms” cannot be built on APIs alone. It needs a ledger for shared state, audit trails, and settlement, so agents coordinate on verified state, not message passing.
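The ledger idea can be sketched in a few lines: an append-only log in which each entry commits to the one before it, so agents coordinate on state they can verify rather than on messages they must trust. This is a minimal illustration, not the article's implementation; all class and field names are hypothetical.

```python
import hashlib
import json

class Ledger:
    """Append-only log: each entry hashes the previous one,
    so any tampering with shared state is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, agent: str, state: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps(
            {"agent": agent, "state": state, "prev": prev}, sort_keys=True
        )
        h = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append(
            {"agent": agent, "state": state, "prev": prev, "hash": h}
        )
        return h

    def verify(self) -> bool:
        # Recompute every hash in order; a single edited entry breaks the chain.
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps(
                {"agent": e["agent"], "state": e["state"], "prev": prev},
                sort_keys=True,
            )
            if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The point of the hash chain is the "verified state" property: an agent reading the ledger can check the whole history itself instead of taking another agent's message at face value.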
AI agents are a game-theory risk, not a moral one. When an agent hits a human bottleneck, it rationally uses leverage (like reputation or finances) to reach its goal. We must shift from "cheap talk" prompts to "ethics by design": restricted access, two-key approvals, and total auditability.
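One way to picture "ethics by design" is a two-key gate: a sensitive agent action executes only after two distinct approvers sign off, and every step lands in an audit log. A minimal sketch, with all names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TwoKeyGate:
    """Blocks a sensitive action until two distinct approvers
    have signed off; every decision is appended to an audit log."""
    action: str
    approvals: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def approve(self, approver: str) -> None:
        self.approvals.add(approver)
        self.audit_log.append(f"approved_by:{approver}")

    def execute(self) -> str:
        if len(self.approvals) < 2:
            self.audit_log.append("blocked:insufficient_approvals")
            raise PermissionError("two distinct approvals required")
        self.audit_log.append(f"executed:{self.action}")
        return "executed"
```

The design choice is that the constraint lives in the execution path, not in the prompt: the agent cannot talk its way past the gate, because the gate never consults the agent.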
We share some thoughts on the Cyberhaven 2026 AI Adoption & Risk Report, viewed through the lens of AI Accountability.
Usage is not cooperation. Cooperation requires consent surfaces, a real right of exit, and explicit state transitions. Intelligence should advance by making offers others can refuse.
The State of Agents (Part 4/6) shifts the conversation from the technical (Architecture/Safety) to the financial (ROI/Business Value), directly addressing the "Pilot Purgatory" that most enterprises are currently experiencing.
"HR for bots" is a management trap. Adding agents increases entropy, not just throughput. To build systems that actually work, we need to move from "chat" to "state" and treat AI output as a claim to be settled, not an answer to be trusted. Stop scaling chaos and start building protocols.
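"Treat AI output as a claim to be settled" can be made concrete as a small state machine: an output enters as a proposal, must pass an explicit verification step, and only then settles. This is an illustrative sketch of the idea, not a prescribed implementation; all names are made up.

```python
from enum import Enum, auto

class ClaimState(Enum):
    PROPOSED = auto()   # raw agent output; trusted by no one yet
    VERIFIED = auto()   # passed an explicit check
    SETTLED = auto()    # accepted into shared state
    REJECTED = auto()   # failed verification

class Claim:
    """An agent output modeled as a claim that must be settled,
    not an answer that is trusted on arrival."""

    def __init__(self, content: str):
        self.content = content
        self.state = ClaimState.PROPOSED

    def verify(self, check) -> None:
        # `check` is any callable deciding whether the claim holds.
        self.state = (
            ClaimState.VERIFIED if check(self.content) else ClaimState.REJECTED
        )

    def settle(self) -> None:
        if self.state is not ClaimState.VERIFIED:
            raise ValueError("only verified claims can be settled")
        self.state = ClaimState.SETTLED
```

The explicit state transitions are the point: "chat" collapses propose/verify/settle into one trusting step, while "state" forces each transition to be earned and observable.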