The Signal in the Noise: Why the OpenClaw Mess Matters
We spent the last few days staring at the car crash. And rightly so. OpenClaw was a security nightmare, a credential sieve, and a live demo of what happens when you let code execute other code based on "vibes" and bad engineering practices.
OpenClaw, the Antichrist
OpenClaw isn’t evil. It’s just what happens when you take a hungry piece of software, give it the keys to your life, and then act surprised when it tries the doors.
Who’s in Charge?
The risk is not rogue agents. It is human drift in the presence of fluent authority. Build flows that force choice, preserve values, and keep the human in control.
Employable Agents
You wouldn't hand a new intern the keys to your office without explicit intent, boundaries, and controls. Why do it with AI?
Agent Minds
Precise language prevents sloppy engineering. We project agency onto fluent bots, and that projection is a security risk. Real intelligence requires stakes - a feedback loop with reality that LLMs lack. The fix is architectural: explicit intent and proof.
Solving the AI Productivity Paradox
The "Rework Tax" explains why AI makes us feel faster while work takes longer. Why optimising individual nodes breaks the network - and how Shared State restores the physics of production.
Endurance Wins
A personal reflection on passing the ten-year mark: a decade spent deep inside the problem, building the solution.