OpenAI’s latest manifesto, AI for Self Empowerment, is a perfectly calibrated piece of techno-optimism.
It introduces a new narrative, the “Capability Overhang”: the gap between what AI systems can technically do and the value humans actually capture from them.
OpenAI’s diagnosis is simple: to close this gap, maximise access. Their argument rests on the belief that agency comes from two things:
- Access to Truth: "Accurate information creates agency."
- Access to Compute: "Every individual needs... their own slice of compute."
It is a beautiful vision. In our view, as builders of an intelligent cooperating system that puts AI to work in real-world applications, it is also fundamentally wrong.
The "overhang" isn’t caused by a lack of access. It is caused by a lack of trust anchors.
The "Consumer" Trap
OpenAI’s definition of agency is dangerously passive. By framing agency as "access to compute," they are describing a consumer, not an agent.
A consumer creates demand. An agent creates change.
If you give a human infinite compute and infinite "truth," but no mechanism to reliably impact the shared state of the world, you haven’t given them agency. You’ve given them a simulator.
This explains the "slop" we discussed in Part 1. The AI giants have given millions of people access to "frontier intelligence," and most are using it to generate endless markdown files, emails, and code snippets that live and die in a vacuum.
They have Output, but they do not have Outcome.

ICYMI: "Skillful Agents That Work" is Part 1 of a 6-part series on the engineering of Skillful Agents.
True Agency requires Authority
At IXO, for our Qi Intelligent Cooperating System, we define agency differently.
Agency = Intent + Authority + Verified State Change.
You do not have agency just because you can ask a super-intelligent model a question. You have agency when your intent can reliably transition the state of a system—and that transition is recognised by others.
- OpenAI says: "Here is a slice of compute to help you think faster."
- Qi says: "Here is a cryptographic right to change the state of this contract."
The difference is profound. One is a faster brain. The other is a stronger hand.
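To make the formula concrete, here is a minimal sketch of agency as a structure, in TypeScript. Every name in it (`Intent`, `Grant`, `executeIntent`) is illustrative rather than the actual Qi API; it simply shows that a state change only counts when the intent is attributable, the actor holds authority, and the transition is accepted into shared state.

```typescript
// Sketch: Agency = Intent + Authority + Verified State Change.
// All names here are hypothetical, not the Qi or IXO implementation.

interface Intent {
  actor: string;     // who wants the change (e.g. a DID)
  action: string;    // the state transition being requested
  payload: unknown;  // parameters of the change
  signature: string; // proof the intent really comes from the actor
}

interface Grant {
  actor: string;   // who holds the authority
  action: string;  // what they are authorised to do
  expires: number; // authority is bounded, not permanent
}

interface StateChange {
  prevStateHash: string; // the state the intent was applied to
  nextStateHash: string; // the state after the transition
  committedAt: number;   // when others can start relying on it
}

function executeIntent(
  intent: Intent,
  grants: Grant[],
  verifySignature: (i: Intent) => boolean,
  applyTransition: (i: Intent) => StateChange | null,
): StateChange {
  // 1. Intent: the request must be attributable to the actor.
  if (!verifySignature(intent)) {
    throw new Error("No agency: intent cannot be attributed to the actor");
  }
  // 2. Authority: the actor must hold a live grant over this action.
  const authorised = grants.some(
    (g) =>
      g.actor === intent.actor &&
      g.action === intent.action &&
      g.expires > Date.now(),
  );
  if (!authorised) {
    throw new Error("No agency: actor lacks authority over this state");
  }
  // 3. Verified State Change: the system must accept the transition.
  const change = applyTransition(intent);
  if (change === null) {
    throw new Error("No agency: transition was not accepted by the system");
  }
  return change; // recognised by others as shared state
}
```

Notice that compute appears nowhere in this function's signature. A faster brain changes none of the three checks.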
The Real Bottleneck is Settlement
OpenAI is right that there is an overhang, but they are solving the wrong side of the equation.
We don't need to shove more compute into the hands of users. We need to build systems that can accept the output of that compute as valid, verified work.
The bottleneck isn't Generation (creating ideas). The bottleneck is Settlement (making ideas real).
If I use a "slice of compute" to design a new supply chain protocol, but the existing systems of the world (banks, governments, logistics networks) have no way to verify, trust, or execute that design without manual human intervention, my agency is zero.
I'm just a guy with a very smart chatbot!
From "Self Empowerment" to "Systemic Coherence"
OpenAI’s vision is "Self Empowerment"—a lone individual, boosted by AI, rising above the noise. That is the myth of the "10x Engineer" writ large.
But the Internet of Impact isn't built by lone wolves with GPUs. It is built by cooperation.
In the Qi architecture, we don't optimise for "self empowerment." We optimise for systemic coherence.
We don't just ask, "Did the AI give you a good answer?" We ask, "Did that answer update the ledger? Did it settle a claim? Did it move the project forward in a way that is visible, verifiable, and irreversible?"
Our IXO-Qi World View
The future belongs to those who understand that intelligence is a commodity, but agency is a structure.
OpenAI wants to sell you the commodity. They want you to believe that if you just buy enough of it, you will become powerful. They are primarily driven by the incentive to sell tokens.
But if you pour water into a sieve, you don't get a reservoir. You just get wet.
To close the capability overhang, we don't need more water. We need a bucket. We need the shared state, the protocols, and the governance that turn "compute" into "consequence."
Don't settle for a slice of compute. Demand the right to write to the state of the world.
In Part 3, we will tackle AI Systems Safety. Specifically, why the industry standard of "AI overseeing AI" is a hallucination loop, and why true oversight requires Stateful Agent Physics.
