There is a sharp critique circulating right now—specifically from 12 Grams of Carbon—arguing that the current ecosystem of AI agent "skills" is mostly junk.
They are right.
Most agent skills today are markdown files full of good intentions. They look reusable. They feel productive. But they let you down the moment they hit reality.
Whilst the diagnosis is correct, the industry is missing the root cause. The problem isn’t a lack of quality control. Nor is it that our prompts aren’t clever enough.
The problem is that we have confused instructions with capability.
The Vacuum Problem
Currently, most agent systems live in a vacuum. They digest a prompt, generate text, and call it a "skill." But nothing is anchored. Nothing is observable. And crucially, nothing is accountable.
A skill that acts in a vacuum is just commentary.
If I tell an agent to "negotiate a contract," and it generates a beautiful email, but the actual contract state in the database hasn't changed, what actually happened? Nothing.
We have built a generation of agents that are excellent at talking about work, but structurally incapable of doing the work and settling the result.
The Missing Move: Shared State
At IXO, we operate on a fundamental premise that separates Qi from standard agent frameworks:
A Qi Skill is not an agent skill unless it changes the state of the system.
If an agent (or a human) claims to have performed a task, but the shared state of the network remains unchanged, the skill didn't fire. It was just noise.
This is the architectural divergence we took with Qi. We did not set out to build a prompt marketplace. Qi is an intelligent cooperating system over shared state. This changes the definition of what a "skill" actually is.
In our view, a Qi Skill is not "something an agent knows how to do."
A Qi Skill is the ability of a collective—human or machine—to reliably transition shared state toward an intended outcome.
From "Prompt Folklore" to Physics
Without shared state, agent skills degrade into "prompt folklore"—vague rituals of text that work sometimes, maybe.
When you enforce shared state, the physics of the system change. You stop relying on the agent's "vibes" and start relying on the protocol's rules:
- A claim must move from Draft to Verified.
- A workflow must move from Pending to Settled.
- An outcome must move from Asserted to Evidenced.
This constraint eliminates the slop.
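To make the constraint concrete, here is a minimal sketch of protocol-enforced transitions. The names (ALLOWED_TRANSITIONS, advance) are illustrative assumptions, not Qi's actual API; the point is that an action only counts if it passes the transition check.

```python
# Minimal sketch of protocol-enforced state transitions.
# ALLOWED_TRANSITIONS and advance() are illustrative names,
# not the Qi protocol's actual API.

ALLOWED_TRANSITIONS = {
    "claim":    [("Draft", "Verified")],
    "workflow": [("Pending", "Settled")],
    "outcome":  [("Asserted", "Evidenced")],
}

def advance(kind: str, current: str, target: str) -> str:
    """Advance a record only along a transition the protocol permits."""
    if (current, target) not in ALLOWED_TRANSITIONS.get(kind, []):
        raise ValueError(f"{kind}: {current} -> {target} is not a permitted transition")
    return target

# An agent that merely *talks* about verifying a claim changes nothing;
# only a call that passes this check moves shared state.
new_state = advance("claim", "Draft", "Verified")   # ok
# advance("claim", "Draft", "Settled")               # raises: not permitted
```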
Qi Skills cannot sprawl because every skill represents a contract. It must declare what state it reads, what state it writes, and under what authority it operates. It leaves an audit trail not because we asked it to, but because the system cannot advance without it.
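As a rough illustration of that contract shape, the sketch below declares reads, writes, and authority up front, refuses any undeclared write, and appends an audit entry as a side effect of advancing state. The field names, the execute helper, the audit_log structure, and the example identifier are assumptions made for this example, not IXO's schema.

```python
# Hypothetical shape of a skill contract; names are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SkillContract:
    name: str
    reads: list[str]      # state the skill is allowed to read
    writes: list[str]     # state the skill is allowed to write
    authority: str        # under whose authority it acts

audit_log: list[dict] = []

def execute(contract: SkillContract, state: dict, updates: dict) -> dict:
    """Apply updates only to declared writes, and record an audit entry."""
    undeclared = set(updates) - set(contract.writes)
    if undeclared:
        raise PermissionError(f"{contract.name} may not write: {undeclared}")
    new_state = {**state, **updates}
    audit_log.append({
        "skill": contract.name,
        "authority": contract.authority,
        "wrote": updates,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return new_state

negotiate = SkillContract(
    name="negotiate_contract",
    reads=["contract.terms"],
    writes=["contract.status"],
    authority="did:example:org-123",   # illustrative identifier
)
state = execute(negotiate, {"contract.status": "Draft"}, {"contract.status": "Settled"})
# The audit trail exists because execute() cannot advance state without appending to it.
```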
The Future is Coherence
Most agent ecosystems are failing because they optimised for the ease of creation—making it fast to write a prompt. We optimise for coherence.
That choice is harder. It is slower to build. But it is the only reason cooperation scales.
We are moving past experimenting with agents as "chatbots with tools." We are now building operating systems for the real world where humans and machines share state, share intent, and share responsibility.
If your agent can’t point to the specific state change it caused on the ledger of reality, it doesn't have a skill. It just has an opinion.

Read: The 2026 trends analysis from Cloudera highlights a critical reality: to move AI into production, we must stop treating data as passive storage and start treating it as active intelligence.
Three questions to stress-test your stack
As we move through this series, we invite you to look at your current AI deployments with a critical eye:
- Where do your agents currently operate without any observable state transition?
- What would break if you forced every agent action to write to a shared, inspectable log?
- Are you building tools that merely feel productive, or systems that actually cohere?
In Part 2, we will look at Agency. Specifically, why OpenAI’s definition of agency as "Access to Compute" is a trap, and why true agency requires Authority.
