Most narratives about carbon emission reductions start with what people should do: drive less, consume less, electrify everything. These are all important actions. But they mostly treat emissions as a behavioural problem or a hardware problem.
In reality, a massive share of our emissions is a byproduct of how systems are architected.
A thought experiment
In his seminal 1973 book Small is Beautiful: Economics As If People Mattered, E.F. Schumacher presented the following scenario:
A British lorry travels from London loaded with fancy biscuits bound for shops in Scotland, while another lorry simultaneously leaves Scotland laden with shortbread for customers in London. This criss‑crossing of nearly identical goods shows how the systems we have architected for an economy obsessed with growth and trade volumes burn fuel, clog roads, and waste human effort to achieve something people could accomplish more simply by sharing knowledge instead. If Londoners and Scots exchanged recipes rather than truckloads of biscuits, each place could satisfy its tastes locally, preserving energy and community skills instead of treating senseless transport as “progress.”
Invisible systems generate carbon
Now let's think about the invisible systems that underpin digital and physical economies. This includes the architecture for how information is encoded, processed, and moved. If you change the architecture, the emissions profile of an entire industry can change. If you ignore the architecture, efficiency gains at the surface are just rounding errors.
I recently joined a new W3C working group that's tackling this challenge, and there I came across the Carbon Blueprint 10-Year Projection for the Knowledge3D stack. It projects that changing the computational architecture of visual and spatial computing could avoid gigatons of emissions over a decade.
You can verify the math behind the "gigatons" claim and see the code (Apache-2.0 licensed) that implements the stack.
Any ten-year projection involving "gigatons" deserves a healthy dose of skepticism, as the numbers are always going to be speculative. But the core logic is worth taking seriously.
Carbon is a byproduct of information
When a GPU renders a frame, or a robot navigates and processes a scene, it’s reconstructing information. Today, we do this using brute force. We move massive tensors and redundant pixel arrays across CPUs, GPUs, and storage. It’s heavy. It’s hot. And it’s incredibly inefficient.
The Knowledge3D proposal suggests a different path: represent visual information procedurally. Instead of storing and sending the "pixels" of a scene, you store the instructions describing it. Somewhat like sending a biscuit recipe instead of truckloads of biscuits.
This changes the energy calculus entirely.
- Instead of sending millions of pixels, you send a short programme.
- Instead of recomputing every pixel every frame, you update only the logic that changes.
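To make the contrast concrete, here is a minimal sketch of the arithmetic. The scene format below is invented for illustration (it is not the actual Knowledge3D encoding); the point is only the rough ratio between shipping pixels and shipping instructions:

```python
import json

# Brute-force representation: every pixel of a 1920x1080 frame,
# 3 bytes of RGB each, re-sent every frame.
width, height = 1920, 1080
raster_bytes = width * height * 3  # roughly 6 MB per frame

# Procedural representation (hypothetical format): instructions for
# reconstructing the scene. Between frames, only the parameters that
# change need to be re-sent.
scene = {
    "sky": {"gradient": ["#87ceeb", "#ffffff"]},
    "circles": [
        {"x": 320, "y": 400, "r": 50, "colour": "#ff0000"},
        {"x": 960, "y": 540, "r": 120, "colour": "#00ff00"},
    ],
}
procedural_bytes = len(json.dumps(scene).encode("utf-8"))

print(f"raster:     {raster_bytes:,} bytes")
print(f"procedural: {procedural_bytes:,} bytes")
print(f"ratio:      ~{raster_bytes // procedural_bytes:,}x")
```

Real scenes are far richer than a sky and two circles, so the real ratio will be smaller, but the asymmetry between describing a scene and enumerating its pixels is the structural point.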
Whether the actual reduction is 10x or 100x isn't the point. The structural point is that when you redesign how information is represented, energy consumption collapses as a side effect.
We’ve seen this before in computing. Compression algorithms, vector graphics, even early gaming—architecture always beats optimisation.
The Procedural Layer for Trust
At first glance, Knowledge3D (spatial computing) and IXO (verifiable real-world state) live in different worlds. But they are solving the exact same bottleneck: inefficient representations of reality.
For instance, in climate finance we don’t actually lack capital. We lack trusted information about outcomes that would allow capital to be deployed responsibly.
Right now, if you want to move a billion dollars into carbon mitigation, the "information architecture" is a mess of fragmented databases, manual PDF reports, and expensive third-party audits that often require international consultants to fly into the project site to conduct physical inspections. Because the data is "heavy" and unreliable, the system compensates with bureaucracy. We waste energy and capital not because we want to, but because our measurement systems are too "low-resolution" to trust.
This is why we built IXO. We’re essentially building the procedural layer for trust.
Instead of treating climate impact as a static report, it becomes a set of executable, verifiable claims. Digital twins for real-world systems. Agentic oracles that evaluate evidence in real-time.
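As a purely illustrative sketch (the field names and rule logic below are invented, not IXO's actual schema), an executable claim can be thought of as a small data structure that a rule function evaluates against evidence, rather than a report a human reads:

```python
from dataclasses import dataclass

# Hypothetical structures for illustration only; not IXO's actual schema.
@dataclass
class Evidence:
    source: str       # e.g. a sensor or oracle identifier
    metric: str       # what was measured
    value: float      # the measured quantity

@dataclass
class Claim:
    claim_id: str
    metric: str
    threshold: float  # the outcome asserted, e.g. tonnes CO2 avoided

def evaluate(claim: Claim, evidence: list[Evidence]) -> bool:
    """A trivial 'rule': the claim verifies if every independent piece of
    evidence for the claimed metric meets or exceeds the threshold."""
    readings = [e.value for e in evidence if e.metric == claim.metric]
    return bool(readings) and min(readings) >= claim.threshold

claim = Claim("claim-001", "tCO2e_avoided", 100.0)
evidence = [
    Evidence("sensor:a", "tCO2e_avoided", 104.2),
    Evidence("oracle:b", "tCO2e_avoided", 101.7),
]
print(evaluate(claim, evidence))  # True: both readings clear the threshold
```

The design point is that verification becomes a re-runnable computation over evidence, so any party holding the same claim and the same evidence can reconstruct the same verdict without a site visit.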
The parallel isn't accidental. Just as procedural graphics move from heavy data to compact instructions, verifiable impact moves from "brute force auditing" to "reconstruction through rules."
Infrastructure for Intelligent Cooperation
There’s a thread in the Carbon Blueprint that I think most people will miss: AI and robotics only become "green" if their computational substrate becomes radically more efficient.
This aligns with what we see in the field. AI is becoming the coordination layer for the physical world—managing supply chains, energy grids, and agriculture. But coordination only works when agents (human or machine) can operate on a shared, trustworthy state.
Without that shared state, you get more automation but the same old coordination failures. You just fail faster.
This is why things like verifiable credentials and decentralised agent execution aren't just "blockchain features." They are the foundations for cooperation between autonomous systems. And in the end, cooperation is what determines whether a technology reduces emissions or just triggers a rebound effect where we use the "saved" energy to do more of the same.
Beyond Physics
The Carbon Blueprint estimates potential reductions of multiple gigatons of CO2. I treat those numbers as illustrative, not predictive. Technology transitions are messy; they get stuck in standards battles and legacy incentives.
But the direction is right.
A lot of climate discourse assumes the breakthroughs must come from physics, such as new battery chemistries or fusion reactors. In practice, many breakthroughs could come from information systems that allow energy, capital, and intelligence to move with less friction. This is the far more intelligent path we are beginning to travel, one that could become a super-highway if, when super-intelligence arrives, we use it intelligently.
The Knowledge3D blueprint is an attempt to rethink the substrate of spatial information. IXO is an attempt to rethink the substrate of trusted information about the state of real-world systems.
Both lead to the same conclusion:
The bottleneck to reducing carbon emissions is no longer only about physics. It’s also about how we coordinate human activity with more energy-efficient information protocols.