Unaccountable Agents

We share some thoughts on the Cyberhaven 2026 AI Adoption & Risk Report, viewed through the lens of AI Accountability.

By IXO World

The Cyberhaven 2026 AI Adoption & Risk Report confirms that AI is quickly shifting from a toy people play with to infrastructure they depend on.

The problem is that most of this infrastructure is being built on sand, using legacy controls that were never designed for non-human actors.

The Cyberhaven researchers looked at billions of data movements across 220+ companies. Even if you subtract the vendor bias, the signals are clear. Here's what it means for the Qi Intelligent Cooperating System we are building at IXO.

In the Shadows

The report highlights an "AI adoption gap." This isn't just a "fast vs. slow" thing; it’s a fundamental split in how organisations operate. Frontier enterprises are using 300+ GenAI tools. Cautious ones use fewer than 15.

That’s not a gap in "innovation." It’s a gap in the operating system of the firm.

The data movements are where it gets ugly. About 40% of AI interactions now involve sensitive data. The average employee inputs something sensitive once every three days.

But here is the real failure mode: 32% to 60% of this usage is happening via personal accounts.

Shadow AI isn't a compliance problem. It’s a product problem. If the "safe" path is 10x slower than a personal Claude or DeepSeek account, people will route around you. Not because they’re malicious, but because they have a job to do and you’re in the way.

Autonomous Agents and Open Weights

We have moved past the chatbot era into building agentic systems and coding assistants.

  • Agents: 23% of enterprises are already using platforms like n8n or Copilot Studio to build agentic workflow automations.
  • Open Weights: Chinese models like Qwen and DeepSeek now account for a massive chunk of usage.

This changes the game. You can’t "block" an agent that lives inside an IDE or a background automation the same way you block a website. Governance that tries to fence in an ecosystem designed to escape fences is doomed.

From "AI Use" to "Accountable Cooperation"

At IXO, we start from a different premise: AI is not an app. It’s a new class of actor.

Most governance assumes a human is sitting at a keyboard. But when an agent performs a task, who is the "actor"? In our Qi Intelligent Cooperating System, we’ve moved away from the idea of "agents with tools and skills" to focus on agents and people cooperating over shared state.

When work happens through copy-pasting data into a chat box, you destroy provenance. You lose the "why" and the "who." When work happens inside a Flow over shared state (a sketch follows this list):

  • Every change is attributable to a specific actor (human or AI).
  • Every artefact has its context attached.
  • The "decision trace" is a feature of the system, not a manual audit log you have to reconstruct later.
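
To make that concrete, here is a minimal sketch of what a change event over shared state could look like. This is our illustration, not IXO's actual schema: the names FlowEvent, ActorRef, and decisionTrace are hypothetical.

```typescript
// Hypothetical shape of a change event in a shared-state Flow.
// FlowEvent and ActorRef are illustrative names, not IXO's actual schema.

type ActorRef =
  | { kind: "human"; id: string }                  // an employee identity
  | { kind: "agent"; id: string; model: string };  // a deployed agent

interface FlowEvent {
  flowId: string;         // which workflow the change belongs to
  actor: ActorRef;        // who made the change: human or AI
  artefactId: string;     // what was changed
  intent: string;         // the "why": the task or instruction context
  patch: unknown;         // the change itself, e.g. a JSON patch
  timestamp: string;      // ISO 8601
  parentEventId?: string; // links events into a chain of causes
}

// Because every event carries actor, intent, and lineage, the decision
// trace is a query over the log, not a reconstruction after the fact.
function decisionTrace(log: FlowEvent[], artefactId: string): FlowEvent[] {
  return log
    .filter((e) => e.artefactId === artefactId)
    .sort((a, b) => a.timestamp.localeCompare(b.timestamp));
}
```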

We don't need "prompt policies." We need authorisation boundaries.

An agent shouldn't have "ambient access" to your company data; it should have scoped permissions for a specific workflow, just like a junior employee.
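
As a sketch of what that boundary could look like in practice (again with illustrative names, assuming a capability-style grant rather than any real IXO API):

```typescript
// Hypothetical scoped grant: permissions bound to one workflow, a few
// actions, named data scopes, and an expiry — a contractor badge
// rather than a master key. Illustrative only.

interface AgentGrant {
  agentId: string;
  flowId: string;                          // valid in one workflow only
  actions: ReadonlyArray<"read" | "propose" | "commit">;
  dataScopes: string[];                    // e.g. ["crm.accounts.eu"]
  expiresAt: string;                       // grants are time-boxed
}

function authorise(
  grant: AgentGrant,
  req: {
    agentId: string;
    flowId: string;
    action: "read" | "propose" | "commit";
    scope: string;
  }
): boolean {
  return (
    grant.agentId === req.agentId &&
    grant.flowId === req.flowId &&         // no cross-workflow reuse
    grant.actions.includes(req.action) &&
    grant.dataScopes.includes(req.scope) &&
    Date.parse(grant.expiresAt) > Date.now() // expired grants deny by default
  );
}
```

The point is the default: deny unless a specific grant matches, rather than ambient access plus a blocklist.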

Govern the Work, Not the Tool

If you’re a CEO looking at the report’s recommendations, theatrical "AI Council" meetings aren't going to save you. You need to make the safe path the fast path.

  1. Stop measuring tool adoption. It’s a vanity metric. Start measuring workflow penetration. Where is sensitive context actually moving?
  2. Define the "Actor." If an agent touches data, it needs an identity, a scope, and a log. If you can’t answer "which agent changed this record?" you don't have governance. You have vibes. (See the sketch after this list.)
  3. Build for the Decision Trace. Automation is useless if it isn't repeatable or evaluable. Invert your strategy: make actions attributable first, then scale the automation.
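
On point 2: with an event log shaped like the hypothetical FlowEvent above, "which agent changed this record?" becomes a one-line query instead of a forensic exercise.

```typescript
// Answering "which agent changed this record?" against the hypothetical
// FlowEvent log sketched earlier. A lookup, not an investigation.
function lastAgentChange(
  log: FlowEvent[],
  artefactId: string
): FlowEvent | undefined {
  return log
    .filter((e) => e.artefactId === artefactId && e.actor.kind === "agent")
    .sort((a, b) => b.timestamp.localeCompare(a.timestamp))[0];
}
```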

The real risk isn't that AI is dangerous. It’s that we are trying to run a high-speed, agentic economy on a 20th-century system of record.

The next stage of the enterprise isn't "AI-powered." It’s accountable.

Questions for your exec team this week:

  1. If your top 5 workflows were fully automated tomorrow, could you audit why a specific decision was made six months from now?
  2. Is our "safe" AI path actually faster for a developer than using a personal account?
  3. Are we governing tools (which change weekly) or work (which doesn't)?
