Over the past year, I've watched a whole wave of AI workflow tools arrive. Relay.app is one of the better ones: polished, thoughtful, and genuinely useful if you want to stitch together Gmail, Notion, HubSpot, Slack, and a few LLM calls.
Tools like this are important. They pull AI out of the lab and into everyday work.
But they also highlight a deeper question my team has been grappling with in our own work:
At what point does “AI automation” stop being a convenience feature and become part of your critical infrastructure?
Because once AI systems start influencing money, rights, climate, or health, the bar changes completely.
That's exactly why we have been building Qi Flows and Qi Agents: infrastructure for voluntary, intelligent cooperation between teams of people and AI agents.

What Relay (and similar tools) are really good at
Let’s give Relay its due.
It’s designed to help teams:
- Automate workflows across SaaS tools
- Add AI steps (summarise, extract, classify, draft)
- Keep humans in the loop with approvals and checkpoints
- Move faster without needing a dev team
If you want to:
- process leads and update your CRM,
- clean up your sales pipeline,
- summarise support tickets,
- draft follow-up emails,
- route internal requests to the right person,
...then a tool like Relay may be a sensible choice.
In those worlds, the “source of truth” lives in your SaaS stack, and the worst case is usually lost time, some confusion, or a clumsy email.
This is not the world Qi is designed for.
Why Qi Flows and Agents are different
Qi Flows and Agents sit in a very different category.
This is not “Zapier + AI”. It’s not another agent runner.
Qi Flows are for shared, high-stakes systems where:
- multiple organisations (or countries) must trust the same logic,
- data and decisions need to be auditable over years,
- outcomes control real assets and rights,
- and AI is no longer just a helper, but part of the decision surface.
Think of use-cases like:
- clean cooking programmes where stove usage turns into carbon credits and ITMOs
- pathogen genomics platforms that flag outbreak risks across borders
- youth impact platforms where activities unlock opportunities and funding
- impact-linked finance, where claims and evaluations trigger payments
- cooperative or DAO governance, where oracles and agents inform collective decisions
In each of these contexts, we must know:
- Who is allowed to act?
- Under what rules?
- What did they actually do?
- How was that behaviour evaluated?
- Can we prove that no one stepped outside the rules?
- Can we safely attach payments, rights, and governance to these outcomes?
Our Qi Flow Engine is built to answer these questions.
The core difference: automation vs accountable systems
There are three big differences I want to propose:
First: the source of truth
In Relay-style tools, the state of a process lives in your SaaS apps.
In Qi, the source of truth is the IXO stack:
- Claims and authorisations that express intents, evidence, evaluations, disputes
- Digital twins for entities, projects, devices, portfolios, and real-world systems
- UDIDs (Universal Decision & Impact Determinations) that record how actions were judged, and what outcomes they produced
- Smart Accounts that hold assets and programmable rights
Qi Flows orchestrate humans and AI agents over this verifiable state.
External tools are just edges. They don’t define reality; they just interface with it.
Second: guarantees, not just convenience
A Relay workflow can be well-designed and still be opaque:
- You don’t have formal guarantees that it won’t approve a bad case.
- You don’t have an audit trail with cryptographic proofs.
- You don't have executable contracts that determine what is allowed to happen, and who is authorised to make it happen.
Qi is different.
We use:
- User-Controlled Authorization Networks (UCANs) to define who may act on what, via object capabilities
- Claims to record what actually happened
- Universal Decision & Impact Determinations (UDIDs) to verify the results, with evidence for how each was evaluated
- Rubrics written in Lean (a proof assistant) to guardrail evaluators within governed rules
That means you can say things like:
- “It is not allowed for this oracle to approve a claim if the score is below 0.8.”
- “It is not allowed for this evaluation to pay more than the maximum agreed schedule.”
- "It is not allowed for a state-update patch to alter fields outside this whitelist."
And these are not just policy or documentation; they are machine-checked proofs.
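As a flavour of what "machine-checked" can mean, here is a minimal Lean sketch of the 0.8 rule. All names here (`Claim`, `Approvable`, `approve`) are illustrative, not the actual Qi rubric definitions:

```lean
-- Hypothetical sketch: how a rubric rule becomes a type-checked obligation.
structure Claim where
  score : Float

-- The rule "no approval below 0.8", stated as a proposition over claims.
def Approvable (c : Claim) : Prop := c.score ≥ 0.8

-- `approve` demands a proof `h` that the claim satisfies the rule;
-- without it, the code simply does not compile.
def approve (c : Claim) (h : Approvable c) : Bool :=
  true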
Third: economics and governance
Relay can call Stripe or Xero, but it’s not an economic protocol.
Qi is wired directly into:
- escrows and bonds
- tokenised outcomes and RWAs
- on-chain stablecoins
- DAO / cooperative governance
- staking, rewards, and slashing for Agent services and Oracles
Qi Flows can:
- originate claims,
- route them to agentic oracles,
- evaluate them under a verified rubric,
- mint or update assets,
- release payments from escrow,
- record all of that in a way regulators, auditors, and communities can inspect.
This is not “send an email when something happens”.
It’s “run part of a national climate programme, or an Article 6.2 pipeline, or an outbreak early warning system, safely.”
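To make the pipeline above concrete, here is a minimal sketch of the claim → oracle evaluation → escrow release loop. Every name in it (`Claim`, `Escrow`, `evaluate`, `settle`, `THRESHOLD`) is hypothetical, standing in for the real Qi and IXO machinery:

```python
# Minimal, illustrative sketch -- not the actual Qi or IXO APIs.
from dataclasses import dataclass

THRESHOLD = 0.8  # example minimum rubric score for approval

@dataclass
class Claim:
    claim_id: str
    evidence: dict

@dataclass
class Escrow:
    balance: float

    def release(self, amount: float) -> None:
        # Guardrail: a payment can never exceed what the escrow holds.
        assert amount <= self.balance, "over-release blocked"
        self.balance -= amount

def evaluate(claim: Claim) -> float:
    """Stand-in for an agentic oracle scoring the claim under a rubric."""
    return 0.9 if claim.evidence.get("verified") else 0.4

def settle(claim: Claim, escrow: Escrow, payout: float) -> dict:
    """Evaluate a claim and, if approved, release payment from escrow.

    Returns a determination record tying the decision to its score.
    """
    score = evaluate(claim)
    approved = score >= THRESHOLD
    if approved:
        escrow.release(payout)
    return {"claim": claim.claim_id, "score": score, "approved": approved}

escrow = Escrow(balance=100.0)
record = settle(Claim("c-1", {"verified": True}), escrow, payout=25.0)
rejected = settle(Claim("c-2", {"verified": False}), escrow, payout=25.0)
```

The point of the sketch is the shape of the loop: the evaluation gate sits between the claim and the money, and the determination record survives as the auditable link between them.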
So when should you use Qi rather than a Relay-style tool?
If you’re asking:
“What’s the easiest way to automate part of my team’s internal workflow?”
You probably don’t need Qi.
Use Relay or something similar. You’ll move faster and that’s perfectly fine.
If your reality is:
We are building a shared system where AI agents will influence real money, rights, climate, or health outcomes.
People will challenge and rely on these decisions.
We might have regulators in the loop.
How do we make sure this remains trustworthy over a decade?
These questions are Qi territory.
Concretely, use Qi Flows and Agents when:
- multiple organisations must trust the same pipeline and rules,
- claims and evaluations drive payments or issuance of credits,
- you need a full audit trail for how decisions were made,
- AI is not just a helper but part of the decision surface,
- you need sovereignty over data and deployment,
- you want governance, not just configuration.
In many serious organisations, you may end up using both:
- a Relay-style tool for day-to-day internal automation, interfacing with your existing SaaS systems,
- Qi Flows and Agents for the "hard core" of your impact, finance, and governance logic, ideally moving these into Programmable Organisational Domains (PODs)
One is for convenience.
The other is for accountability.
How Qi Flows feel in practice
From a user’s perspective, Qi Flows become part of the conversation, as you work with your Qi Companion agent to:
- Set up a trusted cooperation environment (a POD) for your programme or ecosystem.
- Model the digital twins in your system (projects, devices, people, portfolios), using blueprints.
- Define the claims you care about (usage, outcomes, verifications), drawing on a growing library of Claim Templates.
- Pick or design rubrics from the protocols that specify how claims are evaluated.
- Compose flows where humans and AI work together to collect evidence, run evaluations, issue UDIDs, and trigger actions.
The difference is an intelligent, cooperating system running on trust infrastructure: chains of cryptographic and logical guarantees that make it safe to rely on for your mission-critical workflows.
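As a rough mental model of the "compose flows" step (not the Qi Flow Engine's actual interface; the stage names and state shape are invented), stages can be thought of as functions over a shared state that a flow chains together:

```python
# Rough mental model only -- hypothetical names, not the Qi Flow Engine API.
from typing import Callable

Step = Callable[[dict], dict]

def compose(*steps: Step) -> Step:
    """Chain stages into one flow; each stage reads and extends shared state."""
    def flow(state: dict) -> dict:
        for step in steps:
            state = step(state)
        return state
    return flow

# Stages standing in for humans and AI agents working together:
def collect_evidence(state: dict) -> dict:
    return {**state, "evidence": ["meter-reading", "photo"]}

def run_evaluation(state: dict) -> dict:
    return {**state, "score": 0.92}

def issue_determination(state: dict) -> dict:
    return {**state, "approved": state["score"] >= 0.8}

flow = compose(collect_evidence, run_evaluation, issue_determination)
outcome = flow({"claim": "c-7"})
```

In the real system, each stage would also leave a verifiable record behind; the composition idea is the same.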
Where we’re heading
We’re entering a phase where “AI workflow automation” will become normal.
The more interesting question is: which of those workflows are just convenience, and which are mission-critical operating procedures that must be governed, controlled, audited, and evaluated, so that your intents are reliably turned into verified outcomes?
If you’re working on systems where AI and automation touch real-world accountability, I’d love to be in conversation.
