From Taskmaster to Thinking Partner

By Dr Shaun Conway

Working together with intent

We’ve moved past the "wow" phase of AI. We are now in the integration phase, and the data is starting to tell a fascinating story about how work is actually changing.

A recent large-scale survey of product managers, engineers, designers, and founders revealed a baseline truth we all suspected: AI is working. More than half of respondents say AI tools exceed their expectations, saving them at least half a day per week.

But if you look closer at the data, a divergence appears. Most individual contributors are using AI to speed up output—writing code, drafting emails, cleaning up docs. Founders and leaders, however, are doing something qualitatively different. They are using AI as a thinking partner for strategy, ideation, and decision support.

This specific pattern validates everything we are building with Qi. It proves that the real leverage isn’t in faster typing; it’s in better reasoning.

Here is how these market signals are shaping our approach to building a flow engine for human–AI cooperation.

1. Automation is Table Stakes; Sense-Making is the Goal

The survey highlights that AI helps most with downstream, output-oriented tasks. Upstream thinking—research, strategy, and roadmap definition—is still lagging.

For us, this sets a clear bar. If Qi only accelerates execution, we are merely delivering incremental value. To participate in the real productivity revolution, we have to move upstream.

  • Basic automation (drafts, transformations) must be standard—the "boring" foundation.
  • Strategic input loops must be the priority.

The goal isn't just a tool that writes for you; it’s a partner that helps you determine what matters to build next. If we don't enhance collective sense-making, we are just helping teams run faster in the wrong direction.

2. The Founder’s Edge: Augmenting Judgment

Why do founders report getting higher value from AI than many others? As a founder, I believe this is because we are more inclined to treat AI as a co-pilot for judgment, not just a generator of text.

This aligns directly with Qi’s core thesis regarding Shared State.

To move from "doing" to "thinking," AI needs context. It cannot operate only on the latest prompt you gave it; it needs to operate over a shared state of intent, insights, and feedback signals (a rough sketch follows the list below). In Qi, AI must:

  • Surface insights from patterns of past decisions, not just raw data.
  • Project the downstream effects of current choices.
  • Reduce noise without dampening the necessary diversity of thought.
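
Here is a rough sketch of what such a shared state might look like. Every type and field name below is invented for illustration, not taken from Qi's actual schema:

```typescript
// Hypothetical sketch: illustrative types, not Qi's real schema.
// The point is that the AI reasons over durable shared context,
// not just the last message in a chat window.

interface Intent {
  id: string;
  statement: string;      // e.g. "Reduce onboarding drop-off this quarter"
}

interface Decision {
  id: string;
  choice: string;
  rationale: string;
  intentId: string;       // which intent this decision served
}

interface FeedbackSignal {
  id: string;
  source: "user" | "market" | "team";
  observation: string;
}

interface Insight {
  id: string;
  summary: string;
  derivedFrom: string[];  // ids of the decisions and signals it rests on
  confidence: number;     // 0..1, revised as new feedback arrives
}

// The shared state of intent, insights, and feedback the AI operates over.
interface SharedState {
  intents: Intent[];
  decisions: Decision[];
  signals: FeedbackSignal[];
  insights: Insight[];
}
```

Because insights carry explicit links back to decisions and signals, the AI can surface patterns and project downstream effects from structured history rather than from raw text alone.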

3. Specialization vs. Fragmentation

Engineers in the survey made it clear: they prefer specialised tooling over generalist chat interfaces. They want help with the specific tedious work of tests, docs, and code reviews.

This creates a design tension for Qi. To be effective, the platform must adapt to the user's role—we cannot treat a Product Manager and a DevOps Engineer the same way. We need customisable cohorts of agents, such as an "Engineering Assistant" versus a "Vision Synthesiser."

However, this brings a risk. If we build separate tools for separate roles, we destroy alignment.

The solution is Semantic Role Metadata within the shared state. A "done" task means something different to an engineer than it does to a researcher, yet both definitions must live on the same "common ground." Qi must offer role-specific behaviours on the front end, anchored by a unified truth on the back end.
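
To make this concrete, here is a minimal sketch of the idea, assuming one shared task record with role-specific "done" criteria layered on top. The types and criteria are hypothetical illustrations, not Qi's actual metadata model:

```typescript
// Hypothetical illustration of semantic role metadata.
// One shared task record (the unified truth) with per-role views on top.

type Role = "engineer" | "researcher" | "productManager";

interface SharedTask {
  id: string;
  title: string;
  // Back-end truth: raw facts every role agrees on.
  facts: { testsPassing: boolean; findingsWritten: boolean; shipped: boolean };
}

// Role metadata maps the same shared facts to role-specific meaning.
const doneCriteria: Record<Role, (t: SharedTask) => boolean> = {
  engineer:       (t) => t.facts.testsPassing && t.facts.shipped,
  researcher:     (t) => t.facts.findingsWritten,
  productManager: (t) => t.facts.shipped && t.facts.findingsWritten,
};

function isDoneFor(task: SharedTask, role: Role): boolean {
  // Same common ground, different front-end interpretation.
  return doneCriteria[role](task);
}
```

The design choice that matters here is that every role reads the same underlying facts; only the interpretation differs, so alignment is never fractured.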

4. Anchoring the "Fuzzy" Work

The biggest gaps identified in the market are in the "fuzzy" areas: research, context gathering, and early prototyping.

This is where human grounding is non-negotiable. For AI to succeed here, the shared state must capture more than just tickets and to-do lists. It needs to capture:

  • Assumptions and their evidentiary status.
  • Hypotheses linked to expected outcomes.
  • Strategic signals from users and markets.

When you structure these artefacts, AI stops being a hallucination risk and starts being a strategic asset.
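
As a sketch, structuring those artefacts could look something like this; the types and fields are invented for illustration rather than drawn from Qi itself:

```typescript
// Hypothetical artefact types for "fuzzy" upstream work: illustrative only.

type EvidenceStatus = "untested" | "supported" | "contradicted";

interface Assumption {
  id: string;
  claim: string;            // e.g. "Users churn because setup takes too long"
  status: EvidenceStatus;   // the assumption's evidentiary status
  evidence: string[];       // ids of signals that support or refute it
}

interface Hypothesis {
  id: string;
  assumptionIds: string[];  // which assumptions this bet rests on
  expectedOutcome: string;  // the measurable result that would confirm it
}

interface StrategicSignal {
  id: string;
  source: "user" | "market";
  observation: string;
}
```

Once an assumption carries an explicit evidentiary status, the AI can be asked to reason from what is supported rather than from what merely sounds plausible.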

5. Solving the Noise Problem

It wasn’t all good news in the data. A majority of respondents reported significant downsides: hallucinations, context gaps, and general misalignment.

This confirms that AI without structure amplifies noise just as efficiently as it amplifies signal. This is why Qi focuses so heavily on provenance and trust.

  • We must anchor AI reasoning in authenticated shared state.
  • We must track trust signals for AI suggestions.
  • Human agents must have the power to challenge and refine AI insights.

This turns AI’s unpredictability into controlled amplification.
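
A minimal sketch of what provenance-tracked suggestions could look like, assuming a simple trust score and an explicit challenge mechanism (all of this is illustrative, not Qi's implementation):

```typescript
// Hypothetical provenance tracking for AI suggestions: illustrative only.

interface AISuggestion {
  id: string;
  content: string;
  groundedIn: string[];   // ids of authenticated shared-state artefacts
  trustScore: number;     // 0..1, updated by human feedback over time
  challenges: string[];   // human objections recorded against it
  status: "proposed" | "challenged" | "accepted";
}

function challenge(s: AISuggestion, reason: string): AISuggestion {
  // A challenge is recorded, trust drops, and the suggestion
  // returns to review instead of silently shipping.
  return {
    ...s,
    status: "challenged",
    challenges: [...s.challenges, reason],
    trustScore: Math.max(0, s.trustScore - 0.2), // penalty size is arbitrary here
  };
}
```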

The Bottom Line

More people are beginning to realise that AI is not just a powerful tool: its real value emerges when it becomes a cooperating partner, not an executor.

Qi’s architecture of human–AI cooperation is designed for this exact shift. We aren't just building a faster typewriter; we are building a better way to think together. The core challenge now is to answer the questions that will define the next generation of work:

  1. What distinct artefacts make upstream strategy AI-ready?
  2. How do we enforce integrity in AI-derived insights?
  3. How do we allow for role-specific AI behaviours without fracturing our common ground?

The answers to these questions will determine whether we merely amplify productivity, or whether we also amplify clarity of purpose.
