We are uncovering better ways of developing software by working in teams where humans and agents collaborate. Through this work, we have come to value:
That is, while there is value in the items on the right, we value the items on the left more.
Twelve Principles of Hybrid Intelligence
Intention is the Primary Artifact
The value of a human-agent team is measured by the clarity, verifiability, and reusability of the specifications that guide production. Working software is still the ultimate goal, but executable intent is how we achieve it.
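As an illustration of executable intent, a specification can be stated as a mechanical check rather than prose. Everything in this sketch (`Order`, `request_refund`, the 30-day refund policy) is hypothetical, invented for the example:

```python
from dataclasses import dataclass


@dataclass
class Order:
    amount: int
    days_ago: int


@dataclass
class RefundResult:
    approved: bool
    amount: int


def request_refund(order: Order, policy_window_days: int = 30) -> RefundResult:
    # Stub implementation: in practice the agent produces this code,
    # while the human owns the executable specification below.
    ok = order.days_ago <= policy_window_days
    return RefundResult(approved=ok, amount=order.amount if ok else 0)


def test_refund_within_policy_window():
    """The intent, stated as a check anyone (or any CI job) can run."""
    result = request_refund(Order(amount=100, days_ago=10))
    assert result.approved and result.amount == 100
```

The test, not the implementation, is the reusable artifact: it survives rewrites of the code beneath it.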
Customer Value Drives the Loop
Our highest priority is to satisfy the customer through early and continuous delivery of verified solutions. Agents can produce output that appears complete — tests pass, features work — while solving a problem no customer has. The iteration loop must be anchored to validated human outcomes, not to agent throughput.
Govern Change, Accelerate Execution
Welcome changing requirements, even late in development. Agent speed makes change cheap to execute but not cheap to govern. Humans must own architectural coherence, dependency impact, and integration risk — the agent implements the change, the human decides whether the system can absorb it.
Safe Delegation Requires Contracts
An agent can operate responsibly only when it is given a clear objective, strict constraints, verifiable acceptance criteria, and an explicit escalation policy.
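One way to make such a contract concrete is as a structure the orchestration layer checks before any hand-off. The sketch below is illustrative; the field names and escalation options are assumptions, not a standard:

```python
from dataclasses import dataclass
from enum import Enum


class Escalation(Enum):
    """What the agent must do when it cannot satisfy the contract."""
    HALT_AND_ASK = "halt_and_ask"   # stop, surface context, wait for a human
    ROLL_BACK = "roll_back"         # undo partial work, then halt


@dataclass(frozen=True)
class DelegationContract:
    objective: str                   # the outcome the agent is accountable for
    constraints: list[str]           # hard limits: files, APIs, budgets it may not cross
    acceptance_criteria: list[str]   # checks a reviewer or CI can verify mechanically
    escalation: Escalation           # the explicit policy for the failure path

    def is_delegable(self) -> bool:
        """Refuse delegation while any element of the contract is missing."""
        return bool(self.objective and self.constraints and self.acceptance_criteria)
```

Refusing an under-specified task is the point: the contract makes missing constraints visible before the agent starts, not after it has invented them.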
Autonomy is a Design Parameter
Autonomy is not a fixed state. It must be dynamically calibrated based on task risk, action reversibility, and specification maturity.
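A minimal sketch of what that calibration could look like, assuming each factor is scored between 0 and 1; the scoring rule and tier thresholds are invented for illustration and would be tuned empirically by each team:

```python
def autonomy_level(risk: float, reversibility: float, spec_maturity: float) -> str:
    """Map the three calibration inputs (each scored 0.0-1.0) to an autonomy tier."""
    score = (1.0 - risk) * reversibility * spec_maturity
    if score > 0.6:
        return "autonomous"   # act, report afterwards
    if score > 0.3:
        return "propose"      # draft the change, a human approves before merge
    return "advise"           # suggestions only, a human implements
```

The same agent might run autonomously on a reversible refactor backed by a mature spec, and be restricted to advising on an irreversible migration sketched in two sentences.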
Sustainable Cognitive Pace
In a hybrid team, the bottleneck shifts from production to comprehension. The pace of delivery is dictated by the human capacity to validate, orchestrate, and maintain a coherent mental model, not by the agent's effectively unlimited capacity to generate. When output exceeds understanding, quality collapses silently. Sponsors, humans, and systems should maintain a pace the team can cognitively sustain indefinitely.
Humans are Guarantors
The human role shifts from manual implementation toward intention translation, orchestration, validation, and systemic responsibility. But guarantorship demands active competence — a human who cannot understand the agent's output cannot meaningfully guarantee it. Build projects around motivated individuals, equip them with the right AI tools, invest in their ability to evaluate what those tools produce, and trust them to govern the outcomes.
Failure Demands a Protocol
When an agent fails, the cost is proportional to the time between failure and detection. Hybrid teams must design explicit failure protocols: how agent errors are surfaced, how escalation is triggered, what the human fallback is, and how the system degrades gracefully. An undetected agent failure is more dangerous than a visible human mistake.
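The sketch below shows one possible shape for such a protocol, encoding the four elements named above; the function, retry budget, and return values are hypothetical:

```python
import logging

logger = logging.getLogger("hybrid-team")


def handle_agent_failure(task_id: str, error: Exception, attempts: int,
                         max_attempts: int = 2) -> str:
    """An explicit failure path: surface, escalate, fall back, degrade."""
    # 1. Surface: the silent failure is the dangerous one, so log it first.
    logger.error("agent failure on %s: %s", task_id, error)

    # 2. Escalate via bounded retries, never indefinite self-correction.
    if attempts < max_attempts:
        return "retry_with_feedback"

    # 3. Fallback: past the retry budget, the task returns to a human owner.
    logger.warning("escalating %s to human owner", task_id)

    # 4. Degrade gracefully: the rest of the pipeline proceeds without this task.
    return "reassign_to_human"
```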
Coherence is Collective Responsibility
When multiple agents generate changes in parallel, system consistency becomes the hardest coordination problem. No single agent sees the whole system; coherence doesn't emerge from individual correctness — it must be explicitly designed, continuously maintained, and collectively owned.
Trust is Calibrated, Not Granted
Trust in an agent is established through demonstrated performance within a specific task domain, not through general reputation. It is never blindly granted; it is scoped to capability boundaries and adjusted dynamically with empirical evidence. An agent trusted in one domain has earned nothing in another.
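One simple way to operationalize this is a per-domain score updated only by verified outcomes; the moving-average rule below is an assumption chosen for illustration, not an established calibration method:

```python
from collections import defaultdict


class DomainTrust:
    """Per-domain trust scores, adjusted only by verified evidence."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha                       # weight given to the newest evidence
        self.scores = defaultdict(lambda: 0.5)   # unknown domains start neutral

    def record(self, domain: str, verified_ok: bool) -> None:
        """Move the domain's score toward the latest verified outcome."""
        outcome = 1.0 if verified_ok else 0.0
        self.scores[domain] += self.alpha * (outcome - self.scores[domain])

    def may_delegate(self, domain: str, threshold: float = 0.8) -> bool:
        # The lookup is scoped to the domain, never global: a high score
        # in "refactoring" earns nothing in "database migrations".
        return self.scores[domain] >= threshold
```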
Ambiguity is Risk
What an agent does not understand, it invents with apparent coherence. The most efficient method of conveying information to an agent is a rigorously refined specification. Resolving ambiguities before delegation is the modern equivalent of face-to-face communication.
Continuous Mutual Learning
At regular intervals, the team reflects on how to become more effective — but in a hybrid team, reflection runs in both directions. Agent error patterns reveal where human specifications are ambiguous, incomplete, or contradictory. Human corrections reveal where agent capabilities have boundaries. Each cycle should produce better specifications and better-calibrated delegation — the team and its agents co-evolve.