Three Ways Humans and AI Actually Work Together
Different types of work need different collaboration patterns. One size fits none.
Model 1: Augmented Creativity
This is for open-ended, exploratory work where meaning and interpretation matter most—strategy, problem framing, content creation, sensemaking.
The human owns direction, meaning, and judgment. AI expands thinking, generates options, challenges assumptions.
The failure mode is treating AI output as answers instead of stimuli. When leaders start copy-pasting AI strategy decks without critically engaging, augmented creativity becomes automated mediocrity.
Model 2: Hybrid Decision Systems
This is for decisions made under uncertainty where trade-offs matter—hiring, promotions, resource allocation, prioritization, risk assessment.
The human integrates context, makes trade-offs, owns the final decision. AI provides analysis, surfaces patterns, simulates scenarios.
The failure mode is silent automation creep. Leaders say "we still decide"—but somehow always follow the AI recommendation. The human becomes a rubber stamp.
Model 3: Oversight-Driven Automation
This is for repetitive, rule-based work where consistency and efficiency are primary—reporting, monitoring, repetitive workflows, rule-based execution.
The human defines boundaries, oversees performance, handles exceptions. AI executes consistently and scales efficiency.
The failure mode is granting autonomy without oversight. And as our lawyer friend demonstrated, that can be catastrophic.
These models tell us how humans and AI work together. But they don't tell us who owns the decision.
For that, you need the Stanford Human Agency Scale, which I covered in last week's issue. If you missed it, go back and read it. It matters.