The Stanford Approach to Making AI Actually Work


Hi Reader!

Last week, a Head of HR at a tech consulting firm shared her concern with me: "We're going full scale into AI—trying various tools across many functions, training people, writing policies. But we still have no clarity on what ROI we can expect from it or whether we're even moving in the right direction."

I have to admit, this situation is far from unique.

The AI Adoption Paradox

Companies are racing to adopt AI faster than ever. The Stanford AI Index 2025 report shows that 78% of organizations were already using AI in 2024, up from 55% in 2023. And 68% of CEOs expect AI to accelerate growth and efficiency.

But the reality is sobering: by one widely cited MIT estimate, 95% of organizations are getting zero return on their AI investments.

Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027. And other reports, taken together, suggest that roughly 60% of all AI initiatives will be canceled within the next year.

So what's going wrong?

We're adopting AI faster than we can absorb it. And in the rush to automate, we're forgetting to decide who's actually in control and who decides how it all should work.

The Missing Conversation

Ask yourself this question: "If this AI system makes a bad decision tomorrow, who gets called into the room to explain it?"

Because we've been so focused on what AI can do, we forgot to define what it should do—and where humans need to stay in control.

That's why I love the Human Agency Scale introduced by Stanford researchers. It forces clarity around a single issue: where does judgment live in this workflow, and who is accountable for the outcome?

The Human Agency Scale (HAS)

The scale describes five levels of human involvement:

H1 — Full Automation

AI handles the task entirely. No human reviews inputs, outputs, or decisions in real time. Responsibility is delegated to the system.

H2 — Minimal Human Input

AI performs autonomously but requires limited human input—configuration, thresholds, periodic checks. Human involvement is occasional and indirect.

H3 — Equal Partnership

AI and humans work together. AI provides analysis or recommendations; humans actively engage with the output, challenge it, and make decisions jointly.

H4 — Human-Led with AI Support

Humans lead the task and decision-making. AI provides support or suggestions but cannot proceed without explicit human approval.

H5 — Human-Only

AI has no operational role. All decisions and actions are carried out by humans.

The HAS framework makes explicit how much control and responsibility the human must retain for the work to remain accountable.

Why This Matters More Than You Think

Without explicitly naming the level of human agency, organizations slide toward higher automation simply because it appears efficient. Over time, AI recommendations begin to feel authoritative. Human involvement becomes symbolic rather than substantive.

This is how systems end up "deciding" without anyone formally deciding to give them that power.

And when something goes wrong? Everyone points to the AI. But the AI didn't choose its own level of autonomy. People did—or more often, people let it happen by default.

(I'll share more about how AI is influencing the way our brains make decisions in future issues; new newsletters come out every Thursday.)

The Real Question

The first thing leaders need to settle is who is actually making the decisions, and how to make that responsibility visible.

Leaders must be able to point to a workflow and clearly articulate:

  • What role AI plays
  • Where human judgment intervenes
  • And who owns the outcome

If you can't answer those three questions, you don't have an AI strategy. You have a gamble.
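If it helps to make this concrete, here is a minimal sketch of what an "AI accountability register" could look like. The structure, field names, and example workflow below are my own illustration, not part of the Stanford framework; the point is simply that every AI-touched workflow gets a stated HAS level, a named owner, and an explicit human checkpoint written down before the system goes live.

from dataclasses import dataclass
from enum import Enum

class HASLevel(Enum):
    H1 = "Full Automation"
    H2 = "Minimal Human Input"
    H3 = "Equal Partnership"
    H4 = "Human-Led with AI Support"
    H5 = "Human-Only"

@dataclass
class WorkflowEntry:
    workflow: str          # the business process, e.g. "screening job applications"
    ai_role: str           # what the AI actually does in this workflow
    human_checkpoint: str  # where human judgment intervenes
    owner: str             # the named person who owns the outcome
    has_level: HASLevel    # the agreed level of human agency

def unanswered_questions(entry: WorkflowEntry) -> list[str]:
    """Flag any of the three questions that still have no real answer."""
    gaps = []
    if not entry.ai_role.strip():
        gaps.append("What role does AI play?")
    if not entry.human_checkpoint.strip():
        gaps.append("Where does human judgment intervene?")
    if not entry.owner.strip():
        gaps.append("Who owns the outcome?")
    return gaps

# Hypothetical example entry for illustration only
screening = WorkflowEntry(
    workflow="CV screening for engineering roles",
    ai_role="Ranks applications and drafts a shortlist",
    human_checkpoint="Recruiter reviews every rejection before it is sent",
    owner="Head of Talent Acquisition",
    has_level=HASLevel.H4,
)

print(unanswered_questions(screening))  # [] means all three questions have an answer

The format itself doesn't matter; a shared spreadsheet does the same job. What matters is that the level of automation and the name of the accountable human are written down before anything goes live.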

Where to Start

If you're wondering whether your team is overusing AI—or underusing human judgment—I've put together a simple AI Reliance Team Self-Assessment. It's a downloadable resource for leaders who want to avoid the hidden costs of over-relying on AI and build teams that thrive in the age of intelligent machines.

Because the truth is: AI doesn't fail because it's not smart enough.

It fails because we haven't been clear about where we need it to stop and wait for us.

See you next week,

Daria


P.S. This newsletter was created at H4—Human-Led with AI Support. I led the thinking, AI helped with the polish. If you found it useful, forward it to a colleague who's navigating the AI chaos.

Check out more of our work:

LinkedIn: Connect
YouTube: Subscribe
My book: Read

If you want to get in touch, hit REPLY.

I'm happy to help!

