Stop winging it: How to create a constitution for human-AI collaboration


Hi Reader,

A team is no longer just a group of humans sitting in one office.

It may include people working across locations, some meeting in person, some never meeting at all, and increasingly, AI agents participating in research, drafting, analysis, coordination, or execution.

That changes the leadership question from "How do I manage people?" to "How do I manage a system of humans and AI working together?"

In my recent conversation on The Tech Leaders Playbook, I made a simple point: what makes a good team is clear roles and shared purpose. But when you add a robot to your team, that becomes even more important.

Think about it. You now need to answer:

  • What is the role of this AI agent on your team?
  • What are the roles of the other people on your team?
  • How do you work together? How do you collaborate?
  • How do you make decisions?
  • What needs human judgment? What gets escalated?

Most teams I talk with haven't written down the answers. They're flying blind, making it up as they go.

And that's a problem.

Because when work is distributed, hybrid, and AI-supported, clarity cannot stay informal. Teams need shared rules for how they work together.

Not bureaucracy. Not a heavy policy document. A practical team charter. A constitution. A working agreement that makes collaboration clearer.

What to put in your team's constitution

A team constitution for human-AI collaboration might clarify:

  • What AI is used for and what it is not used for
  • Which decisions stay fully human
  • Where human review is required
  • How team members challenge AI outputs
  • Which tools are approved
  • How experimentation happens
  • How the team measures whether AI is actually helping
  • How people communicate across remote and hybrid settings
  • When to escalate risks, mistakes, or ambiguity

In many teams, confusion doesn't come from bad intentions. It comes from unspoken assumptions. And AI scales those assumptions fast.

Learn from others who've done it

I'm sharing a few public examples of handbooks, constitutions, and playbooks below. These aren't templates to copy word for word. They're inspiration for creating clearer ways of working:

  • Claude Constitution - Anthropic's vision for Claude's character and decision-making. Shows how to document principles, priorities, and ethical guidelines for AI systems in detailed, transparent language.
  • HubSpot AI Agents Playbook - A thorough guide to implementing AI agents, with frameworks for human-AI partnership, deployment best practices, and use cases across marketing, sales, and operations.
  • Basecamp Employee Handbook - A public handbook covering policies, perks, rituals, and how work gets done. Clear proof that transparency and documentation build trust.
  • Zapier Remote Work Guide - Practical advice for distributed teams on communication, collaboration, and building culture when people aren't in the same room.
  • GitLab Handbook - Perhaps the most exhaustive public handbook in existence. Demonstrates what it means to operate handbook-first, with everything from values to workflows documented and accessible.
  • Atlassian Team Playbook - A collection of exercises and plays to help teams improve collaboration, run better meetings, and build alignment around shared goals.
  • Dropbox Virtual First Toolkit - Resources for designing work around distributed-first principles, including guides on asynchronous communication and intentional collaboration.

If your team has never created a charter, start here.

And if you want the full conversation, watch the episode:

The bottom line? If you lead a team that includes both humans and AI, the question is not whether you need more clarity. You do.

The question is whether you will leave that clarity unspoken—or write it down.

Talk soon,

Daria


P.S. Prefer to listen? You can find my full conversation on The Tech Leaders Playbook on Spotify or Apple Podcasts.

Check out more of our work at...

LinkedIn: Connect

YouTube: Subscribe

My book: Read

If you want to get in touch, hit REPLY.

I'm happy to help!

600 1st Ave, Ste 330 PMB 92768, Seattle, WA 98104-2246
Unsubscribe · Preferences

Meaning Makers

A no-nonsense newsletter for busy leaders who are done with overwork and ready to scale smarter. Join a community of 15K+ leaders and followers across platforms getting concise, actionable insights on leadership, team building, and how to use AI and hybrid intelligence to make work easier—so you can earn more, go home earlier, and lead with purpose without burning out.
