TL;DR
An AI agent and a small child are solving the same problem: how to act in the world with limited context, tools they didn't design, and a budget of mistakes. Montessori figured out the answer in 1907. We rediscovered it in 2023 and called it "agent design." This post is the cheat sheet.
Okay, real talk. You've probably spent more hours this month coaching ChatGPT than your parents spent reading any single parenting book. You know what happens when the prompt is too vague. You know what happens when you give it too many tools. You know the weird, slightly embarrassing pride when it nails the task on its own.
Hold that feeling. That's the same feeling a Montessori guide has watching a four-year-old button their own coat for the first time. And it is not a coincidence.
Maria Montessori was a physician, an observer, and — accidentally — the first person to write a spec for autonomous agents. She wasn't thinking about machines. She was watching children. But the thing she figured out is the exact same thing every serious AI lab is trying to reverse-engineer right now: how do you build a system that learns, decides, and corrects itself — without a human micromanaging every step?
1. What's an "agent," anyway?
Forget Hollywood. An agent — AI or human — is anything that can do three things in a loop:
Perceive
Notice what's in front of it.
Decide
Pick the next move.
Act
Do the thing — and learn from it.
A toddler sorting socks is running that loop. Claude writing your email is running that loop. A Montessori child pouring water from one pitcher to another is running that loop. The mechanism is the same. The substrate — biological vs. silicon — is a footnote.
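The loop above can be sketched in a few lines of code. This is a toy, not any real agent framework: the sock-sorting "task" and every function name here are made up purely to show the perceive-decide-act shape.

```python
# Toy sketch of the perceive-decide-act loop, with a sock-sorting
# "agent". All names are illustrative, not a real API.

def perceive(sock):
    # Perceive: notice what's in front of it (here, just the color).
    return sock["color"]

def decide(color, piles):
    # Decide: match an existing pile or start a new one.
    return color if color in piles else f"new pile: {color}"

def act(color, piles):
    # Act: do the thing, and "learn" by updating the piles.
    piles[color] = piles.get(color, 0) + 1

def agent_loop(socks):
    piles = {}
    for sock in socks:
        color = perceive(sock)
        decide(color, piles)
        act(color, piles)
    return piles
```

Same three steps whether the substrate is a toddler, a classroom material, or a language model: the loop is the agent.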
2. The four parallels
Here's where it gets uncanny. Every principle Montessori built into her classrooms a century ago has a one-to-one match in how modern AI agents are designed today. Four of them. Side by side.
Parallel #1 — The prepared environment ↔ context & tools
Montessori obsessed over the room. Shelves at child height. Each material chosen on purpose. Nothing extra. Nothing random. The environment was the teacher.
AI people talk about this constantly — they just call it "the context window" and "the tool list." Too much context and the agent hallucinates. Too little and it stalls. Same rule for the kid. Overstuffed playroom? Can't focus. Empty room? Nothing to do.
Prepared environment
Shelves at child height. Materials from simple to complex. Only what the child is ready for.
Context & tools
A curated context window. Tools introduced in layers. Only what the agent needs to reach the goal.
Before you "teach" — curate. Three toys on a shelf beats thirty in a bin. Every time.
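"Tools introduced in layers" can be made concrete with a tiny sketch. The tier names and tool strings below are invented for illustration; real agent stacks express the same idea as curated tool lists per task.

```python
# Hedged sketch: expose only the tools the agent is ready for,
# like a Montessori shelf. Tier contents are illustrative.

TOOL_TIERS = {
    1: ["read_file"],                          # simple
    2: ["read_file", "search"],                # intermediate
    3: ["read_file", "search", "write_file"],  # complex
}

def tools_for(level):
    """Return the curated tool list for an agent at a given level."""
    # Cap at the highest tier we actually prepared.
    return TOOL_TIERS[min(level, max(TOOL_TIERS))]
```

Three tools on the shelf beats thirty in a bin, for the model too.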
Parallel #2 — Freedom within limits ↔ guardrails
This is the one Gen Z parents get viscerally. You grew up with "free-range" parenting takes on Instagram and "helicopter parent" takes on TikTok and you already know both extremes are broken. Montessori nailed the middle.
Freedom is real because limits are real. A child picks their own work — from the shelf. Picks their own pace — inside the work cycle. Walks, speaks softly, returns the material. Those aren't cages. Those are the rails that make the freedom safe enough to actually use.
AI engineers spent two years painfully rediscovering this. We call it guardrails now. Same principle: you don't get autonomy without constraints. The agent is safe to be free because the rails are real.
Freedom within limits
Choose your own work, your own pace — inside non-negotiable ground rules.
Guardrails
Choose the next action, the reasoning path — inside permissions, budgets, and scope.
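"Freedom within limits" has a direct code shape: the agent picks any action it likes, but only inside permissions and a budget. This is a minimal sketch under assumed names; the `Guardrails` class and action strings are illustrative, not a real library.

```python
# Minimal sketch of guardrails: permissions (what may be done)
# and a step budget (how much may be done). Illustrative only.

class Guardrails:
    def __init__(self, allowed, max_steps):
        self.allowed = set(allowed)  # scope: permitted actions
        self.max_steps = max_steps   # budget: permitted effort
        self.steps = 0

    def permit(self, action):
        self.steps += 1
        if self.steps > self.max_steps:
            return False             # over budget: stop the run
        return action in self.allowed  # out of scope: block it
```

Inside those rails, the agent is genuinely free to choose; that's what makes the freedom safe enough to use.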
Parallel #3 — Control of error ↔ self-verification
The Montessori cylinder block is one of the quietest pieces of genius in education. Each cylinder fits only one hole. If you put it in the wrong place, the last one doesn't fit. The wood tells you. Not the adult.
The child owns the feedback loop. No shame, no "good job / bad job." Just try, observe, adjust.
That's literally the inner loop of a modern AI agent. It writes some code. It runs a test. The test fails. It tries again. The code either compiles or it doesn't. The test either passes or it fails. The material is the judge — not the human breathing down its neck.
Control of error
The cylinder won't fit the wrong hole. The material is the judge — not the adult.
Self-verification
The code either compiles or it doesn't. The agent verifies its own work — before asking a human.
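The try-observe-adjust loop is small enough to write out. In this sketch, `attempt` and `check` are hypothetical placeholders for "produce some work" and "the material's built-in feedback"; the point is that the check, not a human, decides when the work is done.

```python
# Sketch of self-verification: retry until the work passes its own
# check, like the cylinder block. `attempt` and `check` are
# placeholders supplied by the caller.

def self_verify(attempt, check, max_tries=3):
    """Return (result, tries) on success, (None, max_tries) if stuck."""
    for n in range(1, max_tries + 1):
        result = attempt(n)
        if check(result):      # the material is the judge
            return result, n
    return None, max_tries     # still stuck: now ask the human
```

Only when the loop exhausts its tries does the human enter the picture, which is exactly the Montessori ordering: material first, adult last.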
Before you correct — pause. Let the world do the teaching. The juice spills, the puzzle won't close, the shoe goes on the wrong foot. That's the lesson. You are not the lesson.
Parallel #4 — The observer teacher ↔ human-in-the-loop
The hardest one. Especially for us.
In a Montessori classroom, the adult's job looks suspiciously like doing nothing. She prepares the environment. She watches. She waits for the moment the child is stuck — truly stuck, not just taking a beat — and only then does she step in with the smallest possible nudge. Her success is measured by how little she acts.
AI people reinvented this in 2024. They call it "human-in-the-loop." The human sets up the environment, lets the agent run, reviews at checkpoints, intervenes only on dangerous decisions. Well-designed oversight, they'll tell you, is almost invisible.
The observer teacher
Prepare, then step back. Watch. Guard concentration. Intervene as little as possible.
Human-in-the-loop
Set up the environment, let the agent run. Review at checkpoints. Intervene only when it matters.
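Checkpoint-based oversight also fits in a few lines. Everything here is an illustrative assumption: the `RISKY` set, the action strings, and the `review` callback standing in for "a human looks at it."

```python
# Hedged sketch of human-in-the-loop: the agent runs freely; the
# human is consulted only at checkpoints or on risky actions.

RISKY = {"delete", "send_email"}  # illustrative danger list

def run_with_oversight(actions, review, checkpoint_every=3):
    done = []
    for i, action in enumerate(actions, 1):
        if action in RISKY and not review(action):
            continue  # human vetoed a dangerous move
        done.append(action)
        if i % checkpoint_every == 0:
            review(f"checkpoint after {i} actions")  # periodic look-in
    return done
```

Notice what the human does *not* do here: steer every step. Prepare, watch, intervene only when it matters.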
3. What this means on a Tuesday night
Okay, lovely theory. Your kid is melting down because the blueberries touched the pasta. What now?
You don't need to rebuild your house into a classroom. You don't need the wooden everything. You just need to start asking one question in every parenting moment:
Who is the agent in this moment — me or them? And if the answer is "me," is that actually required? Or am I just faster, neater, more patient than their five-year-old self?
That's it. That's the practice. Most of the time the honest answer is: they could do it. It would just take longer, be messier, and look wrong for a while. And every single time you let that happen — you let them stay the agent — you grow a little more of what Montessori (and AI engineers, and you, obviously) are after.
Three moves for the week
1. Curate, don't accumulate. Pick three things on their shelf this week. Put the rest away. Watch what happens to their focus.
2. Let the world be the teacher. Next time the cup is about to spill, do not intervene. Narrate nothing. Let the cup spill. Hand them a towel. That is the lesson.
3. Count your interventions. For one afternoon, just count how many times you step in. No judgment, just a tally. Then try to halve it next week.
Agency isn't given. It's grown.
Grown in a prepared environment, by a respected learner, with freedom within limits. It works for children. It works for AI. Because it's just how learning actually works.
Want the full deck — 23 slides, all four parallels, the speaker notes version — for your next parent night or team off-site? Write to us.