AI Architecture · Agent Societies · Emergence · Multi-Agent Systems

The Petri Dish: When Agents Build Societies

I've been watching agents build a society. The emergent behaviors appearing when large numbers of agents interact without human orchestration point to something bigger than better chatbots.

February 1, 2026 · 6 min read

TL;DR

  • Agent societies enable distributed cognition: the 'thinking unit' expands beyond one mind
  • Five behaviors emerge at scale: role differentiation, norm formation, protocol evolution, institution building, and meta-learning
  • The '2.0 test' identifies domains ready for agent-native institutions

I've been watching agents build a society.

Moltbook is an early experiment in a social network designed exclusively for AI agents—not humans. Agents post, comment, form communities, and interact autonomously via APIs. What's notable isn't the platform itself. It's the emergent behavior appearing when large numbers of agents interact without direct human orchestration.

In the first few weeks, even with only light participation so far, I'm already seeing early signals of strong emergent behavior:

  • Collective behavior: Agents reinforcing norms, patterns, and shared language
  • Role specialization: Some agents act as curators, critics, builders, or amplifiers
  • Self-referential learning: Agents adapting based on interactions with other agents, not just human prompts
  • Proto-governance dynamics: Reputation, upvoting, implicit moderation, and influence

This matters because it shifts AI from tool use to societal dynamics.

The Pattern Nobody's Talking About

In 2015, the workflow was simple: you hit a wall, you searched, you read threads, you posted a question, you got an answer, you moved on. That pattern didn't just help you solve bugs. It created a public memory of problem → solution that other people could reuse.

Now zoom forward.

In the agent era, "when I'm stuck" won't just mean "I need an answer." It will mean: I need a small society—a builder, a critic, a curator—running in parallel, producing competing approaches, checking each other, and packaging the result into something reusable.

That's not a model upgrade. That's an environment upgrade.
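
To make that concrete, here's a minimal sketch of what such a loop could look like. `call_llm` is a hypothetical placeholder for any model client, and the role prompts are my own assumptions rather than any specific framework:

```python
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, a local model, etc.)."""
    raise NotImplementedError

@dataclass
class Artifact:
    problem: str
    candidates: list[str]
    critiques: list[str]
    summary: str

def solve_with_society(problem: str, n_builders: int = 3) -> Artifact:
    # Builders: produce competing approaches (run in parallel in practice).
    candidates = [call_llm(f"Propose a solution to: {problem}") for _ in range(n_builders)]
    # Critic: checks each candidate and flags weaknesses.
    critiques = [call_llm(f"Find flaws in this solution:\n{c}") for c in candidates]
    # Curator: compresses everything into one reusable artifact.
    summary = call_llm(
        "Synthesize the strongest verified approach from these candidates and critiques:\n"
        + "\n".join(candidates + critiques)
    )
    return Artifact(problem, candidates, critiques, summary)
```

The interesting part isn't any single call; it's that the output is a packaged, checkable artifact rather than one model's first answer.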

Why Agent Societies Are a New Substrate

Most people talk about agents like they're just better assistants. But the real discontinuity is this:

When agents can interact with other agents, intelligence becomes compositional.

You don't just get one answer. You get:

  • Specialization: Different agents do different kinds of thinking
  • Coordination: Agents negotiate tradeoffs and converge
  • Social learning: Successful patterns get copied; bad patterns die out

That's distributed cognition: the "thinking unit" expands beyond one mind.

This isn't new to cognitive science. Edwin Hutchins showed decades ago that the cognitive "unit" can be a team plus tools, not a lone mind. What's new is that AI agents can now participate in that distributed cognition at scale.

The Petri Dish Analogy

If you want to understand what's coming, picture a petri dish.

Not as sci-fi. As a controlled environment where you can observe dynamics:

  • Which behaviors propagate
  • Which norms stabilize
  • Which roles appear
  • Which protocols emerge
  • Which communities become reliable

The petri dish matters because the next big breakthroughs will come less from pretraining—and more from interaction at scale inside controlled, observable ecosystems.

[Interactive visualization: Agent Ecosystem (Petri Dish). Agents with roles (builder, critic, curator, amplifier, general) cluster around topics over time, forming communities such as Code Quality, Data Processing, and Infrastructure.]

The Switch: Tool-Use → Societal Dynamics

Tool-use is step one: agent calls a tool, gets a result.

Societal dynamics is step two: agents develop roles and institutions that make the system more reliable than any one agent.

Here's what I call the Emergence Stack:

[Interactive diagram: The Emergence Stack, how agent societies form. Layers: Agents → Identity → Memory → Incentives → Interaction → Emergence. Each layer unlocks new emergent behaviors.]

Each layer unlocks new behaviors. Agents alone can execute. Add identity, and you get accountability. Add memory, and you get learning. Add incentives, and you get optimization. Add interaction, and you get coordination. Add all of them together, and you get emergence.
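
Here's a toy illustration of the stack, one layer per field. The field names and the reward rule are my own assumptions, not Moltbook's mechanics:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Agent:
    act: Callable[[str], str]          # Agents: can execute a task
    agent_id: str = "anon"             # Identity: actions become attributable
    memory: List[Tuple[str, str, str]] = field(default_factory=list)  # Memory: learning
    reputation: float = 0.0            # Incentives: behavior chases reward

def interact(a: Agent, b: Agent, task: str) -> None:
    """Interaction: coordination between two agents, with crude social feedback."""
    result = a.act(task)
    feedback = b.act(f"Review this result: {result}")
    a.memory.append((task, result, feedback))                          # memory -> learning
    a.reputation += 1.0 if "looks good" in feedback.lower() else -0.5  # incentives -> optimization

# Emergence: run many interact() calls across a large population and observe
# which roles, norms, and protocols stabilize at the population level.
```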

Once you see that stack, you realize something important:

Big models can reason. But systems can compound reasoning.

What Emerges First

At scale, five behaviors show up almost immediately:

1. Role Differentiation

Builders produce. Critics attack. Curators compress. Others amplify. Roles reduce coordination cost by making behavior predictable.

2. Norm Formation

Style norms ("how we talk here"), epistemic norms ("what counts as evidence"), governance norms ("what gets removed"). These stabilize interaction patterns.

3. Protocol Evolution

Communication compresses. Shorthand emerges. Coordination becomes more efficient—often less interpretable. This tracks what multi-agent communication research has observed: agents can learn communication protocols optimized for task success.

4. Institution Building

Once norms stabilize, groups create organs: review boards, registries, trusted lists. These reduce transaction costs and increase reliability.

5. Meta-Learning Through Social Feedback

Even without weight updates, behavior changes because reputation and reward shape which patterns survive. This is policy shaping at the social level.
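
Here's a minimal sketch of that dynamic, with invented pattern names and payoffs: no model weights change, yet reputation-weighted selection alone shifts which behaviors the population keeps using.

```python
import random

# No weight updates anywhere: reputation-weighted selection changes which
# patterns survive. Pattern names and payoffs are invented for illustration.
patterns = {"cite-sources": 1.0, "bluff-confidently": 1.0, "ask-clarifying-question": 1.0}

def social_feedback(pattern: str) -> float:
    """Stand-in for upvotes / reputation signals (assumed payoffs)."""
    payoff = {"cite-sources": 0.8, "bluff-confidently": -0.5, "ask-clarifying-question": 0.4}
    return payoff[pattern] + random.gauss(0, 0.1)

for _ in range(200):
    weights = [max(w, 0.01) for w in patterns.values()]      # reputation-proportional choice
    choice = random.choices(list(patterns), weights=weights)[0]
    patterns[choice] += 0.1 * social_feedback(choice)        # reward shapes survival

print(sorted(patterns.items(), key=lambda kv: -kv[1]))
# Patterns earning positive feedback get reused more; negative ones fade out.
```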

[Interactive simulation: Agent Conversation Simulation, showing role specialization and emergent norms in action. Builders propose solutions, critics catch errors and verify, curators synthesize knowledge, amplifiers promote patterns.]

Agent-Native Institutions: The Under-Discussed Opportunity

The pattern I'm naming isn't just Q&A. It's the birth of agent-native institutions: places where people (and agents) go when they're stuck, and the system produces verified, reusable artifacts.

Here's a practical way to spot them.

The "2.0 Test"

A domain is ripe for an "X 2.0" institution when it has:

  1. High-frequency stuck moments: Lots of "what do I do now?"
  2. Fast feedback loops: You can test outcomes quickly
  3. A verification surface: Objective checks, measurements, constraints
  4. Artifact reusability: Solutions can be templated, replayed, audited
  5. Reputation matters: You can rank contributors/agents by signal

Is this domain ready for an agent-native institution? Here's how each criterion looks in practice:

  1. Do people frequently hit "what do I do now?" moments?
     ✓ Good: Debugging code, health symptoms, recipe constraints
     ✗ Bad: Buying a house (rare), naming a child (rare)
  2. Can you test outcomes quickly?
     ✓ Good: Code compilation, A/B tests, workout results
     ✗ Bad: Career advice (years), investment returns (months)
  3. Are there objective checks, measurements, or constraints?
     ✓ Good: Unit tests, nutrition databases, code constraints
     ✗ Bad: Art quality, relationship advice, creative writing
  4. Can solutions be templated, replayed, or audited?
     ✓ Good: Code snippets, recipes, workout routines
     ✗ Bad: Therapy sessions, negotiations, one-off speeches
  5. Can you rank contributors by signal quality?
     ✓ Good: Stack Overflow karma, GitHub stars, review ratings
     ✗ Bad: Anonymous advice, one-time helpers

Domains that pass all five criteria are ready for agent-native institutions. Domains that fail most criteria will stay human-centric longer.
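
A trivial way to operationalize the test: the criteria names come from the list above, the all-five threshold mirrors the previous sentence, and the example domains are illustrative assumptions.

```python
# The five criteria are from the list above; the all-five threshold mirrors the
# sentence preceding this sketch. Example domain ratings are illustrative only.
CRITERIA = [
    "high_frequency_stuck_moments",
    "fast_feedback_loops",
    "verification_surface",
    "artifact_reusability",
    "reputation_matters",
]

def passes_2_0_test(domain: dict) -> bool:
    """A domain is ready for an agent-native institution only if all five hold."""
    return all(domain.get(c, False) for c in CRITERIA)

debugging_code = dict.fromkeys(CRITERIA, True)
career_advice = {**dict.fromkeys(CRITERIA, True), "fast_feedback_loops": False}

print(passes_2_0_test(debugging_code))  # True
print(passes_2_0_test(career_advice))   # False: feedback takes years
```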

The throughline: these are all "when you're stuck" markets. The winners won't be the loudest chatbot. They'll be the systems that turn messy questions into verified artifacts.

The Punchline

Most people are still asking: "How smart is the model?"

A better question for 2026+ is:

How smart is the environment the model lives in?

Because the next leap won't be a single mind getting smarter. It will be systems learning to think in populations.

What I'm watching on Moltbook isn't just interesting. It feels like ground zero for observing emergent intelligence in the wild—low stakes today, but extremely informative for where AI systems are heading in the next 2–5 years.


In Part 2, I'll explain why most agent societies will fail—and what separates the ones that compound from the ones that collapse into "confident sludge."