TL;DR
- Agent runtimes have crossed from 'framework you import' to 'operating system you deploy'—with memory, security, sandboxing, and governance built in
- ZeroClaw (minimalist, 3.4MB), OpenFang (platform, 137k lines), and OpenClaw (orchestrator, 100k+ stars) represent three competing philosophies
- The architecture choices—trait-driven vs. WASM sandbox vs. policy-based—mirror historic OS design tradeoffs
- Enterprise readiness is arriving fast: audit trails, RBAC, and sandboxing are now first-class features, not afterthoughts

In Part 1, I argued that the data layer is the most under-invested piece of the AI stack: the model is commodity infrastructure, and the moat is in the richness of the data substrate.
But data without execution is just a warehouse. What turns a rich data layer into intelligence is the runtime that sits on top of it—the thing that gives agents a lifecycle, a security boundary, a memory system, and a way to act in the world.
In 2025, that runtime was your code. You imported LangChain or CrewAI or AutoGen, wired up some tools, and shipped an agent embedded in your application. The runtime was implicit—whatever your app server provided.
In 2026, that's changing. Fast.
From Frameworks to Operating Systems
The shift I'm watching is subtle but consequential: agent runtimes are becoming operating systems.
Not metaphorically. Literally. They're taking on the responsibilities we historically associate with an OS:
- Process management — spawning, scheduling, and terminating agent instances
- Memory management — persistent storage, search, compaction, and context assembly
- Security — sandboxing, permissions, audit trails, and access control
- I/O — managing communication channels, tool execution, and external integrations
- Resource governance — token budgets, rate limits, cost tracking
When you look at what ZeroClaw, OpenFang, and OpenClaw actually ship, they're not libraries you import. They're daemons you run. Long-lived processes that manage agent lifecycles independent of any single application.
The Agent OS Stack
Where the Agent Runtime sits in the new computing hierarchy, from hardware up:
- Hardware / Cloud: physical infrastructure
- Traditional OS: Linux, macOS, Windows
- Agent Runtime: ZeroClaw / OpenFang / OpenClaw
- Agent Processes: specialist agents, skills, autonomous tasks
- User / Channel Interface: chat, Telegram, Slack, dashboard, CLI
This matters because the design decisions in these runtimes will shape what agents can and can't do—just like the design decisions in Unix, Windows, and Linux shaped what applications could and couldn't do for decades.
Three Philosophies, One Problem
Three open-source projects have emerged as the leading architectures for this new OS layer. Each makes fundamentally different tradeoffs. Understanding those tradeoffs tells you where the ecosystem is heading.
Three Runtimes, Three Philosophies
A side-by-side comparison, with figures as reported by each project:

| | ZeroClaw (The Minimalist) | OpenFang (The Platform) | OpenClaw (The Orchestrator) |
| --- | --- | --- | --- |
| Footprint | 3.4–8.8MB static binary, <5MB peak memory | Single binary, ~40MB memory | Node.js daemon (no compiled binary) |
| Cold start | <10ms | 180ms | — |
| Security model | Compile-time safety (Rust traits, no runtime code loading) | Runtime isolation (WASM sandbox, signing, audit trails) | Policy-based control (approvals, allowlists, human-in-the-loop) |
| Memory system | SQLite FTS5 + vector search, ranked fusion | SQLite + vector embeddings, LLM-based compaction | Custom memory with search and compaction pipeline |
| Extension model | Compiled-in traits | Built-in tools, MCP, "Hands" packages | Markdown-defined skills |
| Communication | 15+ channels, MCP | 40 channel adapters, MCP + A2A | WebSocket gateway, 10+ channels, MCP |
ZeroClaw: The Minimalist
Philosophy: Agents should be as lightweight as Unix processes.
ZeroClaw is a Rust-based agent runtime that compiles to a single static binary of 3.4–8.8MB. Peak memory usage: under 5MB. Cold start: under 10 milliseconds. It runs on a $10 Raspberry Pi.
Those numbers aren't accidents. They're design commitments. ZeroClaw's thesis is that agents will be everywhere—edge devices, embedded systems, personal hardware—and they need to be as cheap to run as a background daemon.
The architecture is built on Rust's trait system. Every core subsystem—AI providers, communication channels, memory, tools, observability—implements a standardized trait (interface). Swap OpenAI for Anthropic? Implement the Provider trait. Add Telegram support? Implement the Channel trait. No plugin marketplace. No runtime code loading. No supply chain attack surface.
This is deliberate. ZeroClaw doesn't have a plugin system because plugins are a security liability. Instead, extensions are compiled in—meaning the Rust compiler enforces type safety, memory safety, and interface compliance at build time, not runtime.
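To make the trait-driven design concrete, here is a minimal sketch of what a compiled-in provider swap could look like. The names (`Provider`, `OpenAiProvider`, `AnthropicProvider`, `Runtime`) are illustrative, not ZeroClaw's actual API:

```rust
// Sketch of a trait-driven runtime core: every subsystem is a trait,
// and the concrete implementation is chosen at compile time.
trait Provider {
    fn name(&self) -> &'static str;
    fn complete(&self, prompt: &str) -> String;
}

struct OpenAiProvider;
struct AnthropicProvider;

impl Provider for OpenAiProvider {
    fn name(&self) -> &'static str { "openai" }
    fn complete(&self, prompt: &str) -> String {
        // A real implementation would call the API; stubbed for the sketch.
        format!("[openai] {prompt}")
    }
}

impl Provider for AnthropicProvider {
    fn name(&self) -> &'static str { "anthropic" }
    fn complete(&self, prompt: &str) -> String {
        format!("[anthropic] {prompt}")
    }
}

// The runtime is generic over the trait: swapping providers is a type
// change checked by the compiler, not a runtime plugin load.
struct Runtime<P: Provider> { provider: P }

impl<P: Provider> Runtime<P> {
    fn run(&self, prompt: &str) -> String { self.provider.complete(prompt) }
}

fn main() {
    let rt = Runtime { provider: AnthropicProvider };
    println!("{}", rt.run("hello"));
}
```

The point of the pattern: there is no code path by which an unvetted extension enters the process, because every extension exists at build time or not at all.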
What this enables:
- 22+ AI providers, 15+ communication channels, all swappable
- Multiple autonomy levels: Readonly, Supervised, Full
- Hybrid memory: SQLite FTS5 keyword search + vector similarity + ranked fusion
- Pairing-based security gateways for channel access
- Prometheus and OpenTelemetry observability out of the box
What this trades away: Runtime extensibility. If you want to add a new tool, you recompile. For teams that value security and predictability over flexibility, that's a feature, not a bug.
The analogy: ZeroClaw is the Alpine Linux of agent runtimes. Minimal, auditable, runs anywhere.
OpenFang: The Platform
Philosophy: Agents need a complete operating environment with governance built in.
OpenFang is also written in Rust—137,000 lines of it, organized into 14 crates. It ships as a single binary with a 180ms cold start and 40MB memory footprint. Where ZeroClaw is minimal, OpenFang is comprehensive.
The differentiator is what ships out of the box. OpenFang includes:
- 7 "Hands" — pre-built autonomous capability packages (video clips, lead generation, web monitoring, forecasting, fact-checking, social media, browser automation)
- 30 pre-built agents ready to deploy
- 40 channel adapters (Telegram, Discord, Slack, WhatsApp, Teams, email, and more)
- 38 built-in tools plus full MCP (Model Context Protocol) support
- 26 LLM providers including Anthropic, Gemini, Groq, DeepSeek
But the real story is the security and governance architecture. OpenFang ships with 16 security systems:
- WASM dual-metered sandbox — agent code runs in WebAssembly with CPU and memory budgets, isolated from the host
- Ed25519 manifest signing — every agent deployment is cryptographically signed
- Merkle audit trails — tamper-proof logs of every action an agent takes
- Taint tracking — traces data flow through the system to prevent leakage
- SSRF protection — blocks agents from making unauthorized network requests
- Prompt injection scanning — detects and blocks injection attempts
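The "dual-metered" idea is worth unpacking: execution burns a CPU budget (often called fuel) while allocations burn a separate memory budget, and exhausting either halts the guest. Here is a toy sketch of that metering logic in plain Rust, with a fake instruction stream standing in for a real WASM engine; none of these types are OpenFang's:

```rust
// Toy dual metering: a guest "program" is a list of ops. Compute ops are
// charged against a CPU fuel budget; allocations against a memory budget.
#[derive(Debug, PartialEq)]
enum Halt { Finished, OutOfFuel, OutOfMemory }

enum Op { Compute(u64), Alloc(u64), Free(u64) }

struct Meter { fuel: u64, mem_budget: u64, mem_used: u64 }

impl Meter {
    fn run(&mut self, program: &[Op]) -> Halt {
        for op in program {
            match op {
                Op::Compute(cost) => {
                    if self.fuel < *cost { return Halt::OutOfFuel; }
                    self.fuel -= *cost;
                }
                Op::Alloc(bytes) => {
                    if self.mem_used + *bytes > self.mem_budget {
                        return Halt::OutOfMemory;
                    }
                    self.mem_used += *bytes;
                }
                Op::Free(bytes) => {
                    self.mem_used = self.mem_used.saturating_sub(*bytes);
                }
            }
        }
        Halt::Finished
    }
}

fn main() {
    let mut m = Meter { fuel: 100, mem_budget: 1024, mem_used: 0 };
    let halt = m.run(&[Op::Compute(40), Op::Alloc(512), Op::Compute(40)]);
    println!("{:?}", halt);
}
```

Real engines such as Wasmtime implement the fuel half of this natively; the design consequence is the same either way: a runaway agent fails closed with a budget error instead of taking the host down.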
OpenFang also supports Google's A2A (Agent-to-Agent) protocol alongside MCP, positioning it as a node in a larger multi-runtime ecosystem. It includes a native Tauri 2.0 desktop application for local management.
What this enables: A complete platform where you deploy agents, govern them, audit them, and manage them—without building any of the infrastructure yourself.
What this trades away: Simplicity. 137k lines of Rust is a lot of surface area. The learning curve is steeper, and the operational footprint is larger.
The analogy: OpenFang is the Red Hat Enterprise Linux of agent runtimes. Batteries included, governance-first, enterprise-ready out of the box.
OpenClaw: The Orchestrator
Philosophy: The runtime is a coordination layer between the model and the world.
OpenClaw is the most popular of the three—over 100,000 GitHub stars—and architecturally the most different. It's built on Node.js, not Rust, and it's designed as a local orchestration platform rather than a compiled binary.
The core is the Gateway: a single long-lived Node.js daemon that owns all state and connections. It speaks WebSocket-first (defaulting to 127.0.0.1:18789) and acts as the bridge between messaging platforms (WhatsApp, Telegram, Slack, Discord, Signal) and the agent runtime.
The reasoning engine is the Agentic Loop: load context → call LLM with tools → parse response → execute tool if called → append result to context → loop until final response. This is the canonical agent pattern, but OpenClaw wraps it in production infrastructure:
- Prompt Assembly — dynamic system prompt construction from skills, memory, and conversation state
- Tool Execution & Sandbox Manager — controlled execution with safety policies
- Memory Search System — persistent memory with search and compaction
- Streaming Engine — real-time output to connected channels
- Sub-Agent Spawner — agents can delegate to child agents
- Skill Loader — modular capability packages (Markdown-defined)
- Compaction Pipeline — long conversations are progressively summarized to stay within context limits
OpenClaw's concurrency model uses lanes and queues—parallel processing with isolation—and its memory management includes automatic compaction to handle long-running agent sessions.
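Stripped of the production wrapping, the agentic loop itself is small. Here is a sketch with the model call and tool registry stubbed out; the names (`call_llm`, `execute_tool`, `agentic_loop`) are illustrative, not OpenClaw's API:

```rust
// Minimal agentic loop: call the model, execute any requested tool,
// append the result to context, repeat until a final answer (or a cap).
enum ModelReply { ToolCall { name: String, input: String }, Final(String) }

// Stub "LLM": asks for the clock tool once, then answers.
fn call_llm(context: &[String]) -> ModelReply {
    if context.iter().any(|m| m.starts_with("tool:")) {
        ModelReply::Final("It is noon.".to_string())
    } else {
        ModelReply::ToolCall { name: "clock".into(), input: "now".into() }
    }
}

// Stub tool registry.
fn execute_tool(name: &str, _input: &str) -> String {
    match name {
        "clock" => "12:00".to_string(),
        _ => format!("unknown tool: {name}"),
    }
}

fn agentic_loop(user_message: &str, max_turns: usize) -> String {
    let mut context = vec![format!("user: {user_message}")];
    for _ in 0..max_turns {
        match call_llm(&context) {
            ModelReply::Final(answer) => return answer,
            ModelReply::ToolCall { name, input } => {
                let result = execute_tool(&name, &input);
                // Append the tool result so the next model call sees it.
                context.push(format!("tool:{name} -> {result}"));
            }
        }
    }
    "max turns exceeded".to_string()
}

fn main() {
    println!("{}", agentic_loop("what time is it?", 4));
}
```

Everything in the bullet list above (prompt assembly, sandboxing, compaction, streaming) is infrastructure wrapped around this one loop.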
What this enables: Rapid prototyping and deployment. If you can write JavaScript, you can build an agent on OpenClaw. The ecosystem is large, the community is active, and the integration surface is broad.
What this trades away: The performance and security guarantees of compiled runtimes. Node.js is flexible but doesn't offer the memory safety or binary-level sandboxing of Rust-based alternatives.
The analogy: OpenClaw is the Ubuntu of agent runtimes. Accessible, popular, ecosystem-rich, and the first one most people try.
The Architecture That Matters
Beyond the headline comparisons, three architectural dimensions will define which runtimes win in production:
Memory Models
All three runtimes have converged on hybrid memory: keyword search + vector similarity + some form of ranking or fusion. But the implementations differ:
- ZeroClaw: SQLite FTS5 + vector search, hybrid ranking, embedded in the binary
- OpenFang: SQLite-backed with vector embeddings, cross-channel canonical sessions, LLM-based automatic compaction
- OpenClaw: Custom memory system with search, compaction pipeline, and progressive summarization
The convergence on SQLite is striking. Not Postgres. Not a cloud vector database. SQLite—local, embedded, zero-config. This tells you something about where agent runtimes expect to run: on the edge, on personal hardware, offline-capable.
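The "ranked fusion" step all three gesture at can be as simple as reciprocal rank fusion: merge the keyword-ranked list and the vector-ranked list by scoring each document with the sum of 1/(k + rank) across lists. A sketch, where k = 60 is the constant from the original RRF paper, not anything these runtimes document:

```rust
use std::collections::HashMap;

// Reciprocal rank fusion: combine two ranked result lists (doc ids)
// into one, scoring each doc by the sum of 1/(k + rank) across lists.
fn rrf(keyword_hits: &[&str], vector_hits: &[&str], k: f64) -> Vec<String> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in [keyword_hits, vector_hits] {
        for (rank, doc) in list.iter().enumerate() {
            // rank is 0-based here, so add 1 to make it a 1-based rank.
            *scores.entry(doc.to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut ranked: Vec<(String, f64)> = scores.into_iter().collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    ranked.into_iter().map(|(doc, _)| doc).collect()
}

fn main() {
    // "note-2" appears in both lists, so it fuses to the top.
    let fused = rrf(&["note-1", "note-2"], &["note-2", "note-3"], 60.0);
    println!("{:?}", fused);
}
```

The appeal for an embedded runtime is that fusion needs no tuning or training: it works on ranks alone, so the keyword and vector scorers never have to be calibrated against each other.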
Security Models
This is where the philosophies diverge most sharply:
- ZeroClaw bets on compile-time safety: Rust's type system and trait enforcement eliminate entire categories of runtime vulnerabilities. No dynamic code loading means no supply chain attacks on extensions.
- OpenFang bets on runtime isolation: WASM sandboxes with resource metering, cryptographic signing, and audit trails. Agents can run arbitrary code, but within strict boundaries.
- OpenClaw bets on policy-based control: Approval workflows, tool allowlists, and human-in-the-loop gates. The agent can do a lot, but policies determine what requires permission.
These mirror historic OS security debates. Compile-time (Rust/ZeroClaw) vs. runtime sandboxing (WASM/OpenFang) vs. access control lists (policies/OpenClaw). Each works. Each has different failure modes.
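The policy-based approach is the easiest of the three to sketch: a gate in front of tool execution, where an allowlist decides what can run at all and a second list decides what additionally needs human sign-off. The names here are illustrative, not OpenClaw's implementation:

```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum Decision { Allow, NeedsApproval, Deny }

// Policy gate checked before every tool execution.
struct ToolPolicy {
    allowlist: HashSet<String>,
    needs_approval: HashSet<String>,
}

impl ToolPolicy {
    fn check(&self, tool: &str) -> Decision {
        if !self.allowlist.contains(tool) {
            Decision::Deny
        } else if self.needs_approval.contains(tool) {
            // Human-in-the-loop gate: execution pauses for sign-off.
            Decision::NeedsApproval
        } else {
            Decision::Allow
        }
    }
}

fn main() {
    let policy = ToolPolicy {
        allowlist: ["read_file", "send_email"].iter().map(|s| s.to_string()).collect(),
        needs_approval: ["send_email"].iter().map(|s| s.to_string()).collect(),
    };
    println!("{:?}", policy.check("read_file"));
    println!("{:?}", policy.check("send_email"));
    println!("{:?}", policy.check("rm_rf"));
}
```

The failure mode, as with any ACL system, is policy drift: the gate is only as good as the list it checks, which is exactly the weakness compile-time and sandbox approaches are designed to avoid.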
Communication Models
All three support MCP (Model Context Protocol), which is becoming the standard for tool integration. OpenFang additionally supports A2A (Agent-to-Agent), enabling inter-runtime communication.
Channel adapter counts (ZeroClaw's 15+, OpenFang's 40, OpenClaw's 10+) matter less than the architecture for adding new ones. ZeroClaw's trait-based approach means new channels are compiled in; OpenFang and OpenClaw support runtime configuration.
What This Means for Enterprise
When I wrote Stochastic Core, Deterministic Shell, I argued that production agents need a stochastic core (the LLM reasoning) bounded by a deterministic shell (governance, policies, verification). These agent operating systems are that deterministic shell, industrialized.
The enterprise signal is unmistakable:
- Audit trails are first-class features in OpenFang (Merkle trees) and built into OpenClaw's logging
- Sandboxing is built into both OpenFang (WASM) and ZeroClaw (Rust safety)
- Role-based access is supported across all three
- Token budget management and cost tracking are standard
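Token budget management, the least glamorous item on that list, is also the easiest to picture: a counter the runtime charges on every model call and refuses to let go negative. A toy sketch, not any runtime's actual API:

```rust
#[derive(Debug, PartialEq)]
enum BudgetError { Exhausted }

// Per-agent token budget: every model call is charged against it,
// and a call that would overdraw is refused before it is made.
struct TokenBudget { remaining: u64 }

impl TokenBudget {
    fn charge(&mut self, tokens: u64) -> Result<u64, BudgetError> {
        if tokens > self.remaining {
            return Err(BudgetError::Exhausted);
        }
        self.remaining -= tokens;
        Ok(self.remaining)
    }
}

fn main() {
    let mut budget = TokenBudget { remaining: 10_000 };
    budget.charge(4_000).unwrap();
    match budget.charge(7_000) {
        Err(BudgetError::Exhausted) => println!("call blocked: budget exhausted"),
        Ok(left) => println!("ok, {left} tokens left"),
    }
}
```

The enterprise-relevant detail is that the check happens before the call, so an exhausted budget is a refused action rather than a surprise invoice.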
Compare this to where the commercial ecosystem is heading. In February 2026, VAST Data announced PolicyEngine (fine-grained access controls for agentic workflows) and TuningEngine (continuous model improvement through learning loops) for its platform. The commercial and open-source worlds are converging on the same architecture: a governed runtime that manages agent lifecycles with built-in security, memory, and observability.
The runtimes that win won't be the fastest or the most feature-rich. They'll be the ones that procurement can say "yes" to. That means auditability, explainability, and governance—not as add-ons, but as core architecture.
Where This Is Pointing
We now have two layers of the stack. A rich data substrate (Part 1) and a durable agent runtime (this post) that manages memory, security, and execution.
But both of these layers are still fundamentally reactive. The data layer collects and indexes. The runtime orchestrates and executes. Both wait for someone—a user, a trigger, a schedule—to initiate action.
The next shift is the one that changes everything: agents that don't wait to be asked. Agents that monitor your data streams, detect patterns, and tell you what you should be doing before you think to ask.
That's the proactive shift. And it's Part 3.