ClawLaw: Bringing Law to the Frontier
On governing autonomous agents, the stochastic gap, and why you can’t build prisons out of water
In 1982, Iron Crown Enterprises published Claw Law—a supplement for the Rolemaster system that codified the combat mechanics for creatures with natural weapons. Claws, fangs, tentacles. The book wasn’t about making monsters weaker; it was about making them playable. You could finally have a dragon at your table because you had rules for what a dragon could do.
The name stuck with me for more than forty years. And when I saw OpenClaw—this remarkable, terrifying autonomous agent that had captured the imagination of the technical world—I knew what it needed.
Not restrictions. Rules.
ClawLaw is governance for the autonomous age. It applies SwiftVector—my architectural pattern for deterministic AI control—to the desktop agent domain. It’s the supplement that makes the monster playable.
The Wild West
OpenClaw appeared in late 2025 and spread like wildfire. Within weeks, its GitHub repository had over 100,000 stars. The testimonials were extraordinary: developers building websites from their phones while putting babies to sleep; engineers running overnight coding sessions where the agent fixed bugs, captured errors, and opened pull requests—all while they slept.
The architecture is elegant. A local gateway connects to the messaging platforms you already use—WhatsApp, Telegram, Slack, Discord. You talk to it like a coworker. It has persistent memory, proactive behaviors, browser control, shell access. It’s everything Siri was supposed to be.
It’s also the Wild West.
I set up OpenClaw on a spare machine last week. Within an hour, I had built three separate control systems just to feel safe running it: network isolation to keep command channels off the internet, a token budget monitor to prevent runaway costs, an approval queue for anything touching external services.
I wasn’t configuring an assistant. I was constructing a containment facility.
A Hacker News comment captured it perfectly: “On one hand, it’s cool that this thing can modify anything on my machine that I can. On the other, it’s terrifying that it can modify anything on my machine that I can.”
This isn’t a bug in OpenClaw. Peter Steinberger and the community built something genuinely powerful. The problem is deeper—it’s a bug in the paradigm itself.
The Stochastic Gap
Here’s the uncomfortable truth about every autonomous agent, including OpenClaw: there is a gap between what you intend and what it does.
I call this the Stochastic Gap. It’s the distance between your instruction (“help me organize my files”) and the model’s interpretation (which might, on a bad day, include “delete the tax returns that looked like clutter”).
The gap exists because large language models are probabilistic. They don’t execute instructions; they predict completions. Most of the time, the prediction aligns with your intent. Sometimes it doesn’t. And with an agent that has shell access, browser control, and messaging capabilities, “sometimes” is not an acceptable failure rate.
The current solution? Prompt engineering. We write careful system prompts that say “don’t delete important files” and “always ask before sending emails.” We’re essentially asking nicely.
This is the equivalent of posting a sign in a nuclear facility that reads: “Please don’t press the red button.”
Signs work when everyone agrees to follow them. They fail catastrophically when someone—or something—decides not to. And language models don’t “decide” anything. They complete patterns. Your carefully crafted safety prompt is just another pattern to complete, and adversarial inputs, confused contexts, or simple misunderstandings can produce completions that ignore it entirely.
You cannot close the Stochastic Gap with prompts. Prompts are made of the same probabilistic material as the problem.
The Case for Governed Autonomy
There’s a distinction we need to make—one that the current discourse around AI agents mostly ignores.
Intelligence should be fluid. Creative. Probabilistic. That’s what makes language models useful. You want the agent to handle ambiguity, make connections, adapt to novel situations. Rigidity in intelligence is brittleness.
Authority must be rigid. Deterministic. Absolute. When the question is “can this agent write to my .ssh directory,” the answer cannot be “probably not.” It must be “no,” enforced at a level the agent cannot subvert.
The problem with current agent architectures is that they conflate these two concerns. The same system that generates creative responses also decides what tools to invoke and when to stop. Intelligence and authority live in the same probabilistic soup.
You cannot build the Law out of the same material as the Agent. You cannot build a prison out of water.
What we need is a separation of concerns: let the agent be intelligent within boundaries, but make the boundaries themselves deterministic and inviolable.
This is what I mean by Governed Autonomy: full capability operating within formal constraints. Not reduced capability. Not lobotomized agents. The dragon at full power—but with rules for what a dragon can do.
SwiftVector: The Constitutional Framework
For the past year, I’ve been developing SwiftVector—an architectural framework for state-based agent control. The core thesis is simple:
State, not prompts, must be the authority.
Instead of instructing an agent what it shouldn’t do, you define a state machine that determines what it can do. The agent operates within state-defined contexts. It cannot modify those contexts through conversation, persuasion, or prompt injection. The state machine doesn’t negotiate.
At the heart of SwiftVector is the Reducer—a pure function that takes current state and a proposed action, then returns either a new state or a rejection:
(CurrentState, Action) → NewState | Rejection
Because it’s a pure function, it’s deterministic. Given the same inputs, it always makes the same decision. It cannot be convinced. It cannot be socially engineered. It cannot have a bad day.
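As a sketch, that contract might look like the following in Swift. The types here (GovernanceState, AgentAction, Verdict) are illustrative stand-ins, not SwiftVector’s published API:

```swift
// Illustrative reducer sketch: a pure function from (state, action)
// to either a new state or a rejection. No I/O, no hidden state, so
// the same inputs always produce the same verdict.
struct GovernanceState: Equatable {
    var tokensSpent: Int
    let tokenBudget: Int
}

enum AgentAction {
    case consumeTokens(Int)
}

enum Verdict: Equatable {
    case allowed(GovernanceState)
    case rejected(reason: String)
}

func reduce(_ state: GovernanceState, _ action: AgentAction) -> Verdict {
    switch action {
    case .consumeTokens(let n):
        guard state.tokensSpent + n <= state.tokenBudget else {
            return .rejected(reason: "token budget exhausted")
        }
        var next = state
        next.tokensSpent += n
        return .allowed(next)
    }
}
```

Because `reduce` has no access to the conversation, no prompt can alter its verdict; the only inputs are state and action.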
SwiftVector provides the constitutional framework. The specific domains of law—filesystem access, network boundaries, budget constraints, authorization levels—are implemented as Vectors: composable modules that can be combined for different agent contexts.
ClawLaw applies SwiftVector to the desktop agent domain. It composes three Vectors:
- Containment Vector — Defines what the agent can touch. Filesystem paths are typed as SandboxedPath, enforcing that only permitted directories are accessible. Network boundaries specify which domains and ports are reachable. The agent proposes; the Vector permits or denies.
- Budget Vector — Defines what the agent can spend. Token consumption triggers state transitions: normal → warning → critical → suspended. Circuit-breaker patterns prevent runaway costs. The agent reasons freely until it hits the wall.
- Authorization Vector — Defines what requires human approval. Actions are classified by risk tier. High-risk operations (external API calls, file deletions, credential access) enter an approval queue. The agent waits; the human decides.
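The Budget Vector’s ladder of states can be sketched as a pure transition function. The thresholds below (75% and 90% of budget) are invented for illustration, not ClawLaw’s actual defaults:

```swift
// Hypothetical sketch of the Budget Vector's state ladder:
// normal → warning → critical → suspended. Spending moves the
// state monotonically up the ladder; the final rung is the
// circuit breaker that halts further spend.
enum BudgetPhase: Equatable {
    case normal, warning, critical, suspended
}

struct BudgetState: Equatable {
    var spent: Double      // tokens consumed so far
    let limit: Double      // hard budget ceiling
    var phase: BudgetPhase
}

// Pure transition: same state and charge always yield the same result.
func charge(_ state: BudgetState, tokens: Double) -> BudgetState {
    var next = state
    next.spent = min(state.spent + tokens, state.limit)
    switch next.spent / state.limit {
    case ..<0.75: next.phase = .normal
    case ..<0.90: next.phase = .warning
    case ..<1.0:  next.phase = .critical
    default:      next.phase = .suspended   // circuit breaker
    }
    return next
}
```

The agent never sees the thresholds; it simply finds that, past a certain point, proposed actions stop being permitted.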
From Pattern to Framework
SwiftVector began as a pattern: a reducer-driven law layer that keeps agent intent boxed inside deterministic state. ClawLaw turns that pattern into a framework by shipping the law itself as modular, composable Swift packages.
Each Vector is a standalone governance module with a clean boundary: the Containment Vector defines what the agent can touch, the Budget Vector defines what it can spend, and the Authorization Vector defines what it must ask a human to approve. Together, they form a safety kernel that is smaller, auditable, and harder to subvert than the agent runtime.
The shift matters because it makes governance reusable. You don't reinvent law for every agent. You assemble law from Vectors, compose them like armor plates, and deploy the same enforcement stack across desktops, drones, or narrative engines. Pattern becomes framework the moment the law ships as code.
The same Budget Vector that governs OpenClaw costs could govern drone flight time in my Flightworks Control system, or narrative token allocation in ChronicleEngine. The architecture is domain-agnostic; the Vectors are domain-specific.
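A minimal sketch of that composition, with invented names and checks (not the shipped ClawLaw API): each Vector is a pure check over a proposed action, and the composed law permits an action only if every Vector does.

```swift
// Hypothetical proposed action; fields are illustrative.
struct ProposedAction {
    let path: String          // where the agent wants to write
    let estimatedTokens: Int  // what the step is expected to cost
    let riskTier: Int         // 0 = low risk, 2 = needs human approval
}

// A Vector, reduced to its essence: a deterministic predicate.
typealias Vector = (ProposedAction) -> Bool

let containment: Vector = { $0.path.hasPrefix("/workspace/") }
let budget: Vector = { $0.estimatedTokens <= 10_000 }
let authorization: Vector = { $0.riskTier < 2 }

// Domain-agnostic composition: swap in drone or narrative Vectors
// without touching this function.
func compose(_ vectors: [Vector]) -> Vector {
    { action in vectors.allSatisfy { $0(action) } }
}
```

Swapping the desktop Vectors for flight-time or token-allocation Vectors changes the law, not the machinery that enforces it.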
In the coming months, these Vectors will be released as standalone Swift packages, allowing you to compose governance for any agent system: desktops, drones, narrative engines. ClawLaw is the first complete application; the Vectors themselves are the foundation for an ecosystem.
Why Swift?
Here’s where I anticipate the objection: Why write governance for a Node.js agent in Swift?
The answer is architectural necessity, not preference.
You cannot reliably enforce law with a language designed for fluidity. Python and JavaScript are optimized for flexibility, dynamism, rapid iteration—exactly the qualities that make them excellent for the experimental, probabilistic work of AI agents. They’re also exactly the qualities that make them poor choices for the rigid, deterministic work of governance.
Swift is optimized for correctness:
Type Safety as Legal Safety. In Swift, a file path isn’t a string that might be a path. It’s a SandboxedPath—a type the compiler guarantees refers to a location inside the permitted sandbox. You cannot accidentally pass an unchecked string where a sandboxed path is expected. The illegal state is unrepresentable.
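A minimal sketch of the idea, with an invented sandbox root and deliberately naive validation (real path sandboxing also needs canonicalization and symlink handling):

```swift
// Sketch: the failable initializer is the only way to construct a
// value, so any SandboxedPath that exists is, by construction,
// inside the sandbox root.
struct SandboxedPath {
    let value: String

    init?(_ raw: String, root: String = "/agent-sandbox/") {
        // Naive check for illustration: must live under the root
        // and must not contain a parent-directory escape.
        guard raw.hasPrefix(root), !raw.contains("..") else { return nil }
        self.value = raw
    }
}

// APIs take SandboxedPath, not String: an unvalidated path is a
// compile-time type error, not a runtime surprise.
func write(_ data: String, to path: SandboxedPath) {
    // ... perform the write; path is already proven legal.
}
```

The check runs once, at the type boundary; everything downstream of the boundary can trust the value without re-validating it.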
Actor Isolation. Swift’s actor model ensures that governance state cannot be corrupted by concurrent access. When multiple agent tasks race to update the budget or the approval queue, the actor serializes them by design. Data races in the safety layer are not merely unlikely; the compiler rules them out.
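As a sketch of what that buys you (BudgetGovernor and its method names are hypothetical):

```swift
// Sketch: actor-isolated budget state. Concurrent calls to charge
// are serialized by the actor, so interleaved updates can never
// corrupt the balance or drive it negative.
actor BudgetGovernor {
    private(set) var remaining: Int

    init(budget: Int) {
        self.remaining = budget
    }

    /// Atomically deducts tokens, or refuses without changing state.
    func charge(tokens: Int) -> Bool {
        guard tokens <= remaining else { return false }
        remaining -= tokens
        return true
    }
}
```

Even if dozens of agent tasks await `charge` concurrently, each call runs to completion before the next begins; there is no locking code to get wrong.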
Resource Efficiency. ClawLaw compiles to a 5MB binary with near-zero startup time and negligible runtime overhead. It watches; it doesn’t compete for the resources your agent needs.
Crash Resistance. If the governor crashes, the agent runs uncontrolled, so the governor must not crash. Swift’s memory safety, optionals in place of null pointers, and compile-time guarantees make crashes not impossible, but dramatically less likely than in dynamic languages.
This is the Law at the Edge thesis applied: when control is the concern, use a language built for control.
The Steward and the Paradox
There’s a meta-level to this project that I want to surface.
I did not write all the implementation code for ClawLaw. I defined protocols. I specified behaviors. I wrote tests that describe what the system must do. Then I directed AI agents—Claude, specifically—to generate the implementations that satisfy those specifications.
This is not a confession of laziness. It’s a demonstration of the workflow I believe represents the future of serious software development.
I call this the Steward role—a concept I explore more fully in The Agency Paradox. The Steward doesn’t write code; the Steward governs agents that write code. The skill is architectural: knowing what to specify, how to verify, when to intervene. It’s the difference between being a programmer and being an engineering leader—except now the “team” includes AI systems.
ClawLaw embodies the Agency Paradox twice over:
First, it solves the paradox for users by governing OpenClaw—providing the deterministic authority layer that makes autonomous capability safe to deploy.
Second, it leverages the paradox in its own creation—using AI agents to generate code within the architectural boundaries I defined.
The Steward doesn’t fear the agent. The Steward governs the agent. That’s the posture I believe we all need to develop as these systems become more capable.
What Comes Next
ClawLaw is under active development. The architecture and protocols are defined; the implementation is progressing publicly. You can follow along, contribute, or simply watch how the sausage gets made.
But ClawLaw is just one application of the underlying framework. SwiftVector itself is the deeper contribution—a constitutional architecture that can govern any agent in any domain where autonomy must be bounded.
In the coming weeks, I’ll be publishing a series of technical articles documenting the implementation: how the Sandbox Envelope works, how the Budget Governor manages cost through state transitions, how the Approval Queue makes human oversight tractable. Each article will show working code because I don’t believe in governance you can’t compile.
If you’re running OpenClaw—or any autonomous agent—and you’ve felt that mixture of excitement and unease, you’re not alone. The power is real. The risk is real. And the solution isn’t to retreat from capability.
The solution is to bring law to the frontier.
ClawLaw is open source under the MIT license. SwiftVector is the constitutional framework that powers it. Both are part of the Agent In Command initiative at agentincommand.ai.
If you’re an engineering leader struggling to certify AI systems for production—whether that’s enterprise deployment, regulated industries, or safety-critical applications—let’s talk. If you’re building autonomous agents and that mixture of excitement and unease keeps you up at night, you’re exactly who this work is for.
Reach me at stephen@agentincommand.ai or connect on LinkedIn.
"Capability without governance is liability."