The Deno sandbox used in LLM tool-calling systems implements a simple but effective security pattern. LLM-generated code runs in a sandboxed subprocess. The sandbox has network access, but a separate privileged process sits between the sandbox and the outside world. This proxy intercepts outbound HTTP requests, matches the destination host against a configuration map, and rewrites the request headers to inject the appropriate API credentials.
The sandboxed code never sees or handles secret material. It expresses intent ("I want to call the OpenAI chat endpoint") and the proxy attaches authority (the API key). The configuration is a flat map:
```
host               → credential
api.openai.com     → Bearer sk-...
api.anthropic.com  → x-api-key sk-ant-...
```
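A sketch of the proxy's core rewrite step may help. The names here (`credentialMap`, `injectCredentials`) are illustrative stand-ins, not the actual implementation: the proxy looks up the request's destination host and merges in the matching credential header.

```javascript
// Illustrative stand-in for the proxy's header-rewrite step.
// The map mirrors the host → credential configuration above.
const credentialMap = {
  'api.openai.com': { header: 'Authorization', value: 'Bearer sk-...' },
  'api.anthropic.com': { header: 'x-api-key', value: 'sk-ant-...' },
};

function injectCredentials(host, headers) {
  const entry = credentialMap[host];
  if (!entry) return headers; // unknown host: pass through unmodified
  return { ...headers, [entry.header]: entry.value };
}

const rewritten = injectCredentials('api.openai.com', {
  'Content-Type': 'application/json',
});
console.log(rewritten.Authorization); // "Bearer sk-..."
```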
This works. But it has structural limitations that become apparent when you try to build more sophisticated agent systems on top of it.
Single axis of mediation. The proxy only interposes on network requests. If the sandbox needs attenuated access to a filesystem, a database, or a subprocess, the proxy pattern doesn't extend naturally — you'd need to build separate mediation layers for each resource type.
Coarse authority. Binding a credential to a host grants full access to everything that credential authorizes. If the OpenAI key has access to chat completions, embeddings, fine-tuning, and file uploads, the sandbox gets all of them. Narrowing the authority requires URL-path-level rules in the proxy, which becomes a bespoke access control list rather than a principled security model.
No composition. The sandbox is a single undifferentiated environment. If an agent orchestrates multiple tools that shouldn't trust each other (a web search tool and a code execution tool, for example), the proxy model offers no way to isolate them from each other within the sandbox.
No revocation. Credentials are bound for the lifetime of the sandbox. If an agent starts behaving unexpectedly, your options are to kill the process or reconfigure the proxy. There's no way to surgically withdraw a specific capability while the agent continues operating.
No delegation. If agent A wants to grant agent B a subset of its authority, there's no mechanism for this. The proxy config is set by the operator, not by the agents themselves.
The Deno proxy is already doing something that has a name in security research: it's acting as a capability attenuation layer. It holds full authority (the API key) and grants a narrowed, mediated version of it (network access to a specific host with credentials injected) to the sandboxed process.
Object-capability (ocap) security generalizes this into a complete programming model built on a few principles:
- A capability is an unforgeable reference to an object. If you hold the reference, you can invoke its methods. If you don't hold it, you can't obtain it by guessing.
- Authority flows through the object graph. The only way to acquire a capability is to receive it — as an argument, a return value, or an endowment from the environment that created you.
- Capabilities can be attenuated. You can wrap a powerful capability in a less powerful one that exposes a subset of its methods or enforces additional constraints.
- Capabilities can be revoked. A "caretaker" wrapper can be switched off, causing all future invocations to fail.
- No ambient authority. Nothing is available by default. Every capability must be explicitly granted.
The Deno proxy implements the first three of these principles in a limited way (for network requests only). A full ocap framework implements all five, for all resource types, compositionally.
Endo is a distributed secure JavaScript sandbox built on Hardened JavaScript (SES). It provides a layered stack of tools — confinement, communication, and concurrency — that together implement the full ocap model. Agoric uses it for blockchain smart contracts; MetaMask uses it to sandbox browser extension plugins.
Endo's stack maps onto and extends each aspect of the Deno proxy pattern.
Endo's lockdown() function freezes all JavaScript intrinsics — Array.prototype, Object.prototype, Function.prototype, and so on — making them immutable and safe to share between mutually suspicious programs. This eliminates prototype pollution attacks and ensures that the shared JavaScript runtime itself can't be weaponized.
This replaces the process boundary in the Deno model. Instead of relying on OS-level isolation to keep the sandbox from reaching the credential store, lockdown makes it safe to run confined code in the same process, which enables much richer interaction patterns.
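A stand-in illustration of why this works: here a single intrinsic is frozen by hand with plain `Object.freeze`, whereas `lockdown()` does this for every intrinsic in the realm at once. Once the prototype is frozen, a prototype-pollution attempt simply fails.

```javascript
// Stand-in illustration, not ses itself: freeze one intrinsic to show
// why frozen intrinsics defeat prototype pollution.
Object.freeze(Array.prototype);

let attacked = false;
try {
  // An attacker sharing the realm tries to replace a shared method.
  Array.prototype.map = function () { attacked = true; return []; };
} catch (e) {
  // In strict-mode code this assignment throws; in sloppy mode it is
  // silently ignored. Either way the override never lands.
}

const result = [1, 2, 3].map((x) => x * 2);
console.log(attacked);         // false: the real map ran, untouched
console.log(result.join(',')); // "2,4,6"
```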
A Compartment is an isolated execution environment with its own globalThis and module graph. Critically, a compartment starts with no ambient authority at all. It has no fetch, no fs, no net, no crypto — nothing unless explicitly endowed by the host.
This is the fundamental upgrade over the Deno model. Instead of giving the sandbox general network access and mediating it through a proxy, a compartment receives only the specific capabilities the host chooses to provide:
```js
import 'ses';

lockdown();

const OPENAI_KEY = process.env.OPENAI_API_KEY;

// Create an attenuated capability: only chat completions,
// only gpt-4o-mini, with the key sealed in a closure
const chatCapability = harden({
  complete: async (messages) => {
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${OPENAI_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-4o-mini',
        messages,
        max_tokens: 1024,
      }),
    });
    return res.json();
  },
});

const compartment = new Compartment({
  globals: { ai: chatCapability },
  __options__: true,
});

compartment.evaluate(agentCode);
```

The agent code can call `ai.complete(messages)`, but it literally cannot:

- Construct an HTTP request to any URL (it has no `fetch`)
- Access the API key (it's sealed in the host's closure)
- Call any model other than gpt-4o-mini (the capability doesn't expose that parameter)
- Access the filesystem, spawn processes, or reach any other resource
The entire attack surface of "what if the agent crafts a request to a different host that also has credential bindings" disappears, because the agent can't craft requests at all.
harden() deeply freezes an object graph, making it safe to pass into untrusted code. When we harden(chatCapability) above, the sandboxed code can invoke ai.complete() but cannot modify the object — it can't replace methods, add interceptors, or tamper with the prototype chain.
This is what makes in-process capability passing safe. In the Deno model, the proxy is safe because it runs in a separate process. In Endo, capabilities are safe because they're hardened objects that can't be altered after creation.
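A simplified sketch conveys the core idea behind `harden()`. This is not ses's implementation, which also freezes prototype chains, tracks already-hardened objects, and handles many edge cases; it is a recursive deep freeze over own properties, enough to show why a hardened capability can't be tampered with.

```javascript
// Simplified stand-in for Endo's harden(): a recursive deep freeze.
// (The real harden() also walks prototype chains and more.)
function deepFreeze(value, seen = new Set()) {
  if (value === null) return value;
  const type = typeof value;
  if (type !== 'object' && type !== 'function') return value;
  if (seen.has(value)) return value; // avoid infinite loops on cycles
  seen.add(value);
  Object.freeze(value);
  for (const key of Reflect.ownKeys(value)) {
    deepFreeze(value[key], seen);
  }
  return value;
}

const method = async (messages) => ({ ok: true, n: messages.length });
const cap = deepFreeze({ complete: method });

let tampered = false;
try {
  cap.complete = () => 'intercepted'; // fails: the object is frozen
} catch (e) {
  // strict mode throws on assignment to a frozen property
}
tampered = cap.complete !== method;
console.log(tampered); // false: the method was not replaced
```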
The Deno credential proxy is inherently a same-machine, single-hop pattern. Endo's Capability Transport Protocol (CapTP) stretches capability references across process boundaries and networks while preserving their security properties. The E() wrapper provides ergonomic asynchronous method invocation on possibly-remote objects:
```js
import { E } from '@endo/eventual-send';

// The agent doesn't know or care whether 'ai' is local or remote.
// E() returns a promise for the result.
const result = await E(ai).complete(messages);

// Promise pipelining: the second call is sent without waiting
// for the first to resolve, reducing round trips
const parsed = await E(E(ai).complete(messages)).extractData();
```

This means the credential store can be a separate service on a different machine. The sandbox gets an opaque reference to a remote capability object and invokes methods on it. The messages travel over encrypted connections where the capability reference is a cryptographic token that can't be forged by guessing. The Deno proxy pattern, which only works for co-located processes sharing a network interface, becomes a fully distributed architecture.
The Deno model's host→key configuration map has an analog in Endo's @endo/compartment-mapper combined with LavaMoat-style policy files. Instead of mapping hosts to credentials, a policy declares what endowments each compartment receives:
```json
{
  "agent-orchestrator": {
    "globals": {
      "console": true
    },
    "packages": {
      "search-tool": true,
      "code-runner": true
    }
  },
  "search-tool": {
    "globals": {
      "webSearch": true
    }
  },
  "code-runner": {
    "globals": {
      "sandbox": true
    }
  }
}
```

Policies can be auto-generated through static analysis (what does this code appear to need?), then reviewed and committed. This is auditable, diffable, and version-controlled, unlike a runtime proxy configuration.
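As a hedged sketch (not the actual `@endo/compartment-mapper` API), a policy entry like the one above can be reduced to the endowments object handed to a compartment: only globals the policy marks `true` are granted, and everything else on the host simply never appears.

```javascript
// Hedged sketch: reduce a policy entry to a compartment's endowments.
// Names here (hostGlobals, endowmentsFor) are illustrative.
const policy = {
  'search-tool': { globals: { webSearch: true } },
};

const hostGlobals = {
  webSearch: (q) => `results for ${q}`,
  fs: {}, // exists on the host, but the policy does not grant it
};

function endowmentsFor(name) {
  const allowed = (policy[name] && policy[name].globals) || {};
  return Object.fromEntries(
    Object.entries(hostGlobals).filter(([key]) => allowed[key] === true),
  );
}

const endowments = endowmentsFor('search-tool');
console.log('webSearch' in endowments); // true
console.log('fs' in endowments);        // false
```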
The Endo Pet Daemon is a system daemon that manages capability references with human-meaningful petnames. Users can:
- Name a capability: `endo name my-openai-key`
- Grant it to an agent process: `endo send agent-1 my-openai-key`
- Revoke it when it is no longer needed
- Audit what capabilities an agent has and has used

Agent processes are Endo workers that communicate over CapTP with limited, auditable access to user resources.
This is the full realization of the supervisor/membrane architecture, with a user-facing interface for managing authority delegation to AI agents.
| Concern | Deno Credential Proxy | Endo Ocap Framework |
|---|---|---|
| Isolation mechanism | OS process boundary | Compartment + lockdown |
| Default authority | Network access (all hosts) | Nothing |
| Credential management | Host → key map in proxy config | Hardened capability objects with keys in closures |
| Authority narrowing | URL pattern matching in proxy | Attenuated capability wrappers |
| Multi-resource mediation | Network only; others need separate mechanisms | Any resource type via capability endowments |
| Mutual isolation between tools | Not supported within sandbox | Separate compartments per tool |
| Revocation | Kill process or reconfigure proxy | Caretaker pattern; revoke individual capabilities |
| Delegation | Not supported | First-class; agents can pass attenuated capabilities |
| Distribution | Same-machine only | CapTP stretches capabilities over networks |
| Policy format | Flat host → credential map | Declarative per-compartment endowment policies |
| Auditability | Proxy logs | Membrane interposition on all capability invocations |
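The auditability row deserves a sketch. The following is an illustrative stand-in, not Endo's membrane implementation: wrapping a capability in a `Proxy` so that every method invocation is recorded before being forwarded is the basic shape of membrane interposition.

```javascript
// Hedged sketch of membrane-style audit interposition: every method
// call on the wrapped capability is logged, then forwarded.
const auditLog = [];

function withAudit(name, target) {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      if (typeof value !== 'function') return value;
      return (...args) => {
        auditLog.push({ capability: name, method: String(prop) });
        return value.apply(obj, args);
      };
    },
  });
}

const ai = withAudit('openai-chat', {
  complete: (messages) => ({ ok: true, count: messages.length }),
});

const reply = ai.complete(['hello']);
console.log(reply.count);        // 1
console.log(auditLog[0].method); // "complete"
```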
The most important capability Endo adds is compositional attenuation. In the Deno model, all policy lives in the proxy's URL-rewriting rules. In Endo, attenuation is just creating a new hardened object that wraps the previous one with a narrower interface:
```js
// Full authority: the raw API key and unrestricted fetch
const key = process.env.OPENAI_API_KEY;

const fullOpenAI = harden({
  async call(endpoint, body) {
    return fetch(`https://api.openai.com/v1/${endpoint}`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${key}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    }).then((r) => r.json());
  },
});

// Attenuate to chat completions only
const chatOnly = harden({
  async complete(messages, opts = {}) {
    return fullOpenAI.call('chat/completions', {
      model: opts.model || 'gpt-4o-mini',
      messages,
      max_tokens: opts.maxTokens || 1024,
    });
  },
});

// Further attenuate: fixed model, rate-limited
let callCount = 0;
const rateLimited = harden({
  async complete(messages) {
    if (callCount++ > 100) throw new Error('Rate limit exceeded');
    return chatOnly.complete(messages, { model: 'gpt-4o-mini' });
  },
});

// The agent receives only the most attenuated reference
const agentCompartment = new Compartment({
  globals: { ai: rateLimited },
  __options__: true,
});
```

Each layer of attenuation is a simple function wrapping a capability reference. The agent at the bottom of the chain cannot reach up to a more powerful reference — it only has what it was given. This is the Principle of Least Authority, enforced by construction rather than by policy checking.
A caretaker is a wrapper that can be switched off, causing all future invocations to fail:
```js
function makeCaretaker(target) {
  let revoked = false;
  const caretaker = harden({
    complete: async (...args) => {
      if (revoked) throw new Error('Capability revoked');
      return target.complete(...args);
    },
  });
  const revoker = harden({ revoke: () => { revoked = true; } });
  return { caretaker, revoker };
}

const { caretaker, revoker } = makeCaretaker(rateLimited);

// Agent gets the caretaker
agentCompartment.globalThis.ai = caretaker;

// Operator keeps the revoker
// If the agent misbehaves:
revoker.revoke();
// All subsequent calls to ai.complete() now throw
```

The Deno model has no equivalent. The only way to withdraw authority from the sandbox is to destroy it.
When an agent needs multiple tools, Endo lets you isolate them from each other:
```js
const searchTool = new Compartment({
  globals: { webSearch: searchCapability },
  __options__: true,
});

const codeTool = new Compartment({
  globals: { sandbox: codeExecCapability },
  __options__: true,
});

// The orchestrator compartment gets references to the tools
// but the tools can't see each other. JSON.stringify escapes the
// query so it can't break out of the evaluated expression.
const orchestrator = new Compartment({
  globals: {
    search: harden({
      query: (q) => searchTool.evaluate(`webSearch(${JSON.stringify(q)})`),
    }),
    code: harden({ run: (c) => codeTool.evaluate(c) }),
  },
  __options__: true,
});
```

In the Deno model, all tools share the same sandbox and the same proxy. A compromised search tool could exfiltrate data through the code execution tool's network access. In the Endo model, each tool is a separate compartment that can only see its own endowments.
The Deno credential proxy pattern remains appropriate for simple, single-purpose sandboxes where the only external resource is HTTP APIs and the trust model is binary (the sandbox either works or gets killed).
Endo becomes necessary when:
- Agents need access to multiple resource types beyond HTTP
- Multiple tools within an agent need isolation from each other
- Authority needs to be narrowed beyond host-level granularity
- Capabilities need to be dynamically granted, attenuated, or revoked
- The system is distributed across multiple machines
- You need auditable, composable security policies
- Agents need to delegate subsets of their authority to sub-agents
For AI agent systems that are growing beyond single-shot tool calls into persistent, multi-step, multi-tool workflows, the Deno pattern hits its ceiling quickly. Endo provides the generalized foundation.
- Endo Repository — The full framework
- HardenedJS — The JavaScript language subset Endo builds on
- SES Guide — Programming with Lockdown, Harden, and Compartment
- CapTP — Capability Transport Protocol for distributed capabilities
- LavaMoat — Supply chain security using Endo compartments
- OCapN — The emerging standard for capability networking
- Mark Miller, Navigating the Attack Surface — 15-minute explanation of the Principle of Least Authority