Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents

Original link: https://github.com/amlalabs/amla-sandbox

## amla-sandbox: safe code execution for LLM agents

Popular agent frameworks typically execute LLM-generated code directly, which carries a serious security risk: arbitrary code execution via prompt injection. **amla-sandbox** offers a solution: a lightweight, capability-based security sandbox built on WebAssembly (WASM).

Unlike Docker-based solutions, amla-sandbox **requires no Docker daemon or VM**; it is a single binary installable with `pip install amla-sandbox`. It isolates agent code, allowing only explicitly granted tool access under constraints you define (for example, a maximum transaction amount). The sandbox provides a virtual filesystem (writable only under `/workspace` and `/tmp`) and has **no network or shell access**.

It improves efficiency by enabling "code mode" (collapsing multiple tool calls into a single script) while preserving security. Capability validation ensures tools are invoked within defined parameter bounds. The design borrows from capability-based security systems such as seL4: access is granted only when explicitly allowed.

amla-sandbox supports JavaScript and shell scripts, integrates with LangGraph, and offers fine-grained control over tool access. While it does not provide a full Linux environment or GPU access, it is well suited to safely running generated code with controlled tool access in common agent applications.

## Amla Sandbox: a secure WASM runtime for AI agents

Amla Sandbox is a new tool that provides a safe way to run code generated by large language models (LLMs). It gives AI agents a bash-shell-like environment inside a WebAssembly (WASM) sandbox, restricting them to pre-approved tools and defined constraints.

Unlike solutions that rely on Docker or subprocesses, Amla Sandbox is a lightweight pip-installable package (a ~11MB WASM binary) focused on policy enforcement rather than full language support (it currently uses QuickJS). The developers prioritize controlling *which* tools an agent can call and *how* it can call them, rather than providing a complete Python or JavaScript environment.

The project acknowledges that escape may still be possible through the tools you provide, and positions itself as a policy layer that keeps sensitive credentials on the host. It is part of a growing trend of using WASM for sandboxing, also explored by related projects such as BrowserPod and agentvm (the latter provides a full Linux VM in WASM). Amla Sandbox aims to be a lighter-weight alternative.

## Original Post

Every popular agent framework runs LLM-generated code via subprocess or exec(). That's arbitrary code execution on your host. One prompt injection and you're done.

Some frameworks offer Docker isolation (OpenHands, AutoGen), but that requires running a Docker daemon and managing container infrastructure.

amla-sandbox is a WASM sandbox with capability enforcement. Agents can only call tools you explicitly provide, with constraints you define. Sandboxed virtual filesystem. No network. No shell escape.

uv pip install "git+https://github.com/amlalabs/amla-sandbox"

No Docker. No VM. One binary, works everywhere.

from amla_sandbox import create_sandbox_tool

sandbox = create_sandbox_tool(tools=[stripe_api, database])

# Agent writes one script instead of 10 tool calls (JavaScript)
result = sandbox.run('''
    const txns = await stripe.listTransactions({customer: "cus_123"});
    const disputed = txns.filter(t => t.disputed);
    console.log(disputed[0]);
''', language="javascript")

# Or with shell pipelines
result = sandbox.run('''
    tool stripe.listTransactions --customer cus_123 | jq '[.[] | select(.disputed)] | .[0]'
''', language="shell")

Tool-calling is expensive. Every MCP call is a round trip through the model:

LLM → tool → LLM → tool → LLM → tool → ...

Ten tool calls = ten LLM invocations. Code mode collapses this:

LLM → script that does all 10 things → result

But you can't just eval whatever the model spits out. So people either pay the token tax or run unsafe code. This gives you both: code-mode efficiency with actual isolation.

The sandbox runs inside WebAssembly with WASI for a minimal syscall interface. WASM provides memory isolation by design—linear memory is bounds-checked, and there's no way to escape to the host address space. The wasmtime runtime we use is built with defense-in-depth and has been formally verified for memory safety.

On top of WASM isolation, every tool call goes through capability validation:

from amla_sandbox import Sandbox, MethodCapability, ConstraintSet, Param

sandbox = Sandbox(
    capabilities=[
        MethodCapability(
            method_pattern="stripe/charges/*",
            constraints=ConstraintSet([
                Param("amount") <= 10000,
                Param("currency").is_in(["USD", "EUR"]),
            ]),
            max_calls=100,
        ),
    ],
    tool_handler=my_handler,
)

# This works
sandbox.execute('await stripe.charges.create({amount: 500, currency: "USD"})')

# This fails - amount exceeds capability
sandbox.execute('await stripe.charges.create({amount: 50000, currency: "USD"})')

The design draws from capability-based security as implemented in systems like seL4—access is explicitly granted, not implicitly available. Agents don't get ambient authority just because they're running in your process. This matters because prompt injection is a fundamental unsolved problem; defense in depth through capability restriction limits the blast radius.

from amla_sandbox import create_sandbox_tool

sandbox = create_sandbox_tool()

# JavaScript
sandbox.run("console.log('hello'.toUpperCase())", language="javascript")  # -> "HELLO"

# Shell
sandbox.run("echo 'hello' | tr 'a-z' 'A-Z'", language="shell")  # -> "HELLO"

# With tools
def get_weather(city: str) -> dict:
    return {"city": city, "temp": 72}

sandbox = create_sandbox_tool(tools=[get_weather])
sandbox.run("const w = await get_weather({city: 'SF'}); console.log(w);", language="javascript")

With constraints:

sandbox = create_sandbox_tool(
    tools=[transfer_money],
    constraints={
        "transfer_money": {
            "amount": "<=1000",
            "currency": ["USD", "EUR"],
        },
    },
    max_calls={"transfer_money": 10},
)
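
As a rough illustration of how those constraints behave at call time (the transfer_money parameter names here are assumed for the example, not taken from the project's docs), an in-bounds call goes through to the tool while an out-of-bounds one is rejected by capability validation, mirroring the stripe example above:

# Illustration only: transfer_money's parameter shape is assumed.
# Within the declared bounds - forwarded to the host tool handler
sandbox.run(
    "await transfer_money({to: 'alice', amount: 500, currency: 'USD'});",
    language="javascript",
)

# Outside the bounds (amount > 1000) - should be rejected by capability
# validation before transfer_money ever runs
sandbox.run(
    "await transfer_money({to: 'alice', amount: 5000, currency: 'USD'});",
    language="javascript",
)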

Tools require object syntax:

// WORKS - tools always take an object argument
await get_weather({city: "SF"});
await transfer({to: "alice", amount: 500});

// FAILS - positional arguments don't work
await get_weather("SF");  // Error: argument after ** must be a mapping

Use return or console.log() for output:

// Return value is captured and output
return await get_weather({city: "SF"});  // -> {"city":"SF","temp":72}
return {a: 1, b: 2};  // -> {"a":1,"b":2}
return "hello";  // -> hello (strings not double-quoted)

// console.log also works
console.log(JSON.stringify({a: 1}));  // -> {"a":1}

// No return = no output
const x = 42;  // -> (no output)

VFS is writable only under /workspace and /tmp:

// WORKS - /workspace and /tmp are ReadWrite
await fs.writeFile('/workspace/data.json', '{}');
await fs.mkdir('/tmp/cache');

// FAILS - root is read-only
await fs.mkdir('/mydir');  // EACCES: Permission denied

For LangGraph integration:

from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic
from amla_sandbox import create_sandbox_tool

sandbox = create_sandbox_tool(tools=[get_weather, search_db])
agent = create_react_agent(
    ChatAnthropic(model="claude-sonnet-4-20250514"),
    [sandbox.as_langchain_tool()]  # LLM writes JS/shell that calls your tools
)
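
A minimal invocation sketch, using the standard LangGraph message format (the prompt is illustrative):

# Minimal invocation sketch; standard LangGraph message format, illustrative prompt.
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in SF right now?"}]}
)
print(result["messages"][-1].content)  # final reply after any sandboxed tool calls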

For fine-grained capability control:

from amla_sandbox import SandboxTool, MethodCapability, ConstraintSet, Param

caps = [
    MethodCapability(
        method_pattern="mcp:search_db",
        constraints=ConstraintSet([Param("query").starts_with("SELECT")]),
        max_calls=5,
    )
]

sandbox_tool = SandboxTool.from_functions([search_db], capabilities=caps)
agent = create_react_agent(model, [sandbox_tool.as_langchain_tool()])

┌────────────────────────────────────────────────┐
│              WASM Sandbox                      │
│  ┌──────────────────────────────────────────┐  │
│  │         Async Scheduler                  │  │
│  │   tasks waiting/running/ready            │  │
│  └──────────────────────────────────────────┘  │
│  ┌────────────┐ ┌──────────┐ ┌──────────────┐  │
│  │  VFS       │ │ Shell    │ │ Capabilities │  │
│  │ /workspace │ │ builtins │ │ validation   │  │
│  └────────────┘ └──────────┘ └──────────────┘  │
│                    ↓ yield                     │
└════════════════════════════════════════════════┘
                     │
                     ▼
┌─────────────────────────────────────────────┐
│              Python Host                    │
│                                             │
│   while sandbox.has_work():                 │
│       req = sandbox.step()  # tool call     │
│       sandbox.resume(execute(req))          │
│                                             │
└─────────────────────────────────────────────┘

The sandbox yields on tool calls. Host executes them (after capability checks) and resumes. QuickJS runs inside WASM for the JS runtime.
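
Expanded slightly, the host-side loop from the diagram looks roughly like this. This is a sketch: has_work(), step(), and resume() come from the diagram above, but their exact signatures and the check/execute helpers are assumptions, not the documented API.

def drive(sandbox, check_capability, execute_tool):
    # Sketch of the yield/resume protocol shown in the diagram; names beyond
    # has_work/step/resume are assumptions.
    while sandbox.has_work():
        req = sandbox.step()               # sandbox yields a pending tool call
        check_capability(req)              # host validates it against declared capabilities
        sandbox.resume(execute_tool(req))  # run the real tool on the host, feed the result back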

The first run compiles the WASM module (~300ms). Cache the compiled module and subsequent loads take ~0.5ms.

from amla_sandbox import Param, ConstraintSet

constraints = ConstraintSet([
    Param("amount") >= 100,
    Param("amount") <= 10000,
    Param("currency").is_in(["USD", "EUR"]),
    Param("path").starts_with("/api/"),
])

Pattern matching for method names:

  • stripe/charges/create — exact match
  • stripe/charges/* — single path segment
  • stripe/** — zero or more segments
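
As an illustration, the three pattern forms could be declared with the MethodCapability API shown earlier (a sketch; constraints are omitted and the max_calls values are placeholders):

caps = [
    # Sketch reusing MethodCapability from the examples above; constraints
    # omitted and max_calls values are placeholders.
    MethodCapability(method_pattern="stripe/charges/create", max_calls=10),  # exact match
    MethodCapability(method_pattern="stripe/charges/*", max_calls=10),       # one path segment
    MethodCapability(method_pattern="stripe/**", max_calls=10),              # zero or more segments
]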

What you get: Isolation without infrastructure. Capability enforcement. Token efficiency.

What you don't get: Full Linux environment. Native module support. GPU access. Infinite loop protection (a while(true){} will hang - the step limit only counts WASM yields, not JS instructions).
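
If the infinite-loop gap matters for your workload, one possible host-side mitigation is to run each script in a separate process with a wall-clock budget. The wrapper below is not part of amla-sandbox; it is a sketch assuming the create_sandbox_tool usage shown above and that run() results are picklable.

import multiprocessing
from queue import Empty

def _run_script(script: str, result_queue) -> None:
    # Child process: build the sandbox here so nothing unpicklable crosses the boundary.
    from amla_sandbox import create_sandbox_tool
    sandbox = create_sandbox_tool()
    result_queue.put(sandbox.run(script, language="javascript"))

def run_with_timeout(script: str, timeout_s: float = 5.0):
    # Note: on spawn-based platforms (macOS/Windows), call this under an
    # `if __name__ == "__main__":` guard.
    result_queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_run_script, args=(script, result_queue))
    proc.start()
    try:
        return result_queue.get(timeout=timeout_s)  # wait for the child's result
    except Empty:
        proc.terminate()  # e.g. the script is stuck in while(true){}
        raise TimeoutError(f"sandboxed script exceeded {timeout_s}s") from None
    finally:
        proc.join()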

If you need a real VM with persistent state and arbitrary dependencies, use e2b or Modal. amla-sandbox is for the common case: agents running generated code with controlled tool access.

Python code is MIT. The WASM binary is proprietary—you can use it with this package but can't extract or redistribute it separately.


Website · Examples · Docs
