Nvidia NemoClaw

Original link: https://github.com/NVIDIA/NemoClaw

## NVIDIA NemoClaw: Running Safe, Always-On Assistants

NVIDIA NemoClaw is an open-source stack designed to simplify and secure the deployment of always-on assistants built with OpenClaw. It uses the OpenShell runtime from the NVIDIA Agent Toolkit to create isolated, sandboxed environments for agents, with inference routed through NVIDIA cloud services.

Currently in early alpha, NemoClaw provides tooling for quickly setting up a local environment, but it is not yet production-ready and is subject to change. It requires Ubuntu 22.04+, Node.js 20+, npm 10+, and Docker, with 16 GB of RAM and 40 GB of disk space recommended.

At NemoClaw's core is a "blueprint" system that orchestrates sandbox creation, security policy (network, filesystem, process control, and inference routing), and model access. A CLI (`nemoclaw`) manages the stack, while an interactive TUI allows monitoring and approval of network requests.

NemoClaw currently supports the `nvidia/nemotron-3-super-120b-a12b` model, which requires an NVIDIA API key. The project welcomes community feedback and contributions as it evolves.

From the Hacker News thread (16 points, 1 comment), commenter frenchie4111 observed: "I found this part interesting: 'Inference requests from the agent never leave the sandbox directly. OpenShell intercepts every call and routes it to the NVIDIA cloud provider.' It looks like they are doing this to become the default compute provider for the easiest way to set up OpenClaw. If it works, that could bring them meaningful consumer inference revenue."

NVIDIA NemoClaw is an open-source stack that simplifies running OpenClaw always-on assistants safely. It installs the NVIDIA OpenShell runtime (part of the NVIDIA Agent Toolkit), a secure environment for running autonomous agents, with inference routed through the NVIDIA cloud.

Alpha software

NemoClaw is early-stage. Expect rough edges. We are building toward production-ready sandbox orchestration, but the starting point is getting your own environment up and running. Interfaces, APIs, and behavior may change without notice as we iterate on the design. The project is shared to gather feedback and enable early experimentation, but it should not yet be considered production-ready. We welcome issues and discussion from the community while the project evolves.


Follow these steps to get started with NemoClaw and your first sandboxed OpenClaw agent.

Note

NemoClaw currently requires a fresh installation of OpenClaw.

Check the prerequisites before you start to ensure you have the necessary software and hardware to run NemoClaw.

| Resource | Minimum    | Recommended |
|----------|------------|-------------|
| CPU      | 4 vCPU     | 4+ vCPU     |
| RAM      | 8 GB       | 16 GB       |
| Disk     | 20 GB free | 40 GB free  |

The sandbox image is approximately 2.4 GB compressed. During image push, the Docker daemon, k3s, and the OpenShell gateway run alongside the export pipeline, which buffers decompressed layers in memory. On machines with less than 8 GB of RAM, this combined usage can trigger the OOM killer. If you cannot add memory, configuring at least 8 GB of swap can work around the issue at the cost of slower performance.

| Dependency | Version                   |
|------------|---------------------------|
| Linux      | Ubuntu 22.04 LTS or later |
| Node.js    | 20 or later               |
| npm        | 10 or later               |
| Docker     | Installed and running     |
| OpenShell  | Installed                 |

Install NemoClaw and Onboard OpenClaw Agent

Download and run the installer script. The script installs Node.js if it is not already present, then runs the guided onboard wizard to create a sandbox, configure inference, and apply security policies.

$ curl -fsSL https://nvidia.com/nemoclaw.sh | bash

When the install completes, a summary confirms the running environment:

──────────────────────────────────────────────────
Sandbox      my-assistant (Landlock + seccomp + netns)
Model        nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
──────────────────────────────────────────────────
Run:         nemoclaw my-assistant connect
Status:      nemoclaw my-assistant status
Logs:        nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────

[INFO]  === Installation complete ===

Connect to the sandbox, then chat with the agent through the TUI or the CLI.

$ nemoclaw my-assistant connect

The OpenClaw TUI opens an interactive chat interface. Type a message and press Enter to send it to the agent:

sandbox@my-assistant:~$ openclaw tui

Send a test message to the agent and verify you receive a response.

Use the OpenClaw CLI to send a single message and print the response:

sandbox@my-assistant:~$ openclaw agent --agent main --local -m "hello" --session-id test
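When scripting the agent rather than chatting interactively, the one-shot CLI call above can be wrapped in a small helper. The sketch below is illustrative only: the `build_agent_cmd` and `send_message` helpers are hypothetical, and only the `openclaw agent` invocation itself comes from the docs above.

```python
import subprocess

def build_agent_cmd(message: str, session_id: str, agent: str = "main") -> list[str]:
    """Build the argv for a one-shot `openclaw agent` call (hypothetical helper)."""
    return [
        "openclaw", "agent",
        "--agent", agent,
        "--local",
        "-m", message,
        "--session-id", session_id,
    ]

def send_message(message: str, session_id: str) -> str:
    """Run the CLI and return the agent's reply (requires the sandbox environment)."""
    result = subprocess.run(
        build_agent_cmd(message, session_id),
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(send_message("hello", "test"))
```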

NemoClaw installs the NVIDIA OpenShell runtime and Nemotron models, then uses a versioned blueprint to create a sandboxed environment where every network request, file access, and inference call is governed by declarative policy. The nemoclaw CLI orchestrates the full stack: OpenShell gateway, sandbox, inference provider, and network policy.

| Component | Role |
|-----------|------|
| Plugin    | TypeScript CLI commands for launch, connect, status, and logs. |
| Blueprint | Versioned Python artifact that orchestrates sandbox creation, policy, and inference setup. |
| Sandbox   | Isolated OpenShell container running OpenClaw with policy-enforced egress and filesystem. |
| Inference | NVIDIA cloud model calls, routed through the OpenShell gateway, transparent to the agent. |

The blueprint lifecycle follows four stages: resolve the artifact, verify its digest, plan the resources, and apply through the OpenShell CLI.
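The four stages can be sketched as plain functions. This is a minimal illustration of the resolve/verify/plan/apply flow, not NemoClaw's actual implementation: the artifact store, resource names, and digest scheme are all assumptions made for the example.

```python
import hashlib

def resolve(ref: str, store: dict[str, bytes]) -> bytes:
    """Stage 1: fetch the versioned blueprint artifact by reference."""
    return store[ref]

def verify(artifact: bytes, expected_digest: str) -> None:
    """Stage 2: refuse to proceed if the SHA-256 digest does not match."""
    actual = hashlib.sha256(artifact).hexdigest()
    if actual != expected_digest:
        raise ValueError(f"digest mismatch: {actual} != {expected_digest}")

def plan(artifact: bytes) -> list[str]:
    """Stage 3: compute the resources the blueprint will create (illustrative names)."""
    return ["sandbox", "network-policy", "inference-route"]

def apply(resources: list[str]) -> None:
    """Stage 4: hand each planned resource to the OpenShell CLI (stubbed here)."""
    for resource in resources:
        print(f"apply {resource}")

store = {"my-assistant@v1": b"blueprint-bytes"}
artifact = resolve("my-assistant@v1", store)
verify(artifact, hashlib.sha256(b"blueprint-bytes").hexdigest())
apply(plan(artifact))
```

The point of the digest check in stage 2 is that a blueprint fetched from a remote store is never applied unless it matches the pinned version.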

When something goes wrong, errors may originate from either NemoClaw or the OpenShell layer underneath. Run `nemoclaw <name> status` for NemoClaw-level health and `openshell sandbox list` to check the underlying sandbox state.


Inference requests from the agent never leave the sandbox directly. OpenShell intercepts every call and routes it to the NVIDIA cloud provider.

| Provider | Model | Use Case |
|----------|-------|----------|
| NVIDIA cloud | nvidia/nemotron-3-super-120b-a12b | Production. Requires an NVIDIA API key. |

Get an API key from build.nvidia.com. The `nemoclaw onboard` command prompts for this key during setup.


The sandbox starts with a strict baseline policy that controls network egress and filesystem access:

| Layer | What it protects | When it applies |
|-------|------------------|-----------------|
| Network    | Blocks unauthorized outbound connections.        | Hot-reloadable at runtime.  |
| Filesystem | Prevents reads/writes outside /sandbox and /tmp. | Locked at sandbox creation. |
| Process    | Blocks privilege escalation and dangerous syscalls. | Locked at sandbox creation. |
| Inference  | Reroutes model API calls to controlled backends. | Hot-reloadable at runtime.  |

When the agent tries to reach an unlisted host, OpenShell blocks the request and surfaces it in the TUI for operator approval.
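The block-then-approve flow amounts to an allowlist with a pending queue: unlisted hosts are blocked and queued, and an operator approval hot-reloads the allowlist. The class below is a minimal sketch of that behavior under assumed names; it is not NemoClaw code.

```python
from collections import deque

class EgressPolicy:
    """Illustrative hot-reloadable egress control with operator approval."""

    def __init__(self, allowed_hosts: set[str]):
        self.allowed_hosts = set(allowed_hosts)
        self.pending: deque[str] = deque()  # blocked hosts awaiting TUI approval

    def check(self, host: str) -> bool:
        """Allow listed hosts; queue everything else for operator review."""
        if host in self.allowed_hosts:
            return True
        self.pending.append(host)  # surfaced in the TUI instead of failing silently
        return False

    def approve(self, host: str) -> None:
        """Operator approval updates the allowlist at runtime (hot reload)."""
        self.allowed_hosts.add(host)
        self.pending = deque(h for h in self.pending if h != host)

policy = EgressPolicy({"build.nvidia.com"})
policy.check("example.org")      # blocked and queued for approval
policy.approve("example.org")
policy.check("example.org")      # allowed after operator approval
```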


Run these on the host to set up, connect to, and manage sandboxes.

| Command | Description |
|---------|-------------|
| `nemoclaw onboard` | Interactive setup wizard: gateway, providers, sandbox. |
| `nemoclaw deploy <instance>` (experimental) | Deploy to a remote GPU instance through Brev. |
| `nemoclaw <name> connect` | Open an interactive shell inside the sandbox. |
| `openshell term` | Launch the OpenShell TUI for monitoring and approvals. |
| `nemoclaw start` / `stop` / `status` | Manage auxiliary services (Telegram bridge, tunnel). |

Plugin commands (`openclaw nemoclaw`)

Run these inside the OpenClaw CLI. These commands are under active development and may not all be functional yet.

| Command | Description |
|---------|-------------|
| `openclaw nemoclaw launch [--profile ...]` | Bootstrap OpenClaw inside an OpenShell sandbox. |
| `openclaw nemoclaw status` | Show sandbox health, blueprint state, and inference. |
| `openclaw nemoclaw logs [-f]` | Stream blueprint execution and sandbox logs. |

See the full CLI reference for all commands, flags, and options.

Known limitations:

  • The `openclaw nemoclaw` plugin commands are under active development. Use the `nemoclaw` host CLI as the primary interface.
  • Setup may require manual workarounds on some platforms. File an issue if you encounter blockers.

Refer to the documentation for more information on NemoClaw.

This project is licensed under the Apache License 2.0.
