> [!CAUTION]
> UNOFFICIAL PROJECT — This is a community-built tool, not affiliated with or endorsed by Lambda.
A fast CLI and MCP server for managing Lambda cloud GPU instances.
Two ways to use it:

- CLI (`lambda`): direct terminal commands for managing GPU instances
- MCP server (`lambda-mcp`): lets AI assistants like Claude manage your GPU infrastructure
Homebrew:

```shell
brew install strand-ai/tap/lambda-cli
```

Cargo:

```shell
cargo install --git https://github.com/Strand-AI/lambda-cli
```

Or download a prebuilt binary from GitHub Releases.
Get your API key from the Lambda dashboard.
```shell
export LAMBDA_API_KEY=<your-key>
```

Or provide a command that prints the key:

```shell
export LAMBDA_API_KEY_COMMAND="op read op://Personal/Lambda/api-key"
```

The command is executed at startup and its output is used as the API key. This works with any secret manager.
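To see what the tool will receive, you can simulate the key-command resolution in plain shell. This is an illustrative sketch (the CLI's actual invocation may differ), with `echo dummy-key` standing in for a real secret-manager call:

```shell
# A key command is any command that prints the key to stdout;
# here a dummy echo stands in for e.g. `op read ...`
export LAMBDA_API_KEY_COMMAND="echo dummy-key"

# Run the command and capture its output, roughly as the tool does at startup
LAMBDA_API_KEY="$(sh -c "$LAMBDA_API_KEY_COMMAND")"
echo "$LAMBDA_API_KEY"
```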
Get notified on Slack, Discord, or Telegram when your instance is ready and SSH-able.
Set one or more of these environment variables:
```shell
# Slack (incoming webhook)
export LAMBDA_NOTIFY_SLACK_WEBHOOK="https://hooks.slack.com/services/T00/B00/XXX"

# Discord (webhook URL)
export LAMBDA_NOTIFY_DISCORD_WEBHOOK="https://discord.com/api/webhooks/123/abc"

# Telegram (bot token + chat ID)
export LAMBDA_NOTIFY_TELEGRAM_BOT_TOKEN="123456:ABC-DEF..."
export LAMBDA_NOTIFY_TELEGRAM_CHAT_ID="123456789"
```

Slack: Create an Incoming Webhook in your workspace.
Discord: In channel settings → Integrations → Webhooks → New Webhook → Copy Webhook URL.
Telegram:

- Message @BotFather → `/newbot` → copy the token
- Message your bot, then visit `https://api.telegram.org/bot<TOKEN>/getUpdates` to find your chat ID
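If you prefer the command line, the chat ID can be pulled out of the `getUpdates` response with standard tools. A sketch: the sample JSON below only mirrors the shape of Telegram's response, and the extraction uses `grep`/`cut` to avoid a `jq` dependency:

```shell
# Sample payload shaped like Telegram's getUpdates response; in practice:
#   RESPONSE="$(curl -s "https://api.telegram.org/bot<TOKEN>/getUpdates")"
RESPONSE='{"ok":true,"result":[{"message":{"chat":{"id":123456789}}}]}'

# Extract the first numeric "id" field
CHAT_ID="$(printf '%s' "$RESPONSE" | grep -o '"id":[0-9]*' | head -n1 | cut -d: -f2)"
echo "$CHAT_ID"
```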
| Command | Description |
|---|---|
| `lambda list` | Show available GPU types with pricing and availability |
| `lambda running` | Show your running instances |
| `lambda start` | Launch a new instance |
| `lambda stop` | Terminate an instance |
| `lambda find` | Poll until a GPU type is available, then launch |
List available GPUs:

```shell
lambda list
```

Start an instance:

```shell
lambda start --gpu gpu_1x_a10 --ssh my-key --name "dev-box"
```

Stop an instance:

```shell
lambda stop --instance-id <id>
```

Wait for availability and auto-launch:

```shell
lambda find --gpu gpu_8x_h100 --ssh my-key --interval 30
```

| Flag | Description |
|---|---|
| `-g, --gpu` | Instance type (required) |
| `-s, --ssh` | SSH key name (required) |
| `-n, --name` | Instance name |
| `-r, --region` | Region (auto-selects if omitted) |
| `-f, --filesystem` | Filesystem to attach (must be in same region) |
| `--no-notify` | Disable notifications even if env vars are set |
| Flag | Description |
|---|---|
| `-g, --gpu` | Instance type to wait for (required) |
| `-s, --ssh` | SSH key name (required) |
| `--interval` | Poll interval in seconds (default: 10) |
| `-n, --name` | Instance name when launched |
| `-f, --filesystem` | Filesystem to attach when launched |
| `--no-notify` | Disable notifications even if env vars are set |
Notifications are automatic when env vars are configured. Use `--no-notify` to disable:

```shell
lambda start --gpu gpu_1x_a10 --ssh my-key --no-notify
```

The `lambda-mcp` binary is an MCP (Model Context Protocol) server that lets AI assistants manage your Lambda infrastructure.
The easiest way to use `lambda-mcp` is via npx—no installation required:

```shell
npx @strand-ai/lambda-mcp
```

| Flag | Description |
|---|---|
| `--eager` | Execute API key command at startup instead of on first use |
When using `LAMBDA_API_KEY_COMMAND`, the MCP server defers command execution until the first API request by default. This avoids unnecessary delays when starting Claude Code if you don't use Lambda tools in every session.
Use `--eager` to execute the command at startup instead:

```shell
npx @strand-ai/lambda-mcp --eager
```

Note: The CLI (`lambda`) always executes the API key command at startup since it's used for immediate operations.
| Tool | Description |
|---|---|
| `list_gpu_types` | List all available GPU instance types with pricing, specs, and current availability |
| `start_instance` | Launch a new GPU instance (auto-notifies if configured) |
| `stop_instance` | Terminate a running instance |
| `list_running_instances` | Show all running instances with status and connection details |
| `check_availability` | Check if a specific GPU type is available |
When notification environment variables are configured, the MCP server automatically sends notifications when instances become SSH-able. No additional flags needed—just set the `LAMBDA_NOTIFY_*` env vars and launch instances as usual.
```shell
claude mcp add lambda -s user -e LAMBDA_API_KEY=your-api-key -- npx -y @strand-ai/lambda-mcp
```

With 1Password CLI:

```shell
claude mcp add lambda -s user -e LAMBDA_API_KEY_COMMAND="op read op://Personal/Lambda/api-key" -- npx -y @strand-ai/lambda-mcp
```

Then restart Claude Code.
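For MCP clients configured through a JSON file rather than a CLI (Claude Desktop, for example), an equivalent entry might look like the sketch below. The surrounding `mcpServers` structure is the common convention for such clients, but check your client's documentation for its exact format:

```json
{
  "mcpServers": {
    "lambda": {
      "command": "npx",
      "args": ["-y", "@strand-ai/lambda-mcp"],
      "env": {
        "LAMBDA_API_KEY": "your-api-key"
      }
    }
  }
}
```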
Once configured, you can ask Claude things like:
- "What GPUs are currently available on Lambda?"
- "Launch an H100 instance with my ssh key 'macbook'"
- "Show me my running instances"
- "Check if any A100s are available"
- "Terminate instance i-abc123"
```shell
# Build
cargo build

# Run tests
cargo test

# Run CLI
cargo run --bin lambda -- list

# Run MCP server
cargo run --bin lambda-mcp
```

To create a release:
- Update the version in `Cargo.toml`
- Merge to main — this automatically:
  - Creates a git tag
  - Builds binaries for all platforms
  - Publishes to npm
  - Updates the Homebrew formula