Show HN: Axe – A 12MB binary that replaces your AI framework

Original link: https://github.com/jrswab/axe

## Axe: Command-Line LLM Agent Orchestration

Axe is a CLI tool for building and running focused, composable LLM-powered agents, taking a different approach from typical chatbot-centric AI tooling. It treats agents like Unix programs: each agent is designed for a single task, defined in a TOML configuration file, and executed from the command line, with data piped in and results piped out.

Key features include support for multiple LLM providers (Anthropic, OpenAI, Ollama), declarative agent definitions, sub-agent delegation, persistent memory, and a skill system for reusable instructions. Axe integrates cleanly with existing tools such as cron, git hooks, and CI/CD pipelines, avoiding the need for a dedicated daemon or GUI.

Agents are configured with a system prompt, a model selection, and access to sandboxed tools for file operations and command execution. Axe prioritizes simplicity and control, offering features such as a dry-run mode, JSON output, and memory garbage collection. It can be installed via `go install` or built from source, and a Docker image is provided for isolated execution.

## Axe: A Lightweight AI Agent Tool

Axe is a new 12MB binary intended as a minimalist alternative to heavyweight AI frameworks. Created by jrswab, it treats LLM agents like Unix programs: small, focused, and composable, rejecting the trend toward monolithic chatbot-style AI. Axe uses TOML configuration files to define agents that perform specific tasks (code review, log analysis, etc.), run directly from the command line, and accept input via pipes (for example, `git diff | axe run reviewer`). Key features include sub-agent delegation, optional persistent memory, support for multiple LLM providers (OpenAI, Anthropic, Ollama), and safe file operations.

Built in Go with only two direct library dependencies, Axe avoids Python, Docker, and GUIs, prioritizing speed, simplicity, and composability for automating tasks via the CLI, cron jobs, or CI/CD pipelines. It draws inspiration from similar projects (such as `ell`) and aims to serve users who prefer short-lived, focused AI agents.

Original article


A CLI tool for managing and running LLM-powered agents.

Most AI tooling assumes you want a chatbot. A long-running session with a massive context window doing everything at once. But that's not how good software works. Good software is small, focused, and composable.

Axe treats LLM agents the same way Unix treats programs. Each agent does one thing well. You define it in a TOML file, give it a focused skill, and run it from the command line. Pipe data in, get results out. Chain agents together. Trigger them from cron, git hooks, or CI. Whatever you already use. No daemon, no GUI, no framework to buy into. Just a binary and your configs.

Axe orchestrates LLM-powered agents defined via TOML configuration files. Each agent has its own system prompt, model selection, skill files, context files, working directory, persistent memory, and the ability to delegate to sub-agents.

Axe is the executor, not the scheduler. It is designed to be composed with standard Unix tools — cron, git hooks, pipes, file watchers — rather than reinventing scheduling or workflow orchestration.
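As a concrete sketch of that composition, a crontab entry could trigger an agent on a schedule. The `log-analyzer` agent name and the log source here are hypothetical examples, not something axe ships with:

```crontab
# Hypothetical crontab entry: pipe yesterday's logs into an agent every morning at 07:00.
# "log-analyzer" is an example agent name you would define yourself.
0 7 * * * journalctl --since yesterday | axe run log-analyzer >> /var/log/axe-summary.md 2>&1
```

Because axe only executes, scheduling stays where it already lives: cron owns the timing, the pipe owns the data flow.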

  • Multi-provider support — Anthropic, OpenAI, and Ollama (local models)
  • TOML-based agent configuration — declarative, version-controllable agent definitions
  • Sub-agent delegation — agents can call other agents via LLM tool use, with depth limiting and parallel execution
  • Persistent memory — timestamped markdown logs that carry context across runs
  • Memory garbage collection — LLM-assisted pattern analysis and trimming
  • Skill system — reusable instruction sets that can be shared across agents
  • Stdin piping — pipe any output directly into an agent (git diff | axe run reviewer)
  • Dry-run mode — inspect resolved context without calling the LLM
  • JSON output — structured output with metadata for scripting
  • Built-in tools — file operations (read, write, edit, list), shell command execution, all sandboxed to the agent's working directory
  • Minimal dependencies — two direct dependencies (cobra, toml); all LLM calls use the standard library

Requires Go 1.24+.

go install github.com/jrswab/axe@latest

Or build from source:

git clone https://github.com/jrswab/axe.git
cd axe
go build .

Initialize the configuration directory:

axe config init

This creates the directory structure at $XDG_CONFIG_HOME/axe/ with a sample skill and a default config.toml for provider credentials.

Scaffold a new agent:

axe agents init my-agent

Edit its configuration:

axe agents edit my-agent

Run the agent:

axe run my-agent

Pipe input from other tools:

git diff --cached | axe run pr-reviewer
cat error.log | axe run log-analyzer

The examples/ directory contains ready-to-run agents you can copy into your config and use immediately. Includes a code reviewer, commit message generator, and text summarizer — each with a focused SKILL.md.

# Copy an example agent into your config
cp examples/code-reviewer/code-reviewer.toml "$(axe config path)/agents/"
cp -r examples/code-reviewer/skills/ "$(axe config path)/skills/"

# Set your API key and run
export ANTHROPIC_API_KEY="your-key-here"
git diff | axe run code-reviewer

See examples/README.md for full setup instructions.

Axe provides a Docker image for running agents in an isolated, hardened container.

Multi-architecture builds (linux/amd64, linux/arm64) are supported via buildx:

docker buildx build --platform linux/amd64,linux/arm64 -t axe:latest .

Mount your config directory and pass API keys as environment variables:

docker run --rm \
  -v ./my-config:/home/axe/.config/axe \
  -e ANTHROPIC_API_KEY \
  axe run my-agent

Pipe stdin with the -i flag:

git diff | docker run --rm -i \
  -v ./my-config:/home/axe/.config/axe \
  -e ANTHROPIC_API_KEY \
  axe run pr-reviewer

Without a config volume mounted, axe exits with code 2 (config error) because no agent TOML files exist.

The examples above mount the entire config directory. If you only need to run one agent with one skill, mount just those files to their expected XDG paths inside the container. No config.toml is needed when API keys are passed via environment variables.

docker run --rm -i \
  -e ANTHROPIC_API_KEY \
  -v ./agents/reviewer.toml:/home/axe/.config/axe/agents/reviewer.toml:ro \
  -v ./skills/code-review/:/home/axe/.config/axe/skills/code-review/:ro \
  axe run reviewer

The agent's skill field resolves automatically against the XDG config path inside the container, so no --skill flag is needed.

To use a different skill than the one declared in the agent's TOML, use the --skill flag to override it. In this case you only mount the replacement skill — the original skill declared in the TOML is ignored entirely:

docker run --rm -i \
  -e ANTHROPIC_API_KEY \
  -v ./agents/reviewer.toml:/home/axe/.config/axe/agents/reviewer.toml:ro \
  -v ./alt-review.md:/home/axe/alt-review.md:ro \
  axe run reviewer --skill /home/axe/alt-review.md

If the agent declares sub_agents, all referenced agent TOMLs and their skills must also be mounted.

Agent memory persists across runs when you mount a data volume:

docker run --rm \
  -v ./my-config:/home/axe/.config/axe \
  -v axe-data:/home/axe/.local/share/axe \
  -e ANTHROPIC_API_KEY \
  axe run my-agent

A docker-compose.yml is included for running axe alongside a local Ollama instance.

Cloud provider only (no Ollama):

docker compose run --rm axe run my-agent

With Ollama sidecar:

docker compose --profile ollama up -d ollama
docker compose --profile cli run --rm axe run my-agent

Pull an Ollama model:

docker compose --profile ollama exec ollama ollama pull llama3

Note: The compose axe service declares depends_on: ollama. Docker Compose will attempt to start the Ollama service whenever axe is started via compose, even for cloud-only runs. For cloud-only usage without Ollama, use docker run directly instead of docker compose run.
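The compose layout that note describes looks roughly like the following sketch. Service, profile, and volume names are inferred from this README, not copied from the repository's actual docker-compose.yml:

```yaml
# Sketch of a compose file matching the behavior described above (assumed, not verbatim).
services:
  axe:
    image: axe:latest
    profiles: ["cli"]
    depends_on:
      - ollama            # why cloud-only compose runs still start Ollama
    environment:
      - ANTHROPIC_API_KEY
      - AXE_OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - ./my-config:/home/axe/.config/axe
      - axe-data:/home/axe/.local/share/axe
  ollama:
    image: ollama/ollama
    profiles: ["ollama"]
volumes:
  axe-data:
```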

If Ollama runs directly on the host (not via compose), point to it with:

  • Linux: --add-host=host.docker.internal:host-gateway -e AXE_OLLAMA_BASE_URL=http://host.docker.internal:11434
  • macOS / Windows (Docker Desktop): -e AXE_OLLAMA_BASE_URL=http://host.docker.internal:11434

The container runs with the following hardening by default (via compose):

  • Non-root user — UID 10001
  • Read-only root filesystem — writable locations are the config mount, data mount, and /tmp/axe tmpfs
  • All capabilities dropped — cap_drop: ALL
  • No privilege escalation — no-new-privileges:true

These settings do not restrict outbound network access. To isolate an agent that only talks to a local Ollama instance, add --network=none and connect it to the shared Docker network manually.

| Container Path | Purpose | Default Access |
| --- | --- | --- |
| `/home/axe/.config/axe/` | Agent TOML files, skills, config.toml | Read-write |
| `/home/axe/.local/share/axe/` | Persistent memory files | Read-write |

Config is read-write because axe config init and axe agents init write into it. Mount as :ro if you only run agents.

| Variable | Required | Purpose |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | If using Anthropic | API authentication |
| `OPENAI_API_KEY` | If using OpenAI | API authentication |
| `AXE_OLLAMA_BASE_URL` | If using Ollama | Ollama endpoint (default in compose: http://ollama:11434) |
| `AXE_ANTHROPIC_BASE_URL` | No | Override Anthropic API endpoint |
| `AXE_OPENAI_BASE_URL` | No | Override OpenAI API endpoint |

| Command | Description |
| --- | --- |
| `axe run <agent>` | Run an agent |
| `axe agents list` | List all configured agents |
| `axe agents show <agent>` | Display an agent's full configuration |
| `axe agents init <agent>` | Scaffold a new agent TOML file |
| `axe agents edit <agent>` | Open an agent TOML in `$EDITOR` |
| `axe config path` | Print the configuration directory path |
| `axe config init` | Initialize the config directory with defaults |
| `axe gc <agent>` | Run memory garbage collection for an agent |
| `axe gc --all` | Run GC on all memory-enabled agents |
| `axe version` | Print the current version |

| Flag | Default | Description |
| --- | --- | --- |
| `--model <provider/model>` | from TOML | Override the model (e.g. `anthropic/claude-sonnet-4-20250514`) |
| `--skill <path>` | from TOML | Override the skill file path |
| `--workdir <path>` | from TOML or cwd | Override the working directory |
| `--timeout <seconds>` | 120 | Request timeout |
| `--dry-run` | false | Show resolved context without calling the LLM |
| `--verbose` / `-v` | false | Print debug info (model, timing, tokens) to stderr |
| `--json` | false | Wrap output in a JSON envelope with metadata |

Agents are defined as TOML files in $XDG_CONFIG_HOME/axe/agents/.

name = "pr-reviewer"
description = "Reviews pull requests for issues and improvements"
model = "anthropic/claude-sonnet-4-20250514"
system_prompt = "You are a senior code reviewer. Be concise and actionable."
skill = "skills/code-review/SKILL.md"
files = ["src/**/*.go", "CONTRIBUTING.md"]
workdir = "/home/user/projects/myapp"
tools = ["read_file", "list_directory", "run_command"]
sub_agents = ["test-runner", "lint-checker"]

[sub_agents_config]
max_depth = 3       # maximum nesting depth (hard max: 5)
parallel = true     # run sub-agents concurrently
timeout = 120       # per sub-agent timeout in seconds

[memory]
enabled = true
last_n = 10         # load last N entries into context
max_entries = 100   # warn when exceeded

[params]
temperature = 0.3
max_tokens = 4096

All fields except name and model are optional.
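Since only `name` and `model` are required, a minimal agent can be just two lines (the model string is the example used elsewhere in this README):

```toml
# Minimal agent definition; every other field falls back to its default.
name = "summarizer"
model = "anthropic/claude-sonnet-4-20250514"
```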

Agents can use built-in tools to interact with the filesystem and run commands. When tools are enabled, the agent enters a conversation loop — the LLM can make tool calls, receive results, and continue reasoning for up to 50 turns.

| Tool | Description |
| --- | --- |
| `list_directory` | List contents of a directory relative to the working directory |
| `read_file` | Read file contents with line-numbered output and optional pagination (offset/limit) |
| `write_file` | Create or overwrite a file, creating parent directories as needed |
| `edit_file` | Find and replace exact text in a file, with optional replace-all mode |
| `run_command` | Execute a shell command via `sh -c` and return combined output |
| `call_agent` | Delegate a task to a sub-agent (controlled via `sub_agents`, not `tools`) |

Enable tools by adding them to the agent's tools field:

tools = ["read_file", "list_directory", "run_command"]

The call_agent tool is not listed in tools — it is automatically available when sub_agents is configured and the depth limit has not been reached.

All file tools (list_directory, read_file, write_file, edit_file) are sandboxed to the agent's working directory. Absolute paths, .. traversal, and symlink escapes are rejected.

When an LLM returns multiple tool calls in a single turn, they run concurrently by default. This applies to both built-in tools and sub-agent calls. Disable with parallel = false in [sub_agents_config].

Skills are reusable instruction sets that provide an agent with domain-specific knowledge and workflows. They are defined as SKILL.md files following the community SKILL.md format.

The skill field in an agent TOML is resolved in order:

  1. Absolute path — used as-is (e.g. /home/user/skills/SKILL.md)
  2. Relative to config dir — e.g. skills/code-review/SKILL.md resolves to $XDG_CONFIG_HOME/axe/skills/code-review/SKILL.md
  3. Bare name — e.g. code-review resolves to $XDG_CONFIG_HOME/axe/skills/code-review/SKILL.md
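In an agent TOML, each resolution form would look as follows. An agent has a single `skill` line; the three variants are shown together only for comparison:

```toml
# 1. Absolute path, used as-is
skill = "/home/user/skills/SKILL.md"
# 2. Relative to the config dir
skill = "skills/code-review/SKILL.md"
# 3. Bare name, expands to skills/code-review/SKILL.md under the config dir
skill = "code-review"
```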

Skills often reference helper scripts. Since run_command executes in the agent's workdir (not the skill directory), script paths in SKILL.md must be absolute. Relative paths will fail because the scripts don't exist in the agent's working directory.

# Correct — absolute path
/home/user/.config/axe/skills/my-skill/scripts/fetch.sh <args>

# Wrong — relative path won't resolve from the agent's workdir
scripts/fetch.sh <args>

$XDG_CONFIG_HOME/axe/
├── config.toml
├── agents/
│   └── my-agent.toml
└── skills/
    └── my-skill/
        ├── SKILL.md
        └── scripts/
            └── fetch.sh

| Provider | API Key Env Var | Default Base URL |
| --- | --- | --- |
| Anthropic | `ANTHROPIC_API_KEY` | https://api.anthropic.com |
| OpenAI | `OPENAI_API_KEY` | https://api.openai.com |
| Ollama | (none required) | http://localhost:11434 |

Base URLs can be overridden with AXE_<PROVIDER>_BASE_URL environment variables or in config.toml.
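For example, to point the Ollama provider at an instance on another machine before a run (the host and port here are hypothetical):

```shell
# Hypothetical remote Ollama endpoint; subsequent axe runs in this shell use it.
export AXE_OLLAMA_BASE_URL="http://192.168.1.10:11434"
```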

Apache-2.0. See LICENSE.
