Show HN: qqqa – A fast, stateless LLM-powered assistant for your shell

Original link: https://github.com/matisojka/qqqa

## qqqa: LLM assistance for your shell

qqqa is a fast, stateless CLI tool that brings the power of LLMs to your command line through two binaries: **qq** for quick questions and **qa** for quick agent-style tasks. `qq` answers a single question (for example, "how do I recursively list files"), while `qa` can execute commands and read or write files, with every action confirmed for safety.

Designed for speed and simplicity, qqqa is deliberately stateless: it stores no session history, which keeps runs reproducible and makes it compose well in the shell with pipes and files. Safety comes first: `qq` is read-only, and `qa` requires confirmation before performing any action.

The project defaults to **Groq** for fast inference (using the `openai/gpt-oss-20b` model), but also supports **OpenAI** and other compatible providers via a configurable settings file (`~/.qq/config.json`).

Installation means downloading the prebuilt binaries for your platform and adding them to your PATH. The initial setup (`qq --init` or `qa --init`) walks you through provider selection and API key configuration. Features include rich formatting, configurable providers, and safety rails for file access and command execution.


Fast, stateless LLM-powered assistant for your shell: qq answers; qa runs commands

qqqa is a two-in-one, stateless CLI tool that brings LLM assistance to the command line without ceremony.

The two binaries are:

  • qq - ask a single question, e.g. "qq how can I recursively list all files in this directory" (qq stands for "quick question")
  • qa - a single step agent that can optionally use tools to finish a task: read a file, write a file, or execute a command with confirmation (qa stands for "quick agent")

By default the repo includes profiles for OpenAI and Groq.

demo.mp4

qq means quick question. qa means quick agent. Both are easy to type rapidly on QWERTY keyboards with minimal finger movement. That makes interacting with LLMs faster and more natural during real work.

qqqa is deliberately stateless. There is no long running session and no hidden conversation memory stored by the tool. Every run is independent and reproducible.

Why stateless is great:

  • Simple and focused - Unix philosophy applied to LLM tools.
  • Shell friendly - compose with pipes and files instead of interactive chats.
  • Safe by default - qq is read-only and has access to no tools. qa is built with security in mind and requires confirmation before running tools.

Both tools can include transient context that you choose to provide:

  • qq can include the last few terminal commands as hints and piped stdin if present.
  • qa can read files or run a specific command, but only once per invocation and with safety checks.

Why we recommend using Groq by default

For fast feedback loops, speed and cost matter. The included groq profile targets Groq's OpenAI compatible API and the model openai/gpt-oss-20b. We recommend Groq for its very fast inference (roughly 1,000 tokens per second) at a low price point compared to many alternatives. Set GROQ_API_KEY and you are ready to go.

You can still use OpenAI or any other OpenAI compatible provider by adding a provider entry and a profile in ~/.qq/config.json.

Features:

  • OpenAI compatible API client with streaming and non streaming calls.
  • Stateless, single shot workflow that plays well with pipes and scripts.
  • Rich but simple formatting using XML like tags rendered to ANSI colors.
  • Config driven providers and profiles with per profile model overrides.
  • Safety rails for file access and command execution.
  • Old-school and SERIOUS? Optional no-emoji mode persisted via --no-fun 🥸

Download a prebuilt archive from the releases directory (or the GitHub Releases page) for your platform, then extract and place the binaries on your PATH.

Common targets:

  • macOS (Intel): qqqa-vX.Y.Z-x86_64-apple-darwin.tar.gz
  • macOS (Apple Silicon): qqqa-vX.Y.Z-aarch64-apple-darwin.tar.gz
  • Linux (x86_64): qqqa-vX.Y.Z-x86_64-unknown-linux-gnu.tar.gz
  • Linux (ARM64): qqqa-vX.Y.Z-aarch64-unknown-linux-gnu.tar.gz
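
For example, on Apple Silicon macOS the steps might look like this (the version, archive layout, and install directory are illustrative; use any directory that is on your PATH):

# extract the archive for your platform
tar -xzf qqqa-v0.8.2-aarch64-apple-darwin.tar.gz

# copy the two binaries onto your PATH (assumes the archive unpacks qq and qa into the current directory)
install -m 755 qq qa ~/.local/bin/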

On first run qqqa creates ~/.qq/config.json with safe permissions. For a smooth first interaction, run the init flow:

# Interactive setup (choose provider and set key)
qq --init
# or
qa --init

The initializer lets you choose the default provider:

  • Groq + openai/gpt-oss-20b (faster, cheaper)
  • OpenAI + gpt-5-mini (slower, a bit smarter)

It also offers to store an API key in the config (optional). If you prefer environment variables, leave it blank and set one of:

  • GROQ_API_KEY for Groq
  • OPENAI_API_KEY for OpenAI

Defaults written to ~/.qq/config.json:
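The generated file depends on your choices during init; the sketch below is only illustrative. Apart from command_allowlist, the model names, and the profile names used elsewhere in this README, the key names and overall layout are assumptions rather than the tool's authoritative schema.

{
  "default_profile": "groq",
  "providers": {
    "groq": { "base_url": "https://api.groq.com/openai/v1", "api_key_env": "GROQ_API_KEY" },
    "openai": { "base_url": "https://api.openai.com/v1", "api_key_env": "OPENAI_API_KEY" }
  },
  "profiles": {
    "groq": { "provider": "groq", "model": "openai/gpt-oss-20b" },
    "openai": { "provider": "openai", "model": "gpt-5-mini" }
  },
  "command_allowlist": ["ls", "grep", "find", "rg", "awk"]
}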

You can still override at runtime:

# choose profile
qq -p groq "what is ripgrep"

# override model for a single call
qq -m openai/gpt-oss-20b "explain this awk one-liner"

# simplest
qq "convert mp4 to mp3"

# stream tokens with formatted output
qq -s "how do I kill a process by name on macOS"

# include piped context
git status | qq "summarize what I should do next"

# raw text (no ANSI formatting)
qq -r "explain sed vs awk"

# skip reading terminal history
qq -n "find large files in the last day"

# disable emojis in responses (persists)
qq --no-fun "summarize this"

Note: you can also run qq without quotes; in most cases this behaves the same as the quoted form.

# simplest
qq convert mp4 to mp3

Example: forgot the ffmpeg incantation

You want to extract audio from a YouTube video but you do not remember the exact flags.

Ask with qq:

qq "how do I use ffmpeg to extract audio from a YouTube video into mp3"

A typical answer will suggest installing the tools and then using yt-dlp to fetch audio and ffmpeg to convert it:

# macOS
brew install yt-dlp ffmpeg

# Debian or Ubuntu
sudo apt-get update && sudo apt-get install -y yt-dlp ffmpeg

# Download and extract audio to MP3 using ffmpeg under the hood
yt-dlp -x --audio-format mp3 "https://www.youtube.com/watch?v=VIDEO_ID"

Do it for me with qa:

qa "download audio as mp3 from https://www.youtube.com/watch?v=VIDEO_ID"

The agent will propose a safe command like yt-dlp -x --audio-format mp3 URL, show it for confirmation, then run it. You can pass -y to auto approve.
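
If you have already reviewed and trust the workflow (for example in a script), the same request can skip the prompt:

qa -y "download audio as mp3 from https://www.youtube.com/watch?v=VIDEO_ID"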

qa - do a single step with tools

qa can either answer in plain text or request one tool call in JSON. Supported tools:

  • read_file with { "path": string }
  • write_file with { "path": string, "content": string }
  • execute_command with { "command": string, "cwd?": string }
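
As an illustration, argument payloads matching those shapes could look like this (the paths are taken from the examples below; the command and content values are made up):

  read_file:        { "path": "src/bin/qq.rs" }
  write_file:       { "path": "notes/intro.md", "content": "A short summary." }
  execute_command:  { "command": "ls -la src", "cwd": "." }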

Examples:

# read a file the safe way
qa "read src/bin/qq.rs and tell me what main does"

# write a file
qa "create a README snippet at notes/intro.md with a short summary"

# run a command with confirmation
qa "list Rust files under src sorted by size"

# auto approve tool execution for non interactive scripts
qa -y "count lines across *.rs"

# disable emojis in responses (persists)
qa --no-fun "format and lint the repo"

execute_command prints the proposed command and asks for confirmation. It warns if the working directory is outside your home. Use -y to auto approve in trusted workflows.

The runner enforces a default allowlist (think ls, grep, find, rg, awk, etc.) and rejects pipelines, redirection, and other high-risk constructs. When a command is blocked, qa prompts you to add it to command_allowlist inside ~/.qq/config.json; approving once persists the choice and updates future runs.
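
For instance, after approving jq once, the config would carry an entry roughly like the following (jq stands in for whatever command you approved; the exact placement within the file is an assumption):

  "command_allowlist": ["jq"]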

  • File tools require paths to be inside your home or the current directory. Reads are capped to 1 MiB, and traversal/symlink escapes are blocked.
  • Command execution uses a default allowlist (e.g. ls, grep, rg, find) plus your custom command_allowlist entries. Destructive patterns (rm -rf /, sudo, mkfs, etc.) are always blocked, and pipelines/redirection/newlines prompt for confirmation even with --yes.
  • Commands run with a 120 s timeout and the agent performs at most one tool step—there is no loop.
  • Config files are created with safe permissions. API keys come from environment variables (GROQ_API_KEY for the Groq provider, OPENAI_API_KEY for the OpenAI provider) unless you explicitly add a key to the config.

Project layout:

  • src/bin/qq.rs and src/bin/qa.rs entry points
  • Core modules in src/: ai.rs, config.rs, prompt.rs, history.rs, perms.rs, formatting.rs
  • Tools in src/tools/: read_file.rs, write_file.rs, execute_command.rs
  • Integration tests in tests/

See CONTRIBUTING.md for guidelines on reporting issues and opening pull requests, building from source, and the release process.

The repo ships prebuilt binaries under releases/.

  • Build and package a release:
# Build v0.8.2 for common targets and package tar.gz artifacts
scripts/release.sh v0.8.2

# Optionally specify a Git SHA to record in the manifest (and tag later)
scripts/release.sh v0.8.2 <git_sha>

# Override targets (space-separated)
TARGETS="x86_64-apple-darwin aarch64-apple-darwin" scripts/release.sh v0.8.2

What the script does:

  • Bumps Cargo.toml version to the given one.
  • Builds qq and qa for each target with cargo build --release.
  • Packages qqqa-v<version>-<target>.tar.gz into releases/ and writes checksums.
  • Writes releases/v<version>/manifest.json and updates releases/index.json.
  • Prunes older versions, keeping the last 3.

Tagging the release:

git add Cargo.toml releases/
git commit -m "release: v0.8.2"
git tag -a v0.8.2 -m "qqqa v0.8.2"   # or: git tag -a v0.8.2 <sha> -m "qqqa v0.8.2"
git push && git push --tags

Common targets (customizable via TARGETS):

  • x86_64-apple-darwin
  • aarch64-apple-darwin
  • x86_64-unknown-linux-gnu
  • aarch64-unknown-linux-gnu

Notes:

  • Cross-compiling may require additional toolchains; rustup target add <triple> is attempted automatically.
  • For fully-static Linux builds, you can adjust targets to *-unknown-linux-musl if your environment supports it.
Troubleshooting:

  • API error about missing key: run qq --init to set things up, or export the relevant env var, e.g. export GROQ_API_KEY=....
  • No output when streaming: try -d to see debug logs.
  • Piped input not detected: ensure you are piping into qq and not running it in a subshell that swallows stdin.

Licensed under MIT.
