Show HN: Gonzo – A Go-based TUI for log analysis (OpenTelemetry/OTLP support)

Original link: https://github.com/control-theory/gonzo

## Gonzo: A Real-Time Log Analysis Terminal Tool

Gonzo is a powerful, k9s-inspired terminal UI for analyzing log streams in real time. It accepts input from files, stdin, and OpenTelemetry (OTLP) over gRPC/HTTP, and automatically detects formats such as JSON, logfmt, and plain text.

Key features include a k9s-style interactive dashboard with live charts showing severity distribution, word frequency, and time-series data. Users can navigate with keyboard or mouse, filter logs with regular expressions, search by attribute, and drill down into individual log entries.

Gonzo offers optional AI-powered insights for anomaly detection and root-cause suggestions, configurable for GPT-4, GPT-3.5, or local options via OpenAI-compatible APIs (LM Studio, Ollama). A dedicated "Log Counts" mode provides heatmap visualization and pattern analysis.

Installation is straightforward via `go install` or Homebrew. Settings can be persisted in a YAML configuration file. Gonzo is built in Go on libraries such as Bubble Tea and OpenTelemetry.

## Gonzo: A New TUI for Log Analysis

Gonzo is a new, Go-based terminal user interface (TUI) designed to simplify and speed up log analysis. Billed as a "k9s for logs", it ingests JSON, plain-text, and OpenTelemetry (OTLP) logs, highlights patterns, and can even summarize logs using local or API-based AI models.

Early user feedback points to performance problems: one user reported that a 5,000-line log file took six minutes to process. Users are nonetheless excited about the tool's potential, and some have suggested integrating it with existing tools such as k9s (already possible via plugins).

The project uses the Bubble Tea library for its TUI. The developer acknowledges that the name "Gonzo" is ambiguous (it has meanings beyond its playful "Go plus the Muppet" origin) and welcomes contributions and feedback as the project evolves.

Gonzo Mascot

Go Version · License: MIT · PRs Welcome

A powerful, real-time log analysis terminal UI inspired by k9s. Analyze log streams with beautiful charts, AI-powered insights, and advanced filtering - all from your terminal.

Gonzo Walkthrough

Gonzo Main Dashboard

Gonzo Stats

Gonzo Heatmap

  • Live streaming - Process logs as they arrive from stdin, files, or network
  • OTLP native - First-class support for OpenTelemetry log format
  • OTLP receiver - Built-in gRPC server to receive logs via OpenTelemetry protocol
  • Format detection - Automatically detects JSON, logfmt, and plain text
  • Severity tracking - Color-coded severity levels with distribution charts
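Format detection like the above usually comes down to a few cheap heuristics. Here is an illustrative Python sketch of how a line might be classified as JSON, logfmt, or plain text; the `detect_format` helper and its rules are assumptions for illustration, not Gonzo's actual Go implementation:

```python
import json
import re

def detect_format(line: str) -> str:
    """Classify a log line as 'json', 'logfmt', or 'text' (illustrative heuristic)."""
    stripped = line.strip()
    # JSON: the whole line parses as a JSON object.
    if stripped.startswith("{"):
        try:
            json.loads(stripped)
            return "json"
        except ValueError:
            pass
    # logfmt: a sequence of key=value pairs, values optionally quoted.
    if re.match(r'^(\S+=(".*?"|\S*)\s*)+$', stripped):
        return "logfmt"
    return "text"

print(detect_format('{"level":"error","msg":"boom"}'))      # json
print(detect_format('level=info msg="started" port=8080'))  # logfmt
print(detect_format('Sep 10 12:00:01 host sshd[42]: ok'))   # text
```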

📈 Interactive Dashboard

  • k9s-inspired layout - Familiar 2x2 grid interface
  • Real-time charts - Word frequency, attributes, severity distribution, and time series
  • Keyboard + mouse navigation - Vim-style shortcuts plus click-to-navigate and scroll wheel support
  • Smart log viewer - Auto-scroll with intelligent pause/resume behavior
  • Global pause control - Spacebar pauses entire dashboard while buffering logs
  • Modal details - Deep dive into individual log entries with expandable views
  • Log Counts analysis - Detailed modal with heatmap visualization, pattern analysis by severity, and service distribution
  • AI analysis - Get intelligent insights about log patterns and anomalies with configurable models
  • Regex support - Filter logs with regular expressions
  • Attribute search - Find logs by specific attribute values
  • Severity filtering - Focus on errors, warnings, or specific log levels
  • Interactive selection - Click or keyboard navigate to explore logs
  • Pattern detection - Automatically identify recurring issues
  • Anomaly analysis - Spot unusual patterns in your logs
  • Root cause suggestions - Get AI-powered debugging assistance
  • Configurable models - Choose from GPT-4, GPT-3.5, or any custom model
  • Multiple providers - Works with OpenAI, LM Studio, Ollama, or any OpenAI-compatible API
  • Local AI support - Run completely offline with local models
Using go install:

go install github.com/control-theory/gonzo/cmd/gonzo@latest

Using Homebrew (macOS/Linux)

brew tap control-theory/gonzo
brew install gonzo

Download the latest release for your platform from the releases page.

git clone https://github.com/control-theory/gonzo.git
cd gonzo
make build
# Read logs directly from files
gonzo -f application.log

# Read from multiple files
gonzo -f application.log -f error.log -f debug.log

# Use glob patterns to read multiple files
gonzo -f "/var/log/*.log"
gonzo -f "/var/log/app/*.log" -f "/var/log/nginx/*.log"

# Follow log files in real-time (like tail -f)
gonzo -f /var/log/app.log --follow
gonzo -f "/var/log/*.log" --follow

# Analyze logs from stdin (traditional way)
cat application.log | gonzo

# Stream logs from kubectl
kubectl logs -f deployment/my-app | gonzo

# Follow system logs
tail -f /var/log/syslog | gonzo

# Analyze Docker container logs
docker logs -f my-container 2>&1 | gonzo

# With AI analysis (requires API key)
export OPENAI_API_KEY=sk-your-key-here
gonzo -f application.log --ai-model="gpt-4"

Gonzo can receive logs directly via OpenTelemetry Protocol (OTLP) over both gRPC and HTTP:

# Start Gonzo as an OTLP receiver (both gRPC on port 4317 and HTTP on port 4318)
gonzo --otlp-enabled

# Use custom ports
gonzo --otlp-enabled --otlp-grpc-port=5317 --otlp-http-port=5318

# gRPC endpoint: localhost:4317
# HTTP endpoint: http://localhost:4318/v1/logs
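The HTTP endpoint accepts OTLP's JSON encoding. As a rough sketch of the payload shape a client would POST to `http://localhost:4318/v1/logs` (field names follow the OTLP spec; the `otlp_log_payload` helper and example values are illustrative):

```python
import json

def otlp_log_payload(body: str, severity: str = "ERROR", service: str = "demo-app") -> dict:
    """Build a minimal OTLP/HTTP JSON logs payload (shape per the OTLP spec)."""
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}},
            ]},
            "scopeLogs": [{
                "logRecords": [{
                    "severityText": severity,
                    "body": {"stringValue": body},
                }],
            }],
        }]
    }

payload = otlp_log_payload("database connection refused")
print(json.dumps(payload, indent=2))
# POST this JSON to http://localhost:4318/v1/logs with
# Content-Type: application/json while Gonzo runs with --otlp-enabled.
```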

Example: OpenTelemetry Collector Configuration

Using gRPC:

exporters:
  otlp/gonzo_grpc:
    endpoint: localhost:4317
    tls:
      insecure: true

service:
  pipelines:
    logs:
      receivers: [your_receivers]
      processors: [your_processors]
      exporters: [otlp/gonzo_grpc]

Using HTTP:

exporters:
  otlphttp/gonzo_http:
    endpoint: http://localhost:4318/v1/logs

service:
  pipelines:
    logs:
      receivers: [your_receivers]
      processors: [your_processors]
      exporters: [otlphttp/gonzo_http]

Example: Python Application

Using gRPC:

from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter

exporter = OTLPLogExporter(
    endpoint="localhost:4317",
    insecure=True
)

Using HTTP:

from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

exporter = OTLPLogExporter(
    endpoint="http://localhost:4318/v1/logs",
)

See examples/send_otlp_logs.py for a complete example.

# Auto-select best available model (recommended) - file input
export OPENAI_API_KEY=sk-your-key-here
gonzo -f logs.json

# Or specify a particular model - file input
export OPENAI_API_KEY=sk-your-key-here
gonzo -f logs.json --ai-model="gpt-4"

# Follow logs with AI analysis
export OPENAI_API_KEY=sk-your-key-here
gonzo -f "/var/log/app.log" --follow --ai-model="gpt-4"

# Using local LM Studio (auto-selects first available)
export OPENAI_API_KEY="local-key"
export OPENAI_API_BASE="http://localhost:1234/v1"
gonzo -f logs.json

# Using Ollama (auto-selects best model like gpt-oss:20b)
export OPENAI_API_KEY="ollama"
export OPENAI_API_BASE="http://localhost:11434"
gonzo -f logs.json --follow

# Traditional stdin approach still works
export OPENAI_API_KEY=sk-your-key-here
cat logs.json | gonzo --ai-model="gpt-4"
| Key/Mouse | Action |
|---|---|
| Tab / Shift+Tab | Navigate between panels |
| Mouse Click | Click on any section to switch to it |
| ↑ / ↓ or k/j | Move selection up/down |
| Mouse Wheel | Scroll up/down to navigate selections |
| ← / → or h/l | Horizontal navigation |
| Enter | View log details or open analysis modal (Counts section) |
| ESC | Close modal/cancel |

| Key | Action |
|---|---|
| Space | Pause/unpause entire dashboard |
| / | Enter filter mode (regex supported) |
| s | Search and highlight text in logs |
| c | Toggle Host/Service columns in log view |
| r | Reset all data (manual reset) |
| u / U | Cycle update intervals (forward/backward) |
| i | AI analysis (in detail view) |
| m | Switch AI model (shows available models) |
| ? / h | Show help |
| q / Ctrl+C | Quit |

| Key | Action |
|---|---|
| Home | Jump to top of log buffer (stops auto-scroll) |
| End | Jump to latest logs (resumes auto-scroll) |
| PgUp / PgDn | Navigate by pages (10 entries at a time) |
| ↑ / ↓ or k/j | Navigate entries with smart auto-scroll |

AI Chat (in log detail modal)

| Key | Action |
|---|---|
| c | Start chat with AI about current log |
| Tab | Switch between log details and chat pane |
| m | Switch AI model (works in modal too) |

Log Counts Analysis Modal

Press Enter on the Counts section to open a comprehensive analysis modal featuring:

🔥 Real-Time Heatmap Visualization

  • Time-series heatmap showing severity levels vs. time (1-minute resolution)
  • 60-minute rolling window with automatic scaling per severity level
  • Color-coded intensity using ASCII characters (░▒▓█) with gradient effects
  • Precise alignment with time headers showing minutes ago (60, 50, 40, ..., 10, 0)
  • Receive time architecture - visualization based on when logs were received for reliable display
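The intensity mapping described above can be sketched in a few lines. This is an illustrative Python reconstruction, not Gonzo's Go code; the `heatmap_row` helper and its per-row scaling are assumptions:

```python
def heatmap_row(counts, chars=" ░▒▓█"):
    """Map per-minute counts to intensity characters, scaled to the row's peak."""
    peak = max(counts, default=0) or 1  # avoid division by zero on empty rows
    levels = len(chars) - 1
    # Ceiling division so any nonzero count gets at least the lightest shade.
    return "".join(chars[min(levels, -(-c * levels // peak))] for c in counts)

# One severity level across eight one-minute buckets.
print(heatmap_row([0, 1, 2, 5, 10, 10, 3, 0]))
```

Scaling per severity level (rather than globally) matches the "automatic scaling per severity level" behavior described above, so a quiet DEBUG row still shows visible structure next to a noisy ERROR row.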

🔍 Pattern Analysis by Severity

  • Top 3 patterns per severity using drain3 pattern extraction algorithm
  • Severity-specific tracking with dedicated drain3 instances for each level
  • Real-time pattern detection as logs arrive and are processed
  • Accurate pattern counts maintained separately for each severity level
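drain3-style template mining groups lines that differ only in variable tokens. A much-simplified sketch of the idea follows; the `mask` heuristic here is hypothetical and far cruder than drain3's tree-based clustering, but it shows how distinct log lines collapse into countable patterns:

```python
import re
from collections import Counter

def mask(line: str) -> str:
    """Collapse variable tokens (IPs, hex ids, numbers) into <*> placeholders."""
    line = re.sub(r"\b\d+\.\d+\.\d+\.\d+\b", "<*>", line)  # IPv4 addresses
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<*>", line)      # hex literals
    line = re.sub(r"\d+", "<*>", line)                     # remaining numbers
    return line

logs = [
    "timeout connecting to 10.0.0.5 after 30s",
    "timeout connecting to 10.0.0.9 after 45s",
    "user 1042 logged in",
]
patterns = Counter(mask(l) for l in logs)
for template, count in patterns.most_common(3):
    print(count, template)
```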

🏢 Service Distribution Analysis

  • Top 3 services per severity showing which services generate each log level
  • Service name extraction from common attributes (service.name, service, app, etc.)
  • Real-time updates as new logs are processed and analyzed
  • Fallback to host information when service names are not available
  • Scrollable content using mouse wheel or arrow keys
  • ESC to close and return to main dashboard
  • Full-width display maximizing screen real estate for data visualization
  • Real-time updates - data refreshes automatically as new logs arrive
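The service-name extraction and host fallback described above amount to a priority lookup over attribute keys. An illustrative Python sketch (the key list mirrors the "common attributes" named above; the `service_of` helper is an assumption, not Gonzo's implementation):

```python
# Attribute keys checked in priority order, per the list above.
SERVICE_KEYS = ("service.name", "service", "app")

def service_of(attrs: dict) -> str:
    """Return the first service-like attribute, falling back to host info."""
    for key in SERVICE_KEYS:
        if attrs.get(key):
            return attrs[key]
    return attrs.get("host", "unknown")

print(service_of({"service.name": "checkout", "host": "node-1"}))  # checkout
print(service_of({"host": "node-1"}))                              # node-1
print(service_of({}))                                              # unknown
```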

The modal uses the same receive time architecture as the main dashboard, ensuring consistent and reliable visualization regardless of log timestamp accuracy or clock skew issues.

gonzo [flags]
gonzo [command]

Commands:
  version     Print version information
  help        Help about any command
  completion  Generate shell autocompletion

Flags:
  -f, --file stringArray           Files or file globs to read logs from (can specify multiple)
  --follow                     Follow log files like 'tail -f' (watch for new lines in real-time)
  -u, --update-interval duration   Dashboard update interval (default: 1s)
  -b, --log-buffer int             Maximum log entries to keep (default: 1000)
  -m, --memory-size int            Maximum frequency entries (default: 10000)
  --ai-model string            AI model for analysis (auto-selects best available if not specified)
  -t, --test-mode                  Run without TTY for testing
  -v, --version                    Print version information
  --config string              Config file (default: $HOME/.config/gonzo/config.yml)
  -h, --help                       Show help message

Create ~/.config/gonzo/config.yml for persistent settings:

# File input configuration
files:
  - "/var/log/app.log"
  - "/var/log/error.log"
  - "/var/log/*.log"  # Glob patterns supported
follow: true  # Enable follow mode (like tail -f)

# Update frequency for dashboard refresh
update-interval: 2s

# Buffer sizes
log-buffer: 2000
memory-size: 15000

# Development/testing
test-mode: false

# AI configuration
ai-model: "gpt-4"

See examples/config.yaml for a complete configuration example with detailed comments.

Gonzo supports multiple AI providers for intelligent log analysis. Configure using command line flags and environment variables. You can switch between available models at runtime using the m key.

# Set your API key
export OPENAI_API_KEY="sk-your-actual-key-here"

# Auto-select best available model (recommended)
cat logs.json | gonzo

# Or specify a particular model
cat logs.json | gonzo --ai-model="gpt-4"
# 1. Start LM Studio server with a model loaded
# 2. Set environment variables (IMPORTANT: include /v1 in URL)
export OPENAI_API_KEY="local-key"
export OPENAI_API_BASE="http://localhost:1234/v1"

# Auto-select first available model (recommended)
cat logs.json | gonzo

# Or specify the exact model name from LM Studio
cat logs.json | gonzo --ai-model="openai/gpt-oss-120b"
# 1. Start Ollama: ollama serve
# 2. Pull a model: ollama pull gpt-oss:20b
# 3. Set environment variables (note: no /v1 suffix needed)
export OPENAI_API_KEY="ollama"
export OPENAI_API_BASE="http://localhost:11434"

# Auto-select best model (prefers gpt-oss, llama3, mistral, etc.)
cat logs.json | gonzo

# Or specify a particular model
cat logs.json | gonzo --ai-model="gpt-oss:20b"
cat logs.json | gonzo --ai-model="llama3"

Custom OpenAI-Compatible APIs

# For any OpenAI-compatible API endpoint
export OPENAI_API_KEY="your-api-key"
export OPENAI_API_BASE="https://api.your-provider.com/v1"
cat logs.json | gonzo --ai-model="your-model-name"

Once Gonzo is running, you can switch between available AI models without restarting:

  1. Press m anywhere in the interface to open the model selection modal
  2. Navigate with arrow keys, page up/down, or mouse wheel
  3. Select a model with Enter
  4. Cancel with Escape

The model selection modal shows:

  • All available models from your configured AI provider
  • Current active model (highlighted in green)
  • Dynamic sizing based on terminal height
  • Scroll indicators when there are many models

Note: Model switching requires the AI service to be properly configured and running. The modal will only appear if models are available from your AI provider.

When you don't specify the --ai-model flag, Gonzo automatically selects the best available model:

Selection Priority:

  1. OpenAI: Prefers gpt-4 → gpt-3.5-turbo → first available
  2. Ollama: Prefers gpt-oss:20b → llama3 → mistral → codellama → first available
  3. LM Studio: Uses first available model from the server
  4. Other providers: Uses first available model
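The selection priority above boils down to "first preferred model that the provider actually has, else the first available". A minimal Python sketch of that logic (the `PREFERENCES` table mirrors the priorities listed above; the helper itself is illustrative):

```python
# Preference lists per provider, mirroring the selection priority above.
PREFERENCES = {
    "openai": ["gpt-4", "gpt-3.5-turbo"],
    "ollama": ["gpt-oss:20b", "llama3", "mistral", "codellama"],
}

def auto_select(provider, available):
    """Pick the first preferred model that is available, else the first available."""
    for preferred in PREFERENCES.get(provider, []):
        if preferred in available:
            return preferred
    return available[0] if available else None

print(auto_select("ollama", ["mistral:7b", "llama3", "phi3"]))  # llama3
print(auto_select("lmstudio", ["openai/gpt-oss-120b"]))         # openai/gpt-oss-120b
```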

Benefits:

  • ✅ No need to know model names beforehand
  • ✅ Works immediately with any AI provider
  • ✅ Intelligent defaults for better performance
  • ✅ Still allows manual model selection with m key

Example: Instead of gonzo --ai-model="llama3", simply run gonzo and it will auto-select llama3 if available.

LM Studio Issues:

  • ✅ Ensure server is running and model is loaded
  • ✅ Use full model name: --ai-model="openai/model-name"
  • ✅ Include /v1 in base URL: http://localhost:1234/v1
  • ✅ Check available models: curl http://localhost:1234/v1/models

Ollama Issues:

  • ✅ Start server: ollama serve
  • ✅ Verify model: ollama list
  • ✅ Test API: curl http://localhost:11434/api/tags
  • ✅ Use correct URL: http://localhost:11434 (no /v1 suffix)
  • ✅ Model names include tags: gpt-oss:20b, llama3:8b

OpenAI Issues:

  • ✅ Verify API key is valid and has credits
  • ✅ Check model availability (gpt-4 requires API access)
| Variable | Description |
|---|---|
| OPENAI_API_KEY | API key for AI analysis (required for AI features) |
| OPENAI_API_BASE | Custom API endpoint (default: https://api.openai.com/v1) |
| GONZO_FILES | Comma-separated list of files/globs to read (equivalent to -f flags) |
| GONZO_FOLLOW | Enable follow mode (true/false) |
| GONZO_UPDATE_INTERVAL | Override update interval |
| GONZO_LOG_BUFFER | Override log buffer size |
| GONZO_MEMORY_SIZE | Override memory size |
| GONZO_AI_MODEL | Override default AI model |
| GONZO_TEST_MODE | Enable test mode |
| NO_COLOR | Disable colored output |
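Since GONZO_FILES is documented as comma-separated and equivalent to repeated -f flags, a consumer would split it into individual patterns. A small illustrative sketch (the `files_from_env` helper and whitespace handling are assumptions):

```python
import os

def files_from_env() -> list:
    """Split GONZO_FILES into individual file/glob patterns (illustrative)."""
    raw = os.environ.get("GONZO_FILES", "")
    return [p.strip() for p in raw.split(",") if p.strip()]

os.environ["GONZO_FILES"] = "/var/log/app.log, /var/log/*.log"
print(files_from_env())  # ['/var/log/app.log', '/var/log/*.log']
```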

Enable shell completion for better CLI experience:

# Bash
source <(gonzo completion bash)

# Zsh  
source <(gonzo completion zsh)

# Fish
gonzo completion fish | source

# PowerShell
gonzo completion powershell | Out-String | Invoke-Expression

For permanent setup, save the completion script to your shell's completion directory.

Gonzo is built with:

  • Bubble Tea - Terminal UI framework
  • Lipgloss - Styling and layout
  • Bubbles - TUI components
  • Cobra - CLI framework
  • Viper - Configuration management
  • OpenTelemetry - Native OTLP support
  • Large amounts of ☕️

The architecture follows a clean separation:

cmd/gonzo/              # Main application entry
internal/
├── tui/                # Terminal UI implementation
├── analyzer/           # Log analysis engine
├── memory/            # Frequency tracking
├── otlplog/           # OTLP format handling
└── ai/                # AI integration
  • Go 1.21 or higher
  • Make (optional, for convenience)
# Quick build
make build

# Run tests
make test

# Build for all platforms
make cross-build

# Development mode (format, vet, test, build)
make dev
# Run unit tests
make test

# Run with race detection
make test-race

# Integration tests
make test-integration

# Test with sample data
make demo

We love contributions! Please see CONTRIBUTING.md for details.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

This project is licensed under the MIT License - see the LICENSE file for details.

  • Inspired by k9s for the amazing TUI patterns
  • Built with Charm libraries for beautiful terminal UIs
  • OpenTelemetry community for the OTLP specifications

Found a bug? Please open an issue with:

  • Your OS and Go version
  • Steps to reproduce
  • Expected vs actual behavior
  • Log samples (sanitized if needed)

If you find this project useful, please consider giving it a star! It helps others discover the tool.


Made with ❤️ by ControlTheory and the Gonzo community
