eBPF-powered Linux observability with AI incident detection
Linnix captures every process fork, exec, and exit with lightweight CPU/memory telemetry using eBPF. It works standalone with a built-in rules engine, or you can add AI for natural-language insights.
✨ NEW: linnix-3b model now available! Download the 2.1GB quantized model from Releases or use the automated setup script.
Traditional monitoring tells you "CPU is high". Linnix tells you WHY and WHAT TO DO.
- ⚡ Near-Zero Overhead: <1% CPU usage with eBPF probes (vs 5-15% for traditional agents)
- 🧠 AI-Powered (Optional): Natural language insights - "Fork storm in cron job. Add rate limit to /etc/cron.d/backup"
- 🎯 Works Without AI: Built-in rules engine detects incidents out-of-the-box
- 💰 Cost-Effective: 60-80% cheaper than Datadog or Dynatrace, runs on your infrastructure
- 🔓 Open Source: Apache-2.0 license, no vendor lock-in, BYO LLM (or none)
- 🚀 Production-Ready: Battle-tested on multi-node clusters, kernel 5.8+
| Feature | Linnix (OSS) | Prometheus + Grafana | Datadog | Elastic APM |
|---|---|---|---|---|
| Setup Time | 5 minutes | 2-3 hours | 30 minutes | 1-2 hours |
| CPU Overhead | <1% (eBPF) | 2-5% (exporters) | 5-15% (agent) | 10-20% (APM) |
| Instrumentation | Zero | Manual exporters | Agent install | Code changes |
| AI Insights | ✅ Built-in | ❌ No | ❌ No | — |
| Incident Detection | ✅ Auto | ✅ ML (paid) | — | — |
| Cost (10 nodes) | $0 | ~$50/mo hosting | ~$1,500/mo | ~$1,000/mo |
| Data Privacy | ✅ Your infra | ✅ Your infra | ❌ Vendor cloud | — |
| BYO LLM | ✅ Any model | N/A | ❌ No | ❌ No |
Bottom line: think of Linnix as Prometheus for the process lifecycle, plus an AI reasoning layer. Use both!
💡 Note: AI is optional! Linnix works out-of-the-box with its built-in rules engine for detecting fork storms, CPU spikes, and runaway processes. Add AI later for natural language explanations.
```bash
# Complete eBPF monitoring with AI - ready in 5 minutes
git clone https://github.com/linnix-os/linnix.git && cd linnix
./setup-llm.sh
# Then open: http://localhost:8080 (Web Dashboard)
```

What you get instantly:
- ✅ Web Dashboard: Real-time visualization at http://localhost:8080
- ✅ eBPF Monitoring: Every process event captured with <1% overhead
- ✅ AI Insights: 3B model analyzes incidents every 30 seconds
- ✅ Live Metrics: Process tree, CPU usage, system overview
- ✅ Zero Config: Works out of the box, all data local
After running `./setup-llm.sh`, you'll have:
- Web Dashboard (http://localhost:8080) - Beautiful real-time UI
- API Access (http://localhost:3000) - REST endpoints for integration
- AI Analysis - Automatic incident detection with explanations
- Live Events - Real-time process monitoring stream
Quick Health Check:
```bash
curl http://localhost:3000/healthz # eBPF daemon
curl http://localhost:8090/health # AI model
curl http://localhost:3000/insights | jq # Get AI insights
```

What it does:
- Downloads TinyLlama model (800MB) or linnix-3b (2.1GB)
- Starts cognitod (eBPF daemon) + llama-server (AI inference)
- Runs health checks
- Ready for AI insights in < 5 minutes!
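If you want to confirm the model server is answering before the first insight cycle, you can hit llama-server's OpenAI-compatible API directly. A minimal sketch; the model name is an assumption matching the linnix-3b setup above:

```bash
# One-off request against llama-server's OpenAI-compatible endpoint.
# The model name is assumed to match what setup-llm.sh configured.
curl -s http://localhost:8090/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model":"linnix-3b-distilled","messages":[{"role":"user","content":"Say ok."}]}' \
  | jq -r '.choices[0].message.content'
```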
```bash
git clone https://github.com/linnix-os/linnix.git && cd linnix
docker-compose up -d
# Stream live process events
curl -N http://localhost:3000/stream
# Get incident alerts from rules engine
curl http://localhost:3000/insights | jq
```

✅ No AI/LLM required | ✅ No Rust toolchain required | ✅ Works on any Linux | ✅ <1% CPU overhead
```bash
# 1. Install cognitod
curl -sfL https://raw.githubusercontent.com/linnix-os/linnix/main/scripts/install.sh | sh
# 2. Start monitoring
sudo systemctl start cognitod
# 3. Stream live events
linnix-cli stream
# 4. Get AI insights
export LLM_ENDPOINT="http://localhost:8090/v1/chat/completions"
export LLM_MODEL="linnix-3b-distilled"
linnix-reasoner --insights
```

```text
┌──────────────────────────────────────────────────────────────┐
│ Kernel Space (eBPF) │
├──────────────────────────────────────────────────────────────┤
│ fork hook → exec hook → exit hook → CPU/mem sampling │
└────────────────────────┬─────────────────────────────────────┘
│ Perf buffers
▼
┌──────────────────────────────────────────────────────────────┐
│ User Space (cognitod) │
├──────────────────────────────────────────────────────────────┤
│ • Event processing • Process tree tracking │
│ • State management • Rules engine │
│ • HTTP/SSE API • Prometheus metrics │
└────────────────────────┬─────────────────────────────────────┘
│
┌───────────────┼───────────────┐
│ │ │
▼ ▼ ▼
┌─────────┐ ┌──────────┐ ┌─────────────┐
│ CLI │ │ Reasoner │ │ Prometheus │
│ Stream │ │ AI │ │ Grafana │
└─────────┘ └──────────┘ └─────────────┘
```
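To see what flows across the diagram's HTTP/SSE edge, you can tail the stream endpoint directly. A minimal sketch, assuming standard SSE `data:` framing; no particular payload schema is assumed:

```bash
# Tail live process events from cognitod's SSE endpoint.
# Strips the standard SSE "data: " prefix and compact-prints
# each JSON payload as it arrives.
curl -sN http://localhost:3000/stream \
  | sed -un 's/^data: //p' \
  | jq -c '.'
```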
Linnix provides comprehensive eBPF-based monitoring with optional AI-powered incident detection:
- ✅ eBPF monitoring - Kernel-level process lifecycle tracking
- ✅ Real-time event streaming - SSE endpoints for live data
- ✅ Process tree tracking - Full ancestry and lineage graphs
- ✅ CPU/memory telemetry - Lightweight resource monitoring
- ✅ Local rules engine - Detects fork storms, CPU spikes, runaway processes (no AI needed)
- ✅ Prometheus integration - Standard metrics export (quick check sketched after this list)
- ✅ LLM inference (optional) - Bring your own model for natural language insights (OpenAI, local, etc.)
- ✅ Training examples - 50+ curated incident samples included
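Since cognitod exports standard Prometheus metrics, pointing a scrape job at port 3000 is all Grafana needs. A quick manual check that the exporter is live:

```bash
# Confirm the /metrics endpoint is serving Prometheus text format.
curl -s http://localhost:3000/metrics | head -n 20
```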
```bash
docker run -d \
--name cognitod \
--privileged \
--pid=host \
--network=host \
-v /sys/kernel/btf:/sys/kernel/btf:ro \
-v /sys/kernel/debug:/sys/kernel/debug:ro \
  linnixos/cognitod:latest
```

Ubuntu/Debian:

```bash
wget https://github.com/linnix-os/linnix/releases/latest/download/cognitod_amd64.deb
sudo dpkg -i cognitod_amd64.deb
sudo systemctl start cognitod
```

RHEL/CentOS:

```bash
wget https://github.com/linnix-os/linnix/releases/latest/download/cognitod.rpm
sudo rpm -i cognitod.rpm
sudo systemctl start cognitod
```

```bash
# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Clone repository
git clone https://github.com/linnix-os/linnix.git
cd linnix
# Build eBPF programs
cargo xtask build-ebpf
# Build and install
cargo build --release
sudo cp target/release/cognitod /usr/local/bin/
sudo cp target/release/linnix-cli /usr/local/bin/
sudo cp target/release/linnix-reasoner /usr/local/bin/
```

Full documentation: see docs/ in the repository.
Cognitod exposes a REST API on port 3000:
- `GET /health` - Health check
- `GET /metrics` - Prometheus metrics
- `GET /processes` - All live processes
- `GET /graph/:pid` - Process ancestry graph
- `GET /stream` - Server-sent events (real-time)
- `GET /insights` - AI-generated insights
- `GET /alerts` - Active alerts from rules engine
For API examples, see cognitod/examples/.
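If you just want to pipe alerts somewhere (a log file, a chat webhook), a plain polling loop over the documented `/alerts` endpoint is enough to start with. A minimal sketch; it assumes nothing about the alert payload beyond it being valid JSON:

```bash
# Poll the /alerts endpoint every 30s and compact-print whatever
# JSON comes back; adapt the loop body to your alerting pipeline.
while true; do
  curl -s http://localhost:3000/alerts | jq -c '.'
  sleep 30
done
```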
Linnix works with any OpenAI-compatible LLM endpoint:
We provide a distilled 3B model optimized for CPU inference:
```bash
# Download demo model (2.1GB)
wget https://github.com/linnix-os/linnix/releases/download/v0.1.0/linnix-3b-distilled-q5_k_m.gguf
# Serve with llama.cpp
./serve_distilled_model.sh # Starts on port 8090
# Or manually:
llama-server -m linnix-3b-distilled-q5_k_m.gguf \
--port 8090 --ctx-size 4096 -t 8
# Test the model
export LLM_ENDPOINT="http://localhost:8090/v1/chat/completions"
export LLM_MODEL="linnix-3b-distilled"
linnix-reasoner --insights
```

Performance: 12.78 tok/s on CPU (no GPU required!)
```bash
# Option 1: Local model with llama.cpp
./llama-server -m qwen2.5-7b-instruct-q5_k_m.gguf --port 8090
# Option 2: vLLM
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8090
# Option 3: Ollama
ollama run qwen2.5:7b   # Ollama's OpenAI-compatible API listens on port 11434
# Configure endpoint
export LLM_ENDPOINT="http://localhost:8090/v1/chat/completions"
export LLM_MODEL="qwen2.5-7b"
# Get insights
linnix-reasoner --insights
```

You can also use commercial APIs (OpenAI, Anthropic, etc.) by pointing to their endpoints.
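For example, a hosted endpoint can be configured with the same two variables used above. This is a hypothetical sketch (endpoint and model name included); how the reasoner supplies an API key is not covered in this README, so check `linnix-reasoner --help` first:

```bash
# Hypothetical hosted setup reusing only the documented variables.
# Authentication is not shown here; verify how linnix-reasoner
# passes credentials before relying on this.
export LLM_ENDPOINT="https://api.openai.com/v1/chat/completions"
export LLM_MODEL="gpt-4o-mini"
linnix-reasoner --insights
```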
Create `/etc/linnix/linnix.toml`:
```toml
[runtime]
offline = false # Set true to disable external HTTP calls
[telemetry]
sample_interval_ms = 1000 # CPU/memory sampling frequency
[rules]
enabled = true
config_path = "/etc/linnix/rules.yaml"
[api]
bind_address = "127.0.0.1:3000"
[llm]
endpoint = "http://localhost:8090/v1/chat/completions"
model = "qwen2.5-7b"
timeout_secs = 120
```

```bash
# CLI streaming
linnix-cli stream
# Or use curl with SSE
curl -N http://localhost:3000/stream
```

```bash
# For a specific PID
curl http://localhost:3000/graph/1234 | jq .
# All processes
curl http://localhost:3000/processes | jq .
```

```bash
# Get AI-generated insights
linnix-reasoner --insights
# Output:
# {
# "summary": "System experiencing high CPU due to fork storm...",
# "risks": ["cpu_spin", "fork_storm"]
# }
```

Edit `/etc/linnix/rules.yaml`:
```yaml
rules:
  - name: fork_storm
    condition: "forks_per_sec > 100"
    severity: critical
    actions:
      - alert
      - log

  - name: cpu_spike
    condition: "process.cpu_percent > 95 AND duration > 60"
    severity: warning
    actions:
      - alert
```

We love contributions! Here's how to get started:
- Fork the repository
- Create a feature branch (`git checkout -b feat/amazing-feature`)
- Make your changes
- Run tests (`cargo test --workspace`)
- Commit (`git commit -m 'Add amazing feature'`)
- Push (`git push origin feat/amazing-feature`)
- Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
```bash
# Clone repo
git clone https://github.com/linnix-os/linnix.git
cd linnix
# Install dependencies
cargo build --workspace
# Build eBPF programs
cargo xtask build-ebpf
# Run tests
cargo test --workspace
# Run clippy
cargo clippy --all-targets -- -D warnings
```

Found a bug? Please open an issue with:
- Your OS and kernel version
- Cognitod version (`cognitod --version`)
- Steps to reproduce
- Expected vs actual behavior
Linnix is licensed under the Apache License 2.0.
See LICENSE for details.
Linnix uses several open source libraries. See THIRD_PARTY_LICENSES for details.
The eBPF programs in linnix-ai-ebpf/linnix-ai-ebpf-ebpf/ are dual-licensed under GPL-2.0 OR MIT (eBPF programs must be GPL-compatible).
If you find Linnix useful, please star the repo! It helps us grow the community.
If Linnix helps you catch production incidents, add this badge to your README:
[](https://github.com/linnix-os/linnix)Linnix is built on the shoulders of giants:
Special thanks to the eBPF community for making kernel observability accessible!
If you use Linnix in research, please cite:
```bibtex
@software{linnix2025,
  author = {Shah, Parth},
  title  = {Linnix: eBPF-powered Linux observability with AI},
  year   = {2025},
  url    = {https://github.com/linnix-os/linnix}
}
```

Made with ❤️ by the Linnix team
