Instant history compaction and context optimization for AI agents
Compresr is a YC-backed company building LLM prompt compression and context optimization tools.
Context Gateway sits between your AI agent (Claude Code, Cursor, etc.) and the LLM API. When your conversation approaches the model's context limit, the gateway compresses history in the background so you never wait for compaction.
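Because the gateway is a proxy, pointing an agent at it is typically just a base-URL override. The environment variable name and port below are illustrative assumptions, not documented values:

```shell
# Hypothetical: point an agent's API client at the local gateway instead of
# the provider directly. The variable name and port 8080 are assumptions.
export LLM_BASE_URL="http://localhost:8080/v1"
echo "Agent will send requests to: $LLM_BASE_URL"
```

From the agent's perspective nothing changes; the gateway forwards requests upstream and swaps in the pre-computed summary when the history grows too long.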
```shell
# Install the gateway binary
curl -fsSL https://compresr.ai/api/install | sh

# Then select an agent (opens an interactive TUI wizard)
context-gateway
```

The TUI wizard will help you:
- Choose an agent (`claude_code`, `cursor`, `openclaw`, or custom)
- Create or edit the configuration:
  - Summarizer model and API key
  - Optional Slack notifications
- Set the trigger threshold for compression (default: 75%)
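As a rough sketch, the wizard's choices map to a configuration file along these lines; the field names and model name here are assumptions, not the gateway's documented schema:

```yaml
# Illustrative config sketch -- all keys and values are assumptions
agent: claude_code
summarizer:
  model: claude-3-5-haiku        # assumed example model
  api_key: ${SUMMARIZER_API_KEY}
notifications:
  slack: false
trigger_threshold: 0.75          # start background compression at 75% of context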
Supported agents:
- `claude_code`: Claude Code IDE integration
- `cursor`: Cursor IDE integration
- `openclaw`: open-source Claude Code alternative
- `custom`: bring your own agent configuration
- No more waiting when the conversation hits context limits
- Compaction happens instantly (the summary was pre-computed in the background)
- Check `logs/history_compaction.jsonl` to see what's happening
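Since the log is JSON Lines, each compaction event is one JSON object per line and can be inspected with standard tools. The event fields below are illustrative guesses, not the gateway's actual log schema:

```shell
# Write a sample event of the shape the gateway might log (field names are
# guesses), then read back the most recent entry.
printf '%s\n' '{"event":"compaction","tokens_before":120000,"tokens_after":8000}' \
  > history_compaction.jsonl
tail -n 1 history_compaction.jsonl
```

If you have `jq` installed, `tail -f logs/history_compaction.jsonl | jq .` gives a live, pretty-printed view.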
We welcome contributions! Join our Discord to get started.
