Headroom is an open-source “context optimization layer” designed to sit between LLM applications and their providers as a proxy, handling compression, caching, and context-window management. The project packages a proxy server and SDK that aim to reduce token usage, and therefore provider costs, by applying reversible and statistical compression techniques without requiring changes to existing tool integrations.
What it does
Headroom operates as a transparent proxy: provider-bound requests are routed through it, and it applies transforms to tool outputs and request prefixes. Key behaviors include statistical compression of JSON tool outputs, prefix stabilization to improve provider-side caching, and rolling-window context management that prevents token-limit failures while preserving tool semantics. Compression is reversible via the CCR architecture, so original content can be retrieved if the LLM requests it.
Core capabilities
- SmartCrusher — statistical compression for JSON outputs, reducing large lists and search results while keeping anomalies and relevant items (a rough sketch follows this list).
- CacheAligner — normalizes prefixes to increase cache hit rates with providers.
- RollingWindow — manages context windows to avoid token-limit errors without breaking tool call order.
- CCR — reversible compression with automatic retrieval when needed.
- LLMLingua-2 (opt-in) — an ML-based compression option advertised for higher compression ratios.
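To make the SmartCrusher idea above concrete, here is a minimal, illustrative sketch of statistical compression over a JSON list of tool results: keep the highest-scoring items plus statistical outliers and drop the rest. The function name, scoring heuristic, and thresholds are assumptions for illustration, not Headroom's actual implementation.

```python
import json
import statistics

def crush_results(raw_json: str, keep_top: int = 20, z_threshold: float = 3.0) -> str:
    """Illustrative only: keep the top-scoring items plus statistical outliers."""
    items = json.loads(raw_json)  # assumed to be a list of dicts, e.g. search results
    if not items:
        return raw_json
    scores = [item.get("score", 0.0) for item in items]
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores) or 1.0

    # Keep the highest-scoring items.
    ranked = sorted(items, key=lambda it: it.get("score", 0.0), reverse=True)
    kept = ranked[:keep_top]

    # Also keep anomalies: items whose score sits far from the mean.
    for item in ranked[keep_top:]:
        if abs(item.get("score", 0.0) - mean) / stdev >= z_threshold:
            kept.append(item)

    return json.dumps({"kept": kept, "dropped": len(items) - len(kept)})
```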
The repository emphasizes zero code changes required: existing clients can point at the proxy endpoint and benefit from token savings.
Quickstart
A minimal proxy quickstart, as shown in the repo:
- Install the proxy: `pip install "headroom-ai[proxy]"`
- Start the proxy: `headroom proxy --port 8787`
- Point clients at the proxy, for example `OPENAI_BASE_URL=http://localhost:8787/v1` for OpenAI-compatible clients or `ANTHROPIC_BASE_URL=http://localhost:8787` for Anthropic (a client example follows below).
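With the proxy running, an OpenAI-compatible client only needs its base URL changed. A minimal sketch using the official openai Python package (the model name and API key are placeholders):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Headroom proxy instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8787/v1", api_key="sk-...")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize the latest search results."}],
)
print(response.choices[0].message.content)
```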
A stats endpoint exposes token savings and cost metrics, for example:
`curl http://localhost:8787/stats` returns JSON about tokens saved and percent savings.
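As a sketch of consuming that endpoint from Python, the request below assumes the proxy is running locally; the shape of the response is not documented here, so treat the output as opaque JSON and check the repo's docs for the actual schema:

```python
import requests

# Poll the proxy's stats endpoint; the response schema is not assumed here.
stats = requests.get("http://localhost:8787/stats", timeout=5).json()
print(stats)  # e.g. tokens saved and percent savings
```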
Providers and performance
Headroom lists support for major provider integrations and token counting strategies, including OpenAI (tiktoken), Anthropic (official API), Google, Cohere, and Mistral. The repo provides performance examples with reported token reductions:
- Search results (1000 items): from 45,000 tokens to 4,500 tokens (90% savings)
- Log analysis (500 entries): from 22,000 tokens to 3,300 tokens (85% savings)
- Long conversation (50 turns): from 80,000 tokens to 32,000 tokens (60% savings)
Reported overhead is small: approximately 1–5 ms per request.
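To sanity-check figures like these on your own payloads, token counts can be measured before and after compression with tiktoken, the counting strategy the repo lists for OpenAI; the choice of encoding below is an assumption:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding; pick the one matching your model

def percent_saved(original: str, compressed: str) -> float:
    """Token savings as a percentage, counted with tiktoken."""
    before = len(enc.encode(original))
    after = len(enc.encode(compressed))
    return 100.0 * (before - after) / before
```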
Safety and guarantees
The project documents several safety constraints:
- User and assistant messages are not compressed.
- Tool call ordering is preserved.
- Parse failures are no-ops (malformed content passes through unchanged; see the sketch after this list).
- Compression is reversible and can be expanded via CCR when necessary.
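The parse-failure guarantee corresponds to a simple pattern, sketched below under the assumption that compression only runs on well-formed JSON; this is illustrative rather than Headroom's actual code:

```python
import json

def compress_tool_output(content: str) -> str:
    """Return compressed content, or the original unchanged if it cannot be parsed."""
    try:
        parsed = json.loads(content)
    except (json.JSONDecodeError, TypeError):
        return content  # malformed input: no-op, pass straight through

    # ... compression of `parsed` would happen here (omitted in this sketch) ...
    return json.dumps(parsed)
```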
Docs, examples, and contribution
Documentation covers SDK usage, proxy deployment, configuration, CCR internals, metrics, and troubleshooting. Examples in the repo include runnable code such as basic_usage.py, proxy_integration.py, and ccr_demo.py. The project is licensed under Apache-2.0 and includes contribution guidance and a test suite.
For code, documentation, and to explore the project further, see the repository: https://github.com/chopratejas/headroom.