Backed by Y Combinator

Compress Context. Cut Cost. Improve Performance.

🎯 Request-Tailored Context for LLMs

Question-specific context compression with our proprietary algorithm.

  • Outperforms traditional retrieval baselines.
  • Integration via our SDK in minutes.

Contact us for a customized agentic-search-based solution (requires knowledge base indexing).

🤖 Agent Proxy for Context Control

Route agent traffic through our proxy for full control over context.

  • Compress context before model calls: conversation state, tool traces, retrieved information.
  • Use it for free with a vanilla compressor, or upgrade to compresr for best results.
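To make the idea concrete, here is a minimal sketch of what a "vanilla" compressor might do before each model call: keep the system prompt and only the most recent turns, dropping older conversation state and tool traces. The function name and message format are illustrative, not part of the compresr SDK or proxy.

```python
def vanilla_compress(messages, max_turns=4):
    """Toy context compressor: keep the system prompt plus the most
    recent turns, dropping older conversation state and tool traces.
    Illustrative only -- compresr's own algorithm is proprietary."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]
```

A truncation policy like this is the simplest baseline; question-specific compression instead keeps whatever is relevant to the current request, regardless of recency.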

Get started in minutes

Drop-in addition to your current context management workflow.

1

Get Your API Key

Create an API key from your console.

setup.sh
# Your API key: cmp_...
export COMPRESR_API_KEY="cmp_..."
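In Python, the key can then be read from the environment instead of being hardcoded. A minimal sketch (the variable name matches the export above; the fallback placeholder is illustrative):

```python
import os

# Read the key exported in setup.sh; fall back to a placeholder if unset.
api_key = os.environ.get("COMPRESR_API_KEY", "cmp_...")
```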
2

Install the SDK

Install the official Python library. Works with Python 3.8+.

terminal
pip install compresr
3

Upload Documents (Under development)

Use a dedicated ingestion client to upload documents into a named collection.

index_docs.py
from compresr import DocumentIngestionClient

ingestion = DocumentIngestionClient(api_key="cmp_...")

upload_result = ingestion.upload_to_collection(
    collection_name="my-meetings",
    files=["./Sam_Nov22.pdf", "./Dario_Feb5.pdf"]
)
4

Ready to Use

Compress your context and use it with any LLM of your choice.

compression.py
from compresr import CompressionClient

client = CompressionClient(
    api_key="cmp_..."
)

result = client.generate(
    use_collections=["my-meetings"],  # Pre-uploaded context
    context="Your context...",  # In-line context ingestion
    question="User's question...",
    compression_model_name="compresr_v1",
)

print(result.data.compressed_context)
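The compressed context can then be handed to whatever model you call next. A minimal, provider-agnostic sketch of that handoff (the prompt template is illustrative; `compressed_context` stands in for `result.data.compressed_context` above):

```python
def build_prompt(compressed_context, question):
    """Assemble a provider-agnostic prompt from compressed context.
    Pass the returned string to any LLM client of your choice."""
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{compressed_context}\n\n"
        f"Question: {question}"
    )
```

Because the output is plain text, it drops into any chat or completion API without changes to your model-calling code.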

Stay in the Loop

Get the latest updates on new features, research papers, and tips for optimizing your AI workflows.

We respect your privacy. Unsubscribe at any time.

Ready to equip every query with laser-focused context?