supermemory Infinite Chat
Build chat applications with unlimited context using supermemory’s intelligent proxy
supermemory Infinite Chat is a powerful solution that gives your chat applications unlimited contextual memory. It works as a transparent proxy in front of your existing LLM provider, intelligently managing long conversations without requiring any changes to your application logic.
Unlimited Context
No more token limits - conversations can extend indefinitely
Zero Latency
Transparent proxying with negligible overhead
Cost Efficient
Save up to 70% on token costs for long conversations
Provider Agnostic
Works with any OpenAI-compatible endpoint
Getting Started
To use the Infinite Chat endpoint, you need to:
1. Get a supermemory API key
Head to the supermemory Developer Platform, where you can create API keys and monitor and manage every aspect of your API usage.
Once you have a key, use it to authenticate your requests to the supermemory API.
2. Add supermemory in front of any OpenAI-Compatible API URL
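With the OpenAI Node SDK, this typically amounts to changing the base URL and adding one header carrying your supermemory key. The proxy URL format and header name below are assumptions for illustration only; consult the supermemory API reference for the exact values.

```typescript
import OpenAI from "openai";

// Assumed proxy format: supermemory's endpoint prefixed to the provider's
// base URL, with the supermemory key passed in a custom header. Verify both
// against the API reference before using in production.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // your existing provider key, unchanged
  baseURL: "https://api.supermemory.ai/v3/https://api.openai.com/v1",
  defaultHeaders: {
    "x-supermemory-api-key": process.env.SUPERMEMORY_API_KEY ?? "",
  },
});

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello, supermemory!" }],
});

console.log(completion.choices[0].message.content);
```

Everything else in your application stays the same: request and response shapes are those of your underlying provider.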
How It Works
Transparent Proxying
All requests pass through supermemory to your chosen LLM provider with negligible latency overhead.
Intelligent Chunking
Long conversations are automatically broken down into optimized segments using our proprietary chunking algorithm that preserves semantic coherence.
Smart Retrieval
When a conversation exceeds the 20k-token threshold, supermemory intelligently retrieves the most relevant context from previous messages.
Automatic Token Management
The system intelligently balances token usage, ensuring optimal performance while minimizing costs.
Performance Benefits
Save up to 70% on token costs for long conversations through intelligent context management and caching.
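Concretely, these steps mean your application never has to truncate or summarize its own history. A minimal sketch, assuming the same illustrative proxy configuration as in Getting Started:

```typescript
import OpenAI from "openai";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://api.supermemory.ai/v3/https://api.openai.com/v1", // assumed format
  defaultHeaders: { "x-supermemory-api-key": process.env.SUPERMEMORY_API_KEY ?? "" },
});

// The full history is appended to on every turn. Once the thread passes the
// ~20k-token threshold, the proxy (not this code) chunks earlier messages and
// forwards only the most relevant context to the model.
const history: ChatCompletionMessageParam[] = [];

async function ask(question: string): Promise<string> {
  history.push({ role: "user", content: question });
  const res = await client.chat.completions.create({
    model: "gpt-4o",
    messages: history,
  });
  const answer = res.choices[0].message.content ?? "";
  history.push({ role: "assistant", content: answer });
  return answer; // no manual truncation, summarization, or windowing needed
}
```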
Pricing
Free Tier
100k tokens stored at no cost
Standard Plan
$20/month fixed cost after exceeding free tier
Usage-Based
Each thread includes 20k free tokens, then $1 per million tokens thereafter
| Feature | Free | Standard |
|---|---|---|
| Tokens Stored | 100k | Unlimited |
| Conversations | 10 | Unlimited |
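As a rough worked example of the usage-based rate: a thread that processes 120k tokens in total uses its 20k free tokens and is billed for the remaining 100k, i.e. 100,000 × $1 / 1,000,000 = $0.10, on top of the plan's fixed cost.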
Error Handling
supermemory is designed with reliability as the top priority. If any issues occur within the supermemory processing pipeline, the system will automatically fall back to direct forwarding of your request to the LLM provider, ensuring zero downtime for your applications.
Each response includes diagnostic headers that provide information about the processing:
| Header | Description |
|---|---|
| `x-supermemory-conversation-id` | Unique identifier for the conversation thread |
| `x-supermemory-context-modified` | Indicates whether supermemory modified the context ("true" or "false") |
| `x-supermemory-tokens-processed` | Number of tokens processed in this request |
| `x-supermemory-chunks-created` | Number of new chunks created from this conversation |
| `x-supermemory-chunks-deleted` | Number of chunks removed (if any) |
| `x-supermemory-docs-deleted` | Number of documents removed (if any) |
If an error occurs, an additional header `x-supermemory-error` will be included with details about what went wrong. Your request will still be processed by the underlying LLM provider even if supermemory encounters an error.
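If you need these diagnostics programmatically, you can read them off the raw HTTP response. The sketch below uses `fetch` directly; the proxy URL format is the same assumption as in Getting Started, while the header names come from the table above.

```typescript
// Send a chat completion through the (assumed) proxy URL and inspect
// supermemory's diagnostic headers on the response.
const res = await fetch(
  "https://api.supermemory.ai/v3/https://api.openai.com/v1/chat/completions",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "x-supermemory-api-key": process.env.SUPERMEMORY_API_KEY ?? "",
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Hello!" }],
    }),
  },
);

console.log("conversation:", res.headers.get("x-supermemory-conversation-id"));
console.log("context modified:", res.headers.get("x-supermemory-context-modified"));

// On an internal supermemory error the request is still served by the
// provider; this header explains what went wrong.
const smError = res.headers.get("x-supermemory-error");
if (smError) console.warn("supermemory error (request still served):", smError);
```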
Rate Limiting
Currently, there are no rate limits specific to supermemory. Your requests are subject only to the rate limits of your underlying LLM provider.
Supported Models
supermemory works with any OpenAI-compatible API, including:
OpenAI
GPT-3.5, GPT-4, GPT-4o
Anthropic
Claude 3 models
Other Providers
Any provider with an OpenAI-compatible endpoint
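Switching providers is just a matter of changing the portion of the base URL after the supermemory prefix. The provider URL below is a hypothetical placeholder, and the prefixing scheme itself is the same assumption noted in Getting Started.

```typescript
import OpenAI from "openai";

// Hypothetical provider: any endpoint that speaks the OpenAI chat completions
// protocol can sit behind the proxy. Replace the placeholder URL and key with
// your provider's real values.
const client = new OpenAI({
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: "https://api.supermemory.ai/v3/https://api.example-provider.com/v1",
  defaultHeaders: { "x-supermemory-api-key": process.env.SUPERMEMORY_API_KEY ?? "" },
});
```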