Skip to content

Smart Context Optimization for LLMs - Reduce tokens by 66%, save 40% on API costs. Intelligent ranking and selection of relevant context using embeddings, keywords, and semantic analysis.

License

Notifications You must be signed in to change notification settings

Naseem77/tokenWise-Optimizer

Repository files navigation

🎯 TokenWise Optimizer

Smart Context Optimization for LLMs - Reduce tokens by 70-90%, save costs, maintain quality.

TokenWise Optimizer analyzes user queries and intelligently ranks context pieces to send only the most relevant information to your LLM, dramatically reducing token usage and costs while maintaining or improving response quality.

📦 PyPI Package: tokenwise-optimizer


✨ Key Features

  • 🎯 Smart Ranking: Multi-method relevance scoring (embeddings, keywords, recency, relationships)
  • 📊 70-90% Token Reduction: Typical reduction without quality loss
  • ⚡ Fast: <500ms optimization overhead
  • 🔄 Multiple Strategies: Top-N, Diversity (MMR), Dependency-aware selection
  • 💾 Vector Store: Optional pre-indexing for faster queries
  • 🗄️ Caching: In-memory caching for repeated queries
  • 📈 Analytics: Track savings, performance, and optimization metrics

🏗️ Architecture

User Query → Analyze Intent → Rank Context Pieces → Select Top N → Send to LLM

Core Components

  1. Context Chunker: Breaks large content into manageable pieces (fixed-size, semantic, sliding window)
  2. Relevance Ranker: Scores chunks using embeddings, keywords, recency, and relationships
  3. Context Selector: Picks optimal chunks within token budget using various strategies
  4. Vector Store: ChromaDB for fast similarity search (optional)
  5. Cache Layer: In-memory caching for performance

🚀 Quick Start

1. Installation

# Install from PyPI
pip install tokenwise-optimizer

# Or install from source
git clone https://github.com/Naseem77/tokenWise.git
cd tokenWise
pip install -r requirements.txt

2. Configuration

Create a .env file with your OpenAI API key:

cp .env.example .env
# Edit .env and add your OpenAI API key

Required configuration:

OPENAI_API_KEY=sk-your-api-key-here

3. Run the Server

python main.py

The API will start at http://localhost:8000

4. Test It Out

Run the basic example:

python examples/basic_usage.py

📖 API Usage

Optimize Context

Endpoint: POST /optimize

Request:

{
  "query": "How does authentication work?",
  "context": [
    {
      "id": "file1",
      "text": "Your content here...",
      "type": "code"
    }
  ],
  "targetTokens": 4000,
  "options": {
    "strategy": "diversity",
    "includeMetadata": true,
    "preserveOrder": false,
    "minRelevanceScore": 0.3,
    "diversityLambda": 0.5
  }
}

Response:

{
  "optimized_context": [
    {
      "id": "chunk_1",
      "text": "Relevant content...",
      "relevance_score": 0.89,
      "reason": "High semantic similarity + keywords match",
      "source": "file1"
    }
  ],
  "stats": {
    "original_tokens": 50000,
    "optimized_tokens": 3847,
    "reduction_percent": 92.3,
    "estimated_savings_usd": 2.41,
    "processing_time_ms": 387,
    "chunks_analyzed": 45,
    "chunks_selected": 8
  }
}

Index Content (Optional)

Pre-index content for faster future queries:

curl -X POST http://localhost:8000/index \
  -H "Content-Type: application/json" \
  -d '{
    "id": "my_document",
    "text": "Content to index...",
    "type": "docs"
  }'

Health Check

curl http://localhost:8000/health

Statistics

curl http://localhost:8000/stats

🎮 Usage Examples

Basic Python Usage

import requests

response = requests.post(
    "http://localhost:8000/optimize",
    json={
        "query": "How do I authenticate users?",
        "context": [
            {"id": "auth", "text": "...", "type": "code"}
        ],
        "targetTokens": 2000
    }
)

result = response.json()
print(f"Reduced tokens by {result['stats']['reduction_percent']}%")

With Different Strategies

Top-N Selection (Fastest)

options = {"strategy": "top-n"}

Simply takes the highest-scoring chunks. Fast but may miss diversity.

Diversity Selection (Recommended)

options = {
    "strategy": "diversity",
    "diversityLambda": 0.5  # Balance relevance (1.0) vs diversity (0.0)
}

Uses Maximal Marginal Relevance to ensure variety in selected chunks.

Dependency-Aware (Best for Code)

options = {"strategy": "dependency"}

Includes related chunks (e.g., functions that call each other).


📊 Benchmark Results

Real-world performance test with authentication codebase (GPT-3.5-turbo):

Metric Before TokenWise After TokenWise Improvement
Tokens 459 155 66.2% reduction
Cost per query $0.001288 $0.000772 $0.000516 saved (40%)
Processing time 5.1s 8.7s +3.5s optimization overhead

Cost Savings at Scale

Queries/Month Before After Monthly Savings
10,000 $12.88 $7.72 $5.16
100,000 $128.80 $77.20 $51.60
1,000,000 $1,288 $772 $516

Query: "How does user authentication and login work?"
Context: 5 files (auth, database, payment, email, analytics)
Selected: Only authentication-related code (1 file)
Ignored: Payment, email, analytics modules (irrelevant)

What This Means

  • 66% fewer tokens sent to your LLM
  • Same or better answer quality (focused on relevant code)
  • Automatic filtering of irrelevant context
  • $516/month saved at 1M queries (typical enterprise scale)
  • Run benchmark: python run_benchmark.py (requires OpenAI API key)

🛠️ Technology Stack

  • Backend: FastAPI (Python)
  • Embeddings: OpenAI text-embedding-3-small
  • Vector DB: ChromaDB
  • Caching: In-memory (Redis-ready)
  • Token Counting: tiktoken

📐 How It Works

1. Chunking

Content is split into chunks using:

  • Fixed-size: Every N tokens
  • Semantic: At logical boundaries (functions, paragraphs)
  • Sliding window: Overlapping chunks for continuity

2. Ranking

Each chunk is scored using:

Embedding Similarity (50%): Semantic similarity to query
Keyword Matching (20%): Keyword overlap
Recency (15%): Newer content scores higher
Relationships (10%): Connected chunks are boosted
LLM Scoring (5%): Optional deep analysis

3. Selection

Chunks are selected using:

Top-N: Simple, fast, take highest scores
Diversity (MMR): Balance relevance with variety
Dependency: Include related chunks

4. Assembly

Selected chunks are:

  • Reordered logically (by source, position)
  • Formatted with metadata
  • Returned with statistics

💰 Cost Savings Example

Before TokenWise:

  • 100K tokens per query
  • 1,000 queries/day
  • Cost: $3/1M tokens
  • Daily cost: $300

After TokenWise:

  • 10K tokens per query (90% reduction)
  • 1,000 queries/day
  • Cost: $3/1M tokens
  • Daily cost: $30

Savings: $270/day = $8,100/month


🎯 Optimization Strategies

When to Use Each Strategy

Strategy Use Case Pros Cons
top-n Simple queries, speed priority Fastest, most relevant May lack diversity
diversity Complex queries, broad topics Balanced coverage Slightly slower
dependency Code analysis, interconnected data Complete context May include less relevant items

Tuning Parameters

targetTokens: Adjust based on your LLM's context window

  • GPT-3.5: 4,000-8,000 tokens
  • GPT-4: 8,000-16,000 tokens
  • Claude: 4,000-8,000 tokens

minRelevanceScore: Filter out low-relevance chunks

  • Strict: 0.5-0.7
  • Moderate: 0.3-0.5
  • Permissive: 0.1-0.3

diversityLambda: Balance relevance vs diversity (for diversity strategy)

  • High relevance: 0.7-1.0
  • Balanced: 0.4-0.6
  • High diversity: 0.0-0.3

🔧 Configuration

Environment Variables

# Required
OPENAI_API_KEY=sk-...

# Optional
DEFAULT_TOKEN_BUDGET=4000
EMBEDDING_MODEL=text-embedding-3-small
CACHE_TTL=3600
DEBUG=True
LOG_LEVEL=INFO

# Scoring Weights (must sum to reasonable total)
EMBEDDING_WEIGHT=0.5
KEYWORD_WEIGHT=0.2
RECENCY_WEIGHT=0.15
RELATIONSHIP_WEIGHT=0.1
LLM_WEIGHT=0.05

Chunking Options

ChunkingOptions(
    strategy="semantic",      # or "fixed", "sliding"
    chunk_size=512,           # tokens per chunk
    overlap=50,               # token overlap (sliding only)
    preserve_code_blocks=True,
    preserve_paragraphs=True
)

📊 Monitoring & Analytics

Access Statistics

curl http://localhost:8000/stats

Returns:

  • Vector store chunk count
  • Cache configuration
  • Scoring weights
  • Default settings

Track Savings

Every optimization returns:

  • Token reduction percentage
  • Estimated cost savings
  • Processing time
  • Chunks analyzed vs selected

🧪 Testing

Run the example scripts:

# Basic example
python examples/basic_usage.py

# Advanced example with indexing
python examples/advanced_usage.py

🚀 Production Deployment

1. Use Production Server

pip install gunicorn
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000

2. Add Redis for Distributed Caching

Update .env:

REDIS_HOST=your-redis-host
REDIS_PORT=6379

3. Enable HTTPS

Use nginx or cloud load balancer for SSL termination.

4. Monitor Performance

  • Track processing times
  • Monitor token reduction rates
  • Watch for cache hit rates
  • Alert on API errors

🎓 Advanced Features

Query Expansion

Automatically expand vague queries for better matching.

Multi-Pass Retrieval

Retrieve context, then expand based on what was found.

Hierarchical Summarization

Create multi-level summaries for very large documents.

Learned Optimization

Train ML models on feedback to improve ranking.


🤝 Contributing

Contributions welcome! Areas for improvement:

  • Additional chunking strategies
  • More ranking methods
  • Support for more embedding providers
  • Performance optimizations
  • Better relationship detection
  • Query expansion
  • Feedback learning

📝 License

MIT License - Use freely in your projects!


🐛 Troubleshooting

"OpenAI API key not configured"

  • Add OPENAI_API_KEY to .env file

"Could not connect to TokenWise API"

  • Make sure server is running: python main.py
  • Check port 8000 is not in use

Slow performance

  • Pre-index large content with /index endpoint
  • Reduce target_tokens budget
  • Use "top-n" strategy instead of "diversity"
  • Enable Redis for distributed caching

Low quality results

  • Increase target_tokens budget
  • Lower minRelevanceScore threshold
  • Try "diversity" strategy
  • Check if content is properly chunked

📞 Support

For issues, questions, or feature requests, please open an issue on the repository.


🎯 Roadmap

  • Support for Cohere, HuggingFace embeddings
  • Relationship detection from code imports
  • Query expansion with LLM
  • Feedback-based learning
  • Multi-modal support (images, diagrams)
  • Real-time adaptation during conversation
  • Team analytics dashboard
  • Browser extension
  • Python SDK

Built with ❤️ for the LLM community

Save tokens. Save money. Build better AI applications.

About

Smart Context Optimization for LLMs - Reduce tokens by 66%, save 40% on API costs. Intelligent ranking and selection of relevant context using embeddings, keywords, and semantic analysis.

Topics

Resources

License

Stars

Watchers

Forks

Packages

No packages published