A modern, async/sync Python client for the DeepSeek LLM API. Supports completions, chat, retries, and robust error handling. Built for local dev, CI, and production.
| Doc | Description |
|---|---|
| Getting Started | Quick setup guide |
| Web UI Guide | Guide to using the web interface |
| Features | Detailed feature list |
| API Reference | API documentation for developers |
| Deployment Guide | Deployment options and configurations |
| FAQ | Frequently asked questions |
For DeepSeek AI model capabilities, see DeepSeek documentation.
Below are screenshots showing the evolution of the DeepSeek Wrapper web UI and features over time:
Pre-release
Initial UI and feature set before public release.
Tool Status & Caching Panel
See per-tool status, cache stats, and manage tool caches directly from the UI.
- Sync & async support
- Type hints throughout
- Clean error handling
- Session-based chat history
- Markdown rendering
- File uploads & processing
- Current date & time info
- Multiple formats (ISO, US, EU)
- No external API required
- Automatic retries with backoff
- 94%+ test coverage
- Environment variable config
- Tool integration framework
- Built-in tools (Weather, Calculator)
- Custom tool creation system
- Tool status dashboard: visualize tool health, API key status, and cache performance in real time
- Integrated settings panel
- Secure API key storage in .env
- Tool configuration UI
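The automatic retries mentioned above follow the standard exponential-backoff pattern. The wrapper's internal retry code isn't reproduced here; this is a minimal, self-contained sketch of the technique (the `flaky_call` function and the delay values are illustrative, not part of the library):

```python
import time

def retry_with_backoff(fn, max_retries=3, base_delay=0.1):
    """Call fn(), retrying on failure with exponentially growing delays."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Illustrative flaky function: fails twice, then succeeds
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(retry_with_backoff(flaky_call))  # "ok" after two retries
```

In a real client the `except` clause would typically be narrowed to transient errors only (timeouts, HTTP 429/5xx) rather than catching every exception.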
A modern, session-based chat interface for DeepSeek, built with FastAPI and Jinja2.
To run locally:

```bash
uvicorn src.deepseek_wrapper.web:app --reload
```

Then open http://localhost:8000 in your browser.
Web UI Features:
- Chat with DeepSeek LLM (session-based history)
- Async backend for fast, non-blocking responses
- Reset conversation button
- Timestamps, avatars, and chat bubbles
- Markdown rendering in assistant responses
- Loading indicator while waiting for LLM
- Non‑flashing "Thinking…" indicator with progress bar and optional expandable live details (accessibility-friendly)
- Error banner for API issues
- Tool configuration in settings panel with API key management
For a comprehensive guide to using the web interface, see the Web UI Guide.
```bash
pip install -r requirements.txt
pip install -e .  # for local development
```

On Windows (PowerShell), a full setup looks like:

```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
"DEEPSEEK_API_KEY=sk-your-key" | Out-File -FilePath .env -Encoding ascii
uvicorn src.deepseek_wrapper.web:app --reload
```
For detailed installation instructions, see the Getting Started Guide.
```python
import asyncio

from deepseek_wrapper import DeepSeekClient

client = DeepSeekClient()
result = client.generate_text("Hello world!", max_tokens=32)
print(result)

# Async usage
async def main():
    result = await client.async_generate_text("Hello async world!", max_tokens=32)
    print(result)

asyncio.run(main())
```
```python
from deepseek_wrapper import DeepSeekClient
from deepseek_wrapper.utils import get_realtime_info

# Get real-time date information as JSON
realtime_data = get_realtime_info()
print(realtime_data)  # Prints current date in multiple formats

# Create a client with real-time awareness
client = DeepSeekClient()

# Use in a system prompt
system_prompt = f"""You are a helpful assistant with real-time awareness.
Current date and time information:
{realtime_data}
"""

# Send a message with the real-time-aware system prompt
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What's today's date?"},
]
response = client.chat_completion(messages)
print(response)  # Will include the current date
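The exact payload returned by `get_realtime_info` isn't reproduced here, but the multiple-formats idea (ISO, US, EU) can be sketched with the standard library alone; the key names below are illustrative, not the library's actual schema:

```python
import json
from datetime import date

def realtime_info_sketch(today=None):
    """Return a date in several common formats as a JSON string."""
    d = today or date.today()
    return json.dumps({
        "iso": d.strftime("%Y-%m-%d"),   # ISO 8601
        "us": d.strftime("%m/%d/%Y"),    # US style
        "eu": d.strftime("%d/%m/%Y"),    # EU style
    })

print(realtime_info_sketch(date(2025, 1, 31)))
# {"iso": "2025-01-31", "us": "01/31/2025", "eu": "31/01/2025"}
```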
```python
from deepseek_wrapper import DeepSeekClient, DateTimeTool, WeatherTool, CalculatorTool

# Create a client and register tools
client = DeepSeekClient()
client.register_tool(DateTimeTool())
client.register_tool(WeatherTool())
client.register_tool(CalculatorTool())

# Create a conversation
messages = [
    {"role": "user", "content": "What's the weather in London today? Also, what's the square root of 144?"},
]

# Get a response with tool usage
response, tool_usage = client.chat_completion_with_tools(messages)

# Print the final response
print(response)

# See which tools were used
for tool in tool_usage:
    print(f"Used {tool['tool']} with args: {tool['arguments']}")
```
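Under the hood, a tool-calling loop maps the tool name the model emits to a local callable, runs it with the model-supplied arguments, and feeds the result back into the conversation. The wrapper's actual protocol isn't shown here; this self-contained sketch illustrates only the dispatch step (the registry contents and the `eval`-based calculator are hypothetical and for illustration only):

```python
import math

# Registry mapping tool names to callables, as register_tool might build.
# eval is used only to keep this sketch short; a real tool would parse safely.
TOOLS = {
    "calculator": lambda expression: eval(expression, {"math": math}, {}),
}

def dispatch(tool_call):
    """Look up the requested tool and invoke it with the model-supplied args."""
    fn = TOOLS[tool_call["tool"]]
    return fn(**tool_call["arguments"])

# Simulated tool call, shaped like the tool_usage entries above
call = {"tool": "calculator", "arguments": {"expression": "math.sqrt(144)"}}
print(dispatch(call))  # 12.0
```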
For a complete API reference and advanced usage, see the API Reference.
- Set `DEEPSEEK_API_KEY` in your `.env` file or environment
- Optionally set `DEEPSEEK_BASE_URL`, `timeout`, and `max_retries`
- See `.env.example` for a template

Default model: `deepseek-chat` (per DeepSeek docs)
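Loading this configuration typically boils down to `os.getenv` with sensible fallbacks. The wrapper's actual config loader isn't shown here; this is a minimal sketch, and the `DEEPSEEK_TIMEOUT`/`DEEPSEEK_MAX_RETRIES` variable names and default values are assumptions for illustration:

```python
import os

def load_config():
    """Collect client settings from the environment, with fallback defaults."""
    return {
        "api_key": os.getenv("DEEPSEEK_API_KEY"),  # required in practice
        "base_url": os.getenv("DEEPSEEK_BASE_URL", "https://api.deepseek.com"),
        "timeout": float(os.getenv("DEEPSEEK_TIMEOUT", "30")),
        "max_retries": int(os.getenv("DEEPSEEK_MAX_RETRIES", "3")),
    }

os.environ["DEEPSEEK_API_KEY"] = "sk-demo"  # normally set in .env, not in code
cfg = load_config()
print(cfg["base_url"], cfg["max_retries"])
```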
For deployment options and environment configurations, see the Deployment Guide.
All methods accept extra keyword arguments for model parameters (e.g., `temperature`, `top_p`).
```bash
pytest --cov=src/deepseek_wrapper
```
- Run `pre-commit install` to enable hooks
Licensed under the Apache-2.0 license. See docs/LICENSE.
Note: The model selection feature is currently under development and is **not** functional yet.
The DeepSeek Wrapper will soon support switching between different DeepSeek models:
- deepseek-chat
- deepseek-coder
- deepseek-llm-67b-chat
- deepseek-llm-7b-chat
- deepseek-reasoner
When complete, users will be able to:
- Select different models through the settings panel
- See the currently active model in the UI
- Configure model-specific settings, such as extracting only final answers from reasoning models