0.3.0
Ollama Manager Guide
Introduction
Ollama Manager provides a streamlined way to prototype and develop applications using Ollama's AI models. Instead of manually managing the Ollama server process, installing it as a service, or running it in a separate terminal, Ollama Manager handles the entire lifecycle programmatically.
Key Benefits for Prototyping:
- Start/stop Ollama server automatically within your Python code
- Configure resources dynamically based on your needs
- Handle multiple server instances for testing (see the sketch under Configuration Management)
- Automatic cleanup of resources
- Platform-independent operation
Quick Start
```python
from clientai import ClientAI
from clientai.ollama import OllamaManager

# Basic usage - server starts automatically and stops when done
with OllamaManager() as manager:
    # Create a client that connects to the managed server
    client = ClientAI('ollama', host="http://localhost:11434")

    # Use the client normally
    response = client.generate_text(
        "Explain quantum computing",
        model="llama2"
    )
    print(response)
# Server automatically stops when exiting the context
```
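As the later examples show, the explicit host argument can be omitted: Ollama's default address is http://localhost:11434, which is also what a default-configured manager listens on.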
Installation
```bash
# Install with Ollama support
pip install "clientai[ollama]"

# Install with all providers
pip install "clientai[all]"
```
Core Concepts
Server Lifecycle Management
- Context Manager (Recommended)

```python
with OllamaManager() as manager:  # Server starts automatically
    client = ClientAI('ollama')
    # Use client...
# Server stops automatically
```

- Manual Management

```python
manager = OllamaManager()
try:
    manager.start()
    client = ClientAI('ollama')
    # Use client...
finally:
    manager.stop()
```
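The context manager ties the server's lifetime to a single block, which suits scripts and tests; manual management is useful when the server should outlive any one block, and the try/finally ensures the server is stopped even if an exception is raised in between.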
Configuration Management
```python
from clientai.ollama import OllamaServerConfig

# Create custom configuration
config = OllamaServerConfig(
    host="127.0.0.1",
    port=11434,
    gpu_layers=35,
    memory_limit="8GiB"
)

# Use configuration with manager
with OllamaManager(config) as manager:
    client = ClientAI('ollama')
    # Use client...
```
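Because each manager takes its own configuration, you can also run managed servers side by side, for instance when testing against multiple instances. A minimal sketch, assuming the second port (11435, an arbitrary free port) is available; both addresses here are illustrative:

```python
from clientai import ClientAI
from clientai.ollama import OllamaManager, OllamaServerConfig

# Two configurations bound to different ports
config_a = OllamaServerConfig(host="127.0.0.1", port=11434)
config_b = OllamaServerConfig(host="127.0.0.1", port=11435)

with OllamaManager(config_a) as manager_a, OllamaManager(config_b) as manager_b:
    # Point each client at its own managed server
    client_a = ClientAI('ollama', host="http://127.0.0.1:11434")
    client_b = ClientAI('ollama', host="http://127.0.0.1:11435")
    # Use the clients independently...
```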
For more information, see the docs.
What's Changed
- Ollama manager added by @igorbenav in #5
Full Changelog: v0.2.1...v0.3.0