Ollama adapter for PULSE Protocol — run local AI models with the same interface as cloud providers.
Your data never leaves your machine. GDPR-compliant by design.
```python
# Switch from cloud to local — one line changes:
# from pulse_openai import OpenAIAdapter as AI; adapter = AI(api_key="sk-...")
from pulse import PulseMessage
from pulse_ollama import OllamaAdapter as AI; adapter = AI(model="llama3.2")

# Everything below stays EXACTLY the same
msg = PulseMessage(action="ACT.ANALYZE.SENTIMENT", parameters={"text": "I love open source!"})
response = adapter.send(msg)
print(response.content["parameters"]["result"])
```

| Feature | Cloud (OpenAI/Anthropic) | Local (Ollama) |
|---|---|---|
| Data leaves your machine | Yes | No |
| GDPR compliance | Depends on DPA | By design |
| Works offline | No | Yes |
| API key required | Yes | No |
| Cost per token | Yes | Free |
```bash
pip install pulse-ollama
```

Requirements: Ollama must be installed and running.
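If you are not sure whether the Ollama server is up, you can probe it before constructing the adapter. The helper below is a sketch, not part of this package; it assumes Ollama's HTTP API, where `GET /api/tags` lists installed models.

```python
# Hypothetical helper: probe the local Ollama server before using the adapter.
# Assumes Ollama's HTTP API (GET /api/tags lists installed models).
import urllib.request
import urllib.error

def is_ollama_running(host: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at `host`."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

if not is_ollama_running():
    print("Start Ollama first: run `ollama serve`")
```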
```bash
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2   # 2GB, fast
ollama pull mistral    # 4GB, high quality
ollama pull phi3      # 2GB, efficient
ollama pull gemma2    # 5GB, Google's model
```

```python
from pulse import PulseMessage
from pulse_ollama import OllamaAdapter

adapter = OllamaAdapter(model="llama3.2")

# See what models you have installed
print(adapter.list_models())
# ['llama3.2:latest', 'mistral:latest', 'phi3:latest']

# Ask a question
msg = PulseMessage(
    action="ACT.QUERY.DATA",
    parameters={"query": "Explain PULSE Protocol in one sentence"}
)
response = adapter.send(msg)
print(response.content["parameters"]["result"])

# Sentiment analysis
msg = PulseMessage(
    action="ACT.ANALYZE.SENTIMENT",
    parameters={"text": "I love this open source project!"}
)
response = adapter.send(msg)
print(response.content["parameters"]["result"])
# {"sentiment": "positive", "confidence": 0.95, "explanation": "..."}

# Translate
msg = PulseMessage(
    action="ACT.TRANSFORM.TRANSLATE",
    parameters={"text": "Hello, world!", "target_language": "German"}
)
response = adapter.send(msg)
print(response.content["parameters"]["result"])
# Hallo, Welt!
```

| PULSE Action | Description |
|---|---|
| `ACT.QUERY.DATA` | Ask questions, get answers |
| `ACT.CREATE.TEXT` | Generate text from instructions |
| `ACT.ANALYZE.SENTIMENT` | Analyze emotional tone |
| `ACT.ANALYZE.PATTERN` | Find patterns in data |
| `ACT.TRANSFORM.TRANSLATE` | Translate between languages |
| `ACT.TRANSFORM.SUMMARIZE` | Summarize long text |
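Under the hood, an adapter has to turn each PULSE action plus its parameters into a model prompt. The sketch below illustrates that mapping; the template strings are made up for illustration and may differ from what `OllamaAdapter` actually sends.

```python
# Illustrative only: how a PULSE action and its parameters might
# be rendered into a prompt. Not the adapter's real templates.
PROMPT_TEMPLATES = {
    "ACT.QUERY.DATA": "Answer the question: {query}",
    "ACT.ANALYZE.SENTIMENT": (
        "Classify the sentiment of the following text as positive, "
        "negative, or neutral, and reply in JSON: {text}"
    ),
    "ACT.TRANSFORM.TRANSLATE": "Translate into {target_language}: {text}",
    "ACT.TRANSFORM.SUMMARIZE": "Summarize the following text: {text}",
}

def build_prompt(action: str, parameters: dict) -> str:
    """Render the prompt template for a PULSE action."""
    try:
        template = PROMPT_TEMPLATES[action]
    except KeyError:
        raise ValueError(f"Unsupported action: {action}") from None
    return template.format(**parameters)
```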
```python
adapter = OllamaAdapter(
    model="llama3.2",               # Default model
    host="http://localhost:11434",  # Ollama server URL
    timeout=120,                    # Inference timeout (seconds)
)

# Override model per-request
msg = PulseMessage(
    action="ACT.CREATE.TEXT",
    parameters={
        "instructions": "Write a poem about AI",
        "model": "mistral",      # Override default
        "temperature": 0.9,      # Creativity
    }
)

# Remote Ollama server (shared team deployment)
adapter = OllamaAdapter(host="http://ai-server.company.com:11434")
```

The whole point of PULSE adapters is that switching providers is one line:
```python
# Local model (privacy, offline, free)
from pulse_ollama import OllamaAdapter as AI
adapter = AI(model="llama3.2")

# Cloud (scale, latest models)
# from pulse_openai import OpenAIAdapter as AI
# adapter = AI(api_key="sk-...")

# Everything else stays identical
response = adapter.send(msg)
```

```
pulse-protocol    # Core (install this first)
pulse-openai      # OpenAI GPT
pulse-anthropic   # Anthropic Claude
pulse-ollama      # Local models (this package)
pulse-gemini      # Google Gemini
pulse-binance     # Binance exchange
pulse-bybit       # Bybit exchange
pulse-kraken      # Kraken exchange
```
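If the provider choice should live in configuration rather than an import line, a small factory can pick the adapter by name. This is a hypothetical convenience, not part of any pulse-* package, and the `AnthropicAdapter` class name is an assumption.

```python
# Hypothetical factory: pick an adapter class by provider name,
# importing the package lazily so only the chosen one is required.
import importlib

ADAPTERS = {
    "ollama": ("pulse_ollama", "OllamaAdapter"),
    "openai": ("pulse_openai", "OpenAIAdapter"),
    "anthropic": ("pulse_anthropic", "AnthropicAdapter"),  # assumed class name
}

def make_adapter(provider: str, **kwargs):
    """Instantiate the adapter for `provider`, e.g. make_adapter("ollama", model="llama3.2")."""
    try:
        module_name, class_name = ADAPTERS[provider]
    except KeyError:
        raise ValueError(f"Unknown provider: {provider}") from None
    module = importlib.import_module(module_name)
    return getattr(module, class_name)(**kwargs)
```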
Apache 2.0 — free forever.