dPrompts enables teams to perform distributed, bulk LLM operations locally using Ollama, which is cost-effective and works on most laptops with an integrated GPU.
Run the installer script:

```bash
curl -fsSL https://raw.githubusercontent.com/HexmosTech/dPrompts/main/install.sh | bash
```

This will:

- Download and install the latest `dpr` binary to `/usr/local/bin`
- Copy `.dprompts.toml` to your home directory (if present in the current directory)
- Check/install Ollama and the required model
- Start the Ollama server if not already running
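To confirm the install succeeded, you can check that the binary landed on your `PATH` (plain shell commands, nothing dPrompts-specific):

```bash
# Verify the dpr binary is installed and executable
command -v dpr
ls -l /usr/local/bin/dpr
```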
Configuration:

- Place your configuration file as `.dprompts.toml` in your home directory (`$HOME/.dprompts.toml`).
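If a sample `.dprompts.toml` ships with the repository and the installer did not already copy it, putting it in place might look like this (a minimal sketch; adjust the source path to wherever your config lives):

```bash
# Copy the project config to the home directory, where dpr expects it
cp .dprompts.toml "$HOME/.dprompts.toml"
```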
Run the worker with `make worker` or:

```bash
dpr --mode=worker
```

Run the client with `make client`, or manually:

```bash
dpr --mode=client --args='{"prompt":"Why is the sky blue?"}' --metadata='{"type":"manpage","category":"science"}'
```
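As a rough end-to-end sketch (assuming the config is already in `$HOME/.dprompts.toml` and using only the flags shown above), you can start a worker in the background and submit a couple of jobs from the client:

```bash
# Start a worker in the background to consume queued jobs
dpr --mode=worker &

# Submit jobs with different metadata; --args and --metadata take JSON
dpr --mode=client \
  --args='{"prompt":"Explain what a manpage is in one paragraph."}' \
  --metadata='{"type":"manpage","category":"docs"}'

dpr --mode=client \
  --args='{"prompt":"Why is the sky blue?"}' \
  --metadata='{"type":"manpage","category":"science"}'
```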
Run the Ollama server:

```bash
ollama serve
```

Pull a model:

```bash
ollama pull gemma2:2b
```

List available models:

```bash
ollama list
```

Test if Ollama is running:

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "gemma2:2b",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "stream": false
}'
```
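To pull only the model's reply out of the JSON response (assuming `jq` is installed; `.message.content` is where Ollama's non-streaming chat endpoint places the answer):

```bash
# Extract just the assistant's reply text from the chat response
curl -s http://localhost:11434/api/chat -d '{
  "model": "gemma2:2b",
  "messages": [{ "role": "user", "content": "Why is the sky blue?" }],
  "stream": false
}' | jq -r '.message.content'
```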
Stop the Ollama server (if running in the foreground): press `Ctrl+C` in the terminal running `ollama serve`.

Kill an Ollama server running in the background:

```bash
pkill ollama
```
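To confirm no Ollama process is left behind (standard process tools, not part of dPrompts):

```bash
# List any remaining Ollama processes; no output means none are running
pgrep -a ollama
```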
Notes:

- The `.dprompts.toml` file must be placed in your home directory.
- You can customize job arguments and metadata using the `--args` and `--metadata` flags (as JSON).
- The worker will process jobs and store results in the configured PostgreSQL database.
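Because `--args` and `--metadata` take JSON, bulk submission can be scripted with a small loop; the sketch below assumes a hypothetical `prompts.txt` file (one prompt per line) and uses `jq` only to JSON-encode each prompt safely:

```bash
# prompts.txt is a hypothetical input file with one prompt per line
while IFS= read -r prompt; do
  # jq -cn safely JSON-encodes the prompt text for --args
  args=$(jq -cn --arg p "$prompt" '{prompt: $p}')
  dpr --mode=client \
    --args="$args" \
    --metadata='{"type":"manpage","category":"bulk"}'
done < prompts.txt
```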