This Python script acts as a seamless bridge between a Meshtastic LoRa mesh network and a locally hosted Ollama large language model (LLM). It listens for messages on the mesh that begin with the prefix @ai, forwards those prompts to the Ollama LLM for processing, and returns the AI-generated responses directly to the sender node over the mesh.
Designed for simplicity and reliability, this tool enables users to interact with powerful AI models even in remote or offline mesh network environments where traditional internet access is unavailable. It's ideal for adding intelligent, conversational capabilities to decentralized, low-bandwidth communication setups.
- Connects to a Meshtastic node over serial USB
- Filters incoming messages to process only those beginning with @ai
- Sends prompts to Ollama LLM via HTTP API
- Handles errors and timeouts gracefully
- Sanitizes and trims AI responses to fit Meshtastic message size limits
- Sends AI responses directly back to the sender node
- Logs all important events with timestamps
- Gracefully shuts down on user interrupt
- Connecting: The script connects to the Meshtastic node over serial USB
- Listening: It listens indefinitely for incoming messages on the mesh network
- Filtering: When a message is received, it checks if the text starts with the prefix @ai
- Prompt Processing: If so, the prefix is stripped and the remaining text is sent as a prompt to the Ollama LLM
- LLM Response: The LLM response is received, sanitized to ASCII characters, trimmed to 200 characters, and sent back over the mesh network to the original sender node
- Logging: Events such as message receipt, API calls, errors, and sends are printed with timestamps
- Shutdown: On Ctrl+C, the script cleanly closes the serial interface and exits
- Message size (optional): Configure your model or prompts to keep responses short, since Meshtastic messages have a 200-byte limit and LLMs often generate lengthy replies (see the sketch after this list)
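The flow above can be pictured with a minimal sketch. This is an approximation of what main.py does, not the exact source; the names OLLAMA_URL, PREFIX, and query_ollama are illustrative, and localhost stands in for your Ollama server IP:

```python
import time

import requests
import meshtastic.serial_interface
from pubsub import pub

OLLAMA_URL = "http://localhost:11434/api/generate"  # point at your Ollama host
OLLAMA_MODEL = "llama2"
PREFIX = "@ai"

def query_ollama(prompt):
    """Send the prompt to Ollama and return the full (non-streamed) reply text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": OLLAMA_MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def on_receive(packet, interface):
    """Called by the Meshtastic library for every incoming text message."""
    text = packet.get("decoded", {}).get("text", "")
    if not text.startswith(PREFIX):
        return  # ignore anything without the @ai prefix
    prompt = text[len(PREFIX):].strip()
    print(time.strftime("%Y-%m-%d %H:%M:%S"), "prompt from", packet["fromId"], ":", prompt)
    try:
        answer = query_ollama(prompt)
    except requests.RequestException as err:
        answer = f"AI error: {err}"
    # Sanitize to ASCII and trim to fit the 200-character mesh message budget
    answer = answer.encode("ascii", "ignore").decode("ascii")[:200]
    interface.sendText(answer, destinationId=packet["fromId"])

# Subscribe before opening the interface so no early packets are missed
pub.subscribe(on_receive, "meshtastic.receive.text")
iface = meshtastic.serial_interface.SerialInterface()  # first USB-attached node

try:
    while True:
        time.sleep(1)  # listen indefinitely
except KeyboardInterrupt:
    iface.close()  # clean shutdown on Ctrl+C
```

The "meshtastic.receive.text" pubsub topic fires only for decoded text packets, which is why the prefix check and reply logic can stay this small.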
- Python 3.7+
- Python packages: meshtastic, requests, pypubsub (for pubsub)
- A Meshtastic node connected via USB
- An Ollama LLM model (default "llama2")
- The Ollama LLM API running locally and accessible at http://[server-ip]:11434/api/generate

See Ollama on how to download and serve a local LLM; a quick connectivity check is sketched below.
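To confirm the API is reachable before launching the bridge, a short request like the following can be used (localhost stands in for [server-ip], and "llama2" is the default model name):

```python
import requests

# One-off connectivity check against Ollama's generate endpoint
r = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Reply with the word ok.", "stream": False},
    timeout=60,
)
r.raise_for_status()
print(r.json()["response"])  # prints the model's full (non-streamed) reply
```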
- OLLAMA_MODEL: Set this to the name of the Ollama model you serve (default is "llama2"); see the example below
Ensure your Meshtastic device is connected via USB.
```
git clone https://github.com/axlixr/meshtastic-ollama-bridge.git
cd meshtastic-ollama-bridge
pip install meshtastic requests pypubsub
python main.py
```

Wait for messages. When a mesh node sends a message starting with @ai, e.g.:

```
@ai Hello how are you LLM?
```

the script will query Ollama and reply directly to the sender node with the AI's answer.
GNU General Public License v3.0
Created by Axl.
Disclaimer: This is an unofficial project and is not affiliated with or endorsed by the Meshtastic team.
- Meshtastic – Mesh communication platform
- Meshtastic Python API – For direct serial communication with the mesh
- Ollama – Get up and running with large language models

