Boilerplate for working with Llama models locally through the Ollama API.
Ollama-boilerplate/
├── prompts/ # Folder containing prompt text files
│ ├── init_terminal.txt # Welcome print content
│ ├── system_prompt.txt # System prompt content
├── utility/ # Utility scripts
│ ├── build_messages.py # Builds messages for the Ollama API
│ ├── call_Ollama.py # Handles API calls to Ollama
│ ├── check_api.py # Validates the API key
│ ├── debug_print.py # Prints debug information
│ ├── display_init_terminal.py # Displays the welcome message in the terminal
│ ├── load_api.py # Loads the API key from .env
│ ├── print_response.py # Prints the Ollama response with a typewriter effect
│ ├── read_prompt.py # Reads content from the prompt text file
├── .env # Environment file (API endpoint)
├── .gitignore # Git ignore file
├── main.py # Main script to run the project
├── README.md # Project documentation
├── requirements.txt # List of dependencies
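For orientation, the utility scripts wrap what is essentially a single call to Ollama's REST chat endpoint. The sketch below is illustrative only: the function names, signatures, and error handling in utility/ may differ, but the /api/chat request shape shown is Ollama's standard non-streaming chat API, and the requests package is an assumption.

```python
# Illustrative sketch of the build-messages -> call-Ollama flow (assumes the
# requests package; the real utility/ modules may be implemented differently).
import os
import requests

OLLAMA_API = os.getenv("OLLAMA_API", "http://localhost:11434/")


def build_messages(system_prompt, user_input):
    """Assemble the message list expected by Ollama's /api/chat endpoint."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]


def call_ollama(messages, model="llama3.2"):
    """Send a non-streaming chat request and return the assistant's reply."""
    response = requests.post(
        OLLAMA_API.rstrip("/") + "/api/chat",
        json={"model": model, "messages": messages, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]


if __name__ == "__main__":
    reply = call_ollama(build_messages("You are a helpful assistant.", "Hello!"))
    print(reply)
```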
Before running the project, make sure the following prerequisites are in place:
- Python 3.8+
- Ollama Installed and Running
  - Visit ollama.com to download and install Ollama.
  - Once installed, ensure the server is running. By default, it should be available at http://localhost:11434/.
  - If the server is not running, start it with the following command:
    ollama serve
- Model Downloaded
  - Models must be downloaded before running the project. For example, to download the llama3.2 model:
    ollama pull llama3.2
  - Note: Without a downloaded model, the project will not work. Ensure the model is ready before proceeding (a quick preflight check is sketched after this list).
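If you want to verify both prerequisites from Python before running the project, a quick check along these lines works (illustrative; it assumes the requests package, and the endpoint and model name are examples):

```python
# Preflight check (illustrative): confirm the Ollama server answers and that
# the chosen model has already been pulled locally.
import requests

OLLAMA_API = "http://localhost:11434/"  # adjust if your server runs elsewhere
MODEL = "llama3.2"                      # the model you intend to use

# The root endpoint returns "Ollama is running" when the server is up.
requests.get(OLLAMA_API, timeout=5).raise_for_status()

# /api/tags lists the models available locally.
tags = requests.get(OLLAMA_API.rstrip("/") + "/api/tags", timeout=5).json()
local_models = [m["name"] for m in tags.get("models", [])]
if any(name.startswith(MODEL) for name in local_models):
    print(f"Server is up and '{MODEL}' is available.")
else:
    print(f"Model '{MODEL}' not found locally. Run: ollama pull {MODEL}")
```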
Install the required Python libraries from requirements.txt:
pip install -r requirements.txt
The .env file is used to store the Ollama API endpoint. You can create it inside the project folder from the command line:
echo "OLLAMA_API=http://localhost:11434/" > .env
Ensure the server URL matches the one provided by Ollama.
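For reference, loading the endpoint from .env typically looks like the sketch below. This assumes the python-dotenv package; the project's actual load_api.py may do it differently.

```python
# Illustrative loader for the OLLAMA_API setting (assumes python-dotenv;
# the project's load_api.py may differ).
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current working directory
OLLAMA_API = os.getenv("OLLAMA_API")
if not OLLAMA_API:
    raise RuntimeError("OLLAMA_API is not set. Create the .env file as shown above.")
print(f"Using Ollama endpoint: {OLLAMA_API}")
```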
Ensure that the prompts/ folder contains the following file:
- system_prompt.txt: A text file with the system prompt.
You can store different system prompt files in this folder and tailor them to your use case.
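Reading such a prompt file amounts to a few lines; a minimal stand-in for read_prompt.py (the actual implementation may differ) is:

```python
# Minimal prompt-file reader (illustrative stand-in for read_prompt.py).
def read_prompt(path):
    """Return the contents of a prompt text file, stripped of surrounding whitespace."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read().strip()


system_prompt = read_prompt("./prompts/system_prompt.txt")
```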
Run the main.py script with custom file paths and model names as parameters:
python3 main.py --system_prompt ./prompts/system_prompt.txt --model llama3.2
or
python3 main.py --sp ./prompts/system_prompt.txt --m llama3.2
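An argument parser consistent with the flags above could be set up as follows. This is a hedged sketch: main.py's actual defaults, help text, and flag handling may differ.

```python
# Illustrative argument parsing matching the documented flags
# (--system_prompt/--sp and --model/--m); main.py may differ in detail.
import argparse

parser = argparse.ArgumentParser(description="Chat with a local Ollama model.")
parser.add_argument("--system_prompt", "--sp", default="./prompts/system_prompt.txt",
                    help="Path to the system prompt text file")
parser.add_argument("--model", "--m", default="llama3.2",
                    help="Name of the Ollama model to use")
args = parser.parse_args()
print(f"Prompt file: {args.system_prompt}, model: {args.model}")
```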
If you run into problems, check the following:
- Server Not Running: Visit http://localhost:11434/. If it's not available, start the server with:
  ollama serve
- Model Not Found: Download the required model using:
  ollama pull <model-name>
- Dependency Issues: Run pip3 install -r requirements.txt to install all required dependencies.
- API Errors: Check the OLLAMA_API variable in your .env file and ensure it points to a running server (see the connection-check snippet after this list).
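To turn a dead connection into a readable hint rather than a traceback, a check like the following can help (illustrative; assumes the requests package and is not necessarily how the project handles errors):

```python
# Illustrative connection check with a friendly hint on failure.
import requests

try:
    requests.get("http://localhost:11434/", timeout=5)
except requests.exceptions.ConnectionError:
    print("Could not reach the Ollama server. Start it with: ollama serve")
```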
Is your problem not listed here? Feel free to reach out to me or submit an issue.
This project is open-source and available under the MIT License.