OllamaTor is a user-friendly desktop application that brings the power of Ollama's local Large Language Models (LLMs) to your fingertips. Chat with AI, adjust settings, and monitor system usage.
- Easy Chat: Select a model and start chatting immediately.
- Model Management: Dropdown selection to choose from your local LLMs collection.
- Customizable: Adjust temperature and chat history length.
- Resource Monitoring: Real-time CPU, RAM, and GPU (load + memory) usage monitor, plus an AI performance metric: TPM (tokens per minute).
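The TPM metric in the feature list can be computed from a token count and an elapsed time; a minimal sketch (the function name `tokens_per_minute` is illustrative, not taken from the OllamaTor source):

```python
def tokens_per_minute(token_count: int, elapsed_seconds: float) -> float:
    """Convert a raw token count over a time span into tokens per minute."""
    if elapsed_seconds <= 0:
        return 0.0  # avoid division by zero before any tokens arrive
    return token_count * 60.0 / elapsed_seconds

# Example: 150 tokens generated in 30 seconds -> 300.0 TPM
```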
TO-DO:
- Stop button to stop generating a response -> available in v0.0.3
- Fixing the math rendering (KaTeX)
- Customizable Copilots (an initial prompt that gives the AI a better understanding of what it is working on)
| Download Ollama.exe |
| --- |
- Select a Model: Choose a model from the dropdown.
- Chat: Type your prompt & click "Send".
- Settings: Use the gear icon to change temperature and history.
- Help: Use the help icon to start the step-by-step tour, download Ollama, and get instructions on how to properly install models.
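Under the hood, a chat turn like the one above maps onto Ollama's local HTTP API (`POST http://localhost:11434/api/generate` by default). The sketch below builds such a request with the `requests` package from the dependency list; the helper names and the default temperature are illustrative assumptions, not OllamaTor's actual code:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Assemble a non-streaming generate request for the Ollama API."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single complete response
        "options": {"temperature": temperature},
    }

def ask(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return the reply."""
    resp = requests.post(OLLAMA_URL, json=build_payload(model, prompt), timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]
```

Calling `ask("llama3", "Hello!")` requires Ollama to be running with that model pulled; otherwise the request will fail to connect.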
- Windows 10/11 (other OS not tested)
- Chrome, Edge or Electron
- Ollama
- Downloaded Ollama models
- Python (for running from source only; not needed for the compiled .exe), with these packages:
  - eel
  - requests
  - psutil
  - nvidia-ml-py
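When running from source, the packages above can be installed from PyPI in one step (a setup sketch; adjust for your own environment or virtualenv):

```shell
pip install eel requests psutil nvidia-ml-py
```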
Report bugs or suggest features via GitHub Issues. Pull requests welcome!