Chat offline and locally with many AI models. OllamaTor is a Python/Eel-based front-end for the Ollama API.

Welcome to OllamaTor 👋

OllamaTor is a user-friendly desktop application that brings the power of Ollama's local Large Language Models (LLMs) to your fingertips. Chat with AI, adjust settings, and monitor system usage.

[Screenshot of OllamaTor]
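
As context for the "Python/Eel-based front-end" description above: Eel serves a local web folder and bridges Python and JavaScript inside a Chromium app window. The sketch below is a minimal, hypothetical illustration of that pattern (file layout and function names are assumed, not taken from OllamaTor's source):

```python
# Minimal Eel app in the style OllamaTor describes (hypothetical names,
# not the project's actual code). Eel serves a local "web" folder and
# bridges Python <-> JavaScript.
import eel
import requests

eel.init("web")  # assumed folder containing index.html


@eel.expose  # callable from JavaScript as eel.ask_model(...)
def ask_model(model: str, prompt: str) -> str:
    # Forward the prompt to the local Ollama server (default port 11434).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    # Opens the UI in an app-mode Chrome/Edge window.
    eel.start("index.html", size=(900, 650))
```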

Features

  • Easy Chat: Select a model and start chatting immediately.
  • Model Management: Dropdown to choose from your collection of local LLMs.
  • Customizable: Adjust temperature and chat history length.
  • Resource Monitoring: Real-time CPU, RAM, and GPU (load and memory) usage, plus a TPM (tokens-per-minute) performance readout for the AI (a monitoring sketch follows this list).
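
The repository doesn't spell out its monitoring internals, but given the psutil and nvidia-ml-py packages listed under Requirements, a sampler along these lines is a plausible minimal sketch (all names are illustrative, not OllamaTor's actual code):

```python
# Illustrative resource sampler using psutil and nvidia-ml-py (pynvml).
import psutil
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetUtilizationRates, nvmlDeviceGetMemoryInfo,
)


def sample_usage() -> dict:
    cpu = psutil.cpu_percent(interval=0.5)  # CPU % over a short window
    ram = psutil.virtual_memory().percent   # % of physical RAM in use
    nvmlInit()
    try:
        handle = nvmlDeviceGetHandleByIndex(0)                # first NVIDIA GPU
        gpu_load = nvmlDeviceGetUtilizationRates(handle).gpu  # GPU load in %
        mem = nvmlDeviceGetMemoryInfo(handle)
        gpu_mem = 100 * mem.used / mem.total                  # % of VRAM in use
    finally:
        nvmlShutdown()
    return {"cpu": cpu, "ram": ram, "gpu_load": gpu_load, "gpu_mem": gpu_mem}


if __name__ == "__main__":
    print(sample_usage())
```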

TO-DO:

  • Stop button to stop generating a response -> available in v0.0.3
  • Fix the math rendering (KaTeX)
  • Customizable Copilots (an initial prompt that gives the AI a better understanding of what it is working on)

Installation

Download Ollama.exe

Getting Started

  1. Select a Model: Choose a model from the dropdown.
  2. Chat: Type your prompt & click "Send".
  3. Settings: Use the gear icon to change temperature and history.
  4. Help: Use the help icon to start the step-by-step tour, download Ollama, and get instructions for properly installing models.
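
All of the above maps onto Ollama's local HTTP API, which listens on port 11434 by default. As a rough sketch with requests (the endpoints are Ollama's documented ones; the surrounding code is illustrative, not OllamaTor's source), listing models, sending a chat turn with a temperature setting and a bounded history, and deriving a tokens-per-minute figure looks like this:

```python
# Illustrative calls against Ollama's documented HTTP API.
import requests

BASE = "http://localhost:11434"

# Step 1: the model dropdown can be filled from the installed-model list.
models = [m["name"] for m in requests.get(f"{BASE}/api/tags").json()["models"]]

# Steps 2-3: send a prompt with a chosen temperature and a bounded history.
history = [{"role": "user", "content": "Why is the sky blue?"}]
resp = requests.post(
    f"{BASE}/api/chat",
    json={
        "model": models[0],
        "messages": history[-10:],  # keep only the last N turns
        "options": {"temperature": 0.7},
        "stream": False,
    },
    timeout=600,
).json()
print(resp["message"]["content"])

# A tokens-per-minute figure like the TPM readout can be derived from the
# response metadata (eval_duration is in nanoseconds).
tpm = resp["eval_count"] / (resp["eval_duration"] / 1e9) * 60
print(f"~{tpm:.0f} tokens/min")
```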

Requirements

  • Windows 10/11 (other operating systems untested)
  • Chrome, Edge, or Electron
  • Ollama
  • Downloaded Ollama models
  • Python (for source code execution only; not needed for the compiled .exe):
    • eel
    • requests
    • psutil
    • nvidia-ml-py
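
If you're running from source rather than the compiled .exe, the packages above can be installed in one step with pip (a standard command; version pins aren't specified here):

```
pip install eel requests psutil nvidia-ml-py
```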

Contributing

Report bugs or suggest features via GitHub Issues. Pull requests welcome!
