LLM story writer with a focus on high-quality long output based on a user provided prompt.
LLM prompt augmentation with RAG, integrating external custom data from a variety of sources and allowing you to chat with those documents
🦙 Manage Ollama models from your CLI!
TransFire is a simple tool that allows you to use your locally running LLMs while away from home, without requiring port forwarding
Use your open-source local model from the terminal
(Work in Progress) A cross-platform desktop client for offline LLaMA on CPU
50-line local LLM assistant in Python with Streamlit and GPT4All