- git clone https://github.com/AllAboutAI-YT/easy-local-rag.git
- cd easy-local-rag
- pip install -r requirements.txt
- Install Ollama (https://ollama.com/download)
- ollama pull llama3 (or any other chat model you want to use)
- ollama pull mxbai-embed-large
- python upload.py (supports .pdf, .txt, and .json files)
- python localrag.py (with query rewriting; a sketch of the retrieval step follows this list)
- python localrag_no_rewrite.py (without query rewriting)
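For context, this is roughly what the retrieval step in these scripts does: embed the document chunks and the question with mxbai-embed-large, rank chunks by cosine similarity, and hand the best matches to llama3. A minimal sketch, assuming the official ollama Python package; the vault.txt file name, the top-3 cutoff, and the prompt wording are illustrative assumptions, not the exact code in localrag.py.

```python
# Minimal local-RAG sketch (assumptions: `pip install ollama numpy`,
# and a vault.txt with one text chunk per line, as written by upload.py).
import ollama
import numpy as np

def embed(text: str) -> np.ndarray:
    # mxbai-embed-large is the embeddings model pulled in the setup steps
    resp = ollama.embeddings(model="mxbai-embed-large", prompt=text)
    return np.array(resp["embedding"])

# Index: embed every chunk once
chunks = open("vault.txt", encoding="utf-8").read().splitlines()
vault = np.array([embed(c) for c in chunks])

# Query: embed the question and rank chunks by cosine similarity
query = "What does the contract say about payment terms?"
q = embed(query)
scores = vault @ q / (np.linalg.norm(vault, axis=1) * np.linalg.norm(q))
context = "\n".join(chunks[i] for i in scores.argsort()[-3:][::-1])

# Answer with the chat model, grounded in the retrieved context
reply = ollama.chat(
    model="llama3",
    messages=[{"role": "user",
               "content": f"Context:\n{context}\n\nQuestion: {query}"}],
)
print(reply["message"]["content"])
```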
For email RAG (added in v1.3), complete the same setup steps above, then:
- set your email login in .env (for Gmail, create an app password (video))
- python collect_emails.py to download your emails (a hedged IMAP sketch follows this list)
- python emailrag2.py to talk to your emails
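A hedged sketch of what the email-collection step can look like: log in to Gmail over IMAP with the app password from .env and dump plain-text bodies to a file. It assumes the python-dotenv package; the variable names EMAIL_ADDRESS and EMAIL_PASSWORD and the output file are illustrative, not necessarily what collect_emails.py expects.

```python
# Sketch only: fetch recent Gmail messages over IMAP using credentials
# from .env (EMAIL_ADDRESS / EMAIL_PASSWORD are assumed names).
import email
import imaplib
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the repo root
user = os.getenv("EMAIL_ADDRESS")
password = os.getenv("EMAIL_PASSWORD")  # the Gmail app password

with imaplib.IMAP4_SSL("imap.gmail.com") as imap:
    imap.login(user, password)
    imap.select("INBOX", readonly=True)
    _, data = imap.search(None, "ALL")
    with open("emails.txt", "w", encoding="utf-8") as out:
        for num in data[0].split()[-100:]:  # last 100 messages
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            out.write(f"Subject: {msg['Subject']}\n")
            for part in msg.walk():
                if part.get_content_type() == "text/plain":
                    body = part.get_payload(decode=True)
                    if body:
                        out.write(body.decode(errors="replace") + "\n")
```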
- Added email RAG support (v1.3)
- Added upload.py (v1.2)
- Replaced `\n\n` with `\n` (v1.2)
- New embeddings model mxbai-embed-large from Ollama (v1.2)
- Query rewriting to improve retrieval on vague questions (v1.2); see the sketch after this list
- Pick your model from the CLI (v1.1)
- python localrag.py --model mistral (llama3 is the default)
- Talk in a true loop with conversation history (v1.1)
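The query-rewrite feature from v1.2 is worth a sketch: before embedding, a vague follow-up question is restated by the chat model using recent conversation history, which gives the retrieval step more to match on. The prompt wording and history format below are assumptions, not the exact code in localrag.py.

```python
# Sketch of query rewriting: ask the chat model to make a vague follow-up
# question self-contained before it is embedded for retrieval.
import ollama

def rewrite_query(query: str, history: list[dict]) -> str:
    recent = "\n".join(f"{m['role']}: {m['content']}" for m in history[-4:])
    prompt = (
        "Rewrite the following question so it is self-contained and specific, "
        "using the conversation for context. Return only the rewritten question.\n\n"
        f"Conversation:\n{recent}\n\nQuestion: {query}"
    )
    resp = ollama.chat(model="llama3",
                       messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"].strip()

history = [
    {"role": "user", "content": "Summarize the Q3 sales report."},
    {"role": "assistant", "content": "Q3 revenue grew 12%, driven by EMEA."},
]
print(rewrite_query("why did it grow?", history))
# e.g. "Why did Q3 revenue grow 12%, particularly in EMEA?"
```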
https://www.youtube.com/c/AllAboutAI
RAG (retrieval-augmented generation) enhances LLMs by combining their language understanding with targeted retrieval of relevant information from external sources, often using embeddings stored in a vector database. The result is more accurate, trustworthy, and versatile AI-powered applications.
Ollama is an open-source platform that simplifies the process of running powerful LLMs locally on your own machine, giving users more control and flexibility in their AI projects. https://www.ollama.com