Chat With Multiple PDF Documents using Conversational RAG on CPU with LLAMA2, Langchain and ChromaDB

This is a Conversational Retrieval-Augmented Generation (RAG) knowledge-base chat built on top of LLAMA2 (embeddings and model), Langchain and ChromaDB, orchestrated with the FastAPI framework to provide an endpoint for easy communication.
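
At a high level, the pipeline loads the PDFs, splits them into chunks, embeds the chunks with LLAMA2 into ChromaDB, and answers questions through a conversational retrieval chain with chat memory. The snippet below is a minimal sketch of that flow using the classic Langchain API; the model path, chunk sizes and retriever settings are illustrative assumptions, not necessarily the values used in main.py.

# Minimal sketch of the ingestion + retrieval pipeline, assuming the classic
# Langchain API, a local GGUF LLAMA2 model file and the documents folder.
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import LlamaCpp
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

MODEL_PATH = "models/llama-2-7b-chat.Q4_K_M.gguf"  # assumed local model file

def build_chain() -> ConversationalRetrievalChain:
    # Load every PDF from the documents folder and split it into chunks.
    docs = PyPDFDirectoryLoader("documents").load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_documents(docs)

    # Embed the chunks with LLAMA2 and persist them in a local ChromaDB store.
    embeddings = LlamaCppEmbeddings(model_path=MODEL_PATH)
    vectorstore = Chroma.from_documents(chunks, embeddings, persist_directory="db")

    # Wire the CPU-only LLAMA2 model, the retriever and chat memory together.
    llm = LlamaCpp(model_path=MODEL_PATH, n_ctx=2048, temperature=0.1)
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
        memory=memory,
    )

if __name__ == "__main__":
    chain = build_chain()
    print(chain({"question": "What is the invoice number value?"})["answer"])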


Quickstart

The conversational RAG pipeline runs fully offline on a local CPU.

  1. Set up a virtual environment and install the requirements:
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
  2. Copy your PDF files to the documents folder.

  3. Run the FastAPI server, which ingests your documents on startup, runs your question through the LLM RAG chain and returns the answer (a hypothetical sketch of the endpoint wrapper follows the command below):

python main.py "What is the invoice number value?"
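
The following is a hypothetical sketch of how the chain could be served through FastAPI; the /ask route, the request model and the rag_pipeline module are illustrative names, not the actual interface of main.py (which also accepts the question directly on the command line).

# Hypothetical sketch of exposing the RAG chain through a FastAPI endpoint;
# route and module names are illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn

# build_chain stands for the ingestion/chain-building sketch above;
# the rag_pipeline module name is hypothetical.
from rag_pipeline import build_chain

app = FastAPI()
chain = build_chain()

class Question(BaseModel):
    question: str

@app.post("/ask")
def ask(payload: Question) -> dict:
    # Run the question through the conversational RAG chain and return the answer.
    result = chain({"question": payload.question})
    return {"answer": result["answer"]}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)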
