This app is a document-grounded AI chatbot with the following features:
- Answers queries from a folder of 5 `.txt` FAQ documents (FAQ search)
- Answers queries from a `.csv` file of monthly sales (columns: Date, Product, Sales; sales analytics)
- LangChain agent with Supervisor for tool routing (FAQ and sales)
- FAQ agent always answers directly from the FAQ documentation, never from general LLM knowledge
- FAQ search using OpenAI embeddings and vector search
- Sales data Q&A using SQL over a SQLite database (automatically loaded from CSV)
- FastAPI backend with streaming responses (SSE)
- Streamlit chat UI for interactive chat (port 8501)
- Uses OpenAI API for both LLM and embeddings (set your OPENAI_API_KEY)
- All test files are in the `test/` directory
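The CSV-to-SQLite loading step mentioned above can be sketched with the standard library alone. The table name, column types, and sample rows here are illustrative assumptions, not the app's actual schema:

```python
import csv
import io
import sqlite3

def load_sales_csv(conn: sqlite3.Connection, csv_text: str) -> int:
    """Load a Date,Product,Sales CSV into a `sales` table; return row count."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (Date TEXT, Product TEXT, Sales REAL)"
    )
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = [(r["Date"], r["Product"], float(r["Sales"])) for r in reader]
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)

# Hypothetical sample data, for illustration only.
conn = sqlite3.connect(":memory:")
sample = "Date,Product,Sales\n2024-01-31,Widget,1200\n2024-02-29,Widget,900\n"
n = load_sales_csv(conn, sample)
total = conn.execute("SELECT SUM(Sales) FROM sales").fetchone()[0]
```

Once the table exists, the sales agent can answer questions by generating SQL against it.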
- Install dependencies:

```bash
pip install -r requirements.txt
pip install streamlit sseclient
```
- Set your OpenAI API key in your environment:

```bash
export OPENAI_API_KEY=sk-...
```
- Place your 5 FAQ `.txt` files in `data/faqs/`.
- Place your sales data in `data/sales.csv` (columns: Date, Product, Sales).
- Run the backend:

```bash
uvicorn app.main:app --reload
```
- Run the Streamlit chat UI:

```bash
streamlit run app/streamlit_chat.py
```
- Add your `OPENAI_API_KEY` to a `.env` file in the project root:

```
OPENAI_API_KEY=sk-...
```
- Build and start both the backend and the UI:

```bash
docker-compose up --build
```
- Access:
  - FastAPI backend: http://localhost:8000
  - Streamlit chat UI: http://localhost:8501
- The Streamlit UI is preconfigured to use the backend at `http://app:8000/chat` (Docker Compose networking).
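A minimal `docker-compose.yml` matching this layout might look like the following sketch. The service names, build context, and the `BACKEND_URL` variable are assumptions based on the URLs above, not the project's actual file:

```yaml
services:
  app:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000
    ports:
      - "8000:8000"
    env_file: .env
  ui:
    build: .
    command: streamlit run app/streamlit_chat.py --server.port 8501
    ports:
      - "8501:8501"
    environment:
      - BACKEND_URL=http://app:8000/chat  # "app" resolves via Compose networking
    depends_on:
      - app
```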
- Send POST requests to `/chat` with body `{ "message": "your question" }`.
- Responses are streamed back as text chunks (SSE).
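A minimal standard-library client for this endpoint can be sketched as below. The `data: ` line framing is the standard SSE format; the exact event shape the backend emits is an assumption:

```python
import json
import urllib.request

def parse_sse_line(line: str):
    """Return the payload of an SSE `data:` line, or None for other lines."""
    if line.startswith("data: "):
        return line[len("data: "):]
    return None

def stream_chat(message: str, url: str = "http://localhost:8000/chat"):
    """POST a message to /chat and yield streamed text chunks."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"message": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # HTTPResponse iterates line by line
            chunk = parse_sse_line(raw.decode("utf-8").rstrip("\r\n"))
            if chunk is not None:
                yield chunk

# Usage (requires the backend to be running):
#   for chunk in stream_chat("What were January sales?"):
#       print(chunk, end="")
```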
- The agent will automatically route questions to the FAQ or sales SQL agent as appropriate.
- FAQ answers are always grounded in your documentation. If no answer is found, the agent will say so.
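In the running app the routing decision is made by the LLM supervisor at runtime. Purely to illustrate the routing contract (the agent names here are hypothetical, and the keyword heuristic is a stand-in for the LLM), a toy version might look like:

```python
def route(question: str) -> str:
    """Toy stand-in for the supervisor: pick the FAQ agent or the sales agent.
    The real app lets an LLM make this choice; keywords are for illustration only."""
    sales_terms = ("sales", "revenue", "sold", "product")
    if any(term in question.lower() for term in sales_terms):
        return "sales_sql_agent"
    return "faq_agent"
```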
- To run a test manually:

```bash
python test/test_faq_agent.py
python test/test_sales_agent.py
python test/test_faq_tool.py
python test/test_fastapi_chat.py
```
- The FastAPI chat endpoint can be tested with `test/test_fastapi_chat.py` (streams SSE responses).