A powerful local RAG (Retrieval-Augmented Generation) AI agent that enables natural language querying of SQL databases using LangChain and Ollama.
RAG AI Agent
├── Database Connection Layer
├── Schema Analysis & Vectorization
├── Query Processing Engine
├── LangChain RAG Pipeline
├── Ollama LLM Integration
└── User Interface
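The Schema Analysis & Vectorization layer turns database metadata into text documents that can be embedded and retrieved. The project's actual implementation lives under src/embeddings/ and is not shown here; as a minimal illustration (using SQLite and assumed naming), the idea is:

```python
import sqlite3

def describe_schema(conn: sqlite3.Connection) -> list[str]:
    """Turn each table's schema into a text document suitable for embedding."""
    docs = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, default, pk)
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        col_desc = ", ".join(f"{c[1]} ({c[2]})" for c in cols)
        docs.append(f"Table {table} with columns: {col_desc}")
    return docs

# Example: an in-memory database with one table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, revenue REAL)")
print(describe_schema(conn))
```

In the full pipeline, documents like these would be embedded with `nomic-embed-text` and stored in a vector index so the agent can retrieve only the tables relevant to a question.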
1. Install dependencies:

        pip install -r requirements.txt
2. Configure your database connection in config/database.yaml
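A minimal example of what config/database.yaml might contain (the keys below are assumptions for illustration; see config/README.md for the options actually supported):

```yaml
database:
  dialect: postgresql
  host: localhost
  port: 5432
  name: sales
  user: rag_agent
  password: ${DB_PASSWORD}   # read from the environment, not stored in the file
```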
3. Start the Ollama service:

        ollama serve
4. Pull the required models:

        ollama pull llama3.1
        ollama pull nomic-embed-text
5. Initialize the database schema:

        python scripts/initialize_schema.py
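The contents of scripts/initialize_schema.py are not shown here. As a sketch, a schema-initialization script for a sales database like the one queried below might look as follows (the `customers`/`orders` tables and column names are assumptions, and SQLite stands in for whatever database the config points at):

```python
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS customers (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id),
    order_date  TEXT NOT NULL,
    amount      REAL NOT NULL
);
"""

def initialize_schema(db_path: str) -> None:
    """Create the sample tables if they do not already exist."""
    conn = sqlite3.connect(db_path)
    conn.executescript(DDL)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    initialize_schema("sales.db")
    print("schema initialized")
```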
6. Run the agent:

        python main.py
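At query time, the RAG pipeline retrieves the schema snippets most relevant to the question and hands them to the LLM along with the question. A simplified sketch of how such a prompt might be assembled (the template wording is illustrative, not the agent's actual prompt):

```python
def build_prompt(schema_snippets: list[str], question: str) -> str:
    """Combine retrieved schema context with the user's question."""
    context = "\n".join(f"- {s}" for s in schema_snippets)
    return (
        "You are a SQL assistant. Using only the tables below, "
        "write a SQL query that answers the question.\n\n"
        f"Schema:\n{context}\n\n"
        f"Question: {question}\nSQL:"
    )

prompt = build_prompt(
    ["Table orders with columns: id, customer_id, order_date, amount"],
    "What are the sales last month?",
)
print(prompt)
```

Retrieval keeps the prompt small: only the handful of tables that match the question are included, rather than the entire schema.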
Example queries:
- "What are the sales last month?"
- "Show me the top 10 customers by revenue"
- "What's the average order value this quarter?"
- "Which products have the highest profit margins?"
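For instance, "Show me the top 10 customers by revenue" would typically be translated by the LLM into SQL like the following (the `customers`/`orders` schema and the sample rows here are assumptions for illustration):

```python
import sqlite3

# Tiny in-memory dataset standing in for the real database
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO orders VALUES (1, 100.0), (1, 250.0), (2, 75.0);
""")

# The kind of SQL the agent would generate for the question above
sql = """
SELECT c.name, SUM(o.amount) AS revenue
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.id
ORDER BY revenue DESC
LIMIT 10;
"""
for name, revenue in conn.execute(sql):
    print(name, revenue)  # Acme first: 350.0 total vs. Globex's 75.0
```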
RAG/
├── src/
│ ├── agents/ # AI agent implementations
│ ├── database/ # Database connection and operations
│ ├── embeddings/ # Vector embeddings and storage
│ ├── llm/ # LLM integration (Ollama)
│ ├── rag/ # RAG pipeline components
│ └── utils/ # Utility functions
├── config/ # Configuration files
├── data/ # Sample data and schemas
├── scripts/ # Setup and utility scripts
├── tests/ # Test files
└── ui/ # User interface components
See config/README.md for detailed configuration options.
MIT License