Agentic RAG with LlamaIndex
This project enhances Retrieval-Augmented Generation (RAG) with an agent-based approach built on LlamaIndex. Instead of a single retrieve-then-generate pass, the agent uses an LLM's function-calling capability to choose among tools such as vector search and summarization, reason over multiple steps, maintain conversation history, and apply metadata filters for precise answers across multiple documents. The project also addresses scaling challenges and LLM context-window limits, making it suitable for large document collections.
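To make the tool-selection loop concrete, here is a minimal, self-contained sketch of the pattern: an agent routes each query to either a vector-search tool or a summarization tool, applies an optional metadata filter, and records conversation history. In the real project, LlamaIndex query engines and an LLM's function calling would drive these decisions; the heuristic router, document list, and all names below are purely illustrative stand-ins so the example runs without an index or API key.

```python
import re
from dataclasses import dataclass, field

# Illustrative in-memory "chunks" with metadata; in the real project these
# would be nodes indexed by LlamaIndex.
DOCS = [
    {"id": 1, "source": "paper_a", "text": "Transformers use self-attention."},
    {"id": 2, "source": "paper_b", "text": "RAG retrieves documents to ground answers."},
    {"id": 3, "source": "paper_a", "text": "Attention heads attend to different positions."},
]

def tokens(s):
    """Lowercased word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", s.lower()))

def vector_search(query, metadata_filter=None):
    """Stand-in for a vector-index query: ranks chunks by word overlap,
    optionally restricted to one source via a metadata filter."""
    q = tokens(query)
    candidates = [d for d in DOCS
                  if metadata_filter is None or d["source"] == metadata_filter]
    ranked = sorted(candidates,
                    key=lambda d: len(q & tokens(d["text"])), reverse=True)
    return [d["text"] for d in ranked[:2]]

def summarize(metadata_filter=None):
    """Stand-in for a summary-index tool: reads all matching chunks."""
    return " ".join(d["text"] for d in DOCS
                    if metadata_filter is None or d["source"] == metadata_filter)

@dataclass
class ToyAgent:
    """Routes each query to a tool and keeps conversation history,
    loosely mimicking an LLM function-calling loop."""
    history: list = field(default_factory=list)

    def run(self, query, metadata_filter=None):
        # A real agent lets the LLM pick the tool via function calling;
        # a keyword heuristic stands in for that decision here.
        if "summar" in query.lower():
            tool, answer = "summarize", summarize(metadata_filter)
        else:
            tool, answer = "vector_search", " | ".join(
                vector_search(query, metadata_filter))
        self.history.append({"query": query, "tool": tool, "answer": answer})
        return answer

agent = ToyAgent()
print(agent.run("What is self-attention?", metadata_filter="paper_a"))
print(agent.run("Summarize everything"))
print([turn["tool"] for turn in agent.history])
```

The design mirrors the project's key ideas at toy scale: tool choice per query, metadata filtering at retrieval time, and a history list that a multi-step agent can consult on later turns.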