
Code repository showcases my experiments and prototypes in Generative AI, along with best practices I’ve learned and developed along the way.


Welcome to Abhi-LLM-Apps

Welcome to Abhi-LLM-Apps, a collection of projects and prototypes demonstrating best practices for working with Large Language Models (LLMs), AI-powered applications, and AI agents. The repo is a hub for exploring AI solutions, leveraging advanced tools, frameworks, and methodologies to help developers build innovative, safe, responsible, ethical, and scalable AI applications.

Overview

The repo features a diverse range of LLM applications powered by models from OpenAI, Meta (Llama 3), and Mistral, as well as frameworks like Ollama for local model deployment. For advanced users, it includes implementations of Nvidia NIMs and VILA, plus highly customized tools such as Mixture of Agents, Route LLM, LLM-as-a-Judge, and Semantic Cache for specialized use cases.

Getting Started

Clone the repository

git clone https://github.com/abhishekdodda/abhi-llm-apps.git

Navigate to the desired project directory

cd abhi-llm-apps/advanced_tools_frameworks/advanced_llm_eval

Install the required dependencies

pip install -r requirements.txt

Follow the project-specific instructions in each project's README.md file to set up and run the app.

The highlights below summarize what is built here.

Highlights

AI Agents

Discover intelligent, task-specific agents designed for:

  • Customer Support: Deliver friendly and comprehensive responses to user queries.
  • Research Content Creation: Plan, write, and edit high-quality research articles or blogs with collaborative agents. These agents leverage advanced conversational frameworks for efficiency and accuracy.
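The plan-then-write collaboration these agents use can be sketched with the LLM call stubbed out. Names like `call_llm`, `plan_article`, and `write_article` are illustrative, not taken from this repo:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (OpenAI, Ollama, etc.)."""
    if prompt.startswith("PLAN:"):
        return "1. Intro\n2. Methods\n3. Conclusion"
    return "Draft based on: " + prompt

def plan_article(topic: str) -> str:
    # Planner agent: produce an outline for the writer agent.
    return call_llm(f"PLAN: outline an article on {topic}")

def write_article(outline: str) -> str:
    # Writer agent: expand the planner's outline into a draft.
    return call_llm(f"Write sections for this outline:\n{outline}")

outline = plan_article("semantic caching")
draft = write_article(outline)
```

In a real pipeline an editor agent would critique the draft and loop back to the writer until quality criteria are met.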

Handling Data Privacy

Explore methods to ensure data privacy in AI workflows:

  • Anonymization Techniques: Protect sensitive data using tools like Microsoft Presidio.
  • PII Safeguards: Anonymize personally identifiable information (PII) with synthetic data generation and secure storage solutions.
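The core anonymization idea can be shown with a stdlib-only sketch that replaces matched PII spans with entity labels. Presidio provides production-grade recognizers for many more entity types; the patterns below are illustrative:

```python
import re

# Toy recognizers: map an entity label to a regex for that PII type.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def anonymize(text: str) -> str:
    # Replace each detected PII span with its entity label.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Reach me at jane.doe@example.com or 555-123-4567."))
# → Reach me at <EMAIL> or <PHONE>.
```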

Finetuning LLMs

Learn how to fine-tune models like Llama 3.2 for specific applications:

  • Low-Rank Adaptation (LoRA): Efficiently optimize model layers for specialized, domain-specific tasks.
  • Dataset Integration: Customize finetuning workflows using curated datasets for improved performance.
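Why LoRA is efficient can be seen from a back-of-the-envelope parameter count: instead of updating a full d x d weight matrix, it trains two low-rank factors B (d x r) and A (r x d) whose product is added to the frozen weights. The dimensions below are illustrative:

```python
# Full fine-tuning updates every weight; LoRA updates only the factors.
d, r = 512, 8
full_params = d * d            # parameters updated by full fine-tuning
lora_params = d * r + r * d    # parameters updated by LoRA
print(full_params, lora_params)  # → 262144 8192, about 32x fewer
```

With Hugging Face's `peft` library this corresponds to setting `r=8` in a `LoraConfig`; larger ranks trade memory for expressiveness.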

Guardrails for AI Safety

Implement safety frameworks to ensure ethical AI deployments:

  • NeMo Guardrails: Manage sensitive data and moderate responses with predefined conversational boundaries.
  • LlamaGuard: Detect and block unsafe content, preventing jailbreaking attempts and ensuring compliance with safety policies.
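A minimal sketch of an input guardrail shows where the check sits: the user message is screened before it ever reaches the model. Real frameworks like NeMo Guardrails and LlamaGuard use an LLM classifier and policy definitions rather than this keyword list:

```python
# Hypothetical blocked-topic list; real rails are policy-driven.
BLOCKED_TOPICS = {"credit card number", "social security"}

def guarded_reply(user_message: str, model) -> str:
    # Input rail: refuse before invoking the model at all.
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with requests involving sensitive personal data."
    return model(user_message)

reply = guarded_reply("What is my credit card number?", model=lambda m: "ok")
```

An output rail works the same way on the model's response, filtering unsafe content before it is returned to the user.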

Advanced Tools and Frameworks

Mixture of Agents (MoA)

Leverage the collective power of multiple LLMs to achieve superior performance:

  • Combines responses from various models to produce refined, accurate outputs.
  • Ideal for applications like synthetic data generation and complex problem-solving.
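The data flow of the MoA pattern can be sketched with stubbed models: several proposers answer independently, then an aggregator synthesizes their candidates. In practice the aggregator is itself an LLM prompted with all candidate answers:

```python
# Stubbed proposer models; real ones would be distinct LLMs.
def model_a(q: str) -> str: return "Answer A to " + q
def model_b(q: str) -> str: return "Answer B to " + q

def aggregator(question: str, candidates: list[str]) -> str:
    # Stand-in for an aggregator LLM: just join candidates to show the flow.
    joined = " | ".join(candidates)
    return f"Synthesis for '{question}': {joined}"

question = "What is MoA?"
answer = aggregator(question, [m(question) for m in (model_a, model_b)])
```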

Route LLM

Optimize cost and performance by dynamically routing queries between strong and weak models:

  • Utilizes router frameworks for real-time decision-making.
  • Ensures a balance between resource efficiency and response quality.
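The routing decision can be sketched with a deliberately crude heuristic: send short queries to a cheap model and long ones to a strong model. The real RouteLLM framework trains its router on preference data instead of using word count:

```python
def route(query: str) -> str:
    # Toy router: word count as a stand-in for a learned difficulty score.
    return "strong" if len(query.split()) > 12 else "weak"

# Stubbed model endpoints keyed by tier.
MODELS = {
    "weak": lambda q: f"[weak] {q}",
    "strong": lambda q: f"[strong] {q}",
}

def answer(query: str) -> str:
    return MODELS[route(query)](query)
```

Because most traffic is easy, even a modest router can cut costs substantially while reserving the strong model for hard queries.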

Semantic Cache Implementation

Boost application speed and reduce inference costs:

  • Semantic Cache: Reuse answers to semantically similar queries for faster retrieval.
  • Enable scalable and cost-effective AI applications without sacrificing performance.
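The mechanism can be shown with a stdlib-only sketch: embed each query, and on lookup return a cached answer when cosine similarity to a stored query clears a threshold. Production systems use a real embedding model and a vector store instead of the bag-of-words vectors here:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.entries = []          # list of (embedding, answer) pairs
        self.threshold = threshold

    def get(self, query: str):
        q = embed(query)
        for emb, answer in self.entries:
            if cosine(q, emb) >= self.threshold:
                return answer      # cache hit: no LLM call needed
        return None                # cache miss: call the LLM, then put()

    def put(self, query: str, answer: str):
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("what is semantic caching", "Reusing answers for similar queries.")
hit = cache.get("what is semantic caching?")
```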

LLM as a Judge

A robust evaluation framework for your LLM applications:

  • The framework applies the LLM-as-a-Judge methodology, utilizing powerful language models as evaluators to analyze and grade generated outputs. This ensures objective and consistent evaluation across metrics.
  • Answer Relevancy: Measures how relevant the generated answers are to the provided query.
  • Faithfulness: Ensures that generated answers are grounded in the provided context and do not hallucinate information.
  • Demonstrates evaluation using the Ragas framework and Nvidia models.
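The judging loop can be sketched with the evaluator model stubbed: each metric becomes a rubric prompt, and the judge returns a 0-1 score. Metric names mirror the ones above (answer relevancy, faithfulness); the scoring rule in the stub is purely illustrative:

```python
def judge_llm(prompt: str) -> float:
    """Stand-in for a strong evaluator model returning a 0-1 score."""
    return 1.0 if "context" in prompt else 0.5

def evaluate(question: str, answer: str, context: str) -> dict:
    # One rubric prompt per metric; a real judge sees full grading criteria.
    rubric = {
        "answer_relevancy": f"Score how well this answer addresses the question: {question} / {answer}",
        "faithfulness": f"Score whether the answer is grounded in this context: {context} / {answer}",
    }
    return {metric: judge_llm(prompt) for metric, prompt in rubric.items()}

scores = evaluate("capital of France?", "Paris", "France's capital is Paris.")
```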

Nvidia NIMs and VILA

  • Nvidia NIMs: Enhance retrieval and reranking capabilities with Mistral-based models.
  • Nvidia VILA: Integrate Vision-Language Models for multimedia content understanding, such as analyzing images and videos.

Purpose

This repository is intended for:

  • AI Enthusiasts: Explore prototypes showcasing the latest advancements in LLMs and AI tools.
  • Developers: Build scalable, efficient, and privacy-compliant AI applications.

Conclusion

Whether you're developing next-generation AI systems, experimenting with innovative tools like Mixture of Agents and Route LLM, or safeguarding AI applications with NeMo Guardrails, Abhi-LLM-Apps provides a comprehensive resource for technical exploration and practical implementation.

For detailed insights into each project, explore the individual folders and their documentation. Join us on this journey to advance AI capabilities and build ethical, scalable, and impactful solutions. Together, let’s shape the future of AI! 🚀
