This sample walks through the full end-to-end process of building a RAG application with Prompty and Azure AI Foundry. It includes GPT-3.5 Turbo LLM application code, evaluations, deployment automation with the AZD CLI, GitHub Actions workflows for evaluation and deployment, and intent mapping to route requests across multiple LLM tasks.
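A minimal sketch of how a Prompty template might be executed from Python, assuming the `prompty` package with its Azure invoker; the `chat.prompty` file, input names, and grounding documents below are hypothetical and not taken from the sample itself.

```python
# Sketch only: executing a Prompty template against an Azure OpenAI deployment.
# Assumes a hypothetical chat.prompty whose frontmatter points at a
# GPT-3.5 Turbo deployment; field names are illustrative.
import prompty
import prompty.azure  # registers the Azure OpenAI invoker

retrieved_docs = [
    {"title": "Tent care", "content": "Always dry your tent before storing it."},
]

answer = prompty.execute(
    "chat.prompty",                   # template with model config + prompt
    inputs={
        "question": "How should I store my tent?",
        "documents": retrieved_docs,  # grounding context for RAG
    },
)
print(answer)
```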
Open-WebUI-Functions is a collection of custom pipelines, filters, and integrations designed to enhance Open WebUI. These functions enable seamless interactions with Azure AI, N8N, and other AI models, providing dynamic request handling, preprocessing, and automation.
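For orientation, a minimal sketch of the standard Open WebUI filter shape (a `Filter` class with `Valves` settings plus `inlet`/`outlet` hooks); the routing metadata shown is illustrative and not code from this repository.

```python
# Sketch of an Open WebUI filter function: preprocess requests on the way in,
# post-process responses on the way out. The Azure routing detail is a
# hypothetical example, not the repo's actual logic.
from pydantic import BaseModel


class Filter:
    class Valves(BaseModel):
        # Hypothetical setting: tag requests destined for an Azure AI backend.
        azure_route_prefix: str = "azure/"

    def __init__(self):
        self.valves = self.Valves()

    def inlet(self, body: dict, __user__: dict | None = None) -> dict:
        # Runs before the request reaches the model.
        body.setdefault("metadata", {})["routed_via"] = self.valves.azure_route_prefix
        return body

    def outlet(self, body: dict, __user__: dict | None = None) -> dict:
        # Runs on the model response before it is returned to the UI.
        return body
```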
Model Mondays is a weekly livestream with Discord office hours that helps you navigate the fast-moving ecosystem of generative AI models through 5-minute roundups and 15-minute spotlight sessions. Build your model IQ and make informed model choices!
The LLMAgentOps Toolkit is a repository that provides a foundational structure for building LLM Agent-based applications using the Semantic Kernel. It serves as a starting point for data scientists and developers, facilitating experimentation, evaluation, and deployment of LLM Agent-based applications to production.
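A minimal sketch of a Semantic Kernel call in Python, assuming the `semantic-kernel` package with an Azure OpenAI chat deployment; the service ID, deployment name, and prompt are placeholders rather than the toolkit's own structure.

```python
# Sketch only: a single Semantic Kernel prompt invocation. A real agent-based
# app would register plugins and orchestrate multi-step runs on top of this.
import asyncio
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion


async def main() -> None:
    kernel = Kernel()
    kernel.add_service(
        AzureChatCompletion(
            service_id="chat",
            deployment_name="gpt-4o",  # placeholder deployment name
            endpoint="https://<resource>.openai.azure.com",
            api_key="<key>",
        )
    )
    result = await kernel.invoke_prompt("Summarize the agent's last run log.")
    print(result)


asyncio.run(main())
```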
DataSage is an AI-powered question-answering system for tabular (SQL) data, leveraging Azure AI Foundry, LangGraph, Azure SQL DB, and Streamlit. It enables users to query databases using natural language and retrieve intelligent, context-aware responses. Deployment is supported via Python and Bicep for seamless Azure resource provisioning.
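A minimal sketch of the kind of LangGraph flow such a system might use, assuming the `langgraph` package; the state fields and node bodies are stubbed placeholders, not DataSage's actual prompts or SQL handling.

```python
# Sketch only: a two-node graph that turns a question into SQL, then answers.
from typing import TypedDict
from langgraph.graph import StateGraph, END


class QAState(TypedDict):
    question: str
    sql: str
    answer: str


def write_sql(state: QAState) -> dict:
    # In a real app an LLM translates the question into T-SQL here.
    return {"sql": f"-- SQL for: {state['question']}"}


def run_and_summarize(state: QAState) -> dict:
    # In a real app the SQL runs against Azure SQL DB and the rows are
    # summarized by the model; here we just echo a placeholder.
    return {"answer": f"Executed {state['sql']!r} and summarized the rows."}


graph = StateGraph(QAState)
graph.add_node("write_sql", write_sql)
graph.add_node("answer", run_and_summarize)
graph.set_entry_point("write_sql")
graph.add_edge("write_sql", "answer")
graph.add_edge("answer", END)

app = graph.compile()
print(app.invoke({"question": "Which region had the highest sales last month?"}))
```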
Model Mondays is a weekly livestreamed series on Microsoft Reactor that helps you make informed model choice decisions with timely updates and model deep-dives. Watch live for the content. Join Discord for the discussions.
Demonstrates a workflow for evaluating LLM function calling. It uses GitHub Copilot to generate synthetic evaluation data and Azure AI Foundry to manage and review the evaluation results.
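A minimal sketch of one way to score function-calling accuracy against synthetic eval cases; the data format, exact-match scoring rule, and stubbed model call are assumptions, not the sample's actual pipeline.

```python
# Sketch only: exact-match scoring of predicted tool calls against synthetic
# expected calls. The eval case shape and the stubbed call_model are illustrative.
import json

eval_cases = [
    {
        "query": "What's the weather in Oslo tomorrow?",
        "expected": {"name": "get_weather", "arguments": {"city": "Oslo", "day": "tomorrow"}},
    },
]


def score_case(expected: dict, actual: dict) -> bool:
    # The model must pick the right function and produce the same argument values.
    return (
        actual.get("name") == expected["name"]
        and actual.get("arguments") == expected["arguments"]
    )


def call_model(query: str) -> dict:
    # Stub standing in for the real LLM call that returns a tool call.
    return {"name": "get_weather", "arguments": {"city": "Oslo", "day": "tomorrow"}}


results = [score_case(c["expected"], call_model(c["query"])) for c in eval_cases]
print(json.dumps({"accuracy": sum(results) / len(results)}, indent=2))
```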
This project explores document chunking strategies and vector search algorithms in Azure AI Search for Retrieval-Augmented Generation (RAG). It leverages Azure OpenAI embeddings and GPT-4 to improve retrieval accuracy and response quality. The solution includes an Azure Function for data loading and Bicep for resource deployment.
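A minimal sketch of fixed-size chunking with overlap plus a vector query against Azure AI Search, assuming the `openai` and `azure-search-documents` packages; index, field, and deployment names are placeholders and the chunking strategy shown is only one of those the project explores.

```python
# Sketch only: overlapping fixed-size chunks, an Azure OpenAI embedding,
# and a vector query against an Azure AI Search index.
from openai import AzureOpenAI
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery


def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    # Overlapping windows keep context that would otherwise be cut at chunk edges.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


aoai = AzureOpenAI(api_key="<key>", api_version="2024-02-01",
                   azure_endpoint="https://<resource>.openai.azure.com")
embedding = aoai.embeddings.create(model="text-embedding-ada-002",
                                   input="How do I return an item?").data[0].embedding

search = SearchClient(endpoint="https://<search>.search.windows.net",
                      index_name="docs-index", credential=AzureKeyCredential("<key>"))
results = search.search(
    search_text=None,
    vector_queries=[VectorizedQuery(vector=embedding, k_nearest_neighbors=3,
                                    fields="contentVector")],
)
for doc in results:
    print(doc["title"])  # placeholder field name
```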