The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible)
-
Updated
Oct 22, 2025 - Rust
The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible)
An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data. Supports NLP, pattern matching, and customizable pipelines.
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
Building blocks for rapid development of GenAI applications
Fastest LLM gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, 1000+ models support & <100 µs overhead at 5k RPS.
⚕️GenAI powered multi-agentic medical diagnostics and healthcare research assistance chatbot. 🏥 Designed for healthcare professionals, researchers and patients.
A curated list of blogs, videos, tutorials, code, tools, scripts, and anything useful to help you learn Azure Policy - by @jesseloudon
PAIG (Pronounced similar to paige or payj) is an open-source project designed to protect Generative AI (GenAI) applications by ensuring security, safety, and observability.
Real-time guardrail that shows token spend & kills runaway LLM/agent loops.
ChatGPT API Usage using LangChain, LlamaIndex, Guardrails, AutoGPT and more
Open-source MCP gateway and control plane for teams to govern which tools agents can use, what they can do, and how it’s audited—across agentic IDEs like Cursor, or other agents and AI tools.
Framework for LLM evaluation, guardrails and security
LangEvals aggregates various language model evaluators into a single platform, providing a standard interface for a multitude of scores and LLM guardrails, for you to protect and benchmark your LLM models and pipelines.
Make AI work for Everyone - Monitoring and governing for your AI/ML
LLM proxy to observe and debug what your AI agents are doing.
Xiangxin Guardrails is an open-source, context-aware AI guardrails platform that provides protection against prompt injection attacks, content safety risks, and data leakage. It can be deployed as a security gateway or integrated via API, offering enterprise-grade, fully private deployment options.
First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and safeguards)
Trustworthy question-answering AI plugin for chatbots in the social sector with advanced content performance analysis.
Learn how to create an AI Agent with Django, LangGraph, and Permit.
Awesome AWS service control policies (SCPs), Resource Control Policies (RCPs), and other organizational policies
Add a description, image, and links to the guardrails topic page so that developers can more easily learn about it.
To associate your repository with the guardrails topic, visit your repo's landing page and select "manage topics."