Agentic LLM Vulnerability Scanner / AI red teaming kit
-
Updated
Dec 25, 2024 - Python
Agentic LLM Vulnerability Scanner / AI red teaming kit
The fastest && easiest LLM security guardrails for AI Agents and applications.
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.
User prompt attack detection system
Engineered to help red teams and penetration testers exploit large language model AI solutions vulnerabilities.
Exposing Jailbreak Vulnerabilities in LLM Applications with ARTKIT
Example of running last_layer with FastAPI on vercel
Add a description, image, and links to the llm-guardrails topic page so that developers can more easily learn about it.
To associate your repository with the llm-guardrails topic, visit your repo's landing page and select "manage topics."