"Universal AI security framework - Protect LLM applications from prompt injection, jailbreaks, and adversarial attacks. Works with OpenAI, Anthropic, LangChain, and any LLM."
-
Updated
Jan 19, 2026 - Python
"Universal AI security framework - Protect LLM applications from prompt injection, jailbreaks, and adversarial attacks. Works with OpenAI, Anthropic, LangChain, and any LLM."
LLM Penetration Testing Framework - Discover vulnerabilities in AI applications before attackers do. 100attacks + AI-powered adaptive mode.
Add a description, image, and links to the openai-security topic page so that developers can more easily learn about it.
To associate your repository with the openai-security topic, visit your repo's landing page and select "manage topics."