"Universal AI security framework - Protect LLM applications from prompt injection, jailbreaks, and adversarial attacks. Works with OpenAI, Anthropic, LangChain, and any LLM."
ai jailbreak ai-safety security-framework ai-defense ai-security prompt-injection llm-security prompt-security ai-hacking llm-protection openai-security chatgpt-security hacking-tools-ai
Updated Jan 19, 2026 - Python