LLM Penetration Testing Framework - Discover vulnerabilities in AI applications before attackers do. 100+ attacks + AI-powered adaptive mode.
testing security-testing jailbreak-detection ai-security penetration-testing llm prompt-injection llm-security security-researchers prompt-security ai-pentesting openai-security chatgpt-security langchain-security llm-vulnerability
Updated Jan 19, 2026 - Python