A curated list of awesome AI security related frameworks, standards, learning resources and open source tools.
If you want to contribute, create a PR or contact me @ottosulin.
- GenAI Security podcast
- OWASP ML TOP 10
- OWASP LLM TOP 10
- OWASP AI Security and Privacy Guide
- NIST AIRC - NIST Trustworthy & Responsible AI Resource Center
- The MLSecOps Top 10 by Institute for Ethical AI & Machine Learning
- OWASP Multi-Agentic System Threat Modeling
- OWASP: CheatSheet – A Practical Guide for Securely Using Third-Party MCP Servers 1.0
- Damn Vulnerable MCP Server - A deliberately vulnerable implementation of the Model Context Protocol (MCP) for educational purposes.
- OWASP WrongSecrets LLM exercise
- vulnerable-mcp-servers-lab - A collection of servers which are deliberately vulnerable to learn Pentesting MCP Servers.
- FinBot Agentic AI Capture The Flag (CTF) Application - FinBot is an Agentic Security Capture The Flag (CTF) interactive platform that simulates real-world vulnerabilities in agentic AI systems using a simulated Financial Services-focused application.
- NIST AI Risk Management Framework
- ISO/IEC 42001 Artificial Intelligence Management System
- ISO/IEC 23894:2023 Information technology — Artificial intelligence — Guidance on risk management
- Google Secure AI Framework
- ENISA Multilayer Framework for Good Cybersecurity Practices for AI
- OWASP Artificial Intelligence Maturity Assessment
- Google Secure AI Framework
- CSA AI Model Risk Framework
- NIST AI 100-2e2023
- AVIDML
- MITRE ATLAS
- ISO/IEC 22989:2022 Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
- MIT AI Risk Repository
- AI Incident Database
- NIST AI Glossary
- The Arcanum Prompt Injection Taxonomy
- CSA LLM Threats Taxonomy
- Malware Env for OpenAI Gym - makes it possible to write agents that learn to manipulate PE files (e.g., malware) to achieve some objective (e.g., bypass AV) based on a reward provided by taking specific manipulation actions
- Deep-pwning - a lightweight framework for experimenting with machine learning models with the goal of evaluating their robustness against a motivated adversary
- Counterfit - generic automation layer for assessing the security of machine learning systems
- DeepFool - A simple and accurate method to fool deep neural networks
- Snaike-MLFlow - MLflow red team toolsuite
- HackingBuddyGPT - An automatic pentester (+ corresponding [benchmark dataset](https://github.com/ipa -lab/hacking-benchmark))
- Charcuterie - code execution techniques for ML or ML adjacent libraries
- OffsecML Playbook - A collection of offensive and adversarial TTP's with proofs of concept
- BadDiffusion - Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023
- Exploring the Space of Adversarial Images
- Adversarial Machine Learning Library(Ad-lib)](https://github.com/vu-aml/adlib) - Game-theoretic adversarial machine learning library providing a set of learner and adversary modules
- Adversarial Robustness Toolkit - ART focuses on the threats of Evasion (change the model behavior with input modifications), Poisoning (control a model with training data modifications), Extraction (steal a model through queries) and Inference (attack the privacy of the training data)
- cleverhans - An adversarial example library for constructing attacks, building defenses, and benchmarking both
- foolbox - A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
- TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
- garak - security probing tool for LLMs
- agentic_security - Agentic LLM Vulnerability Scanner / AI red teaming kit
- Agentic Radar - Open-source CLI security scanner for agentic workflows.
- llamator - Framework for testing vulnerabilities of large language models (LLM).
- whistleblower - Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API
- LLMFuzzer - 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. 🚀💥
- vigil-llm - ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
- FuzzyAI - A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
- EasyJailbreak - An easy-to-use Python framework to generate adversarial jailbreak prompts.
- promptmap - a prompt injection scanner for custom LLM applications
- PyRIT - The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
- PurpleLlama - Set of tools to assess and improve LLM security.
- Giskard-AI - 🐢 Open-Source Evaluation & Testing for AI & LLM systems
- promptfoo - Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration.
- HouYi - The automated prompt injection framework for LLM-integrated applications.
- llm-attacks - Universal and Transferable Attacks on Aligned Language Models
- Dropbox llm-security - Dropbox LLM Security research code and results
- llm-security - New ways of breaking app-integrated LLMs
- OpenPromptInjection - This repository provides a benchmark for prompt Injection attacks and defenses
- Plexiglass - A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).
- ps-fuzz - Make your GenAI Apps Safe & Secure 🚀 Test & harden your system prompt
- EasyEdit - Modify an LLM's ground truths
- spikee) - Simple Prompt Injection Kit for Evaluation and Exploitation
- Prompt Hacking Resources - A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection
- mcp-injection-experiments - Code snippets to reproduce MCP tool poisoning attacks.
- gptfuzz - Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts
- AgentDojo - A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
- jailbreakbench - JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track]
- giskard - 🐢 Open-Source Evaluation & Testing library for LLM Agents
- TrustGate - Generative Application Firewall (GAF) to detects, prevents and blocks attacks against GenAI Applications
- blackice - BlackIce is an open-source containerized toolkit designed for red teaming AI models, including Large Language Models (LLMs) and classical machine learning (ML) models. Inspired by the convenience and standardization of Kali Linux in traditional penetration testing, BlackIce simplifies AI security assessments by providing a reproducible container image preconfigured with specialized evaluation tools.
- augustus - LLM security testing framework for detecting prompt injection, jailbreaks, and adversarial attacks. 190+ probes, 28 providers, single Go binary. Production-ready with concurrent scanning, rate limiting, and retry logic.
- guardian-cli - AI-Powered Security Testing & Vulnerability Scanner. Guardian CLI is an intelligent security testing tool that leverages AI to automate penetration testing, vulnerability assessment, and security auditing.
- AI-Red-Teaming-Playground-Labs - AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
- HackGPT - A tool using ChatGPT for hacking
- mcp-for-security - A collection of Model Context Protocol servers for popular security tools like SQLMap, FFUF, NMAP, Masscan and more. Integrate security testing and penetration testing into AI workflows.
- cai - Cybersecurity AI (CAI), an open Bug Bounty-ready Artificial Intelligence (paper)
- AIRTBench - Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models
- PentestGPT - A GPT-empowered penetration testing tool
- HackingBuddyGPT - Helping Ethical Hackers use LLMs in 50 Lines of Code or less..
- HexStrikeAI - HexStrike AI MCP Agents is an advanced MCP server that lets AI agents (Claude, GPT, Copilot, etc.) autonomously run 150+ cybersecurity tools for automated pentesting, vulnerability discovery, bug bounty automation, and security research. Seamlessly bridge LLMs with real-world offensive security capabilities.
- Burp MCP Server - MCP Server for Burp
- burpgpt - A Burp Suite extension that integrates OpenAI's GPT to perform an additional passive scan for discovering highly bespoke vulnerabilities and enables running traffic-based analysis of any type.
- AI-Infra-Guard - A comprehensive, intelligent, and easy-to-use AI Red Teaming platform developed by Tencent Zhuque Lab.It integrates modules for Infra Scan,MCP Scan,and Jailbreak Evaluation,providing a one-click web UI, REST APIs, and Docker-based deployment for comprehensive AI security evaluation.
- strix - Strix are autonomous AI agents that act just like real hackers - they run your code dynamically, find vulnerabilities, and validate them through actual proof-of-concepts
- mcp-security-hub - A growing collection of MCP servers bringing offensive security tools to AI assistants. Nmap, Ghidra, Nuclei, SQLMap, Hashcat and more.
- AutoPentestX - AutoPentestX – Linux Automated Pentesting & Vulnerability Reporting Tool
- CyberStrikeAI - AI-native security testing platform built in Go. Integrates 100+ security tools with an intelligent orchestration engine, role-based testing with predefined security roles, skills system, and comprehensive lifecycle management. Uses MCP protocol and AI agents for end-to-end automation from conversational commands to vulnerability discovery.
- redamon - AI-powered agentic red team framework that automates offensive security operations from reconnaissance to exploitation to post-exploitation with zero human intervention.
- OWASP LLM and Generative AI Security Center of Excellence Guide
- OWASP Agentic AI – Threats and Mitigations
- OWASP AI Security Solutions Landscape
- OWASP GenAI Incident Response Guide
- OWASP LLM and GenAI Data Security Best Practices
- OWASP Securing Agentic AI Applications
- CSA Maestro AI Threat Modeling Framework
- Claude Code Security Review - An AI-powered security review GitHub Action using Claude to analyze code changes for security vulnerabilities.
- GhidraGPT - Integrates GPT models into Ghidra for automated code analysis, variable renaming, vulnerability detection, and explanation generation.
- datasig - Dataset fingerprinting for AIBOM
- OWASP AIBOM - AI Bill of Materials
- secml-torch - SecML-Torch: A Library for Robustness Evaluation of Deep Learning Models
- awesome-ai-safety
- MCP-Security-Checklist - A comprehensive security checklist for MCP-based AI tools. Built by SlowMist to safeguard LLM plugin ecosystems.
- Awesome-MCP-Security - Everything you need to know about Model Context Protocol (MCP) security.
- secure-mcp-gateway - This Secure MCP Gateway is built with authentication, automatic tool discovery, caching, and guardrail enforcement.
- mcp-context-protector - context-protector is a security wrapper for MCP servers that addresses risks associated with running untrusted MCP servers, including line jumping, unexpected server configuration changes, and other prompt injection attacks
- mcp-guardian - MCP Guardian manages your LLM assistant's access to MCP servers, handing you realtime control of your LLM's activity.
- MCP Audit VSCode Extension - Audit and log all GitHub Copilot MCP tool calls in VSCode in centrally with ease.
- Guardrail.ai - Guardrails is a Python package that lets a user add structure, type and quality guarantees to the outputs of large language models (LLMs)
- CodeGate - An open-source, privacy-focused project that acts as a layer of security within a developers Code Generation AI workflow
- LlamaFirewall - LlamaFirewall is a framework designed to detect and mitigate AI centric security risks, supporting multiple layers of inputs and outputs, such as typical LLM chat and more advanced multi-step agentic operations.
- ZenGuard AI - The fastest Trust Layer for AI Agents
- llm-guard - LLM Guard by Protect AI is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
- vibraniumdome - Full blown, end to end LLM WAF for Agents, allowing security teams govenrance, auditing, policy driven control over Agents usage of language models.
- LocalMod - Self-hosted content moderation API with prompt injection detection, toxicity filtering, PII detection, and NSFW classification. Runs 100% offline.
- NeMo-GuardRails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
- DynaGuard - A Dynamic Guardrail Model With User-Defined Policies
- AprielGuard - 8B parameter safety–security safeguard model
- Safe Zone - Safe Zone is an open-source PII detection and guardrails engine that prevents sensitive data from leaking to LLMs and third-party APIs.
- superagent - Superagent provides purpose-trained guardrails that make AI-agents secure and compliant.
- mcp-context-protector - mcp-context-protector is a security wrapper for MCP servers that addresses risks associated with running untrusted MCP servers
- vibekit - Run Claude Code, Gemini, Codex — or any coding agent — in a clean, isolated sandbox with sensitive data redaction and observability baked in.
- claude-code-safety-net - A Claude Code plugin that acts as a safety net, catching destructive git and filesystem commands before they execute
- leash - Leash wraps AI coding agents in containers and monitors their activity.
- skill-scanner - A security scanner for AI Agent Skills that detects prompt injection, data exfiltration, and malicious code patterns. Combines pattern-based detection (YAML + YARA), LLM-as-a-judge, and behavioral dataflow analysis for comprehensive threat detection.
- Project CodeGuard - CoSAI Open Source Project for securing AI-assisted development workflows. CodeGuard provides security controls and guardrails for AI coding assistants to prevent vulnerabilities from being introduced during AI-generated code development.
- modelscan - ModelScan is an open source project from Protect AI that scans models to determine if they contain unsafe code.
- rebuff - Prompt Injection Detector
- langkit - LangKit is an open-source text metrics toolkit for monitoring language models. The toolkit various security related metrics that can be used to detect attacks
- MCP-Scan - A security scanning tool for MCP servers
- picklescan - Security scanner detecting Python Pickle files performing suspicious actions
- fickling - A Python pickling decompiler and static analyzer
- a2a-scanner - Scan A2A agents for potential threats and security issues
- medusa - AI-first security scanner with 74+ analyzers, 180+ AI agent security rules, intelligent false positive reduction. Supports all languages. CVE detection for React2Shell, mcp-remote RCE.
- julius - LLM service fingerprinting tool for security professionals. Detects 32+ AI services (Ollama, vLLM, LiteLLM, Hugging Face TGI, etc.) during penetration tests and attack surface discovery. Uses HTTP-based service fingerprinting to identify server infrastructure.
- openclaw-shield - Security plugin for OpenClaw agents - prevents secret leaks, PII exposure, and destructive command execution
- clawsec - Security scanner and hardening tool for OpenClaw deployments. Provides security assessments, configuration auditing, and vulnerability detection specifically for OpenClaw gateway and agent configurations.
- Python Differential Privacy Library
- Diffprivlib - The IBM Differential Privacy Library
- PLOT4ai - Privacy Library Of Threats 4 Artificial Intelligence A threat modeling library to help you build responsible AI
- TenSEAL - A library for doing homomorphic encryption operations on tensors
- SyMPC - A Secure Multiparty Computation companion library for Syft
- PyVertical - Privacy Preserving Vertical Federated Learning
- Cloaked AI - Open source property-preserving encryption for vector embeddings
- dstack - Open-source confidential AI framework for secure ML/LLM deployment with hardware-enforced isolation and data privacy
- PrivacyRaven - privacy testing library for deep learning systems
- claude-secure-coding-rules - Open-source security rules that guide Claude Code to generate secure code by default.
- tm_skills - Agent skills to help with Continuous Threat Modeling
- Trail of Bits Skills Marketplace - Trail of Bits Claude Code skills for security research, vulnerability detection, and audit workflows
- Semgrep Skills - Official Semgrep skills for Claude Code and other AI coding assistants. Provides security scanning, code analysis, and vulnerability detection capabilities directly in your AI-assisted development workflow.
- VulnLLM-R-7B - Specialized reasoning LLM for vulnerability detection. Uses Chain-of-Thought reasoning to analyze data flow, control flow, and security context. Outperforms Claude-3.7-Sonnet and CodeQL on vulnerability detection benchmarks. Only 7B parameters making it efficient and fast.
- Foundation-Sec-8B-Reasoning - Llama-3.1-FoundationAI-SecurityLLM-8B-Reasoning (Foundation-Sec-8B-Reasoning) is an open-weight, 8-billion parameter instruction-tuned language model specialized for cybersecurity applications. It extends the Foundation-Sec-8B base model with instruction-following and reasoning capabilities. It leverages prior training to understand security concepts, terminology, and practices across multiple security domains
- AgentDoG - AgentDoG is a risk-aware evaluation and guarding framework for autonomous agents. It focuses on trajectory-level risk assessment, aiming to determine whether an agent’s execution trajectory contains safety risks under diverse application scenarios.