Skip to content
#

ai-red-teaming

Here are 38 public repositories matching this topic...

Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily

  • Updated Oct 31, 2025
  • Python

Basilisk — Open-source AI red teaming framework with genetic prompt evolution. Automated LLM security testing for GPT-4, Claude, Grok, Gemini. OWASP LLM Top 10 coverage. 32 attack modules.

  • Updated Apr 1, 2026
  • Python

Comprehensive taxonomy of AI security vulnerabilities, LLM adversarial attacks, prompt injection techniques, and machine learning security research. Covers 71+ attack vectors including model poisoning, agentic AI exploits, and privacy breaches.

  • Updated Sep 19, 2025

LLM Attack Testing Toolkit is a structured methodology and mindset framework for testing Large Language Model (LLM) applications against logic abuse, prompt injection, jailbreaks, and workflow manipulation.

  • Updated Feb 27, 2026

Agentic AI Security Bootcamp is a hands-on, research-driven training environment for analysing, attacking, and securing autonomous AI systems. The repository provides structured labs, adversarial evaluation frameworks, and red-teaming exercises covering multi-agent observability, prompt injection..

  • Updated Apr 1, 2026
  • Python

Improve this page

Add a description, image, and links to the ai-red-teaming topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the ai-red-teaming topic, visit your repo's landing page and select "manage topics."

Learn more