The go-to API for detecting and preventing prompt injection attacks.
-
Updated
Apr 28, 2025 - Jupyter Notebook
The go-to API for detecting and preventing prompt injection attacks.
An educational and research-based exploration into breaking the limitations of LLaMA models using advanced prompt engineering techniques, including DAN (Do Anything Now) prompts, to unlock unrestricted access and capabilities. This repository contains only theoretical explanations and is intended solely for ethical and educational purposes.
Proof of concept demonstrating Unicode injection vulnerabilities using invisible characters to manipulate Large Language Models (LLMs) and AI assistants (e.g., Claude, AI Studio) via hidden prompts or data poisoning. Educational/research purposes only.
🛡️ Advanced cybersecurity platform with AI-powered anomaly detection, steganography analysis, and automated threat response. Built with Next.js, Python ML, and n8n automation.
LMTWT is AI security testing framework for evaluating LLM prompt injection vulnerabilities
Promptsploit is a tool that help you check you LLM tools against security vulnerabilities.
The Emoji Smuggler App demonstrates how Unicode variation selectors can be exploited to hide arbitrary data within what appears to be a single emoji character. This technique can bypass AI safety filters and create covert communication channels.
Add a description, image, and links to the promptinjection topic page so that developers can more easily learn about it.
To associate your repository with the promptinjection topic, visit your repo's landing page and select "manage topics."