Source code of the paper "Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models"
Reproduction project for the [USENIX'24] paper "Prompt Stealing Attacks Against Text-to-Image Generation Models"