Secure your Retool AI apps from prompt injection & PII leaks
PromptLock is a Retool custom component that sanitizes user input before sending it to an LLM. It blocks injection attempts, redacts sensitive data, and returns a risk score and a violation log for each input.
With PromptLock, you can:
- ✅ Block prompt injection attacks
- ✅ Redact PII/PHI before it reaches OpenAI
- ✅ Receive a risk score (0–100) per input
- ✅ Log violations for compliance (HIPAA, GDPR, PCI)
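The component's exact output schema isn't spelled out in this README. As a rough TypeScript sketch of what a per-input result might look like (every field name below is an assumption for illustration, not the documented API):

```ts
// Hypothetical shape of a PromptLock analysis result.
// Field names are illustrative assumptions, not the documented API.
interface PromptLockViolation {
  type: "prompt_injection" | "pii" | "phi" | "pci"; // category of the finding
  detail: string; // e.g. which pattern or entity matched
}

interface PromptLockResult {
  sanitizedText: string; // input with sensitive data redacted
  riskScore: number; // 0–100, higher = riskier
  blocked: boolean; // true when an injection attempt is detected
  violations: PromptLockViolation[]; // entries for the compliance log
}
```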
- Clone or download this repo.
- In Retool, go to Custom Components → Create New → Upload File.
- Upload `src/PromptLockAnalyze.tsx`.
- Save and add the component to your Retool app.
- Drag PromptLock into your Retool canvas.
- Enter your PromptLock API key in the component settings.
- Wire it up: Input → PromptLock → LLM → Output
Example workflow:
```mermaid
graph LR
    A[User Input] --> B[PromptLock Component]
    B --> C[OpenAI LLM]
    C --> D[Output]
```
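As a concrete sketch of that wiring, a Retool JavaScript query sitting between the component and your OpenAI resource could look like the following. The names `promptLock1`, `sanitizedText`, `riskScore`, and `openaiQuery`, and the threshold of 70, are assumptions for illustration, not documented identifiers; in a Retool JS query, component references and `utils` are provided by the sandbox.

```ts
// Hypothetical Retool JS query. promptLock1's exposed properties and the
// openaiQuery resource name are assumptions, not documented identifiers.
const { sanitizedText, riskScore } = promptLock1;

// Refuse to forward high-risk input to the LLM (threshold is arbitrary here).
if (riskScore > 70) {
  utils.showNotification({
    title: "Input blocked",
    description: `PromptLock risk score ${riskScore} exceeded the threshold.`,
  });
  return null;
}

// Forward only the sanitized prompt to the OpenAI query.
return openaiQuery.trigger({
  additionalScope: { prompt: sanitizedText },
});
```

Gating on the risk score in a query like this keeps the block/allow decision in one place, so the threshold is easy to tune per app.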
See `examples/example-app.json` for a sample Retool app that uses PromptLock to sanitize user prompts before sending them to OpenAI.
Get your free PromptLock API key → promptlock.io/retool
MIT License — free to use, modify, and distribute.
Pull requests are welcome. For major changes, please open an issue first to discuss what you’d like to change.