Protect AI coding agents from prompt injection attacks. Works with Claude Code, OpenCode, and other AI coding tools.
TL;DR: the screenshot below shows context-protector in action.

- **Prompt Injection Detection** - blocks malicious inputs before tool execution
- **Output Scanning** - detects threats in tool outputs (file reads, API responses)
- **Multiple Backends** - LlamaFirewall (default), NeMo Guardrails, GCP Model Armor
- **Multi-Platform** - native support for Claude Code and OpenCode
- **Fully Local** - no cloud dependencies required (optional Ollama support)
```bash
# Using uv (recommended)
uv tool install context-protector

# Using pip
pip install context-protector

# Using pipx
pipx install context-protector
```

1. Install the plugin:

```bash
pip install context-protector
```

2. Add to your `opencode.json`:

```json
{
  "plugin": ["opencode-context-protector"]
}
```

3. Done! The plugin will scan all tool inputs and outputs.
1. Install and initialize:

```bash
context-protector init
```

2. Add to Claude Code settings (`~/.claude/settings.json`). Use `"matcher": "*"` to inspect all tool calls; limiting the matcher to `mcp*` saves tokens and focuses scanning where it matters most:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "mcp*",
        "hooks": [{"type": "command", "command": "context-protector"}]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "mcp*",
        "hooks": [{"type": "command", "command": "context-protector"}]
      }
    ]
  }
}
```

3. Done! Context Protector will now scan all tool inputs and outputs.
```
┌──────────────────────────────────────────────────────────┐
│                  Claude Code / OpenCode                  │
│                                                          │
│  Tool Request ──► PreToolUse Hook ──► context-protector  │
│       │                                       │          │
│  [ALLOW/BLOCK]                            Scan Input     │
│       │                                       │          │
│  Tool Response ◄── PostToolUse Hook ◄── context-protector│
│       │                                       │          │
│  [WARN/BLOCK]                             Scan Output    │
└──────────────────────────────────────────────────────────┘
```
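The PreToolUse leg of this flow can be sketched in a few lines of Python. The `tool_input` field and the exit-code convention (0 = allow, 2 = block, with stderr fed back to the model) follow Claude Code's hook protocol; the regex here is a toy stand-in for the real scanner backends:

```python
import json
import re
import sys

# Toy pattern standing in for a real scanner backend (the actual detection
# is done by LlamaFirewall / NeMo Guardrails / Model Armor).
SUSPICIOUS = re.compile(r"ignore (all )?previous instructions", re.IGNORECASE)

def handle_pre_tool_use(event: dict) -> int:
    """Return the exit code a PreToolUse hook would use:
    0 = allow the tool call, 2 = block it (Claude Code's hook convention)."""
    content = json.dumps(event.get("tool_input", {}))
    if SUSPICIOUS.search(content):
        # stderr is fed back to the model when the call is blocked
        print("context-protector: possible prompt injection detected", file=sys.stderr)
        return 2
    return 0

# A real hook reads the event from stdin:
#   sys.exit(handle_pre_tool_use(json.load(sys.stdin)))
```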
Config file: `~/.config/context-protector/config.yaml`

```yaml
# Which provider to use
provider: LlamaFirewall  # LlamaFirewall, NeMoGuardrails, GCPModelArmor

# Response mode when threats are detected
response_mode: warn  # warn (default) or block

# Provider-specific settings
llama_firewall:
  scanner_mode: auto  # auto, basic, or full

nemo_guardrails:
  mode: all  # heuristics, injection, self_check, local, all
  ollama_model: mistral:7b
  ollama_base_url: http://localhost:11434

gcp_model_armor:
  project_id: null
  location: null
  template_id: null
```

Run `context-protector init` to create a config file with all options.
All settings can be overridden with environment variables (prefix: `CONTEXT_PROTECTOR_`):

```bash
export CONTEXT_PROTECTOR_PROVIDER=NeMoGuardrails
export CONTEXT_PROTECTOR_RESPONSE_MODE=block
export CONTEXT_PROTECTOR_SCANNER_MODE=basic
```

Meta's LlamaFirewall for prompt injection detection. Works out of the box in `basic` mode.
| Mode | Description |
|---|---|
| `basic` | Pattern-based detection, no setup required (default) |
| `auto` | Tries ML detection, falls back to `basic` on auth error |
| `full` | Full ML detection; currently broken due to an upstream bug (see note below) |

**Note:** The `full` and `auto` modes are currently broken due to an upstream bug in llamafirewall, which uses a deprecated HuggingFace API. Use `basic` mode until Meta releases a fix.
```yaml
llama_firewall:
  scanner_mode: basic  # Recommended - works without setup
```

NVIDIA's guardrails toolkit with multiple detection modes.
| Mode | Description |
|---|---|
| `all` | Heuristics + injection detection (default) |
| `heuristics` | Perplexity-based jailbreak detection |
| `injection` | YARA-based SQL/XSS/code injection |
| `local` | LLM-based via Ollama (fully local) |
```yaml
nemo_guardrails:
  mode: local
  ollama_model: mistral:7b
```

Enterprise-grade content safety via Google Cloud.
```yaml
provider: GCPModelArmor
gcp_model_armor:
  project_id: your-project
  location: us-central1
  template_id: your-template
```

| Mode | Behavior |
|---|---|
| `warn` | Log threats, inject warnings (default) |
| `block` | Block malicious content entirely |
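The difference between the two modes can be illustrated with a small hypothetical helper (a sketch, not the tool's actual API):

```python
def apply_response_mode(mode: str, threat_found: bool, output: str) -> tuple[str, bool]:
    """Return (content, allowed) for a scanned tool output.
    'warn' annotates the content and passes it through; 'block' rejects it.
    Hypothetical helper illustrating the table above."""
    if not threat_found:
        return output, True
    if mode == "block":
        return "", False  # drop the content entirely
    # warn (default): keep the content but prepend a warning for the model
    return "[context-protector WARNING: potential threat detected]\n" + output, True
```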
If you encounter false positives and need to temporarily disable protection:

```bash
# Disable protection
context-protector --disable

# Re-enable when done
context-protector --enable
```

This modifies your config file and takes effect immediately on the next tool call; no Claude Code restart is needed.
You can also edit the config file directly:

```yaml
enabled: false  # Set to true to re-enable
```

The OpenCode plugin (`opencode-context-protector`) provides:

- Pre-execution scanning via the `tool.execute.before` hook
- Post-execution scanning via the `tool.execute.after` hook
- Built-in .env protection - blocks reading `.env`, `*.pem`, `*.key`, `credentials.json`
- Configurable skip list - exclude specific tools from scanning
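The built-in file protection amounts to matching file names against a deny-list. A Python sketch of that logic (the plugin's real matcher is TypeScript and may behave differently):

```python
from fnmatch import fnmatch
from pathlib import PurePath

# Default deny-list from the plugin's built-in .env protection (see above)
BLOCKED_PATTERNS = [".env", "*.pem", "*.key", "credentials.json"]

def is_protected(path: str) -> bool:
    """Return True if the file's name matches a blocked pattern."""
    name = PurePath(path).name
    return any(fnmatch(name, pattern) for pattern in BLOCKED_PATTERNS)
```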
```bash
# Response mode
export CONTEXT_PROTECTOR_RESPONSE_MODE=block

# Disable .env protection
export CONTEXT_PROTECTOR_ENV_PROTECTION=false

# Skip certain tools
export CONTEXT_PROTECTOR_SKIP_TOOLS=glob,find

# Debug logging
export CONTEXT_PROTECTOR_DEBUG=true
```

See `opencode-plugin/README.md` for full documentation.
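The skip list is a plain comma-separated value. A sketch of how it could be parsed and applied (hypothetical helpers, not the plugin's actual code):

```python
import os

def skipped_tools() -> set[str]:
    """Parse CONTEXT_PROTECTOR_SKIP_TOOLS, a comma-separated
    tool list such as "glob,find"."""
    raw = os.environ.get("CONTEXT_PROTECTOR_SKIP_TOOLS", "")
    return {tool.strip() for tool in raw.split(",") if tool.strip()}

def should_scan(tool_name: str) -> bool:
    """Scan every tool except those on the skip list."""
    return tool_name not in skipped_tools()
```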
```bash
context-protector                   # Run as Claude Code hook (reads stdin)
context-protector init              # Create config file
context-protector --check           # Check content from stdin JSON
context-protector --config <path>   # Use custom config file
context-protector --help            # Show help
context-protector --version         # Show version
```

For integration with other tools:
```bash
echo '{"content": "test input", "type": "tool_input"}' | context-protector --check
```

Output:

```json
{"safe": true, "alert": null}
```

```
context-protector/
├── src/context_protector/   # Python package (PyPI)
│   ├── __init__.py           # CLI entry point
│   ├── config.py             # Configuration system
│   ├── hook_handler.py       # Claude Code hook processing
│   └── providers/            # Detection backends
├── opencode-plugin/          # OpenCode plugin (npm)
│   ├── src/index.ts          # Plugin entry point
│   ├── src/backend.ts        # Python backend wrapper
│   └── tests/                # Plugin tests
├── tests/                    # Python tests
└── pyproject.toml
```
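For programmatic use, the `--check` interface can be wrapped from Python. This sketch assumes `context-protector` is on `PATH` and uses the JSON shapes shown in the `--check` example above:

```python
import json
import subprocess

def build_check_payload(content: str, payload_type: str = "tool_input") -> str:
    """Serialize content into the JSON shape --check reads from stdin."""
    return json.dumps({"content": content, "type": payload_type})

def parse_check_result(raw: str) -> tuple[bool, object]:
    """Split the {"safe": ..., "alert": ...} response into (safe, alert)."""
    result = json.loads(raw)
    return result["safe"], result.get("alert")

def check_content(content: str) -> tuple[bool, object]:
    """Pipe content through `context-protector --check` (must be installed)."""
    proc = subprocess.run(
        ["context-protector", "--check"],
        input=build_check_payload(content),
        capture_output=True,
        text=True,
        check=True,
    )
    return parse_check_result(proc.stdout)
```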
Contributions are very welcome, especially new guardrail providers and support for other agentic tools!
Please create an issue before submitting a PR.

```bash
git clone https://github.com/ottosulin/context-protector.git
cd context-protector
uv sync --all-groups
uv run pytest
```

For the OpenCode plugin:

```bash
cd opencode-plugin
bun install
bun test
bun run build
```

MIT