We release patches for security vulnerabilities. Currently supported versions:
| Version | Supported |
|---|---|
| 2.0.x | ✅ |
| < 2.0 | ❌ |
Please do NOT report security vulnerabilities through public GitHub issues.
We take the security of Phantom seriously. If you believe you have found a security vulnerability, please report it to us as described below.
Email: marcosfpina@protonmail.com
Please include the following information:
- Type of vulnerability
- Full path of source file(s) related to the vulnerability
- Location of the affected source code (tag/branch/commit or direct URL)
- Step-by-step instructions to reproduce the issue
- Proof-of-concept or exploit code (if possible)
- Impact of the vulnerability, including how an attacker might exploit it
What to expect after you report:
- Initial Response: Within 48 hours, we'll acknowledge receipt of your report
- Status Updates: We'll keep you informed about progress toward a fix
- Fix Timeline: We aim to release patches within 90 days for critical issues
- Credit: With your permission, we'll publicly acknowledge your contribution
We prefer all communications to be in English or Portuguese.
Our coordinated disclosure process:
- Vulnerability is reported privately to maintainers
- Maintainers confirm the issue and determine severity
- Patch is developed in a private branch
- Security advisory is prepared
- Patch is released with security advisory
- Public disclosure 7 days after patch release
Never commit secrets to Git:
```python
# ❌ BAD: Hardcoding keys in code
OPENAI_API_KEY = "sk-proj-..."
```

```bash
# ✅ GOOD: Using environment variables
export OPENAI_API_KEY="sk-proj-..."
export DEEPSEEK_API_KEY="sk-..."
```
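A minimal sketch of how application code can then read those keys at startup instead of embedding them; the fail-fast check is an illustrative choice, not required Phantom behavior:

```python
import os

# Read credentials from the environment; never embed them in source control.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
DEEPSEEK_API_KEY = os.environ.get("DEEPSEEK_API_KEY")

if OPENAI_API_KEY is None:
    # Fail fast without echoing any secret material into logs or tracebacks.
    raise RuntimeError("OPENAI_API_KEY is not set; export it before starting Phantom")
```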
Always validate user-provided file paths:

```python
from pathlib import Path

def safe_read_file(user_path: str) -> str:
    # Resolve to an absolute path (follows symlinks, collapses "..")
    path = Path(user_path).resolve()

    # Ensure it's within the allowed directory
    allowed_dir = Path("/safe/directory").resolve()
    if not path.is_relative_to(allowed_dir):
        raise ValueError("Path outside allowed directory")

    return path.read_text()
```

Be cautious with user input in LLM prompts:
```python
# ❌ BAD: Direct user input injection
prompt = f"Analyze this: {user_input}"

# ✅ GOOD: Sanitize and validate
from phantom.pipeline import DataSanitizer

sanitizer = DataSanitizer()
clean_input = sanitizer.remove_pii(user_input)
prompt = f"Analyze this: {clean_input}"
```

Secure your Phantom API:
```python
from fastapi import Depends, HTTPException, UploadFile
from phantom.api import app

@app.post("/process")
async def process_document(
    file: UploadFile,
    api_key: str = Depends(verify_api_key),  # Add authentication
):
    # Rate limiting
    # Input validation
    # Size limits
    ...
```
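One way to fill in the size-limit placeholder above is to read the upload and reject oversized payloads before any processing; this is a minimal sketch, and the 50 MB ceiling plus the `enforce_size_limit` helper name are illustrative, not part of Phantom's API:

```python
from fastapi import HTTPException, UploadFile

MAX_UPLOAD_BYTES = 50 * 1024 * 1024  # Illustrative 50 MB ceiling

async def enforce_size_limit(file: UploadFile) -> bytes:
    # Read the upload once and reject it before any expensive processing.
    data = await file.read()
    if len(data) > MAX_UPLOAD_BYTES:
        raise HTTPException(status_code=413, detail="File too large")
    return data
```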
Keep dependencies up-to-date:

```bash
# Check for vulnerabilities
pip-audit

# For Rust dependencies
cargo audit

# Update flake.lock (NixOS)
nix flake update
```

Key risks and their mitigations:

Risk: Malicious users may craft inputs to manipulate LLM outputs
Mitigation:
- Sanitize user inputs before LLM processing
- Use structured output schemas (Pydantic)
- Implement output validation
- Set reasonable token limits
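As an illustration of the structured-output and output-validation mitigations above, a minimal Pydantic v2 sketch; the `AnalysisResult` schema and its fields are hypothetical, not part of Phantom:

```python
from pydantic import BaseModel, Field, ValidationError

class AnalysisResult(BaseModel):
    # Constrain what the model is allowed to return instead of trusting free text.
    summary: str = Field(max_length=2000)
    risk_score: float = Field(ge=0.0, le=1.0)

def parse_llm_output(raw: str) -> AnalysisResult:
    try:
        return AnalysisResult.model_validate_json(raw)
    except ValidationError as exc:
        # Reject non-conforming output rather than passing it downstream.
        raise ValueError("LLM output failed schema validation") from exc
```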
Risk: Arbitrary file read/write through path traversal
Mitigation:
- Validate all file paths
- Use `pathlib.Path.resolve()` to normalize paths
- Restrict operations to sandboxed directories
- Never execute user-provided code
Risk: Processing large files can exhaust memory/GPU
Mitigation:
- Implement file size limits
- Use VRAM monitoring (`phantom.tools.vram_calculator`)
- Enable auto-throttling
- Set processing timeouts
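A minimal sketch of the timeout mitigation above using `asyncio.wait_for`; the 300-second limit and the `process_with_timeout` wrapper are illustrative assumptions, not Phantom defaults:

```python
import asyncio
from typing import Awaitable, TypeVar

T = TypeVar("T")

PROCESSING_TIMEOUT_SECONDS = 300  # Illustrative per-document ceiling

async def process_with_timeout(work: Awaitable[T]) -> T:
    try:
        # Cancel work that runs past the ceiling instead of letting it
        # hold memory or GPU resources indefinitely.
        return await asyncio.wait_for(work, timeout=PROCESSING_TIMEOUT_SECONDS)
    except asyncio.TimeoutError:
        raise RuntimeError("Processing timed out; input rejected")
```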
Risk: Third-party packages may contain vulnerabilities
Mitigation:
- Regular `pip-audit` and `cargo audit` runs
- Automated dependency updates (Dependabot)
- Pin specific versions in production
- Monitor security advisories
Risk: API keys or sensitive data logged accidentally
Mitigation:
- Never log API keys or tokens
- Sanitize error messages
- Use structured logging
- Redact sensitive fields
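One possible shape for the redaction mitigation above, using the standard `logging` module; the filter assumes dict-style log arguments, and the key list is illustrative:

```python
import logging

# Illustrative field names; adjust to whatever your log records actually carry.
SENSITIVE_KEYS = {"api_key", "token", "authorization", "password"}

class RedactingFilter(logging.Filter):
    """Replace secret values in log arguments before records are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        if isinstance(record.args, dict):
            record.args = {
                key: "[REDACTED]" if key.lower() in SENSITIVE_KEYS else value
                for key, value in record.args.items()
            }
        return True

logger = logging.getLogger("phantom")
logger.addFilter(RedactingFilter())
```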
Phantom also ships with built-in security features:

- PII Detection & Removal

  ```python
  from phantom.pipeline import DataSanitizer

  sanitizer = DataSanitizer()
  clean_text = sanitizer.remove_pii(text)
  ```

- Input Validation
  - Pydantic models for all data structures
  - Type checking with mypy
  - Runtime validation
- File Classification
  - Magic byte verification
  - File integrity checks (SHA256, BLAKE3; see the sketch after this list)
  - Malware pattern detection
- Resource Limits
  - VRAM monitoring
  - Processing timeouts
  - Automatic throttling
- Audit Logging
  - Comprehensive operation logs
  - Forensic-grade audit trails
  - Timestamp verification
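As an illustration of the magic-byte and integrity checks listed under File Classification, here is a standard-library sketch; Phantom's own classifier is not shown, and the PDF-only magic table and helper names are assumptions:

```python
import hashlib
from pathlib import Path

# Leading bytes of a PDF; extend this table for other formats you accept.
PDF_MAGIC = b"%PDF-"

def sha256_digest(path: Path) -> str:
    # Stream the file so large inputs never sit fully in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def looks_like_pdf(path: Path) -> bool:
    # Trust the file's leading bytes, not its extension.
    with path.open("rb") as f:
        return f.read(len(PDF_MAGIC)) == PDF_MAGIC
```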
When contributing code, ensure:
- No hardcoded secrets or API keys
- Input validation for all user-provided data
- Output sanitization before returning to user
- Error messages don't leak sensitive information
- File operations are restricted to safe directories
- No arbitrary code execution (eval, exec, etc.)
- Dependencies are up-to-date
- Tests include security edge cases
- Documentation includes security considerations
Our CI/CD pipeline includes:
```bash
# Python security audit
pip-audit

# Rust security audit
cargo audit

# Dependency vulnerability scanning
safety check

# Static analysis
bandit -r src/

# Secret detection
detect-secrets scan
```

When testing features:
- Input Fuzzing: Test with malicious inputs
- Path Traversal: Try `../../../etc/passwd`
- Injection: Test prompt injection attacks
- Rate Limiting: Verify API rate limits work
- Authentication: Test auth bypass attempts
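For example, the path-traversal case above can be pinned down with a small pytest module; the import path for `safe_read_file` is hypothetical and should point at wherever the helper shown earlier in this policy actually lives:

```python
import pytest

# Hypothetical import path; adjust to the module that defines safe_read_file.
from phantom_utils.files import safe_read_file

@pytest.mark.parametrize("malicious", [
    "../../../etc/passwd",
    "/safe/directory/../../etc/shadow",
    "/safe/directory-evil/notes.txt",  # prefix trick that naive startswith checks miss
])
def test_path_traversal_is_rejected(malicious):
    with pytest.raises(ValueError):
        safe_read_file(malicious)
```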
We kindly ask security researchers to:
- Allow us time to fix vulnerabilities before public disclosure (90 days)
- Avoid exploiting the vulnerability beyond proof-of-concept
- Not access or modify other users' data
- Not perform denial-of-service attacks
We recognize responsible security researchers who help us:
- [Your name here] - Issue description (Month Year)
For security-related questions that are not vulnerabilities:
- GitHub Discussions: Tag with "security"
- Email: marcosfpina@protonmail.com
For vulnerabilities, always use private email.
This security policy is part of Phantom and is licensed under the MIT License.
Last Updated: January 2026
Version: 2.0.0