"You can bake the cake, but can you take the eggs out?"
Amnesia is an enterprise-grade platform that enables Machine Unlearning: surgically removing specific data points from trained AI models without full retraining. It provides cryptographic verification of erasure for GDPR/CCPA compliance.
- SISA Architecture: Sharded, Isolated, Sliced, Aggregated training for efficient unlearning (75%+ faster than retraining); see the sketch after this list.
- Gradient Ascent: Mathematical "erasure" of specific data points from model weights.
- Verifiable Compliance: Membership Inference Attacks (MIA) to prove data is truly forgotten.
- Premium Dashboard: Modern Next.js interface for managing datasets, training, and unlearning.
- Compliance Certificates: PDF generation for legal audit trails (GDPR Article 17).
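As a rough illustration of the SISA idea, here is a minimal sketch of the concept, not the code in core/sisa/ (the helper names and the `train_shard_model` callback are invented for the example): the training set is split into disjoint shards, one constituent model is trained per shard, and predictions are aggregated by majority vote. Unlearning a point then means retraining only the shard that contained it.

```python
import numpy as np

def make_shards(n_samples: int, n_shards: int, seed: int = 0):
    """Randomly partition sample indices into disjoint shards."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), n_shards)

def unlearn_point(point_idx, shards, models, train_shard_model):
    """Forget one sample: drop it and retrain only its shard."""
    for s, shard in enumerate(shards):
        if point_idx in shard:
            shards[s] = shard[shard != point_idx]
            models[s] = train_shard_model(shards[s])  # only one shard's worth of work
            break
    return shards, models

def predict(models, x):
    """Aggregate constituent predictions by majority vote."""
    votes = [int(m(x).argmax()) for m in models]
    return max(set(votes), key=votes.count)
```

Because only the affected shard is retrained, the cost of unlearning scales with shard size rather than with the full dataset.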
- Framework: Next.js 14 (React)
- Styling: Tailwind CSS v4 + Framer Motion (Animations)
- Components: Custom Premium UI (Glassmorphism, Dark Mode)
- API: FastAPI (Python)
- ML Core: PyTorch (Neural Networks)
- Task Queue: Celery + Redis (Background Processing; see the sketch after this list)
- Database: PostgreSQL / SQLite (Metadata & Logs)
- Visualization: Streamlit (Legacy/Admin Dashboard)
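To illustrate how the task queue fits in, here is a minimal, hypothetical Celery task backed by Redis; the real task names and module layout in api/ may differ.

```python
# tasks.py -- hypothetical example of queueing an unlearning job
from celery import Celery

app = Celery("amnesia",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task
def unlearn_class(model_id: str, class_idx: int) -> dict:
    # Placeholder body: load the model, run gradient ascent on the
    # target class, save the updated weights, return a status record.
    return {"model_id": model_id, "forgotten_class": class_idx, "status": "done"}

# The API can enqueue work without blocking a request:
#   unlearn_class.delay("resnet18-cifar10", 3)
```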
- Python 3.10+
- Node.js 18+ (for Frontend)
- Docker (Optional, for containerized run)
1. Clone the Repository
git clone https://github.com/YOUR_USERNAME/amnesia.git
cd amnesia
2. Backend Setup
# Create virtual environment
python -m venv .venv
# Activate it
# Windows:
.\.venv\Scripts\Activate
# Mac/Linux:
# source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
3. Frontend Setup
cd frontend
npm install
cd ..
4. Run the Application
You need two terminals:
Terminal 1 (Backend API):
python -m uvicorn api.main:app --reload --port 8000
Terminal 2 (Frontend):
cd frontend
npm run dev
Visit http://localhost:3000 to access the dashboard.
Text models (LLMs) entangle knowledge (e.g., unlearning "Harry Potter" can also degrade "Wizards"). Vision models have distinct, cleanly separated classes, which makes them ideal for proving that unlearning works.
The Task:
- Train a ResNet-18 on CIFAR-10 (Cars, Cats, Planes...).
- Unlearn Class 3 (Cats) while keeping Class 1 (Cars) accurate.
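A minimal sketch of this setup, assuming torchvision is installed; the class indices follow the standard CIFAR-10 labeling (1 = automobile, 3 = cat), and the batch size is illustrative:

```python
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms

train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())

# Split indices into the "forget" class (3 = cat) and everything else.
forget_idx = [i for i, y in enumerate(train_set.targets) if y == 3]
retain_idx = [i for i, y in enumerate(train_set.targets) if y != 3]

forget_loader = DataLoader(Subset(train_set, forget_idx), batch_size=128, shuffle=True)
retain_loader = DataLoader(Subset(train_set, retain_idx), batch_size=128, shuffle=True)

model = models.resnet18(num_classes=10)  # ResNet-18 with a 10-class head
```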
Located in: core/unlearning/simple_unlearn.py
Normally, you train a model to minimize error:
loss.backward() # Gradient DESCENT
To unlearn, we maximize error on the specific target data:
(-loss).backward() # Gradient ASCENT (the "Anti-Learning")
This pushes the model's weights away from recognizing the target concept.
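Put together, a minimal unlearning loop might look like the following. This is a sketch of the idea, not the exact code in core/unlearning/simple_unlearn.py; the learning rate and epoch count are illustrative, and `forget_loader` is the loader from the sketch above.

```python
import torch.nn.functional as F
from torch.optim import SGD

def gradient_ascent_unlearn(model, forget_loader, device="cpu",
                            lr=1e-3, epochs=1):
    """Push the model away from the forget set by maximizing its loss."""
    model.to(device).train()
    optimizer = SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in forget_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            (-loss).backward()  # ascend: make the model WORSE on this data
            optimizer.step()
    return model
```

In practice you would typically cap the number of ascent steps or interleave fine-tuning on the retain set, since unconstrained ascent also degrades unrelated classes.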
Run the entire stack with one command:
docker-compose up -d --build
- Frontend: http://localhost:3000
- API Docs: http://localhost:8000/docs
- Legacy Dashboard: http://localhost:8501
The project follows a modern monorepo structure:
Amnesia/
├── api/                  # FastAPI Backend
│   ├── main.py           # App Entrypoint
│   └── routes/           # API Endpoints (Training, Unlearning, etc.)
├── core/                 # Machine Learning Logic
│   ├── sisa/             # SISA Architecture Implementation
│   ├── unlearning/       # Gradient Ascent Algorithms
│   └── verification/     # Membership Inference Attacks
├── dashboard/            # Admin Dashboard (Streamlit)
├── frontend/             # User Dashboard (Next.js)
│   ├── src/app/          # Pages (Landing, Dashboard, Login)
│   └── src/components/   # Reusable UI Components
├── scripts/              # Helper Scripts
│   └── demo.py           # End-to-end System Test
├── storage/              # Local Storage for Models & DB (Gitignored)
├── tests/                # Unit & Integration Tests
└── requirements.txt      # Python Dependencies
We follow a simplified GitFlow for collaboration:
- main: Stable, production-ready code.
- develop: Integration branch for the next release.
- feature/xyz: New features (merge into develop).
- fix/xyz: Bug fixes (merge into main or develop).
To contribute:
- Checkout main.
- Create a branch: git checkout -b feature/new-ui-component.
- Commit & push.
- Open a Pull Request.
This project is licensed under the Apache 2.0 License.
Disclaimer: This tool is a proof-of-concept for Verifiable Machine Unlearning. While it implements state-of-the-art algorithms, validate the results independently before relying on them for critical legal compliance.
Once unlearning is complete, you can verify the results and generate a GDPR-compliant Certificate of Erasure.
- Go to the Verification page.
- Run the Membership Inference Attack (MIA) on the target data.
- If successful (Confidence < Threshold), a PDF certificate is generated.
- Download it directly from the UI.
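A minimal sketch of a confidence-based membership inference check, under the common assumption that models assign higher confidence to data they were trained on. The threshold is illustrative, the `model` and `forget_loader` names reuse the sketches above, and the real check in core/verification/ may be more sophisticated.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mia_confidence(model, loader, device="cpu"):
    """Mean softmax confidence the model assigns to the true labels."""
    model.to(device).eval()
    confidences = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        probs = F.softmax(model(x), dim=1)
        confidences.append(probs[torch.arange(len(y)), y])
    return torch.cat(confidences).mean().item()

# If the model is no more confident on the forget set than chance,
# membership inference fails and the data can be considered forgotten.
THRESHOLD = 0.2  # illustrative; calibrate against a holdout set
forgotten = mia_confidence(model, forget_loader) < THRESHOLD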
Sample Proof: A sample certificate generated from our Vision MVP is included in this repository: PROOF_OF_ERASURE_VISION_MVP.pdf
Certificate Location:
All generated certificates are stored locally in:
storage/certificates/
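For illustration, such a certificate could be rendered with a PDF library like reportlab. This is a hypothetical sketch, not the project's actual generator; the fields and output filename are assumptions.

```python
from datetime import datetime, timezone
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

def write_certificate(path, model_id, forgotten_class, mia_confidence):
    """Render a simple one-page Certificate of Erasure as a PDF."""
    c = canvas.Canvas(path, pagesize=A4)
    c.setFont("Helvetica-Bold", 16)
    c.drawString(72, 770, "Certificate of Erasure (GDPR Article 17)")
    c.setFont("Helvetica", 11)
    c.drawString(72, 730, f"Model: {model_id}")
    c.drawString(72, 712, f"Forgotten class: {forgotten_class}")
    c.drawString(72, 694, f"MIA confidence on forget set: {mia_confidence:.3f}")
    c.drawString(72, 676, f"Issued: {datetime.now(timezone.utc).isoformat()}")
    c.save()

write_certificate("storage/certificates/example.pdf",
                  "resnet18-cifar10", 3, 0.12)
```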