CopyCat is a self-hosted media management tool for digital libraries. It bridges the gap between Cloud Storage (such as Zurg or Rclone mounts) and Local Storage (HDD/NAS).
CopyCat provides a web interface to scan, organize, and copy media files across different storage locations.
CopyCat was born out of a desire to simplify a cumbersome workflow.
My setup involved managing 300TB of media in Zurg alongside 16TB of local storage. I often wanted to bring specific content "offline" to my local drives to ensure it was always ready to go. My manual process was painful:
- Add a torrent hash to Real-Debrid via DMM or SeerrBridge.
- Remote desktop into the host machine (often from a phone).
- Manually copy folders from the Zurg directory to the local drive.
Remote desktopping from a phone is painful, and I felt there had to be a cleaner way. CopyCat provides that solution.
📸 View More Screenshots
Browse content through a categorized interface. Filter by movies or TV shows, view metadata, and select items for transfer.
Prefer a traditional view? The Copy Wizard offers a familiar file explorer interface, allowing you to manually navigate directory structures and define precise destinations for your transfers.
Monitor active transfers in real-time. View detailed progress, transfer speeds, and manage your queue with priority controls to ensure your most important media is ready when you are.
Run CopyCat with this single Docker command:
> [!WARNING]
> Configuration Required: You MUST replace `/path/to/source`, `/path/to/destination`, and `JWT_SECRET_KEY` with your actual paths and a secure secret.

```bash
docker run -d --name copycat \
  -p 4222:3000 -p 4223:8000 \
  -v "$(pwd)/data":/data \
  -v /path/to/source:/mnt/source:ro \
  -v /path/to/destination:/mnt/destination \
  -e JWT_SECRET_KEY=change_this_to_secure_random_string \
  ghcr.io/woahai321/copy-cat:main
```

> [!TIP]
> Access the App: The Web Interface is at http://localhost:4222 and the API at http://localhost:4223. Log in with `admin` / `changeme`.
Deployment Instructions
CopyCat is deployed using Docker. You will need:

- A machine running Docker & Docker Compose.
- A Source Path (e.g., `/mnt/zurg` or any folder with media).
- A Destination Path (e.g., `/mnt/media`, where you want files to go).
Clone the repo and configure your environment:
```bash
git clone https://github.com/woahai321/copy-cat.git
cd copy-cat
cp .env.example .env
```

Edit `.env` to match your paths:
```bash
# Where your media is getting READ from (read-only recommended)
SOURCE_PATH=/mnt/zurg

# Where you want your media COPIED to
DESTINATION_PATH=/mnt/local/media

# Security: set a strong random secret!
JWT_SECRET_KEY=change_me_to_something_secure
```

> [!CAUTION]
> Production Security: Ensure the `JWT_SECRET_KEY` is a unique, randomly generated string. Do not use the default value in public deployments.
```bash
docker-compose up -d
```

- Web Interface: http://localhost:4222
- Backend API: http://localhost:4223/api/docs
- User: `admin`
- Password: `changeme`
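The `docker-compose up -d` step assumes a compose file along these lines. This is a sketch reconstructed from the quick-start `docker run` command above; the service layout and `restart` policy are assumptions, not taken from the repository:

```yaml
services:
  copycat:
    image: ghcr.io/woahai321/copy-cat:main
    container_name: copycat
    ports:
      - "4222:3000"   # Web interface
      - "4223:8000"   # Backend API
    volumes:
      - ./data:/data
      - ${SOURCE_PATH}:/mnt/source:ro        # read-only source mount
      - ${DESTINATION_PATH}:/mnt/destination
    environment:
      - JWT_SECRET_KEY=${JWT_SECRET_KEY}
    restart: unless-stopped
```

The `${...}` values are read from the `.env` file you configured above.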
Need more help? Check out the Deployment Guide or Configuration Reference.
```mermaid
graph TD
    %% Theme - Cyberpunk Purple
    classDef purple fill:#2e1065,stroke:#7c3aed,color:#fff,rx:5,ry:5,stroke-width:2px;
    classDef light fill:#5b21b6,stroke:#8b5cf6,color:#fff,rx:5,ry:5,stroke-width:2px;
    classDef accent fill:#7c3aed,stroke:#a78bfa,color:#fff,rx:5,ry:5,stroke-width:2px;
    classDef external fill:#0f172a,stroke:#334155,color:#94a3b8,rx:5,ry:5,stroke-dasharray: 5 5;

    subgraph "External Ecosystem"
        Cloud[(Zurg/Rclone)]:::external
        Trakt[Trakt.tv API]:::external
        TMDB[TMDB/Fanart]:::external
    end

    subgraph "Host Infrastructure"
        Mount[Filesystem Mount]:::purple
        LocalDest[(Local/NAS Storage)]:::purple
    end

    subgraph "CopyCat Core"
        direction TB
        subgraph Pipeline ["1. Ingestion Engine"]
            Watcher[File Watcher]:::light
            Scanner[Recursive Scanner]:::light
            Regex{Regex Parsing}:::accent
            Matcher[Media Matcher]:::light
        end
        subgraph Data ["2. Persistence Layer"]
            DB[(SQLite Database)]:::purple
            ImgCache[Image/Asset Cache]:::purple
        end
        subgraph Backend ["3. Application Server"]
            API[FastAPI REST Layer]:::light
            Auth{JWT Auth}:::accent
            WS[WebSocket Manager]:::accent
            Scheduler[Task Scheduler]:::light
        end
        subgraph Execution ["4. Transfer Engine"]
            Queue[Job Queue]:::light
            WorkerPool[Worker Thread Pool]:::accent
            IO[IO Stream Manager]:::light
        end
    end

    subgraph "Presentation"
        UI[Nuxt.js Frontend]:::purple
        Store[Pinia State]:::light
    end

    %% Data Flow Relationships
    Cloud ==> Mount
    Mount -.-> Scanner
    Scanner --> Regex
    Regex --> Matcher
    Matcher <--> Trakt
    Matcher <--> TMDB
    Matcher --> DB
    Matcher --> ImgCache
    UI <--> API
    API <--> DB
    API --> WS
    WS -.->|Real-time Events| UI
    UI -- "Dispatch Copy" --> API
    API --> Queue
    Queue --> WorkerPool
    WorkerPool --> IO
    IO -- "Read Stream" --> Mount
    IO -- "Write Stream" --> LocalDest
    IO -- "Progress Events" --> WS
```
- Ingestion Layer: The system mounts your cloud storage (Zurg/Rclone) to the local filesystem, making it accessible as standard files.
- Processing Layer: The Scanner Engine recursively reads files, cleaning filenames with regex and enriching them with metadata/posters via the Trakt API.
- Persistence Layer: All library data is structured and stored in a SQLite database, while images are cached locally for offline performance.
- Application Layer: The FastAPI backend serves data to the Nuxt frontend, ensuring a reactive, real-time user experience via WebSockets.
- Execution Layer: When you initiate a copy, the Queue Manager spawns optimized workers to stream data from source to destination efficiently.
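The regex-based filename cleaning in the Processing Layer can be sketched in a few lines of Python. The pattern and function below are illustrative assumptions about how release names are typically parsed, not CopyCat's actual rules:

```python
import re

# Hypothetical pattern: lazy title, a separator, then a plausible release year.
MOVIE_RE = re.compile(r"^(?P<title>.+?)[. _](?P<year>19\d{2}|20\d{2})[. _]")


def parse_movie_name(filename):
    """Return {'title': ..., 'year': ...} if the name looks like a movie
    release, else None. Separators in the title are normalized to spaces."""
    m = MOVIE_RE.search(filename)
    if not m:
        return None
    title = m.group("title").replace(".", " ").replace("_", " ").strip()
    return {"title": title, "year": int(m.group("year"))}
```

With the title and year extracted, a matcher can then query a metadata API (Trakt/TMDB in CopyCat's case) to enrich the entry with posters and descriptions.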
If CopyCat saves you time, consider sponsoring:
➡️ GitHub Sponsors
Thank you.
Welcome! See Developer Guide for Nuxt + FastAPI setup.