A Node.js–based distributed in-memory cache designed to reduce database load and improve read performance under concurrent traffic.
The system supports hash-based sharding, TTL-based expiration, LRU eviction, replication with failover simulation, and ships with Docker support and k6 load tests.
✨ Features

- Distributed cache nodes
- Gateway-based request routing
- Hash-based sharding
- TTL-based expiration
- LRU eviction
- Replication and failover simulation
- Docker & Docker Compose support
- Load testing using k6
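The TTL-expiration and LRU-eviction features above can be combined in one small in-memory structure. A minimal sketch, using a `Map`'s insertion order to track recency; the class name `LruTtlCache` is illustrative, not the project's actual implementation:

```javascript
// Minimal sketch: an in-memory store combining lazy TTL expiration with
// LRU eviction. A Map remembers insertion order, so the first key is
// always the least recently used one.
class LruTtlCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // key -> { value, expiresAt }
  }

  get(key) {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) { // lazy TTL expiration on read
      this.map.delete(key);
      return undefined;
    }
    // Re-insert to mark the key as most recently used.
    this.map.delete(key);
    this.map.set(key, entry);
    return entry.value;
  }

  set(key, value, ttlSeconds) {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // Evict the least recently used key (first in insertion order).
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
}
```

For example, with capacity 2, setting `a` and `b`, reading `a`, then setting `c` evicts `b` rather than `a`, since the read refreshed `a`'s recency.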
🏗️ Architecture

Client → Gateway → Cache Nodes → Database
- All client requests go through a Gateway
- The Gateway shards keys across cache nodes
- Cache nodes store data in memory
- On cache miss, data is fetched from DB and cached
🔄 Request Flow

1. Client sends a GET request to the Gateway
2. Gateway routes the request to the owning cache shard
3. Cache hit → return the value
4. Cache miss → fetch from the DB
5. Store the result in the cache with a TTL
6. Return the response to the client
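The read path above is the cache-aside pattern. A minimal sketch; the `cache` and `db` objects here are hypothetical stand-ins, not the project's API:

```javascript
// Sketch of the cache-aside read path: try the shard first, fall back
// to the database on a miss, then populate the cache with a TTL.
async function readThroughCache(cache, db, key, ttlSeconds) {
  const cached = await cache.get(key);
  if (cached !== undefined) return { value: cached, hit: true };

  const value = await db.fetch(key);       // cache miss: fall back to DB
  await cache.set(key, value, ttlSeconds); // populate the cache with a TTL
  return { value, hit: false };
}

// Toy in-memory stand-ins used only to demonstrate the flow.
const store = new Map();
const cacheApi = {
  get: async (k) => store.get(k),
  set: async (k, v) => store.set(k, v), // TTL ignored in this toy store
};
const db = { fetch: async (k) => 'row:' + k };
```

The first `readThroughCache(cacheApi, db, 'user123', 60)` call misses and populates the store; a repeat call for the same key returns `hit: true` without touching the database.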
🧩 Sharding & Replication

- Hash-based sharding assigns keys to primary nodes
- Writes are replicated to a secondary node
- Reads are served from the primary
- Replica takes over on primary failure
- Eventual consistency model
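A sketch of the replication and failover behavior above, assuming one replica per shard; the in-memory Maps stand in for real cache nodes:

```javascript
// Each shard has a primary and one replica. Writes go to both
// (eventual consistency in the real system, synchronous here for
// simplicity); reads prefer the primary and fall back to the
// replica when the primary is down.
function makeShard() {
  return { primary: new Map(), replica: new Map(), primaryUp: true };
}

function write(shard, key, value) {
  if (shard.primaryUp) shard.primary.set(key, value);
  shard.replica.set(key, value); // replicate to the secondary node
}

function read(shard, key) {
  return shard.primaryUp ? shard.primary.get(key) : shard.replica.get(key);
}

// Failover simulation: reads survive a primary failure.
const shard = makeShard();
write(shard, 'user123', 'Alice');
shard.primaryUp = false; // primary goes down; replica serves reads
```

After the simulated failure, `read(shard, 'user123')` still returns `'Alice'` from the replica, which is the failover behavior the load tests exercise.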
🛠️ Tech Stack

- Node.js, Express
- Docker, Docker Compose
- AWS EC2
- k6 (Load Testing)
- Jest (Testing)
📡 API Examples

Set Cache Value

```bash
curl -X POST http://localhost:3000/cache \
  -H "Content-Type: application/json" \
  -d '{"key":"user123","value":"Alice","ttl":60}'
```

Get Cache Value

```bash
curl http://localhost:3000/cache/user123
```

Delete Cache Value

```bash
curl -X DELETE http://localhost:3000/cache/user123
```
📈 Performance
- Throughput: ~6,000 requests/sec
- Average latency: ~45 ms
- P95 latency: ~120 ms
- Stable under simulated node failure
🐳 Run with Docker
```bash
docker-compose up --build
```
👤 Author
Sanskriti Tyagi
GitHub: https://github.com/sanskritityagi31