A scalable workforce management system built with Node.js, Express, TypeORM, MySQL, and RabbitMQ, designed to handle anywhere from 100 to 10,000+ employees using established scalability patterns.
- Repository Pattern: Data access abstraction with BaseRepository
- Service Layer Pattern: Business logic isolation from controllers
- Strategy Pattern: Retry policies (Immediate, Linear, Exponential backoff)
- Factory Pattern: RetryPolicyFactory for creating retry strategies
- Singleton Pattern: RabbitMQService for connection management
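The Strategy and Factory patterns above can be sketched together; this is an illustrative outline, not the project's actual `RetryPolicyFactory` source (class and method names here are assumptions):

```typescript
// Strategy: each retry policy computes a delay for a given 1-based attempt.
interface RetryPolicy {
  delayMs(attempt: number): number;
}

class ImmediateRetry implements RetryPolicy {
  delayMs(): number {
    return 0; // retry right away
  }
}

class LinearBackoff implements RetryPolicy {
  constructor(private baseMs = 1000) {}
  delayMs(attempt: number): number {
    return this.baseMs * attempt; // 1s, 2s, 3s, ...
  }
}

class ExponentialBackoff implements RetryPolicy {
  constructor(private baseMs = 1000, private maxMs = 60_000) {}
  delayMs(attempt: number): number {
    // 1s, 2s, 4s, ... capped at maxMs
    return Math.min(this.baseMs * 2 ** (attempt - 1), this.maxMs);
  }
}

type PolicyName = "immediate" | "linear" | "exponential";

// Factory: callers pick a strategy by name instead of instantiating classes.
class RetryPolicyFactory {
  static create(name: PolicyName): RetryPolicy {
    switch (name) {
      case "immediate":
        return new ImmediateRetry();
      case "linear":
        return new LinearBackoff();
      case "exponential":
        return new ExponentialBackoff();
    }
  }
}
```

Because consumers depend only on the `RetryPolicy` interface, policies can be swapped (or a new one added) without touching queue-processing code.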
- Runtime: Node.js 16+
- Framework: Express.js
- Language: TypeScript
- ORM: TypeORM
- Database: MySQL 8.0
- Message Queue: RabbitMQ with Dead Letter Queue
- Cache: Redis (for employee lookups)
- Testing: Jest + Supertest
- ✅ Department management with unique names
- ✅ Employee management with pagination
- ✅ Leave request system with auto-approval logic
- ✅ Message queue processing with retry mechanism
- ✅ Idempotent message handling
- ✅ Request validation with Joi
- ✅ Rate limiting (100 req/15min)
- ✅ Health check endpoints
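The idempotent-handling feature can be sketched as follows. This is a simplified in-memory stand-in for the project's `queue_processing_log` table (the `QueueProcessingLog` class and `handleMessage` helper here are illustrative, not the real API):

```typescript
// Stand-in for the queue_processing_log table, whose unique message_id
// index guarantees each message is recorded at most once.
class QueueProcessingLog {
  private seen = new Set<string>();

  // Returns true if this messageId is new; false on a duplicate delivery.
  markIfNew(messageId: string): boolean {
    if (this.seen.has(messageId)) return false;
    this.seen.add(messageId);
    return true;
  }
}

function handleMessage(
  log: QueueProcessingLog,
  messageId: string,
  process: () => void,
): "processed" | "skipped" {
  // Check the log before running the handler, so redelivered
  // messages (e.g. after a consumer crash) have no side effects.
  if (!log.markIfNew(messageId)) return "skipped";
  process();
  return "processed";
}
```

In production the check happens against the database, so it also holds across multiple consumer processes.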
- Database: Indexed queries, connection pooling (10 connections), pagination (max 100/page)
- Queue: Multiple consumers support, Dead Letter Queue, exponential backoff retry
- API: Rate limiting, request validation, Redis response caching
- Caching: Redis caching for employee lookups (5min TTL)
- Docker & Docker Compose
- Node.js 16+
- npm or yarn
```bash
git clone <your-repo>
cd workforce
npm install
```

```bash
# Start MySQL, RabbitMQ, Redis with Docker Compose
docker-compose up -d

# Wait for services to be healthy
docker-compose ps
```

```bash
# Run database migrations
npm run migration:run
```

```bash
# Start the dev server
npm run dev
```

The API will be available at http://localhost:3000.
```bash
# Run all tests
npm test

# Run with coverage
npm run test:coverage

# Watch mode
npm run test:watch
```

Base URL: http://localhost:3000/api
- `POST /api/departments` - Create department
- `GET /api/departments/:id/employees?page=1&limit=10` - List employees
- `POST /api/employees` - Create employee
- `GET /api/employees/:id` - Get employee with leave history (cached)
- `POST /api/leave-requests` - Create leave request
```bash
curl -X POST http://localhost:3000/api/departments \
  -H "Content-Type: application/json" \
  -d '{"name": "Engineering"}'
```

```bash
curl -X POST http://localhost:3000/api/employees \
  -H "Content-Type: application/json" \
  -d '{
    "name": "John Doe",
    "email": "john@example.com",
    "departmentId": 1
  }'
```

```bash
curl -X POST http://localhost:3000/api/leave-requests \
  -H "Content-Type: application/json" \
  -d '{
    "employeeId": 1,
    "startDate": "2024-06-01",
    "endDate": "2024-06-02",
    "leaveType": "VACATION"
  }'
```

- `GET /api/health` - Application health
- `GET /api/queue-health` - RabbitMQ health
- `GET /api/db-health` - MySQL connection health
- `GET /api/redis-health` - Redis cache health
- Leaves ≤ 2 days: Auto-approved immediately
- Leaves > 2 days: Marked as PENDING for manual approval
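A minimal sketch of this rule (function names are illustrative, and the assumption that the duration counts both endpoint days inclusively is mine, based on the ≤ 2-day example request above):

```typescript
type LeaveStatus = "APPROVED" | "PENDING";

// Duration in days, counting both the start and end date.
function leaveDurationDays(startDate: string, endDate: string): number {
  const ms = new Date(endDate).getTime() - new Date(startDate).getTime();
  return ms / 86_400_000 + 1; // 86,400,000 ms per day
}

function initialStatus(startDate: string, endDate: string): LeaveStatus {
  return leaveDurationDays(startDate, endDate) <= 2 ? "APPROVED" : "PENDING";
}
```

So the example request for 2024-06-01 through 2024-06-02 (2 days) would be auto-approved, while a week-long request would stay PENDING.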
- Max retries: 3 attempts
- Strategy: Exponential backoff with jitter
- Base delay: 1000ms
- Max delay: 60000ms (1 minute)
- Failed messages: Routed to Dead Letter Queue after max retries
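Putting those parameters together, the per-attempt delay can be sketched as exponential backoff with "full jitter" (whether this project uses full or partial jitter is an assumption; the function names are illustrative):

```typescript
const BASE_DELAY_MS = 1000;   // base delay
const MAX_DELAY_MS = 60_000;  // cap: 1 minute
const MAX_RETRIES = 3;

// Full jitter: pick a uniform delay in [0, min(base * 2^(attempt-1), cap)).
// The injectable `random` makes the function deterministic in tests.
function retryDelayMs(attempt: number, random: () => number = Math.random): number {
  const capped = Math.min(BASE_DELAY_MS * 2 ** (attempt - 1), MAX_DELAY_MS);
  return random() * capped;
}

// After the final retry fails, the message goes to the Dead Letter Queue.
function shouldDeadLetter(attempt: number): boolean {
  return attempt > MAX_RETRIES;
}
```

The jitter spreads retries out over time, so many messages failing at once do not all retry in the same instant.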
- `departments` - Department information with unique names
- `employees` - Employee records with unique emails
- `leave_requests` - Leave request records with status tracking
- `queue_processing_log` - Queue message tracking for idempotency
- `employees.department_id` - Fast department lookups
- `leave_requests.employee_id + status` - Efficient leave queries
- `leave_requests.employee_id + startDate + endDate` - Date range queries
- `queue_processing_log.message_id` - Idempotency checks (unique)
```
src/
├── config/          # Configuration management
├── controllers/     # HTTP request handlers
├── entities/        # TypeORM entities
├── repositories/    # Data access layer (Repository pattern)
├── services/        # Business logic layer
├── middleware/      # Express middleware (rate limiting, validation)
├── validation/      # Request validation schemas (Joi)
├── utils/           # Utility functions (RetryPolicy)
├── migration/       # Database migrations
└── __tests__/       # Test files
    ├── unit/        # Unit tests (business logic)
    └── integration/ # Integration tests (API)
```
```bash
# View logs
docker-compose logs -f

# View specific service logs
docker-compose logs -f mysql
docker-compose logs -f rabbitmq

# Stop services
docker-compose down

# Reset everything (removes volumes)
docker-compose down -v

# Re-run migrations after a reset
npm run migration:run
```

- Connection Pooling: 10 database connections
- Database Indexes: On foreign keys and frequently queried fields
- Pagination: Max 100 items per page
- Rate Limiting: 100 requests per 15 minutes
- Redis Caching: 5-minute TTL for employee lookups
- Queue Processing: Prefetch 1 message at a time
- Exponential Backoff: Prevents thundering herd on retries
- Username: `workforce`
- Password: `workforce123`
Monitor:
- Queue depth
- Message processing rate
- Dead letter queue messages
- Consumer status
Check MySQL connection status:

```bash
curl http://localhost:3000/api/db-health
```

Check Redis status:

```bash
curl http://localhost:3000/api/redis-health
```

- Create Leave Request → API receives the request
- Save to Database → Status: PENDING
- Publish to Queue → `leave.requested` queue
- Consumer Processes → Applies business rules
- Auto-Approval → If ≤ 2 days, status: APPROVED
- Idempotency Check → Uses `QueueProcessingLog`
- Retry on Failure → Exponential backoff (max 3 retries)
- Dead Letter Queue → After max retries exceeded
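The consumer side of this flow can be condensed into one decision function; this is a deliberately simplified sketch (all names are illustrative, and the real implementation works against the database and RabbitMQ rather than in memory):

```typescript
type Outcome = "approved" | "pending" | "skipped" | "dead-lettered" | "retry";

// One consumer step: idempotency check, then the business rule,
// with retry/DLQ handling on failure. `failing` simulates a processing error.
function consume(
  messageId: string,
  leaveDays: number,
  attempt: number,           // 1-based delivery attempt
  processedIds: Set<string>, // stands in for queue_processing_log
  failing: boolean,
): Outcome {
  if (processedIds.has(messageId)) return "skipped"; // already handled
  if (failing) {
    // Max 3 retries, then route to the Dead Letter Queue.
    return attempt >= 3 ? "dead-lettered" : "retry";
  }
  processedIds.add(messageId);
  return leaveDays <= 2 ? "approved" : "pending"; // auto-approval rule
}
```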
MIT
Built with attention to scalability, maintainability, and production-readiness.