HORUS is a frictionless tracking system designed to enhance productivity and safety on construction sites. It combines computer vision (CV) for PPE compliance detection with ESP32-based IoT tracking for asset movement monitoring. The system provides real-time room-based item location, named person tracking, and AI-generated summary reports.
- PPE Compliance Detection: Real-time detection of safety violations (missing helmets, vests)
- Named Person Tracking: Personalized monitoring instead of anonymous tracking
- Multi-Camera Support: Parallel processing across multiple camera feeds
- YOLO + ReID Integration: Advanced object detection with person re-identification
- Room-based Location: Room-level indoor positioning via Wi-Fi RSSI
- Movement History: Complete audit trail of asset movements
- Live Dashboard: Real-time visualization of asset locations
- Multi-Organization Support: Complete data isolation per organization
- Automatic Attendance: Face recognition with automatic attendance marking
- Mobile Compatibility: Wireless camera streaming from mobile devices
- Comprehensive Reporting: Daily and historical attendance reports
- RAG-powered Support: Retrieval-Augmented Generation for intelligent responses
- Knowledge Base Integration: Access to system documentation and FAQs
- Multi-Model Support: OpenAI GPT and local models (FLAN-T5, DistilGPT-2)
- Vector Search: FAISS-powered semantic search capabilities
- Real-time Dashboards: Live monitoring of all system components
- Trend Analysis: Historical data analysis and insights
- Safety Reports: PPE compliance reporting and violation tracking
- Asset Utilization: Movement patterns and usage analytics
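The chat agent's semantic search boils down to ranking document embeddings by similarity to the query embedding. The system uses FAISS for this at scale; the brute-force cosine-similarity sketch below (function and variable names are illustrative, not from the codebase) shows the underlying idea:

```python
import numpy as np

def semantic_search(query_vec, doc_vecs, k=3):
    """Rank document embeddings by cosine similarity to the query.

    FAISS does this with optimized indexes over large corpora; this
    brute-force version illustrates the ranking on small arrays.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                  # cosine similarity per document
    return np.argsort(-scores)[:k]  # indices of the k best matches

# Three toy 2-D "embeddings"; the first aligns exactly with the query.
docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(semantic_search(np.array([1.0, 0.0]), docs, k=2))  # -> [0 2]
```

In production the documents would be knowledge-base chunks embedded by a text-embedding model, with FAISS replacing the `argsort` step.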
```
HORUS System
├── Frontend (React + TypeScript)
│   ├── Landing Page
│   ├── User Dashboard
│   ├── Face Recognition Interface
│   ├── Asset Tracking Dashboard
│   └── Chat Agent Interface
├── Backend (FastAPI + Python)
│   ├── Authentication & User Management
│   ├── Computer Vision Processing
│   ├── Face Recognition Service
│   ├── Asset Tracking API
│   ├── AI Chat Agent
│   └── Reporting Engine
├── Database (MongoDB Atlas)
│   ├── User Data
│   ├── Detection Logs
│   ├── Movement History
│   └── Attendance Records
└── IoT Infrastructure
    ├── ESP32 Devices
    ├── Wi-Fi Access Points
    └── RSSI Monitoring
```
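The RSSI layer reduces to a simple rule: an ESP32 tag reports the signal strength of each access point it hears, and the room mapped to the strongest AP wins. A minimal sketch of that rule (AP and room names are illustrative; the production logic presumably adds smoothing and thresholds):

```python
# Map each access point to the room it covers (illustrative names).
AP_TO_ROOM = {
    "ap-workshop": "Workshop",
    "ap-storage": "Storage",
    "ap-office": "Office",
}

def locate(rssi_readings):
    """Assign a room from {ap_id: rssi_dbm} via the strongest signal.

    RSSI is negative dBm, so the maximum value marks the closest AP.
    Returns None when the tag hears no known access point.
    """
    if not rssi_readings:
        return None
    strongest_ap = max(rssi_readings, key=rssi_readings.get)
    return AP_TO_ROOM.get(strongest_ap)

print(locate({"ap-workshop": -48, "ap-storage": -71}))  # -> Workshop
```

A single reading like this is noisy in practice, which is why systems of this kind typically average several samples before logging a room change to the movement history.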
- Node.js (v16 or higher)
- Python (3.8 or higher)
- MongoDB Atlas account
- OpenAI API Key (for AI features)
```bash
git clone https://github.com/yourusername/HORUS-Frictionless-Object-Tracking-System.git
cd HORUS-Frictionless-Object-Tracking-System
```

Backend setup:

```bash
cd HORUS_backend

# Create virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set environment variables
cp .env.example .env
# Edit .env with your MongoDB Atlas connection string and OpenAI API key
```

Frontend setup:

```bash
cd HORUS_frontend

# Install dependencies
npm install

# Start development server
npm run dev
```

Use the provided initialization script:

```bash
# From project root
python initiation.py
```

This will start:
- Frontend: http://localhost:5173
- Main API: http://localhost:8000
- Tracking API: http://localhost:8001
- Asset Tracker: http://localhost:8002
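Once `initiation.py` is running, a quick stdlib-only probe can confirm that each service answers. The probe paths below are assumptions (FastAPI serves `/docs` by default); any 2xx/3xx response counts as up:

```python
import urllib.request

# Ports from the initialization script; probe paths are illustrative.
SERVICES = {
    "Frontend": "http://localhost:5173/",
    "Main API": "http://localhost:8000/docs",
    "Tracking API": "http://localhost:8001/docs",
    "Asset Tracker": "http://localhost:8002/docs",
}

def is_up(url, timeout=3):
    """True if the URL answers with an HTTP 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # connection refused, timeout, or HTTP 4xx/5xx
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name:14s} {'UP' if is_up(url) else 'DOWN'}")
```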
For wireless face recognition attendance:
1. Start the backend:

```bash
cd HORUS_backend
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
```

2. Serve the mobile page:

```bash
python serve_mobile.py
```

3. Access from your phone:
   - Connect the phone to the same Wi-Fi network
   - Open a browser at `http://YOUR_LAPTOP_IP:3000/mobile_attendance.html`
   - Enter the laptop IP and organization email
   - Start camera streaming for attendance
Create a `.env` file in `HORUS_backend/`:

```env
MONGODB_URL=mongodb+srv://username:password@cluster.mongodb.net/
OPENAI_API_KEY=your_openai_api_key_here
YOLO_MODEL_PATH=path/to/yolo/model.pt
REID_MODEL_PATH=path/to/reid/model.pth.tar
REFERENCE_IMAGES_PATH=path/to/reference/images
```

Required model files (not included in the repository):
- YOLO Model: `workforcetracker/trackingmodel/yolo/Model1/weights/best.pt`
- ReID Model: `workforcetracker/trackingmodel/reid/Model1/model.pth.tar-60`
- Reference Images: `workforcetracker/reference_images/`
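Since the model files ship separately, a startup check that fails fast on missing settings saves debugging time. The variable names come from the `.env` layout above; the helper itself is a sketch, not part of the shipped codebase:

```python
import os

# Settings the backend cannot run without (names from the .env above).
REQUIRED = [
    "MONGODB_URL",
    "OPENAI_API_KEY",
    "YOLO_MODEL_PATH",
    "REID_MODEL_PATH",
    "REFERENCE_IMAGES_PATH",
]

def missing_settings(env=None):
    """Return the required keys that are unset or empty."""
    env = os.environ if env is None else env
    return [key for key in REQUIRED if not env.get(key)]

# e.g. missing_settings({"MONGODB_URL": "mongodb+srv://..."}) lists
# every remaining key, so startup can abort with a clear message.
```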
Authentication:
- `POST /api/auth/signup` - User registration
- `POST /api/auth/signin` - User login
- `GET /api/auth/user` - Get current user

Face Recognition:
- `POST /api/face-recognition/register-face` - Register new face
- `POST /api/face-recognition/recognize-face` - Recognize face
- `POST /api/face-recognition/mark-attendance` - Mark attendance

Movement Tracking:
- `GET /api/movement-history` - Get movement history
- `POST /api/movement-history` - Log movement data
- `WebSocket /ws/movements` - Real-time movement updates

Detection:
- `POST /api/detection/detect` - Process detection frame
- `WebSocket /ws/video` - Live video stream processing

Chat Agent:
- `POST /chatagent/chat` - Chat with AI agent
- `GET /chatagent/health` - Service health check
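The REST endpoints above can be exercised with a stdlib-only helper. The JSON field names in the example call are assumptions about the request schema, and the bearer-token header assumes the usual FastAPI auth pattern:

```python
import json
import urllib.request

BASE = "http://localhost:8000"

def build_request(path, payload, token=None):
    """Assemble a JSON POST request for a HORUS endpoint."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )

def post_json(path, payload, token=None):
    """Send the request and decode the JSON response body."""
    with urllib.request.urlopen(build_request(path, payload, token)) as resp:
        return json.load(resp)

# Field names below are illustrative, not the documented schema:
# post_json("/api/auth/signin", {"email": "user@example.com", "password": "..."})
```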
```
HORUS_backend/
├── app/
│   ├── api/               # API routes and controllers
│   ├── assettracker/      # Asset tracking module
│   ├── chatagent/         # AI chat agent
│   ├── database/          # Database configurations
│   ├── facerecognizer/    # Face recognition service
│   ├── models/            # Data models
│   └── workforcetracker/  # Computer vision module
├── requirements.txt
└── main.py
```
```
HORUS_frontend/
├── src/
│   ├── components/  # React components
│   ├── services/    # API services
│   └── assets/      # Static assets
├── package.json
└── vite.config.ts
```
- Create a feature branch: `git checkout -b feature/new-feature`
- Add backend API endpoints in `app/api/routes/`
- Add frontend components in `src/components/`
- Update documentation
- Submit a pull request
```bash
# Backend tests
cd HORUS_backend
pytest

# Frontend tests
cd HORUS_frontend
npm test
```

- Minimum: 8GB RAM, Intel i5 or equivalent
- Recommended: 16GB RAM, Intel i7 or equivalent, NVIDIA GPU (for CV processing)
- Storage: 10GB free space
- Network: Stable internet connection for MongoDB Atlas
- OS: Windows 10+, macOS 10.15+, Ubuntu 18.04+
- Browsers: Chrome 90+, Firefox 88+, Safari 14+
- Python: 3.8-3.11
- Node.js: 16.0+
- OpenAI for GPT models
- Ultralytics for YOLO implementation
- FastAPI team for the excellent framework
- React team for the frontend framework
- MongoDB for database services
Made with ❤️ by Team Epochs4