Face recognition pipeline powered by Triton Inference Server for flexible model deployment and inference.
- Model Flexibility - Easily swap detection and recognition models via Triton config
- Efficient Inference - Optimized pipeline using NVIDIA Triton
- Scalable Architecture - Distributed services for production workloads
- Vector Storage - Fast similarity search with Qdrant
- User-Friendly UI - Streamlit interface for face registration and management
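The "fast similarity search" feature refers to comparing face embedding vectors. A minimal pure-Python sketch of the core operation Qdrant performs at scale (function names and the 0.6 threshold are illustrative, not taken from the repo; real thresholds depend on the recognition model):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(query, candidate, threshold=0.6):
    # Threshold is a placeholder; tune it per recognition model.
    return cosine_similarity(query, candidate) >= threshold
```

Qdrant performs this comparison (cosine, dot product, or Euclidean) across the whole collection using approximate nearest-neighbor indexes, so lookups stay fast as the face database grows.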
```bash
# Start the system
docker-compose up -d
docker-compose -f docker-compose.qdrant.yaml up -d

# Launch face registration UI
streamlit run src/face_registration_app.py

# Launch application with webcam
python main.py --webcam
```

| Service | Purpose | Tech |
|---|---|---|
| Inference Server | Model serving | Triton |
| Vector DB | Embedding storage | Qdrant |
| Object Storage | Image storage | MinIO |
| Registration UI | Face enrollment | Streamlit |
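The services in the table map onto Compose definitions roughly like the sketch below (image tags, ports, and service names are assumptions; the repo's actual `docker-compose.yaml` and `docker-compose.qdrant.yaml` are authoritative):

```yaml
# Hypothetical sketch; consult the repo's compose files for real values.
services:
  triton:
    image: nvcr.io/nvidia/tritonserver:24.01-py3   # tag assumed
    command: tritonserver --model-repository=/models
    volumes:
      - ./models:/models
    ports:
      - "8000:8000"   # HTTP inference
      - "8001:8001"   # gRPC inference
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
  minio:
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"
```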
- Add your model to the Triton model repository
- Update the model configuration in `models/config.pbtxt`
- Restart the Triton service
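For a recognition model, `models/config.pbtxt` might look like the sketch below (the model name, backend, and tensor names/dims are assumptions; match them to your actual model):

```protobuf
name: "face_recognition"          # assumed model name
platform: "onnxruntime_onnx"      # or e.g. tensorrt_plan, per your model format
max_batch_size: 8
input [
  {
    name: "input"                 # tensor name assumed
    data_type: TYPE_FP32
    dims: [ 3, 112, 112 ]         # typical aligned-face input size
  }
]
output [
  {
    name: "embedding"             # tensor name assumed
    data_type: TYPE_FP32
    dims: [ 512 ]
  }
]
```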
The Streamlit app provides an intuitive interface to:
- Register new faces with name and attributes
- Capture faces from webcam or upload images
- View and manage existing face database
- Test face recognition in real-time
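The register/recognize flow behind the UI can be sketched as a tiny in-memory registry (class name and the 0.5 threshold are illustrative; the real app stores vectors in Qdrant and images in MinIO). Embeddings are L2-normalized here, so a dot product equals cosine similarity:

```python
import math

class FaceRegistry:
    """Toy stand-in for the Qdrant-backed store used by the real app."""

    def __init__(self, threshold=0.5):
        self.entries = []           # (name, attributes, normalized embedding)
        self.threshold = threshold  # model-dependent placeholder

    @staticmethod
    def _normalize(vec):
        norm = math.sqrt(sum(x * x for x in vec))
        return [x / norm for x in vec]

    def register(self, name, embedding, **attributes):
        self.entries.append((name, attributes, self._normalize(embedding)))

    def recognize(self, embedding):
        """Return the best-matching name, or None if below the threshold."""
        query = self._normalize(embedding)
        best_name, best_score = None, self.threshold
        for name, _attrs, stored in self.entries:
            score = sum(q * s for q, s in zip(query, stored))
            if score >= best_score:
                best_name, best_score = name, score
        return best_name
```

Usage mirrors the UI flow: enroll a face with a name and attributes, then query with a fresh embedding to test recognition.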