Issue Description
Currently, the calibration validation endpoint (/api/session/calib_validation) performs the same expensive model training computation for both requests with from_ruxailab: false and from_ruxailab: true. This results in unnecessary duplicate computation since both requests use identical input parameters and should produce identical results.
Current Flow
- Frontend sends the first request with from_ruxailab: false
- Backend calls gaze_tracker.predict() → performs full model training
- Frontend sends the second request with from_ruxailab: true
- Backend calls gaze_tracker.predict() again → performs identical model training
- The second request additionally sends the results to the Firebase webhook
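The duplicated work can be made concrete with a minimal sketch. The names here (GazeTracker, handle_calib_validation, the calib_points payload) are illustrative stand-ins, not the project's actual API; train_count just instruments how often the expensive step runs.

```python
class GazeTracker:
    def __init__(self):
        self.train_count = 0  # counts how often the expensive step runs

    def predict(self, calib_points):
        self.train_count += 1                 # stands in for full model training
        return {"score": len(calib_points)}

def handle_calib_validation(tracker, calib_points, from_ruxailab):
    result = tracker.predict(calib_points)    # expensive on every request
    if from_ruxailab:
        pass  # the real endpoint would POST result to the Firebase webhook here
    return result

tracker = GazeTracker()
handle_calib_validation(tracker, [(0, 0), (1, 1)], from_ruxailab=False)
handle_calib_validation(tracker, [(0, 0), (1, 1)], from_ruxailab=True)
# tracker.train_count is now 2: identical training ran twice
```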
Impact
- Performance: Double the computational cost for identical operations
- Resource Usage: Unnecessary CPU cycles and memory consumption
Proposed Solution
Implement a caching mechanism to store the results of gaze_tracker.predict() and reuse them for subsequent requests with identical parameters.
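For the in-process case, a memoization sketch along these lines would avoid the second training run. This assumes the calibration parameters can be reduced to a hashable key (here a tuple of points); cached_predict and call_count are hypothetical names for illustration.

```python
from functools import lru_cache

call_count = 0  # instrumentation only: counts actual training runs

@lru_cache(maxsize=32)
def cached_predict(calib_points_key):
    # calib_points_key must be hashable, e.g. a tuple of (x, y) pairs
    global call_count
    call_count += 1                           # stands in for full model training
    return {"score": len(calib_points_key)}

key = ((0, 0), (1, 1))                        # identical parameters → same key
first = cached_predict(key)                   # first call trains the model
second = cached_predict(key)                  # identical key: served from cache
```

With this in place, the from_ruxailab: true request would reuse the result computed for the from_ruxailab: false request instead of retraining.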
Caching Options Under Discussion
There are ongoing discussions about the best caching approach:
- In-memory caching (LRU cache, file-based)
- Shared memory solutions for multi-process environments
- External caching services (Redis, etc.)
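Note that an in-process LRU cache is per worker: with multiple gunicorn/uwsgi workers, each process would still train once. A file-based variant is one stdlib-only way to share results across processes; CACHE_DIR, the key scheme, and get_or_compute are assumptions for this sketch, and os.replace keeps the write atomic so a concurrent reader never sees a half-written file.

```python
import hashlib
import json
import os
import tempfile

CACHE_DIR = tempfile.gettempdir()  # placeholder; a real deployment would configure this

def cache_path(params: dict) -> str:
    # Derive a stable filename from the request parameters
    key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
    return os.path.join(CACHE_DIR, f"calib_{key}.json")

def get_or_compute(params: dict, compute):
    path = cache_path(params)
    try:
        with open(path) as f:
            return json.load(f)               # cache hit: reuse prior result
    except FileNotFoundError:
        result = compute(params)              # cache miss: run the expensive step
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(result, f)
        os.replace(tmp, path)                 # atomic rename avoids torn reads
        return result
```

A Redis-backed cache would behave the same way but add eviction and TTL handling for free, at the cost of an extra service dependency.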
Related Discussions
Related Issue: #61
Related PR: #68
Request for Input
Guidance from the maintainers would be appreciated on:
- Preferred caching mechanism for this use case
- Configuration parameters (cache size, TTL, etc.)
- Multi-process deployment considerations
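To make the configuration question concrete, a minimal sketch of the two tunables (cache size bounding memory, TTL bounding staleness) might look like this. TTLCache and the default values are placeholders, not a proposal for the final implementation.

```python
import time
from collections import OrderedDict

class TTLCache:
    """Bounded cache: max_size caps memory, ttl_seconds caps staleness."""

    def __init__(self, max_size=64, ttl_seconds=300.0):
        self.max_size = max_size
        self.ttl = ttl_seconds
        self._store = OrderedDict()           # key -> (expires_at, value)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        expires_at, value = item
        if time.monotonic() > expires_at:
            del self._store[key]              # drop expired entry
            return None
        self._store.move_to_end(key)          # mark as recently used
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
        self._store.move_to_end(key)
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)   # evict least recently used
```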
Expected Benefits
- Significant reduction in computational overhead
- Faster response times for calibration validation
- Better resource utilization
- Improved user experience during calibration process