# Video Tracking Analytics

Real-time multi-object tracking, counting, and movement analysis using YOLOv8 + ByteTrack, optimised for Apple M1 (MPS acceleration).

A portfolio project that detects and tracks objects across video frames with persistent IDs, trajectory trails, heatmaps, virtual line counters, and a live stats dashboard. Exports full tracking results as CSV/JSON.
## Features

| Feature | Description |
|---|---|
| Multi-object detection | YOLOv8 (nano to xlarge) with MPS/CPU support |
| Persistent tracking | ByteTrack, stable IDs across frames |
| Virtual line counter | IN/OUT counting per class with crossing events |
| Movement heatmap | Gaussian-blurred density overlay, PNG export |
| Trajectory trails | Per-object fading polyline history |
| Live HUD | FPS, active tracks, unique total, line counts |
| Full export | CSV + JSON tracking data, summary report |
| M1 optimised | PyTorch MPS acceleration on Apple Silicon |
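All of these features hang off a single per-frame loop: detect, associate detections with existing tracks, then update analytics. The sketch below illustrates only the ID-persistence idea with a toy nearest-centroid matcher; every name in it is hypothetical, and the real project uses ByteTrack, not this heuristic.

```python
from dataclasses import dataclass, field


@dataclass
class Track:
    track_id: int
    history: list = field(default_factory=list)  # one centroid per frame


class CentroidTracker:
    """Toy ID assignment by nearest centroid (illustrative stand-in for ByteTrack)."""

    def __init__(self, max_dist: float = 50.0):
        self.max_dist = max_dist        # beyond this, a detection starts a new track
        self.tracks: dict[int, Track] = {}
        self._next_id = 1

    def update(self, centroids):
        """Match each (x, y) centroid to the closest live track, or open a new one."""
        assigned = []
        for cx, cy in centroids:
            best, best_d = None, self.max_dist
            for t in self.tracks.values():
                px, py = t.history[-1]
                d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = t, d
            if best is None:
                best = Track(self._next_id)
                self.tracks[self._next_id] = best
                self._next_id += 1
            best.history.append((cx, cy))
            assigned.append(best.track_id)
        return assigned
```

A real tracker adds motion prediction, confidence-aware matching, and a lost-track buffer, which is exactly what ByteTrack contributes here.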
## Requirements

- macOS Apple M1/M2/M3 (or Linux/Windows)
- Python 3.10+
- Conda or venv
## Installation

```bash
# 1. Clone
git clone https://github.com/tajwarchy/video-tracking-analytics.git
cd video-tracking-analytics

# 2. Create environment
conda create -n video-tracking python=3.10 -y
conda activate video-tracking

# 3. Install dependencies
pip install -r requirements.txt
pip install -e .

# 4. Verify MPS
python -c "import torch; print('MPS:', torch.backends.mps.is_available())"
```

## Usage

Prepare a video:

```bash
# Generate a synthetic test video (instant, no download)
python -m data.prepare_video --synthetic

# Inspect any video
python -m data.prepare_video --info data/sample_videos/synthetic_test.mp4

# Resize a video to 1280×720
python -m data.prepare_video --input myvideo.mp4 --width 1280 --height 720
```

Process a video file with a live preview:

```bash
python -m inference.process_video \
    --config configs/tracking_config.yaml \
    --source data/sample_videos/your_video.mp4 \
    --show
```

Track from a webcam:

```bash
python -m inference.live_stream \
    --config configs/tracking_config.yaml \
    --source 0
```

Save the live session to disk:

```bash
python -m inference.live_stream \
    --config configs/tracking_config.yaml \
    --source 0 --save
```

Batch-process every video in a directory:

```bash
python -m inference.process_video \
    --config configs/tracking_config.yaml \
    --batch data/sample_videos/
```

## Keyboard controls

| Key | Action |
|---|---|
| `q` | Quit |
| `h` | Toggle heatmap overlay |
| `t` | Toggle trajectory trails |
| `b` | Toggle bounding boxes |
| `l` | Toggle counting lines |
| `u` | Toggle HUD |
| `s` | Save snapshot (live mode) |
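Toggles like these reduce naturally to a flag map keyed by hotkey. A minimal sketch, assuming the display flags are plain booleans (the names here are illustrative, not the project's actual variables):

```python
# Display toggles keyed by the hotkeys in the table above (illustrative names).
TOGGLES = {
    "h": "heatmap",
    "t": "trails",
    "b": "boxes",
    "l": "lines",
    "u": "hud",
}


def handle_key(key: str, state: dict) -> dict:
    """Flip the display flag bound to `key`; unmapped keys leave state untouched."""
    flag = TOGGLES.get(key)
    if flag is not None:
        state[flag] = not state.get(flag, True)
    return state
```

In an OpenCV display loop the key would typically come from something like `chr(cv2.waitKey(1) & 0xFF)`.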
## Configuration

All parameters live in `configs/tracking_config.yaml`. Key settings:

```yaml
model:
  size: n            # n | s | m | l | x
  confidence: 0.4
  device: mps        # mps | cpu

classes:
  filter: [0, 2, 7]  # 0=person 2=car 7=truck

tracker:
  type: bytetrack
  track_buffer: 30   # frames to keep lost tracks alive

counting:
  enabled: true
  lines:
    - name: "Line A"
      points: [[0.5, 0.0], [0.5, 1.0]]  # vertical center line

heatmap:
  enabled: true
  colormap: HOT      # HOT | JET | INFERNO | PLASMA | TURBO
  alpha: 0.5
```

## Benchmarks

Tested on MacBook Air M1 · YOLOv8n · 100 frames · `street_video.mp4`:
```bash
python benchmark.py --source data/sample_videos/street_video.mp4 --frames 100
```

| Device | Detect | Track | Total | FPS |
|---|---|---|---|---|
| MPS | 40.0 ms | 1.6 ms | 41.5 ms | 24.1 fps |
| CPU | 39.3 ms | 1.3 ms | 40.5 ms | 24.7 fps |
MPS and CPU perform comparably at this resolution because YOLOv8n is lightweight enough that GPU dispatch overhead offsets the compute savings. The MPS advantage grows with larger models (yolov8s/m) or higher input resolutions.
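The fallback implied here (prefer MPS, drop to CPU) can be expressed as a tiny helper. This is a sketch of the idea, not the project's actual device-selection code:

```python
def pick_device(prefer_mps: bool = True) -> str:
    """Return "mps" when PyTorch reports it available, else "cpu"."""
    try:
        import torch  # imported lazily so the helper degrades gracefully without PyTorch
        if prefer_mps and torch.backends.mps.is_available():
            return "mps"
    except ImportError:
        pass
    return "cpu"
```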
### Tracking metrics (proxy)

| Metric | Value |
|---|---|
| Unique tracks | 25 |
| ID switches | 37 |
| Total matched frames | 1245 |
| Avg track duration | 49.8 frames |
| MOTA (proxy) | 97.03 % |
| MOTP (proxy) | 32.77 % |
MOTA/MOTP here are self-consistency proxies computed without ground-truth annotations. For official scores, evaluate against MOT17 using `py-motmetrics`.
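For context on the counting side of these numbers: a virtual line counter only needs the sign of a 2-D cross product to decide which side of a line a centroid is on, and a crossing event is a sign change between consecutive frames. A self-contained sketch, treating the line as infinite (this is not the project's actual `counter.py`):

```python
def side(p, a, b):
    """Sign of the cross product (b-a) × (p-a): >0 one side, <0 the other, 0 on the line."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])


def crossing(prev, curr, a, b):
    """Return "IN", "OUT", or None for a move from prev to curr across line a-b."""
    s0, s1 = side(prev, a, b), side(curr, a, b)
    if s0 < 0 < s1:
        return "IN"
    if s1 < 0 < s0:
        return "OUT"
    return None


# Vertical center line in normalized coordinates, as in the sample config.
a, b = (0.5, 0.0), (0.5, 1.0)
```

Which sign maps to "IN" versus "OUT" depends on the line's point order, so a real counter would make that orientation configurable.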
## Project structure

```
video-tracking-analytics/
├── assets/
│   └── demo.gif                 # portfolio demo
├── data/
│   ├── prepare_video.py         # download / validate / resize / synthetic
│   └── sample_videos/           # input videos (gitignored)
├── tracking/
│   ├── detector.py              # YOLOv8 wrapper → (N,6) detections
│   ├── tracker.py               # ByteTrack wrapper → TrackedObject list
│   ├── motion_estimator.py      # velocity, speed, direction per track
│   └── trajectory.py            # centroid history store
├── analytics/
│   ├── counter.py               # virtual line crossing counter
│   ├── statistics.py            # rolling FPS, counts, speed stats
│   ├── heatmap_generator.py     # density heatmap, PNG export
│   └── report_generator.py      # CSV / JSON / TXT export
├── inference/
│   ├── process_video.py         # offline pipeline
│   ├── live_stream.py           # real-time webcam pipeline
│   └── visualization.py         # all drawing (boxes, trails, HUD)
├── configs/
│   └── tracking_config.yaml     # master config
├── results/                     # auto-generated (gitignored)
│   ├── tracked_videos/
│   ├── heatmaps/
│   ├── statistics/
│   └── reports/
├── weights/                     # YOLO weights (gitignored)
├── benchmark.py                 # speed + MOT metrics
├── setup.py
└── requirements.txt
```
## Outputs

After processing a video named `street.mp4`:

| File | Description |
|---|---|
| `results/tracked_videos/street_tracked.mp4` | Annotated output video |
| `results/heatmaps/street_heatmap.png` | Movement density heatmap |
| `results/reports/street_tracks.csv` | Per-track flat table |
| `results/reports/street_tracks.json` | Full trajectory data |
| `results/statistics/street_summary.json` | Machine-readable summary |
| `results/statistics/street_summary.txt` | Human-readable report |
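The CSV export is plain tabular data, so it can be consumed with the standard library alone. The column names below are assumptions for illustration; check the header of your actual export:

```python
import csv
import io

# Hypothetical excerpt of a tracks CSV; the real column names may differ.
sample = """track_id,class_name,frame,cx,cy
1,person,10,0.42,0.58
1,person,11,0.43,0.58
2,car,10,0.80,0.33
"""

# Group centroids by track ID to rebuild per-object trajectories.
tracks: dict[str, list] = {}
for row in csv.DictReader(io.StringIO(sample)):
    point = (int(row["frame"]), float(row["cx"]), float(row["cy"]))
    tracks.setdefault(row["track_id"], []).append(point)
```

Swap `io.StringIO(sample)` for `open("results/reports/street_tracks.csv")` to process a real export.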
## Tech stack

- Detection: YOLOv8 via `ultralytics`
- Tracking: ByteTrack via `boxmot`
- Vision: OpenCV, NumPy, SciPy
- Acceleration: PyTorch MPS (Apple Silicon)
## License

MIT License. Free to use and adapt.