🎯 Video Tracking Analytics Pipeline

Real-time multi-object tracking, counting, and movement analysis using YOLOv8 + ByteTrack, optimised for Apple M1 (MPS acceleration).

Demo

Portfolio project: detects and tracks objects across video frames with persistent IDs, trajectory trails, heatmaps, virtual line counters, and a live stats dashboard. Exports full tracking results as CSV/JSON.


✨ Features

| Feature | Description |
|---|---|
| 🔍 Multi-object detection | YOLOv8 (nano → xlarge) with MPS/CPU support |
| 🔗 Persistent tracking | ByteTrack for stable IDs across frames |
| 📊 Virtual line counter | IN/OUT counting per class with crossing events |
| 🌡️ Movement heatmap | Gaussian-blurred density overlay, PNG export |
| 🛤️ Trajectory trails | Per-object fading polyline history |
| 📈 Live HUD | FPS, active tracks, unique total, line counts |
| 💾 Full export | CSV + JSON tracking data, summary report |
| ⚡ M1 optimised | PyTorch MPS acceleration on Apple Silicon |

πŸ› οΈ Setup

Requirements

  • macOS on Apple M1/M2/M3 (Linux/Windows also supported)
  • Python 3.10+
  • Conda or venv

Install

```bash
# 1. Clone
git clone https://github.com/tajwarchy/video-tracking-analytics.git
cd video-tracking-analytics

# 2. Create environment
conda create -n video-tracking python=3.10 -y
conda activate video-tracking

# 3. Install dependencies
pip install -r requirements.txt
pip install -e .

# 4. Verify MPS
python -c "import torch; print('MPS:', torch.backends.mps.is_available())"
```

Prepare sample videos

```bash
# Generate a synthetic test video (instant, no download)
python -m data.prepare_video --synthetic

# Inspect any video
python -m data.prepare_video --info data/sample_videos/synthetic_test.mp4

# Resize a video to 1280×720
python -m data.prepare_video --input myvideo.mp4 --width 1280 --height 720
```

🚀 Usage

Process a video file

```bash
python -m inference.process_video \
    --config configs/tracking_config.yaml \
    --source data/sample_videos/your_video.mp4 \
    --show
```

Live webcam stream

```bash
python -m inference.live_stream \
    --config configs/tracking_config.yaml \
    --source 0
```

Record the live stream

```bash
python -m inference.live_stream \
    --config configs/tracking_config.yaml \
    --source 0 --save
```

Batch process a folder

```bash
python -m inference.process_video \
    --config configs/tracking_config.yaml \
    --batch data/sample_videos/
```

Keyboard controls (both modes)

| Key | Action |
|---|---|
| `q` | Quit |
| `h` | Toggle heatmap overlay |
| `t` | Toggle trajectory trails |
| `b` | Toggle bounding boxes |
| `l` | Toggle counting lines |
| `u` | Toggle HUD |
| `s` | Save snapshot (live mode) |
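The toggle logic behind these keys can be sketched as a small dispatch table. The names below are hypothetical, not taken from the repo; the real handler presumably lives in `inference/visualization.py` or the two pipeline scripts.

```python
# Hypothetical key → overlay-toggle dispatch (names are illustrative only).
overlays = {"heatmap": True, "trails": True, "boxes": True, "lines": True, "hud": True}
key_map = {"h": "heatmap", "t": "trails", "b": "boxes", "l": "lines", "u": "hud"}

def handle_key(key: str) -> bool:
    """Flip the overlay bound to `key`; return False when 'q' asks to quit."""
    if key == "q":
        return False
    if key in key_map:
        overlays[key_map[key]] = not overlays[key_map[key]]
    return True

handle_key("h")
print(overlays["heatmap"])  # → False (heatmap overlay now off)
```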

βš™οΈ Configuration

All parameters live in `configs/tracking_config.yaml`. Key settings:

```yaml
model:
  size: n           # n | s | m | l | x
  confidence: 0.4
  device: mps       # mps | cpu

classes:
  filter: [0, 2, 7] # 0=person  2=car  7=truck

tracker:
  type: bytetrack
  track_buffer: 30  # frames to keep lost tracks alive

counting:
  enabled: true
  lines:
    - name: "Line A"
      points: [[0.5, 0.0], [0.5, 1.0]]  # vertical center line

heatmap:
  enabled: true
  colormap: HOT     # HOT | JET | INFERNO | PLASMA | TURBO
  alpha: 0.5
```
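Counting lines are defined by two normalized endpoints. One common way to detect a crossing (an assumption about this repo's internals, not a description of `analytics/counter.py`) is to track which side of the line each centroid is on, via the sign of the 2-D cross product, and count an event when the sign flips between frames:

```python
def side_of_line(p1, p2, point):
    """Return +1, -1, or 0 for which side of the line p1→p2 the point lies on
    (sign of the 2-D cross product of the line direction and the point offset)."""
    (x1, y1), (x2, y2), (px, py) = p1, p2, point
    cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
    return (cross > 0) - (cross < 0)

# Vertical center line from the config above, in normalized coordinates:
a, b = (0.5, 0.0), (0.5, 1.0)
prev = side_of_line(a, b, (0.3, 0.5))   # centroid left of the line
curr = side_of_line(a, b, (0.7, 0.5))   # centroid right of the line
crossed = prev != 0 and curr != 0 and prev != curr
print(crossed)  # → True: count one IN/OUT event
```

The sign convention (which side counts as IN vs OUT) is arbitrary; what matters is that it is consistent from frame to frame.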

📊 Benchmark

Tested on MacBook Air M1 · YOLOv8n · 100 frames · `street_video.mp4`

```bash
python benchmark.py --source data/sample_videos/street_video.mp4 --frames 100
```

Speed

| Device | Detect | Track | Total | FPS |
|---|---|---|---|---|
| MPS | 40.0 ms | 1.6 ms | 41.5 ms | 24.1 fps |
| CPU | 39.3 ms | 1.3 ms | 40.5 ms | 24.7 fps |

MPS and CPU perform comparably at this resolution because YOLOv8n is light enough that GPU dispatch overhead offsets most of the compute savings. The MPS advantage grows with larger models (yolov8s/m) or higher input resolutions.
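The FPS column is simply the reciprocal of the total per-frame latency:

```python
def fps_from_latency(total_ms: float) -> float:
    """Convert per-frame latency in milliseconds to frames per second."""
    return 1000.0 / total_ms

print(round(fps_from_latency(41.5), 1))  # → 24.1 (MPS row)
print(round(fps_from_latency(40.5), 1))  # → 24.7 (CPU row)
```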

Tracking Metrics

| Metric | Value |
|---|---|
| Unique tracks | 25 |
| ID switches | 37 |
| Total matched frames | 1245 |
| Avg track duration | 49.8 frames |
| MOTA (proxy) | 97.03 % |
| MOTP (proxy) | 32.77 % |

MOTA/MOTP here are self-consistency proxies computed without ground-truth annotations. For official scores, evaluate against MOT17 using py-motmetrics.
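For reference, standard MOTA folds misses, false positives, and ID switches into one accuracy score. A proxy computed purely from the tracker's own matches, with no annotations, reduces to penalising ID switches against matched frames; that this reproduces the table's 97.03 % is an assumption about how `benchmark.py` defines its proxy, not a documented fact.

```python
def mota(misses: int, false_positives: int, id_switches: int, num_objects: int) -> float:
    """Multiple Object Tracking Accuracy: 1 - (FN + FP + IDSW) / GT."""
    return 1.0 - (misses + false_positives + id_switches) / num_objects

# Self-consistency proxy: treat the 1245 matched frames as "ground truth"
# and penalise only the 37 ID switches (zero misses/false positives assumed).
print(round(mota(0, 0, 37, 1245) * 100, 2))  # → 97.03
```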


📂 Project Structure

```
video-tracking-analytics/
├── assets/
│   └── demo.gif                # portfolio demo
├── data/
│   ├── prepare_video.py        # download / validate / resize / synthetic
│   └── sample_videos/          # input videos (gitignored)
├── tracking/
│   ├── detector.py             # YOLOv8 wrapper → (N, 6) detections
│   ├── tracker.py              # ByteTrack wrapper → TrackedObject list
│   ├── motion_estimator.py     # velocity, speed, direction per track
│   └── trajectory.py           # centroid history store
├── analytics/
│   ├── counter.py              # virtual line crossing counter
│   ├── statistics.py           # rolling FPS, counts, speed stats
│   ├── heatmap_generator.py    # density heatmap, PNG export
│   └── report_generator.py     # CSV / JSON / TXT export
├── inference/
│   ├── process_video.py        # offline pipeline
│   ├── live_stream.py          # real-time webcam pipeline
│   └── visualization.py        # all drawing (boxes, trails, HUD)
├── configs/
│   └── tracking_config.yaml    # master config
├── results/                    # auto-generated (gitignored)
│   ├── tracked_videos/
│   ├── heatmaps/
│   ├── statistics/
│   └── reports/
├── weights/                    # YOLO weights (gitignored)
├── benchmark.py                # speed + MOT metrics
├── setup.py
└── requirements.txt
```

📤 Output Files

After processing a video named `street.mp4`:

| File | Description |
|---|---|
| `results/tracked_videos/street_tracked.mp4` | Annotated output video |
| `results/heatmaps/street_heatmap.png` | Movement density heatmap |
| `results/reports/street_tracks.csv` | Per-track flat table |
| `results/reports/street_tracks.json` | Full trajectory data |
| `results/statistics/street_summary.json` | Machine-readable summary |
| `results/statistics/street_summary.txt` | Human-readable report |
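The per-track CSV is easy to post-process with the standard library. The column names below are purely illustrative, since the actual schema of `street_tracks.csv` is not documented here:

```python
import csv
import io

# Hypothetical sample in the shape of results/reports/street_tracks.csv;
# column names are an assumption, not taken from the repo.
sample = """track_id,class_name,first_frame,last_frame,num_frames
1,person,10,140,131
2,car,55,180,126
"""

rows = list(csv.DictReader(io.StringIO(sample)))
durations = [int(r["last_frame"]) - int(r["first_frame"]) + 1 for r in rows]
print(durations)  # → [131, 126]
```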

πŸ—οΈ Tech Stack

  • Detection: YOLOv8 via ultralytics
  • Tracking: ByteTrack via boxmot
  • Vision: OpenCV, NumPy, SciPy
  • Acceleration: PyTorch MPS (Apple Silicon)

📄 License

MIT License: free to use and adapt.
