For object tracking, this work uses Simple Online and Realtime Tracking with a Deep Association Metric (Deep SORT) | Paper Link: arxiv
For real-time object detection, the open-source YOLO code by AlexeyAB is used (link)
The dataset used for this project is the Multi-Object Tracking benchmark (MOT16). It can be downloaded from the link
We expect the dataset to be downloaded and placed in the root directory of this project. The directory structure should look like this (for one subset):
MOT16
├── test
| ├── MOT16-06
| | ├── det
| | ├── img1
| | ├── seqinfo.ini
├── train
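Before running anything, it can help to confirm that the expected files are in place. Below is a minimal sketch for checking one sequence against the layout above; the helper name and default arguments are illustrative, not part of the project:

```python
import os

def check_sequence(root="MOT16", split="test", seq="MOT16-06"):
    """Verify that a MOT16 sequence folder matches the expected layout."""
    seq_dir = os.path.join(root, split, seq)
    for item in ("det", "img1", "seqinfo.ini"):
        path = os.path.join(seq_dir, item)
        print(f"{path}: {'found' if os.path.exists(path) else 'MISSING'}")

if __name__ == "__main__":
    check_sequence()
```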
Python >= 3.6
TensorFlow == 1.15.0
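A quick environment sanity check (this snippet is only an illustration, not part of the repo):

```python
import sys
import tensorflow as tf

# Expected environment: Python 3.6+ and TensorFlow 1.15.0
assert sys.version_info >= (3, 6), "Python 3.6 or newer is required"
print("TensorFlow version:", tf.__version__)  # should print 1.15.0
```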
- Clone the repo to your local machine.
git clone https://github.com/Computer-Vision-IIITH-2021/project-blindpc
- Move to the project directory.
cd project-blindpc
- Download the dataset and set up the directory structure as mentioned above.
Note: For setting up Darknet, we have provided a separate Google Colab notebook (darknet_demo.ipynb). We expect the user to run it on a GPU runtime in Google Colab.
To run the tracker, use the following command. It will generate a text file with a bounding box and tracking ID for each detection.
python app.py --sequence_path=./MOT16/test/MOT16-06 --detection_path=./detections/MOT16-06.npy \
--output_path=./MOT16_test_results/MOT16-06.txt --min_conf=0.3 --nn_budget=100
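The generated text file can then be consumed in Python. A minimal sketch, assuming the output follows the standard MOT challenge format (frame, track ID, left, top, width, height, confidence, ...); the helper name is illustrative:

```python
import csv
from collections import defaultdict

def load_tracking_results(path="./MOT16_test_results/MOT16-06.txt"):
    """Group tracked boxes by frame number."""
    boxes_per_frame = defaultdict(list)
    with open(path) as f:
        for row in csv.reader(f):
            frame, track_id = int(row[0]), int(row[1])
            left, top, width, height = map(float, row[2:6])
            boxes_per_frame[frame].append((track_id, left, top, width, height))
    return boxes_per_frame
```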
The detection files can be downloaded directly from the link, or they can be generated using:
python src/detection_process_v2.py --model=model_data/mars-small128.pb --dir_mot=./MOT16/test \
--detection_path=./detections/MOT16-06.npy --dir_out=./detections
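The resulting .npy file can be inspected with NumPy. A small sketch, assuming each row stores the MOT detection fields followed by an appearance feature vector (as in the original Deep SORT tooling):

```python
import numpy as np

detections = np.load("./detections/MOT16-06.npy")
print("number of detections:", detections.shape[0])
print("values per detection:", detections.shape[1])  # MOT fields + appearance features
print("first detection (frame, id, left, top, width, height, conf):", detections[0, :7])
```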
Apart from the default cosine metric learning model, we have also tested other appearance models such as the triplet and magnet models. These models are present in the model_data directory.
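For example, one of these models could presumably be swapped in through the same --model flag of the detection script (the model file name below is illustrative):

python src/detection_process_v2.py --model=model_data/triplet.pb --dir_mot=./MOT16/test \
    --detection_path=./detections/MOT16-06.npy --dir_out=./detections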
The result file with bounding boxes and tracking IDs generated above can be used to create a visual output.
The video can be generated using:
python result_to_video.py --mot_dir=./MOT16/test/MOT16-06/ \
--result_file=./MOT16_test_results/MOT16-06.txt \
--output_path=./videos/
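For intuition, here is a rough sketch of what this visualization step does (using OpenCV; the paths, FPS, and helper name are illustrative, and this is not the project's result_to_video.py):

```python
import csv
import os
from collections import defaultdict

import cv2

def render_video(img_dir="./MOT16/test/MOT16-06/img1",
                 result_file="./MOT16_test_results/MOT16-06.txt",
                 output_path="./videos/MOT16-06.mp4", fps=30):
    """Draw tracked boxes and IDs on each frame and write them to a video."""
    boxes = defaultdict(list)
    with open(result_file) as f:
        for row in csv.reader(f):
            frame, tid = int(row[0]), int(row[1])
            left, top, w, h = (int(float(v)) for v in row[2:6])
            boxes[frame].append((tid, left, top, w, h))

    frames = sorted(os.listdir(img_dir))
    first = cv2.imread(os.path.join(img_dir, frames[0]))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (first.shape[1], first.shape[0]))
    for idx, name in enumerate(frames, start=1):  # MOT frames are 1-indexed
        img = cv2.imread(os.path.join(img_dir, name))
        for tid, left, top, w, h in boxes[idx]:
            cv2.rectangle(img, (left, top), (left + w, top + h), (0, 255, 0), 2)
            cv2.putText(img, str(tid), (left, top - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        writer.write(img)
    writer.release()
```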
Deep SORT can be integrated with a multi-object detector to perform real-time tracking. We have used YOLO as implemented in Darknet.
One can run the end-to-end pipeline using our demo notebook darknet_demo.ipynb on Google Colab
OR
- Set up YOLO on the local machine by following the instructions from the AlexeyAB GitHub repo.
- Download the YOLOv4 weights from the above repo and put them in the Darknet root directory.
- Clone the entire content of this repo into the darknet folder. (Make sure to rename this repo's 'src' folder to 'src_code' to avoid a name clash.)
- Run the command:
python yolo_with_deepsort.py
It will run Darknet on yolo_person.mp4 in the videos folder and generate Deep_sort_output.mp4 as output.
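Conceptually, the script runs a per-frame detect-then-track loop. Below is a sketch of that hand-off; detect_fn, encode_fn, and tracker are hypothetical stand-ins for the Darknet detector, the appearance encoder, and the Deep SORT tracker, and this is not the project's code:

```python
import cv2

def track_video(video_path, detect_fn, encode_fn, tracker):
    """Run detection and Deep SORT association frame by frame."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes = detect_fn(frame)              # YOLO person detections for this frame
        detections = encode_fn(frame, boxes)  # attach appearance features to each box
        tracker.predict()                     # propagate Kalman filter states
        tracker.update(detections)            # associate detections with existing tracks
        for track in tracker.tracks:
            if track.is_confirmed():
                print(track.track_id, track.to_tlwh())  # (left, top, width, height)
    cap.release()
```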
Disclaimer:
This project was done as a part of the course CSE578: Computer Vision, Spring 2021, IIIT-Hyderabad.