Official Repository of Tracking Any Object Amodally.
📙 Project Page | 📎 Paper Link | ✏️ Citations
📌 Leave a ⭐ to keep track of our updates.
Clone the repository
git clone https://github.com/WesleyHsieh0806/TAO-Amodal.git
Set up the environment
conda create --name TAO-Amodal python=3.9 -y
conda activate TAO-Amodal
bash environment_setup.sh
- Download our dataset following the instructions here.
- The directory should have the following structure:
TAO-Amodal
├── frames
│   └── train
│       ├── ArgoVerse
│       ├── BDD
│       ├── Charades
│       ├── HACS
│       ├── LaSOT
│       └── YFCC100M
├── amodal_annotations
│   ├── train/validation/test.json
│   ├── train_lvis_v1.json
│   └── validation_lvis_v1.json
├── example_output
│   └── prediction.json
├── BURST_annotations
│   ├── train
│   │   └── train_visibility.json
│   ...
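To sanity-check the download, the sketch below loads one annotation file and prints its top-level layout. The path and the COCO/LVIS-style keys are assumptions for illustration; inspect the actual files for the exact schema.

import json

# Assumed path; point this at wherever you placed the dataset.
ANNOTATION_PATH = "TAO-Amodal/amodal_annotations/validation_lvis_v1.json"

with open(ANNOTATION_PATH, "r") as f:
    data = json.load(f)

# Print the top-level keys to reveal the actual schema.
print("Top-level keys:", sorted(data.keys()))

# If the file follows a COCO/LVIS-style layout, these entry counts should exist.
for key in ("videos", "images", "annotations", "tracks", "categories"):
    if key in data:
        print(f"{key}: {len(data[key])} entries")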
Explore more examples from our dataset here.
Visualize our dataset and tracker predictions to get a better understanding of amodal tracking. Instructions can be found here.
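For a quick local check, here is a minimal sketch that draws the amodal boxes of one annotated frame with PIL. The dataset root, the file_name field, and the COCO-style [x, y, width, height] box convention are assumptions; verify them against the annotation files.

import json
import os
from PIL import Image, ImageDraw

DATASET_ROOT = "TAO-Amodal"  # assumed location of the downloaded dataset
ANNOTATION_PATH = os.path.join(DATASET_ROOT, "amodal_annotations", "train.json")

with open(ANNOTATION_PATH, "r") as f:
    data = json.load(f)

# Take one image entry and gather the amodal boxes annotated on it
# (a COCO-style layout with `file_name` and [x, y, width, height] boxes is assumed).
image_info = data["images"][0]
boxes = [ann["bbox"] for ann in data["annotations"]
         if ann["image_id"] == image_info["id"]]

frame = Image.open(os.path.join(DATASET_ROOT, "frames", image_info["file_name"]))
draw = ImageDraw.Draw(frame)
for x, y, w, h in boxes:
    # Amodal boxes may extend past the frame border; PIL simply clips them when drawing.
    draw.rectangle([x, y, x + w, y + h], outline=(255, 0, 0), width=3)
frame.save("amodal_boxes_example.jpg")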
We provide training and inference code for the proposed Amodal Expander. The inference code generates a lvis_instances_results.json, which can be used to obtain the evaluation results described in the next section.
- Output tracker predictions as JSON. The predictions should be structured as:
[{
"image_id" : int,
"category_id" : int,
"bbox" : [x,y,width,height],
"score" : float,
"track_id": int,
"video_id": int
}]
We also provide an example prediction JSON here; refer to this file to check the expected format.
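A minimal sketch of serializing tracker output in this format (all values below are placeholders, not real predictions):

import json

# Placeholder values only; real predictions come from your tracker.
predictions = [
    {
        "image_id": 1,
        "category_id": 805,                  # a placeholder LVIS category id
        "bbox": [150.0, 80.0, 64.0, 128.0],  # [x, y, width, height] in pixels
        "score": 0.92,
        "track_id": 3,
        "video_id": 1,
    },
]

with open("prediction.json", "w") as f:
    json.dump(predictions, f)

The resulting file can be passed to the evaluation script in the next step via --track_result.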
- Evaluate on TAO-Amodal
cd tools
python eval_on_tao_amodal.py --track_result /path/to/prediction.json \
--output_log /path/to/output.log \
--annotation /path/to/validation_lvis_v1.json
The annotation JSONs are provided with our dataset. Evaluation results will be printed to your console and saved to the file specified by --output_log.
@article{hsieh2023tracking,
title={Tracking any object amodally},
author={Hsieh, Cheng-Yen and Khurana, Tarasha and Dave, Achal and Ramanan, Deva},
journal={arXiv preprint arXiv:2312.12433},
year={2023}
}