This document describes how to train SiamBAN.
First, add SiamBAN to your PYTHONPATH:
export PYTHONPATH=/path/to/siamban:$PYTHONPATH
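To confirm the path is set correctly, you can try importing the package (a quick sanity check; it assumes the repository's top-level Python package is named siamban):
python -c "import siamban"   # exits silently if PYTHONPATH is set correctly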
Prepare the training datasets; detailed instructions are listed in the training_dataset directory.
Download the pretrained backbones from here and put them in the pretrained_models directory.
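You can verify the backbone is in place (the file name below is an assumption; use whatever BACKBONE.PRETRAINED in config.yaml points to):
ls pretrained_models/        # expect e.g. resnet50.model (assumed name)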
To train a model, run train.py with the desired configs:
cd experiments/siamban_r50_l234
Refer to the PyTorch distributed training documentation for a detailed description.
export CUDA_VISIBLE_DEVICES=0,1,2
python -m torch.distributed.launch \
    --nproc_per_node=3 \
    --master_port=2333 \
    ../../tools/train.py --cfg config.yaml
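For a quick single-GPU run, the same entry point can be launched with one process (a sketch; it assumes train.py accepts the same arguments regardless of world size):
export CUDA_VISIBLE_DEVICES=0
python -m torch.distributed.launch \
    --nproc_per_node=1 \
    --master_port=2333 \
    ../../tools/train.py --cfg config.yaml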
After training, you can test the snapshots on the VOT dataset. For example, to test the snapshots from epoch 10 to epoch 20:
START=10
END=20
seq $START 1 $END | \
    xargs -I {} echo "snapshot/checkpoint_e{}.pth" | \
    xargs -I {} \
    python -u ../../tools/test.py \
        --snapshot {} \
        --config config.yaml \
        --dataset VOT2018 2>&1 | tee logs/test_dataset.log
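If the xargs pipeline is hard to follow, an equivalent plain loop does the same thing (a sketch under the same assumptions about snapshot naming):
mkdir -p logs
for e in $(seq $START $END); do
    python -u ../../tools/test.py \
        --snapshot "snapshot/checkpoint_e${e}.pth" \
        --config config.yaml \
        --dataset VOT2018
done 2>&1 | tee logs/test_dataset.log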
Or test several epochs in parallel with MPI:
mpiexec -n 3 python ../../tools/test_epochs.py \
    --start_epoch 10 \
    --end_epoch 20 \
    --gpu_nums 3 \
    --threads 3 \
    --dataset VOT2018
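The parallel runner goes through MPI, so an MPI implementation and the mpi4py package need to be available (an assumption based on the mpiexec invocation; skip this if your environment already provides them):
pip install mpi4py   # requires an MPI library such as Open MPI to be installed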
Then evaluate the tested snapshots. Note that the option comments are placed on their own lines, since a comment after a trailing backslash breaks shell line continuation:
# --tracker_path: path to the tracking results
# --dataset: dataset name
# --num: number of threads used for evaluation
# --tracker_prefix: tracker name prefix
python ../../tools/eval.py \
    --tracker_path ./results \
    --dataset VOT2018 \
    --num 4 \
    --tracker_prefix 'ch*'
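For reference, eval.py matches trackers by prefix under the results directory; with the commands above, the 'ch*' prefix picks up the checkpoint_e* results, so the layout looks roughly like this (an assumed sketch of how test.py names its outputs):
results/
└── VOT2018/
    ├── checkpoint_e10/
    ├── checkpoint_e11/
    └── ...              # one directory per tested snapshot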
You can also search for better tracking hyper-parameters. Note that the tuning toolkit runs indefinitely: it will not stop unless you stop it.
python ../../tools/tune.py \
    --dataset VOT2018 \
    --snapshot snapshot/checkpoint_e20.pth \
    --gpu_id 0
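Since the search runs until interrupted, you can stop it with Ctrl-C, or bound the run time up front with the standard timeout utility (a sketch; the 6h budget is arbitrary):
timeout 6h python ../../tools/tune.py \
    --dataset VOT2018 \
    --snapshot snapshot/checkpoint_e20.pth \
    --gpu_id 0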