Commit
update STARK-Lightning README
MasterBin-IIAU committed Jul 24, 2021
1 parent d1136fd commit 524013e
Showing 4 changed files with 31 additions and 5 deletions.
2 changes: 0 additions & 2 deletions README.md
@@ -96,8 +96,6 @@ python tracking/train.py --script stark_st2 --config baseline --save_dir . --mod
# STARK-ST101
python tracking/train.py --script stark_st1 --config baseline_R101 --save_dir . --mode multiple --nproc_per_node 8 # STARK-ST101 Stage1
python tracking/train.py --script stark_st2 --config baseline_R101 --save_dir . --mode multiple --nproc_per_node 8 --script_prv stark_st1 --config_prv baseline_R101 # STARK-ST101 Stage2
- # STARK-Lightning
- python tracking/train.py --script stark_lightning_X_trt --config baseline_rephead_4_lite_search5 --save_dir . --mode multiple --nproc_per_node 8   # STARK-Lightning
```
(Optionally) Debugging training with a single GPU
```
2 changes: 1 addition & 1 deletion install_pytorch17.sh
@@ -85,7 +85,7 @@ pip install git+https://github.com/votchallenge/vot-toolkit-python
echo ""
echo ""
echo "****************** Installing onnx and onnxruntime-gpu ******************"
- pip install onnx onnxruntime-gpu==1.5.1
+ pip install onnx onnxruntime-gpu==1.6.0

echo ""
echo ""
16 changes: 15 additions & 1 deletion lib/tutorials/STARK_Lightning_Ch.md
@@ -1,6 +1,20 @@
# STARK-Lightning Tutorial (Chinese)
**Introduction** [ONNXRUNTIME](https://github.com/microsoft/onnxruntime) is an open-source library from Microsoft for accelerating network inference. In this tutorial we will show how to export the trained model to the ONNX format
- and use ONNXRUNTIME to further accelerate inference. The accelerated STARK-Lightning can run at 200~300 FPS on an RTX TITAN! Let's get started.
+ and use ONNXRUNTIME to further accelerate inference. The accelerated STARK-Lightning can run at 200+ FPS on an RTX TITAN! Let's get started.
## STARK-Lightning vs. Other Trackers
| Tracker | LaSOT (AUC)| Speed (FPS) | Params (MB)|
|---|---|---|---|
|**STARK-Lightning**|**58.2**|**~200**|**8.2**|
|DiMP50|56.8|~50|165|
|DaSiamRPN|41.5|~200|362|
|SiamFC|33.6|~100|8.9|
STARK-Lightning outperforms DiMP50, runs as fast as DaSiamRPN :zap: , and has an even smaller model size than SiamFC!
## (Optional) Train STARK-Lightning
Run the following command to train in parallel on 8 GPUs
```
python tracking/train.py --script stark_lightning_X_trt --config baseline_rephead_4_lite_search5 --save_dir . --mode multiple --nproc_per_node 8
```
Since STARK-Lightning trains quickly and needs very little GPU memory, you can also consider training with 2 or 4 GPUs by adjusting ```nproc_per_node``` accordingly.
## Install onnx and onnxruntime
If you want to run inference with onnxruntime on the GPU
```
16 changes: 15 additions & 1 deletion lib/tutorials/STARK_Lightning_En.md
@@ -1,6 +1,20 @@
# STARK-Lightning Tutorial
**Introduction** [ONNXRUNTIME](https://github.com/microsoft/onnxruntime) is an open-source library by Microsoft for network inference acceleration. In this tutorial, we will show how to export the trained model to ONNX format
- and use ONNXRUNTIME to further accelerate the inference. The accelerated STARK-Lightning can run at 200~300 FPS on an RTX TITAN GPU! Let's get started.
+ and use ONNXRUNTIME to further accelerate the inference. The accelerated STARK-Lightning can run at 200+ FPS on an RTX TITAN GPU! Let's get started.
## STARK-Lightning vs. Other Trackers
| Tracker | LaSOT (AUC)| Speed (FPS) | Params (MB)|
|---|---|---|---|
|**STARK-Lightning**|**58.2**|**~200**|**8.2**|
|DiMP50|56.8|~50|165|
|DaSiamRPN|41.5|~200|362|
|SiamFC|33.6|~100|8.9|
STARK-Lightning achieves better performance than DiMP50, runs as fast as DaSiamRPN, and has a smaller model size than SiamFC!
## (Optional) Train STARK-Lightning
Train STARK-Lightning with 8 GPUs with the following command
```
python tracking/train.py --script stark_lightning_X_trt --config baseline_rephead_4_lite_search5 --save_dir . --mode multiple --nproc_per_node 8
```
Since the training of STARK-Lightning is fast and memory-friendly, you can also train it with fewer GPUs (such as 2 or 4) by setting ```nproc_per_node``` accordingly.
## Install onnx and onnxruntime
For inference on a GPU
```
