forked from researchmm/Stark
add complete code for STARK-Lightning
1 parent 169a068 · commit 75c05a9
Showing 14 changed files with 91 additions and 463 deletions.
```
@@ -1,10 +1,5 @@
from .base_actor import BaseActor
from .stark_s import STARKSActor
from .stark_st import STARKSTActor
from .stark_s_plus import STARKSPLUSActor
from .stark_s_plus_sp import STARKSPLUSSPActor
from .stark_st_plus_sp import STARKSTPLUSSPActor
from .stark_st_plus_sp_debug import STARKSTPLUSSPActor_debug
from .stark_lightningX import STARKLightningXActor
from .stark_lightningXtrt import STARKLightningXtrtActor
from .stark_lightningXtrt_distill import STARKLightningXtrtdistillActor
```
# STARK-Lightning Tutorial (Chinese)
**Introduction**: [ONNXRUNTIME](https://github.com/microsoft/onnxruntime) is an open-source library from Microsoft for accelerating network inference. In this tutorial we show how to export the trained model to ONNX format and use ONNXRUNTIME to further speed up inference. The accelerated STARK-Lightning runs at 200~300 FPS on an RTX TITAN! Let's get started.
## Install onnx and onnxruntime
For inference with onnxruntime on GPU:
```
pip install onnx onnxruntime-gpu==1.6.0
```
- The onnxruntime-gpu version must match the CUDA and CUDNN versions on your machine; for the compatibility table, see https://www.onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html. On my machine the CUDA version is 10.2 and the CUDNN version is 8.0.3, so I installed onnxruntime-gpu==1.6.0.

For inference on CPU only:
```
pip install onnx onnxruntime
```
## ONNX Conversion and Inference Test
Download the trained PyTorch checkpoint [STARK_Lightning](https://drive.google.com/file/d/18xxbMKCjWi6Gvn5T4o2w5jIbwd3AWN55/view?usp=sharing)

Convert the trained PyTorch model to onnx format and test it with onnxruntime:
```
python tracking/ORT_lightning_X_trt_backbone_bottleneck_pe.py  # for the template branch
python tracking/ORT_lightning_X_trt_complete.py  # for the search region branch
```
- The conversion runs fine in a terminal, but in PyCharm it fails with a "libcudnn8.so not found" error, so run these two commands in a terminal.

Evaluate the converted model on LaSOT (multi-GPU inference is supported):
- First set use_onnx = True in lib/test/tracker/stark_lightning_X_trt.py, then run
```
python tracking/test.py stark_lightning_X_trt baseline_rephead_4_lite_search5 --threads 8 --num_gpus 2
```
where num_gpus is the number of GPUs to use and threads is the number of processes; we usually set threads to four times num_gpus.
To run the videos one at a time, use
```
python tracking/test.py stark_lightning_X_trt baseline_rephead_4_lite_search5 --threads 0 --num_gpus 1
```
- Evaluate the tracking metrics
```
python tracking/analysis_results_ITP.py --script stark_lightning_X_trt --config baseline_rephead_4_lite_search5
```
# STARK-Lightning Tutorial
**Introduction**: [ONNXRUNTIME](https://github.com/microsoft/onnxruntime) is an open-source library from Microsoft for network inference acceleration. In this tutorial, we show how to export the trained model to ONNX format and use ONNXRUNTIME to further accelerate inference. The accelerated STARK-Lightning can run at 200~300 FPS on an RTX TITAN GPU! Let's get started.
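FPS figures like this depend heavily on warm-up and timing methodology. A generic timing helper can make the measurement repeatable (a sketch; `run` is a placeholder for a closure that calls the real tracker, e.g. a `session.run(...)` on one frame):

```python
import time

def measure_fps(run, n_warmup=10, n_iters=100):
    """Return calls per second for `run`, excluding warm-up iterations."""
    for _ in range(n_warmup):
        run()  # warm-up: first calls often pay one-time allocation/JIT costs
    t0 = time.perf_counter()
    for _ in range(n_iters):
        run()
    return n_iters / (time.perf_counter() - t0)
```

Averaging over many iterations after a warm-up phase is what makes numbers such as "200~300 FPS" comparable across runs.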
## Install onnx and onnxruntime
For inference on GPU:
```
pip install onnx onnxruntime-gpu==1.6.0
```
- The onnxruntime-gpu version needs to be compatible with the CUDA and CUDNN versions on the machine. For details, please refer to https://www.onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html. For example, on my computer the CUDA version is 10.2 and the CUDNN version is 8.0.3, so I chose onnxruntime-gpu==1.6.0.

For inference on CPU only:
```
pip install onnx onnxruntime
```
## ONNX Conversion and Inference
Download the trained PyTorch checkpoints [STARK_Lightning](https://drive.google.com/file/d/18xxbMKCjWi6Gvn5T4o2w5jIbwd3AWN55/view?usp=sharing)

Export the trained PyTorch model to onnx format, then test it with onnxruntime
```
python tracking/ORT_lightning_X_trt_backbone_bottleneck_pe.py  # for the template branch
python tracking/ORT_lightning_X_trt_complete.py  # for the search region branch
```
- The conversion runs successfully in a terminal, but fails with a "libcudnn8.so is not found" error when run from PyCharm, so please run these two commands in a terminal.

Evaluate the converted onnx model on LaSOT (multi-GPU inference is supported).
- Set ```use_onnx = True``` in lib/test/tracker/stark_lightning_X_trt.py, then run
```
python tracking/test.py stark_lightning_X_trt baseline_rephead_4_lite_search5 --threads 8 --num_gpus 2
```
```num_gpus``` is the number of GPUs to use and ```threads``` is the number of processes; we usually set ```threads``` to four times ```num_gpus```.
If you want to run the sequences one by one, run
```
python tracking/test.py stark_lightning_X_trt baseline_rephead_4_lite_search5 --threads 0 --num_gpus 1
```
- Evaluate the tracking results
```
python tracking/analysis_results_ITP.py --script stark_lightning_X_trt --config baseline_rephead_4_lite_search5
```