- Used the `pixelformer` conda environment, which has CUDA-enabled torch.
- Used the system-wide TensorRT installation (TensorRT could not be installed inside the conda env) from within the environment via the line below. Run it in every new terminal, otherwise TensorRT will not be visible from the conda env (see the import check below):
  `export PYTHONPATH="/usr/lib/python3.8/dist-packages:$PYTHONPATH"`
- Make sure to use `python3` instead of just `python` when running the scripts.
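  A quick way to confirm the export worked, from inside the conda env (a minimal check; the dist-packages path above is JetPack's Python 3.8 location):

  ```python
  # Sanity check: the system-wide TensorRT should now be importable
  # from inside the conda env (only after PYTHONPATH is exported as above).
  import tensorrt as trt
  print(trt.__version__)
  ```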
- Use `1.pytorch_to_onnx.py` to save the PyTorch model in ONNX format. (I did this on vision04 itself; on the Jetson it was not working for me, i.e. the model was not giving the right output, so I used the `torch.onnx.export()` function in my `eval.py` file on vision04; see the sketch below.)
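  A minimal sketch of that export call, assuming the NYUv2 input size (480x640); the stand-in model, file name, and opset are placeholders, not the exact ones from `eval.py`:

  ```python
  import torch

  # Stand-in for the PixelFormer model (placeholder so the sketch runs);
  # in eval.py this is the trained network loaded from a checkpoint.
  model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1).eval()

  dummy = torch.randn(1, 3, 480, 640)  # NYUv2-sized input (assumption)

  torch.onnx.export(
      model,
      dummy,
      "pixelformer.onnx",    # output file name (placeholder)
      input_names=["image"],
      output_names=["depth"],
      opset_version=13,      # assumption: any opset TensorRT's parser supports
      do_constant_folding=True,
  )
  ```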
- Use `2.onnx_runtime.py` to run inference from the ONNX file (just as a sanity check that the model gives the right output).
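  ONNX Runtime inference boils down to something like this (a sketch; the input name and shape follow the export sketch above, not necessarily the actual script):

  ```python
  import numpy as np
  import onnxruntime as ort

  sess = ort.InferenceSession(
      "pixelformer.onnx",
      providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
  )

  # Stand-in for a preprocessed image; compare the output against PyTorch's
  # for the same input to confirm the exported model behaves correctly.
  img = np.random.rand(1, 3, 480, 640).astype(np.float32)
  depth = sess.run(None, {"image": img})[0]
  print(depth.shape)
  ```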
- Use `3.onnx_to_tensorRT.py` to convert the ONNX model to a TensorRT engine and to run inference with the saved engine.
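  The conversion presumably follows the standard TensorRT Python flow; a minimal sketch (API as in TensorRT 8.x, which ships with JetPack 5; the file names and the FP16 flag are assumptions):

  ```python
  import tensorrt as trt

  logger = trt.Logger(trt.Logger.WARNING)
  builder = trt.Builder(logger)
  network = builder.create_network(
      1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
  parser = trt.OnnxParser(network, logger)

  with open("pixelformer.onnx", "rb") as f:
      if not parser.parse(f.read()):
          for i in range(parser.num_errors):
              print(parser.get_error(i))
          raise RuntimeError("failed to parse the ONNX model")

  config = builder.create_builder_config()
  # 1 GiB workspace; older TensorRT (<8.4) uses config.max_workspace_size instead.
  config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
  if builder.platform_has_fast_fp16:
      config.set_flag(trt.BuilderFlag.FP16)  # FP16 is an assumption, not confirmed

  engine_bytes = builder.build_serialized_network(network, config)
  with open("pixelformer.trt", "wb") as f:
      f.write(engine_bytes)
  ```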
- Run `python3 live_demo_with_homography_single_screen.py` from the `pixelformer` conda environment. Before that, to make TensorRT visible, execute what is inside the `run_global_tensorrt_in_conda_env.sh` file.
Inference speed:

| Device | Dataset | PyTorch Model | ONNX Model | TensorRT Engine | I/O Demo (with Preprocessing) |
|---|---|---|---|---|---|
| Jetson AGX Orin | NYUv2 | 2 FPS | 2.46 s/img | 0.25 s/img [6.5 FPS] | 4.5 FPS |
| Jetson AGX Orin | KITTI | 1.63 FPS (3.33 FPS with the image resized to half) | - | 0.28 s/img [5 FPS] | 2.7 FPS |
| NVIDIA A100 GPU | NYUv2 | 6.57 FPS | - | Nil | Nil |
| NVIDIA A100 GPU | KITTI | 5.66 FPS | - | 7.2 FPS | 4 FPS |
Size of Images:
- NYUv2 images are of size (480, 640).
- KITTI images are of size (352, 1216) [kb_crop-ed from the original size (375, 1242)].
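For reference, a minimal sketch of the kb_crop mentioned above, assuming the standard BTS-style KITTI crop (bottom 352 rows, centered 1216 columns) on an HxWxC numpy image:

```python
def kb_crop(image):
    # image: HxWxC array, e.g. (375, 1242, 3) for a raw KITTI frame
    height, width = image.shape[:2]
    top = height - 352            # keep the bottom 352 rows (assumption)
    left = (width - 1216) // 2    # centered 1216-column window (assumption)
    return image[top:top + 352, left:left + 1216]
```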