
What's the difference compared to NVIDIA-AI-IOT / torch2trt? #883

Answered by narendasan
boba-milk-tea asked this question in Q&A

Torch-TensorRT is designed to be a robust path from PyTorch and TorchScript to TensorRT, supporting both C++ (via LibTorch) and Python (via PyTorch).

Under the hood, Torch-TensorRT compiles standalone TorchScript code (no Python dependency) to TensorRT and wraps it in a module, whereas torch2trt monkey-patches PyTorch Python functions to emit TensorRT layers as they run, and uses that trace to construct the engine, which is returned as a module.
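
A minimal sketch of the two workflows (assuming `torch_tensorrt` and `torch2trt` are installed and a GPU is available; the model and input shape are just illustrative):

```python
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

# Torch-TensorRT: ahead-of-time compilation of the (scripted/traced) module
# into a TensorRT-backed TorchScript module.
import torch_tensorrt
trt_ts_module = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},
)
print(trt_ts_module(x).shape)

# torch2trt: runs the module once with PyTorch ops monkey-patched to emit
# TensorRT layers, then returns a TRTModule wrapping the built engine.
from torch2trt import torch2trt
model_trt = torch2trt(model, [x])
print(model_trt(x).shape)
```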

In terms of advantages, Torch-TensorRT aims to be easy to use, to support advanced TensorRT features such as quantization, and to give you many different deployment options (C++ without Python, for example). It also has the ability to automatically segment your m…
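
As a hedged illustration of the deployment point, continuing the sketch above (the file name is illustrative): the compiled module can be serialized as ordinary TorchScript and later loaded from C++ via `torch::jit::load` with the Torch-TensorRT runtime linked, with no Python interpreter involved.

```python
import torch

# Serialize the TensorRT-backed TorchScript module produced above. The saved
# file embeds the TensorRT engine and can be deployed without Python, e.g.
# loaded in a C++ application through LibTorch.
torch.jit.save(trt_ts_module, "trt_resnet18.ts")

# It can also be reloaded from Python like any other TorchScript module,
# as long as the Torch-TensorRT runtime ops are registered first
# (i.e. `import torch_tensorrt` before loading).
reloaded = torch.jit.load("trt_resnet18.ts").cuda()
```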

Answer selected by ncomly-nvidia