- Test segmentation models and convert `checkpoint.pth` to `colon.onnx` (see `colon_seg_net_test.ipynb`)
How can choosing the right floating-point precision improve the performance of my application?
- See the workflow for converting your images to video and GXF format
- See the inference parameters (`backend` and `enable_fp16=True/False`) in colonoscopy_segmentation.py, with further descriptions in this README
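To see why `enable_fp16` matters: half precision halves tensor memory and bandwidth at the cost of roughly three significant decimal digits. A minimal NumPy illustration (independent of the inference API itself):

```python
import numpy as np

# A tensor shaped like the segmentation input (3 x 544 x 960).
x32 = np.random.rand(3, 544, 960).astype(np.float32)
x16 = x32.astype(np.float16)

# fp16 uses exactly half the memory of fp32.
print(x32.nbytes, x16.nbytes)

# But fp16 carries only ~3 significant decimal digits, so expect
# small rounding differences after conversion back to fp32.
max_err = np.abs(x32 - x16.astype(np.float32)).max()
print(max_err)
```

On GPUs with dedicated half-precision hardware, this memory saving usually translates into higher inference throughput as well.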
Models can also be optimised using `tao-converter`:
```bash
./tao-converter \
  -k tlt_encode \
  -d 3,544,960 \
  -w 40960M \
  -t fp16 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -e engine.trt \
  resnet34_peoplenet_pruned_int8.etlt
```
- Parameters in `tao-converter`:
  - `-k`: The key used to encode the `.tlt` model during training.
  - `-d`: Comma-separated list of input dimensions that should match the dimensions used for `tao <model> export`.
  - `-w`: Maximum workspace size for the TensorRT engine. The default value is 1073741824 (1<<30).
  - `-t`: Desired engine data type; generates a calibration cache if in INT8 mode. The default value is fp32. The options are {fp32, fp16, int8}.
  - `-o`: Comma-separated list of output blob names that should match the output configuration used for `tao <model> export`.
  - `-e`: Path to save the engine to (default: `./saved.engine`).
  - `input_file`: Positional argument: the `.etlt` model to convert.
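For ONNX models (rather than `.etlt`), TensorRT's bundled `trtexec` tool can build an fp16 engine directly; the filenames below are illustrative:

```shell
# Build a serialized TensorRT engine from the exported ONNX model.
# --fp16 enables half-precision kernels where the hardware supports them.
trtexec --onnx=colon.onnx \
        --saveEngine=colon_fp16.trt \
        --fp16
```

This requires a TensorRT installation on the target machine, since engines are built for (and tied to) the GPU they are generated on.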
- Mikael Brudfors introducing Clara AGX/IGX for intelligent Medical Instruments
- Q&As
- Keep hacking
- Prepare documentation and discuss results
- Create PRs
- Review and merge PRs
- Group photo
- TensorRT Quick Start Guide: https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#abstract