From 36f44d39873c459cf2b9e8e251486100451a2756 Mon Sep 17 00:00:00 2001
From: Elad Cohen <78862769+elad-c@users.noreply.github.com>
Date: Sun, 24 Mar 2024 09:51:18 +0200
Subject: [PATCH] replace pytorch exporter example link (#1012)

---
 FAQ.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/FAQ.md b/FAQ.md
index c44e40e4f..01f3e33f0 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -36,7 +36,7 @@ quantized_model = mct.keras_load_quantized_model('my_model.keras')
 
 #### PyTorch
 
-PyTorch models can be exported as onnx models. An example of loading a saved onnx model can be found [here](https://github.com/sony/model_optimization/blob/main/docs/api/experimental_api_docs/modules/exporter.html#use-exported-model-for-inference).
+PyTorch models can be exported as onnx models. An example of loading a saved onnx model can be found [here](https://github.com/sony/model_optimization/blob/main/tutorials/notebooks/pytorch/export/example_pytorch_export.ipynb).
 
 *Note:* Running inference on an ONNX model in the `onnxruntime` package has a high latency. Inference on the target platform (e.g. the IMX500) is not affected by this latency.
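
The notebook linked by this patch covers MCT's PyTorch exporter end to end; the "loading a saved onnx model" step it points to reduces to standard `onnxruntime` calls. A minimal sketch of that loading step, assuming a model has already been exported to `my_model.onnx` and accepts a 1x3x224x224 float32 input (the file name and input shape are illustrative, not taken from the notebook):

```python
# Minimal sketch: run inference on an exported ONNX model with onnxruntime.
# 'my_model.onnx' and the input shape below are illustrative assumptions.
import numpy as np
import onnxruntime as ort

# Create an inference session; CPUExecutionProvider keeps the example portable.
session = ort.InferenceSession('my_model.onnx',
                               providers=['CPUExecutionProvider'])

# Build a dummy input matching the model's expected name, shape, and dtype.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# run(None, ...) returns all model outputs as a list of numpy arrays.
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```

As the note in the patched FAQ text warns, this path is useful for validating exported models but carries high latency in `onnxruntime` itself; that latency does not reflect inference on the target platform (e.g. the IMX500).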