From 9350658603e847ac4ad9f0a03070348c6568e24e Mon Sep 17 00:00:00 2001
From: Idan Ben Ami <109598548+Idan-BenAmi@users.noreply.github.com>
Date: Thu, 21 Mar 2024 22:13:07 +0200
Subject: [PATCH] MCT tutorials readme - fix broken links (#1010)
* Fixed broken links in MCT tutorials readme.
---
FAQ.md | 2 +-
README.md | 15 ++++++++-------
tutorials/README.md | 8 ++++----
tutorials/notebooks/IMX500_notebooks.md | 6 +++---
tutorials/notebooks/MCT_notebooks.md | 8 --------
5 files changed, 16 insertions(+), 23 deletions(-)
diff --git a/FAQ.md b/FAQ.md
index 6f6c9c441..c44e40e4f 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -36,7 +36,7 @@ quantized_model = mct.keras_load_quantized_model('my_model.keras')
#### PyTorch
-PyTorch models can be exported as onnx models. An example of loading a saved onnx model can be found [here](https://sony.github.io/model_optimization/api/api_docs/modules/exporter.html#use-exported-model-for-inference).
+PyTorch models can be exported as onnx models. An example of loading a saved onnx model can be found [here](https://github.com/sony/model_optimization/blob/main/docs/api/experimental_api_docs/modules/exporter.html#use-exported-model-for-inference).
*Note:* Running inference on an ONNX model in the `onnxruntime` package has a high latency.
Inference on the target platform (e.g. the IMX500) is not affected by this latency.
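
The FAQ entry touched above is about loading an exported ONNX model for inference. A minimal sketch with the standard `onnxruntime` API (the `model.onnx` path and the input shape are illustrative placeholders, not values from this patch):

```python
# Minimal sketch: run inference on an MCT-exported ONNX model with onnxruntime.
# 'model.onnx' and the (1, 3, 224, 224) input shape are illustrative placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name              # assumes a single-input model
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})  # None -> return all outputs
print(outputs[0].shape)
```

As the FAQ notes, this `onnxruntime` path is only for functional checks; latency on the target platform (e.g. the IMX500) is unaffected.
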
diff --git a/README.md b/README.md
index f20bc3531..02e59a144 100644
--- a/README.md
+++ b/README.md
@@ -38,15 +38,16 @@ For installing the nightly version or installing from source, refer to the [inst
### Quick start & tutorials
-For an example of how to use MCT with TensorFlow or PyTorch on various models and tasks,
-check out the [quick-start page](tutorials/quick_start/README.md) and
-the [results CSV](tutorials/quick_start/results/model_quantization_results.csv).
-
-In addition, a set of [notebooks](tutorials/notebooks) are provided for an easy start. For example:
-* [MobileNet with Tensorflow](tutorials/notebooks/keras/ptq/example_keras_mobilenet.py).
-* [MobileNetV2 with PyTorch](tutorials/notebooks/pytorch/ptq/example_pytorch_mobilenet_v2.py).
+Explore the Model Compression Toolkit (MCT) through our tutorials, which cover
+compression techniques for Keras and PyTorch models. Access the interactive [notebooks](tutorials/README.md)
+for hands-on learning. For example:
+* [Keras MobileNetV2 post training quantization](tutorials/notebooks/keras/ptq/example_keras_imagenet.ipynb)
+* [Post training quantization with PyTorch](tutorials/notebooks/pytorch/ptq/example_pytorch_quantization_mnist.ipynb)
* [Data Generation for ResNet18 with PyTorch](tutorials/notebooks/pytorch/data_generation/example_pytorch_data_generation.ipynb).
+Additionally, for quick quantization of a variety of models from well-known collections,
+see the [quick-start page](tutorials/quick_start/README.md) and the accompanying
+[results CSV](tutorials/quick_start/results/model_quantization_results.csv).
### Supported Versions
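
The quick-start text added above centers on post-training quantization. A minimal, hedged sketch of what the linked Keras PTQ notebooks walk through; the `mct.ptq.keras_post_training_quantization` call is an assumption based on recent MCT releases, so treat the notebooks as authoritative:

```python
# Hedged sketch of Keras post-training quantization with MCT.
# The mct.ptq.keras_post_training_quantization name is assumed from recent
# MCT releases; the linked notebooks are the authoritative reference.
import numpy as np
import model_compression_toolkit as mct
from tensorflow.keras.applications import MobileNetV2

float_model = MobileNetV2(weights='imagenet')

def representative_data_gen():
    # Replace with a few batches of real, preprocessed calibration images.
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
    float_model, representative_data_gen)
```
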
diff --git a/tutorials/README.md b/tutorials/README.md
index 42decc234..ddfe990c3 100644
--- a/tutorials/README.md
+++ b/tutorials/README.md
@@ -6,8 +6,8 @@ Access interactive Jupyter notebooks for hands-on learning.
## Getting started
Learn how to quickly quantize pre-trained models using MCT's post-training quantization technique for both Keras and PyTorch models.
-- [Keras MobileNetV2 post training quantization](notebooks/keras/ptq/example_keras_imagenet.ipynb)
-- [Pytorch MobileNetV2 post training quantization](notebooks/pytorch/ptq/example_pytorch_quantization_mnist.ipynb)
+- [Post training quantization with Keras](notebooks/keras/ptq/example_keras_imagenet.ipynb)
+- [Post training quantization with PyTorch](notebooks/pytorch/ptq/example_pytorch_quantization_mnist.ipynb)
## MCT Features
This set of tutorials covers all the quantization tools provided by MCT.
@@ -15,7 +15,7 @@ The notebooks in this section demonstrate how to configure and run simple and ad
This includes fine-tuning PTQ (Post-Training Quantization) configurations, exporting models,
and exploring advanced compression techniques.
These techniques are essential for further optimizing models and achieving superior performance in deployment scenarios.
-- [MCT notebooks](notebooks/MCT_notebooks.md)
+- [MCT Features notebooks](notebooks/MCT_notebooks.md)
## Quantization for Sony-IMX500 deployment
@@ -23,4 +23,4 @@ This section provides several guides on quantizing pre-trained models to meet sp
[Sony-IMX500](https://developer.sony.com/imx500/) processing platform.
We will cover various tasks and demonstrate the necessary steps to achieve efficient quantization for optimal
deployment performance.
-- [IMX500 notebooks](notebooks/IMX500_notebooks.md)
+- [MCT IMX500 notebooks](notebooks/IMX500_notebooks.md)
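
As a companion to the IMX500 section referenced above, a hedged PyTorch sketch of pointing PTQ at the IMX500 target platform capabilities; `mct.get_target_platform_capabilities('pytorch', 'imx500')` and the `mct.ptq.pytorch_post_training_quantization` call are assumptions based on recent MCT releases, and the IMX500 notebooks remain the source of truth:

```python
# Hedged sketch: PyTorch PTQ aimed at the Sony-IMX500 target platform.
# The TPC lookup and PTQ call below are assumptions from recent MCT releases;
# see the IMX500 notebooks for the exact, supported flow.
import torch
import model_compression_toolkit as mct
from torchvision.models import mobilenet_v2

float_model = mobilenet_v2(weights='DEFAULT')

def representative_data_gen():
    # Replace with preprocessed samples from the target dataset.
    for _ in range(10):
        yield [torch.randn(1, 3, 224, 224)]

tpc = mct.get_target_platform_capabilities('pytorch', 'imx500')
quantized_model, _ = mct.ptq.pytorch_post_training_quantization(
    float_model, representative_data_gen, target_platform_capabilities=tpc)
```
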
diff --git a/tutorials/notebooks/IMX500_notebooks.md b/tutorials/notebooks/IMX500_notebooks.md
index 2495497d3..ad914988c 100644
--- a/tutorials/notebooks/IMX500_notebooks.md
+++ b/tutorials/notebooks/IMX500_notebooks.md
@@ -6,7 +6,7 @@ deployment performance.
| Task | Model | Source Repository | Notebook |
|-----------------------------------------------------------------|----------------|---------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
- | Classification | MobileNetV2 | [Keras Applications](https://keras.io/api/applications/) | [Keras notebook](model_optimization/tutorials/notebooks/keras/ptq/example_keras_imagenet.ipynb) |
- | Object Detection | YOLOv8n | [Ultralytics](https://github.com/ultralytics/ultralytics) | [Keras notebook](model_optimization/tutorials/notebooks/keras/ptq/keras_yolov8n_for_imx500.ipynb) |
- | Semantic Segmentation | DeepLabV3-Plus | [bonlime's repo](https://github.com/bonlime/keras-deeplab-v3-plus) | [Keras notebook](model_optimization/tutorials/notebooks/keras/ptq/keras_deeplabv3plus_for_imx500.ipynb) |
+ | Classification | MobileNetV2 | [Keras Applications](https://keras.io/api/applications/) | [Keras notebook](keras/ptq/example_keras_imagenet.ipynb) |
+ | Object Detection | YOLOv8n | [Ultralytics](https://github.com/ultralytics/ultralytics) | [Keras notebook](keras/ptq/keras_yolov8n_for_imx500.ipynb) |
+ | Semantic Segmentation | DeepLabV3-Plus | [bonlime's repo](https://github.com/bonlime/keras-deeplab-v3-plus) | [Keras notebook](keras/ptq/keras_deeplabv3plus_for_imx500.ipynb) |
diff --git a/tutorials/notebooks/MCT_notebooks.md b/tutorials/notebooks/MCT_notebooks.md
index 44a4050ae..251194ad8 100644
--- a/tutorials/notebooks/MCT_notebooks.md
+++ b/tutorials/notebooks/MCT_notebooks.md
@@ -92,14 +92,6 @@ These techniques are essential for further optimizing models and achieving super
-
- Quantization-Aware Training (QAT)
-
- | Tutorial | Included Features |
- |-----------------------------------------------------------------------------------|--------------|
- | [QAT on MNIST](pytorch/qat/example_pytorch_qat.py) | ✅ QAT |
-
-