This model predicts handwritten digits using a convolutional neural network (CNN).
Model | Download | Download (with sample test data) | ONNX version | Opset version | TOP-1 ERROR |
---|---|---|---|---|---|
MNIST | 27 kB | 26 kB | 1.0 | 1 | 1.1% |
MNIST | 26 kB | 26 kB | 1.2 | 7 | 1.1% |
MNIST | 26 kB | 26 kB | 1.3 | 8 | 1.1% |
MNIST-12 | 26 kB | 26 kB | 1.9 | 12 | 1.1% |
MNIST-12-int8 | 11 kB | 10 kB | 1.9 | 12 | 1.1% |
The model has been trained on the popular MNIST dataset.
The model is trained in CNTK following the tutorial CNTK 103D: Convolutional Neural Network with MNIST. Note that the specific architecture used is the model with alternating convolution and max pooling layers (found under the "Solution" section at the end of the tutorial).
Run MNIST in the browser - implemented with ONNX.js using MNIST version 1.2
We used CNTK as the framework to perform inference. A brief description of the inference process is provided below:
The input tensor has shape (1x1x28x28) and type float32.
The model processes one image at a time and does not support mini-batches.
Images are resized to (28x28) in grayscale, with a black background and a white foreground (the digit should be white). Pixel values are scaled to [0.0, 1.0].
Example:
```python
import numpy as np
import cv2

# Load the image, convert to grayscale, resize, and scale to [0.0, 1.0]
image = cv2.imread('input.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, (28, 28)).astype(np.float32) / 255
input_tensor = np.reshape(gray, (1, 1, 28, 28))  # expected input shape (1x1x28x28)
```
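The preprocessed tensor can then be fed to the model. Below is a minimal sketch using ONNX Runtime; the model file name `mnist-12.onnx` and the variable `input_tensor` from the preprocessing example are assumptions:

```python
import onnxruntime as ort

# Minimal sketch: run inference with ONNX Runtime (model file name is an assumption)
session = ort.InferenceSession('mnist-12.onnx')
input_name = session.get_inputs()[0].name
logits = session.run(None, {input_name: input_tensor})[0]  # raw scores, shape (1, 10)
```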
The output is the likelihood of each digit before softmax, with shape (1x10).
Route the model output through a softmax function to map the aggregated activations across the network to probabilities across the 10 classes.
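A minimal sketch of this postprocessing step, assuming `logits` holds the raw (1x10) output as in the inference sketch above:

```python
import numpy as np

def softmax(x):
    # Shift by the max for numerical stability, then normalize to probabilities
    e = np.exp(x - np.max(x))
    return e / e.sum()

probabilities = softmax(logits[0])               # shape (10,)
predicted_digit = int(np.argmax(probabilities))  # class with the highest probability
```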
Sets of sample input and output files are provided as serialized protobuf TensorProtos (.pb), stored in the folders test_data_set_*/.
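A minimal sketch for reading one of these protobuf files into a NumPy array with the onnx package; the exact file path is an assumption based on the folder naming above:

```python
import onnx
from onnx import numpy_helper

# Load a serialized TensorProto and convert it to a NumPy array
tensor = onnx.TensorProto()
with open('test_data_set_0/input_0.pb', 'rb') as f:  # path is an assumption
    tensor.ParseFromString(f.read())
sample_input = numpy_helper.to_array(tensor)
```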
MNIST-12-int8 is obtained by quantizing the MNIST-12 model. We use Intel® Neural Compressor with the onnxruntime backend to perform quantization. View the instructions to understand how to use Intel® Neural Compressor for quantization.
onnx: 1.9.0, onnxruntime: 1.10.0
```bash
wget https://github.com/onnx/models/raw/main/vision/classification/mnist/model/mnist-12.onnx
```
```bash
# --input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --config=mnist.yaml \
                   --output_model=path/to/save
```
- mengniwang95 (Intel)
- airMeng (Intel)
- ftian1 (Intel)
- hshen14 (Intel)
MIT