The MCT Quantizers library is an open-source library developed by researchers and engineers working at Sony Semiconductor Israel.
It provides tools for easily representing a quantized neural network in both Keras and PyTorch. The library offers researchers, developers, and engineers a set of useful quantizers, along with a simple interface for implementing new custom quantizers.
The library's quantizers interface consists of two main components:
QuantizationWrapper
: This object takes a layer with weights and a set of weight quantizers to infer a quantized layer.

ActivationQuantizationHolder
: An object that holds an activation quantizer to be used during inference.
Users can set the quantizers and all the quantization information for each layer by initializing the weights_quantizer and activation_quantizer APIs.
Please note that the quantization wrapper and the quantizers are framework-specific.
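For orientation, here is a minimal sketch of how a wrapped layer and an activation holder might be assembled in Keras. The class names, import paths, and constructor arguments used below (for example, WeightsSymmetricInferableQuantizer with threshold and per_channel) are assumptions inferred from the description above, not verbatim library documentation; verify them against your installed mct_quantizers version.

# Illustrative sketch only: class names, import paths, and constructor
# arguments are assumptions; check the installed mct_quantizers package.
import tensorflow as tf
from mct_quantizers import KerasQuantizationWrapper, KerasActivationQuantizationHolder
from mct_quantizers import WeightsSymmetricInferableQuantizer, ActivationSymmetricInferableQuantizer

# A weight quantizer for the Dense layer's kernel (assumed arguments).
kernel_quantizer = WeightsSymmetricInferableQuantizer(num_bits=8, threshold=[2.0], per_channel=False)

# Wrap the layer that has weights together with its weight quantizers.
wrapped_dense = KerasQuantizationWrapper(tf.keras.layers.Dense(10),
                                         weights_quantizers={'kernel': kernel_quantizer})

# Hold an activation quantizer to be applied during inference.
activation_holder = KerasActivationQuantizationHolder(
    ActivationSymmetricInferableQuantizer(num_bits=8, threshold=[4.0], signed=True))

# Both objects behave like regular Keras layers and can be placed in a model.
model = tf.keras.Sequential([wrapped_dense, activation_holder])

A PyTorch model would follow the same pattern with the corresponding PyTorch wrapper and holder classes.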
The library provides the "Inferable Quantizer" interface for implementing new quantizers.
This interface is based on the BaseInferableQuantizer class, which allows the definition of quantizers used for emulating inference-time quantization. On top of BaseInferableQuantizer, the library defines a set of framework-specific quantizers for both weights and activations.
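To give a feel for what an inferable quantizer does, the sketch below applies an activation quantizer directly to a tensor in PyTorch. The quantizer name, import path, and arguments are assumptions based on the description above; the framework-specific quantizers may live under a dedicated subpackage in your installed version.

# Illustrative sketch only: quantizer name, import path, and arguments are
# assumptions; consult the package for the quantizers it actually ships.
import torch
from mct_quantizers import ActivationUniformInferableQuantizer

# An inferable quantizer emulating 8-bit uniform quantization of activations
# over a fixed [0, 6] range at inference time.
act_quantizer = ActivationUniformInferableQuantizer(num_bits=8, min_range=[0.0], max_range=[6.0])

x = torch.randn(1, 16)
x_q = act_quantizer(x)  # fake-quantized activations, same shape as x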
The @mark_quantizer decorator is used to assign each quantizer static properties that define its task compatibility. Each quantizer class should be decorated with this decorator, which defines the following properties (see the sketch after the list):
QuantizationTarget
: An Enum that indicates whether the quantizer is intended for weights or activations quantization.

QuantizationMethod
: A list of quantization methods (Uniform, Symmetric, etc.).

identifier
: A unique identifier for the quantizer class. This is a helper property that allows the creation of advanced quantizers for specific tasks.
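The following is a hedged sketch of a custom weights quantizer decorated with @mark_quantizer. The decorator keyword names, the base-class import path, and the identifier value are assumptions drawn from the property descriptions above, not the library's exact API.

# Illustrative sketch only: keyword names, import paths, and the base class
# used here are assumptions based on the properties described above.
from mct_quantizers import mark_quantizer, QuantizationTarget, QuantizationMethod
from mct_quantizers.common.base_inferable_quantizer import BaseInferableQuantizer  # assumed path

@mark_quantizer(quantization_target=QuantizationTarget.Weights,
                quantization_method=[QuantizationMethod.SYMMETRIC],
                identifier='my_custom_weights_quantizer')  # assumed identifier value
class MyWeightsQuantizer(BaseInferableQuantizer):
    """A custom inferable quantizer that emulates inference-time weights quantization."""

    def __init__(self, num_bits, threshold):
        super().__init__()
        self.num_bits = num_bits
        self.threshold = threshold

    def __call__(self, weights):
        # Implement the fake-quantization of the weights tensor here.
        raise NotImplementedError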
This section provides a quick guide to getting started. We begin with the installation process, either from the source code or from PyPI. Then, we provide a short usage example.
To install the latest stable release of MCT Quantizers from PyPI, run the following command:
pip install mct-quantizers
If you prefer to use the nightly package (unstable version), you can install it with the following command:
pip install mct-quantizers-nightly
To work with the MCT Quantizers source code, follow these steps:
git clone https://github.com/sony/mct_quantizers.git
cd mct_quantizers
python setup.py install
To use MCT Quantizers, you need to have one of the supported frameworks, TensorFlow or PyTorch, installed.
For use with TensorFlow, please install the following package: tensorflow.
For use with PyTorch, please install the following package: torch.
You can also use the requirements file to set up your environment.