QuantLib is an open source quantization toolbox based on PyTorch.
- LSQ (ICLR'2020): Learned Step Size Quantization (see the sketch after this list)
- LSQ+ (CVPR'2020): LSQ+: Improving Low-bit Quantization through Learnable Offsets and Better Initialization
- DAQ (ICCV'2021): Distance-aware Quantization
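For reference, the core mechanism shared by these methods is a "fake quantizer" inserted into the forward pass, whose quantization parameters are learned by backpropagation. Below is a minimal sketch of an LSQ-style quantizer with a learnable step size; the class and parameter names are our own illustration, not QuantLib's actual API.

```python
import torch
import torch.nn as nn

class LSQQuantizer(nn.Module):
    """LSQ-style fake quantizer: the step size s is a learnable parameter
    trained jointly with the network weights (sketch, assumed names)."""

    def __init__(self, bits=8, signed=True):
        super().__init__()
        self.qmin = -(2 ** (bits - 1)) if signed else 0
        self.qmax = 2 ** (bits - 1) - 1 if signed else 2 ** bits - 1
        self.step = nn.Parameter(torch.tensor(1.0))  # learnable step size s

    def forward(self, x):
        # LSQ scales the step-size gradient by g = 1 / sqrt(numel * qmax).
        g = 1.0 / float(x.numel() * self.qmax) ** 0.5
        s = self.step * g + (self.step - self.step * g).detach()
        q = torch.clamp(x / s, self.qmin, self.qmax)
        q = q + (q.round() - q).detach()  # straight-through estimator for round()
        return q * s  # dequantize back to the float domain
```

LSQ+ extends this with a learnable offset for asymmetric activation distributions, and DAQ replaces hard rounding with a distance-aware soft assignment between values and quantization levels.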
- Python == 3.7
- PyTorch == 1.8.2
- Clone the GitHub repository.
$ git clone git@github.com:iimmortall/QuantLib.git
- Install dependencies
$ pip install -r requirements.txt
- Cifar-10
  - This is downloaded automatically when running our code; you can configure the save path in the '*.yaml' file (see the snippet after this list).
- ImageNet
  - This must be downloaded manually; it is available here.
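CIFAR-10 is available directly through torchvision, which is presumably what the auto-download uses; this standalone snippet demonstrates the behavior (the `root` path is an example; match it to the save path in your '*.yaml'):

```python
from torchvision import datasets, transforms

# download=True fetches CIFAR-10 into `root` on first use; later runs reuse it.
train_set = datasets.CIFAR10(
    root="./data/cifar10",   # illustrative path; use your configured save path
    train=True,
    download=True,
    transform=transforms.ToTensor(),
)
print(len(train_set))  # 50000 training images
```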
Cifar-10 dataset (ResNet-20 architecture)
- First, download the full-precision model into your folder (you can configure the model path in your *.yaml file). Link: [weights]
# DAQ: Cifar-10 & ResNet-20 W1A1 model
$ python run.py --config configs/daq/resnet20_daq_W1A1.yml
# DAQ: Cifar-10 & ResNet-20 W1A32 model
$ python run.py --config configs/daq/resnet20_daq_W1A32.yml
# LSQ: Cifar-10 & ResNet-20 W8A8 model
$ python run.py --config configs/lsq/resnet20_lsq_W8A8.yml
# LSQ+: Cifar-10 & ResNet-20 W8A8 model
$ python run.py --config configs/lsq_plus/resnet20_lsq_plus_W8A8.yml
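Each run is fully described by the YAML file passed via `--config`. To see which options a config exposes (dataset save path, full-precision model path, bit-widths, and so on) before launching, plain PyYAML is enough; this does not depend on QuantLib internals:

```python
import yaml  # pip install pyyaml

with open("configs/daq/resnet20_daq_W1A1.yml") as f:
    cfg = yaml.safe_load(f)

# Print the top-level options so you know what can be edited before a run.
for key, value in cfg.items():
    print(f"{key}: {value}")
```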
- Weight quantization: signed, symmetric, per-tensor. Symmetric quantization fixes the weight zero-point at zero, so dequantization is a single multiply by the step size.
- Activation quantization: unsigned, asymmetric (a sketch of such a quantizer follows this list).
- The first and last layers are not quantized.
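To make the weight/activation distinction concrete, here is a minimal sketch of an unsigned, asymmetric activation fake-quantizer with a learnable offset in the spirit of LSQ+; the names and initialization are our own assumptions, not QuantLib's API:

```python
import torch
import torch.nn as nn

class AsymmetricActQuantizer(nn.Module):
    """Unsigned, asymmetric activation fake quantizer with a learnable
    offset (LSQ+-style). Sketch under assumed names, not QuantLib's API."""

    def __init__(self, bits=8):
        super().__init__()
        self.qmax = 2 ** bits - 1                      # unsigned range [0, 2^bits - 1]
        self.step = nn.Parameter(torch.tensor(1.0))    # learnable scale s
        self.offset = nn.Parameter(torch.tensor(0.0))  # learnable offset beta

    def forward(self, x):
        q = torch.clamp((x - self.offset) / self.step, 0, self.qmax)
        q = q + (q.round() - q).detach()               # straight-through round
        return q * self.step + self.offset             # dequantize
```

Because the offset absorbs the shift in ReLU-like activation distributions, the unsigned grid covers the actual activation range instead of wasting codes on values that never occur.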
Method | Weight bits | Activation bits | Accuracy (%) | Model |
---|---|---|---|---|
float | - | - | 91.4 | download |
LSQ | 8 | 8 | 91.9 | download |
LSQ+ | 8 | 8 | 92.1 | download |
DAQ | 1 | 1 | 85.8 | download |
DAQ | 1 | 32 | 91.2 | download |
QuantLib is an open source project. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.
This project is released under the MIT license.