diff --git a/README.md b/README.md
index 9f25eefd94..931e3830ca 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@

 # Optimum Intel
-
+
 
 🤗 Optimum Intel is the interface between the 🤗 Transformers and Diffusers libraries and the different tools and libraries provided by Intel to accelerate end-to-end pipelines on Intel architectures.
 
 Intel [Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) is an open-source library enabling the use of the most popular compression techniques such as quantization, pruning and knowledge distillation. It supports automatic accuracy-driven tuning strategies so that users can easily generate quantized models. Users can apply static, dynamic and quantization-aware training approaches while specifying an expected accuracy criterion. It also supports different weight pruning techniques, enabling the creation of pruned models that meet a predefined sparsity target.
diff --git a/optimum/intel/openvino/configuration.py b/optimum/intel/openvino/configuration.py
index a45ee281f6..2d358250e1 100644
--- a/optimum/intel/openvino/configuration.py
+++ b/optimum/intel/openvino/configuration.py
@@ -70,7 +70,6 @@
             "{re}.*conv_.*",
         ],
     },
-    "overflow_fix": "disable",
 }
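
Note on the `configuration.py` hunk above: the `"overflow_fix": "disable"` entry is dropped from the default NNCF-style quantization dictionary, so NNCF's own default for that option takes effect. Below is a minimal sketch of how a user-supplied dictionary could still set the option explicitly; every key other than the `"{re}.*conv_.*"` ignored-scope pattern and `"overflow_fix"` is an illustrative assumption, and how such a dictionary is handed to the OpenVINO quantizer depends on the `OVConfig` API, which is not shown here.

```python
# Illustrative sketch only: keys other than "overflow_fix" and the
# "{re}.*conv_.*" ignored-scope pattern (both visible in the diff above)
# are assumptions, not the library's actual default configuration.
custom_quantization_config = {
    "algorithm": "quantization",   # assumed: NNCF compression algorithm name
    "ignored_scopes": [
        "{re}.*Embedding.*",       # assumed pattern, shown for illustration
        "{re}.*conv_.*",           # pattern taken from the hunk above
    ],
    # Removed from the library default by this diff; setting it in a
    # user-defined config opts back into the previous behavior.
    "overflow_fix": "disable",
}
```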