Deprecate weight_only in OVQuantizer (#475)
* Remove deprecated section from documentation

* deprecate weight_only in quantizer

* fix
echarlaix authored Nov 6, 2023
1 parent a1397e0 commit e9230ff
Showing 1 changed file with 4 additions and 1 deletion.
5 changes: 4 additions & 1 deletion optimum/intel/openvino/quantization.py
@@ -164,8 +164,11 @@ def quantize(

```python
        if save_directory is None:
            # TODO: can be set to self.model.config.name_or_path for OVModels when not provided
            raise ValueError("`save_directory` needs to be specified")

        if weights_only:
            logger.warning(
                "Applying weight-only quantization using the `OVQuantizer` is deprecated and will be removed in the next release of optimum-intel. "
                "To apply weight-only quantization, set `load_in_8bit=True` when loading your model with `from_pretrained()`, or use `--int8` when exporting your model with the CLI."
            )
            if calibration_dataset is not None:
                logger.warning(
                    "`calibration_dataset` was provided but will not be used as `weights_only` is set to `True`."
                )
```
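The change above follows a common deprecation pattern: keep the old argument working for now, but emit a warning that points users at the replacement. A minimal, self-contained sketch of that pattern, using only the standard library (this `quantize` function is an illustrative stand-in, not the real `OVQuantizer.quantize` API):

```python
import logging

logger = logging.getLogger(__name__)


def quantize(save_directory=None, weights_only=False, calibration_dataset=None):
    # Simplified stand-in mirroring the checks added in the diff.
    if save_directory is None:
        raise ValueError("`save_directory` needs to be specified")

    if weights_only:
        # Deprecated path: warn, but keep working until the next release.
        logger.warning(
            "Applying weight-only quantization using the `OVQuantizer` is deprecated. "
            "Set `load_in_8bit=True` in `from_pretrained()`, or use `--int8` with the CLI."
        )
        if calibration_dataset is not None:
            # A calibration dataset is only needed for full (activation) quantization.
            logger.warning(
                "`calibration_dataset` was provided but will not be used as `weights_only` is set to `True`."
            )
```

Calling the sketch with `weights_only=True` and a calibration dataset emits both warnings, while omitting `save_directory` raises the same `ValueError` as the real method.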
