From f4ec21570bb94ad08983e10ce4cc92672f514a48 Mon Sep 17 00:00:00 2001
From: Alexander
Date: Mon, 12 Feb 2024 17:18:49 +0400
Subject: [PATCH 1/4] Updated docs with load_in_4bit

---
 docs/source/optimization_ov.mdx | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/docs/source/optimization_ov.mdx b/docs/source/optimization_ov.mdx
index 09986961ba..f378076433 100644
--- a/docs/source/optimization_ov.mdx
+++ b/docs/source/optimization_ov.mdx
@@ -74,21 +74,22 @@ model = OVModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)

 > **NOTE:** `load_in_8bit` is enabled by default for models larger than 1 billion parameters.

-For the 4-bit weight quantization we recommend using the NNCF API like below:
+For the 4-bit weight quantization you can use `load_in_4bit` option. The `quantization_config` can be used to controll the optimization parameters, for example:
+
 ```python
-from optimum.intel import OVModelForCausalLM
+from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
 import nncf

-model = OVModelForCausalLM.from_pretrained(model_id, load_in_8bit=False)
-model.model = nncf.compress_weights(
-    model.model,
-    mode=nncf.CompressWeightsMode.INT4_SYM,
-    ratio=0.8,
-    group_size=128,
-    )
-model.save_pretrained("compressed_model")
+model = OVModelForCausalLM.from_pretrained(
+    model_id,
+    export=True,
+    load_in_4bit=True,
+    quantization_config=OVWeightQuantizationConfig(mode=nncf.CompressWeightsMode.INT4_ASYM, ratio=0.8, dataset="ptb"),
+)
 ```

+> **NOTE:** if `load_in_4bit` is used without `quantization_config` provided, a pre-defined `model_id` specific configuration is used in case it exists or a default 4-bit configuration is used otherwise.
+
 For more details, please refer to the corresponding NNCF [documentation](https://github.com/openvinotoolkit/nncf/blob/develop/docs/compression_algorithms/CompressWeights.md).

From 11857b600dbf9c1a3ebb4e16ae41e8e30dc5f176 Mon Sep 17 00:00:00 2001
From: Ella Charlaix
Date: Fri, 16 Feb 2024 10:22:11 +0100
Subject: [PATCH 2/4] Update documentation

---
 docs/source/optimization_ov.mdx | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/docs/source/optimization_ov.mdx b/docs/source/optimization_ov.mdx
index f378076433..eb184c8e87 100644
--- a/docs/source/optimization_ov.mdx
+++ b/docs/source/optimization_ov.mdx
@@ -74,17 +74,15 @@ model = OVModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)

 > **NOTE:** `load_in_8bit` is enabled by default for models larger than 1 billion parameters.

-For the 4-bit weight quantization you can use `load_in_4bit` option. The `quantization_config` can be used to controll the optimization parameters, for example:
+For the 4-bit weight quantization you can use yhe `quantization_config` to specify the optimization parameters, for example:

 ```python
 from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
-import nncf

 model = OVModelForCausalLM.from_pretrained(
     model_id,
     export=True,
-    load_in_4bit=True,
-    quantization_config=OVWeightQuantizationConfig(mode=nncf.CompressWeightsMode.INT4_ASYM, ratio=0.8, dataset="ptb"),
+    quantization_config=OVWeightQuantizationConfig(bits=4, sym=False, ratio=0.8, dataset="ptb"),
 )
 ```

From b114fddcfc19cb4c08f9c052f07aea8399f15a89 Mon Sep 17 00:00:00 2001
From: Ella Charlaix
Date: Fri, 16 Feb 2024 10:22:58 +0100
Subject: [PATCH 3/4] Update documentation

---
 docs/source/optimization_ov.mdx | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/source/optimization_ov.mdx b/docs/source/optimization_ov.mdx
index eb184c8e87..ce8695356f 100644
--- a/docs/source/optimization_ov.mdx
+++ b/docs/source/optimization_ov.mdx
@@ -86,8 +86,6 @@ model = OVModelForCausalLM.from_pretrained(
 )
 ```

-> **NOTE:** if `load_in_4bit` is used without `quantization_config` provided, a pre-defined `model_id` specific configuration is used in case it exists or a default 4-bit configuration is used otherwise.
-
 For more details, please refer to the corresponding NNCF [documentation](https://github.com/openvinotoolkit/nncf/blob/develop/docs/compression_algorithms/CompressWeights.md).

From 75b794b10d90d3d805b232f3de9b92e6a400674b Mon Sep 17 00:00:00 2001
From: Ella Charlaix
Date: Wed, 21 Feb 2024 14:10:03 +0100
Subject: [PATCH 4/4] typo

---
 docs/source/optimization_ov.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/optimization_ov.mdx b/docs/source/optimization_ov.mdx
index ce8695356f..0b653cf726 100644
--- a/docs/source/optimization_ov.mdx
+++ b/docs/source/optimization_ov.mdx
@@ -74,7 +74,7 @@ model = OVModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)

 > **NOTE:** `load_in_8bit` is enabled by default for models larger than 1 billion parameters.

-For the 4-bit weight quantization you can use yhe `quantization_config` to specify the optimization parameters, for example:
+For the 4-bit weight quantization you can use the `quantization_config` to specify the optimization parameters, for example:

 ```python
 from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
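
Taken together, the series lands on a single `from_pretrained` call for 4-bit weight quantization. The snippet below is a minimal sketch of that end state, assuming `optimum-intel` is installed with its OpenVINO extras; the `model_id` value and the `ov_model_int4` output directory are placeholders, while the `OVWeightQuantizationConfig` arguments are the ones shown in the final patch.

```python
# Sketch of the documented 8-bit / 4-bit weight-only quantization flow (optimum-intel, OpenVINO backend).
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
from transformers import AutoTokenizer

model_id = "HuggingFaceH4/zephyr-7b-beta"  # placeholder: any causal LM checkpoint on the Hub

# 8-bit weight quantization; per the doc, this is the default for models larger than 1B parameters.
# model_8bit = OVModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)

# 4-bit weight quantization with parameters controlled through quantization_config.
model = OVModelForCausalLM.from_pretrained(
    model_id,
    export=True,  # export the checkpoint to OpenVINO IR before compressing weights
    quantization_config=OVWeightQuantizationConfig(bits=4, sym=False, ratio=0.8, dataset="ptb"),
)

# Save the compressed model (and tokenizer) so it can be reloaded without re-quantizing.
model.save_pretrained("ov_model_int4")
AutoTokenizer.from_pretrained(model_id).save_pretrained("ov_model_int4")
```

In NNCF's weight compression, `ratio=0.8` quantizes roughly 80% of the weights to 4 bits and keeps the remainder in 8-bit precision, while `dataset="ptb"` enables data-aware compression using that dataset.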