4 changes: 2 additions & 2 deletions docs/source/openvino/export.mdx
@@ -31,7 +31,7 @@ Check out the help for more options:

```text
usage: optimum-cli export openvino [-h] -m MODEL [--task TASK] [--framework {pt}] [--trust-remote-code]
- [--weight-format {fp32,fp16,int8,int4,mxfp4,nf4,cb4}]
+ [--weight-format {fp32,fp16,int8,int4,mxfp4,nf4,fp4,fp8_e4m3,cb4}]
Collaborator:
In my opinion we should rename fp8_e4m3 to f8e4m3 to stay aligned with the f8e4m3 option for --quant-mode.

cc @ljaljushkin

@ljaljushkin (Contributor), Nov 19, 2025:

To be honest, I don't like that the optimum and NNCF names are different and not aligned. It's not clear why --quant-mode in optimum uses f8e4m3 while NNCF has QuantizationMode.FP8_E4M3.

MXFP4, MXFP8 and NVFP4 are established names and don't follow an mxf4_e2m1 / nvf4_e2m1 naming convention. fp32 and fp16 are also not f32 and f16, so I'm not sure it was a good choice for --quant-mode. Can we reconsider it in optimum? Would it affect anyone?

Collaborator:

Unfortunately, I don't remember the exact reasons why these data types were introduced with names different from the ones in NNCF, because it was done by Nikita M. (#1100, ticket 160144). It was approved by Alexander K. back then, so I believe there was at least some reasoning on his side.

I personally don't see much difference between the optimum-intel and NNCF names as long as they are consistent within a single repo. The only thing is that if we decide to rename these data types in optimum-intel, the transition will be a bit painful, because we will have to keep both names for a couple of releases in order to properly deprecate the old ones.

Collaborator:

@helena-intel could you please provide your feedback on this? The proposal is to rename some low-precision data type names in optimum-intel to be consistent with their corresponding names in NNCF, for example f8e4m3 -> fp8_e4m3.

Collaborator:

I strongly prefer having the same names, and using standard, established names. One reason this is useful: people read the NNCF documentation to learn more about the different quantization options, knowing that the examples use NNCF but the concepts apply to optimum-intel too. Having different names is then unexpected, and it also gets in the way when you go searching for examples or source code for what you just read. Not terrible, but not great either. I also think now is a better time to change, since quant-mode is not yet very widely used, but it will probably become more important.

[--quant-mode {int8,f8e4m3,f8e5m2,cb4_f8e4m3,int4_f8e4m3,int4_f8e5m2}]
[--library {transformers,diffusers,timm,sentence_transformers,open_clip}]
[--cache_dir CACHE_DIR] [--pad-token-id PAD_TOKEN_ID] [--ratio RATIO] [--sym]
@@ -66,7 +66,7 @@ Optional arguments:
--trust-remote-code Allows to use custom code for the modeling hosted in the model repository. This option should
only be set for repositories you trust and in which you have read the code, as it will execute
on your local machine arbitrary code present in the model repository.
- --weight-format {fp32,fp16,int8,int4,mxfp4,nf4,cb4}
+ --weight-format {fp32,fp16,int8,int4,mxfp4,fp4,fp8_e4m3,nf4,cb4}
The weight format of the exported model. Option 'cb4' represents a codebook with 16
fixed fp8 values in E4M3 format.
--quant-mode {int8,f8e4m3,f8e5m2,cb4_f8e4m3,int4_f8e4m3,int4_f8e5m2}
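For reference, a minimal sketch of invoking the exporter with one of the newly added weight formats is shown below; the model ID and output directory are illustrative placeholders, not taken from this diff.

```text
# Hypothetical invocation: export a model with weights compressed to fp8_e4m3
optimum-cli export openvino -m facebook/opt-125m --weight-format fp8_e4m3 ov_opt125m_fp8/
```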
2 changes: 1 addition & 1 deletion optimum/commands/export/openvino.py
@@ -67,7 +67,7 @@ def parse_args_openvino(parser: "ArgumentParser"):
optional_group.add_argument(
"--weight-format",
type=str,
choices=["fp32", "fp16", "int8", "int4", "mxfp4", "nf4", "cb4"],
choices=["fp32", "fp16", "int8", "int4", "mxfp4", "fp4", "fp8_e4m3", "nf4", "cb4"],
default=None,
help=(
"The weight format of the exported model. Option 'cb4' represents a codebook with 16 fixed fp8 values in E4M3 format."
6 changes: 3 additions & 3 deletions optimum/intel/openvino/configuration.py
@@ -686,7 +686,7 @@ class OVWeightQuantizationConfig(OVQuantizationConfigBase):
Indicates whether to apply a scale estimation algorithm that minimizes the L2 error between the original and
compressed layers. Providing a dataset is required to run scale estimation.
dtype (`str`, *optional*):
- Data type weights are compressed to. Possible values: ['int4', 'int8', 'mxfp4', 'nf4', 'cb4'].
+ Data type weights are compressed to. Possible values: ['int4', 'int8', 'mxfp4', 'nf4', 'cb4', 'fp4', 'fp8_e4m3'].
Option 'cb4' represents a codebook with 16 fixed fp8 values in E4M3 format.
gptq (`bool`, *optional*):
Whether to apply GPTQ algorithm. GPTQ optimizes compressed weights in a layer-wise fashion to minimize the
@@ -879,10 +879,10 @@ def post_init(self):

if self.dtype is None:
self.dtype = "int4" if self.bits == 4 else "int8"
if self.dtype not in ["int4", "int8", "mxfp4", "nf4", "cb4"]:
if self.dtype not in ["int4", "int8", "mxfp4", "nf4", "cb4", "fp4", "fp8_e4m3"]:
raise ValueError(
"Weights quantization data type must be one of the following: "
f"['int4', 'int8', 'mxfp4', 'nf4', 'cb4'], but found: {self.dtype}."
f"['int4', 'int8', 'mxfp4', 'nf4', 'cb4', 'fp4', 'fp8_e4m3'], but found: {self.dtype}."
)
if self.dtype in ["mxfp4", "nf4", "cb4"]:
if self.bits != 4:
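As a rough illustration of how the extended dtype list might be used from the Python API, here is a minimal sketch; the model ID is a placeholder and the bits=8 pairing with dtype="fp8_e4m3" is an assumption, not something this diff confirms.

```python
# Hypothetical sketch: compress weights to the newly added fp8_e4m3 format
# through OVWeightQuantizationConfig. The model ID is a placeholder and the
# bits=8 / dtype="fp8_e4m3" pairing is assumed, not taken from this diff.
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

quantization_config = OVWeightQuantizationConfig(bits=8, dtype="fp8_e4m3")
model = OVModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # placeholder model ID
    export=True,  # convert from PyTorch to OpenVINO IR during loading
    quantization_config=quantization_config,
)
model.save_pretrained("ov_opt125m_fp8")
```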
12 changes: 12 additions & 0 deletions tests/openvino/test_exporters_cli.py
@@ -494,6 +494,18 @@ class OVCLIExportTestCase(unittest.TestCase):
"mxfp4",
{"model": {"int8": 4, "f4e2m1": 72, "f8e8m0": 72}},
),
+ (
+ "text-generation-with-past",
+ "opt125m",
Collaborator:

Could you please use some other model, e.g. llama? opt125m is too large and in the future we'd like to replace it with a different one. Perhaps the group size will need to be reduced.

"fp4",
{"model": {"int8": 4, "f4e2m1": 72}},
),
(
"text-generation-with-past",
"opt125m",
"fp8_e4m3",
{"model": {"int8": 4, "f8e4m3": 72}},
),
(
"text-generation-with-past",
"opt125m",
Expand Down