
Conversation

@daniil-lyakhov
Contributor

What does this PR do?

[NNCF] FP8/FP4 support

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@nikita-savelyevv nikita-savelyevv (Collaborator) left a comment

Thanks!

Could you please also add a couple of cases to TRANSFORMERS_4BIT_CONFIGURATIONS? https://github.com/huggingface/optimum-intel/blob/main/tests/openvino/test_quantization.py#L558
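
For illustration, new entries could look roughly like the sketch below; the tuple layout (task, model id, config kwargs) is inferred from the case quoted later in this thread, and the config fields are assumptions rather than code from the PR:

```python
# Hypothetical additions to TRANSFORMERS_4BIT_CONFIGURATIONS; the real tuple
# layout and expected values must follow the linked test file.
NEW_4BIT_CASES = [
    (
        "text-generation-with-past",
        "llama",  # small test model (illustrative choice)
        # "fp4"/"nf4" are real transformers bnb_4bit_quant_type options, but
        # whether the test configs are spelled this way here is an assumption.
        {"load_in_4bit": True, "bnb_4bit_quant_type": "fp4"},
    ),
    (
        "text-generation-with-past",
        "llama",
        {"load_in_4bit": True, "bnb_4bit_quant_type": "nf4"},
    ),
]
```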

```diff
 usage: optimum-cli export openvino [-h] -m MODEL [--task TASK] [--framework {pt}] [--trust-remote-code]
-                                   [--weight-format {fp32,fp16,int8,int4,mxfp4,nf4,cb4}]
+                                   [--weight-format {fp32,fp16,int8,int4,mxfp4,nf4,fp4,fp8_e4m3,cb4}]
```
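
For context, once this lands, an export with one of the new formats would look something like the following (the model id and output directory are illustrative):

```text
optimum-cli export openvino -m meta-llama/Llama-3.2-1B --weight-format fp8_e4m3 ov_llama_fp8/
```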
Collaborator

In my opinion, we should rename fp8_e4m3 to f8e4m3 to stay aligned with the f8e4m3 option for --quant-mode.

cc @ljaljushkin

@ljaljushkin ljaljushkin (Contributor) Nov 19, 2025

To be honest, I don't like that the optimum and NNCF names are different and not aligned. It's not clear why --quant-mode in optimum is f8e4m3 while NNCF has QuantizationMode.FP8_E4M3.

MXFP4, MXFP8, and NVFP4 are established names that don't follow an mxf4_e2m1/nvf4_e2m1 convention, and fp32 and fp16 are likewise not f32 and f16, so I'm not sure f8e4m3 was a good choice for --quant-mode. Can we reconsider it in optimum? Would it affect anyone?

Collaborator

Unfortunately, I don't remember the exact reasons why these data types were introduced with names different from the ones in NNCF, because it was done by Nikita M. (#1100, ticket 160144). It was approved by Alexander K. back then, so I believe there was at least some reasoning on his side.

I personally don't see much difference between the optimum-intel and NNCF names as long as they are consistent within a single repo. The only concern is that if we decide to rename these data types in optimum-intel, the transition will be a bit painful: we will have to keep both names for a couple of releases in order to properly deprecate the old ones.
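
If the rename goes ahead, the usual pattern (sketched below, not taken from this PR; the helper name and alias table are made up) is to accept both spellings for a few releases and warn on the old one:

```python
import warnings

# Hypothetical alias table: current optimum-intel names -> NNCF-aligned names.
# Neither the mapping nor the helper exists in the repo; this is just the
# standard deprecation pattern.
_DEPRECATED_QUANT_MODES = {"f8e4m3": "fp8_e4m3", "f8e5m2": "fp8_e5m2"}


def normalize_quant_mode(mode: str) -> str:
    """Map a deprecated --quant-mode value to its new name, warning the user."""
    if mode in _DEPRECATED_QUANT_MODES:
        new_mode = _DEPRECATED_QUANT_MODES[mode]
        warnings.warn(
            f"--quant-mode {mode!r} is deprecated and will be removed in a "
            f"future release, use {new_mode!r} instead.",
            FutureWarning,
        )
        return new_mode
    return mode
```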

Collaborator

@helena-intel could you please provide your feedback on this? The proposal is to rename some low-precision data types in optimum-intel to be consistent with their NNCF counterparts, for example f8e4m3 -> fp8_e4m3.

Collaborator

I strongly prefer having the same names, and using standard, established names. One reason this is useful: people read the NNCF documentation to learn more about the different quantization options, knowing that the examples use NNCF but the concepts apply to optimum-intel too. Having different names is then unexpected, especially if you go searching for examples or source code for what you just read. Not terrible, but not great either. I also think now is a good time to change, since --quant-mode is not yet widely used but will probably become more important.

```python
    ),
    (
        "text-generation-with-past",
        "opt125m",
```
Collaborator

Could you please use some other model, e.g. llama? opt125m is too large, and in the future we'd like to replace it with a different one. The group size will perhaps need to be reduced.
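
A hypothetical version of the quoted case after the swap (the model id and group size are placeholders, not values from the PR):

```python
# Hypothetical replacement for the quoted case above:
CASE = (
    "text-generation-with-past",
    "llama",                      # smaller test model instead of opt125m
    dict(bits=4, group_size=16),  # reduced group size (illustrative value)
)
```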

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
