[NNCF] FP8/FP4 support #1524
Conversation
nikita-savelyevv left a comment:
Thanks!
Could you please also add a couple of cases to TRANSFORMERS_4BIT_CONFIGURATIONS? https://github.com/huggingface/optimum-intel/blob/main/tests/openvino/test_quantization.py#L558
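For reference, a hedged sketch of what such additions might look like; the real list in tests/openvino/test_quantization.py defines the exact entry layout (including expected node counts), and the model name, kwarg names, and values below are illustrative, not taken from the repository:

```python
# Hedged sketch: the real TRANSFORMERS_4BIT_CONFIGURATIONS lives in
# tests/openvino/test_quantization.py and its entries carry more fields
# (e.g. expected node counts); everything below is illustrative.
TRANSFORMERS_4BIT_CONFIGURATIONS = [
    # ... existing cases ...
    ("llama", {"weight_format": "fp4", "group_size": 16}),  # new FP4 case
    ("llama", {"weight_format": "fp8_e4m3"}),               # new FP8 case
]
```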
```diff
 usage: optimum-cli export openvino [-h] -m MODEL [--task TASK] [--framework {pt}] [--trust-remote-code]
-                                   [--weight-format {fp32,fp16,int8,int4,mxfp4,nf4,cb4}]
+                                   [--weight-format {fp32,fp16,int8,int4,mxfp4,nf4,fp4,fp8_e4m3,cb4}]
```
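With the new choices in place, an export using one of the added formats would look like the following; only the flags come from the usage line above, while the model name and output directory are placeholders:

```text
optimum-cli export openvino -m meta-llama/Llama-3.2-1B --weight-format fp8_e4m3 llama_fp8_ov/
```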
In my opinion we should rename fp8_e4m3 to f8e4m3 to stay aligned with the f8e4m3 option for --quant-mode.
cc @ljaljushkin
To be honest, I don't like that the optimum and NNCF names are different and not aligned.
It's not clear why --quant-mode in optimum is f8e4m3 while NNCF has QuantizationMode.FP8_E4M3.
MXFP4, MXFP8 and NVFP4 are established names and don't follow an mxf4_e2m1 / nvf4_e2m1 naming convention.
fp32 and fp16 are also not f32 and f16, so I'm not sure f8e4m3 was a good choice for --quant-mode. Can we reconsider it in optimum? Would it affect anyone?
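For concreteness, the two spellings being compared side by side; the enum member name is as quoted above, while the top-level export from nncf is my assumption:

```python
# optimum-intel CLI today uses:  --quant-mode f8e4m3
# NNCF Python API (as quoted):   QuantizationMode.FP8_E4M3
import nncf

print(nncf.QuantizationMode.FP8_E4M3)  # assumed to be exported at the nncf top level
```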
Unfortunately, I don't remember the exact reasons why these data types were introduced with names different from the ones in NNCF, because it was done by Nikita M. (#1100, ticket 160144). It was approved by Alexander K. back then, so I believe there was at least some reasoning on his side.
I personally don't see much difference between the optimum-intel and NNCF names as long as they are consistent within a single repo. The only thing is that if we decide to rename these data types in optimum-intel, the transition will be a bit painful: we will have to keep both names for a couple of releases in order to properly deprecate the old ones.
@helena-intel could you please provide your feedback on this? The proposal is to rename some low precision data type names in optimum-intel to be consistent with their corresponding names in NNCF, for example f8e4m3 -> fp8_e4m3.
I strongly prefer having the same names, and using standard, established names. One reason this is useful: people read the NNCF documentation to learn more about the different quantization options, knowing that the examples use NNCF but that the concepts apply to optimum-intel too. Having different names is then unexpected, and it also hurts when you go searching for examples or source code for what you just read. Not terrible, but not great either. I also think now is a better time to change, since --quant-mode is not yet very widely used, but it will probably become more important.
```python
),
(
    "text-generation-with-past",
    "opt125m",
```
Could you please use some other model, e.g. llama? opt125m is too large, and in the future we'd like to replace it with a different one. The group size will perhaps need to be reduced.
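A hedged sketch of the suggested replacement entry, keeping the tuple shape of the snippet above; only the task and model-id fields come from that snippet, and the quantization kwargs and group size are illustrative:

```python
(
    "text-generation-with-past",
    "llama",
    # illustrative kwargs; a smaller group size that fits the tiny llama test model
    {"weight_format": "fp4", "group_size": 16},
),
```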
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. |
What does this PR do?
[NNCF] FP8/FP4 support
Before submitting