
Support weight-only quantization with quantized operators in intel-extension-for-transformers. #455

Merged

Conversation

PenghuiCheng (Contributor)

What does this PR do?

The intel-extension-for-transformers package implements weight-only quantization operators with the jblas kernel, so this PR integrates weight-only quantization through intel-extension-for-transformers.
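
For reference, a minimal usage sketch. The `WeightOnlyQuantConfig` import path, its `weight_dtype`/`algorithm` values, and the model id below are assumptions based on the intel-extension-for-transformers API at the time of this PR, not a confirmed final API:

```python
# Minimal sketch, assuming the ITREX API of this era: 4-bit weight-only
# quantization of a causal LM via optimum-intel's INCQuantizer, which
# delegates to the jblas-backed operators in intel-extension-for-transformers.
from transformers import AutoModelForCausalLM

# Assumed import path for the ITREX weight-only config; it moved between releases.
from intel_extension_for_transformers.transformers.utils.config import WeightOnlyQuantConfig

from optimum.intel import INCQuantizer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")  # example model id

# RTN (round-to-nearest) is the simplest weight-only algorithm; the
# "int4_clip" weight dtype and "RTN" algorithm names are assumptions.
quantization_config = WeightOnlyQuantConfig(weight_dtype="int4_clip", algorithm="RTN")

quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(
    quantization_config=quantization_config,
    save_directory="gpt-neo-125m-woq",  # quantized model and config land here
)
```

The saved model would then be reloaded through `INCModelForCausalLM.from_pretrained("gpt-neo-125m-woq")`, the loading path this PR extends in `optimum/intel/neural_compressor/modeling_base.py`.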

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

setup.py (outdated, resolved)
@PenghuiCheng force-pushed the penghuic/weight_only_with_itrex branch 3 times, most recently from fbf9ddf to 9d03415 on October 23, 2023 08:22
@PenghuiCheng force-pushed the penghuic/weight_only_with_itrex branch from ecaac6e to ed873c9 on January 16, 2024 12:27
@PenghuiCheng (Contributor, Author)

Hi @echarlaix, I rebased the code on the main branch. Please review it, thank you very much!

Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
@PenghuiCheng force-pushed the penghuic/weight_only_with_itrex branch from bee33e1 to de190fd on January 17, 2024 00:47
@PenghuiCheng (Contributor, Author)

Hi @echarlaix, it seems the intel-extension-for-transformers build failed in the pre-CI test. Before executing the `python setup.py install` command, we should install the dependency packages with `pip install -r requirements.txt` in intel-extension-for-transformers. Could you review it? Thanks!

Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
@echarlaix (Collaborator) left a comment

Thanks a lot for your work @PenghuiCheng

examples/neural_compressor/language-modeling/run_clm.py (outdated, resolved)
examples/neural_compressor/language-modeling/run_clm.py (outdated, resolved)
optimum/intel/neural_compressor/quantization.py (outdated, resolved)
optimum/intel/utils/import_utils.py (outdated, resolved)
optimum/intel/neural_compressor/quantization.py (outdated, resolved)
optimum/intel/neural_compressor/quantization.py (outdated, resolved)
tests/neural_compressor/test_optimization.py (outdated, resolved)
PenghuiCheng and others added 6 commits March 13, 2024 14:56
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
* [OV]: Fixed inference after 4 bit weight compression

* Fixed issue

* Update optimum/intel/openvino/modeling_decoder.py

Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>

* Applied comments

* Fixed issue when request is None

---------

Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
* Updated docs with load_in_4bit

* Update documentation

* Update documentation

* typo

---------

Co-authored-by: Ella Charlaix <ella@huggingface.co>
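
(The `load_in_4bit` documentation commits above come from a rebased OpenVINO-side change; a hypothetical sketch of that loading path follows, where the `load_in_4bit` flag and the `OVWeightQuantizationConfig` parameters are assumptions inferred from the commit messages, not confirmed by this PR:)

```python
# Hypothetical sketch of the OpenVINO load_in_4bit path referenced in the
# commit messages above; the flag and config parameters are assumptions.
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# Default 4-bit weight compression applied while exporting/loading the model.
model = OVModelForCausalLM.from_pretrained("gpt2", export=True, load_in_4bit=True)

# Or with an explicit configuration, e.g. compressing 80% of the weights to 4 bit.
model = OVModelForCausalLM.from_pretrained(
    "gpt2",
    export=True,
    quantization_config=OVWeightQuantizationConfig(bits=4, ratio=0.8),
)
```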
* fix compatibility for latest transformers release

* update setup

* update setup

* fix test input size

* fix prepare generation for llama models
* deprecate compression options

* style

* fix configuration

* Update CLI argument

* update documentation

* deprecate torch nn modules for ov quantizer

* fix ov config for fp32 models

* fix format

* update documentation

* Add check for configuration

* fix ratio default value for SD models

* add quantization_config argument for OVModel

* remove commented line

* Update docs/source/inference.mdx

Co-authored-by: Alexander Kozlov <alexander.kozlov@intel.com>

* add default config for causal LM

* fix warning message

---------

Co-authored-by: Alexander Kozlov <alexander.kozlov@intel.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
tests/openvino/test_modeling_basic.py (resolved)
examples/neural_compressor/language-modeling/README.md (outdated, resolved)
examples/neural_compressor/language-modeling/run_clm.py (outdated, resolved)
examples/neural_compressor/language-modeling/run_clm.py (outdated, resolved)
examples/neural_compressor/language-modeling/run_clm.py (outdated, resolved)
examples/neural_compressor/language-modeling/run_clm.py (outdated, resolved)
examples/neural_compressor/language-modeling/run_clm.py (outdated, resolved)
optimum/intel/neural_compressor/quantization.py (outdated, resolved)
optimum/intel/neural_compressor/modeling_base.py (outdated, resolved)
PenghuiCheng and others added 10 commits March 23, 2024 21:12
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
@PenghuiCheng force-pushed the penghuic/weight_only_with_itrex branch from 2a905cc to 5ddd360 on March 25, 2024 02:37
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
.github/workflows/test_inc.yml (resolved)
optimum/intel/neural_compressor/modeling_base.py (outdated, resolved)
optimum/intel/neural_compressor/quantization.py (outdated, resolved)
optimum/intel/neural_compressor/quantization.py (outdated, resolved)
@echarlaix (Collaborator) left a comment

Looks great, thanks @PenghuiCheng

PenghuiCheng and others added 3 commits March 26, 2024 09:34
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
.github/workflows/test_inc.yml (resolved)
optimum/intel/neural_compressor/modeling_base.py (outdated, resolved)
examples/neural_compressor/language-modeling/run_clm.py (outdated, resolved)
optimum/intel/neural_compressor/modeling_base.py (outdated, resolved)
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
@echarlaix merged commit 08fc8ed into huggingface:main on Mar 27, 2024
13 of 18 checks passed