Model patcher #567
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Hi @echarlaix, thanks for your review. I have fixed most of the issues, but I still have some questions about the comments and tests. Would you please take the next round of review? Thanks!
Thanks a lot for iterating on the PR @jiqing-feng
Hi @echarlaix. I think I have addressed all your comments, and I also added tests for IPEX model generation with multiple inputs. Would you please help review it? Thanks!
Hi @echarlaix, I have fixed all your comments. Do you mind taking a last round? I think we could merge it first since it does not affect the current version, and I will let you know once the new IPEX version is released. Thanks!
also cc @ofirzaf for visibility
Hi @echarlaix, thanks for your detailed review! I have disabled everything until the next IPEX release; you can check the code and see that this PR makes no change if the IPEX version is <= 2.3.0. Although the PR works well with our internal IPEX version (2.3.0.dev), I will double-check it and make it compatible when the public IPEX 2.3.0 is released. I think it is ready to merge; I would like to hear your opinion, @ofirzaf. Thanks!
tests/ipex/test_modeling.py (outdated)

@@ -128,7 +131,7 @@ def test_compare_to_transformers(self, model_arch):
     outputs = ipex_model(**tokens)
     # Compare tensor outputs
     for output_name in {"logits", "last_hidden_state"}:
-        if output_name in transformers_outputs:
+        if output_name in transformers_outputs and output_name in outputs:
Suggested change:
-        if output_name in transformers_outputs and output_name in outputs:
+        if output_name in transformers_outputs:
We should have the same outputs for both models, so we need to keep this as is.
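(For context, a minimal, self-contained sketch of the comparison in question, not the test's exact code; the checkpoint name and tolerances are illustrative. Per the review, a name present in the transformers outputs should also be present in the IPEX outputs, so a missing key fails loudly rather than being skipped:)

import torch
from transformers import AutoModel, AutoTokenizer
from optimum.intel import IPEXModel

model_id = "hf-internal-testing/tiny-random-bert"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokens = tokenizer("This is a sample input", return_tensors="pt")

transformers_model = AutoModel.from_pretrained(model_id)
ipex_model = IPEXModel.from_pretrained(model_id, export=True)

with torch.no_grad():
    transformers_outputs = transformers_model(**tokens)
    outputs = ipex_model(**tokens)

# Compare tensor outputs; fail loudly if a name is missing on the IPEX side.
for output_name in {"logits", "last_hidden_state"}:
    if output_name in transformers_outputs:
        assert output_name in outputs, f"{output_name} missing from IPEX outputs"
        torch.testing.assert_close(
            outputs[output_name], transformers_outputs[output_name], atol=1e-4, rtol=1e-4
        )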
Actually, IPEXModel doesn't return last_hidden_state when return_dict=False (see here), and return_dict=False is needed by jit trace.
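(To illustrate the tracing constraint: torch.jit.trace only supports tensors and tuples of tensors as outputs, not dict-like ModelOutput objects, so the model must run with return_dict=False, which also removes named outputs on the traced path. A hedged sketch with an illustrative checkpoint:)

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "hf-internal-testing/tiny-random-bert"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, return_dict=False)  # tuple outputs
model.eval()

inputs = tokenizer("hello", return_tensors="pt")
with torch.no_grad():
    traced = torch.jit.trace(
        model, (inputs["input_ids"], inputs["attention_mask"]), strict=False
    )
# The traced model returns a plain tuple, so there is no named
# last_hidden_state entry to look up -- only positional outputs.
hidden = traced(inputs["input_ids"], inputs["attention_mask"])[0]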
optimum/intel/ipex/modeling_base.py (outdated)
def ipex_jit_trace(model, task, use_cache):
    if version.parse(ipex.__version__) <= version.parse("2.3.0") or not is_model_support_ipex_export(model, task):
Replying to the comment: I check the IPEX version here, so if ipex.__version__ <= 2.3.0 it will not use the exports.ipex functions.
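(A simplified, hypothetical stand-in for that gate, not the PR's exact code: on any released IPEX (<= 2.3.0) the new export path is skipped entirely, so the PR changes nothing for current users. The helper name follows the diff above; should_use_patched_export and the supported-model set are illustrative:)

import intel_extension_for_pytorch as ipex
from packaging import version

_PATCHED_MODEL_TYPES = {"llama"}  # illustrative; this PR patches llama

def is_model_support_ipex_export(model, task):
    # Only model types with a patcher, on a supported task, take the new path.
    return model.config.model_type in _PATCHED_MODEL_TYPES and task == "text-generation"

def should_use_patched_export(model, task):
    if version.parse(ipex.__version__) <= version.parse("2.3.0"):
        return False  # released IPEX is too old: keep the original jit-trace path
    return is_model_support_ipex_export(model, task)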
Isn't all this model patching supposed to be taken care of by …?
Hi @ofirzaf. The utility of IPEX is to supply basic ops for the model, just like PyTorch.
Hi @echarlaix. I have applied all your changes, thanks for that. The failed CI was caused by #589, which I think we can fix soon. After that, this PR should be ready to merge :)
Commits:
* llama model patcher
* fix jit model
* fix jit model
* rm autocast in model
* add llama model patcher
* support assisted decoding and add reorder cache function
* add comment for _prepare_past_key_values
* rebase main
* fix model_dtype
* rm useless comments
* fix llama
* add comments for ipex_rope and ipex_scale_dot_product
* fix comments
* add enable_tpp comments
* fix import
* fix review aroun2
* add torch.no_grad to avoid auto_kernel_selection issue
* use torch.no_grad in jit trace
* fix ipex model testing
* add tests for ipex model generation with multi inputs
* fix code style
* remove __get__(self) as _reorder_cache is static method for the class
* fix reorder_cache
* use model_type
* check if reorder_cache is a static method
* fix _reorder_cache
* fix raise import error
* test ipex patching
* fix comments
* update API name and testing
* disable untill ipex version 2.5.0
* update testing name
* Update optimum/intel/ipex/modeling_base.py
* Update tests/ipex/test_modeling.py
* fix tests

Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
This PR enables the IPEX Llama model by patching functions and classes, and it delivers a ~30% speed-up over the original optimization.
The IPEX optimization ops will be released soon, and I will add the CI tests once they are released. We can focus on the integration for now.
BTW, this PR includes #566, and I will rebase it after #566 is merged.
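(For readers unfamiliar with the approach, a minimal sketch of the function/class patching idea, not this PR's exact code; all names below are illustrative. Forwards of matching submodules are rebound in place before jit tracing, so the traced graph picks up the optimized implementations:)

import torch

def patched_attention_forward(self, hidden_states, *args, **kwargs):
    # A real patcher would call fused IPEX kernels here (e.g. a fused rotary
    # embedding and scaled-dot-product attention); this stub is illustrative.
    raise NotImplementedError

def patch_model(model: torch.nn.Module, patches: dict) -> torch.nn.Module:
    """Rebind patched forwards onto submodules whose class name matches."""
    for module in model.modules():
        forward = patches.get(module.__class__.__name__)
        if forward is not None:
            # Bind the plain function as a method on this module instance.
            module.forward = forward.__get__(module, module.__class__)
    return model

# Usage sketch:
# model = patch_model(model, {"LlamaAttention": patched_attention_forward})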