
Ipex doc #828

Merged (32 commits, merged on Aug 27, 2024)

Changes from 22 commits

Commits (32)
d7b0fc4
change readme, source/index, source/installation
jiqing-feng Jul 16, 2024
78f7c61
add ipex doc 1st step
jiqing-feng Jul 17, 2024
b531a72
update readme for command line usage
jiqing-feng Jul 17, 2024
90d9000
fix bug for ipex readme
jiqing-feng Jul 17, 2024
b39be97
add export doc
jiqing-feng Jul 17, 2024
a90cb23
update all ipex docs
jiqing-feng Jul 17, 2024
e884158
rm diffusers
jiqing-feng Jul 17, 2024
2100cd9
change register
jiqing-feng Jul 17, 2024
84305bc
Update README.md
jiqing-feng Jul 17, 2024
23f8756
Update docs/source/installation.mdx
jiqing-feng Jul 17, 2024
644d197
fix readme
jiqing-feng Jul 17, 2024
fde311c
fix ipex exporter args comments
jiqing-feng Jul 17, 2024
a9a2c38
extend ipex export explain
jiqing-feng Jul 17, 2024
4368205
fix ipex reference.mdx
jiqing-feng Jul 18, 2024
c31696d
add comments for auto doc
jiqing-feng Jul 18, 2024
c5412da
rm cli export
jiqing-feng Jul 18, 2024
291c73d
Update optimum/commands/export/ipex.py
jiqing-feng Jul 18, 2024
8772c51
rm commit hash in export command
jiqing-feng Jul 18, 2024
39f27dd
rm export
jiqing-feng Jul 22, 2024
1d8fc29
rm jit
jiqing-feng Jul 22, 2024
02fa235
add ipex on doc's docker file
jiqing-feng Jul 26, 2024
4ed2620
indicate that ipex model only supports for cpu and the export format …
jiqing-feng Jul 26, 2024
770d82f
Update docs/source/ipex/inference.mdx
jiqing-feng Jul 29, 2024
0a9ce3d
explain patching
jiqing-feng Jul 29, 2024
378144e
rm ipex reference
jiqing-feng Aug 6, 2024
be7097d
Update docs/source/ipex/inference.mdx
echarlaix Aug 26, 2024
21f06cf
Update docs/source/ipex/inference.mdx
echarlaix Aug 26, 2024
ae8143a
Update docs/source/ipex/inference.mdx
echarlaix Aug 26, 2024
9a25ac7
Update docs/source/index.mdx
echarlaix Aug 26, 2024
52bec25
Update docs/source/ipex/inference.mdx
echarlaix Aug 26, 2024
d6153b7
Update docs/source/ipex/models.mdx
echarlaix Aug 26, 2024
8115bf6
Update docs/Dockerfile
echarlaix Aug 26, 2024
3 changes: 1 addition & 2 deletions README.md
@@ -223,15 +223,14 @@ To load your IPEX model, you can just replace your `AutoModelForXxx` class with
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
results = pipe("He's a dreadful magician and")

```

For more details, please refer to the [documentation](https://intel.github.io/intel-extension-for-pytorch/#introduction).


## Running the examples

Check out the [`examples`](https://github.com/huggingface/optimum-intel/tree/main/examples) directory to see how 🤗 Optimum Intel can be used to optimize models and accelerate inference.
Check out the [`examples`](https://github.com/huggingface/optimum-intel/tree/main/examples) and [`notebooks`](https://github.com/huggingface/optimum-intel/tree/main/notebooks) directories to see how 🤗 Optimum Intel can be used to optimize models and accelerate inference.

Do not forget to install requirements for every example:

2 changes: 1 addition & 1 deletion docs/Dockerfile
@@ -25,4 +25,4 @@ RUN npm install npm@9.8.1 -g && \
RUN python3 -m pip install --no-cache-dir --upgrade pip
RUN python3 -m pip install --no-cache-dir git+https://github.com/huggingface/doc-builder.git
RUN git clone $clone_url && cd optimum-intel && git checkout $commit_sha
RUN python3 -m pip install --no-cache-dir ./optimum-intel[neural-compressor,openvino,diffusers,quality]
RUN python3 -m pip install --no-cache-dir ./optimum-intel[ipex,neural-compressor,openvino,diffusers,quality]
13 changes: 13 additions & 0 deletions docs/source/_toctree.yml
@@ -30,5 +30,18 @@
      title: Tutorials
      isExpanded: false
    title: OpenVINO
  - sections:
    - local: ipex/inference
      title: Inference
    - local: ipex/models
      title: Supported Models
    - local: ipex/reference
      title: Reference
    - sections:
      - local: ipex/tutorials/notebooks
        title: Notebooks
      title: Tutorials
      isExpanded: false
    title: IPEX
  title: Optimum Intel
  isExpanded: false
2 changes: 2 additions & 0 deletions docs/source/index.mdx
@@ -19,6 +19,8 @@ limitations under the License.

🤗 Optimum Intel is the interface between the 🤗 Transformers and Diffusers libraries and the different tools and libraries provided by Intel to accelerate end-to-end pipelines on Intel architectures.

[Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/#introduction) is an open-source library which provides optimizations for both eager mode and graph mode; compared to eager mode, graph mode in PyTorch normally yields better performance thanks to optimization techniques such as operation fusion.
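To make the eager versus graph mode distinction concrete, here is a minimal sketch that applies IPEX directly to a toy module. It assumes `intel_extension_for_pytorch` is installed and is only an illustration of the two modes, not the Optimum Intel API covered later in this documentation:

```python
import torch
import intel_extension_for_pytorch as ipex  # assumed installed (e.g. via the `ipex` extra)

# A toy module, only to illustrate the two modes
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8)).eval()
example = torch.randn(1, 64)

# Eager mode: IPEX optimizations (e.g. weight prepacking) are applied op by op
model = ipex.optimize(model)

# Graph mode: tracing the whole model lets TorchScript/IPEX fuse operations across the graph
with torch.no_grad():
    traced = torch.jit.trace(model, example)
    traced = torch.jit.freeze(traced)
    output = traced(example)
```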

[Intel Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) is an open-source library enabling the usage of the most popular compression techniques such as quantization, pruning and knowledge distillation. It supports automatic accuracy-driven tuning strategies so that users can easily generate quantized models. Users can apply static, dynamic and quantization-aware training approaches while specifying an expected accuracy criterion. It also supports different weight pruning techniques, enabling the creation of pruned models with a predefined sparsity target.

[OpenVINO](https://docs.openvino.ai) is an open-source toolkit that enables high performance inference capabilities for Intel CPUs, GPUs, and special DL inference accelerators ([see](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) the full list of supported devices). It is supplied with a set of tools to optimize your models with compression techniques such as quantization, pruning and knowledge distillation. Optimum Intel provides a simple interface to optimize your Transformers and Diffusers models, convert them to the OpenVINO Intermediate Representation (IR) format and run inference using OpenVINO Runtime.
3 changes: 2 additions & 1 deletion docs/source/installation.mdx
@@ -22,6 +22,7 @@ To install the latest release of 🤗 Optimum Intel with the corresponding requi
|:-----------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------|
| [Intel Neural Compressor (INC)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) | `pip install --upgrade --upgrade-strategy eager "optimum[neural-compressor]"`|
| [Intel OpenVINO](https://docs.openvino.ai ) | `pip install --upgrade --upgrade-strategy eager "optimum[openvino]"` |
| [Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/#introduction) | `pip install --upgrade --upgrade-strategy eager "optimum[ipex]"` |

The `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to the latest version.

@@ -42,4 +43,4 @@ or to install from source including dependencies:
python -m pip install "optimum-intel[extras]"@git+https://github.com/huggingface/optimum-intel.git
```

where `extras` can be one or more of `neural-compressor`, `openvino`, `nncf`.
where `extras` can be one or more of `neural-compressor`, `openvino`, `ipex`.
47 changes: 47 additions & 0 deletions docs/source/ipex/inference.mdx
@@ -0,0 +1,47 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Inference

Optimum Intel can be used to load models from the [Hub](https://huggingface.co/models) and create pipelines to run inference with IPEX optimizations (including patching, weight prepacking and graph mode) on a variety of Intel processors (currently only CPUs are supported).
echarlaix (Collaborator) commented on Jul 25, 2024:

What's the plan for the model format in the next release? From my understanding we will stop using TorchScript, so I'd prefer to wait for the next release, once we have something that won't get deprecated, before adding it to the documentation.

jiqing-feng (Collaborator, Author) replied on Jul 26, 2024:

Do you mean the PyTorch release? If so, it depends on the models' performance under torch.compile. If all models can get an acceptable speed-up under torch.compile, we will remove jit.trace and apply torch.compile; otherwise, we will keep TorchScript or convert parts of the models to torch.compile. I will discuss with you before taking any action.

I don't think waiting for the next release is the best option, since torch.compile is not under our control. Currently, torch.compile does not work on all models for all tasks; many performance regressions need to be fixed. We will apply torch.compile to the models one by one. Changing all models to compile without any issues is impossible for now, so it's a long-term project. Besides, we will not change the API in IPEXModel, so it's okay to deliver this IPEX doc to users.
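For readers following along, below is a minimal, purely illustrative sketch of the two export paths discussed above (it is not part of this PR):

```python
import torch

model = torch.nn.Linear(16, 16).eval()
example = torch.randn(1, 16)

# Current path: export to TorchScript via tracing
with torch.no_grad():
    traced = torch.jit.trace(model, example)

# Candidate future path: torch.compile (PyTorch 2.x), no separate export artifact
compiled = torch.compile(model)
with torch.no_grad():
    output = compiled(example)
```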



## Loading

### Transformers models

You can load models from the Hugging Face Hub; the model will be optimized by IPEX (including patching, weight prepacking and graph mode) during loading.
For now, IPEX optimization only supports CPU models, and it will export the original model to a TorchScript model. The export format will be changed to `torch.compile` in the future.

```diff
import torch
from transformers import AutoTokenizer, pipeline
- from transformers import AutoModelForCausalLM
+ from optimum.intel import IPEXModelForCausalLM

model_id = "gpt2"
- model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
+ model = IPEXModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
results = pipe("He's a dreadful magician and")
```

As shown in the table below, each task is associated with a class enabling you to automatically load your model; a short usage sketch follows the table.

| Auto Class | Task |
|--------------------------------------|--------------------------------------|
| `IPEXModelForSequenceClassification` | `text-classification` |
| `IPEXModelForTokenClassification` | `token-classification` |
| `IPEXModelForQuestionAnswering` | `question-answering` |
| `IPEXModelForImageClassification` | `image-classification` |
| `IPEXModel` | `feature-extraction` |
| `IPEXModelForMaskedLM` | `fill-mask` |
| `IPEXModelForAudioClassification` | `audio-classification` |
| `IPEXModelForCausalLM` | `text-generation` |
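
As an illustration of the table above, here is a minimal sketch using one of the other classes; the checkpoint is only an example:

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel import IPEXModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
model = IPEXModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(pipe("IPEX made this model noticeably faster on my CPU!"))
```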
46 changes: 46 additions & 0 deletions docs/source/ipex/models.mdx
@@ -0,0 +1,46 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Supported models

🤗 Optimum handles the export of models to IPEX in the `exporters.ipex` module. It provides classes, functions, and a command line interface to perform the export easily.
Here is the list of the supported architectures:

## [Transformers](https://huggingface.co/docs/transformers/index)

- Albert
- Bart
- Beit
- Bert
- BlenderBot
- BlenderBotSmall
- Bloom
- CodeGen
- DistilBert
- Electra
- Flaubert
- GPT-2
- GPT-BigCode
- GPT-Neo
- GPT-NeoX
- Llama
- MPT
- Mistral
- MobileNet v1
- MobileNet v2
- MobileVit
- OPT
- ResNet
- Roberta
- Roformer
- SqueezeBert
- UniSpeech
- Vit
- Wav2Vec2
- XLM
72 changes: 72 additions & 0 deletions docs/source/ipex/reference.mdx
@@ -0,0 +1,72 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Models

## Generic model classes

[[autodoc]] ipex.modeling_base.IPEXModel
- _from_pretrained
- forward

## Natural Language Processing

The following classes are available for the following natural language processing tasks.

### IPEXModelForCausalLM

[[autodoc]] ipex.modeling_base.IPEXModelForCausalLM
- forward
- generate

### IPEXModelForMaskedLM

[[autodoc]] ipex.modeling_base.IPEXModelForMaskedLM
- forward

### IPEXModelForQuestionAnswering

[[autodoc]] ipex.modeling_base.IPEXModelForQuestionAnswering
- forward

### IPEXModelForSequenceClassification

[[autodoc]] ipex.modeling_base.IPEXModelForSequenceClassification
- forward

### IPEXModelForTokenClassification

[[autodoc]] ipex.modeling_base.IPEXModelForTokenClassification
- forward


## Audio

The following classes are available for the following audio tasks.

### IPEXModelForAudioClassification

[[autodoc]] ipex.modeling_base.IPEXModelForAudioClassification
- forward


## Computer Vision

The following classes are available for the following computer vision tasks.

### IPEXModelForImageClassification

[[autodoc]] ipex.modeling_base.IPEXModelForImageClassification
- forward


### IPEXModelForFeatureExtraction

[[autodoc]] ipex.modeling_base.IPEXModelForFeatureExtraction
- forward
16 changes: 16 additions & 0 deletions docs/source/ipex/tutorials/notebooks.mdx
@@ -0,0 +1,16 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Notebooks

## Inference

| Notebook | Description | Colab | AWS Studio |
|:---------|:------------|:-----:|-----------:|
| [How to run inference with IPEX](https://github.com/huggingface/optimum-intel/tree/main/notebooks/ipex) | Explains how to export your model to IPEX and run inference with an IPEX model on a text-generation task | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/optimum-intel/blob/main/notebooks/ipex/text_generation.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-intel/blob/main/notebooks/ipex/text_generation.ipynb) |
36 changes: 35 additions & 1 deletion optimum/intel/ipex/modeling_base.py
@@ -198,14 +198,48 @@ def _from_pretrained(
token: Optional[Union[bool, str]] = None,
revision: Optional[str] = None,
force_download: bool = False,
cache_dir: str = HUGGINGFACE_HUB_CACHE,
cache_dir: Union[str, Path] = HUGGINGFACE_HUB_CACHE,
subfolder: str = "",
local_files_only: bool = False,
torch_dtype: Optional[Union[str, "torch.dtype"]] = None,
trust_remote_code: bool = False,
file_name: Optional[str] = WEIGHTS_NAME,
**kwargs,
):
"""
Loads a model and its configuration file from a directory or the HF Hub.

Arguments:
model_id (`str` or `Path`):
The directory from which to load the model.
Can be either:
- The model id of a pretrained model hosted inside a model repo on huggingface.co.
- The path to a directory containing the model weights.
use_auth_token (Optional[Union[bool, str]], defaults to `None`):
Deprecated. Please use `token` instead.
token (Optional[Union[bool, str]], defaults to `None`):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
when running `huggingface-cli login` (stored in `~/.huggingface`).
revision (`str`, *optional*):
The specific model version to use. It can be a branch name, a tag name, or a commit id.
force_download (`bool`, defaults to `False`):
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist.
cache_dir (`Union[str, Path]`, *optional*):
The path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used.
subfolder (`str`, *optional*):
In case the relevant files are located inside a subfolder of the model repo either locally or on huggingface.co, you can specify the folder name here.
local_files_only (`bool`, *optional*, defaults to `False`):
Whether or not to only look at local files (i.e., do not try to download the model).
torch_dtype (`Optional[Union[str, "torch.dtype"]]`, *optional*):
Load the model under the specified dtype (`float16`, `bfloat16` or `float32`), overriding the model `config.torch_dtype` if one exists. If not specified, the model will be loaded in `float32`.
trust_remote_code (`bool`, *optional*):
Allows using custom code for the modeling hosted in the model repository. This option should only be set for repositories you trust and in which you have read the code, as it will execute arbitrary code present in the model repository on your local machine.
file_name (`str`, *optional*):
The file name of the model to load. Overwrites the default file name and allows one to load the model
with a different name.
"""
if use_auth_token is not None:
warnings.warn(
"The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
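To illustrate how the arguments documented above are typically passed, here is a minimal sketch; the checkpoint and values are examples only:

```python
import torch

from optimum.intel import IPEXModelForCausalLM

# `torch_dtype` and `revision` are the arguments documented above; `export=True`
# converts the original model to the IPEX/TorchScript format described in the inference guide.
model = IPEXModelForCausalLM.from_pretrained(
    "gpt2",                      # example checkpoint
    torch_dtype=torch.bfloat16,
    revision="main",
    export=True,
)
```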
4 changes: 4 additions & 0 deletions optimum/intel/ipex/utils.py
@@ -14,8 +14,12 @@


_HEAD_TO_AUTOMODELS = {
"feature-extraction": "IPEXModel",
"text-generation": "IPEXModelForCausalLM",
"text-classification": "IPEXModelForSequenceClassification",
"token-classification": "IPEXModelForTokenClassification",
"question-answering": "IPEXModelForQuestionAnswering",
"fill-mask": "IPEXModelForMaskedLM",
"image-classification": "IPEXModelForImageClassification",
"audio-classification": "IPEXModelForAudioClassification",
}
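
For illustration, here is a hypothetical sketch (not code from this PR) of how this task-to-class mapping could be used to resolve a model class from a task name:

```python
from optimum import intel as optimum_intel
from optimum.intel.ipex.utils import _HEAD_TO_AUTOMODELS

task = "fill-mask"
# Resolve the auto class name for the task, then load a model with it
model_cls = getattr(optimum_intel, _HEAD_TO_AUTOMODELS[task])  # -> IPEXModelForMaskedLM
model = model_cls.from_pretrained("bert-base-uncased", export=True)
```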