add ipex readme #595

Merged
merged 15 commits on Mar 22, 2024
README.md (21 additions, 0 deletions)
@@ -44,6 +44,7 @@ where `extras` can be one or more of `ipex`, `neural-compressor`, `openvino`, `n

# Quick tour


## Neural Compressor

Dynamic quantization can be used through the Optimum command-line interface:
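A representative invocation is sketched below; the checkpoint name is only an example, and the exact flags may differ between `optimum-intel` versions:

```bash
optimum-cli inc quantize --model distilbert-base-cased-distilled-squad --output ./quantized_distilbert
```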
@@ -202,6 +203,26 @@ Quantization aware training (QAT) is applied in order to simulate the effects of
You can find more examples in the [documentation](https://huggingface.co/docs/optimum/intel/index).


## IPEX

To load an IPEX model, simply replace your `AutoModelForXxx` class with the corresponding `IPEXModelForXxx` class. Setting `export=True` loads a PyTorch checkpoint, exports the model via TorchScript, and applies IPEX optimizations: both operator-level optimizations (standard operators replaced with custom IPEX operators) and graph-level optimizations (such as operator fusion) are applied to your model.
```diff
import torch
from transformers import AutoTokenizer, pipeline
- from transformers import AutoModelForCausalLM
+ from optimum.intel.ipex import IPEXModelForCausalLM

model_id = "gpt2"
- model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
+ model = IPEXModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
results = pipe("He's a dreadful magician and")

```
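The other `IPEXModelForXxx` classes follow the same pattern. A minimal sketch, assuming `IPEXModelForSequenceClassification` is exported the same way (the checkpoint and input text below are only illustrative):

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel.ipex import IPEXModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
# export=True exports the PyTorch checkpoint via TorchScript and applies IPEX optimizations
model = IPEXModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(pipe("He's a dreadful magician and his tricks fall flat."))
```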

For more details, please refer to the [documentation](https://intel.github.io/intel-extension-for-pytorch/#introduction).


## Running the examples

Check out the [`examples`](https://github.com/huggingface/optimum-intel/tree/main/examples) directory to see how 🤗 Optimum Intel can be used to optimize models and accelerate inference.