Update imports in examples/notebooks #687

Merged
20 changes: 10 additions & 10 deletions .github/workflows/test_openvino_examples.yml
@@ -7,11 +7,11 @@ on:
push:
paths:
- '.github/workflows/test_openvino_examples.yml'
- 'examples/openvino/*'
- 'examples/openvino/**'
pull_request:
paths:
- '.github/workflows/test_openvino_examples.yml'
- 'examples/openvino/*'
- 'examples/openvino/**'

concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -22,9 +22,9 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.8", "3.10"]
python-version: ["3.8", "3.11"]

runs-on: ubuntu-20.04
runs-on: ubuntu-22.04

steps:
- uses: actions/checkout@v2
@@ -35,12 +35,12 @@

- name: Install dependencies
run: |
pip install optimum[openvino] jstyleson nncf pytest
pip install -r examples/openvino/audio-classification/requirements.txt
pip install -r examples/openvino/image-classification/requirements.txt
pip install -r examples/openvino/question-answering/requirements.txt
pip install -r examples/openvino/text-classification/requirements.txt
pip install .[openvino] jstyleson pytest
pip install -r examples/openvino/audio-classification/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
pip install -r examples/openvino/image-classification/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
pip install -r examples/openvino/question-answering/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
pip install -r examples/openvino/text-classification/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
- name: Test examples
run: |
python -m pytest examples/openvino/test_examples.py
4 changes: 2 additions & 2 deletions .github/workflows/test_openvino_notebooks.yml
@@ -23,9 +23,9 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.8", "3.10"]
python-version: ["3.8", "3.11"]

runs-on: ubuntu-20.04
runs-on: ubuntu-22.04

steps:
- uses: actions/checkout@v2
3 changes: 2 additions & 1 deletion examples/openvino/audio-classification/requirements.txt
@@ -1,4 +1,5 @@
datasets>=1.14.0
evaluate
librosa
torchaudio
accelerate
@@ -35,7 +35,7 @@
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version

from optimum.intel.openvino import OVConfig, OVTrainer, OVTrainingArguments
from optimum.intel import OVConfig, OVTrainer, OVTrainingArguments


logger = logging.getLogger(__name__)
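The same one-line change recurs throughout the examples and notebooks below: the deep `optimum.intel.openvino` import path is replaced by the flatter `optimum.intel` namespace. As a reference point, a minimal sketch of user code that tolerates both layouts (the try/except fallback is illustrative only, not part of this PR) could look like:

```python
# Prefer the flat namespace used by the updated examples; fall back to the
# older deep import path when running against an earlier optimum-intel release.
try:
    from optimum.intel import OVConfig, OVTrainer, OVTrainingArguments
except ImportError:
    from optimum.intel.openvino import OVConfig, OVTrainer, OVTrainingArguments
```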
1 change: 1 addition & 0 deletions examples/openvino/image-classification/requirements.txt
@@ -2,3 +2,4 @@ datasets >= 1.8.0
torch >= 1.9.0
torchvision>=0.6.0
evaluate
accelerate
@@ -52,7 +52,7 @@
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version

from optimum.intel.openvino import OVConfig, OVTrainer, OVTrainingArguments
from optimum.intel import OVConfig, OVTrainer, OVTrainingArguments


logger = logging.getLogger(__name__)
1 change: 1 addition & 0 deletions examples/openvino/question-answering/requirements.txt
@@ -1,3 +1,4 @@
datasets >= 1.8.0
torch >= 1.9.0
evaluate
accelerate
2 changes: 1 addition & 1 deletion examples/openvino/question-answering/run_qa.py
@@ -49,7 +49,7 @@
from transformers.utils.versions import require_version
from utils_qa import postprocess_qa_predictions

from optimum.intel.openvino import OVConfig, OVTrainingArguments
from optimum.intel import OVConfig, OVTrainingArguments


# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
2 changes: 1 addition & 1 deletion examples/openvino/question-answering/trainer_qa.py
@@ -20,7 +20,7 @@
import torch.nn.functional as F
from transformers.trainer_utils import PredictionOutput

from optimum.intel.openvino.trainer import OVTrainer
from optimum.intel import OVTrainer


class QuestionAnsweringOVTrainer(OVTrainer):
3 changes: 2 additions & 1 deletion examples/openvino/text-classification/requirements.txt
@@ -4,4 +4,5 @@ scipy
scikit-learn
protobuf
torch >= 1.3
evaluate
accelerate
2 changes: 1 addition & 1 deletion examples/openvino/text-classification/run_glue.py
@@ -46,7 +46,7 @@
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version

from optimum.intel.openvino import OVConfig, OVTrainer, OVTrainingArguments
from optimum.intel import OVConfig, OVTrainer, OVTrainingArguments


# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
24 changes: 12 additions & 12 deletions notebooks/openvino/optimum_openvino_inference.ipynb
@@ -76,7 +76,7 @@
}
],
"source": [
"from optimum.intel.openvino import OVModelForQuestionAnswering\n",
"from optimum.intel import OVModelForQuestionAnswering\n",
"\n",
"# Load PyTorch model from the Hub and export to OpenVINO in the background\n",
"model = OVModelForQuestionAnswering.from_pretrained(\"distilbert-base-uncased-distilled-squad\", export=True)\n",
@@ -182,7 +182,7 @@
}
],
"source": [
"from optimum.intel.openvino import OVModelForQuestionAnswering\n",
"from optimum.intel import OVModelForQuestionAnswering\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"model = OVModelForQuestionAnswering.from_pretrained(\"distilbert-base-uncased-distilled-squad-ov-fp32\")\n",
@@ -240,7 +240,7 @@
],
"source": [
"import torch\n",
"from optimum.intel.openvino import OVModelForQuestionAnswering\n",
"from optimum.intel import OVModelForQuestionAnswering\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"model = OVModelForQuestionAnswering.from_pretrained(\"distilbert-base-uncased-distilled-squad-ov-fp32\")\n",
@@ -324,7 +324,7 @@
}
],
"source": [
"from optimum.intel.openvino import OVModelForQuestionAnswering\n",
"from optimum.intel import OVModelForQuestionAnswering\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"model = OVModelForQuestionAnswering.from_pretrained(\n",
@@ -529,7 +529,7 @@
],
"source": [
"from IPython.display import Audio\n",
"from optimum.intel.openvino import OVModelForAudioClassification\n",
"from optimum.intel import OVModelForAudioClassification\n",
"from transformers import AutoFeatureExtractor, pipeline\n",
"from datasets import load_dataset\n",
"\n",
@@ -638,7 +638,7 @@
}
],
"source": [
"from optimum.intel.openvino import OVModelForCausalLM\n",
"from optimum.intel import OVModelForCausalLM\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"model_id = \"helenai/gpt2-ov\"\n",
@@ -704,7 +704,7 @@
],
"source": [
"from IPython.display import Image\n",
"from optimum.intel.openvino import OVModelForImageClassification\n",
"from optimum.intel import OVModelForImageClassification\n",
"from transformers import AutoImageProcessor, pipeline\n",
"\n",
"model_id = \"helenai/microsoft-swin-tiny-patch4-window7-224-ov\"\n",
@@ -766,7 +766,7 @@
}
],
"source": [
"from optimum.intel.openvino import OVModelForMaskedLM\n",
"from optimum.intel import OVModelForMaskedLM\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"model_id = \"helenai/bert-base-uncased-ov\"\n",
@@ -835,7 +835,7 @@
}
],
"source": [
"from optimum.intel.openvino import OVModelForQuestionAnswering\n",
"from optimum.intel import OVModelForQuestionAnswering\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"# Load the model and tokenizer saved in Part 1 of this notebook. Or use the line below to load them from the hub\n",
@@ -890,7 +890,7 @@
}
],
"source": [
"from optimum.intel.openvino import OVModelForSeq2SeqLM\n",
"from optimum.intel import OVModelForSeq2SeqLM\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"model_id = \"helenai/t5-small-ov\"\n",
@@ -998,7 +998,7 @@
}
],
"source": [
"from optimum.intel.openvino import OVModelForSequenceClassification\n",
"from optimum.intel import OVModelForSequenceClassification\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"model_id = \"helenai/papluca-xlm-roberta-base-language-detection-ov\"\n",
@@ -1047,7 +1047,7 @@
}
],
"source": [
"from optimum.intel.openvino import OVModelForTokenClassification\n",
"from optimum.intel import OVModelForTokenClassification\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"model_id = \"helenai/dslim-bert-base-NER-ov-fp32\"\n",
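Every cell above follows the same pattern: load a model through an `OVModelForXxx` class from `optimum.intel`, pair it with the matching `transformers` preprocessor, and wrap both in a `pipeline`. A minimal, self-contained sketch of that pattern for question answering (the question and context strings are illustrative, not taken from the notebook):

```python
from optimum.intel import OVModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-distilled-squad"
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForQuestionAnswering.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa_pipeline(
    question="Which runtime executes the model?",
    context="Optimum Intel exports Transformers models to OpenVINO for inference.",
)
print(result)
```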
4 changes: 2 additions & 2 deletions notebooks/openvino/question_answering_quantization.ipynb
@@ -51,7 +51,7 @@
"import transformers\n",
"from evaluate import evaluator\n",
"from openvino.runtime import Core\n",
"from optimum.intel.openvino import OVModelForQuestionAnswering, OVQuantizer, OVQuantizationConfig, OVConfig\n",
"from optimum.intel import OVModelForQuestionAnswering, OVQuantizer, OVQuantizationConfig, OVConfig\n",
"from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline\n",
"\n",
"transformers.logging.set_verbosity_error()\n",
@@ -286,7 +286,7 @@
"**NOTE:** if you notice very low accuracy after post-training quantization, it is likely caused by an overflow issue which affects processors that do not contain VNNI (Vector Neural Network Instruction). NNCF has an `overflow_fix` option to address this. It will effectively use 7-bits for quantizing instead of 8-bits to prevent the overflow. To use this option, modify the code in the next cell to add an explicit quantization configuration, and set `overflow_fix` to `\"enable\"`:\n",
"\n",
"```\n",
"from optimum.intel.openvino import OVConfig, OVQuantizationConfig\n",
"from optimum.intel import OVConfig, OVQuantizationConfig\n",
"\n",
"ov_config = OVConfig(quantization_config=OVQuantizationConfig(overflow_fix=\"enable\")\n",
"quantizer = OVQuantizer.from_pretrained(model)\n",
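To make the overflow-fix note above concrete: a hedged sketch of the full flow, assuming `OVQuantizer.quantize` accepts `calibration_dataset`, `save_directory`, and `ov_config` keyword arguments (the checkpoint, preprocessing, and output directory are placeholders rather than the notebook's exact values):

```python
from optimum.intel import OVConfig, OVQuantizationConfig, OVQuantizer
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "distilbert-base-uncased-distilled-squad"  # placeholder checkpoint
model = AutoModelForQuestionAnswering.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# overflow_fix="enable" quantizes with 7-bit ranges to avoid saturation
# on processors without VNNI, as described in the note above.
ov_config = OVConfig(quantization_config=OVQuantizationConfig(overflow_fix="enable"))

quantizer = OVQuantizer.from_pretrained(model)
calibration_dataset = quantizer.get_calibration_dataset(
    "squad",
    preprocess_function=lambda ex: tokenizer(ex["question"], ex["context"], truncation=True, padding="max_length"),
    num_samples=100,
    dataset_split="train",
)
quantizer.quantize(
    calibration_dataset=calibration_dataset,
    save_directory="distilbert-squad-int8-ov",  # placeholder output directory
    ov_config=ov_config,  # assumed keyword for passing the overflow-fix configuration
)
```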
2 changes: 1 addition & 1 deletion notebooks/openvino/requirements.txt
@@ -1,4 +1,4 @@
optimum-intel[openvino, nncf]
optimum-intel[openvino]
datasets
evaluate[evaluator]
ipywidgets
2 changes: 1 addition & 1 deletion notebooks/openvino/stable_diffusion_optimization.ipynb
@@ -14,7 +14,7 @@
"metadata": {},
"outputs": [],
"source": [
"from optimum.intel.openvino import OVStableDiffusionPipeline\n",
"from optimum.intel import OVStableDiffusionPipeline\n",
"from diffusers.training_utils import set_seed\n",
"from IPython.display import display"
]
4 changes: 2 additions & 2 deletions optimum/intel/openvino/modeling_seq2seq.py
@@ -224,7 +224,7 @@

```python
>>> from transformers import {processor_class}
>>> from optimum.intel.openvino import {model_class}
>>> from optimum.intel import {model_class}
>>> from datasets import load_dataset

>>> processor = {processor_class}.from_pretrained("{checkpoint}")
@@ -241,7 +241,7 @@

```python
>>> from transformers import {processor_class}, pipeline
>>> from optimum.intel.openvino import {model_class}
>>> from optimum.intel import {model_class}
>>> from datasets import load_dataset

>>> processor = {processor_class}.from_pretrained("{checkpoint}")
4 changes: 2 additions & 2 deletions optimum/intel/openvino/quantization.py
@@ -235,7 +235,7 @@ def quantize(

Examples:
```python
>>> from optimum.intel.openvino import OVQuantizer, OVModelForCausalLM
>>> from optimum.intel import OVQuantizer, OVModelForCausalLM
>>> from transformers import AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b")
>>> quantizer = OVQuantizer.from_pretrained(model, task="text-generation")
@@ -245,7 +245,7 @@
```

```python
>>> from optimum.intel.openvino import OVQuantizer, OVModelForSequenceClassification
>>> from optimum.intel import OVQuantizer, OVModelForSequenceClassification
>>> from transformers import AutoModelForSequenceClassification
>>> model = OVModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english", export=True)
>>> # or