Update workflow to run example tests
helena-intel committed Apr 24, 2024
1 parent 88139d3 commit ba106c2
Showing 4 changed files with 15 additions and 14 deletions.
20 changes: 10 additions & 10 deletions .github/workflows/test_openvino_examples.yml
@@ -7,11 +7,11 @@ on:
  push:
    paths:
      - '.github/workflows/test_openvino_examples.yml'
-      - 'examples/openvino/*'
+      - 'examples/openvino/**'
  pull_request:
    paths:
      - '.github/workflows/test_openvino_examples.yml'
-      - 'examples/openvino/*'
+      - 'examples/openvino/**'

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -22,9 +22,9 @@ jobs:
    strategy:
      fail-fast: false
      matrix:
-        python-version: ["3.8", "3.10"]
+        python-version: ["3.8", "3.11"]

-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-22.04

    steps:
      - uses: actions/checkout@v2
@@ -35,12 +35,12 @@ jobs:

      - name: Install dependencies
        run: |
-          pip install optimum[openvino] jstyleson nncf pytest
-          pip install -r examples/openvino/audio-classification/requirements.txt
-          pip install -r examples/openvino/image-classification/requirements.txt
-          pip install -r examples/openvino/question-answering/requirements.txt
-          pip install -r examples/openvino/text-classification/requirements.txt
+          pip install .[openvino] jstyleson pytest
+          pip install -r examples/openvino/audio-classification/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
+          pip install -r examples/openvino/image-classification/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
+          pip install -r examples/openvino/question-answering/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
+          pip install -r examples/openvino/text-classification/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
      - name: Test examples
        run: |
-          python -m pytest examples/openvino/test_examples.py
+          python -m pytest examples/openvino/test_examples.py
4 changes: 2 additions & 2 deletions .github/workflows/test_openvino_notebooks.yml
@@ -23,9 +23,9 @@ jobs:
    strategy:
      fail-fast: false
      matrix:
-        python-version: ["3.8", "3.10"]
+        python-version: ["3.8", "3.11"]

-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-22.04

    steps:
      - uses: actions/checkout@v2
3 changes: 2 additions & 1 deletion examples/openvino/question-answering/README.md
@@ -13,7 +13,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
-# Question answering
+# Question nswering

This folder contains [`run_qa.py`](https://github.com/huggingface/optimum/blob/main/examples/openvino/question-answering/run_qa.py), a script to fine-tune a 🤗 Transformers model on a question answering dataset while applying quantization aware training (QAT). QAT can be easily applied by replacing the Transformers [`Trainer`](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#trainer) with the Optimum [`OVTrainer`].
A `QuestionAnsweringOVTrainer` is defined in [`trainer_qa.py`](https://github.com/huggingface/optimum/blob/main/examples/openvino/question-answering/trainer_qa.py); it inherits from `OVTrainer` and is adapted to perform evaluation for question answering tasks.
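To make the `Trainer` → `OVTrainer` swap concrete, below is a minimal sketch. It assumes the keyword arguments (`ov_config`, `task`) match the ones used by the example scripts around the time of this commit; the tiny dummy dataset only keeps the snippet self-contained, whereas `run_qa.py` builds real SQuAD features.

```python
from datasets import Dataset
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, TrainingArguments
from optimum.intel import OVConfig, OVTrainer  # OVTrainer is used in place of transformers.Trainer

model_id = "bert-base-uncased"
model = AutoModelForQuestionAnswering.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tiny dummy dataset so the sketch is runnable on its own; run_qa.py prepares real SQuAD features instead.
enc = tokenizer(["What is QAT?"], ["QAT is quantization aware training."],
                truncation=True, padding="max_length", max_length=64)
dummy = Dataset.from_dict({**dict(enc), "start_positions": [8], "end_positions": [11]})

trainer = OVTrainer(
    model=model,
    args=TrainingArguments(output_dir="qat-bert-qa", num_train_epochs=1),
    train_dataset=dummy,
    eval_dataset=dummy,
    tokenizer=tokenizer,
    ov_config=OVConfig(),       # assumption: the default config enables 8-bit quantization-aware training
    task="question-answering",  # assumption: keyword name as used in the example scripts
)
trainer.train()                 # fine-tunes while NNCF simulates quantization during training
```

Apart from the trainer class and the two extra keyword arguments, the training loop is driven exactly like a regular `transformers.Trainer`.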
@@ -47,6 +47,7 @@ python run_qa.py \
```

### Joint Pruning, Quantization and Distillation (JPQD) for BERT on SQuAD1.0

`OVTrainer` also provides an advanced optimization workflow through NNCF, in which the Transformer model can be structurally pruned along with 8-bit quantization and distillation. Below is an example that demonstrates how to jointly prune and quantize BERT-base for SQuAD 1.0 using an NNCF config passed via `--nncf_compression_config`, while distilling from a BERT-large teacher. This example closely resembles the movement sparsification work of [Lagunas et al., 2021, Block Pruning For Faster Transformers](https://arxiv.org/pdf/2109.04838.pdf). It takes about 12 hours on a single V100 GPU, and ~40% of the weights of the Transformer blocks are pruned. To launch the script on multiple GPUs, specify `--nproc-per-node=<number of GPUs>`. Note that a different batch size and other hyperparameters might be required to achieve the same results as on a single GPU.

For more on how to configure movement sparsity, see the NNCF documentation [here](https://github.com/openvinotoolkit/nncf/blob/develop/nncf/experimental/torch/sparsity/movement/MovementSparsity.md).
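For orientation, here is a rough sketch of how those pieces come together at the API level, under the assumption that the keyword names (`teacher_model`, `ov_config=OVConfig(compression=...)`, `task`) follow the example scripts. `jpqd_config.json` is a placeholder file name, the teacher checkpoint is only an example, and the dataset variables stand in for the SQuAD features built by `run_qa.py`.

```python
import json

from transformers import AutoModelForQuestionAnswering, TrainingArguments
from optimum.intel import OVConfig, OVTrainer

# NNCF compression config normally passed to run_qa.py via --nncf_compression_config;
# "jpqd_config.json" is a placeholder name for a movement-sparsity + quantization config.
with open("jpqd_config.json") as f:
    compression = json.load(f)

student = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
teacher = AutoModelForQuestionAnswering.from_pretrained(
    "bert-large-uncased-whole-word-masking-finetuned-squad"  # example BERT-large teacher
)

trainer = OVTrainer(
    model=student,
    teacher_model=teacher,                        # assumption: enables distillation from the teacher
    args=TrainingArguments(output_dir="jpqd-bert-squad"),
    train_dataset=train_dataset,                  # placeholders: SQuAD features prepared as in run_qa.py
    eval_dataset=eval_dataset,
    ov_config=OVConfig(compression=compression),  # assumption: joint pruning + 8-bit quantization settings
    task="question-answering",
)
trainer.train()
```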
2 changes: 1 addition & 1 deletion notebooks/openvino/requirements.txt
@@ -1,4 +1,4 @@
-optimum-intel[openvino, nncf]
+optimum-intel[openvino]
datasets
evaluate[evaluator]
ipywidgets
