This repository has been archived by the owner on Jul 1, 2024. It is now read-only.

Pre gold dev merge (#174)
* Added support for oneDNN FusedBatchNormEx and FusedConv2D (#112)

* Added support for oneDNN FusedBatchNormEx and FusedConv2D

Co-authored-by: Cavus Mustafa <mustafa.cavus@intel.com>

* FusedDepthwiseConv2d and improved FusedConv2D support (#114)

* Upgraded TensorFlow to v2.5.0 (#111)

* added math_op to skip list for gpu

* changed tf build location in ci

* change abi1 tf links, add docker tf pip whls build patch

Co-authored-by: Suryaprakash Shanmugam <suryaprakash.shanmugam@intel.com>

* Lazy Fallback (Support for Dynamic Fallback) (#113)

* Initial implementation of dynamic fallback

* Error check fix for dynamic fallback

* Improved coverage for dynamic fallback

* Dynamic fallback bugfix; dynamic fallback disabled by default

* Fixed python api typo

* Code cleanup in executable.cc

* TF 1.x support for dynamic fallback

* Fix for freezing inference with dynamic fallback when inter-op parallelism is 1

* Memory leak fixes (#110)

* OCM submodule updated for fused ops (#115)

* Updated code for some checks (#104)

* Update build_utils.py

* add suggested checks (#119)

* checks in py files

* added exception catching for assert statement

* fixed uninitialized variable issue (#120)

* doc update (#123)

* installing opencv during venv setup

* typo fix; removed manual opencv installation

* Adding azure pipelines using docker (#122)

* Removed shell as True with subprocess pipe (#127)

* Initial TROUBLESHOOTING.md and updates on ARCHITECTURE.md (#126)

* GUIDE document added; ARCHITECTURE updated for dynamic fallback

* Filename change: GUIDE.md to TROUBLESHOOTING.md

* AWS, Azure and Colab classification notebook added (#125)

* Colab notebook added for object detection

* fixing path traversal issues (#129)

* Code changes for models support (#130)

* Added the following op translations and changes:
* NonMaxSuppressionV3, FusedConv2D with BiasAdd+LeakyRelu, FusedConv2D with BiasAdd+Add+Relu
* get_shape replaced with get_partial_shape where only rank is needed
* Deassign cluster logic modified for dynamic ops

* translation for fusedops with ELU added

* dynamic input condition handled for Fill op

* zero dim check updated

* Updated TopkV2 translation for zero dim inputs

* Fix for execution errors; Colab badge support added

* fix build error during tensorflow installation (#132)

* Fix for build error during tensorflow installation
Properly check cmake and gcc versions
Force re-install packages for them to take effect after every re-build

* OV2021.4 build support enabled (#131)

* Updated the code for OV2021.4

* OCM updated for OV2021_4

* Enabled OpenVINO 2021.4, but kept default build version as 2021.3 due to SetBlob issue

Co-authored-by: Suryaprakash Shanmugam <suryaprakash.shanmugam@intel.com>

* Fixed bug in FusedBatchNormEx translation (#134)

* Fix cmake and gcc version checks (#136)

* Fix cmake and gcc version checks

* review comments incorporated in colab notebooks

* Added Op translations and tests (#137)

* Added translation for Resize Nearest Neighbor, GatherNd and Round OP translations

* Added CropAndResize op and flip crop fix for CropAndResize

* Added Reverse, Reciprocal, BatchToSpaceND, SpaceToBatchND, and Elu

* Added TF Python tests for the above

* Handle corner case in TranslateBatchNDAndSpaceNDOp

* Add ops to gpu and myriad devices; update ocm

* Adjust tolerance precision for fusedMatMul

* Properly handle dynamic fallback flag; Print cluster summary

Co-authored-by: ck-intel <chandrakant.khandelwal@intel.com>

* Fetch and print OpenVINO version that OVTF was built against (#142)

* Fetch and print OpenVINO version that OVTF was built against

* Add import openvino_tensorflow to TF tests using find and replace instead of using patch (#141)

* Enable cluster fallback through static initializers; Change default summary log info (#146)

* Enable cluster fallback through static initializers; Change default summary log info

* OpenVINO 2021 4 Upgrade Docs, CI, and Tests (#145)

* Update OV version in docs, pipelines, and code

* Update UT lists; fix OpenVINO 2021.4 download links in Dockerfiles

* Fix ci random failures (#148)

* Documentation review changes

* Remove Code format check from Ubuntu-20 as clang-format-3.9 is not available in it

* Installation table updated with macOS entries (#153)

* Enable macOS Support (#133)

* Initial commit for macOS

* Merge commit '03379a2b92e0e11e94c6e6f709e3ee7770723f1a'

* add TF 2 dylib var

* fix virtualenv creation in py3

* Changing default OpenVINO version 2021.3 due to SetBlob issue

* identify python minor version for load_venv

* Adding the mac azure pipelines for ovtf project

* Updating mac yml files

* Fix CPU plugin issue, Fix unit test spawn issue for macOS

* add mkldnn lib for release and debug build types

* Update mac azure pipeline

* Update OV version in docs, pipelines, and code

* Update darwin tests list

* Rebased on pre_gold_dev; Remove build errors; Update tests list

* Enable VPU in OpenVINO, make on all cores

* Update ABI0 based build pipeline on macOS

* Add rpath to myriadPlugin and include myriad unit tests

* Add myriad UT list files

* Run myriad TF UT directly through tf_unittest_runner due to mutex error

* Use -march=native only for Linux

Co-authored-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
Co-authored-by: bhadur <bhadur.a.sm@intel.com>

* Supporting backends (#152)

* GPU_FP16 changes (#149)

* Device name check for GPU FP16

Co-authored-by: suryasidd <surya.siddharth.pemmaraju@intel.com>

* Export IR API and related functionality added (#150)

* Document text correction for export ir

* Fix TF1.x build failures that were due to Timestamp logic and dtype errors (#155)

* Conditional compilation for timestamp logic; Fix DType error in TF 1.x in ovtf_builder

* Models list updated (#151)

* Corrected link for NCF-1B model

* Devices name alignment in table

Co-authored-by: Chandrakant Khandelwal <chandrakant.khandelwal@intel.com>

* Fixed cmake flags issue and updated OV branch tags for previous released versions (#156)

* Upgrade OVTF Python Samples to TF 2.x (#135)

* added multi input support (image, video, camera)

* added python examples to cpu ci pipeline

* removed tf 1.15.2 from req

* api changes to tf2.5

* Python Inference Example Detection stage update

* removed python samples in ci stages

* tested camera input

* Adding TF1x samples folder

* TF 1.x yolo v3 model implementation to TF 2.x

* added the pip packages to requirements

* added assertion for input_file

* added python sample to ci pipeline

* added matplotlib as requirement

* added TF1 python sample to ci pipeline

* added requirements.txt

* added --input arg help message

* removed matplotlib installation

* added requirements.txt to python TF2 stage

* removed pip packages for ovtf samples

* added python3.6 for TF1.x yolov3 conversion

* bug fix for TF1.x python sample on MYRIAD

* explicitly added the backends

* python version check

* installing python3.6

* saving the .h5 keras format for yolov3 darknet

* added TF2.x API inference on .h5 keras format

* added TF2.X APIs for inference with .h5 model

* rollback to tf.compat.v1 for loading the graph

* Renamed to TF_1_x and removed the unused files.

* updated the TF1.x folder name to TF_1_x

* removed filtering for listing ovtf backends

* removing unused files

* renaming output model

* removed data folder in TF_1_x , using common utils

* default model name change

* Updated help message

* removed python TF1 OD samples

* assigned 0 for camera input

* updated CI pipelines for python samples

* assigned 0 for camera input

* Unit tests update (#116)

* TF unit test updates for all the devices

* Add Explicit Padding to Conv2D.

* Update TF unit test cases for macOS

Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>

* OVTF Mandarin documentation (#138)

* Updated OVTF Mandarin documentation

Co-authored-by: Chandrakant Khandelwal <chandrakant.khandelwal@intel.com>

* GPU core dump issue fixed (#159)

Co-authored-by: Mustafa Cavus <cavusmustafa@intel.com>

* Updated Mandarin readme documentation (#162)

* Samples README update and yolov3 anchors file addition (#157)

* added yolov3 anchors file

* added license and writing output detections

* changed python default version to python3

* releasing memory after backends_len call

* check backend updated and removed MIT license

Co-authored-by: pratiksha123507 <pratikshax.bapusaheb.vanse@intel.com>

* Update documentation for BUILD.md, INSTALL.md, and add requirements.txt (#158)

* Add dependency install steps for Ubuntu and macOS

* Add Bazelisk optional step

* Upgrade OVTF version across files

* Add separate instruction for PIP installations and update docs

* Fix networkx version as 2.5.1 is common supported version for both Ubuntu 18.04 and 20.04

* Updated the installation commands in Interactive Table

* Change PIL minimum required version

* Added INSTALL.md link

* Updated the build dependencies steps

Co-authored-by: chandrakant khandelwal <chandrakant.khandelwal@intel.com>
Co-authored-by: Ritesh Rajore <ritesh.kumar.rajore@intel.com>

* Updated readme and license; Update interactive installation table (#163)

Co-authored-by: Suryaprakash Shanmugam <suryaprakash.shanmugam@intel.com>

* ManyLinux2014 Wheel Generation (#165)

* Add patch file for manylinux

* Update manylinux2014 Dockerfile

* Add python 3.9

* Update versions in main.js

* Dynamic fallback output fix (#166)

* Dynamic fallback output issue fixed

* Added INT8 quantization support (#144)

* Added Translations and Transposes needed for Int8 Quantization

* Added documentation for INT8 quantization support

* Added CPU device condition for Relu6 and Constant Folding pass

* Updated the CPU condition

* Removed default enabling of constant folding on CPU

Co-authored-by: chandrakant khandelwal <chandrakant.khandelwal@intel.com>

* Version number changes in docs, code, and pipelines (#169)

* updated MODELS.md (#167)

* Upgrade to OV 2021.4.1 version (#170)

* Code changes for OV 2021.4.1 upgrade

* Updated docs for 2021.4.1

* Add changes to ABI1 whl generation Dockerfile

Co-authored-by: Suryaprakash Shanmugam <suryaprakash.shanmugam@intel.com>

* Updated docs based on review comments (#171)

* Document updates for GPU_FP16

* Update TROUBLESHOOTING.md

* Additional document updates on GPU_FP16

* Update AWS_instructions.md

* Update Azure_instructions.md

* Installation table intro modified

Co-authored-by: Yamini Nimmagadda <yamini.nimmagadda@intel.com>
Co-authored-by: Cavus Mustafa <mustafa.cavus@intel.com>
Co-authored-by: Sai Jayanthi <sai.jayanthi@intel.com>

* TF version updated in Colab notebooks; examples documentation updated (#168)

Co-authored-by: adisan3 <adityax.sanivarapu@intel.com>

* Update OCM submodule (#172)

* Export IR race condition issues are removed (#173)

* Pillow version update (#175)

* Updating readme for users using OVTF with CUDA capable GPUs (#177)

* Model doc files updated with new models (#178)

* Model doc files updated with 2 new models

* Added complete model support for Facenet and lm_1b

* Added warm-up inference run for samples (#179)

* Added warm-up for first inference

* Add Python3.9 Support for ABI1 Wheels (#176)

* Add support for Python 3.9 wheels in ABI1 builds

* Add python3.9-dev apt package

* Updated INT8 quantization documentation in MODELS.md (#181)

* Description for enabling oneDNN added to the README.md (#182)

* Updated MODELS_cn doc (#180)

* Update PyPi Documentation (#183)

* Updated models file (#186)

Co-authored-by: Chandrakant Khandelwal <chandrakant.khandelwal@intel.com>

* Update Mandarin Documentation and DevCloud Links (#185)

* Add updated Mandarin documentation

Co-authored-by: Suryaprakash Shanmugam <suryaprakash.shanmugam@intel.com>

* Update mandarin readme and models file (#187)

* Print TF version instead of git version (#188)

Co-authored-by: Yamini Nimmagadda <yamini.nimmagadda@intel.com>
Co-authored-by: Cavus Mustafa <mustafa.cavus@intel.com>
Co-authored-by: mohdansx <mohdx.ansari@intel.com>
Co-authored-by: Suryaprakash Shanmugam <suryaprakash.shanmugam@intel.com>
Co-authored-by: Sai Jayanthi <sai.jayanthi@intel.com>
Co-authored-by: adisan3 <adityax.sanivarapu@intel.com>
Co-authored-by: bhadur <bhadur.a.sm@intel.com>
Co-authored-by: Ambarish Das <ambarish.das@intel.com>
Co-authored-by: Surya Siddharth Pemmaraju <surya.siddharth.pemmaraju@intel.com>
Co-authored-by: pratiksha123507 <pratikshax.bapusaheb.vanse@intel.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: Mustafa Cavus <cavusmustafa@intel.com>
Co-authored-by: Ritesh Rajore <ritesh.kumar.rajore@intel.com>
14 people authored Sep 27, 2021
1 parent 4d4f0d2 commit 70d6c1c
Showing 119 changed files with 8,181 additions and 2,557 deletions.
12 changes: 10 additions & 2 deletions CMakeLists.txt
@@ -124,7 +124,11 @@ if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
endif()

if(APPLE)
set(LIBNGRAPH "libngraph.dylib")
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
set(LIBNGRAPH "libngraphd.dylib")
else()
set(LIBNGRAPH "libngraph.dylib")
endif()
else()
set(LIBNGRAPH "libngraph.so")
endif(APPLE)
@@ -170,7 +174,11 @@ message(STATUS "UNIT_TEST_ENABLE: ${UNIT_TEST_ENABLE}")
message(STATUS "OPENVINO_ARTIFACTS_DIR: ${OPENVINO_ARTIFACTS_DIR}")
message(STATUS "USE_PRE_BUILT_OPENVINO: ${USE_PRE_BUILT_OPENVINO}")
message(STATUS "OPENVINO_VERSION: ${OPENVINO_VERSION}")
if (${OPENVINO_VERSION} MATCHES "2021.3")
if (${OPENVINO_VERSION} MATCHES "2021.4.1")
add_definitions(-DOPENVINO_2021_4_1=1)
elseif (${OPENVINO_VERSION} MATCHES "2021.4")
add_definitions(-DOPENVINO_2021_4=1)
elseif (${OPENVINO_VERSION} MATCHES "2021.3")
add_definitions(-DOPENVINO_2021_3=1)
elseif (${OPENVINO_VERSION} MATCHES "2021.2")
add_definitions(-DOPENVINO_2021_2=1)
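
The `elseif` chain in the CMakeLists.txt hunk above depends on ordering: CMake's `MATCHES` performs regex/substring matching, so `"2021.4.1"` also matches the pattern `"2021.4"`, and the more specific version must be tested first. A minimal Python sketch of the same ordering logic (the function name is illustrative, not part of the build scripts):

```python
def openvino_macro(version: str) -> str:
    """Map an OpenVINO version string to its compile definition.

    Order matters: "2021.4.1" contains the substring "2021.4", so the
    more specific pattern must be checked first, mirroring the elseif
    chain in CMakeLists.txt.
    """
    ordered = [
        ("2021.4.1", "OPENVINO_2021_4_1"),
        ("2021.4", "OPENVINO_2021_4"),
        ("2021.3", "OPENVINO_2021_3"),
        ("2021.2", "OPENVINO_2021_2"),
    ]
    for pattern, macro in ordered:
        if pattern in version:
            return macro
    raise ValueError(f"unsupported OpenVINO version: {version}")

print(openvino_macro("2021.4.1"))  # OPENVINO_2021_4_1
print(openvino_macro("2021.4"))    # OPENVINO_2021_4
```

Checking `2021.4` before `2021.4.1` would silently define the wrong macro for 2021.4.1 builds.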
59 changes: 34 additions & 25 deletions README.md
@@ -1,46 +1,48 @@
<p>English | <a href="./README_cn.md">简体中文</a></p>

<p align="center">
<img src="images/openvino_wbgd.png">
</p>

# **OpenVINO™ integration with TensorFlow (Preview Release)**
# **OpenVINO™ integration with TensorFlow**

This repository contains the source code of **OpenVINO™ integration with TensorFlow**, a product that delivers [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations and runtime needed for an enhanced level of TensorFlow compatibility. It is designed for developers who want to get started with OpenVINO™ in their inferencing applications to enhance inferencing performance with minimal code modifications. **OpenVINO™ integration with TensorFlow** accelerates inference across many [AI models](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/docs/MODELS.md) on a variety of Intel<sup>®</sup> silicon such as:
This repository contains the source code of **OpenVINO™ integration with TensorFlow**, designed for TensorFlow* developers who want to get started with [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) in their inferencing applications. This product delivers [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations which enhance inferencing performance with minimal code modifications. **OpenVINO™ integration with TensorFlow** accelerates inference across many [AI models](docs/MODELS.md) on a variety of Intel<sup>®</sup> silicon such as:
- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units - referred as VPU
- Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred as VAD-M or HDDL
- Intel<sup>®</sup> Movidius™ Vision Processing Units - referred to as VPU
- Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend going beyond this component to adopt native OpenVINO™ APIs and its runtime.]
[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt the native OpenVINO™ APIs and runtime.]

## Installation
### Prerequisites

- Ubuntu 18.04, 20.04
- Python 3.6, 3.7, or 3.8
- TensorFlow v2.5.0
- Ubuntu 18.04, 20.04 or macOS 11.2.3
- Python* 3.6, 3.7, 3.8 or 3.9
- TensorFlow* v2.5.1

Check our [Interactive Installation Table](https://openvinotoolkit.github.io/openvino_tensorflow/) for a menu of installation options. The table will help you configure the installation process.

### Install **OpenVINO™ integration with TensorFlow** alongside PyPi TensorFlow

This **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ version 2021.3 meaning you don't have to install OpenVINO™ separately. This package supports:
The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ version 2021.4.1. The users do not have to install OpenVINO™ separately. This package supports:
- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)


pip3 install -U pip==21.0.1
pip3 install -U tensorflow==2.5.0
pip3 install openvino-tensorflow
pip3 install pip==21.0.1
pip3 install tensorflow==2.5.1
pip3 install -U openvino-tensorflow


If you want to leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](docs/BUILD.md#install-openvino-integration-with-tensorflow-alongside-the-intel-distribution-of-openvino-toolkit).
To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](docs/INSTALL.md#12-install-openvino-integration-with-tensorflow-alongside-the-intel-distribution-of-openvino-toolkit).

For more details on other modes of installation, please refer to [BUILD.md](docs/BUILD.md)
For more details on installation, please refer to [INSTALL.md](docs/INSTALL.md); for build-from-source options, please refer to [BUILD.md](docs/BUILD.md)

## Configuration

Once you've installed **OpenVINO™ integration with TensorFlow**, you can use TensorFlow to run inference using a trained model.
Once you've installed **OpenVINO™ integration with TensorFlow**, you can use TensorFlow* to run inference using a trained model.

For the best results, it is advised to enable [oneDNN Deep Neural Network Library (oneDNN)](https://github.com/oneapi-src/oneDNN) by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=1`.
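
TensorFlow reads `TF_ENABLE_ONEDNN_OPTS` when the module is first imported, so the variable has to be set before the `import` statement runs; a minimal sketch:

```python
import os

# Set before importing tensorflow: TF picks this variable up at import time,
# so exporting it afterwards has no effect on the current process.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# import tensorflow as tf  # oneDNN optimizations are now active
print(os.environ["TF_ENABLE_ONEDNN_OPTS"])  # 1
```

Equivalently, export the variable in the shell before launching Python (`export TF_ENABLE_ONEDNN_OPTS=1`).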

To see if **OpenVINO™ integration with TensorFlow** is properly installed, run

@@ -49,27 +51,31 @@ To see if **OpenVINO™ integration with TensorFlow** is properly installed, run

This should produce an output like:

TensorFlow version: 2.5.0
OpenVINO integration with TensorFlow version: b'0.5.0'
OpenVINO version used for this build: b'2021.3'
TensorFlow version used for this build: v2.5.0
TensorFlow version: 2.5.1
OpenVINO integration with TensorFlow version: b'1.0.0'
OpenVINO version used for this build: b'2021.4.1'
TensorFlow version used for this build: v2.5.1
CXX11_ABI flag used for this build: 0
OpenVINO integration with TensorFlow built with Grappler: False

By default, Intel<sup>®</sup> CPU is used to run inference. However, you can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.

openvino_tensorflow.set_backend('<backend_name>')

Supported backends include 'CPU', 'GPU', 'MYRIAD', and 'VAD-M'.
Supported backends include 'CPU', 'GPU', 'GPU_FP16', 'MYRIAD', and 'VAD-M'.

To determine what processing units are available on your system for inference, use the following function:

openvino_tensorflow.list_backends()
For more API calls and environment variables, see [USAGE.md](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/docs/USAGE.md).
For more API calls and environment variables, see [USAGE.md](docs/USAGE.md).
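
A small illustrative helper (hypothetical, not part of the `openvino_tensorflow` API) showing how the result of `list_backends()` might be combined with `set_backend()` to fall back safely when a requested backend is unavailable:

```python
# Backends documented for this release; 'GPU_FP16' is the FP16 GPU variant.
SUPPORTED = ["CPU", "GPU", "GPU_FP16", "MYRIAD", "VAD-M"]

def pick_backend(requested, available):
    """Return `requested` if it is supported and present, else fall back to 'CPU'."""
    if requested in SUPPORTED and requested in available:
        return requested
    return "CPU"

# In a real script, `available` would come from openvino_tensorflow.list_backends()
# and the result would be passed to openvino_tensorflow.set_backend().
print(pick_backend("MYRIAD", ["CPU", "MYRIAD"]))  # MYRIAD
print(pick_backend("VAD-M", ["CPU"]))             # CPU
```

Falling back to 'CPU' mirrors the library's own default backend choice.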

[Note: If a CUDA capable device is present in the system then set the environment variable CUDA_VISIBLE_DEVICES to -1]

## Examples

To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples) directory.
To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples](./examples) directory.

## Try it on Intel<sup>®</sup> DevCloud
Sample tutorials are also hosted on [Intel<sup>®</sup> DevCloud](https://software.intel.com/content/www/us/en/develop/tools/devcloud/edge/build/ovtfoverview.html). The demo applications are implemented using Jupyter Notebooks. You can interactively execute them on Intel<sup>®</sup> DevCloud nodes and compare the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.

## License
**OpenVINO™ integration with TensorFlow** is licensed under [Apache License Version 2.0](LICENSE).
@@ -88,3 +94,6 @@ We welcome community contributions to **OpenVINO™ integration with TensorFlow*
* Submit a [pull request](https://github.com/openvinotoolkit/openvino_tensorflow/pulls).

We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for your feature. Upon our verification of your pull request, we will merge it to the repository provided that the pull request has met the above mentioned requirements and proved acceptable.

---
\* Other names and brands may be claimed as the property of others.
96 changes: 96 additions & 0 deletions README_cn.md
@@ -0,0 +1,96 @@
[English](./README.md) | Simplified Chinese

<p align="center">
<img src="images/openvino_wbgd.png">
</p>

# **OpenVINO™ integration with TensorFlow**

This repository contains the source code of **OpenVINO™ integration with TensorFlow**, a product that delivers the [OpenVINO™](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) inline optimizations and runtime needed for significantly enhanced TensorFlow compatibility. It is designed for developers who want to use OpenVINO™ in their inference applications and boost inference performance with minimal code changes. **OpenVINO™ integration with TensorFlow** accelerates inference of [AI models](docs/MODELS_cn.md) on a variety of Intel<sup>®</sup> silicon, such as:

- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)
- Intel<sup>®</sup> Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs (referred to as VAD-M or HDDL)

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend adopting the native OpenVINO™ APIs and runtime.]

## Installation
### Prerequisites

- Ubuntu 18.04, 20.04 or macOS 11.2.3
- Python* 3.6, 3.7, 3.8 or 3.9
- TensorFlow* v2.5.1

Check our [Interactive Installation Table](https://openvinotoolkit.github.io/openvino_tensorflow/) for a menu of installation options. The table will guide you through the installation process.

The **OpenVINO™ integration with TensorFlow** package comes with pre-built libraries of OpenVINO™ 2021.4.1, which means you do not have to install OpenVINO™ separately. This package supports:
- Intel<sup>®</sup> CPUs
- Intel<sup>®</sup> integrated GPUs
- Intel<sup>®</sup> Movidius™ Vision Processing Units (VPUs)


pip3 install pip==21.0.1
pip3 install tensorflow==2.5.1
pip3 install -U openvino-tensorflow


To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install [**OpenVINO™ integration with TensorFlow** alongside the Intel® Distribution of OpenVINO™ Toolkit](docs/INSTALL_cn.md#12-install-openvino-integration-with-tensorflow-alongside-the-intel-distribution-of-openvino-toolkit).

For more details on installation, please refer to [INSTALL.md](docs/INSTALL_cn.md); for build-from-source options, please refer to [BUILD.md](docs/BUILD_cn.md)

## Configuration

Once you have installed **OpenVINO™ integration with TensorFlow**, you can use TensorFlow* to run inference on a trained model.

For best results, it is advised to enable the [oneDNN Deep Neural Network Library (oneDNN)](https://github.com/oneapi-src/oneDNN) by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=1`.

To check whether **OpenVINO™ integration with TensorFlow** is installed correctly, run

python3 -c "import tensorflow as tf; print('TensorFlow version: ',tf.__version__);\
import openvino_tensorflow; print(openvino_tensorflow.__version__)"

This should produce an output like:

TensorFlow version: 2.5.1
OpenVINO integration with TensorFlow version: b'1.0.0'
OpenVINO version used for this build: b'2021.4.1'
TensorFlow version used for this build: v2.5.1
CXX11_ABI flag used for this build: 0

By default, the Intel<sup>®</sup> CPU is used to run inference. You can change the default option to either Intel<sup>®</sup> integrated GPU or Intel<sup>®</sup> VPU for AI inferencing. Invoke the following function to change the hardware on which inferencing is done.

openvino_tensorflow.set_backend('<backend_name>')

Supported backends include 'CPU', 'GPU', 'GPU_FP16', 'MYRIAD', and 'VAD-M'.

To determine which processing units on your system are available for inference, use the following function:

openvino_tensorflow.list_backends()
For more API calls and environment variables, see [USAGE.md](docs/USAGE_cn.md)

[Note: If a CUDA capable device is present in the system, set the environment variable CUDA_VISIBLE_DEVICES to -1]

## Examples

To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples](./examples) directory.

Sample tutorials are also hosted on [Intel<sup>®</sup> DevCloud for the Edge](https://software.intel.com/content/www/us/en/develop/tools/devcloud/edge/build/ovtfoverview.html). The demo applications are implemented using Jupyter Notebooks. You can execute them on Intel<sup>®</sup> DevCloud nodes and compare the performance of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.

## License
**OpenVINO™ integration with TensorFlow** is licensed under the [Apache License Version 2.0](LICENSE). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

## Support

Submit your questions, feature requests, and bug reports via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).

## How to contribute

We welcome community contributions to **OpenVINO™ integration with TensorFlow**. If you have an idea for an improvement:

* Share your proposal via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).
* Submit a [pull request](https://github.com/openvinotoolkit/openvino_tensorflow/pulls).

We will review your contribution as quickly as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before contributing, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for it. Once your pull request is verified, we will merge it into the repository provided it meets the above requirements and is accepted.
---
\* Other names and brands may be claimed as the property of others.
4 changes: 2 additions & 2 deletions build_ov.py
@@ -11,7 +11,7 @@


def main():
openvino_version = "releases/2021/3"
openvino_version = "2021.4.1"
build_dir = 'build_cmake'
cxx_abi = "1"
print("openVINO version: ", openvino_version)
@@ -98,4 +98,4 @@ def main():
# ./build_ovtf.py --use_openvino_from_location /prebuilt/ov/dir/artifacts/openvino
# cd ..; mkdir ovtf_2; cd ovtf_2
# git clone https://github.com/openvinotoolkit/openvino_tensorflow.git
# ./build_ovtf.py --use_openvino_from_location /prebuilt/ov/dir/artifacts/openvino
# ./build_ovtf.py --use_openvino_from_location /prebuilt/ov/dir/artifacts/openvino