Commit 4aab7f7 (1 parent: 16cba30) — 79 changed files, 3,141 additions, 0 deletions.
# Deploy examples on FVP simulation environment

This repository is for building and deploying examples on the FVP simulation environment.
The examples include:
- [Person detection example without vela](#build-with-person-detection-tflite-model-without-passing-vela):
  - Input image size: 96 x 96 x 1 (monochrome)
  - Uses the Google person detection example model, without passing it through vela, to run inference on the Cortex-M55.
- [How to use the HIMAX config file to generate a vela model](#how-to-use-himax-config-file-to-generate-vela-model)
- [Person detection example running inference on the Ethos-U55 NPU](#build-with-person-detection-tflite-model-passing-vela):
  - Input image size: 96 x 96 x 1 (monochrome)
  - Uses the Google person detection example model, passed through vela, to run inference on the Ethos-U55 NPU.
- [Yolo Fastest object detection example](#build-with-yolo-fastest-object-detection-tflite-model-passing-vela):
  - Input image size: 256 x 256 x 3 (RGB)
  - We only release the model that passes himax_vela.ini (Ethos-U55 64-MACs configuration).
  - Inference can be run on images captured by our own HIMAX 01B0 sensor.
- [Yolo Fastest XL object detection example](#build-with-yolo-fastest-xl-object-detection-tflite-model-passing-vela):
  - Input image size: 256 x 256 x 3 (RGB)
  - We only release the model that passes himax_vela.ini (Ethos-U55 64-MACs configuration).
  - Inference can be run on images captured by our own HIMAX 01B0 sensor.

To run evaluations using this software, we suggest an Ubuntu 20.04 LTS environment.
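As a quick sanity check on the input sizes above (my own arithmetic, not code from the examples), the raw uint8 input tensor sizes work out as follows; the RGB case is exactly 192 KB, matching the size of the 192 KB sample `.bmp` files added in this commit:

```python
# Raw uint8 tensor sizes for the two input shapes used by these examples
# (my own arithmetic, not part of the evaluation kit).
def input_bytes(width: int, height: int, channels: int) -> int:
    """Bytes needed for one uint8 image tensor of the given shape."""
    return width * height * channels

person_detect = input_bytes(96, 96, 1)   # monochrome person detection input
yolo_input = input_bytes(256, 256, 3)    # RGB Yolo Fastest / XL input

print(person_detect)   # 9216 bytes (9 KB)
print(yolo_input)      # 196608 bytes (exactly 192 KB)
```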
## Prerequisites
Install the toolkits listed below.
- Necessary packages:
```
sudo apt-get update
sudo apt-get install cmake
sudo apt-get install curl
sudo apt install xterm
sudo apt install python3
sudo apt install python3.8-venv
```
- Corstone SSE-300 FVP: aligned with the Arm MPS3 development platform; includes both the Cortex-M55 and Ethos-U55 processors.
```
# Fetch the Corstone SSE-300 FVP
wget https://developer.arm.com/-/media/Arm%20Developer%20Community/Downloads/OSS/FVP/Corstone-300/MPS3/FVP_Corstone_SSE-300_Ethos-U55_11.14_24.tgz
```

```
# Create a folder and extract the archive into it
mkdir temp
tar -C temp -xvzf FVP_Corstone_SSE-300_Ethos-U55_11.14_24.tgz
```

```
# Execute the self-install script
temp/FVP_Corstone_SSE-300_Ethos-U55.sh --i-agree-to-the-contained-eula --no-interactive -d CS300FVP
```

- GNU Arm Embedded Toolchain 10-2020-q4-major is the only version that supports the Cortex-M55.
```
# Fetch the Arm GCC toolchain
wget https://developer.arm.com/-/media/Files/downloads/gnu-rm/10-2020q4/gcc-arm-none-eabi-10-2020-q4-major-x86_64-linux.tar.bz2
# Extract the archive
tar -xjf gcc-arm-none-eabi-10-2020-q4-major-x86_64-linux.tar.bz2
# Add gcc-arm-none-eabi/bin to the PATH environment variable
export PATH="${PATH}:/[location of your GCC_ARM_NONE_EABI_TOOLCHAIN_ROOT]/gcc-arm-none-eabi/bin"
```
- Arm ML embedded evaluation kit: Machine Learning (ML) applications targeted at the Arm Cortex-M55 and Arm Ethos-U55 NPU. We use the kit to run the person detection FVP example.
```
# Fetch the Arm ML embedded evaluation kit
wget https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ml-embedded-evaluation-kit/+archive/refs/tags/22.02.tar.gz
mkdir ml-embedded-evaluation-kit
tar -C ml-embedded-evaluation-kit -xvzf 22.02.tar.gz
cp -r download_dependencies.py ./ml-embedded-evaluation-kit/
cd ml-embedded-evaluation-kit/
rm -rf ./dependencies
python3 ./download_dependencies.py
./build_default.py --npu-config-name ethos-u55-64
# Go back out of the ml-embedded-evaluation-kit folder and copy the example resources into the kit
cd ..
cp -r ./resources/img_person_detect ./ml-embedded-evaluation-kit/resources
cp -r ./source/use_case/img_person_detect ./ml-embedded-evaluation-kit/source/use_case
cp -r ./vela/img_person_detect ./ml-embedded-evaluation-kit/resources_downloaded/
cp -r ./resources/img_yolofastest_relu6_256_himax ./ml-embedded-evaluation-kit/resources
cp -r ./source/use_case/img_yolofastest_relu6_256_himax ./ml-embedded-evaluation-kit/source/use_case
cp -r ./vela/img_yolofastest_relu6_256_himax ./ml-embedded-evaluation-kit/resources_downloaded/
cp -r ./resources/img_yolofastest_xl_relu6_256_himax ./ml-embedded-evaluation-kit/resources
cp -r ./source/use_case/img_yolofastest_xl_relu6_256_himax ./ml-embedded-evaluation-kit/source/use_case
cp -r ./vela/img_yolofastest_xl_relu6_256_himax ./ml-embedded-evaluation-kit/resources_downloaded/
```
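The nine `cp -r` commands above repeat one pattern per use case. A hypothetical Python helper could replay them; the paths and use-case names are taken from this README, but `copy_use_case` itself is my own sketch, not part of the kit:

```python
# Hypothetical helper replaying the cp -r steps above for each use case.
# Paths mirror this README's layout; adjust them to your checkout.
import shutil
from pathlib import Path

USE_CASES = [
    "img_person_detect",
    "img_yolofastest_relu6_256_himax",
    "img_yolofastest_xl_relu6_256_himax",
]

def copy_use_case(repo_root: Path, kit_root: Path, name: str) -> None:
    """Copy resources, sources, and vela models for one use case into the kit."""
    pairs = [
        (repo_root / "resources" / name, kit_root / "resources" / name),
        (repo_root / "source" / "use_case" / name, kit_root / "source" / "use_case" / name),
        (repo_root / "vela" / name, kit_root / "resources_downloaded" / name),
    ]
    for src, dst in pairs:
        # dirs_exist_ok makes re-runs idempotent (Python 3.8+)
        shutil.copytree(src, dst, dirs_exist_ok=True)
```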
## Build with person detection tflite model without passing vela
- Go into the ml-embedded-evaluation-kit folder:
```
cd ml-embedded-evaluation-kit
```
- First, create the build folder and go into it:
```
mkdir build_img_person_detect && cd build_img_person_detect
```
- Second, configure the person detection example with ETHOS_U_NPU_ENABLED set to OFF, so it runs on the Cortex-M55 only:
```
cmake ../ -DUSE_CASE_BUILD=img_person_detect -DETHOS_U_NPU_ENABLED=OFF
```
- Finally, compile the person detection example:
```
make -j4
```
## Run with person detection tflite model without passing vela and run inference with the Cortex-M55 only
- Go back out to the ML_FVP_EVALUATION folder:
```
cd ../../
```
- Run with the command below:
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ml-embedded-evaluation-kit/build_img_person_detect/bin/ethos-u-img_person_detect.axf
```
- You will see the FVP telnet terminal result below:
  - Start inference:
    - You will see the input size and the tflite ops on the telnet terminal.

    

  - Run inference:
    - Key in `1` on the telnet terminal to start inference on the first image with the Cortex-M55 only. You can see the NPU cycle count is 0.

    

    - You will then see the input image on the screen.

    
## How to use HIMAX config file to generate vela model
- Go into the vela folder:
```
cd vela
```
- Install the necessary package:
```
pip install ethos-u-vela
```
- Run vela with the HIMAX config ini file (macs=64) and the person detection example tflite model:
```
vela --accelerator-config ethos-u55-64 --config himax_vela.ini --system-config My_Sys_Cfg --memory-mode My_Mem_Mode_Parent --output-dir ./img_person_detect ./img_person_detect/person_int8_model.tflite
```
- You will see the vela report on the terminal:

  
## Build with person detection tflite model passing vela
- Go into the ml-embedded-evaluation-kit folder:
```
cd ml-embedded-evaluation-kit
```
- First, create the build folder and go into it:
```
mkdir build_img_person_detect_npu && cd build_img_person_detect_npu
```
- Second, configure the person detection example with ETHOS_U_NPU_ENABLED set to ON, so it runs on the Cortex-M55 and the Ethos-U55 NPU:
```
cmake ../ -DUSE_CASE_BUILD=img_person_detect -DETHOS_U_NPU_ENABLED=ON
```
- Compile the person detection example:
```
make -j4
```
## Run with person detection tflite model passing vela and run inference using the Ethos-U55 NPU
- Go back out to the ML_FVP_EVALUATION folder:
```
cd ../../
```
- Run with the command below:
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_person_detect_npu/bin/ethos-u-img_person_detect.axf
```
Be careful with the `ethosu.num_macs` value in this command. If it does not match the MACs number the vela model was compiled for, the invoke will fail.
- You will see the FVP telnet terminal result below:
  - Start inference:
    - You will see the input size and MACs size on the telnet terminal.
    - The tflite ops have been mapped to the ethos-u op.

    

  - Run inference:
    - Key in `1` on the telnet terminal to start inference on the first image with the Ethos-U55 NPU.

    

    - You will then see the input image on the screen.

    
## Build with Yolo Fastest object detection tflite model passing vela
- Go into the ml-embedded-evaluation-kit folder:
```
cd ml-embedded-evaluation-kit
```
- First, create the build folder and go into it:
```
mkdir build_img_yolofastest_relu6_256_himax_npu && cd build_img_yolofastest_relu6_256_himax_npu
```
- Second, configure the Yolo Fastest object detection example with ETHOS_U_NPU_ENABLED set to ON, so inference runs only on the Ethos-U55 NPU:
```
cmake ../ -DUSE_CASE_BUILD=img_yolofastest_relu6_256_himax -DETHOS_U_NPU_ENABLED=ON
```
- Compile the Yolo Fastest object detection example:
```
make -j4
```
## Run with Yolo Fastest object detection tflite model and run inference using only the Ethos-U55 NPU
- Go back out to the ML_FVP_EVALUATION folder:
```
cd ../../
```
- Run with the command below:
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_yolofastest_relu6_256_himax_npu/bin/ethos-u-img_yolofastest_relu6_256_himax.axf
```
Be careful with the `ethosu.num_macs` value in this command. If it does not match the MACs number the vela model was compiled for, the invoke will fail.
- You will see the FVP telnet terminal result below:
  - Start inference:
    - You will see the input size, output tensor size, and MACs size on the telnet terminal.
    - The tflite ops have been mapped to the ethos-u op.

    

  - Run inference:
    - Key in `1` on the telnet terminal to start inference on the first image with the Ethos-U55 NPU.

    

    - First, you will see the input image on the screen.
    - Then, you will see the detection result with bounding boxes and classes on the screen.

    

    - The HIMAX sensor image result looks like the following:

    
## Build with Yolo Fastest XL object detection tflite model passing vela
- Go into the ml-embedded-evaluation-kit folder:
```
cd ml-embedded-evaluation-kit
```
- First, create the build folder and go into it:
```
mkdir build_img_yolofastest_xl_relu6_256_himax_npu && cd build_img_yolofastest_xl_relu6_256_himax_npu
```
- Second, configure the Yolo Fastest XL object detection example with ETHOS_U_NPU_ENABLED set to ON, so inference runs only on the Ethos-U55 NPU:
```
cmake ../ -DUSE_CASE_BUILD=img_yolofastest_xl_relu6_256_himax -DETHOS_U_NPU_ENABLED=ON
```
- Compile the Yolo Fastest XL object detection example:
```
make -j4
```
## Run with Yolo Fastest XL object detection tflite model and run inference using only the Ethos-U55 NPU
- Go back out to the ML_FVP_EVALUATION folder:
```
cd ../../
```
- Run with the command below:
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_yolofastest_xl_relu6_256_himax_npu/bin/ethos-u-img_yolofastest_xl_relu6_256_himax.axf
```
Be careful with the `ethosu.num_macs` value in this command. If it does not match the MACs number the vela model was compiled for, the invoke will fail.
- You will see the FVP telnet terminal result below:
  - Start inference:
    - You will see the input size, output tensor size, and MACs size on the telnet terminal.
    - The tflite ops have been mapped to the ethos-u op.

    

  - Run inference:
    - Key in `1` on the telnet terminal to start inference on the first image with the Ethos-U55 NPU.

    

    - First, you will see the input image on the screen.
    - Then, you will see the detection result with bounding boxes and classes on the screen.

    

    - The HIMAX sensor image result looks like the following:

    
## Appendix
- Add more test images
  - You can add more test images under `ml-embedded-evaluation-kit/resources/img_person_detect/samples`, `ml-embedded-evaluation-kit/resources/img_yolofastest_relu6_256_himax/samples`, and `ml-embedded-evaluation-kit/resources/img_yolofastest_xl_relu6_256_himax/samples`. Configure and compile the examples again to test the new images.
- If you want to run the mobilenet image classification example:
  - Run inference with a vela macs=64 model or another variant
    - Make sure line 50 of `ml-embedded-evaluation-kit/source/use_case/img_class/usecase.cmake` selects the vela model matching your MACs configuration (64 or 128).
    - When using the default macs=64 model, the build commands will be:
```
cmake ../ -DUSE_CASE_BUILD=img_class -DETHOS_U_NPU_ENABLED=ON -DETHOS_U_NPU_CONFIG_ID=H64
make -j4
```
#!/usr/bin/env python3

# Copyright (c) 2021-2022 Arm Limited. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""This script does effectively the same as the "git submodule update --init" command."""
import logging
import sys
import tarfile
import tempfile
from urllib.request import urlopen
from zipfile import ZipFile
from pathlib import Path

TF = "https://github.com/tensorflow/tflite-micro/archive/02715237c1fc0a23f465226364d206277f54ebce.zip"
CMSIS = "https://github.com/ARM-software/CMSIS_5/archive/29615088b12e3ba8ce50d316cf7f38c1bd7fc620.zip"
ETHOS_U_CORE_DRIVER = "https://git.mlplatform.org/ml/ethos-u/ethos-u-core-driver.git/snapshot/ethos-u-core-driver-22.05.tar.gz"
ETHOS_U_CORE_PLATFORM = "https://git.mlplatform.org/ml/ethos-u/ethos-u-core-platform.git/snapshot/ethos-u-core-platform-22.05.tar.gz"


def download(url_file: str, post_process=None):
    with urlopen(url_file) as response, tempfile.NamedTemporaryFile() as temp:
        logging.info(f"Downloading {url_file} ...")
        temp.write(response.read())
        temp.seek(0)
        logging.info(f"Finished downloading {url_file}.")
        if post_process:
            post_process(temp)


def unzip(file, to_path):
    with ZipFile(file) as z:
        for archive_path in z.infolist():
            # Strip the leading top-level directory from every member path.
            archive_path.filename = archive_path.filename[archive_path.filename.find("/") + 1:]
            if archive_path.filename:
                z.extract(archive_path, to_path)
                target_path = to_path / archive_path.filename
                # Restore the Unix permission bits stored in the zip entry.
                attr = archive_path.external_attr >> 16
                if attr != 0:
                    target_path.chmod(attr)


def untar(file, to_path):
    with tarfile.open(file) as z:
        for archive_path in z.getmembers():
            # Strip the leading top-level directory from every member path.
            index = archive_path.name.find("/")
            if index < 0:
                continue
            archive_path.name = archive_path.name[index + 1:]
            if archive_path.name:
                z.extract(archive_path, to_path)


def main(dependencies_path: Path):
    download(CMSIS,
             lambda file: unzip(file.name, to_path=dependencies_path / "cmsis"))
    download(ETHOS_U_CORE_DRIVER,
             lambda file: untar(file.name, to_path=dependencies_path / "core-driver"))
    download(ETHOS_U_CORE_PLATFORM,
             lambda file: untar(file.name, to_path=dependencies_path / "core-platform"))
    download(TF,
             lambda file: unzip(file.name, to_path=dependencies_path / "tensorflow"))


if __name__ == '__main__':
    logging.basicConfig(filename='download_dependencies.log', level=logging.DEBUG, filemode='w')
    logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))

    download_dir = Path(__file__).parent.resolve() / "dependencies"

    if download_dir.is_dir():
        logging.info(f'{download_dir} exists. Skipping download.')
    else:
        main(download_dir)
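The path-stripping done by `untar()` above can be exercised with a synthetic in-memory archive. This demo is my own (not part of the script): an archive laid out as `pkg-1.0/src/a.txt` extracts to `<to_path>/src/a.txt`, with the top-level directory removed.

```python
# Self-contained demo of the leading-directory stripping performed by untar().
import io
import tarfile
import tempfile
from pathlib import Path

# Build a small .tar.gz in memory with one file under a top-level directory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"hello"
    info = tarfile.TarInfo("pkg-1.0/src/a.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

# Extract it the same way untar() does: drop the first path component.
to_path = Path(tempfile.mkdtemp())
with tarfile.open(fileobj=buf, mode="r:gz") as z:
    for member in z.getmembers():
        index = member.name.find("/")
        if index < 0:
            continue  # top-level entries with no slash are skipped
        member.name = member.name[index + 1:]
        if member.name:
            z.extract(member, to_path)

print((to_path / "src" / "a.txt").read_text())  # hello
```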
resources/img_person_detect/labels/labels_person_detect_96.txt (7 additions, 0 deletions):

person
no_person
person
no_person
person
no_person
person
Binary files added (+192 KB each, contents not shown):
- resources/img_yolofastest_relu6_256_himax/samples/himax_motorcycle.bmp
- resources/img_yolofastest_xl_relu6_256_himax/samples/coco_keyboard.bmp
- resources/img_yolofastest_xl_relu6_256_himax/samples/coco_person_1.bmp
- resources/img_yolofastest_xl_relu6_256_himax/samples/coco_person_2.bmp
- resources/img_yolofastest_xl_relu6_256_himax/samples/coco_person_3.bmp
- resources/img_yolofastest_xl_relu6_256_himax/samples/himax_bicycle.bmp
- resources/img_yolofastest_xl_relu6_256_himax/samples/himax_motorcycle.bmp
- resources/img_yolofastest_xl_relu6_256_himax/samples/himax_person_1.bmp
- resources/img_yolofastest_xl_relu6_256_himax/samples/himax_person_2.bmp