Commit 4aab7f7: initial commit
kris-himax committed Mar 16, 2023 (1 parent: 16cba30)
Showing 79 changed files with 3,141 additions and 0 deletions.

287 changes: 287 additions & 0 deletions Readme.md
# Deploy examples on FVP simulation environment

- This repository is for building and deploying examples on the FVP simulation environment.
- The examples include:
  - [Person detection example without vela](#build-with-person-detection-tflite-model-without-passing-vela):
    - Input image size: 96 x 96 x 1 (Monochrome)
    - Uses the Google person detection example model, without passing it through vela, to run inference on the Cortex-M55.
  - [How to use HIMAX config file to generate vela model](#how-to-use-himax-config-file-to-generate-vela-model)
  - [Person detection example run inference with Ethos-U55 NPU](#build-with-person-detection-tflite-model-passing-vela):
    - Input image size: 96 x 96 x 1 (Monochrome)
    - Uses the Google person detection example model, passed through vela, to run inference on the Ethos-U55 NPU.
  - [Yolo Fastest Object detection example](#build-with-yolo-fastest-object-detection-tflite-model-passing-vela):
    - Input image size: 256 x 256 x 3 (RGB)
    - We only release the model compiled with himax_vela.ini (Ethos-U55 64-MAC configuration).
    - Inference can also be run on images captured by our own HIMAX 01B0 sensor.
  - [Yolo Fastest XL Object detection example](#build-with-yolo-fastest-xl-object-detection-tflite-model-passing-vela):
    - Input image size: 256 x 256 x 3 (RGB)
    - We only release the model compiled with himax_vela.ini (Ethos-U55 64-MAC configuration).
    - Inference can also be run on images captured by our own HIMAX 01B0 sensor.
- To run evaluations using this software, we suggest an Ubuntu 20.04 LTS environment.

## Prerequisites
- Install the toolkits listed below:
- Install necessary packages:
```
sudo apt-get update
sudo apt-get install cmake
sudo apt-get install curl
sudo apt install xterm
sudo apt install python3
sudo apt install python3.8-venv
```
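Equivalently, the packages can be installed in one command (the `-y` flag simply skips the confirmation prompts):
```
sudo apt-get install -y cmake curl xterm python3 python3.8-venv
```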
- Corstone SSE-300 FVP: aligned with the Arm MPS3 development platform, including both the Cortex-M55 and Ethos-U55 processors.
```
# Fetch Corstone SSE-300 FVP
wget https://developer.arm.com/-/media/Arm%20Developer%20Community/Downloads/OSS/FVP/Corstone-300/MPS3/FVP_Corstone_SSE-300_Ethos-U55_11.14_24.tgz
```
![alt text](images/Fetch_Corstone_SSE_300_FVP.png)
```
# Create a folder to extract the archive into
mkdir temp
# Extract the archive
tar -C temp -xvzf FVP_Corstone_SSE-300_Ethos-U55_11.14_24.tgz
```
![alt text](images/Extract_the_archive.png)
```
# Execute the self-install script
temp/FVP_Corstone_SSE-300_Ethos-U55.sh --i-agree-to-the-contained-eula --no-interactive -d CS300FVP
```
![alt text](images/Execute_self-install_script.png)
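To sanity-check the installation, you can ask the FVP binary for its version (a quick check, assuming the `CS300FVP` install directory used above):
```
# Confirm the FVP launches
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 --version
```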
- GNU Arm Embedded Toolchain: 10-2020-q4-major is the only version that supports the Cortex-M55.
```
# Fetch the Arm GCC toolchain
wget https://developer.arm.com/-/media/Files/downloads/gnu-rm/10-2020q4/gcc-arm-none-eabi-10-2020-q4-major-x86_64-linux.tar.bz2
# Extract the archive
tar -xjf gcc-arm-none-eabi-10-2020-q4-major-x86_64-linux.tar.bz2
# Add gcc-arm-none-eabi/bin to the PATH environment variable
export PATH="${PATH}:/[location of your GCC_ARM_NONE_EABI_TOOLCHAIN_ROOT]/gcc-arm-none-eabi/bin"
```
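As a concrete sketch, if the archive was extracted under `/home/user/toolchains` (a hypothetical location; substitute your own), the PATH line and a quick verification would look like:
```
export PATH="${PATH}:/home/user/toolchains/gcc-arm-none-eabi-10-2020-q4-major/bin"
# Should report version 10.2.1 from the 10-2020-q4-major release
arm-none-eabi-gcc --version
```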
- Arm ML embedded evaluation kit: Machine Learning (ML) applications targeted at the Arm Cortex-M55 and Arm Ethos-U55 NPU.
  - We use the Arm ML embedded evaluation kit to run the person detection FVP example.
```
# Fetch Arm ML embedded evaluation kit
wget https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ml-embedded-evaluation-kit/+archive/refs/tags/22.02.tar.gz
mkdir ml-embedded-evaluation-kit
tar -C ml-embedded-evaluation-kit -xvzf 22.02.tar.gz
cp -r download_dependencies.py ./ml-embedded-evaluation-kit/
cd ml-embedded-evaluation-kit/
rm -rf ./dependencies
python3 ./download_dependencies.py
./build_default.py --npu-config-name ethos-u55-64
# Go back out of the ml-embedded-evaluation-kit folder and copy the example resources into the kit
cd ..
cp -r ./resources/img_person_detect ./ml-embedded-evaluation-kit/resources
cp -r ./source/use_case/img_person_detect ./ml-embedded-evaluation-kit/source/use_case
cp -r ./vela/img_person_detect ./ml-embedded-evaluation-kit/resources_downloaded/
cp -r ./resources/img_yolofastest_relu6_256_himax ./ml-embedded-evaluation-kit/resources
cp -r ./source/use_case/img_yolofastest_relu6_256_himax ./ml-embedded-evaluation-kit/source/use_case
cp -r ./vela/img_yolofastest_relu6_256_himax ./ml-embedded-evaluation-kit/resources_downloaded/
cp -r ./resources/img_yolofastest_xl_relu6_256_himax ./ml-embedded-evaluation-kit/resources
cp -r ./source/use_case/img_yolofastest_xl_relu6_256_himax ./ml-embedded-evaluation-kit/source/use_case
cp -r ./vela/img_yolofastest_xl_relu6_256_himax ./ml-embedded-evaluation-kit/resources_downloaded/
```
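Before building, it is worth a quick check that the three Himax use cases landed where the build system expects them (a sketch, assuming the copies above succeeded):
```
ls ml-embedded-evaluation-kit/source/use_case | grep img_
ls ml-embedded-evaluation-kit/resources | grep img_
# Both listings should include img_person_detect, img_yolofastest_relu6_256_himax
# and img_yolofastest_xl_relu6_256_himax
```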
## Build with person detection tflite model without passing vela
- Go under the ml-embedded-evaluation-kit folder
```
cd ml-embedded-evaluation-kit
```
- First, create the build output folder and go into it
```
mkdir build_img_person_detect && cd build_img_person_detect
```
- Second, configure the person detection example with ETHOS_U_NPU_ENABLED set to OFF, so it runs only on the Cortex-M55.
```
cmake ../ -DUSE_CASE_BUILD=img_person_detect -DETHOS_U_NPU_ENABLED=OFF
```
- Finally, compile the person detection example.
```
make -j4
```
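If the build succeeds, the firmware image used in the run step below should be present under `bin/`:
```
ls bin/ | grep img_person_detect
# Expect ethos-u-img_person_detect.axf among the outputs
```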
## Run with person detection tflite model without passing vela and inference with only Cortex-M55
- Go back up to the ML_FVP_EVALUATION folder
```
cd ../../
```
- Run the command below:
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ml-embedded-evaluation-kit/build_img_person_detect/bin/ethos-u-img_person_detect.axf
```
- You will see the FVP telnet terminal output below:
- Start inference:
- You will see the input size and tflite ops on the telnet terminal.
![alt text](images/inference_start.png)
- Run inference:
- Key in `1` on the telnet terminal to start inference on the first image using only the Cortex-M55. You can see the NPU cycle count is 0.
![alt text](images/Run_inference.png)
- And you will see the input image on the screen.
![alt text](images/inference_input_with_m55.png)
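If you are working on a headless machine, the FVP's LCD window and telnet pop-up can be suppressed and the UART routed to stdout. A sketch using standard Corstone-300 FVP parameters (verify the names against your FVP build with `--list-params`):
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 \
    -C mps3_board.visualisation.disable-visualisation=1 \
    -C mps3_board.telnetterminal0.start_telnet=0 \
    -C mps3_board.uart0.out_file=- \
    ml-embedded-evaluation-kit/build_img_person_detect/bin/ethos-u-img_person_detect.axf
```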
## How to use HIMAX config file to generate vela model
- Go under the vela folder
```
cd vela
```
- Install the necessary package:
```
pip install ethos-u-vela
```
- Run vela with the himax config ini file, 64 MACs, and the person detection example tflite model:
```
vela --accelerator-config ethos-u55-64 --config himax_vela.ini --system-config My_Sys_Cfg --memory-mode My_Mem_Mode_Parent --output-dir ./img_person_detect ./img_person_detect/person_int8_model.tflite
```
- You will see the vela report on the terminal:
![alt text](images/vela_report.png)
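For reference, the file passed with `--config` defines the system and memory settings named on the command line. A minimal sketch of the structure with illustrative values only (use the real `himax_vela.ini` shipped in this folder; the section and key names follow the vela documentation):
```
[System_Config.My_Sys_Cfg]
core_clock=400e6
axi0_port=Sram
axi1_port=OffChipFlash
Sram_clock_scale=1.0

[Memory_Mode.My_Mem_Mode_Parent]
const_mem_area=Axi1
arena_mem_area=Axi0
cache_mem_area=Axi0
```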
## Build with person detection tflite model passing vela
- Go under the ml-embedded-evaluation-kit folder
```
cd ml-embedded-evaluation-kit
```
- First, create the build output folder and go into it
```
mkdir build_img_person_detect_npu && cd build_img_person_detect_npu
```
- Second, configure the person detection example with ETHOS_U_NPU_ENABLED set to ON, so it runs on the Cortex-M55 with the Ethos-U55 NPU.
```
cmake ../ -DUSE_CASE_BUILD=img_person_detect -DETHOS_U_NPU_ENABLED=ON
```
- Compile the person detection example
```
make -j4
```
## Run with person detection tflite model passing vela and run inference using Ethos-U55 NPU
- Go back up to the ML_FVP_EVALUATION folder
```
cd ../../
```
- Run the command below:
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_person_detect_npu/bin/ethos-u-img_person_detect.axf
```
Be careful with the `ethosu.num_macs` value in the command. If it does not match the MACs configuration the vela model was compiled for, the invoke will fail.
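To see which `ethosu.num_macs` values (and other Ethos-U parameters) your FVP accepts, you can list its model parameters:
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 --list-params | grep ethosu
```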
- You will see the FVP telnet terminal output below:
- Start inference:
- You will see the input size and the MACs configuration on the telnet terminal.
- The tflite ops have been folded into a single ethos-u op.
![alt text](images/npu_inference_start.png)
- Run inference:
- Key in `1` on the telnet terminal to start inference on the first image using the Ethos-U55 NPU.
![alt text](images/npu_inference_result.png)
- And you will see the input image on the screen.
![alt text](images/inference_input_image.png)
## Build with Yolo Fastest Object detection tflite model passing vela
- Go under the ml-embedded-evaluation-kit folder
```
cd ml-embedded-evaluation-kit
```
- First, create the build output folder and go into it
```
mkdir build_img_yolofastest_relu6_256_himax_npu && cd build_img_yolofastest_relu6_256_himax_npu
```
- Second, configure the Yolo Fastest Object detection example with ETHOS_U_NPU_ENABLED set to ON, so inference runs on the Ethos-U55 NPU.
```
cmake ../ -DUSE_CASE_BUILD=img_yolofastest_relu6_256_himax -DETHOS_U_NPU_ENABLED=ON
```
- Compile the Yolo Fastest Object detection example
```
make -j4
```
## Run with Yolo Fastest Object detection tflite model and inference only using Ethos-U55 NPU
- Go back up to the ML_FVP_EVALUATION folder
```
cd ../../
```
- Run the command below:
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_yolofastest_relu6_256_himax_npu/bin/ethos-u-img_yolofastest_relu6_256_himax.axf
```
Be careful with the `ethosu.num_macs` value in the command. If it does not match the MACs configuration the vela model was compiled for, the invoke will fail.
- You will see the FVP telnet terminal output below:
- Start inference:
- You will see the input size, output tensor size, and MACs configuration on the telnet terminal.
- The tflite ops have been folded into a single ethos-u op.
![alt text](images/Yolo_Fastest_run_inference_start.png)
- Run inference:
- Key in `1` on the telnet terminal to start inference on the first image using the Ethos-U55 NPU.
![alt text](images/Yolo_Fastest_run_inference_coco_dog_terminal.png)
- First, you will see the input image on the screen.
- Then, you will see the detection result with bounding boxes and classes on the screen.
![alt text](images/Yolo_Fastest_run_inference_coco_dog_result.png)
- The result for a Himax sensor image will look like the following.
![alt text](images/Yolo_Fastest_run_inference_himax_result.png)
## Build with Yolo Fastest XL Object detection tflite model passing vela
- Go under the ml-embedded-evaluation-kit folder
```
cd ml-embedded-evaluation-kit
```
- First, create the build output folder and go into it
```
mkdir build_img_yolofastest_xl_relu6_256_himax_npu && cd build_img_yolofastest_xl_relu6_256_himax_npu
```
- Second, configure the Yolo Fastest XL Object detection example with ETHOS_U_NPU_ENABLED set to ON, so inference runs on the Ethos-U55 NPU.
```
cmake ../ -DUSE_CASE_BUILD=img_yolofastest_xl_relu6_256_himax -DETHOS_U_NPU_ENABLED=ON
```
- Compile the Yolo Fastest XL Object detection example
```
make -j4
```
## Run with Yolo Fastest XL Object detection tflite model and inference only using Ethos-U55 NPU
- Go back up to the ML_FVP_EVALUATION folder
```
cd ../../
```
- Run the command below:
```
CS300FVP/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -C ethosu.num_macs=64 ml-embedded-evaluation-kit/build_img_yolofastest_xl_relu6_256_himax_npu/bin/ethos-u-img_yolofastest_xl_relu6_256_himax.axf
```
Be careful with the `ethosu.num_macs` value in the command. If it does not match the MACs configuration the vela model was compiled for, the invoke will fail.
- You will see the FVP telnet terminal output below:
- Start inference:
- You will see the input size, output tensor size, and MACs configuration on the telnet terminal.
- The tflite ops have been folded into a single ethos-u op.
![alt text](images/Yolo_Fastest_XL_run_inference_start.png)
- Run inference:
- Key in `1` on the telnet terminal to start inference on the first image using the Ethos-U55 NPU.
![alt text](images/Yolo_Fastest_XL_run_inference_coco_dog_terminal.png)
- First, you will see the input image on the screen.
- Then, you will see the detection result with bounding boxes and classes on the screen.
![alt text](images/Yolo_Fastest_XL_run_inference_coco_dog_result.png)
- The result for a Himax sensor image will look like the following.
![alt text](images/Yolo_Fastest_XL_run_inference_himax_result.png)
## Appendix
- Add more test images
  - You can add more test images under `ml-embedded-evaluation-kit/resources/img_person_detect/samples`, `ml-embedded-evaluation-kit/resources/img_yolofastest_relu6_256_himax/samples`, and `ml-embedded-evaluation-kit/resources/img_yolofastest_xl_relu6_256_himax/samples`. Configure and compile the examples again to test the new images.
- If you want to run the MobileNet image classification example:
  - Run inference with a 64-MAC vela model or not
    - Make sure `ml-embedded-evaluation-kit/source/use_case/img_class/usecase.cmake` selects the 64-MAC or 128-MAC vela model at line 50 (see the grep sketch after this code block).
    - Your build commands, when using the default 64-MAC model, will be
```
cmake ../ -DUSE_CASE_BUILD=img_class -DETHOS_U_NPU_ENABLED=ON -DETHOS_U_NPU_CONFIG_ID=H64
make -j4
```
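A quick way to see which vela model variant `img_class` currently points at (the MACs configuration appears in the model file name); line numbers may differ between kit versions:
```
grep -n "vela" ml-embedded-evaluation-kit/source/use_case/img_class/usecase.cmake
```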
87 changes: 87 additions & 0 deletions download_dependencies.py
#!/usr/bin/env python3

# Copyright (c) 2021-2022 Arm Limited. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""This script does effectively the same as "git submodule update --init" command."""
import logging
import sys
import tarfile
import tempfile
from urllib.request import urlopen
from zipfile import ZipFile
from pathlib import Path

TF = "https://github.com/tensorflow/tflite-micro/archive/02715237c1fc0a23f465226364d206277f54ebce.zip"
CMSIS = "https://github.com/ARM-software/CMSIS_5/archive/29615088b12e3ba8ce50d316cf7f38c1bd7fc620.zip"
ETHOS_U_CORE_DRIVER = "https://git.mlplatform.org/ml/ethos-u/ethos-u-core-driver.git/snapshot/ethos-u-core-driver-22.05.tar.gz"
ETHOS_U_CORE_PLATFORM = "https://git.mlplatform.org/ml/ethos-u/ethos-u-core-platform.git/snapshot/ethos-u-core-platform-22.05.tar.gz"


def download(url_file: str, post_process=None):
    with urlopen(url_file) as response, tempfile.NamedTemporaryFile() as temp:
        logging.info(f"Downloading {url_file} ...")
        temp.write(response.read())
        temp.seek(0)
        logging.info(f"Finished downloading {url_file}.")
        if post_process:
            post_process(temp)


def unzip(file, to_path):
    with ZipFile(file) as z:
        for archive_path in z.infolist():
            archive_path.filename = archive_path.filename[archive_path.filename.find("/") + 1:]
            if archive_path.filename:
                z.extract(archive_path, to_path)
                target_path = to_path / archive_path.filename
                attr = archive_path.external_attr >> 16
                if attr != 0:
                    target_path.chmod(attr)


def untar(file, to_path):
    with tarfile.open(file) as z:
        for archive_path in z.getmembers():
            index = archive_path.name.find("/")
            if index < 0:
                continue
            archive_path.name = archive_path.name[index + 1:]
            if archive_path.name:
                z.extract(archive_path, to_path)


def main(dependencies_path: Path):
    download(CMSIS,
             lambda file: unzip(file.name, to_path=dependencies_path / "cmsis"))
    download(ETHOS_U_CORE_DRIVER,
             lambda file: untar(file.name, to_path=dependencies_path / "core-driver"))
    download(ETHOS_U_CORE_PLATFORM,
             lambda file: untar(file.name, to_path=dependencies_path / "core-platform"))
    download(TF,
             lambda file: unzip(file.name, to_path=dependencies_path / "tensorflow"))


if __name__ == '__main__':
    logging.basicConfig(filename='download_dependencies.log', level=logging.DEBUG, filemode='w')
    logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))

    download_dir = Path(__file__).parent.resolve() / "dependencies"

    if download_dir.is_dir():
        logging.info(f'{download_dir} exists. Skipping download.')
    else:
        main(download_dir)
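For reference, the script can also be run on its own from the repository root, as in the Prerequisites section; it logs to stdout and to `download_dependencies.log`, and skips the download when a `dependencies/` folder already exists:
```
python3 ./download_dependencies.py
```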
Binary file added images/Execute_self-install_script.png
Binary file added images/Extract_the_archive.png
Binary file added images/Fetch_Corstone_SSE_300_FVP.png
Binary file added images/Run_inference.png
Binary file added images/Yolo_Fastest_XL_run_inference_start.png
Binary file added images/Yolo_Fastest_run_inference_start.png
Binary file added images/inference_input_image.png
Binary file added images/inference_input_with_m55.png
Binary file added images/inference_start.png
Binary file added images/npu_inference_result.png
Binary file added images/npu_inference_start.png
Binary file added images/vela_report.png
person
no_person
person
no_person
person
no_person
person
Binary file added resources/img_person_detect/samples/1_160x160.bmp
Binary file added resources/img_person_detect/samples/2_160x160.bmp
Binary file added resources/img_person_detect/samples/3_160x160.bmp
Binary file added resources/img_person_detect/samples/4_160x160.bmp
Binary file added resources/img_person_detect/samples/5_160x160.bmp
