TIM-VX - Tensor Interface Module

TIM-VX is a software integration module provided by VeriSilicon to facilitate the deployment of neural networks on VeriSilicon ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, TensorFlow Lite, MLIR, TVM, and more.

Main Features

  • Over 150 operators with rich format support for both quantized and floating-point data
  • Simplified C++ binding APIs to create tensors and operations (see the Guide)
  • Dynamic graph construction with support for shape inference and layout inference
  • Built-in custom layer extensions
  • A set of utility functions for debugging

Framework Support

Feel free to raise a GitHub issue if you wish to add TIM-VX support for other frameworks.

Architecture Overview

[Figure: TIM-VX architecture]

Technical documents

Get started

Build and Run

TIM-VX supports both Bazel and CMake builds.


CMake

To build TIM-VX for x86 with the prebuilt OpenVX SDK:

mkdir host_build
cd host_build
cmake ..
make -j8
make install

All installed files (headers and *.so libraries) are located in host_build/install.

CMake options:

  • TIM_VX_ENABLE_TEST (default OFF): enable unit test cases for public APIs and ops
  • TIM_VX_ENABLE_LAYOUT_INFER (default ON): build with tensor data layout inference support
  • TIM_VX_USE_EXTERNAL_OVXLIB (default OFF): replace the internal ovxlib with a prebuilt libovxlib library
  • OVXLIB_LIB (not set by default): full path to libovxlib.so, including the .so name; required if TIM_VX_USE_EXTERNAL_OVXLIB=ON
  • OVXLIB_INC (not set by default): ovxlib's include path; required if TIM_VX_USE_EXTERNAL_OVXLIB=ON
  • EXTERNAL_VIV_SDK (not set by default): path to external Vivante OpenVX driver libraries
  • TIM_VX_BUILD_EXAMPLES (default OFF): build example applications
  • TIM_VX_ENABLE_40BIT (default OFF): enable large memory (over 4 GB) support in the NPU driver
  • TIM_VX_ENABLE_PLATFORM (default OFF): enable multi-device support
  • TIM_VX_ENABLE_PLATFORM_LITE (default OFF): enable lite multi-device support; only works when TIM_VX_ENABLE_PLATFORM=ON
  • VIP_LITE_SDK (not set by default): full path to the VIPLite SDK; required when TIM_VX_ENABLE_PLATFORM_LITE=ON
  • TIM_VX_ENABLE_GRPC (default OFF): enable gRPC support; only works when TIM_VX_ENABLE_PLATFORM=ON
  • TIM_VX_DBG_ENABLE_TENSOR_HNDL (default ON): enable built-in tensor from handle
  • TIM_VX_ENABLE_TENSOR_CACHE (default OFF): enable tensor cache for const tensors; see the OpenSSL build notes
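
For example, the options above can be combined on the cmake command line to configure a build that runs the unit tests against a prebuilt ovxlib; the ovxlib paths below are placeholders for your own environment:

cd host_build
cmake .. \
  -DTIM_VX_ENABLE_TEST=ON \
  -DTIM_VX_USE_EXTERNAL_OVXLIB=ON \
  -DOVXLIB_LIB=/path/to/libovxlib.so \
  -DOVXLIB_INC=/path/to/ovxlib/include
make -j8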

Run unit test:

cd host_build/src/tim

export LD_LIBRARY_PATH=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib:<path to libgtest_main.so>:$LD_LIBRARY_PATH
export VIVANTE_SDK_DIR=`pwd`/../../../prebuilt-sdk/x86_64_linux/
export VSIMULATOR_CONFIG=<hardware name, obtained from the chip vendor>
# if you want to debug with gdb, please set
export DISABLE_IDE_DEBUG=1
./unit_test

Build with a local googletest source

    cd <wksp_root>
    git clone --depth 1 -b release-1.10.0 git@github.com:google/googletest.git

    cd <root_tim_vx>/build/
    cmake ../ -DTIM_VX_ENABLE_TEST=ON -DFETCHCONTENT_SOURCE_DIR_GOOGLETEST=<wksp_root/googletest> <add other cmake define here>

Build for EVK boards

  1. Prepare a toolchain file following the standard CMake convention.
  2. Cross-build the low-level driver with the toolchain separately; the SDK produced by that build is needed.
  3. Add -DEXTERNAL_VIV_SDK=<low-level-driver/out/sdk> to the cmake definitions, and remember to pass -DCMAKE_TOOLCHAIN_FILE=<Toolchain_Config>.
  4. Or, when using a buildroot toolchain with an external VIV SDK, add:
    -DCONFIG=BUILDROOT -DCMAKE_SYSROOT=${CMAKE_SYSROOT} -DEXTERNAL_VIV_SDK=${BUILDROOT_SYSROOT}
  5. Then run make, as shown in the example after this list.
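
A minimal cross-build sketch following these steps; the toolchain file and driver SDK paths are placeholders for your own board setup:

mkdir evk_build
cd evk_build
cmake .. \
  -DCMAKE_TOOLCHAIN_FILE=<Toolchain_Config> \
  -DEXTERNAL_VIV_SDK=<low-level-driver/out/sdk>
make -j8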

Important notice for integration

If you want to build TIM-VX as a static library and link it into your shared library or application, be careful with the linker options: "-Wl,--whole-archive" is required.

See samples/lenet/CMakeLists.txt for reference.
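
As an illustration only (object names and paths are placeholders, and the static library name and driver libraries depend on your build and SDK), a manual link line wraps the static TIM-VX library like this:

# hypothetical link command; the --whole-archive/--no-whole-archive pair
# around the static TIM-VX library is the required part
g++ -shared -o libmy_backend.so my_backend.o \
    -Wl,--whole-archive <path-to-install>/lib/libtim-vx.a -Wl,--no-whole-archive \
    -L${VIVANTE_SDK_DIR}/lib -lOpenVX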

Bazel

Install Bazel to get started.

TIM-VX needs to be compiled and linked against the VeriSilicon OpenVX SDK, which provides the related header files and pre-compiled libraries. A default linux-x86_64 SDK containing the PC simulation environment is provided; platform-specific SDKs can be obtained from the respective SoC vendors.

To build TIM-VX:

bazel build libtim-vx.so

To run the LeNet sample:

# set VIVANTE_SDK_DIR for runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux

bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc

Other

To build and run TensorFlow Lite with TIM-VX, please see its README.

To build and run TVM with TIM-VX, please see TVM README

Reference board

Chip           Vendor           References               Success Stories
i.MX 8M Plus   NXP              ML Guide, BSP            SageMaker with 8MP
A311D          Khadas - VIM3    A311D datasheet, BSP     Paddle-Lite demo
S905D3         Khadas - VIM3L   S905D3 datasheet, BSP    -

Support

Create an issue on GitHub or email ML_Support at verisilicon dot com.