
Enable TFLite model parsing with FlatBuffer support and comprehensive TFLite enhancements #146

Open

tdarote wants to merge 4 commits into qualcomm-linux:main from tdarote:tflite

Conversation

@tdarote commented Jan 30, 2026

This pull request enables TFLite model parsing capabilities by integrating FlatBuffer support and implements comprehensive enhancements to the TFLite recipe.

Key Changes:
-> FlatBuffer Integration for TFLite:
- Added flatbuffer bbappend file to enable TFLite's schema handling capabilities for model parsing

-> Comprehensive TFLite Enhancements:
- Added benchmark_model config option
- Fixed protobuf dependency in benchmark tools
- Added dynamic OpenCL library loading support
- Excluded subdirectories from all builds
- Forced delegate symbols from the shared library
- Added version support to the C API
- Fixed label_image dependencies
- Added install rule for the C interface shared library

@lumag (Contributor) commented Jan 30, 2026

Waiting for the patches to be posted upstream

@lumag (Contributor) left a comment

Please fix commit subject for the flatbuffers patch

@tdarote force-pushed the tflite branch 2 times, most recently from 175b3f6 to 3ae95c6 on February 2, 2026 10:31
@tdarote (Author) commented Feb 2, 2026

Waiting for the patches to be posted upstream

- Updated upstream status to submitted.

@tdarote closed this Feb 2, 2026
@tdarote reopened this Feb 2, 2026
@tdarote (Author) commented Feb 2, 2026

Please fix commit subject for the flatbuffers patch

DONE

@lumag (Contributor) commented Feb 2, 2026

Okay. Upstream (meta-oe) uses flatbuffers 25.12.19. To prevent possible issues with other packages which might depend on that version, we need to provide a separate version of the recipe (flatbuffers-tflite.bb, require recipes-devtools/flatbuffers/flatbuffers.bb) and use it for building TFLite.
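
A minimal sketch of what such a version-pinned recipe could look like (the version value and the exact overrides are illustrative and depend on how the meta-oe recipe is structured):

# flatbuffers-tflite.bb -- illustrative sketch, not the final recipe
# Reuse the meta-oe recipe but pin the FlatBuffers release that TFLite's
# generated schema headers expect, leaving the newer recipe available to
# every other package.
require recipes-devtools/flatbuffers/flatbuffers.bb

PV = "24.3.25"
# SRCREV / SRC_URI checksums must be updated to match the pinned release.

tensorflow-lite would then use DEPENDS += "flatbuffers-tflite" instead of the default flatbuffers.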

@tdarote (Author) commented Feb 2, 2026

Okay. Upstream (meta-oe) uses flatbuffers 25.12.19. To prevent possible issues with other packages which might depend on that version, we need to provide a separate version of the recipe (flatbuffers-tflite.bb, require recipes-devtools/flatbuffers/flatbuffers.bb) and use it for building TFLite.

Done.

@lumag (Contributor) commented Feb 2, 2026

This doesn't build on my test system:

| DEBUG: Python function extend_recipe_sysroot finished
| DEBUG: Executing shell function do_configure
| CMake Error at CMakeLists.txt:23 (project):
|   VERSION ".." format invalid.
|
|
| -- Configuring incomplete, errors occurred!
| WARNING: exit code 1 from a shell command.
ERROR: Task (/home/lumag/Projects/RPB/build-rpb/conf/../../layers/meta-qcom-distro/recipes-ml/tflite/tensorflow-lite_2.20.0.qcom.bb:do_configure) failed with exit code '1'
NOTE: Tasks Summary: Attempted 5751 tasks of which 5578 didn't need to be rerun and 1 failed.

@ricardosalveti (Contributor) commented

Please explain why this flatbuffer recipe is needed, why the one from meta-oe is not enough, differences, etc, as part of your git commit message.

@tdarote (Author) commented Feb 3, 2026

Please explain why this flatbuffer recipe is needed, why the one from meta-oe is not enough, differences, etc, as part of your git commit message.

If we use the flatbuffers version from meta-oe, we hit the configuration issue below:

| /local/mnt/workspace/tushar/kas-tflite-build/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0.qcom/sources/tensorflow-lite-2.20.0.qcom/tensorflow/compiler/mlir/lite/schema/schema_generated.h:25:41: error: static assertion failed: Non-compatible flatbuffers version included
| 25 | static_assert(FLATBUFFERS_VERSION_MAJOR == 24 &&
| | ^
| /local/mnt/workspace/tushar/kas-tflite-build/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0.qcom/sources/tensorflow-lite-2.20.0.qcom/tensorflow/compiler/mlir/lite/schema/schema_generated.h:25:41: note: the comparison reduces to '(25 == 24)'
| ninja: build stopped: subcommand failed.
|
| WARNING: /local/mnt/workspace/tushar/kas-tflite-build/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0.qcom/temp/run.do_compile.3752035:153 exit 1 from 'eval ${DESTDIR:+DESTDIR=${DESTDIR} }VERBOSE=1 cmake --build '/local/mnt/workspace/tushar/kas-tflite-build/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0.qcom/build' "$@" -- ${EXTRA_OECMAKE_BUILD}'

koenkooi previously approved these changes Feb 12, 2026
@tdarote requested a review from quaresmajose on February 12, 2026 12:58
@ricardosalveti (Contributor) commented

Didn't really build for me; do_configure is still trying to reach the network:


|           --- LOG END ---
|           error: downloading 'https://github.com/Maratyszcza/FP16/archive/0a92994d729ff76a58f692d3028ca1b64b145d91.zip' failed
|           status_code: 6
|           status_string: "Could not resolve hostname"
|           log:
|           --- LOG BEGIN ---
|           timeout on name lookup is not supported
|
|   getaddrinfo(3) failed for github.com:443
|
|   Store negative name resolve for github.com:443
|
|   Could not resolve host: github.com
|
|   closing connection #0
|
|
|
|           --- LOG END ---
|
|
|
|
| ninja: build stopped: subcommand failed.
| CMake Warning at /home/rsalveti/build/qualcomm-linux/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0/recipe-sysroot-native/usr/share/cmake-4.2/Modules/FetchContent.cmake:2121 (message):
...
| WARNING: Backtrace (BB generated script):
|       #1: cmake_do_configure, /home/rsalveti/build/qualcomm-linux/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0/temp/run.do_configure.2827760, line 173
|       #2: do_configure, /home/rsalveti/build/qualcomm-linux/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0/temp/run.do_configure.2827760, line 152
|       #3: main, /home/rsalveti/build/qualcomm-linux/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0/temp/run.do_configure.2827760, line 191
ERROR: Task (/home/rsalveti/build/qualcomm-linux/build/../meta-qcom-distro/recipes-ml/tflite/tensorflow-lite_2.20.0.bb:do_configure) failed with exit code '1'

@tdarote (Author) commented Feb 14, 2026

Didn't really build for me; do_configure is still trying to reach the network: […]

Hi @ricardosalveti,

Could you please share the detailed setup steps and your testing procedure? We are not able to reproduce the fetcher issue on our side. From the do_configure logs, we can see that CMake is still performing a download during configuration.

We also tried with BB_NO_NETWORK = "1", but the download still occurs at configure time. It appears that CMake is attempting to fetch a ZIP file, and we are unable to prevent this network access.

Right now, we are working on avoiding this download based on the references found in the do_configure logs. However, we are running into issues because the ZIP path used by CMake is not a proper source location. Even if we add the GitHub source link in SRC_URI, the configure step still tries to download files.

Could you please clarify:

  • How are you testing this scenario?
  • What additional steps or setup are you using that trigger this issue?
  • How are you preventing the configure-time download in your environment?

Your inputs will help us reproduce and address the problem more effectively.

@tdarote (Author) commented Feb 14, 2026

With the latest patch, we were able to avoid the source download during the configuration stage by using the details from the do_configure logs.
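
For reference, a common way to keep CMake's FetchContent from downloading at configure time is to fetch each third-party source through SRC_URI and point FetchContent at the unpacked tree. Below is a minimal sketch for the FP16 dependency seen in the log above; the FetchContent name (FP16) and the unpack path are assumptions that have to match what TFLite's CMake modules actually use:

# Fetch FP16 with the BitBake fetcher instead of letting CMake download it;
# the revision is the one requested in the failing do_configure log.
SRC_URI += "git://github.com/Maratyszcza/FP16.git;protocol=https;nobranch=1;name=fp16;destsuffix=fp16"
SRCREV_fp16 = "0a92994d729ff76a58f692d3028ca1b64b145d91"

# Point FetchContent at the pre-fetched sources; FETCHCONTENT_FULLY_DISCONNECTED
# itself is already passed by recent cmake.bbclass versions.
EXTRA_OECMAKE += " \
    -DFETCHCONTENT_SOURCE_DIR_FP16=${WORKDIR}/fp16 \
"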

@tdarote requested a review from koenkooi on February 14, 2026 15:09
OECMAKE_TARGET_COMPILE += "benchmark_model label_image"

EXTRA_OECMAKE += " \
-DFETCHCONTENT_FULLY_DISCONNECTED=ON \
@tdarote (Author) commented:

Thanks, noted. Since cmake.bbclass already sets this flag, I’ll remove it from the recipe

koenkooi previously approved these changes Feb 16, 2026
# NOTE: Dependencies are managed manually
# To update dependencies:
# 1. For each repo, run: git ls-remote <url> HEAD
# 2. Get the latest commit hash
@lumag (Contributor) commented Feb 16, 2026

This comment is still misleading and far from being true.

@tdarote (Author) replied Feb 16, 2026

Is this okay to update?
--> # 2. Read dependency revisions from TFLite's Bazel configuration files

A Contributor replied:

And what would that mean? If you don't want to implement a script, provide the steps for the developer to follow.

@tdarote (Author) replied:

Updated the steps for updating the dependency hash values; can you please review?

# Source archive checksum for the main TensorFlow fetch
SRC_URI[sha256sum] = "cfc7749b96f63bd31c3c42b5c471bf756814053e847c10f3eb003417bc523d30"

REQUIRED_DISTRO_FEATURES += "opengl"
A Contributor commented:

Nitpick: don't put anything else in between SRC_URI entries; DEPENDS go on top, just below SUMMARY/LICENSE/etc., since they could include a custom fetcher.

We try to order variables 'chronologically', with some freedom to make it more understandable. For example, REQUIRED_DISTRO_FEATURES should go near the top, since it's one of the first checks, but for readability it makes more sense to group it with either the DEPENDS that need the check or the PACKAGECONFIG option that has the guarded options.
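
A rough sketch of the ordering being described (values are placeholders, not the actual recipe content):

SUMMARY = "TensorFlow Lite inference runtime"
LICENSE = "Apache-2.0"

# Dependencies directly below the metadata, since they may bring custom fetchers.
DEPENDS = "flatbuffers-tflite"

# The distro-feature check sits next to the dependencies/options it guards.
inherit features_check
REQUIRED_DISTRO_FEATURES += "opengl"

# All SRC_URI entries and their checksums grouped together, nothing in between.
SRC_URI = "..."
SRC_URI[sha256sum] = "..."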

@tdarote (Author) replied:

Updated the order; can you check once?

koenkooi previously approved these changes Feb 17, 2026
@lumag (Contributor) left a comment

Implementing a script that generates the SRC_URI fragment and the corresponding SRCREVs would be a much preferred approach.

#
# 1) In the TensorFlow repo at the exact branch/tag you build (e.g. v2.20.0),
# read the pinned dependency revisions in:
# - WORKSPACE
A Contributor commented:

I don't see any pins here.


A Contributor added:

Okay. I really want a script to extract all that info.

@tdarote (Author) replied:

Provided the script and its usage as comments in the tflite recipe; can you please check?

SRCREV_fp16 = "4dfe081cf6bcd15db339cf2680b9281b8451eeb3"
SRCREV_kleidiai = "dc69e899945c412a8ce39ccafd25139f743c60b1"
SRCREV_pthreadpool = "c2ba5c50bb58d1397b693740cf75fad836a0d1bf"
SRCREV_fxdiv = "63058eff77e11aa15bf531df5dd34395ec3017c8"
A Contributor commented:

Please sort the entries

@tdarote (Author) replied:

We keep the dependency sequence consistent with how TFLite fetches them upstream; please confirm you still want them in sorted order.

@tdarote (Author) added:

Sorted alphanumerically; can you check once?

#
# 3) For each third‑party dependency used by TFLite (xnnpack, cpuinfo,
# pthreadpool, ruy, fp16, fxdiv, gemmlowp, farmhash, CL/Vulkan headers, etc.),
# copy the commit hash (and sha256 if present) exactly as declared upstream,
A Contributor commented:

Which sha256? How to identify deps which are actually used by TFLite?

@tdarote (Author) replied:

Updated in the recipe comments; can you please review?

…nd C API

Apply multiple patches to enhance TensorFlow Lite functionality:
- Add benchmark_model config option
- Fix protobuf dependency in benchmark tools
- Add dynamic OpenCL library loading support
- Exclude subdirectories from all builds
- Force delegate symbols from shared library
- Add version support to C API
- Fix label_image dependencies
- Add install rule for C interface shared library

Signed-off-by: Tushar Darote <tdarote@qti.qualcomm.com>
#
# How to update these SRCREV_* values:
#
# 1) Use the automated script 'extract_tflite_srcrevs_from_github.py' to generate
A Contributor commented:

What prevents you from integrating it as a task? I pointed you to vulkan-cts several times.

@tdarote (Author) replied:

We cannot automatically integrate this as a task because of how dependency management works in TensorFlow Lite builds. While many dependencies are pinned to specific commits from the TensorFlow repository, some dependencies like fft2d come from external repositories (Android's external/fft2d) and may not follow the same pinning strategy.

Specifically:

- TensorFlow-pinned dependencies (farmhash, gemmlowp, cpuinfo, etc.) can be reliably updated using the automated script, since they are all sourced from TensorFlow's workspace files.
- External dependencies like fft2d come from Android's external repository and may not be pinned to the same commit that TensorFlow expects.
- Testing requirement: we need to validate that the specific commit works correctly with our target platform and build configurations.
- Failure risk: if we automatically update all dependencies, we risk introducing breaking changes that would not be caught until later testing phases.
We currently test fft2d with its main branch tip and validate that it works correctly. This manual approach ensures stability while still allowing us to benefit from the automated script for the majority of dependencies that are properly pinned by TensorFlow.

The script is designed to be used as a maintenance tool that developers can run when updating to new TensorFlow versions, but the final validation and selection of the exact commits is done manually to ensure compatibility and stability.
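
For context, the kind of task integration being referred to would look roughly like the sketch below; the script invocation and output path are hypothetical and not part of this PR:

# Hypothetical developer-run task that regenerates the SRCREV fragment; invoked
# manually with: bitbake -c update_srcrevs tensorflow-lite
do_update_srcrevs() {
    python3 ${WORKDIR}/extract_tflite_srcrevs_from_github.py > ${WORKDIR}/tflite-srcrevs.inc
}
addtask update_srcrevs
do_update_srcrevs[nostamp] = "1"
do_update_srcrevs[network] = "1"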

A Contributor replied:

External dependencies like fft2d come from Android's external repository and may not be pinned to the same commit that TensorFlow expects

How come?

  URL https://storage.googleapis.com/mirror.tensorflow.org/github.com/petewarden/OouraFFT/archive/v1.0.tar.gz
  # Sync with tensorflow/workspace2.bzl
  URL_HASH SHA256=5f4dabc2ae21e1f537425d58a49cdca1c49ea11db0d6271e2a4b27e9697548eb

I think this exactly defines the location and the way to fetch fft2d.

@tdarote (Author) replied:

I converted it to use the git protocol instead of the tarball, since the tarball download triggered the QA issue below:

ERROR: tensorflow-lite-2.20.0-r0 do_recipe_qa: QA Issue: tensorflow-lite: SRC_URI uses unstable GitHub/GitLab archives, convert recipe to use git protocol [src-uri-bad]
ERROR: tensorflow-lite-2.20.0-r0 do_recipe_qa: Fatal QA errors were found, failing task.
ERROR: Logfile of failure stored in: /local/mnt/workspace/tushar/5-2-26/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0/temp/log.do_recipe_qa.2045484
ERROR: Task (/local/mnt/workspace/tushar/5-2-26/build/../meta-qcom-distro/recipes-ml/tflite/tensorflow-lite_2.20.0.bb:do_recipe_qa) failed with exit code '1'
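
For reference, the git-based form of that entry looks roughly like this (the commit for the v1.0 tag is left as a placeholder, to be resolved with git ls-remote):

# fft2d fetched via the git protocol to satisfy the src-uri-bad QA check; the
# repository and v1.0 tag come from the mirror URL quoted above.
SRC_URI += "git://github.com/petewarden/OouraFFT.git;protocol=https;nobranch=1;name=fft2d;destsuffix=fft2d"
# Resolve with: git ls-remote https://github.com/petewarden/OouraFFT refs/tags/v1.0
SRCREV_fft2d = "<commit of the v1.0 tag>"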

@tdarote requested a review from lumag on February 17, 2026 15:01