Enable TFLite model parsing with FlatBuffer support and comprehensive TFLite enhancements #146
tdarote wants to merge 4 commits into qualcomm-linux:main
Conversation
recipes-ml/tflite/files/0001-lite-Add-config-option-to-enable-benchmark_model.patch
recipes-ml/tflite/files/0002-cmake-lite-tools-benchmark-require-protobuf-through-.patch
recipes-ml/tflite/files/0003-feat-tflite-Add-dynamic-OpenCL-library-loading-suppo.patch
recipes-ml/tflite/files/0004-cmake-Exclude-subdirectories-from-all-builds.patch
Waiting for the patches to be posted upstream.
recipes-ml/tflite/files/0005-c-Force-delegate-symbols-from-shared-library.patch
lumag left a comment:
Please fix the commit subject for the flatbuffers patch.
recipes-ml/tflite/files/0002-cmake-lite-tools-benchmark-require-protobuf-through-.patch
Force-pushed from 175b3f6 to 3ae95c6
Updated upstream status to submitted.
recipes-ml/tflite/files/0008-cmake-Add-install-rule-for-c-interface-shared-librar.patch
recipes-ml/tflite/files/0002-cmake-lite-tools-benchmark-require-protobuf-through-.patch
recipes-ml/tflite/files/0002-cmake-lite-tools-benchmark-require-protobuf-through-.patch
Done.
Okay. Upstream (meta-oe) uses flatbuffers 25.12.19. To prevent possible issues with other packages that might depend on that version, we need to provide a separate version of the recipe (flatbuffers-tflite.bb).
Done.
This doesn't build on my test system:
Please explain why this flatbuffers recipe is needed, why the one from meta-oe is not enough, the differences, etc., as part of your git commit message.
If we use the flatbuffers version from meta-oe, we hit the following configure error:
/local/mnt/workspace/tushar/kas-tflite-build/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0.qcom/sources/tensorflow-lite-2.20.0.qcom/tensorflow/compiler/mlir/lite/schema/schema_generated.h:25:41: error: static assertion failed: Non-compatible flatbuffers version included
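One way to keep the two flatbuffers versions from clashing is a parallel recipe pinned to the release that TFLite's generated schema header expects. A minimal sketch of what flatbuffers-tflite.bb could look like (the checksum and SRCREV placeholders, the branch name, and the EXTRA_OECMAKE option are assumptions, not the actual submitted recipe):

```bitbake
# flatbuffers-tflite.bb -- hypothetical sketch of a parallel, pinned recipe
SUMMARY = "FlatBuffers pinned to the version required by TensorFlow Lite"
LICENSE = "Apache-2.0"
LIC_FILES_CHKSUM = "file://LICENSE;md5=..."

SRC_URI = "git://github.com/google/flatbuffers.git;protocol=https;branch=master"
# Pin to the release tag matching the schema_generated.h version check
SRCREV = "..."

S = "${WORKDIR}/git"
inherit cmake

EXTRA_OECMAKE += "-DFLATBUFFERS_BUILD_TESTS=OFF"
```

The key point of the separate recipe name is that the meta-oe flatbuffers package and its dependents remain untouched, while tensorflow-lite can DEPENDS on flatbuffers-tflite explicitly.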
Didn't really build for me; do_configure is still trying to reach the network:
Hi @ricardosalveti, could you please share the detailed setup steps and your testing procedure? We are not able to reproduce the fetcher issue on our side.

From the do_configure logs, we can see that CMake is still performing a download during configuration. We also tried with BB_NO_NETWORK = "1", but the download still occurs at configure time. It appears that CMake is attempting to fetch a ZIP file, and we are unable to prevent this network access.

Right now, we are working on avoiding this download based on the references found in the do_configure logs. However, we are running into issues because the ZIP path used by CMake is not a proper source location. Even if we add the GitHub source link in SRC_URI, the configure step still tries to download files. Could you please clarify:
Your inputs will help us reproduce and address the problem more effectively.
With the latest patch, we were able to avoid the source download during the configuration stage by using the details from the do_configure logs. |
OECMAKE_TARGET_COMPILE += "benchmark_model label_image"

EXTRA_OECMAKE += " \
    -DFETCHCONTENT_FULLY_DISCONNECTED=ON \
This is set by cmake.bbclass: https://git.openembedded.org/openembedded-core/tree/meta/classes-recipe/cmake.bbclass#n258
Thanks, noted. Since cmake.bbclass already sets this flag, I'll remove it from the recipe.
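With FETCHCONTENT_FULLY_DISCONNECTED=ON, CMake expects each dependency's sources to already be on disk, which a recipe can arrange by fetching them itself and pointing FetchContent at the unpack location. A rough sketch of that pattern (the dependency name, URL, and SRCREV here are illustrative placeholders, not the recipe's actual values):

```bitbake
# Hypothetical: fetch a FetchContent dependency through bitbake instead of CMake
SRC_URI += "git://github.com/google/ruy.git;protocol=https;branch=master;name=ruy;destsuffix=ruy"
SRCREV_ruy = "0000000000000000000000000000000000000000"

# Tell CMake's FetchContent where the pre-fetched sources live;
# the variable suffix is the dependency name uppercased
EXTRA_OECMAKE += "-DFETCHCONTENT_SOURCE_DIR_RUY=${WORKDIR}/ruy"
```

When FETCHCONTENT_SOURCE_DIR_&lt;NAME&gt; is set, FetchContent skips its own download step for that dependency entirely, so configure no longer needs network access for it.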
# NOTE: Dependencies are managed manually
# To update dependencies:
# 1. For each repo, run: git ls-remote <url> HEAD
# 2. Get the latest commit hash
This comment is still misleading and far from being true.
Is this okay to update?
Proposed: # 2. Read dependency revisions from TFLite's Bazel configuration files
And what would that mean? If you don't want to implement a script, provide the steps for the developer to follow.
Updated the steps for updating the dependency hash values; can you please review?
# Source archive checksum for the main TensorFlow fetch
SRC_URI[sha256sum] = "cfc7749b96f63bd31c3c42b5c471bf756814053e847c10f3eb003417bc523d30"

REQUIRED_DISTRO_FEATURES += "opengl"
Nitpick: don't put anything else in between SRC_URI entries; DEPENDS go on top, just below SUMMARY/LICENSE/etc., since they could include a custom fetcher.
We try to order variables 'chronologically', with some freedom to make it more understandable. For example, REQUIRED_DISTRO_FEATURES should go near the top, since it's one of the first checks, but for readability it makes more sense to group it with either the DEPENDS that need the check or the PACKAGECONFIG option that has the guarded options.
Updated the order; can you check?
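The ordering being asked for can be sketched roughly like this (a hypothetical skeleton, not the actual recipe; the values are placeholders):

```bitbake
SUMMARY = "TensorFlow Lite inference runtime"
LICENSE = "Apache-2.0"

# DEPENDS directly below SUMMARY/LICENSE, since they may provide a custom fetcher
DEPENDS = "flatbuffers-tflite protobuf-native"

# Early distro check, grouped with the dependencies that need it
inherit features_check
REQUIRED_DISTRO_FEATURES += "opengl"

# All SRC_URI entries kept together, with nothing interleaved between them
SRC_URI = "..."
SRC_URI[sha256sum] = "..."
```

Note that REQUIRED_DISTRO_FEATURES only takes effect when the recipe inherits features_check, which is why it is placed next to that inherit.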
lumag left a comment:
Implementing a script that provides SRC_URI fragment and corresponding SRCREV would be a much preferred approach.
#
# 1) In the TensorFlow repo at the exact branch/tag you build (e.g. v2.20.0),
#    read the pinned dependency revisions in:
#    - WORKSPACE
Okay. I really want a script to extract all that info.
Provided the script and its usage as comments in the tflite recipe; can you please check?
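Such a script could, for instance, scan TensorFlow's Bazel workspace files for pinned GitHub archive URLs and emit the corresponding SRCREV_* lines. A minimal sketch, assuming the pins appear as 40-hex-digit commit hashes embedded in archive URLs (the sample input and function names are illustrative, not the submitted script):

```python
import re

# Assumed format: tf_http_archive(...) blocks in tensorflow/workspace2.bzl
# whose GitHub archive URLs embed the commit, e.g.
#   .../github.com/google/ruy/archive/<40-hex-sha>.zip
ARCHIVE_RE = re.compile(
    r'github\.com/[\w.-]+/(?P<repo>[\w.-]+)/archive/(?P<rev>[0-9a-f]{40})'
)

def extract_srcrevs(workspace_text):
    """Return {repo_name: commit_hash} for every pinned GitHub archive URL."""
    revs = {}
    for m in ARCHIVE_RE.finditer(workspace_text):
        revs.setdefault(m.group('repo'), m.group('rev'))
    return revs

def as_bitbake(revs):
    """Render the mapping as SRCREV_<name> assignments, sorted by name."""
    return '\n'.join(
        f'SRCREV_{name.replace("-", "_")} = "{rev}"'
        for name, rev in sorted(revs.items())
    )

# Illustrative input resembling a workspace2.bzl entry
sample = '''
tf_http_archive(
    name = "ruy",
    urls = ["https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/ruy/archive/3286a34cc8de6149ac6844107dfdffac91531e72.zip"],
)
'''
print(as_bitbake(extract_srcrevs(sample)))
```

Tarball-pinned dependencies like fft2d (tag URL plus URL_HASH rather than a commit in the path) would not match this pattern, which is consistent with handling them manually as described above.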
SRCREV_fp16 = "4dfe081cf6bcd15db339cf2680b9281b8451eeb3"
SRCREV_kleidiai = "dc69e899945c412a8ce39ccafd25139f743c60b1"
SRCREV_pthreadpool = "c2ba5c50bb58d1397b693740cf75fad836a0d1bf"
SRCREV_fxdiv = "63058eff77e11aa15bf531df5dd34395ec3017c8"
We keep the dependency sequence consistent with how TFLite fetches them upstream; do you still need them in sorted order?
Sorted alphanumerically; can you check?
#
# 3) For each third-party dependency used by TFLite (xnnpack, cpuinfo,
#    pthreadpool, ruy, fp16, fxdiv, gemmlowp, farmhash, CL/Vulkan headers, etc.),
#    copy the commit hash (and sha256 if present) exactly as declared upstream,
Which sha256? How do you identify the deps that are actually used by TFLite?
Updated in the recipe comments; can you please review?
…nd C API

Apply multiple patches to enhance TensorFlow Lite functionality:
- Add benchmark_model config option
- Fix protobuf dependency in benchmark tools
- Add dynamic OpenCL library loading support
- Exclude subdirectories from all builds
- Force delegate symbols from shared library
- Add version support to C API
- Fix label_image dependencies
- Add install rule for C interface shared library

Signed-off-by: Tushar Darote <tdarote@qti.qualcomm.com>
#
# How to update these SRCREV_* values:
#
# 1) Use the automated script 'extract_tflite_srcrevs_from_github.py' to generate
What prevents you from integrating it as a task? I pointed you to vulkan-cts several times.
The reason we cannot automatically integrate this as a task is the nature of dependency management in TensorFlow Lite builds. While many dependencies are pinned to specific commits in the TensorFlow repository, some, like fft2d, come from external repositories (Android's external/fft2d) and may not follow the same pinning strategy.
Specifically:
- TensorFlow-pinned dependencies (farmhash, gemmlowp, cpuinfo, etc.) can be reliably updated with the automated script, since they are all sourced from TensorFlow's workspace files.
- External dependencies like fft2d come from Android's external repository and may not be pinned to the commit that TensorFlow expects.
- Testing requirement: we need to validate that the specific commit works correctly with our target platform and build configurations.
- Failure risk: if we automatically update all dependencies, we risk introducing breaking changes that would not be caught until later testing phases.
We currently test fft2d with its main branch tip and validate that it works correctly. This manual approach ensures stability while still letting us benefit from the automated script for the majority of dependencies that are properly pinned by TensorFlow.
The script is intended as a maintenance tool that developers run when updating to a new TensorFlow version; the final validation and selection of the exact commits is done manually to ensure compatibility and stability.
External dependencies like fft2d come from Android's external repository and may not be pinned to the same commit that TensorFlow expects
How come?
URL https://storage.googleapis.com/mirror.tensorflow.org/github.com/petewarden/OouraFFT/archive/v1.0.tar.gz
# Sync with tensorflow/workspace2.bzl
URL_HASH SHA256=5f4dabc2ae21e1f537425d58a49cdca1c49ea11db0d6271e2a4b27e9697548eb
I think this exactly defines the location and the way to fetch fft2d.
I converted it to use the git protocol instead of the tarball, because I was hitting the following QA issue with the tar download:
ERROR: tensorflow-lite-2.20.0-r0 do_recipe_qa: QA Issue: tensorflow-lite: SRC_URI uses unstable GitHub/GitLab archives, convert recipe to use git protocol [src-uri-bad]
ERROR: tensorflow-lite-2.20.0-r0 do_recipe_qa: Fatal QA errors were found, failing task.
ERROR: Logfile of failure stored in: /local/mnt/workspace/tushar/5-2-26/build/tmp/work/armv8-2a-qcom-linux/tensorflow-lite/2.20.0/temp/log.do_recipe_qa.2045484
ERROR: Task (/local/mnt/workspace/tushar/5-2-26/build/../meta-qcom-distro/recipes-ml/tflite/tensorflow-lite_2.20.0.bb:do_recipe_qa) failed with exit code '1'
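The conversion that satisfies the src-uri-bad check looks roughly like the following (a sketch; the branch name and SRCREV are placeholders, not the values used in the recipe):

```bitbake
# Before: unstable GitHub archive (triggers the src-uri-bad QA error)
# SRC_URI += "https://github.com/petewarden/OouraFFT/archive/v1.0.tar.gz;name=fft2d"

# After: git protocol, pinned by SRCREV instead of an archive checksum
SRC_URI += "git://github.com/petewarden/OouraFFT.git;protocol=https;branch=master;name=fft2d;destsuffix=fft2d"
SRCREV_fft2d = "0000000000000000000000000000000000000000"
```

The QA check flags GitHub/GitLab archive URLs because their tarballs can be regenerated with different checksums; pinning a commit via the git fetcher avoids that instability.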
This pull request enables TFLite model parsing capabilities by integrating FlatBuffer support and implements comprehensive enhancements to the TFLite recipe.
Key Changes:
-> FlatBuffer Integration for TFLite:
- Added flatbuffer bbappend file to enable TFLite's schema handling capabilities for model parsing
-> Comprehensive TFLite Enhancements:
- Added benchmark_model config option
- Fixed protobuf dependency in benchmark tools
- Added dynamic OpenCL library loading support
- Excluded subdirectories from all builds
- Forced delegate symbols from shared library
- Added version support to C API
- Fixed label_image dependencies
- Added install rule for C interface shared library