
Model conversion from onnx to blob format #74

Open
martin0496 opened this issue Sep 17, 2024 · 17 comments
Labels
bug Something isn't working

Comments

@martin0496

Hi all,

I am trying to convert a model from ONNX to blob format.
I am attempting to convert the DepthAnythingV2 small model (https://huggingface.co/onnx-community/depth-anything-v2-small/blob/main/onnx/model_fp16.onnx) using BlobConverter (https://blobconverter.luxonis.com/), but the conversion seems to run indefinitely: it has been going for several hours with no progress.
In an attempt to solve the issue I tried hosting the blobconverter library locally, but unfortunately got the following error:

requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2022.1&no_cache=False

I would greatly appreciate any assistance or guidance you can provide. Thank you in advance for your help.
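For what it's worth, the error URL suggests the locally hosted library is still delegating the compile request to the hosted Luxonis service. A minimal sketch of how that URL is assembled (the `compile_url` helper is hypothetical, built only from the parameters visible in the error above):

```python
# Hypothetical helper reconstructing the compile endpoint URL seen in the
# 400 error; a local install of the library alone would not avoid this
# web request unless it is pointed at a different backend.
from urllib.parse import urlencode

BASE_URL = "https://blobconverter.luxonis.com/compile"

def compile_url(version: str = "2022.1", no_cache: bool = False) -> str:
    """Build the query URL the converter sends the model to."""
    return f"{BASE_URL}?{urlencode({'version': version, 'no_cache': no_cache})}"

print(compile_url())
# → https://blobconverter.luxonis.com/compile?version=2022.1&no_cache=False
```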

@martin0496 martin0496 added the bug Something isn't working label Sep 17, 2024
@jkbmrz

jkbmrz commented Sep 17, 2024

Hi,

Could you also try the model from here? I tried it some time ago and it worked. If it doesn't work, we'll have to look into it further.

@martin0496
Author

Hi @jkbmrz,
Thank you very much for your reply.
I also tried that version, but it keeps running indefinitely. I also tried ONNX --> OpenVINO --> blob and the same happens.
Are there any specific parameters you used during the conversion?

@jkbmrz

jkbmrz commented Sep 17, 2024

Okay, I see. I actually used this tool instead of blobconverter. Can you please try it and report back?

@Link199292

Link199292 commented Sep 17, 2024

Hi @jkbmrz, when I try to build the Docker image I get the following error:

ERROR [base  8/17] RUN sed -i 's/libtbb2/libtbbmalloc2/g'     /opt/intel/install_dependencies/install_openvin  0.2s
------
 > [base  8/17] RUN sed -i 's/libtbb2/libtbbmalloc2/g'     /opt/intel/install_dependencies/install_openvino_dependencies.sh &&     bash /opt/intel/install_dependencies/install_openvino_dependencies.sh -y:
0.212 sed: can't read /opt/intel/install_dependencies/install_openvino_dependencies.sh: No such file or directory

I tried to create a symbolic link from /opt/intel/install_dependencies/install_openvino_dependencies.sh to modelconverter/docker/rvc2. It seems the script is not correctly extracted from the downloaded openvino_2022_3_vpux_drop_patched.tar.gz located in modelconverter/docker/extra_packages.

Thanks for your help.

@martin0496
Author

Hi @jkbmrz, we are still trying to fix the Docker initialization error, but with no success so far. We would appreciate any advice or guidance to make modelconverter work.
In the meantime, would it be possible to send me the converted small version of DepthAnything, if the converter is working fine for you?

Thank you very much for your help.

@jkbmrz

jkbmrz commented Sep 18, 2024

@Link199292 @martin0496 we are looking into the issue. In the meantime, I'm sharing the model we've converted before (HERE).

@martin0496
Author

@jkbmrz Thank you very much for all the support.

@HonzaCuhel
Contributor

@Link199292, @martin0496 I tried to build the modelconverter on Ubuntu manually, and it was successful. What are the specs of your device?

One thing you could do is build the Dockerfile only up to the erroneous step (by commenting out the problematic step and everything after it) and then browse the filesystem inside the Docker image to check whether the previous step:

RUN tar xvf openvino_2022_3_vpux_drop_patched.tar.gz -C /opt/intel/ --strip-components 1

did extract the OpenVINO archive and that the install_openvino_dependencies.sh bash script is in the /opt/intel/install_dependencies/ folder.
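As a hypothetical illustration of why the script might end up elsewhere: `--strip-components 1` drops the first path component of every archive member, so if the patched drop's top-level layout differs from what the Dockerfile expects, install_dependencies/ lands in a different location (or nowhere), which would explain the "No such file or directory" error. A small sketch of that semantics (the member names below are made up):

```python
# Model of `tar --strip-components=<n>` path handling: drop the first n
# path components; members with nothing left after stripping are skipped,
# as tar does.
def stripped_paths(member_names, strip=1):
    """Return member paths as tar would extract them with --strip-components."""
    result = []
    for name in member_names:
        parts = name.split("/")
        if len(parts) > strip:
            result.append("/".join(parts[strip:]))
    return result

members = [  # hypothetical archive layout with one top-level directory
    "openvino_drop/install_dependencies/install_openvino_dependencies.sh",
    "openvino_drop/setupvars.sh",
]
print(stripped_paths(members))
# → ['install_dependencies/install_openvino_dependencies.sh', 'setupvars.sh']
```

If the real archive has no single top-level directory (or a nested one), the stripped paths will not match the locations the later Dockerfile steps assume.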

To browse the filesystem of the Docker image, use this command:

docker run -it luxonis/modelconverter-rvc2 /bin/bash

Best,
Jan

@Link199292

Thanks for your reply @HonzaCuhel. I am running Ubuntu 22.04 on Windows 11 through WSL. While trying to build the modelconverter on Windows I hit an issue at an earlier build step, which is why I switched to Ubuntu. The specs of my device are the following:

Motherboard: MAG B650
Processor: AMD Ryzen 7 7800X3D
Graphics: NVIDIA GeForce RTX 4090
RAM: 32 GB DDR5

I'll also check for the steps you suggested.

Thanks for your support.

@HonzaCuhel
Contributor

I see. My guess would be that the issue lies in the extraction of the archive: either it's not extracted, or it's extracted to a different location, so you might have to change some of the commands in the Dockerfile to get it to work.

Best,
Jan

@martin0496
Author

martin0496 commented Oct 8, 2024

@jkbmrz @HonzaCuhel

We really appreciate your help and guidance. The model is working fine, but we are still not able to do the conversion ourselves with the Dockerfile.
Would it be possible to ask you for conversions at these resolutions:
1. 320x320
2. 256x256
3. 518x518 using UINT8

Thank you very much for all your patience and support.

@HonzaCuhel
Contributor

@martin0496 sure, we'll convert the model for you and send it to you!

Best,
Jan

@martin0496
Author

@HonzaCuhel Thank you so much

@martin0496
Author

Hi @HonzaCuhel, any updates on the converted models?
Thank you in advance.

@HonzaCuhel
Contributor

Hi @martin0496, @Link199292,

I apologize for the delay in getting back to you. I exported the model for the following shapes: 322x322 and 518x518. The input dimensions must be divisible by 14, which is why I chose 322x322 as the shape closest to 320x320. Unfortunately, exporting the 518x518 model to a UINT8 blob is not possible, so I converted that model without quantization. Furthermore, the 252x252 conversion fails because the export from ONNX to blob never finishes; this is an issue we'll investigate further.
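The divisibility constraint can be checked with a quick sketch (assuming the factor of 14 comes from the ViT backbone's patch size, which is consistent with the substituted shapes above):

```python
# Round a requested input size to the nearest multiple of 14 (the patch size
# assumed for the DepthAnythingV2 ViT backbone).
def nearest_multiple(size: int, base: int = 14) -> int:
    return base * round(size / base)

for requested in (320, 256, 518):
    print(f"{requested} -> {nearest_multiple(requested)}")
# → 320 -> 322, 256 -> 252, 518 -> 518
```

This matches the choices in the comment: 322 for the requested 320, 252 for the requested 256, and 518 unchanged since it is already a multiple of 14.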

The converted blobs can be found here.

Best,
Jan

@martin0496
Author

@HonzaCuhel

Thank you very much for the help.

Best regards

@HonzaCuhel
Contributor

No problem! Glad to help!

Kind regards,
Jan
