Replies: 6 comments 5 replies
-
Hi @glucasol, we'll get back to you asap! Cheers!
-
Thank you @samet-akcay!
-
@glucasol, one potential approach would be to use NNCF, which would accelerate the inference. @paularamo was planning to expand the notebook with NNCF support, but she is currently out of office. To get an idea, you could refer to this NNCF notebook. We will hopefully add more to the robotic arm notebook soon.
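For reference, NNCF support in anomalib is typically switched on through the model config rather than hand-written callbacks. A minimal sketch, assuming the anomalib version used by the notebook exposes an `optimization.nncf` block (the exact keys may differ between releases, so check them against your installed version):

```yaml
# Hypothetical excerpt of an anomalib model config enabling NNCF compression.
# Verify the key names against the config.yaml shipped with your anomalib release.
optimization:
  nncf:
    apply: true
```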
-
Let's continue this from the discussions section. Cheers! |
-
@samet-akcay, following your suggestion, I have modified the Callbacks section of the notebook 501a_training_a_model_with_cubes_from_a_robotic_arm.ipynb to include NNCF optimization, so my callbacks list looks like:
and when I run trainer.fit, it returns the following error:
When I run nvcc -V:
When I run nvidia-smi:
-
Thank you @AlexanderDokuchaev and @samet-akcay for your support! By the way, do you know if it's possible to run OpenVINO inference on Jetson devices?
-
What is the motivation for this task?
I am running this notebook - 501a_training_a_model_with_cubes_from_a_robotic_arm.ipynb - and getting ~16 FPS at inference (CPU: Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz; GPU: NVIDIA GeForce GTX 1650 (dGPU)). I want to know if it is possible to improve its performance and get more FPS in OpenVINO inference.
Thanks!
Describe the solution you'd like
I would like to improve the inference performance.
Additional context
No response
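To compare FPS fairly before and after an optimization such as NNCF quantization, it helps to measure throughput the same way each time. A minimal timing sketch; `infer` here is a stand-in for the real inference call (e.g. invoking an OpenVINO compiled model), not anomalib's API:

```python
import time

def measure_fps(infer, inputs, warmup=5):
    """Run `infer` over `inputs` and return frames per second.

    `infer` is a placeholder for the actual inference call; the first
    `warmup` invocations are excluded from timing so that one-off setup
    costs (cache warmup, lazy initialization) do not skew the result.
    """
    for x in inputs[:warmup]:
        infer(x)
    start = time.perf_counter()
    for x in inputs:
        infer(x)
    elapsed = time.perf_counter() - start
    return len(inputs) / elapsed
```

Running this with identical inputs on the FP32 and the NNCF-optimized model gives a like-for-like FPS comparison.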