Hi @zubair-irshad,

I am trying to run some evaluations on my own dataset of renders of synthetic ShapeNet models (the same models you trained on), but I am failing to run the inference script. It looks like the object detection pipeline fails (heatmap outputs), and the resulting point cloud reconstruction and bounding boxes are wrong. The renders contain only a single object in the middle. Here are my input color and depth images:
And here is the output from the inference script:
Peaks_output:
Bounding Box output:
Point cloud projection output:
Let me know if you have any ideas how to get inference to work on these types of synthetic renders. Many thanks!
Matias
Thanks for your interest in our work. It could be worth looking into a few things:
What are the camera intrinsics of your ShapeNet renderings? Do they match, or come close to, the camera used to render NOCS Synthetic? Please also see FAQ 1 here to check whether it helps.
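A quick sanity check can make this concrete. Below is a minimal sketch for comparing your render intrinsics against a reference matrix; the values shown (fx = fy = 577.5, cx = 319.5, cy = 239.5 for 640x480 images) are the intrinsics commonly cited for the NOCS CAMERA synthetic split, and the `compare_intrinsics` helper plus the example `k_render` values are illustrative, so please verify the reference numbers against the dataset/repo before relying on them:

```python
import numpy as np

# Intrinsics commonly reported for the NOCS CAMERA (synthetic) split,
# 640x480 renders. Verify against the dataset you actually evaluate on.
NOCS_SYNTHETIC_K = np.array([
    [577.5,   0.0, 319.5],
    [  0.0, 577.5, 239.5],
    [  0.0,   0.0,   1.0],
])

def compare_intrinsics(k_yours, k_ref=NOCS_SYNTHETIC_K, tol=0.05):
    """Report relative differences in focal length and principal point.

    A large mismatch (e.g. a much wider field of view) changes the apparent
    object scale, which can hurt both the detection heatmaps and the depth
    back-projection used to build the point cloud.
    """
    labels = ["fx", "fy", "cx", "cy"]
    yours = [k_yours[0, 0], k_yours[1, 1], k_yours[0, 2], k_yours[1, 2]]
    ref = [k_ref[0, 0], k_ref[1, 1], k_ref[0, 2], k_ref[1, 2]]
    for name, a, b in zip(labels, yours, ref):
        rel = abs(a - b) / b
        flag = "OK" if rel < tol else "MISMATCH"
        print(f"{name}: yours={a:.2f} ref={b:.2f} rel_diff={rel:.1%} [{flag}]")

# Hypothetical intrinsics of your own ShapeNet renders:
k_render = np.array([
    [600.0,   0.0, 320.0],
    [  0.0, 600.0, 240.0],
    [  0.0,   0.0,   1.0],
])
compare_intrinsics(k_render)
```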
In what form is the depth input to the network? I presume your depth is object-centric rather than scene-centric, so there may be a mismatch between how we trained the model and how you are performing inference. Please see the image below for the scene depth we use as input to the model; it can be found under camera_composed_depths here. This is what the original NOCS dataset provides, and we perform training/inference this way to reduce the sim2real gap, since real depth is usually scene-centric rather than object-centric.
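If your renders contain only object depth with zeros elsewhere, one rough workaround is to composite the object depth onto a simple background so the input looks more like the scene-centric camera_composed_depths. The sketch below is a minimal illustration, not our pipeline: it assumes 16-bit single-channel depth PNGs in millimetres with 0 marking empty pixels, and the filename and the 1500 mm background plane are placeholders (a rendered background would be closer to the training distribution):

```python
import numpy as np
import cv2

def compose_scene_depth(object_depth_path, background_mm=1500,
                        out_path="composed_depth.png"):
    """Composite object-only depth onto a constant-depth background.

    Assumes a 16-bit single-channel PNG in millimetres, with 0 marking
    pixels where no object is present (typical for synthetic renders).
    A constant plane is a crude stand-in for the table/room depth found
    in NOCS camera_composed_depths.
    """
    depth = cv2.imread(object_depth_path, cv2.IMREAD_UNCHANGED).astype(np.uint16)
    composed = depth.copy()
    composed[depth == 0] = background_mm  # fill empty pixels with the plane
    cv2.imwrite(out_path, composed)
    return composed

# Hypothetical usage on one of your renders:
# compose_scene_depth("render_0001_depth.png", background_mm=1500)
```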
Note that you may train the model on your data from scratch (highly recommended), but since you are interested in zero-shot inference, it would be good to test the model on data that matches the training distribution.
Which checkpoint are you using to perform inference? Please note that the checkpoints we have released only work for real scenes and may be suboptimal for synthetic scenes (i.e., in the following notebook).