Hello authors, thank you for the great work on Embodied-R1.
I am currently trying to reproduce the Visual Trace Branch (-V) pipeline described in the paper.
In Section 3.4 (Action Executor), in the "Motion Planning with Object-Centric Visual Traces" paragraph, the paper states:
“The 2D trace τ is first mapped to 3D Cartesian coordinates using the pinhole camera model and initial depth information. These discrete 3D points are then interpolated to form a continuous motion trajectory in SE(3) space, which the robot follows for execution.”
I am attempting to implement this process myself, but I want to ensure my implementation matches the one used in the paper.
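For context, here is a rough sketch of my current attempt at the first step, the pinhole backprojection. The function name and the per-pixel depth lookup are my own assumptions, not taken from the paper:

```python
import numpy as np

def backproject_trace(trace_uv, depth, K):
    """Map 2D pixel trace points (u, v) to 3D camera-frame points
    using the pinhole model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth[v, u].

    trace_uv: (N, 2) array of pixel coordinates (u, v)
    depth:    (H, W) depth map in meters (initial frame)
    K:        (3, 3) camera intrinsic matrix
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    pts = []
    for u, v in trace_uv:
        # Assumption: depth is looked up at the nearest pixel of each trace point.
        z = depth[int(round(v)), int(round(u))]
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts.append((x, y, z))
    return np.asarray(pts)
```

Is this essentially what the paper does, or is there additional filtering (e.g. invalid-depth handling) before interpolation?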
⭐ Feature Request
Could you please provide:
- The code (or pseudocode) used for mapping 2D trace points τ = (u, v) into 3D points (x, y, z) using depth information and camera intrinsics
- The interpolation method used to convert discrete 3D points into a continuous SE(3) trajectory
- Any example snippet that shows how this 3D trajectory is fed into the controller or planner (e.g., CuRobo or custom executor)
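For the second item, my current sketch uses SciPy's `CubicSpline` for positions and `Slerp` between start and end quaternions for orientation. The chord-length parameterization and the endpoint-only slerp are my assumptions; I would like to know whether they match your executor:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.transform import Rotation, Slerp

def interpolate_se3(points_3d, quat_start, quat_end, n_samples=100):
    """Interpolate discrete 3D waypoints into a dense SE(3) trajectory.

    Positions: cubic spline over a chord-length parameterization in [0, 1].
    Orientations: slerp between start/end quaternions (xyzw convention).
    Returns an (n_samples, 4, 4) array of homogeneous poses.
    """
    pts = np.asarray(points_3d, dtype=float)
    # Chord-length parameterization keeps speed roughly uniform along the path.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate(([0.0], np.cumsum(seg)))
    t /= t[-1]
    spline = CubicSpline(t, pts, axis=0)
    slerp = Slerp([0.0, 1.0], Rotation.from_quat([quat_start, quat_end]))

    ts = np.linspace(0.0, 1.0, n_samples)
    poses = np.tile(np.eye(4), (n_samples, 1, 1))
    poses[:, :3, :3] = slerp(ts).as_matrix()
    poses[:, :3, 3] = spline(ts)
    return poses
```

In particular, I am unsure how the end-effector orientation is chosen along the trace, and how the resulting poses are passed to the planner (e.g. as a waypoint list for CuRobo).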
Thank you again for releasing Embodied-R1!