
Request for Code Example: Converting 2D Visual Trace (τ) to 3D SE(3) Trajectory #7

@kelly062001

Description


Hello authors, thank you for the great work on Embodied-R1.
I am currently trying to reproduce the Visual Trace Branch (-V) pipeline described in the paper.

In Section 3.4 Action Executor and the Motion Planning with Object-Centric Visual Traces paragraph, the paper states:

“The 2D trace τ is first mapped to 3D Cartesian coordinates using the pinhole camera model and initial depth information. These discrete 3D points are then interpolated to form a continuous motion trajectory in SE(3) space, which the robot follows for execution.”

I am attempting to implement this process myself, but I want to ensure my implementation matches the one used in the paper.
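For context, my current attempt at the back-projection step is a minimal sketch like the one below. The function name and array shapes are my own choices, not from the paper; I assume a 3×3 intrinsics matrix `K` and per-point metric depths, with the standard pinhole relations x = (u − cx)·z / fx and y = (v − cy)·z / fy:

```python
import numpy as np

def backproject(trace_uv, depth, K):
    """Map 2D trace points (u, v) to 3D camera-frame coordinates
    via the pinhole model (my own sketch, not the authors' code).

    trace_uv: (N, 2) pixel coordinates of the visual trace tau
    depth:    (N,) metric depth values at those pixels
    K:        (3, 3) camera intrinsics matrix
    returns:  (N, 3) points in the camera frame
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = trace_uv[:, 0], trace_uv[:, 1]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=1)
```

One open question in my version is whether you sample depth only at the initial frame (as the quoted paragraph suggests) or re-query it per trace point.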

⭐ Feature Request

Could you please provide:

  1. The code (or pseudocode) used for mapping 2D trace points τ = (u, v) into 3D points (x, y, z) using depth information and camera intrinsics
  2. The interpolation method used to convert discrete 3D points into a continuous SE(3) trajectory
  3. Any example snippet that shows how this 3D trajectory is fed into the controller or planner (e.g., CuRobo or custom executor)
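Regarding item 2, for reference this is the interpolation scheme I am currently trying: a cubic spline on the translation component and Slerp on orientation. The paper does not say where the orientation waypoints come from, so this is purely my assumption (e.g. a fixed or heuristically chosen end-effector orientation), and the function name is hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.transform import Rotation, Slerp

def interpolate_se3(times, positions, quats_xyzw, query_times):
    """Turn discrete SE(3) waypoints into a continuous trajectory
    (my own sketch, not necessarily the paper's method).

    times:       (N,) strictly increasing waypoint timestamps
    positions:   (N, 3) 3D waypoint positions
    quats_xyzw:  (N, 4) waypoint orientations as xyzw quaternions
    query_times: (M,) timestamps to sample the trajectory at
    returns:     (M, 3) positions and (M, 4) xyzw quaternions
    """
    pos_spline = CubicSpline(times, positions, axis=0)  # smooth translation
    slerp = Slerp(times, Rotation.from_quat(quats_xyzw))  # orientation interp
    return pos_spline(query_times), slerp(query_times).as_quat()
```

Is this roughly what the executor does, or does it interpolate in a different parameterization (e.g. screw/twist interpolation) before handing the trajectory to the planner?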

Thank you again for releasing Embodied-R1!
