I'm using my own .mp4 human motion dataset and retargeting it to the robot. The visualization shows the correct movements, but the robot floats and its feet do not touch the ground. What could be the reason, and how should I fix it? Could you provide modified code? My current approach is to multiply the retargeted robot's pose_aa (22×3) by a rotation matrix, but this does not make the robot stand upright.
#27 · Open · AKshang opened this issue on Dec 7, 2024 · 6 comments
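One note on the rotation approach in the question: multiplying all 22 axis-angle vectors by a rotation matrix will not reorient the character, because joints 1–21 are rotations relative to their parents; only the root (index 0) and the root translation live in world coordinates. A minimal sketch of applying a global rotation correctly, assuming pose_aa has shape (N, 22, 3) with the root at index 0 and trans has shape (N, 3); the function name and the example axis/angle below are illustrative, not from the repo:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotate_motion(pose_aa, trans, rot):
    """Apply a global rotation `rot` (a scipy Rotation) to a motion clip.

    Only the root axis-angle and the root translation change; the other
    21 joint rotations are relative to their parents and must be left alone.
    """
    pose_aa = pose_aa.copy()
    root = R.from_rotvec(pose_aa[:, 0])        # (N,) stacked root rotations
    pose_aa[:, 0] = (rot * root).as_rotvec()   # compose in the world frame
    trans = rot.apply(trans)                   # rotate the translations too
    return pose_aa, trans

# Example: rotate -90 degrees about x to convert a y-up (camera) frame to a
# z-up (simulator) frame. The axis and sign here are assumptions -- check
# the conventions of your own pipeline.
# pose_aa, trans = rotate_motion(pose_aa, trans,
#                                R.from_euler("x", -90, degrees=True))
```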
I process the .mp4 video directly with VIBE (a 3D pose estimation method that outputs pose_aa of shape (frames, 72) in the SMPL coordinate system). Then I run grad_fit_h1.py, which uses the pkl file generated by VIBE together with shape_optimized_v1.pkl produced by grad_fit_h1_shape.py (I believe this file holds the shape parameters of an intermediate digital-human model between the robot and a standard SMPL body). I've noticed that only the "pose_aa" and "trans" keys of the digital-human pkl are used, even though the file contains several other keys. This step generates amss_test.pkl, which I finally visualize with vis_motion.py. I'm not sure which step I've missed. I've also tried multiplying "pose_aa" by a rotation matrix, but I still can't get the robot to face forward.
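A likely cause of the floating: VIBE's translation is camera-relative and carries no ground-plane information, so nothing anchors the feet to z = 0. One simple post-hoc fix is to shift the whole clip vertically so its lowest point touches the floor. A minimal sketch, assuming z-up coordinates and an (N, J, 3) array of world-space joint positions for the same frames; `joint_positions` is an assumed input, produced by whatever forward-kinematics routine your pipeline exposes:

```python
import numpy as np

def ground_motion(trans, joint_positions, foot_clearance=0.0):
    """Shift the whole clip vertically so its lowest point touches z = 0.

    trans:           (N, 3) root translations.
    joint_positions: (N, J, 3) world-space joint positions, z up.
    """
    lowest = joint_positions[..., 2].min()   # lowest z over all frames/joints
    trans = trans.copy()
    trans[:, 2] -= lowest - foot_clearance   # move the root down (or up)
    return trans
```

Shifting by the clip-wide minimum, rather than per frame, avoids introducing vertical jitter, at the cost of a slight float in frames where both feet leave the ground.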
The paper mentions that real-time teleoperation can be achieved, but teleoperation still seems to require running grad_fit_h1.py, which is an offline step. How, then, is real-time operation achieved?
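For context on how an offline clip optimizer like grad_fit_h1.py could run interactively: one standard pattern is to fit frame by frame, warm-starting each solve from the previous frame's solution so only a few gradient steps are needed per frame. Whether the paper's pipeline actually works this way is not confirmed here; the toy below only illustrates the warm-start idea with a stand-in least-squares objective:

```python
# A toy illustration of per-frame warm-starting, NOT the repo's code.
import numpy as np

def fit_frame(target, init, lr=0.4, n_iters=5):
    """Stand-in for a per-frame pose-fitting objective: minimize
    ||x - target||^2 by gradient descent from the warm start `init`."""
    x = init.copy()
    for _ in range(n_iters):
        x -= lr * 2.0 * (x - target)   # gradient of the squared error
    return x

# Smooth fake "pose stream": consecutive targets change only slightly,
# which is exactly why warm-starting converges in a few steps.
stream = np.cumsum(0.01 * np.random.randn(100, 3), axis=0)
solution = np.zeros(3)
for target in stream:
    solution = fit_frame(target, solution)  # warm start from the last frame
```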
I haven't considered real-time processing yet. First I want to reproduce the pipeline of retargeting a self-recorded .mp4 video of human motion to the robot. How can this be done? Does running play_hydra.py achieve it?