
I'm using my own .mp4 dataset of human manipulation motions and retargeting it to the robot. The visualization shows the correct movements, but the robot appears to be floating and its feet are not touching the ground. What could be the reason? How should I modify it? Could you provide the modified code? My current approach is to multiply the retargeted robot's pose_aa (22×3) by a rotation matrix, but that does not make the robot stand upright. #27

Open
AKshang opened this issue Dec 7, 2024 · 6 comments

Comments


AKshang commented Dec 7, 2024

I'm using my own .mp4 dataset of human manipulation motions and retargeting it to the robot. The visualization shows the correct movements, but the robot appears to be floating and its feet are not touching the ground. What could be the reason? How should I modify it? Could you provide the modified code? My current approach is to multiply the retargeted robot's pose_aa (22×3) by a rotation matrix, but that does not make the robot stand upright.
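As a point of reference (not code from this repository), here is a minimal sketch of the kind of root-frame correction described above, assuming the retargeted motion is stored as pose_aa with shape (N, 22, 3) plus a root translation trans with shape (N, 3), that row 0 of pose_aa is the root orientation in axis-angle form, and that z is up after the correction. The two details that usually matter are that the rotation has to be composed with the root joint only (the other joints are parent-relative) and applied to trans as well, and that a constant vertical offset is still needed afterwards so the lowest contact point sits on the ground. The function name, the +90° rotation about x, and the scipy dependency are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def level_motion(pose_aa, trans, rot=R.from_euler("x", 90, degrees=True),
                 foot_height=None):
    """Hypothetical helper: rotate a retargeted clip upright and drop it to the ground.

    pose_aa : (N, 22, 3) axis-angle per joint, row 0 assumed to be the root orientation.
    trans   : (N, 3) root translation.
    rot     : world-frame correction; +90 deg about x maps a y-up convention to z-up.
    foot_height : optional (N,) lowest foot height per frame (e.g. from forward kinematics).
    """
    pose_aa = pose_aa.copy()
    trans = trans.copy()

    # 1) Compose the correction with the root orientation only; the other 21
    #    joints are parent-relative and must not be rotated.
    root = R.from_rotvec(pose_aa[:, 0])
    pose_aa[:, 0] = (rot * root).as_rotvec()

    # 2) Apply the same world-frame rotation to the root translation.
    trans = rot.apply(trans)

    # 3) Shift vertically so the lowest point over the clip sits at z = 0.
    if foot_height is not None:
        trans[:, 2] -= foot_height.min()
    else:
        trans[:, 2] -= trans[:, 2].min()   # crude fallback: lowest root height

    return pose_aa, trans
```

Whether z is the vertical axis and which rotation is needed depends on the conventions of VIBE's output and of the visualizer; if the robot still floats after such a correction, computing the feet's lowest world-space height with forward kinematics (instead of the root-height fallback) gives a more reliable offset.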


AKshang commented Dec 7, 2024

[screenshot attached]

Maxwell-Zhao commented

How do you convert the mocap data in the format (24, 3) (from SMPL’s 24 joints) into the input data used in the code? Could you explain the process?


AKshang commented Dec 10, 2024

I process an .mp4 video file directly with VIBE (a 3D pose estimation method that outputs pose_aa with shape (frame, 72) in the SMPL coordinate system). Then I run grad_fit_h1.py, which takes the .pkl file generated by VIBE together with shape_optimized_v1.pkl, produced by grad_fit_h1_shape.py (I believe this file contains the shape parameters of an intermediate digital-human model between the robot and the standard SMPL model). I've noticed that only the "pose_aa" and "trans" keys of the digital-human .pkl are actually used, even though it contains several other keys. This step generates amss_test.pkl, which I finally visualize with vis_motion.py. I'm not sure which step I've missed; I've also tried multiplying "pose_aa" by a rotation matrix, but I still can't get the robot to face forward.
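On the (24, 3) question above: VIBE's pose_aa of shape (frame, 72) is just the 24 SMPL joints' axis-angle parameters flattened, so it reshapes to (frame, 24, 3) and back. Below is a hedged sketch of packaging a VIBE result into the two keys this thread says grad_fit_h1.py actually reads (pose_aa and trans); the VIBE key name "pose", the use of joblib, the nesting by clip name, and the zero translation are all assumptions to check against your own files, not the repository's confirmed format:

```python
import joblib            # VIBE typically saves with joblib; use pickle if your file was written that way
import numpy as np

# Load the VIBE output (key names are assumptions -- inspect your own .pkl).
vibe = joblib.load("vibe_output.pkl")
person = vibe[next(iter(vibe))]          # first tracked person
pose_72 = np.asarray(person["pose"])     # (N, 72) SMPL axis-angle
pose_24x3 = pose_72.reshape(-1, 24, 3)   # the per-frame (24, 3) view asked about above

# VIBE estimates a weak-perspective camera, not a metric root translation, so
# this zero trans is only a placeholder; without a real trans the fitted
# motion can easily float or drift.
trans = np.zeros((pose_72.shape[0], 3), dtype=np.float32)

# Package the two keys the thread says grad_fit_h1.py uses. Whether it expects
# the flat (N, 72) or the (N, 24, 3) layout, and whether clips are nested by
# name, has to be matched to that script's loader.
motion = {"my_clip": {"pose_aa": pose_72, "trans": trans}}
joblib.dump(motion, "vibe_for_grad_fit_h1.pkl")
```

The missing metric translation may also be part of the floating described in this issue: with a zero or camera-space trans, the fitted root never gets a consistent ground-relative height, so a vertical offset like the one sketched earlier is needed after fitting.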


Maxwell-Zhao commented Dec 10, 2024

> I process an .mp4 video file directly with VIBE (a 3D pose estimation method that outputs pose_aa with shape (frame, 72) in the SMPL coordinate system). Then I run grad_fit_h1.py, which takes the .pkl file generated by VIBE together with shape_optimized_v1.pkl, produced by grad_fit_h1_shape.py (I believe this file contains the shape parameters of an intermediate digital-human model between the robot and the standard SMPL model). I've noticed that only the "pose_aa" and "trans" keys of the digital-human .pkl are actually used, even though it contains several other keys. This step generates amss_test.pkl, which I finally visualize with vis_motion.py. I'm not sure which step I've missed; I've also tried multiplying "pose_aa" by a rotation matrix, but I still can't get the robot to face forward.

The paper mentions that real-time teleoperation can be achieved, but in the end, teleoperation still requires running grad_fit_h1.py. So how can real-time operation be achieved?


AKshang commented Dec 14, 2024

I haven't considered real-time processing yet. I want to first reproduce the process of retargeting a self-built .mp4 video of human actions to a robot. How can this be done? Can running play_hydra.py achieve this?


AKshang commented Dec 14, 2024

Does your code include functionality to directly retarget human motion from a self-built .mp4 video to a robot? How can this task be accomplished?
