- ROS package that applies the MediaPipe Pose solution
- Tested with Kinect V1 RGB and depth images
- Important addition to MediaPipe Pose: the ability to compute the detected person's 3D position and publish it to ROS topics, so the robot knows its position relative to the detected person (see the sketch after this list)
- Support for using the detected person's bounding box image
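As a rough reference for how the 3D position can be obtained, the sketch below runs MediaPipe Pose on the RGB image, reads the registered depth at the hips midpoint and back-projects it through the pinhole model before publishing a `geometry_msgs/PointStamped`. The topic names, camera frame and intrinsics are assumptions (typical Kinect V1 / freenect defaults), not necessarily what this package's node uses.

```python
#!/usr/bin/env python3
# Minimal sketch: 3D position of a detected person from RGB + registered depth.
# Topic names, frame and intrinsics are assumptions (Kinect V1 / freenect defaults),
# not necessarily what this package uses.
import rospy
import numpy as np
import mediapipe as mp
import message_filters
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import PointStamped

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed Kinect V1 intrinsics

class PersonLocator:
    def __init__(self):
        self.bridge = CvBridge()
        self.pose = mp.solutions.pose.Pose(static_image_mode=False)
        self.pub = rospy.Publisher("person_point", PointStamped, queue_size=1)
        rgb = message_filters.Subscriber("/camera/rgb/image_color", Image)
        depth = message_filters.Subscriber("/camera/depth_registered/image_raw", Image)
        sync = message_filters.ApproximateTimeSynchronizer([rgb, depth], 10, 0.1)
        sync.registerCallback(self.callback)

    def callback(self, rgb_msg, depth_msg):
        rgb = self.bridge.imgmsg_to_cv2(rgb_msg, "rgb8")
        depth = self.bridge.imgmsg_to_cv2(depth_msg)   # uint16 (mm) or float32 (m)
        results = self.pose.process(rgb)
        if not results.pose_landmarks:
            return
        # Use the hips midpoint as the person's reference pixel.
        lm = results.pose_landmarks.landmark
        u = int((lm[23].x + lm[24].x) / 2 * rgb.shape[1])
        v = int((lm[23].y + lm[24].y) / 2 * rgb.shape[0])
        if not (0 <= u < depth.shape[1] and 0 <= v < depth.shape[0]):
            return
        z = float(depth[v, u])
        if depth.dtype == np.uint16:
            z /= 1000.0                                # mm -> m
        if z == 0.0 or np.isnan(z):
            return                                     # no valid depth reading
        # Pinhole back-projection: pixel (u, v) + depth z -> camera-frame XYZ.
        msg = PointStamped()
        msg.header = depth_msg.header
        msg.point.x = (u - CX) * z / FX
        msg.point.y = (v - CY) * z / FY
        msg.point.z = z
        self.pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("person_locator_sketch")
    PersonLocator()
    rospy.spin()
```

The hips midpoint is just a convenient reference landmark; any other landmark, or an average over several, could be back-projected the same way.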
```bash
cd catkin_ws/src
git clone https://github.com/UtBotsAtHome-UTFPR/mediapipe_track.git
cd ..
catkin_make
```
This package depends on freenect_launch and runs on Python 3.8 with the MediaPipe library. Install the Python requirements:
```bash
roscd mediapipe_track/src
pip3 install -r requirements.txt
```
Note: because of permission problems when accessing the models inside the library, MediaPipe is not installed in a virtualenv yet.
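If installation succeeded and the model files are accessible, a short check like the one below should load the pose model without errors:

```python
# Sanity check: MediaPipe imports and can load its pose model files.
import mediapipe as mp

with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    print("MediaPipe Pose model loaded successfully")
```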
First, run freenect:
```bash
roslaunch mediapipe_track freenect.launch
```
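Before starting the pose node, it can help to confirm that the Kinect streams are actually being published. The topic names below are the usual freenect defaults and may need adjusting:

```python
# Wait for one RGB and one depth frame from freenect (default topic names assumed).
import rospy
from sensor_msgs.msg import Image

rospy.init_node("kinect_check", anonymous=True)
for topic in ["/camera/rgb/image_color", "/camera/depth_registered/image_raw"]:
    msg = rospy.wait_for_message(topic, Image, timeout=10.0)
    print(f"{topic}: {msg.width}x{msg.height}, encoding={msg.encoding}")
```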
Then, to run the MediaPipe pose estimation and the 3D point computation:
```bash
roslaunch mediapipe_track body_pose_and_points.launch
```
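Other nodes can then consume the person's 3D position with an ordinary subscriber, as in the sketch below. The topic name and message type are assumptions; check the actual names with `rostopic list` after launching.

```python
# Example consumer of the person's 3D position.
# "/person_point" and PointStamped are assumptions; verify with `rostopic list`.
import rospy
from geometry_msgs.msg import PointStamped

def on_point(msg):
    p = msg.point
    rospy.loginfo("Person at x=%.2f y=%.2f z=%.2f (frame %s)",
                  p.x, p.y, p.z, msg.header.frame_id)

rospy.init_node("person_point_listener")
rospy.Subscriber("/person_point", PointStamped, on_point)
rospy.spin()
```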
To run only the MediaPipe pose estimation:
```bash
rosrun mediapipe_track body_pose.py
```
To view the 3D map with the published point that represents the detected person's position, run RViz with:
```bash
roslaunch mediapipe_track rviz.launch
```