This project demonstrates how to use MediaPipe for hand, pose, and face landmark detection in videos.
- Detects hand, pose, and face landmarks using MediaPipe.
- Displays the detected landmarks overlaid on the video frames.
- Supports resizing frames to fit the screen while maintaining aspect ratio.
- Displays the name of the video file as a text overlay on the frames.
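As a rough illustration of what the detection pipeline looks like, the sketch below uses MediaPipe's Holistic solution to find face, pose, and hand landmarks and draw them on each frame. It is not necessarily the project's exact implementation in `main.py`; the video path and window handling are placeholders.

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture("videos/Test.mp4")
with mp_holistic.Holistic() as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # Overlay whichever landmark sets were detected (draw_landmarks skips None results).
        mp_drawing.draw_landmarks(frame, results.face_landmarks)
        mp_drawing.draw_landmarks(frame, results.pose_landmarks,
                                  mp_holistic.POSE_CONNECTIONS)
        mp_drawing.draw_landmarks(frame, results.left_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)
        mp_drawing.draw_landmarks(frame, results.right_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)
        cv2.imshow("MediaPipe", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```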
- Python 3.x
- OpenCV
- MediaPipe
- Clone this repository:
git clone https://github.com/cagatay-softgineer/MediaPipe.git
- Install the required libraries:
pip install opencv-python mediapipe websockets
- Place your video files in the `videos` directory.
- Update the video path in `main.py`:
useMediaPipe("videos/Test.mp4")
- Run the `main.py` script:
python main.py
- Press 'q' to exit the program.
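The frame resizing and filename overlay listed in the features could be implemented roughly as below. The helper names and default screen size are made up for illustration and are not the project's actual functions.

```python
import os
import cv2

def fit_to_screen(frame, max_w=1280, max_h=720):
    # Scale the frame down to fit the screen while preserving its aspect ratio.
    h, w = frame.shape[:2]
    scale = min(max_w / w, max_h / h, 1.0)
    return cv2.resize(frame, (int(w * scale), int(h * scale)))

def overlay_filename(frame, video_path):
    # Draw the video file name in the top-left corner of the frame.
    cv2.putText(frame, os.path.basename(video_path), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    return frame
```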
- Navigate to the WebSocketServer folder.
- Run the `websoc.exe` executable to start the WebSocket server.
- Create a script that runs `websoc.py`:
python3 websoc.py # Use python if not working with python3
- Save the file with a `.sh` extension, for example `run_websoc.sh`.
- Make the script executable:
- Open Terminal.
- Navigate to the directory where you saved run_websoc.sh.
- Make the script executable by running the following command:
chmod +x run_websoc.sh
- Run the script:
./run_websoc.sh
useMediaPipe("videos/Test.mp4",Send2WSS=True)
- You can customize the screen width and height in the `useMediaPipe` function to adjust the size of the displayed frames.
- Additional customization can be done by modifying the code in `modelUsageTests.py` according to your requirements.
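For illustration only, a call with hypothetical keyword names might look like the following; check the actual `useMediaPipe` signature in `main.py` for the real parameter names.

```python
# Hypothetical parameter names -- verify against the useMediaPipe definition in main.py.
useMediaPipe("videos/Test.mp4", screen_width=1280, screen_height=720)
```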
This project is licensed under the MIT License - see the LICENSE file for details.