Gesture-to-Text for Differently Abled

This project focuses on hand gesture recognition using OpenCV and Python, with an emphasis on real-time video processing. The goal is to detect and classify hand gestures, addressing challenges such as pose variation, lighting conditions, and noise.

The process involves multiple stages: image acquisition, pre-processing, feature extraction, and gesture recognition. Image acquisition captures video frames from a webcam, and pre-processing includes tasks like colour filtering, smoothing, and thresholding. Feature extraction involves identifying hand contours, while gesture recognition classifies gestures based on these features. Detecting the hand in real-time video poses challenges like unstable brightness, noise, poor resolution, and contrast. Segmentation and edge detection, considering colour, hand posture, and shape, are crucial for accurate gesture recognition.
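A minimal sketch of this pipeline is shown below, assuming a standard webcam (device 0) and an HSV skin-colour range; the exact colour bounds and parameters are illustrative, not necessarily the values used in this repository.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # image acquisition from the webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Pre-processing: colour filtering in HSV, smoothing, thresholding
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    mask = cv2.GaussianBlur(mask, (5, 5), 0)
    _, thresh = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

    # Feature extraction: take the largest contour as the hand
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        cv2.drawContours(frame, [hand], -1, (0, 255, 0), 2)

    cv2.imshow("Gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```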

The project tackles two main issues: hand detection and gesture recognition. Hand detection relies on webcam input and must cope with variations in brightness, noise, and contrast. Gesture recognition involves segmentation, edge detection, and consideration of colour, hand posture, and shape in real time. Additionally, designing signs suitable for one-handed use involves extracting hand contours and analysing convexity defects, which require a depth calculation to identify the valleys between fingers.
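The sketch below illustrates the convexity-defect step, assuming `hand` is a contour obtained as in the pipeline above; the 90-degree angle and depth thresholds are common heuristics for finger counting, not necessarily the exact values used in this project.

```python
import cv2
import numpy as np

def count_fingers(hand):
    """Estimate the number of raised fingers from a hand contour."""
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    fingers = 0
    for i in range(defects.shape[0]):
        s, e, f, d = defects[i, 0]
        start, end, far = hand[s][0], hand[e][0], hand[f][0]

        # Depth calculation: side lengths of the triangle formed by the defect
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(far - end)
        # Cosine rule gives the angle at the defect (farthest) point
        angle = np.arccos((b**2 + c**2 - a**2) / (2 * b * c))

        # A deep, narrow valley (angle <= 90 degrees) usually lies between two fingers
        if angle <= np.pi / 2 and d > 10000:
            fingers += 1

    return fingers + 1 if fingers > 0 else 0
```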

Data set: (sample image)

Outputs: (Picture1, Picture2)
