Major Project in Final Year B.Tech (IT). Live Stream Sign Language Detection using Deep Learning.
Updated Oct 22, 2021 · Jupyter Notebook
American Sign Language Detection Model
This project develops a system that detects sign language in images and converts it to audio, and vice versa.
Sign Language Detection: A Python project for detecting American Sign Language gestures using OpenCV and MediaPipe. The model recognizes signs from a live webcam feed.
A YOLOv5 model developed from scratch that conveys signs aloud to a blind person and generates text from the signs made by a mute person. It is a prototype showcasing the feasibility of an interpreter for mute and blind people.
This project demonstrates hand sign detection using TensorFlow Lite and Flask.
In this project, we created a model to detect sign language using MediaPipe Holistic keypoints and an LSTM-layered model.
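The keypoint-plus-LSTM approach described above typically flattens each frame's MediaPipe Holistic landmarks into a fixed-length vector, and a sequence of such vectors is then fed to the LSTM. Below is a minimal sketch of that per-frame flattening step, assuming MediaPipe Holistic's standard landmark counts (33 pose points with visibility, 468 face points, 21 points per hand); `extract_keypoints` is a hypothetical helper name, not taken from the repositories listed here:

```python
import numpy as np

# MediaPipe Holistic landmark set sizes (assumed standard counts):
# pose: 33 points x (x, y, z, visibility); face: 468 x (x, y, z);
# each hand: 21 x (x, y, z). Total: 132 + 1404 + 63 + 63 = 1662.
POSE_DIM, FACE_DIM, HAND_DIM = 33 * 4, 468 * 3, 21 * 3

def extract_keypoints(results):
    """Flatten one frame's Holistic result into a fixed-length vector.

    Missing landmark sets (e.g. a hand out of frame) become zeros, so
    every frame yields a 1662-dimensional vector that an LSTM can
    consume as one timestep.
    """
    def flat(landmarks, per_point, size):
        if landmarks is None:
            return np.zeros(size)
        vals = []
        for lm in landmarks.landmark:
            vals.extend([lm.x, lm.y, lm.z])
            if per_point == 4:
                vals.append(lm.visibility)
        return np.array(vals)

    pose = flat(getattr(results, "pose_landmarks", None), 4, POSE_DIM)
    face = flat(getattr(results, "face_landmarks", None), 3, FACE_DIM)
    lh = flat(getattr(results, "left_hand_landmarks", None), 3, HAND_DIM)
    rh = flat(getattr(results, "right_hand_landmarks", None), 3, HAND_DIM)
    return np.concatenate([pose, face, lh, rh])
```

Stacking, say, 30 consecutive frame vectors gives a `(30, 1662)` array, the usual input shape for the LSTM layers in this kind of model.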