Demo video: `sign.language.detection.8mb.mp4`
The Sign Language Interpreter project aims to ease communication for deaf and mute individuals by recognizing sign language gestures from a custom dataset with six classes. A deep learning model, potentially leveraging transfer learning with a pre-trained CNN, identifies signs in webcam video, and an interactive web interface built with Streamlit lets users practice signing in real time. The app includes a home section with project information and a dedicated sign detection area. Future plans include expanding the dataset and improving model accuracy for broader sign language recognition.
- Real-time Sign Recognition: The application captures video from the user's webcam to detect and interpret sign language gestures in real time (see the inference sketch after this list).
- Custom Dataset: Utilizes a tailored dataset with six distinct sign classes, ensuring targeted learning and recognition.
- Model Training with Teachable Machine: The model is trained using Google's Teachable Machine, allowing for easy and effective training with user-uploaded data.
- Deployment with Streamlit: The web application is built and deployed with Streamlit, providing an interactive interface for a seamless user experience (a hypothetical `app.py` sketch follows the run instructions below).
- Future Expansion: Plans to enhance the dataset and improve model accuracy for broader sign language recognition capabilities.
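The recognition step boils down to grabbing a webcam frame, resizing it to the model's input shape, and reading the top class. Below is a minimal single-frame sketch, assuming the default Teachable Machine Keras export files (`keras_model.h5` and `labels.txt`, placed next to the script); the exact filenames and the 224x224, [-1, 1] preprocessing follow Teachable Machine's standard image-model export, but verify them against your own export:

```python
# Classify one webcam frame with a Teachable Machine Keras export.
# Assumes keras_model.h5 and labels.txt (default export names) in this folder.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)
labels = [line.strip() for line in open("labels.txt")]

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
cap.release()

if ok:
    # Teachable Machine image models expect 224x224 RGB scaled to [-1, 1].
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224))
    x = (img.astype(np.float32) / 127.5) - 1.0
    probs = model.predict(x[np.newaxis, ...])[0]
    print(f"{labels[int(np.argmax(probs))]} ({probs.max():.0%})")
    # A real-time loop repeats the read / preprocess / predict steps per frame.
```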
To deploy this project, run:

```bash
pip install streamlit
streamlit run app.py
```
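The `app.py` entry point is not shown in this README, so the sketch below is a hypothetical minimal version of the layout described above (a home section plus a sign detection area). It uses Streamlit's built-in `st.camera_input`, which captures single snapshots; for true frame-by-frame interpretation the community `streamlit-webrtc` component is the usual choice.

```python
# app.py — hypothetical minimal sketch of the described Streamlit layout.
# Assumes the Teachable Machine export (keras_model.h5, labels.txt) from above.
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model

@st.cache_resource  # load the model once per server process
def get_model():
    model = load_model("keras_model.h5", compile=False)
    labels = [line.strip() for line in open("labels.txt")]
    return model, labels

page = st.sidebar.radio("Navigate", ["Home", "Sign Detection"])

if page == "Home":
    st.title("Sign Language Interpreter")
    st.write("Practice the six supported signs in the Sign Detection section.")
else:
    st.title("Sign Detection")
    snap = st.camera_input("Show a sign to the camera")
    if snap is not None:
        # Same preprocessing as the single-frame sketch above.
        img = Image.open(snap).convert("RGB").resize((224, 224))
        x = (np.asarray(img, dtype=np.float32) / 127.5) - 1.0
        model, labels = get_model()
        probs = model.predict(x[np.newaxis, ...])[0]
        st.success(f"Predicted: {labels[int(np.argmax(probs))]} "
                   f"({probs.max():.0%} confidence)")
```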