– The HyperTalk mobile app and website are computer vision-based solutions that facilitate sign language communication for individuals with hearing and speech impairments. Both are capable of real-time translation in two directions:
1] Sign language camera feed to voice.
2] Voice feed to sign language animations.
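One common way to drive the voice-to-sign direction is to map each recognized word to a pre-built animation clip, falling back to fingerspelling for out-of-vocabulary words. The sketch below illustrates that idea; the clip table, naming scheme, and fallback are illustrative assumptions, not HyperTalk's actual asset pipeline.

```python
# Hypothetical sketch: map a speech transcript to sign-animation clip IDs.
# SIGN_CLIPS and the "letter_x" fingerspelling fallback are assumptions
# for illustration, not the project's real asset names.

SIGN_CLIPS = {
    "hello": "clip_hello",
    "thank": "clip_thank",
    "you": "clip_you",
}

def words_to_clips(transcript: str) -> list[str]:
    """Return the ordered list of animation clips to play for a transcript."""
    clips: list[str] = []
    for word in transcript.lower().split():
        if word in SIGN_CLIPS:
            clips.append(SIGN_CLIPS[word])
        else:
            # Unknown word: fall back to fingerspelling it letter by letter.
            clips.extend(f"letter_{ch}" for ch in word if ch.isalpha())
    return clips
```

In a real pipeline the transcript would come from a speech-to-text engine, and the clip IDs would index animation assets bundled with the Flutter client.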
– Ongoing work focuses on enhancing the first feature with a new continuous word-level Sign Language Recognition model, capable of faster and more accurate translation and offering sign language options for different regions of the world, trained on the following datasets:
- Phoenix 2014 Dataset (German Sign Language Videos)
- OpenASL Dataset (American Sign Language Videos)
- CSL Dataset (Chinese Sign Language Videos)
- BOBSL Dataset (British Sign Language Videos)
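Continuous word-level recognition typically segments the live camera feed into overlapping windows of frames, each of which is scored by the model so that signs straddling a chunk boundary are not missed. A minimal sketch of that windowing step is below; the window size and stride are illustrative assumptions, not the project's actual configuration.

```python
# Hypothetical sketch: sliding-window chunking of a video frame stream
# for continuous word-level sign recognition. The window/stride values
# are illustrative assumptions, not HyperTalk's real parameters.

def sliding_windows(num_frames: int, window: int = 16, stride: int = 8):
    """Yield (start, end) frame-index pairs covering the stream.

    Each window would be passed to the recognition model; overlapping
    windows help catch signs that cross chunk boundaries.
    """
    if num_frames < window:
        # Short clip: emit it whole rather than dropping frames.
        yield (0, num_frames)
        return
    start = 0
    while start + window <= num_frames:
        yield (start, start + window)
        start += stride
    # Cover any trailing frames with a final, right-aligned window.
    if start < num_frames and (start - stride) + window < num_frames:
        yield (num_frames - window, num_frames)
```

In practice each `(start, end)` slice would be stacked into a tensor and run through the PyTorch model (or its ONNX export for on-device inference).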
- Team Members:
– Tools & technologies: PyTorch, CUDA, ONNX, Flutter, OpenCV, Django (back-end development)