Real-time fingerspelling video recognition achieving 74.4% letter accuracy on ChicagoFSWild+
A simple sign language detection web app built using Next.js and TensorFlow.js. Winner of the 2020 Congressional App Challenge. Developed by Mahesh Natamai and Arjun Vikram.
Signapse is an open source software tool for helping everyday people learn sign language for free!
ASL gesture recognition from the webcam using OpenCV & CNN
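Several of the entries in this list describe the same basic pipeline: grab webcam frames with OpenCV and classify a hand region with a CNN. A minimal sketch of that loop, assuming a Keras model file, a 26-letter label set, and a fixed region of interest — all placeholders, not code from any specific repository listed here:

```python
# Hypothetical sketch: capture webcam frames with OpenCV and classify a
# cropped hand region with a Keras CNN. Model path, input size, ROI, and
# label list are assumptions for illustration only.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed 26-letter label set
model = load_model("asl_cnn.h5")                          # placeholder model file

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Assume the signing hand sits inside a fixed region of interest.
    roi = frame[100:324, 100:324]
    x = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    cv2.rectangle(frame, (100, 100), (324, 324), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 2)
    cv2.imshow("ASL recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```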
A simple app that analyses and recognises the alphabet in sign language using machine learning
A computer-vision-based project that uses a CNN to translate American Sign Language (ASL) to text and speech
EchoSign was built as part of an IBM internship project, which we won with this entry. It applies transfer learning to MobileNet on a hand-curated dataset of ASL images. The classification website was developed in Flask and uses TTS technology to convert recognized ASL text to speech.
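A minimal sketch of the kind of MobileNet transfer-learning step such a project might use, assuming a Keras backbone and a directory of labeled ASL images; the dataset path, class count, and hyperparameters are placeholders, not the EchoSign project's actual values:

```python
# Hypothetical sketch of transfer learning on MobileNet for ASL images.
# Dataset layout, class count, and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26           # assumed: one class per ASL letter
IMG_SIZE = (224, 224)      # MobileNet's default input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_dataset/train",   # placeholder path to a hand-curated dataset
    image_size=IMG_SIZE,
    batch_size=32,
)

base = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,)
)
base.trainable = False     # freeze the pretrained backbone

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```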
Portable sign language (ASL) recognition device that uses real-time, efficient programming to help deaf and mute users by establishing a two-way communication channel with people who have never studied sign language.
American Sign Language alphabet recognition with a deep-learning CNN architecture
Repo for the graduation project's files; it holds the code for the CV and NLP models.
The purpose of the Sign-Interfaced Machine Operating Network, or SIMON, is to develop a machine learning classifier that translates a discrete set of ASL signs, captured as images of a hand, into a response from another system.
There are many applications where hand gestures can be used to interact with systems such as video games, UAVs, and medical equipment. These gestures can also be used by people with disabilities to interact with such systems. The main focus of this work is to create a vision-based system to identify sign language gestures from real-…
Sign language detection system based on computer vision and deep learning, using the OpenCV and TensorFlow/Keras frameworks.
A simple web app for enabling people to communicate with deaf and mute people.
A Raspberry Pi setup that recognizes ASL signs using a pre-trained CNN model and speaks them out through a suitable TTS engine with adaptive settings.
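A minimal sketch of the text-to-speech step such a setup might use, here with the offline pyttsx3 engine as an assumed choice; the actual engine and settings in the repository may differ:

```python
# Hypothetical sketch: speak recognized sign-language text with an offline
# TTS engine (pyttsx3). Rate and volume values are illustrative only.
import pyttsx3

def speak(text: str, rate: int = 150, volume: float = 0.9) -> None:
    """Convert recognized sign-language text to speech."""
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)      # words per minute
    engine.setProperty("volume", volume)  # 0.0 to 1.0
    engine.say(text)
    engine.runAndWait()

speak("HELLO")  # e.g. a word spelled out letter by letter by the recognizer
```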
Sign Language Alphabet Recognition System that automatically detects American Sign Language and converts gestures from a live webcam into text and speech.
Bangla Sign Language Interpreter using CNN
My projects from Udacity's Artificial Intelligence Nanodegree