Baby Monitor for Deaf & Hearing Impaired Parents
Our program detects and classifies baby-related sounds and notifies the caretaker when the baby needs attention (e.g., when the baby is crying).
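The repository does not spell out the notification flow, so the following is only a hypothetical sketch of the detect-classify-notify loop; the chunk length, class names, and the record_chunk / classify / notify_caretaker helpers are placeholders, not the actual implementation.

```python
import time

CLASSES = ["quiet", "noisy_background", "baby_laughing", "baby_crying"]

def monitor_loop(record_chunk, classify, notify_caretaker, chunk_seconds=2):
    """Hypothetical loop: record short audio chunks, classify them, alert on crying.

    record_chunk, classify, and notify_caretaker stand in for the audio capture,
    model inference, and alerting (e.g., visual/vibration notification) steps.
    """
    while True:
        audio = record_chunk(chunk_seconds)   # capture a short audio window
        label = CLASSES[classify(audio)]      # predicted class index -> class name
        if label == "baby_crying":
            notify_caretaker(label)           # alert the caretaker (placeholder)
        time.sleep(0.1)
```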
The program was built as our team's submission to the AI for Social Good Hackathon. We implemented a multilayer perceptron (MLP) neural network from scratch to classify four sound classes:
- Quiet/Silent Background
- Noisy Background
- Baby Laughing
- Baby Crying
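The original from-scratch implementation is not reproduced here; below is a minimal NumPy sketch of a one-hidden-layer MLP classifier of the kind described. The layer sizes, learning rate, and feature dimension are illustrative assumptions rather than the team's actual settings.

```python
import numpy as np

class MLP:
    """Minimal one-hidden-layer perceptron trained with plain gradient descent."""

    def __init__(self, n_in, n_hidden=64, n_classes=4, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    def forward(self, X):
        self.h = np.maximum(0, X @ self.W1 + self.b1)           # ReLU hidden layer
        logits = self.h @ self.W2 + self.b2
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))
        return exp / exp.sum(axis=1, keepdims=True)              # softmax probabilities

    def train_step(self, X, y):
        """One gradient-descent step on a batch; y holds integer class labels."""
        probs = self.forward(X)
        n = X.shape[0]
        d_logits = probs.copy()
        d_logits[np.arange(n), y] -= 1                           # softmax + cross-entropy gradient
        d_logits /= n
        dW2 = self.h.T @ d_logits
        db2 = d_logits.sum(axis=0)
        dh = (d_logits @ self.W2.T) * (self.h > 0)               # backprop through ReLU
        dW1 = X.T @ dh
        db1 = dh.sum(axis=0)
        for p, g in ((self.W1, dW1), (self.b1, db1), (self.W2, dW2), (self.b2, db2)):
            p -= self.lr * g                                     # in-place parameter update

    def predict(self, X):
        return self.forward(X).argmax(axis=1)
```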
Since this was a hackathon submission, we had very limited time to build the program. Our dataset collection was somewhat limited, and processing the audio was computationally heavy on our everyday laptops.
By designing and implementing the multilayer perceptron model, we achieved nearly 100 percent accuracy in classifying the test data.
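The README does not say which audio features were fed to the model; a common choice (an assumption here, not necessarily what the team did) is to summarize each clip as mean MFCCs with librosa and then measure held-out accuracy, roughly as sketched below.

```python
import numpy as np
import librosa

def clip_features(path, n_mfcc=20):
    """Load an audio clip and summarize it as mean MFCCs (an assumed feature choice)."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                     # one fixed-length vector per clip

def accuracy(model, X_test, y_test):
    """Fraction of test clips whose predicted class matches the label."""
    return float((model.predict(X_test) == y_test).mean())
```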
We learned as a team that choosing the right machine learning model for our goal was a crucial step when starting the project. We also learned to work together under a very tight time limit.
Possible improvements in the future include:
- An interactive UI/UX mobile application
- Collection of a larger dataset
- Increased model complexity to classify more sound classes
Sound classification is a major open problem in AI, and most current work focuses on speech recognition. We believe our model can contribute to sound classification in other domains to better support both the hearing and hearing-impaired communities.