Baby Monitor for Deaf & Hearing Impaired Parents
Our program detects and classifies baby-sound inputs and notifies the caretaker when the baby is in need (e.g., when the baby is crying).
The program was built as our team's submission to the AI for Social Good Hackathon. We built our model, a multilayer perceptron (MLP) neural network, from scratch to classify sounds into four classes (a preprocessing sketch follows the list below):
- Quiet/silent background
- Noisy background
- Baby laughing
- Baby crying
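This README does not document our exact audio pipeline; as one plausible sketch of the preprocessing step, the snippet below turns a clip into a fixed-length feature vector using MFCCs via librosa. The library choice, sample rate, and `n_mfcc=13` are assumptions for illustration, not a record of what we used.

```python
import numpy as np
import librosa  # assumption: any MFCC-capable audio library would do

CLASSES = ["quiet/silent background", "noisy background",
           "baby laughing", "baby crying"]

def extract_features(wav_path, n_mfcc=13):
    """Load a clip and reduce it to a fixed-length feature vector.

    MFCCs averaged over time are one common, cheap representation;
    the exact features our model used are not documented here.
    """
    y, sr = librosa.load(wav_path, sr=22050)                 # resample to 22.05 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, frames)
    return np.mean(mfcc, axis=1)                             # shape (n_mfcc,)
```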
Since this was a hackathon submission, we had very limited time to build the program. Our dataset collection was somewhat limited, and processing the sound data was computationally heavy on our everyday-use laptops.
By designing and implementing the multilayer perceptron model, we achieved nearly 100 percent accuracy in classifying the test data.
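As a rough illustration of what a from-scratch MLP involves, the sketch below implements a single-hidden-layer network in NumPy trained with softmax cross-entropy. The layer sizes, learning rate, and 13-dimensional input (matching the MFCC sketch above) are illustrative assumptions, not our actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 13 MFCC features in, 4 sound classes out.
N_IN, N_HIDDEN, N_OUT = 13, 32, 4

W1 = rng.normal(0, 0.1, (N_IN, N_HIDDEN)); b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_OUT)); b2 = np.zeros(N_OUT)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(X):
    h = np.maximum(0, X @ W1 + b1)         # ReLU hidden layer
    p = softmax(h @ W2 + b2)               # class probabilities
    return h, p

def train_step(X, y, lr=0.01):
    """One gradient-descent step on cross-entropy loss.

    X: (batch, N_IN) feature matrix; y: (batch,) integer labels 0..3.
    """
    global W1, b1, W2, b2
    h, p = forward(X)
    n = X.shape[0]
    # Gradient of mean cross-entropy w.r.t. the pre-softmax logits.
    dlogits = p.copy()
    dlogits[np.arange(n), y] -= 1
    dlogits /= n
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T
    dh[h <= 0] = 0                         # ReLU gradient mask
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def predict(X):
    _, p = forward(X)
    return p.argmax(axis=1)                # index into the four classes
```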
We learned as a team that choosing the right machine learning model to fit and achieve our goal was the crucial first step of the project. We also learned to work together as a team under a very tight time limit.
Possible improvements in the future include:
- An interactive UI/UX mobile application
- Collection of a larger dataset
- Increased model complexity to classify more sound classes
Sound classification in the current AI field focuses predominantly on speech recognition. We believe our project can help contribute to the study of other sound domains and better support both the hearing and hearing-impaired communities.
Presentation slides: https://docs.google.com/presentation/d/1ww5qf22RXpZ9dHm3nVQrwmjkTZF2H-s2U4cPCq_Padw/edit?usp=sharing
- Zirui (Sherry) Kuai : https://www.linkedin.com/in/zirui-kuai/
- Andrea Eunbee Jang : https://www.linkedin.com/in/andreaejang
- Airi Chow : https://www.linkedin.com/in/airi-chow-6314b9159
- Earl Aromin : https://www.linkedin.com/in/earomin