Created for HackDavis 2022
Uses Coqui STT to create live captions for the real world.
https://devpost.com/software/captionhat
People who are deaf or hard of hearing often struggle to communicate with others; even when they learn to lipread, it is hard work and easy to misunderstand. Many causes of hearing loss also impair the sense of balance.
CaptionHat listens to the speech around its wearer and creates real-time captions of what was said. It can also text you reminders on command, and it provides an artificial horizon to help the wearer stay balanced.
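The captioning loop boils down to streaming microphone audio into the speech-to-text engine and pushing the latest transcript to the display. Here is a minimal sketch, assuming the Coqui `stt` Python package and `sounddevice` for microphone capture; the model path, scorer, and the final `print` standing in for the LCD driver are placeholders, not our exact code.

```python
# Minimal live-captioning sketch: feed mic audio into a Coqui STT stream
# and show the latest partial transcript. Model/scorer paths are placeholders.
import sounddevice as sd
from stt import Model

model = Model("model.tflite")                         # placeholder Coqui STT model path
model.enableExternalScorer("huge-vocabulary.scorer")  # optional language-model scorer

RATE = 16000            # Coqui STT expects 16 kHz, 16-bit mono audio
BLOCK = RATE // 2       # process half-second chunks for low-latency captions

stream = model.createStream()
with sd.InputStream(samplerate=RATE, channels=1, dtype="int16") as mic:
    while True:
        audio, _ = mic.read(BLOCK)
        stream.feedAudioContent(audio.flatten())
        caption = stream.intermediateDecode()
        print(caption[-48:])    # stand-in for writing the last two lines to the 24x2 LCD
```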
We used a Raspberry Pi 4, Coqui STT speech recognition, and a 24x2 character LCD to display the captions. For the balance sensor we used an SSD1315 OLED display and a LIS3DHTR accelerometer.
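The artificial horizon just needs roll and pitch angles derived from the accelerometer's gravity vector. Below is a rough sketch, assuming the LIS3DHTR sits on I2C bus 1 at address 0x18 and is read via raw registers with `smbus2` (register values follow the LIS3DH datasheet); drawing the horizon on the SSD1315, which is driven like an SSD1306, is left as a stand-in `print`.

```python
# Rough artificial-horizon readout: configure the LIS3DHTR, read the
# acceleration vector, and convert it to roll/pitch angles.
import math
import time
from smbus2 import SMBus

ADDR = 0x18        # LIS3DHTR default I2C address (0x19 if SDO is pulled high)
CTRL_REG1 = 0x20
OUT_X_L = 0x28

def read_axes(bus):
    # Set the auto-increment bit (0x80) to read all six output registers at once.
    raw = bus.read_i2c_block_data(ADDR, OUT_X_L | 0x80, 6)
    def to_g(lo, hi):
        v = (hi << 8) | lo
        v = v - 65536 if v > 32767 else v
        return v / 16384.0          # +/-2 g range, left-justified 16-bit output
    return to_g(raw[0], raw[1]), to_g(raw[2], raw[3]), to_g(raw[4], raw[5])

with SMBus(1) as bus:
    bus.write_byte_data(ADDR, CTRL_REG1, 0x57)   # 100 Hz data rate, all axes enabled
    while True:
        x, y, z = read_axes(bus)
        roll = math.degrees(math.atan2(y, z))
        pitch = math.degrees(math.atan2(-x, math.hypot(y, z)))
        print(f"roll {roll:+6.1f}  pitch {pitch:+6.1f}")  # stand-in for the SSD1315 horizon line
        time.sleep(0.05)
```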
Our SD card got corrupted partway through development, but since we had been documenting the installation procedure and pushing changes to GitHub, we were able to restore the project on a fresh SD card without issue.
We were well prepared with equipment for this challenge, and the project went really smoothly overall. All team members contributed and got along well, and the project works fantastically!
We learned the value of good preparation and of documenting as the project progresses, not only at the end. Having a team that works well together is just as essential. We each learned some aspects of the others' specialties.
Migrating to a more elegant form factor is a priority, probably something closer to Google Glass, and improving the screen's resolution in the process so we can fit more text on screen at once.
We also want to add the ability to do real-time translation between languages, drastically increasing the benefit to many people worldwide.