Our inspiration for ToneSense is to create an application that provides individuals on the autism spectrum with real-time feedback, helping them interpret social cues and navigate interactions with confidence. By offering personalized support tailored to emotional needs, from therapeutic interventions to communication aids, we envision fostering social growth and integration. Our goal is to illuminate pathways to inclusion and active participation in society for our users.
The ToneSense application analyzes spoken or written content supplied by the user and determines the predominant emotion it conveys. Users can either record speech directly through the app or paste a segment of text into the interface.
In building ToneSense, we leveraged machine learning for accurate emotion classification. Specifically, we deployed the "roberta-base-go_emotions" model for text analysis, as it classifies text across the 28 GoEmotions labels. For audio content, we employed a speech-to-text tool to transcribe spoken words, and the resulting transcript is analyzed by the same text model. We tested the models with samples spanning diverse emotions to evaluate their performance and identify strengths and weaknesses.
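The sketch below illustrates this two-branch pipeline under a few assumptions: the classifier is loaded through the Hugging Face transformers pipeline as `SamLowe/roberta-base-go_emotions` (the published hub id for this checkpoint), and the SpeechRecognition library stands in for whichever speech-to-text tool the app actually uses.

```python
# Minimal sketch of the two analysis branches. SpeechRecognition and its
# Google web-speech backend are assumptions standing in for the app's
# actual speech-to-text tool.
from transformers import pipeline
import speech_recognition as sr

# Text branch: score a snippet against all 28 GoEmotions labels.
emotion_classifier = pipeline(
    "text-classification",
    model="SamLowe/roberta-base-go_emotions",
    top_k=None,  # return scores for every label, not just the best one
)

def classify_text(text: str) -> tuple[str, float]:
    """Return the predominant emotion label and its confidence score."""
    scores = emotion_classifier([text])[0]  # one list of label scores per input
    best = max(scores, key=lambda s: s["score"])
    return best["label"], best["score"]

# Audio branch: transcribe speech, then reuse the text classifier.
def transcribe_audio(wav_path: str) -> str:
    """Transcribe a WAV recording via Google's free web speech API (assumed)."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)

if __name__ == "__main__":
    label, score = classify_text("I can't believe we actually won!")
    print(f"Predominant emotion: {label} ({score:.2f})")
```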
We developed a user-friendly application interface that integrates the models' APIs with the audio-to-text conversion step. Users can record live audio or enter text, and a single click returns the detected emotion, keeping the experience simple and accessible.
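A hypothetical sketch of the single-click entry point behind the interface, reusing the `classify_text` and `transcribe_audio` helpers from the sketch above; the real app's wiring may differ.

```python
# Hypothetical single-click handler: the interface passes along whichever
# input the user provided, and the predominant emotion comes back.
def analyze(text: str | None = None, audio_path: str | None = None) -> str:
    """Route text directly to the classifier; transcribe audio first."""
    if audio_path is not None:
        text = transcribe_audio(audio_path)  # audio becomes text first
    if not text:
        raise ValueError("Provide either recorded audio or a text snippet.")
    label, score = classify_text(text)
    return f"{label} ({score:.0%} confidence)"
```

Funneling both input types through one function means the interface only needs to bind a single button to `analyze`, which is what makes the one-click flow possible.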
- Archana Ganesh
- Anju Santhosh Kumar
- Rigved Manoj
- Sachin Thomas