Welcome to ReadifyAI, an innovative kids' reading and learning app designed to make education fun and engaging for children aged 4-10.
Visit Site:- Click here
This is a capstone project developed by my team and me as part of the capstone module in the MSc Business Analytics program at University College Dublin, Ireland. You can find our comprehensive report in the "Report.pdf" file in this repository. Please refrain from altering the report, as it is intended for evaluation purposes.
- Interactive Stories: Animated tales with voice narration and sound effects.
- Personalized Learning: AI-driven recommendations, progress tracking, and adaptive challenges.
- Educational Games: Word puzzles, spelling bees, and comprehension quizzes.
ReadifyAI leverages a robust tech stack to deliver a seamless and engaging user experience:
- GPT-3.5 Turbo: Natural language processing for interactive storytelling and personalized learning paths.
- OpenAI API: Integration of AI-driven features and recommendations.
- FastAPI: Backend framework for building the API.
- AWS: Cloud services for scalable and reliable infrastructure.
- Angular: Frontend framework for a responsive web application.
- Flutter: Cross-platform mobile app development for iOS and Android.
This architecture not only supports real-time content generation and filtering but also ensures that the educational material provided through "ReadifyAI" meets the highest standards of safety and educational value.
The figure above shows a small part of the Python FastAPI service. It retrieves the meaning of a word by first querying GPT-3.5 Turbo with a carefully engineered prompt, then generating audio of the returned text with Amazon Polly. The audio is stored as an MP3 file in an S3 bucket, and the URL of the audio file is returned as the output.
To ensure the LLM outputs follow a specified format, we used LangChain's ResponseSchema and StructuredOutputParser modules. They let us define the expected structure of the output and parse the LLM's responses accordingly. The following code snippet illustrates how we implemented this to get responses in a specific format:
By leveraging LangChain's tools, we defined strict output schemas and ensured the generated content adhered to them, improving the application's reliability and user experience. ResponseSchema and StructuredOutputParser are powerful tools for specifying the structure of an LLM's output, which simplifies integration with other services, whether frontend or backend. The figure above shows how we used the two modules: ResponseSchema defines the structure of the quiz we want the LLM to generate, and StructuredOutputParser takes the raw LLM response and structures it according to that schema.
- Parental Controls: Monitor progress, set learning goals, and ensure a safe, ad-free environment.
- Shreyas Lengade
- Sathya Jayagopi
- Shubham Rao
- Shreyas Lengade :- lengadeshreyas06work@gmail.com
- Sathya Jayagopi :- sathyajayagpoi@gmail.com
- Shubham Rao :- shubhamraolive@gmail.com