- Table of Contents
- About The Project
- Getting Started
- Usage
- Contributing
- License
- Contact
- Thank You!
- Acknowledgements
Demo GIFs (click each to view individually; they may take some time to load): Face Recognition, Object Detection, OCR with speech output, Text Summarization, Google Translate API, GPS Location Tracing, Video Call Backend Server, and Video Call App Client.
This is a Flutter app that uses Firebase ML Vision, TensorFlow Lite, and built-in speech recognition and text-to-speech capabilities to act as a third eye for blind people. It uses Firebase ML Vision to detect human faces, and TensorFlow Lite implementations of MobileFaceNets and SSD MobileNetV2 to perform face recognition and object detection respectively. The blind user can authenticate with a fingerprint, then issue voice commands to perform face recognition, object detection, OCR, automatic URL and text summarization, and language translation, send their GPS location, and start a video call with a volunteer. The app responds to every command with voice output.

The text summarization API is built with Flask, Sumy, and Trafilatura, and is deployed to Heroku. It uses the Latent Semantic Analysis (LSA) algorithm for text summarization. With this app, a blind user can detect and save human faces, detect objects in front of them, get voice output of text within objects, receive summarized results for text and URLs, translate sentences into different languages, make video calls, and share their GPS location for tracing purposes.
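As an illustration of the on-device object detection path described above, here is a minimal sketch using the community `tflite` Flutter plugin with an SSD MobileNet model. The plugin choice, asset file names, and threshold are assumptions for illustration, not necessarily what this app ships with.

```dart
import 'package:camera/camera.dart';
import 'package:tflite/tflite.dart';

/// Load an SSD MobileNet model bundled as assets.
/// The asset paths below are hypothetical.
Future<void> loadDetector() async {
  await Tflite.loadModel(
    model: 'assets/ssd_mobilenet.tflite',
    labels: 'assets/labels.txt',
  );
}

/// Run detection on a live camera frame and return the class labels
/// of reasonably confident detections.
Future<List<String>> detectObjects(CameraImage frame) async {
  final results = await Tflite.detectObjectOnFrame(
    bytesList: frame.planes.map((plane) => plane.bytes).toList(),
    model: 'SSDMobileNet',
    imageHeight: frame.height,
    imageWidth: frame.width,
    threshold: 0.5, // drop low-confidence detections
  );
  // Each result is a map with detectedClass, confidenceInClass and rect.
  return (results ?? [])
      .map((result) => result['detectedClass'] as String)
      .toList();
}
```

The labels returned here could then be passed to the text-to-speech engine so the user hears what is in front of the camera.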
This project is built entirely with the following components and languages: Flutter and Dart for the mobile app, Firebase ML Vision and TensorFlow Lite for on-device machine learning, and Python with Flask, Sumy, and Trafilatura for the text summarization API.
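As a sketch of how the app could consume that summarization service from Flutter, the snippet below issues a plain HTTP POST with the `http` package. The Heroku URL, endpoint path, and JSON field names are hypothetical; the actual API contract is not documented here.

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

/// Sketch of a client call to the Flask summarization API.
/// The host, path, and field names below are hypothetical.
Future<String> summarize(String textOrUrl) async {
  final response = await http.post(
    Uri.parse('https://example-summarizer.herokuapp.com/summarize'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({'input': textOrUrl}),
  );
  if (response.statusCode != 200) {
    throw Exception('Summarization failed: ${response.statusCode}');
  }
  // Assume the service responds with {"summary": "..."}.
  return jsonDecode(response.body)['summary'] as String;
}
```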
You can download the pre-built APK file from the Releases section. Alternatively, follow these instructions to get a copy of the project up and running on your local machine for development and testing purposes.
Java Runtime Environment (JRE) 8 or higher and Android SDK API level 28 or higher should be installed, along with the Flutter and Dart SDKs. After installation, check the Java version and Flutter configuration using:

```sh
java --version
flutter doctor
```
- Download or clone the repo:

  ```sh
  git clone https://github.com/vijethph/Insight.git
  ```

- Open the downloaded project folder:

  ```sh
  cd Insight
  ```

- Make sure the Flutter executable is added to your environment variables. Go to the project root and execute the following command in the console to get the required dependencies:

  ```sh
  flutter pub get
  ```

- Connect your Android device to your desktop, and make sure it is properly detected by using:

  ```sh
  flutter devices
  ```

- Install and run the app using:

  ```sh
  flutter run
  ```
Once the app starts, authenticate yourself with your fingerprint. Then tap the screen to issue voice commands like `recognize face`, `detect objects`, `read text`, and `send my location` to perform the respective functionalities. In the face recognition screen, double-tap to switch cameras; once a human face is detected, long-press to save it. A name for the detected face can be given via voice input by tapping the screen.
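A minimal sketch of how this authenticate-then-listen flow could be wired together, assuming the commonly used `local_auth`, `speech_to_text`, and `flutter_tts` plugins. The plugin choices, command strings, and handler stubs are illustrative assumptions, not a verbatim excerpt from this app.

```dart
import 'package:local_auth/local_auth.dart';
import 'package:speech_to_text/speech_to_text.dart';
import 'package:flutter_tts/flutter_tts.dart';

final _auth = LocalAuthentication();
final _speech = SpeechToText();
final _tts = FlutterTts();

/// Gate the app behind fingerprint (biometric) authentication.
Future<bool> authenticateUser() {
  return _auth.authenticate(
    localizedReason: 'Authenticate to use Insight',
  );
}

/// Listen for one voice command and dispatch it to a feature screen.
Future<void> listenForCommand() async {
  if (!await _speech.initialize()) return;
  await _speech.listen(onResult: (result) async {
    final command = result.recognizedWords.toLowerCase();
    if (command.contains('recognize face')) {
      // navigate to the face recognition screen...
    } else if (command.contains('detect objects')) {
      // navigate to the object detection screen...
    } else if (command.contains('read text')) {
      // start OCR with speech output...
    } else if (command.contains('send my location')) {
      // fetch GPS coordinates and share them...
    } else {
      await _tts.speak('Sorry, I did not understand that command.');
    }
  });
}
```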
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License.
Vijeth P H - @vijethph
Sathya M - @sathya5278
Shashank Ashok Gadavi - @Shashankgadavi
Sagar V - @sagubantii1911
Project Link: https://github.com/vijethph/Insight
If you like this project, please ⭐ this repo and share it with others 👍