# Live Emotion Detection with Emoji Expression
EmojiDetector is a Python project that detects emotions from a live video feed and represents them with matching emoji expressions. It combines computer vision (to locate faces) with a machine-learning classifier (to recognize facial expressions) and maps each detected emotion to the emoji that best represents it, all in real time.
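At a high level, each frame is scanned for a face, the face crop is classified into an emotion, and that emotion is rendered as an emoji. The snippet below is a minimal sketch of that pipeline, not the project's actual code: it assumes OpenCV's bundled Haar cascade for face detection and a hypothetical Keras model named `emotion_model.h5` trained on 48x48 grayscale face crops with a seven-label output.

```python
# A minimal sketch of the idea, not the project's actual code. It assumes
# OpenCV's bundled Haar cascade for face detection and a hypothetical
# Keras model "emotion_model.h5" trained on 48x48 grayscale face crops.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical label order; the real project may use a different set.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
model = load_model("emotion_model.h5")  # hypothetical trained classifier

def classify_frame(frame):
    """Return the emotion label for the first face found, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Crop, resize, and normalize the face for the classifier.
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)
        return EMOTIONS[int(np.argmax(probs))]
    return None
```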
## Installation

To get started with EmojiDetector, follow these steps:
- Clone the repository:

  ```bash
  git clone https://github.com/Aurjay/EmojiDetector.git
  cd EmojiDetector
  ```
- Create a virtual environment and activate it:

  ```bash
  python -m venv venv
  source venv/bin/activate  # For Unix-based systems
  venv\Scripts\activate     # For Windows systems
  ```
- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
## Usage

To run EmojiDetector, use the following command:

```bash
python main.py
```

Ensure a webcam is connected; the program uses it to capture the live video feed for emotion detection.
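If the detector fails to start, the camera may be unreachable. A quick standalone check, assuming the default camera index 0, is:

```python
# A quick sanity check (assuming the default camera index 0) that OpenCV
# can open the webcam before launching the detector.
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("No webcam found at index 0")
ok, frame = cap.read()
print("Webcam OK, frame shape:", frame.shape if ok else "no frame captured")
cap.release()
```

On machines with multiple cameras, try indices 1, 2, and so on until one opens.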
## Features

- Real-time Emotion Detection: Captures the live webcam feed and classifies emotions frame by frame.
- Emoji Representation: Maps each detected emotion to a corresponding emoji and displays it (see the mapping sketch after this list).
- User-Friendly Interface: A simple, intuitive interface for interacting with the detector.
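For illustration only, the label-to-emoji mapping can be as simple as a dictionary; the labels and glyphs EmojiDetector actually uses may differ:

```python
# Illustrative only: one plausible label-to-emoji mapping. The labels and
# glyphs actually used by EmojiDetector may differ.
EMOTION_TO_EMOJI = {
    "angry": "😠",
    "disgust": "🤢",
    "fear": "😨",
    "happy": "😄",
    "sad": "😢",
    "surprise": "😲",
    "neutral": "😐",
}

def emoji_for(emotion: str) -> str:
    # Fall back to a neutral face for unrecognized labels.
    return EMOTION_TO_EMOJI.get(emotion, "😐")
```

A plain dictionary keeps the mapping easy to extend when new emotion labels are added.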
## Contributing

We welcome contributions to enhance the EmojiDetector project. To contribute:
- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Make your changes and commit them (`git commit -m 'Add some feature'`).
- Push to the branch (`git push origin feature-branch`).
- Open a new Pull Request.
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Acknowledgments

Special thanks to the contributors of the open-source libraries and tools that made this project possible, in particular:

- OpenCV for its computer vision functionality.
- TensorFlow for its machine-learning models.