This project classifies people’s emotions from facial images and predicts key facial points using deep learning models. It covers training the models and deploying them so facial expressions and emotions can be monitored automatically.
- Objective: Predict the x and y coordinates of 15 key facial points.
- Dataset: Includes over 2000 images with facial key-point annotations.
- Approach: Build a Convolutional Neural Network (CNN) with residual blocks (a ResNet-style architecture) to detect the key facial points.
- Objective: Classify facial emotions into 5 categories:
- 0: Angry
- 1: Disgust
- 2: Sad
- 3: Happy
- 4: Surprise
General Steps
- Image Visualization: Explore and understand the dataset through visualizations.
- Image Augmentation: Apply random transformations (e.g., flips) to enlarge the effective training set.
- Data Normalization and Scaling: Scale pixel values so the models train efficiently (a sketch of both steps follows this list).
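
A minimal sketch of the augmentation and normalization steps in tf.keras; the image size, the flip-only augmentation, and the placeholder arrays are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the real dataset: 96x96 grayscale faces
# (an assumed size) and 15 (x, y) key points per image, x and y interleaved.
images = np.random.randint(0, 256, size=(32, 96, 96, 1)).astype("float32")
keypoints = np.random.rand(32, 30).astype("float32")  # assumed scaled to [0, 1]

# Normalization: scale pixel values from [0, 255] to [0, 1].
images = images / 255.0

def augment(image, points):
    """Random horizontal flip; the x coordinates must be mirrored with the image."""
    if tf.random.uniform(()) > 0.5:
        image = tf.image.flip_left_right(image)
        x = 1.0 - points[0::2]   # mirror x, assuming coordinates in [0, 1]
        y = points[1::2]
        points = tf.reshape(tf.stack([x, y], axis=1), [-1])
    return image, points

train_ds = (tf.data.Dataset.from_tensor_slices((images, keypoints))
            .map(augment)
            .batch(16))
```

Note that for key-point data the labels must be transformed together with the image; a flip that leaves the coordinates untouched would silently corrupt the training targets.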
Key Facial Points Detection
- Build ResNet Model: Design a CNN with residual blocks for key-point detection (a minimal sketch follows this list).
- Compile and Train Model: Train the model to regress the 30 key-point coordinates.
- Model Performance Assessment: Evaluate prediction error on held-out images (e.g., mean absolute error).
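
As an illustration of the residual-block idea, here is a small key-point regressor sketched in tf.keras; the input size, layer widths, and hyperparameters are assumptions rather than the project's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def residual_block(x, filters):
    """Two 3x3 convolutions plus a skip connection (the core ResNet idea)."""
    shortcut = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    # Project the shortcut with a 1x1 convolution when channel counts differ.
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([x, shortcut]))

inputs = layers.Input(shape=(96, 96, 1))   # assumed grayscale input size
x = layers.Conv2D(32, 7, strides=2, padding="same", activation="relu")(inputs)
x = residual_block(x, 64)
x = layers.MaxPooling2D()(x)
x = residual_block(x, 128)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(30)(x)              # 15 key points -> 30 coordinates

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(train_images, train_keypoints, validation_split=0.1, epochs=50)
```

Mean squared error fits here because key-point detection is a regression over coordinates; classification losses and accuracy metrics do not apply.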
Facial Expression Detection
- Explore Emotion Dataset: Import and analyze the emotion dataset.
- Visualize Emotion Data: Gain insights through visual exploration (e.g., class balance and sample images per emotion).
- Build Emotion Classifier: Create a classifier model for emotion detection.
- Train the Model: Optimize the classifier to detect emotions accurately.
- Evaluate Classifier: Assess results with key performance indicators (KPIs) such as accuracy and a confusion matrix (see the sketch after this list).
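
For concreteness, a sketch of a 5-class emotion classifier built from the same residual block; the architecture and hyperparameters are illustrative assumptions, and the commented lines name hypothetical arrays (train_x, test_y, etc.).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def residual_block(x, filters):
    """Same residual block as in the key-point sketch above."""
    shortcut = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([x, shortcut]))

inputs = layers.Input(shape=(96, 96, 1))            # assumed input size
x = layers.Conv2D(32, 7, strides=2, padding="same", activation="relu")(inputs)
x = residual_block(x, 64)
x = layers.MaxPooling2D()(x)
x = residual_block(x, 128)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(5, activation="softmax")(x)  # 5 emotion classes

classifier = Model(inputs, outputs)
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

# Hypothetical training and KPI evaluation:
# classifier.fit(train_x, train_y, validation_data=(val_x, val_y), epochs=30)
# preds = classifier.predict(test_x).argmax(axis=1)
# print(tf.math.confusion_matrix(test_y, preds))  # common KPI alongside accuracy
```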
Predictions and Deployment
- Model Predictions: Generate predictions using:
- Key Facial Points Detection Model
- Emotion Classifier Model
- Save Trained Models: Export the trained models in SavedModel format for deployment.
- TensorFlow Serving: Serve the exported models over REST or gRPC with TensorFlow Serving (a sketch follows).
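
A sketch of the export-and-serve flow; the model name, paths, and stand-in model are assumptions, while the version-directory layout, the tensorflow_model_server flags, and the REST predict endpoint follow TensorFlow Serving's documented conventions. Depending on your TF/Keras version, model.export() may be the preferred export call.

```python
import json
import requests                  # third-party HTTP client, assumed installed
import tensorflow as tf

# Stand-in for the trained emotion classifier from the steps above.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

# TensorFlow Serving expects a numeric version subdirectory per model.
tf.saved_model.save(model, "serving/emotion_model/1")

# Start the server separately, e.g. in a shell or Docker container:
#   tensorflow_model_server --rest_api_port=8501 \
#       --model_name=emotion_model \
#       --model_base_path=/absolute/path/to/serving/emotion_model

# Query the REST predict endpoint with a placeholder image batch.
batch = tf.random.uniform((1, 96, 96, 1)).numpy().tolist()
response = requests.post(
    "http://localhost:8501/v1/models/emotion_model:predict",
    data=json.dumps({"instances": batch}),
)
print(response.json()["predictions"])   # per-class probabilities for each image
```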