
Speech-Emotion-Classification-with-PyTorch

This repository contains PyTorch implementations of 4 different models for speech emotion classification:

  1. Stacked Time Distributed 2D CNN - LSTM
  2. Stacked Time Distributed 2D CNN - Bidirectional LSTM with attention
  3. Parallel 2D CNN - Bidirectional LSTM with attention
  4. Parallel 2D CNN - Transformer Encoder

DATASET

The models are trained on the RAVDESS Emotional Speech Audio dataset, which consists of 1440 audio-only speech files (16-bit, 48 kHz, .wav).
The dataset is balanced across the emotion classes:
[figure: number of samples per emotion]
Emotions have 2 intensities: strong and normal (except for the neutral emotion, which only has normal intensity).
[figure: number of samples per emotion and intensity]
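
The emotion and intensity labels are encoded in the RAVDESS file names as dash-separated numeric fields. A minimal sketch of extracting them (field meanings follow the RAVDESS naming convention; the helper name is illustrative):

```python
from pathlib import Path

# RAVDESS file names consist of 7 dash-separated fields, e.g. "03-01-06-01-02-01-12.wav".
# The 3rd field encodes the emotion and the 4th field the intensity.
EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised",
}
INTENSITIES = {"01": "normal", "02": "strong"}

def parse_ravdess_filename(path):
    """Return the (emotion, intensity) labels encoded in a RAVDESS file name."""
    fields = Path(path).stem.split("-")
    return EMOTIONS[fields[2]], INTENSITIES[fields[3]]

print(parse_ravdess_filename("03-01-06-01-02-01-12.wav"))  # ('fearful', 'normal')
```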

PREPROCESSING

Signals are loaded at a sample rate of 48 kHz and cropped to the [0.5, 3] second range. If a signal is shorter than 3 s, it is zero-padded.
A Mel spectrogram is computed and used as the input to the models (for the 1st and 2nd models the spectrogram is split into 7 chunks).
Example of a Mel spectrogram:
[figure: example Mel spectrogram]
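
A minimal sketch of this step using librosa, assuming the clip is read starting at 0.5 s and kept to at most 3 s; the FFT size, hop length, and number of Mel bands here are illustrative, not necessarily the repository's settings:

```python
import numpy as np
import librosa

SAMPLE_RATE = 48000
MAX_SECONDS = 3.0

def load_signal(path, sample_rate=SAMPLE_RATE, offset=0.5, max_seconds=MAX_SECONDS):
    """Load a clip starting at 0.5 s, then zero-pad it to exactly 3 s if shorter."""
    signal, _ = librosa.load(path, sr=sample_rate, offset=offset, duration=max_seconds)
    target_len = int(sample_rate * max_seconds)
    if len(signal) < target_len:
        signal = np.pad(signal, (0, target_len - len(signal)))
    return signal

def mel_spectrogram(signal, sample_rate=SAMPLE_RATE, n_mels=128):
    """Compute a log-scaled Mel spectrogram to be used as model input."""
    mel = librosa.feature.melspectrogram(
        y=signal, sr=sample_rate, n_fft=1024, hop_length=256, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

def split_into_chunks(mel, num_chunks=7):
    """Split the spectrogram into 7 chunks (here along the time axis) for the
    time-distributed models (models 1 and 2)."""
    return np.array_split(mel, num_chunks, axis=1)
```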
The dataset is split into train, validation and test sets with an 80/10/10 percentage split.
Data augmentation is performed by adding additive white Gaussian noise (AWGN, with an SNR in the range [15, 30]) to the original signal. This greatly improved accuracy and removed overfitting.
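
A minimal sketch of the AWGN augmentation, assuming the [15, 30] range is an SNR in dB:

```python
import numpy as np

def add_awgn(signal, snr_low_db=15, snr_high_db=30):
    """Add white Gaussian noise at a random SNR (assumed dB) drawn from [15, 30]."""
    snr_db = np.random.uniform(snr_low_db, snr_high_db)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise
```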
The datasets are scaled with a standard scaler (zero mean, unit variance).
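
A minimal sketch of the split and scaling with scikit-learn, assuming (as is standard practice) that the scaler is fitted on the training set only; the array shapes are placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Placeholder data: 1440 Mel spectrograms (n_mels x frames) and integer emotion labels.
X = np.random.randn(1440, 128, 64).astype(np.float32)
y = np.random.randint(0, 8, size=1440)

# 80/10/10 stratified split into train, validation and test sets.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)

# Standardize: fit the scaler on the training set, then apply it to validation and test.
def flatten(a):
    return a.reshape(len(a), -1)

scaler = StandardScaler()
X_train = scaler.fit_transform(flatten(X_train)).reshape(X_train.shape)
X_val = scaler.transform(flatten(X_val)).reshape(X_val.shape)
X_test = scaler.transform(flatten(X_test)).reshape(X_test.shape)
```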

MODELS

The architectures of all 4 models are shown below, from left to right respectively:

[figure: architecture diagrams of the 4 models]
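
As a rough illustration of the overall idea, here is a minimal PyTorch sketch of the 4th architecture (parallel 2D CNN and Transformer-encoder branches over the Mel spectrogram, with their embeddings concatenated and classified). The layer counts, channel sizes and pooling choices are assumptions for the sketch, not the repository's exact configuration:

```python
import torch
import torch.nn as nn

class ParallelCNNTransformer(nn.Module):
    """Sketch of a parallel 2D CNN + Transformer-encoder emotion classifier."""

    def __init__(self, num_emotions=8, n_mels=128):
        super().__init__()
        # CNN branch: stacked conv blocks that downsample the (mel, time) plane.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 64, 1, 1)
        )
        # Transformer branch: treat each time frame (a column of mel bins) as a token.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=n_mels, nhead=4, dim_feedforward=256, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(64 + n_mels, num_emotions)

    def forward(self, x):                                    # x: (batch, 1, n_mels, time)
        cnn_embed = self.conv(x).flatten(1)                  # (batch, 64)
        tokens = x.squeeze(1).transpose(1, 2)                # (batch, time, n_mels)
        trans_embed = self.transformer(tokens).mean(dim=1)   # (batch, n_mels)
        return self.classifier(torch.cat([cnn_embed, trans_embed], dim=1))

# Quick shape check on a dummy batch of spectrograms.
logits = ParallelCNNTransformer()(torch.randn(4, 1, 128, 140))
print(logits.shape)  # torch.Size([4, 8])
```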

RESULTS

Model 1 (Stacked Time Distributed 2D CNN - LSTM):
Accuracy: 94.02%

[figures: confusion matrix | influence of emotion intensity on correctness]

Model 2 (Stacked Time Distributed 2D CNN - Bidirectional LSTM with attention):
Accuracy: 96.55%

[figures: confusion matrix | influence of emotion intensity on correctness]

Model 3 (Parallel 2D CNN - Bidirectional LSTM with attention):
Accuracy: 95.40%

[figures: confusion matrix | influence of emotion intensity on correctness]

Model 4 (Parallel 2D CNN - Transformer Encoder):
Accuracy: 96.78%

[figures: confusion matrix | influence of emotion intensity on correctness]
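
The accuracies and confusion matrices above can be reproduced from the test-set predictions with scikit-learn; a minimal sketch, where `model`, `X_test` and `y_test` are illustrative names for a trained classifier and the held-out test data:

```python
import torch
from sklearn.metrics import accuracy_score, confusion_matrix

def evaluate(model, X_test, y_test):
    """Compute test accuracy and the confusion matrix for a trained classifier."""
    model.eval()
    with torch.no_grad():
        preds = model(X_test).argmax(dim=1).cpu().numpy()
    return accuracy_score(y_test, preds), confusion_matrix(y_test, preds)
```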
