# speech-emotion-classification

Here are 17 public repositories matching this topic...

This work proposes a speech emotion recognition model based on extracting four different features from RAVDESS sound files and stacking the resulting matrices into a one-dimensional array by taking the mean values along the time axis. This array is then fed into a 1-D CNN model as input.

  • Updated Feb 27, 2022
  • Python
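
A minimal sketch of the pipeline described in this entry, assuming librosa for feature extraction and Keras for the 1-D CNN. The description does not name the four features, so the ones shown here (MFCCs, chroma, mel spectrogram, spectral contrast) are illustrative stand-ins, and the layer sizes are not taken from the repository.

```python
import numpy as np
import librosa
import tensorflow as tf

def extract_features(path):
    """Load a RAVDESS clip and stack four time-averaged features into a 1-D vector."""
    y, sr = librosa.load(path)
    mfcc     = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40), axis=1)    # (40,)
    chroma   = np.mean(librosa.feature.chroma_stft(y=y, sr=sr), axis=1)        # (12,)
    mel      = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)     # (128,)
    contrast = np.mean(librosa.feature.spectral_contrast(y=y, sr=sr), axis=1)  # (7,)
    return np.concatenate([mfcc, chroma, mel, contrast])                       # (187,)

n_features = 187  # length of the stacked feature vector above
n_emotions = 8    # RAVDESS covers eight emotion classes

# 1-D CNN that treats the stacked feature vector as a single-channel sequence
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features, 1)),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(128, kernel_size=5, activation='relu'),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(n_emotions, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```

Each feature vector would be reshaped to `(n_features, 1)` before being passed to the network for training or prediction.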

Exploration of different audio features and CNN-based architectures for building an effective Speech Emotion Recognition (SER) system. The goal is to improve the accuracy of detecting emotions embedded in speech signals. The repository contains code, a notebook, and detailed explanations of the experiments conducted.

  • Updated May 16, 2023
  • Jupyter Notebook
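
One common variant in this kind of feature and architecture exploration keeps the time axis instead of averaging it away and trains a 2-D CNN on log-mel spectrograms. The sketch below, using librosa and Keras, illustrates that approach; the input shape, frame count, and layer sizes are assumptions, not values from the repository.

```python
import numpy as np
import librosa
import tensorflow as tf

def log_mel_spectrogram(path, n_mels=128, max_frames=256):
    """Compute a fixed-size log-mel spectrogram, padding or truncating along time."""
    y, sr = librosa.load(path)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    if log_mel.shape[1] < max_frames:  # pad short clips so every input has the same shape
        pad = max_frames - log_mel.shape[1]
        log_mel = np.pad(log_mel, ((0, 0), (0, pad)), mode='constant')
    return log_mel[:, :max_frames]

n_emotions = 8  # illustrative; depends on the dataset used

# 2-D CNN over the (mel bands x time frames) spectrogram image
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 256, 1)),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(n_emotions, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```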

This is a Speech Emotion Recognition system that classifies emotions from speech samples using deep learning models. The project uses four datasets: CREMA-D, RAVDESS, SAVEE, and TESS. The model achieves an accuracy of 96% by combining CNN, LSTM, and CLSTM architectures with data augmentation techniques and feature extraction methods.

  • Updated Nov 22, 2024
  • Jupyter Notebook
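
The entry above does not list the specific augmentations used; a typical set for speech emotion work, sketched here with librosa, might look like the following. The function names and parameter values are illustrative.

```python
import numpy as np
import librosa

def add_noise(y, noise_factor=0.005):
    """Inject Gaussian noise to make the model robust to recording conditions."""
    return y + noise_factor * np.random.randn(len(y))

def pitch_shift(y, sr, n_steps=2):
    """Shift pitch by a few semitones without changing duration."""
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

def time_stretch(y, rate=0.9):
    """Slow down or speed up the clip without changing pitch."""
    return librosa.effects.time_stretch(y, rate=rate)

# Each augmented copy goes through the same feature-extraction step as the
# original clip, multiplying the effective size of the training set.
y, sr = librosa.load("path/to/clip.wav")
augmented = [y, add_noise(y), pitch_shift(y, sr), time_stretch(y)]
```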

A Convolutional Neural Network that distinguishes between the speaker's emotions. Comes with multiple preprocessors to improve the model's performance.

  • Updated Jan 20, 2022
  • Python
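
The preprocessors are not named in the entry above; a plausible chain (resampling, silence trimming, amplitude normalisation), sketched with librosa, illustrates the kind of steps such a preprocessor might apply before feature extraction.

```python
import numpy as np
import librosa

def preprocess(path, target_sr=16000, top_db=30):
    """Illustrative preprocessing chain for a speech clip."""
    y, sr = librosa.load(path, sr=target_sr)       # resample to a fixed rate on load
    y, _ = librosa.effects.trim(y, top_db=top_db)  # drop leading/trailing silence
    peak = np.max(np.abs(y))
    if peak > 0:
        y = y / peak                               # peak-normalise to [-1, 1]
    return y, target_sr
```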
