This repository contains code for detecting human emotion from speech audio. The datasets used are the Toronto Emotional Speech Set (TESS) and selected parts of the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS).
The notebook 'Sound Preprocessing .ipynb' contains the code for preprocessing the sound files, and the notebook 'Training .ipynb' contains the code for training the neural network. The trained model achieved an accuracy of 83.1%.
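As a rough illustration of the preprocessing step, the sketch below extracts a simple per-frame feature (RMS energy) from a WAV file. This is an assumption for illustration only: the actual notebooks may use richer features such as MFCCs via a library like librosa, and the frame size here is arbitrary. The sketch uses only the Python standard library and writes a short synthetic tone so it is self-contained.

```python
# Hypothetical feature-extraction sketch; NOT the repo's actual pipeline.
import math
import struct
import wave

def rms_energy(path, frame_size=2048):
    """Return per-frame RMS energies for a 16-bit mono WAV file."""
    with wave.open(path, "rb") as wf:
        assert wf.getsampwidth() == 2 and wf.getnchannels() == 1
        n = wf.getnframes()
        samples = struct.unpack("<%dh" % n, wf.readframes(n))
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames]

# Write a 1-second synthetic 440 Hz tone so the example runs standalone.
rate = 16000
tone = [int(10000 * math.sin(2 * math.pi * 440 * t / rate))
        for t in range(rate)]
with wave.open("tone.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(rate)
    wf.writeframes(struct.pack("<%dh" % len(tone), *tone))

feats = rms_energy("tone.wav")
print(len(feats))  # number of frames extracted
```

In a real pipeline, each audio file would be mapped to such a feature vector (or a spectrogram), which then serves as input to the neural network trained in 'Training .ipynb'.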