Facial Expression Detection Model using CNN #832
Merged
Related Issues or Bug
Human-computer interaction is advancing rapidly, and systems increasingly need to respond not only to explicit commands but also to implicit signals such as human emotions. Facial expressions are a crucial indicator of a person's emotional state, and detecting them accurately in real time can enhance applications across healthcare, security, customer service, and entertainment. However, predicting emotions from facial expressions is challenging due to variations in face shape, lighting, occlusion, and subtle differences between expressions. This project addresses the problem by developing a CNN model capable of predicting facial expressions in real time with high accuracy.
Fixes: #830
Proposed Changes
This project builds a Facial Expression Detection Model using a Convolutional Neural Network (CNN). It classifies facial expressions into the seven FER2013 emotion categories: angry, disgust, fear, happy, sad, surprise, and neutral. The model is trained on the FER2013 dataset, which contains grayscale face images annotated with emotion labels. The architecture uses convolutional and pooling layers to capture spatial hierarchies in the images, followed by dense layers for classification. TensorFlow and Keras are the primary libraries for implementing the CNN, and data augmentation and preprocessing techniques improve the model's robustness.
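
The sketch below shows one way such a pipeline could look in Keras, assuming 48x48 grayscale FER2013 images sorted into per-class folders under a hypothetical `train/` directory. The layer sizes, augmentation settings, and epoch count are illustrative placeholders rather than the exact values used in this PR.

```python
# Minimal sketch of a FER2013-style CNN, assuming 48x48 grayscale inputs
# and the seven FER2013 classes; hyperparameters are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 7  # angry, disgust, fear, happy, sad, surprise, neutral

def build_model():
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        # Convolution + pooling blocks capture spatial hierarchies.
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Dense layers perform the final emotion classification.
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Data augmentation and preprocessing; "train" is an assumed directory
# of FER2013 images organized into one subfolder per emotion class.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)
train_gen = train_datagen.flow_from_directory(
    "train",
    target_size=(48, 48),
    color_mode="grayscale",
    class_mode="categorical",
    batch_size=64,
)

model = build_model()
model.fit(train_gen, epochs=30)
```

Stacking small 3x3 convolutions with pooling keeps the parameter count low while growing the receptive field, and the dropout before the softmax layer is a common way to reduce overfitting on a dataset of FER2013's size.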