This feature applies when the user interacts with Cheerbot via audio input.

Base assumption: when sad, the user's voice pitch will be lower than in their non-sad (happy, angry, surprised) emotional states.
- Perform voice pitch analysis to determine the user's emotional state
- Create a dataset for the sad and non-sad classes
- Implement a simple learning/inference model to detect the sad class from input voice
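The pipeline above could be sketched roughly as follows. This is a minimal illustration, not a proposed implementation: pitch is estimated by autocorrelation over a mono signal, and the "model" is a hypothetical pitch threshold (`threshold_hz`) standing in for whatever the learned classifier would produce from the sad/non-sad dataset.

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) via autocorrelation.

    Searches for the strongest autocorrelation peak between the lags
    corresponding to fmax and fmin (typical human voice range).
    """
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags only
    lag_min = int(sample_rate / fmax)     # smallest lag to consider
    lag_max = int(sample_rate / fmin)     # largest lag to consider
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

def classify_sad(pitch_hz, threshold_hz=160.0):
    """Toy stand-in for the learned model.

    threshold_hz is a hypothetical cutoff; in the real feature it would
    be learned from the sad / non-sad dataset, not hard-coded.
    """
    return "sad" if pitch_hz < threshold_hz else "non-sad"

# Synthetic check: a low 120 Hz tone should land in the "sad" class.
sr = 16000
t = np.arange(sr) / sr
low_voice = np.sin(2 * np.pi * 120 * t)
print(classify_sad(estimate_pitch(low_voice, sr)))
```

In practice the input would come from microphone capture or audio files, pitch would be tracked per-frame rather than over the whole clip, and the classifier would be trained on labeled recordings instead of a fixed threshold.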
Tech stack: