Code Repository for the INTERSPEECH'24 Paper - Exploring Multilingual Unseen Speaker Emotion Recognition: Leveraging Co-Attention Cues in Multitask Learning
Code from the paper "Towards Speech-to-Pictograms Translation" (Interspeech 2024)
Code for our INTERSPEECH 2024 paper: Comparing ASR Systems in the Context of Speech Disfluencies.