Training for Korean Languages #6
Comments
Hi, @hathubkhn. What kind of dataset are you using? How's the quality, and what's the style of the speech (reading style or expressive style)? These factors affect TTS training a lot. To my understanding, if you want to train a high-quality TTS system, you need high-quality data. I am also doing some experiments combining a speech enhancement system as a front-end (or as a preprocessing step). You can check this repo: https://github.com/facebookresearch/denoiser. The enhanced quality is quite good. As for the style, if your data is expressive, like emotional speech, you might need another encoder to model these styles. A famous work is the Global Style Token (https://arxiv.org/abs/1803.09017). However, we do not support modeling styles in this repo.
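For reference, a minimal sketch of using that denoiser repo's pretrained model as a preprocessing pass over the training audio, roughly following the Python usage shown in its README (the file paths here are placeholders; double-check the current README for the exact API):

```python
import torch
import torchaudio
from denoiser import pretrained
from denoiser.dsp import convert_audio

# Load a pretrained enhancement model (dns64 is one of the released checkpoints).
model = pretrained.dns64().cuda()

# "noisy.wav" / "denoised.wav" are placeholder paths.
wav, sr = torchaudio.load("noisy.wav")
wav = convert_audio(wav.cuda(), sr, model.sample_rate, model.chin)

with torch.no_grad():
    denoised = model(wav[None])[0]

torchaudio.save("denoised.wav", denoised.cpu(), model.sample_rate)
```

You would run this over the whole corpus before feature extraction, then train the TTS model on the enhanced audio.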
Thank you for your reply. Actually, I am using the KSS dataset (https://www.kaggle.com/datasets/bryanpark/korean-single-speaker-speech-dataset). The sampling rate is 22050 Hz. However, when I finished training and checked some generated files, I found the performance is not good. I am not sure whether using the HiFi-GAN universal vocoder affects the results or not. I also have another question: if I want to model emotion styles, do I have to use another encoder as you said, and do I also need to add an extra emotion loss when training on that kind of data?
eval.zip
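Regarding the style/emotion question above: the core of a GST-style approach is a bank of learnable style tokens attended to by a reference-encoder query, and in the original paper it is trained jointly with the normal TTS reconstruction loss, without any explicit emotion labels or extra loss (an auxiliary emotion classifier is optional if labels are available). A minimal sketch of such a style-token layer in PyTorch (dimensions and names are illustrative, not part of this repo):

```python
import torch
import torch.nn as nn

class StyleTokenLayer(nn.Module):
    """Bank of learnable style tokens attended to by a reference-encoder query."""

    def __init__(self, num_tokens=10, token_dim=256, query_dim=128, num_heads=4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, token_dim))
        self.query_proj = nn.Linear(query_dim, token_dim)
        self.attn = nn.MultiheadAttention(embed_dim=token_dim, num_heads=num_heads,
                                          batch_first=True)

    def forward(self, ref_embedding):
        # ref_embedding: (B, query_dim), e.g. output of a reference encoder over mels
        q = self.query_proj(ref_embedding).unsqueeze(1)                  # (B, 1, D)
        kv = torch.tanh(self.tokens).unsqueeze(0).expand(q.size(0), -1, -1)
        style, _ = self.attn(q, kv, kv)                                  # (B, 1, D)
        return style.squeeze(1)                                          # (B, D)
```

The resulting style embedding is typically broadcast and added to (or concatenated with) the text-encoder outputs before the decoder.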
Hi, @hathubkhn. I listened to your samples. It sounds like some phonemes are not pronounced properly, while others sound okay (like the ones at the beginning). Just a guess: maybe you can check your text preprocessing to see whether it transforms the text into the correct phonemes. You can also check whether the model synthesizes the training data well.
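One quick way to sanity-check the Korean frontend is to compare what your preprocessing feeds the model against a standalone grapheme-to-phoneme pass. The sketch below uses g2pk and jamo, which are common Korean frontend tools but are not part of this repo; they are only an example, and the sample sentence is arbitrary:

```python
# pip install g2pk jamo
from g2pk import G2p
from jamo import h2j

g2p = G2p()
text = "감사합니다. 오늘 날씨가 좋네요."

pron = g2p(text)          # surface pronunciation, still written in Hangul
units = list(h2j(pron))   # decompose into jamo units if your symbol set uses jamo
print(pron)
print(units)
```

If the symbols produced here differ from what your preprocessing writes into the training metadata (e.g. missing pronunciation rules or a mismatched symbol set), that would explain phonemes being mispronounced at synthesis time.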
Thank you, |
Hello authors,
First of all, thank you for providing such an impressive repository.
I want to retrain your model on Korean, for example with KSS (Korean Single Speaker). However, when I synthesize, the results are not good for Korean. Could you give me some guidelines for that?
Thank you very much.