
Training for Korean Languages #6

Open
hathubkhn opened this issue Oct 13, 2022 · 6 comments

Comments

@hathubkhn

Hello authors,
First of all, thank you for giving us such an impressive repository.
I want to retrain your model on Korean, for example with KSS (Korean Single Speaker). However, when I synthesize, the results are not good for Korean. Can you give me some guidelines for that?

Thank you very much

@hathubkhn
Author

neutral-normal-3_0751-Actor_01

@ga642381
Owner

Hi, @hathubkhn. What kind of dataset are you using? How's the quality and what's the style of the speech (reading style or expressive style)? These factors affect the TTS training a lot.

To my understanding, if you want to train a high-quality TTS system, you need high-quality data. I am also experimenting with a speech enhancement system as a front-end (i.e., as a preprocessing step). You can check this repo: https://github.com/facebookresearch/denoiser. The enhanced quality is quite good.
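For instance, here is a minimal sketch of using that denoiser as a preprocessing step, following the usage shown in its README (the file names are placeholders):

```python
# Minimal sketch: enhance a training utterance with the pretrained dns64
# model from facebookresearch/denoiser before TTS preprocessing.
# Assumes `pip install denoiser`; file names are placeholders.
import torch
import torchaudio
from denoiser import pretrained
from denoiser.dsp import convert_audio

model = pretrained.dns64()                          # pretrained Demucs-based denoiser
wav, sr = torchaudio.load("noisy_utterance.wav")    # placeholder input file
wav = convert_audio(wav, sr, model.sample_rate, model.chin)
with torch.no_grad():
    enhanced = model(wav[None])[0]                  # add batch dim, then drop it
torchaudio.save("enhanced_utterance.wav", enhanced.cpu(), model.sample_rate)
```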

As for the style: if your data is expressive, e.g. emotional, you might need another encoder to model these styles. A well-known work is the Global Style Token (https://arxiv.org/abs/1803.09017). However, we do not support style modeling in this repo.
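To make the idea concrete, here is a toy sketch of the style-token attention from the GST paper; it is not part of this repo, and all dimensions are illustrative assumptions:

```python
# Toy sketch of a Global Style Token (GST) layer (Wang et al., 2018).
# Not part of this repo; all dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class StyleTokenLayer(nn.Module):
    def __init__(self, num_tokens=10, token_dim=256, ref_dim=128, num_heads=4):
        super().__init__()
        # Learnable "style tokens"; a reference embedding attends over them.
        self.tokens = nn.Parameter(torch.randn(num_tokens, token_dim))
        self.query_proj = nn.Linear(ref_dim, token_dim)
        self.attn = nn.MultiheadAttention(embed_dim=token_dim,
                                          num_heads=num_heads, batch_first=True)

    def forward(self, ref_embedding):  # (batch, ref_dim), e.g. from a reference encoder
        query = self.query_proj(ref_embedding).unsqueeze(1)       # (batch, 1, token_dim)
        keys = torch.tanh(self.tokens).unsqueeze(0).expand(
            ref_embedding.size(0), -1, -1)                        # (batch, num_tokens, token_dim)
        style, _ = self.attn(query, keys, keys)                   # weighted sum of tokens
        return style.squeeze(1)                                   # (batch, token_dim)

# The resulting style embedding is typically broadcast along the time axis and
# added to (or concatenated with) the text encoder outputs to condition the decoder.
```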

@hathubkhn
Author

Thank you for your reply. Actually, I am using the KSS dataset (https://www.kaggle.com/datasets/bryanpark/korean-single-speaker-speech-dataset). The sampling rate is 22050 Hz. However, after finishing training and checking some generated files, I see the performance is not good. I am not sure whether using the universal HiFi-GAN vocoder affects this or not.
About the dataset: it is a single speaker with a normal (reading-style) voice.

I also have another question: if I want to incorporate emotion styles, do I need to use another encoder, as you said, and also add an emotion loss when training on that kind of data?

@hathubkhn
Author

eval.zip
This is the result I extracted from the model. Could you help me evaluate what the problem is? :(

@ga642381
Owner

ga642381 commented Oct 15, 2022

Hi, @hathubkhn
I think using HiFi-GAN can actually improve the quality. Just make sure you extract the mel-spectrogram properly in the preprocessing stage, i.e., consistently with your HiFi-GAN vocoder.
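As a sanity check, the mel extraction parameters must match the config the vocoder was trained with. Here is a sketch, assuming the parameter values published with the universal HiFi-GAN checkpoints (22050 Hz, 80 mels, n_fft 1024, hop 256, win 1024, fmin 0, fmax 8000); read the actual values from your own config.json:

```python
# Sketch: extract mels with parameters matching your HiFi-GAN config.
# The defaults below are the values published with the universal HiFi-GAN
# checkpoints -- always read them from the vocoder's config.json instead.
import librosa
import numpy as np

def mel_spectrogram(wav, sr=22050, n_fft=1024, hop_length=256,
                    win_length=1024, n_mels=80, fmin=0, fmax=8000):
    spec = librosa.stft(wav, n_fft=n_fft, hop_length=hop_length,
                        win_length=win_length, window="hann")
    mel_basis = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels,
                                    fmin=fmin, fmax=fmax)
    mel = mel_basis @ np.abs(spec)
    # Log compression with the same clipping floor HiFi-GAN uses (1e-5).
    return np.log(np.clip(mel, a_min=1e-5, a_max=None))

wav, _ = librosa.load("kss_sample.wav", sr=22050)  # placeholder path
mel = mel_spectrogram(wav)  # must match what the vocoder saw during its training
```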

I listened to your sample. It sounds like some phonemes cannot be pronounced appropriately, but some phonemes sound okay (like the ones at the beginning). Just guessing; maybe you can check your text preprocessing to see whether it transforms the text into the correct phonemes.
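For Korean, one quick way to inspect the grapheme-to-phoneme step is shown below; this sketch assumes the g2pk package, which may differ from whatever G2P module your preprocessing config actually uses:

```python
# Sketch: sanity-check Korean grapheme-to-phoneme conversion with g2pk
# (pip install g2pk). g2pk is an assumption here -- substitute the G2P
# module your preprocessing pipeline actually calls.
from g2pk import G2p

g2p = G2p()
text = "안녕하세요"   # "Hello" in Korean
print(g2p(text))      # check whether the pronunciation-level output looks right
```

If the phoneme sequences look wrong here, fix the text frontend before retraining.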

You can also check whether the model can synthesize the training data itself well.

@hathubkhn
Author

hathubkhn commented Oct 15, 2022

Thank you.
What if the text preprocessing does transform the text into the correct phonemes? Could there be any other reasons for this problem?
