Thanks for your great work! I ran your code and it works well.
By the way, I use a dynamic "max_len" instead of 400 when I synthesize speech. However, it raises errors when long text is given, since your position_embedding's max length is hard-coded to 1024 (e.g. https://github.com/soobinseo/Transformer-TTS/blob/master/network.py#17, https://github.com/soobinseo/Transformer-TTS/blob/master/network.py#63). I think it would be better to increase that number so that long text works.
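For reference, a minimal sketch of how the positional table could be built with a larger (or configurable) maximum length. This assumes a standard sinusoidal encoding loaded into a frozen `nn.Embedding`; the actual helper names and hidden size in network.py may differ, and `max_len`/`d_model` below are placeholder values:

```python
import numpy as np
import torch
import torch.nn as nn

def get_sinusoid_table(max_len, d_model):
    # Standard sinusoidal positional encoding (Vaswani et al., 2017).
    position = np.arange(max_len)[:, None]                              # (max_len, 1)
    div_term = np.power(10000.0, 2 * (np.arange(d_model) // 2) / d_model)
    table = position / div_term                                         # (max_len, d_model)
    table[:, 0::2] = np.sin(table[:, 0::2])                             # even dims -> sin
    table[:, 1::2] = np.cos(table[:, 1::2])                             # odd dims  -> cos
    return torch.FloatTensor(table)

# Hypothetical usage: size the table from the longest expected sequence
# instead of the hard-coded 1024.
max_len = 4096    # assumption: large enough for the longest text/mel sequence
d_model = 256     # assumption: the model's hidden size
pos_emb = nn.Embedding.from_pretrained(get_sinusoid_table(max_len, d_model), freeze=True)
```

Since the encoding is deterministic, enlarging the table only costs a little memory and does not require retraining.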
Thanks for your work again. :)
Thanks for your advice.
I will check it soon.