Can anyone share a 44k pretrained model, or give some guidance on training a 44k model from scratch with a tiny dataset? #704
Comments
Set a large epoch number and train longer.
Thanks. Should I use the small model rather than the base model? @SWivid
Smaller size is fine.
I have set the epoch count to 100000 and use the F5 small model architecture.
Have you reset the epoch count and restarted training, or did you continue from a previous run?
I created a new project rather than resuming training. The learning rate curve looks OK because I set the epoch count to 100000.
What is the batch size, i.e. batch_size_per_gpu, and how many GPUs? For reference: we use the default settings in the yaml file for the small model, trained on 24 hours of LJSpeech.
These are my settings; I have only one 4080 GPU.
That equals 10k updates, for reference; it surely takes some time to train from scratch.
You mean I need 100k × 307200 / 4800 = 6400k updates?
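The scaling in the question above can be checked with a short script. The helper name and its framing as "frames per step" are my own; the figures (100k reference updates, 307200 vs. 4800 frames per batch) come from the exchange itself:

```python
def total_updates(reference_updates, reference_batch_frames, actual_batch_frames):
    """Scale a reference update count to a smaller effective batch size.

    With fewer frames per optimizer step, seeing the same total amount of
    data requires proportionally more updates.
    """
    return reference_updates * reference_batch_frames // actual_batch_frames

# Figures from the thread: 100k updates at 307200 frames/step,
# rerun at 4800 frames/step on a single GPU.
updates = total_updates(100_000, 307_200, 4_800)
print(f"{updates / 1_000:.0f}k updates")  # 6400k
```

This is only a proportionality argument; it assumes the training recipe is defined by total frames seen rather than by wall-clock epochs.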
Hello, how many steps did you train, and how many hours did it take?
Check the previous response.
Question details
I want to train a 44k model to get better voice quality, but training failed. My dataset is about 10 hours. After about 300k updates, the learning rate had decreased to 1e-13 and effectively stopped updating; continuing to 400k updates did not improve things. The voice is clear, but the content is still a mess. I think the model cannot learn alignment with a tiny dataset. Does anyone have a successful example?
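The learning-rate collapse described above is consistent with a decay schedule whose length is derived from the epoch count, which is why the first reply suggests setting a large epoch number. Below is a minimal sketch of a linear warmup/decay schedule; the peak LR, warmup length, and schedule shape are illustrative assumptions, not the exact values used by the F5-TTS trainer:

```python
def lr_at_step(step, total_steps, peak_lr=7.5e-5, warmup_steps=20_000):
    """Linear warmup followed by linear decay to zero.

    Trainers typically compute total_steps from epochs * steps_per_epoch,
    so a modest epoch count on a tiny dataset drives the LR toward zero
    long before the model has converged.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = (total_steps - step) / (total_steps - warmup_steps)
    return max(0.0, peak_lr * remaining)

# Near the end of a short schedule the LR is almost zero at 300k updates;
# a much larger epoch count stretches the same schedule so the LR stays high.
print(lr_at_step(300_000, total_steps=310_000))    # nearly zero
print(lr_at_step(300_000, total_steps=6_400_000))  # still near peak
```

Under this reading, the observed 1e-13 learning rate at 300k updates means the schedule was exhausted, so further updates could not change the model regardless of data size.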