Roughly what order of magnitude should the loss be during pretraining? #2

kssmmm opened this issue Apr 10, 2024 · 3 comments

kssmmm commented Apr 10, 2024

I pretrained the model on LibriSpeech 960h and got a loss of 0.2. However, when I used the checkpoint to fine-tune on LibriSpeech 100h, I got a dev WER of about 100. Did I make a mistake during the pretraining phase or the fine-tuning phase?

Alexander-H-Liu (Owner) commented Apr 10, 2024

Hi,
Your training loss seems too low; it should be ~1.4 after training for 200k steps and ~1.1 after 400k steps.
A very low loss in self-distillation usually means the teacher model has collapsed (constant output regardless of input) and training has degenerated into a trivial task.
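
One rough way to check for this failure mode is to monitor how much the teacher's outputs vary across a batch. Below is a minimal PyTorch sketch; `teacher` and its `(batch, time, dim)` output shape are assumptions for illustration, not this repo's actual API:

```python
import torch

@torch.no_grad()
def teacher_output_variance(teacher, batch):
    # Hypothetical helper, not part of this repo: `teacher` is assumed
    # to map a batch of inputs to (batch, time, dim) representations.
    reps = teacher(batch)
    pooled = reps.mean(dim=1)               # average over time: (batch, dim)
    return pooled.var(dim=0).mean().item()  # ~0 => same output for every input
```

Logging this every few hundred steps alongside the loss makes a collapse easy to spot: the variance decays toward zero at the same time the loss drops far below the ~1.4 / ~1.1 range above.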

kssmmm commented Apr 11, 2024

Previously, I had modified the config file, changing fp16 to bf16 and lowering the max-token value from 3.8 million to 2.4 million. Now that I have changed them back, the loss during pretraining is consistent with what you mentioned. I didn't expect these two parameters to have such a significant impact.
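
One possible reason the fp16-to-bf16 switch had such an impact (my own assumption, not confirmed in this thread) is bf16's coarser precision: it keeps fp32's exponent range but has only 8 mantissa bits, so small EMA-style updates to the teacher's weights can round away entirely. A minimal demonstration:

```python
import torch

# Near 1.0, bf16 values are spaced ~2^-8 (0.0039) apart, while fp16 values
# are spaced ~2^-10 (0.00098) apart, so a 1e-3 update survives rounding in
# fp16 but is lost in bf16.
delta = 1e-3
print(torch.tensor(1.0, dtype=torch.bfloat16) + delta)  # tensor(1., dtype=torch.bfloat16)
print(torch.tensor(1.0, dtype=torch.float16) + delta)   # tensor(1.0010, dtype=torch.float16)
```

If the teacher is updated as an EMA of the student with a decay close to 1, the per-step updates are on this order, which could plausibly stall or destabilize the teacher under bf16.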

hadas commented Apr 12, 2024

Hi, I ran into a similar issue with a very low loss and cluster collapse. Apart from the batch size (4), I haven't changed anything in the base configuration, and the collapse also happened with the default batch size. What can I do to prevent it?
