RuntimeError: hit nan for variance_normalized #30
Comments
Am also seeing this.
To be fair, I'm also seeing this on Facebook's MADGRAD now, so I wonder if Adam/MADGRAD are just more likely to trigger this kind of divergence, or if a bug slipped into the training data. Basically, one of the loss values becomes NaN, and this causes the optimizer to fail instantly (I guess SGD just recovers if that happens).
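A common mitigation for this failure mode, independent of Ranger21 itself, is to check the loss for non-finite values before calling backward()/step(), so a single bad batch does not poison the optimizer's running statistics. A minimal sketch of such a guard in a generic PyTorch training loop (model, loader, and criterion are placeholders, not code from this project):

```python
import torch

def train_one_epoch(model, loader, criterion, optimizer, device="cpu"):
    model.train()
    skipped = 0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)

        # Skip the step entirely if the loss is NaN/inf, instead of letting
        # the optimizer update its moment estimates from a bad gradient.
        if not torch.isfinite(loss):
            skipped += 1
            continue

        loss.backward()
        optimizer.step()
    return skipped
```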
Reducing my learning rate solved it.
I've had the same issue. Reducing the learning rate did help, but even at 1e-5 with default parameters, and at 1e-6 with MADGRAD, I still got NaN loss values. Curious whether there's something else I can do.
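If lowering the learning rate alone does not help, PyTorch's anomaly detection can locate the first operation that produces a non-finite value during the backward pass. A minimal sketch, using a toy model purely for illustration; this is a general debugging technique, not something specific to Ranger21 or MADGRAD:

```python
import torch
import torch.nn as nn

# Toy model and batch purely for illustration; substitute your own.
model = nn.Linear(8, 1)
criterion = nn.MSELoss()
inputs = torch.randn(4, 8)
targets = torch.randn(4, 1)

# Anomaly detection is slow; enable it only while debugging.
# If any backward op produces NaN/inf, autograd raises an error whose
# traceback points at the forward op responsible.
with torch.autograd.detect_anomaly():
    loss = criterion(model(inputs), targets)
    loss.backward()
```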
I've just hit it too :( |
I found my error. I had some training data with values way outside my expected range of 0-1, which I found by adding an assert in my dataloader.
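For reference, a minimal sketch of that kind of range check, here wrapped around an existing Dataset; the [0, 1] range and the (x, y) sample layout are assumptions for illustration:

```python
import torch
from torch.utils.data import Dataset

class CheckedDataset(Dataset):
    """Wraps an existing dataset and validates each sample on access."""

    def __init__(self, base):
        self.base = base

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        x, y = self.base[idx]
        # Fail loudly on bad samples instead of letting them NaN the loss.
        assert torch.isfinite(x).all(), f"non-finite input at index {idx}"
        assert x.min() >= 0.0 and x.max() <= 1.0, f"input outside [0, 1] at index {idx}"
        return x, y
```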
I integrated ranger21 into https://github.com/glinscott/nnue-pytorch and am exploring different parameters. I'm always hitting this issue after the first step of training. This is what I'm using:
Changing lr, eps, weight_decay, use_adaptive_gradient_clipping, and use_warmup appears to have no effect. The NaN comes from the forward pass in the second step, so some weights become NaN after the first update. The Adam and AdaBelief cores work fine.
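For readers trying to reproduce this, a rough sketch of how Ranger21 is typically constructed with the parameters named above. The values below are illustrative placeholders, not the configuration from this issue, and the num_epochs / num_batches_per_epoch arguments are assumed to be required for the warmup/warmdown schedule as described in the project README:

```python
import torch.nn as nn
import ranger21

# Toy model and schedule sizes purely for illustration.
model = nn.Linear(8, 1)
num_epochs = 10
num_batches_per_epoch = 100

# Illustrative hyperparameters only, not the configuration used in this issue.
optimizer = ranger21.Ranger21(
    model.parameters(),
    lr=1e-3,
    weight_decay=1e-4,
    eps=1e-8,
    use_adaptive_gradient_clipping=True,
    use_warmup=True,
    num_epochs=num_epochs,                        # assumed required for the internal LR schedule
    num_batches_per_epoch=num_batches_per_epoch,  # assumed required for the internal LR schedule
)
```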
Calling Ranger21 with mostly default parameters:
Training seems fine for half a day with decent progress on all loss metrics, but then halts: