With large datasets, training may take only 1-2 epochs, so the loss profile should be inspected per step. It would therefore be useful to add an option to log losses and metrics at step granularity instead of per epoch. I would suggest this for both training and validation losses, since in some pathological cases where something was wrong with the data setup, the training and validation losses did not actually match.
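As a rough illustration of the request, here is a minimal sketch of step-level logging in plain Python. All names here (`run_training`, `log_every`, `train_step`, `eval_step`) are hypothetical and not part of the project's API; validating only every `log_every` steps is one way to keep the overhead of per-step validation bounded.

```python
def run_training(steps, log_every, train_step, eval_step):
    """Toy training loop that logs train *and* validation loss
    every `log_every` steps instead of once per epoch."""
    history = []
    for step in range(1, steps + 1):
        train_loss = train_step(step)
        if step % log_every == 0:
            # Periodic validation: cheaper than validating on every step.
            val_loss = eval_step(step)
            history.append({"step": step,
                            "train_loss": train_loss,
                            "val_loss": val_loss})
    return history

# Usage with synthetic, monotonically decreasing losses:
log = run_training(
    steps=10,
    log_every=5,
    train_step=lambda s: 1.0 / s,
    eval_step=lambda s: 1.2 / s,
)
# Two log entries, at steps 5 and 10.
```

Comparing the two curves at the same step index would surface the kind of train/validation mismatch described above much earlier than epoch-level logging.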
Can you create a PR for this (it can be in just the regular training code)? Do you have any benchmarking of training time when adding validation loss at each step?