What happened?
If the number of training batches is limited and the dataset contains fewer batches than the limit, validation never starts. The bug can be worked around by setting dataloader.training.limit_batches to null.
What are the steps to reproduce the bug?
Set dataloader.training.limit_batches > number of batches in dataset.
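For context, here is a minimal sketch of the same condition reproduced in plain PyTorch Lightning, assuming dataloader.training.limit_batches maps onto Lightning's limit_train_batches; the dataset and model below are stand-ins, not the actual anemoi classes:

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, IterableDataset


class StreamDataset(IterableDataset):
    """Iterable dataset without __len__, so Lightning cannot infer its size."""

    def __init__(self, n_samples: int = 8):
        self.n_samples = n_samples

    def __iter__(self):
        for _ in range(self.n_samples):
            yield torch.randn(4), torch.randn(1)


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def validation_step(self, batch, batch_idx):
        # If the reported bug is triggered, this never executes.
        x, y = batch
        print("validation step ran")
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2)


if __name__ == "__main__":
    train_loader = DataLoader(StreamDataset(n_samples=8), batch_size=4)  # 2 batches
    val_loader = DataLoader(StreamDataset(n_samples=8), batch_size=4)

    # limit_train_batches is much larger than the 2 batches the loader can
    # actually supply -- the condition described in this issue.
    trainer = pl.Trainer(
        max_epochs=1,
        limit_train_batches=100,
        logger=False,
        enable_checkpointing=False,
    )
    trainer.fit(TinyModel(), train_loader, val_loader)
```

If the reported behaviour is triggered, the run ends after the two available training batches without ever entering validation_step.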
Version
0.3.3
Platform (OS and architecture)
All
Relevant log output
Accompanying data
No response
Organisation
No response
So I think this issue is more related to PyTorch Lightning not knowing the length of the dataset before the training run starts.
If it does know the dataset length, then regardless of which limit_batches value is entered, the validation loop is not skipped and the end-of-validation plots are not skipped either.
This entails implementing the __len__ method on the Dataset class; see the sketch below.
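A minimal sketch of that suggestion, assuming the training dataset is a torch IterableDataset over a known number of samples; the class and attribute names are illustrative, not the actual anemoi ones:

```python
from torch.utils.data import IterableDataset


class GridDataset(IterableDataset):
    """Streams samples, but also reports its length up front."""

    def __init__(self, samples):
        self.samples = samples

    def __iter__(self):
        yield from self.samples

    def __len__(self) -> int:
        # With __len__ defined, the DataLoader (and therefore Lightning's
        # count of training batches) has a finite length before training
        # starts, which is what the comment above suggests is needed for
        # the validation loop to be scheduled.
        return len(self.samples)
```

In the real dataset the length would presumably come from the underlying data store rather than an in-memory list.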