As noticed in the image above, during the first epoch the training speed is only about 8 it/s, but it keeps accelerating and reaches 30 it/s by the third epoch.
This is expected due to the way data loading is done. As images are read from disk, they are continually cached into memory. Eventually all of the images are loaded into memory and training is no longer bottlenecked by data loading.
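For illustration, here is a minimal sketch of how such a cache can arise (not necessarily the exact mechanism used in this repo): a PyTorch `Dataset` that memoizes decoded images, so every epoch after the first skips the disk read. The `CachedImageDataset` name and its arguments are hypothetical.

```python
import torch
from torch.utils.data import Dataset
from PIL import Image


class CachedImageDataset(Dataset):
    """Reads each image from disk once, then serves it from an in-memory cache.

    During the first epoch every __getitem__ pays the disk + decode cost; later
    epochs hit the cache, so iteration speed climbs until all samples are cached.
    (Hypothetical sketch, not the repo's actual loader.)
    """

    def __init__(self, paths, labels, transform=None):
        self.paths = paths
        self.labels = labels
        self.transform = transform
        self._cache = {}  # index -> decoded PIL image

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        if idx not in self._cache:
            # First access: read from disk and remember the decoded image.
            self._cache[idx] = Image.open(self.paths[idx]).convert("RGB")
        img = self._cache[idx]
        if self.transform is not None:
            img = self.transform(img)
        return img, self.labels[idx]
```

One caveat with this kind of per-dataset cache: if the `DataLoader` uses `num_workers > 0`, each worker process keeps its own copy of the cache, so the speed-up only appears once every worker has touched every index. That, or the OS page cache already being warm, might also explain why a reimplementation shows the same speed in every epoch.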
@jakesnell Thanks for your reply; that explanation makes sense. In fact, I recently reimplemented PN using most of your code and got nearly the same train and val accuracy as your demo, but the training speed is the same in every epoch (the in-memory cache may not be reused in the next epoch), so I was wondering whether there is any special mechanism that achieves this.
@aiyolo That's an interesting phenomenon. Are the datasets stored on the same kind of disk? Maybe your data is stored on an SSD?
By the way, does it matter if the first epoch's speed is a little low?