
Why is the training speed accelerating across epochs? #15

Open
aiyolo opened this issue Oct 10, 2019 · 3 comments

Comments

aiyolo commented Oct 10, 2019

[screenshot of training progress output]

As shown in the screenshot above, the training speed is only about 8 it/s during the first epoch, but it keeps accelerating and reaches about 30 it/s by the third epoch.

@jakesnell (Owner)
This is expected due to the way data loading is done. As images are read from disk, they are continually cached into memory. Eventually all of the images are loaded into memory and training is no longer bottlenecked by data loading.
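The caching effect described above can be sketched in a few lines (hypothetical names; the repo's actual data-loading code may differ):

```python
# Minimal sketch: items are cached in memory after the first disk read,
# so later epochs skip the slow I/O entirely.

class CachingLoader:
    """Serves items from memory after the first read from disk."""

    def __init__(self, load_fn):
        self.load_fn = load_fn  # slow function that reads one item from disk
        self.cache = {}

    def __call__(self, key):
        if key not in self.cache:
            self.cache[key] = self.load_fn(key)  # slow path: disk
        return self.cache[key]                   # fast path: memory

# Simulate three epochs over the same files.
reads = []

def slow_read(path):
    reads.append(path)  # record each simulated disk access
    return f"image:{path}"

loader = CachingLoader(slow_read)
paths = ["a.png", "b.png"]
for epoch in range(3):
    batch = [loader(p) for p in paths]

# Only the first epoch touches the disk; epochs 2 and 3 are served from
# memory, which is why it/s climbs until the cache is warm.
```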

aiyolo commented Oct 11, 2019

@jakesnell Thanks for your reply. That explanation makes sense. In fact, I recently reimplemented Prototypical Networks using most of your code and got nearly the same train and val accuracy as your demo, but my training speed is the same in every epoch (the in-memory cache may not be reused in the next epoch), so I was wondering whether there is any special mechanism needed to achieve this?
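For what it's worth, one common reason a Python-level cache does not carry over into the next epoch (an assumption here, not something confirmed in this thread) is that the cache lives inside data-loading worker processes that are torn down and recreated every epoch. A toy comparison of a per-epoch cache versus one that outlives epochs:

```python
reads = []

def slow_read(path):
    reads.append(path)  # record each simulated disk access
    return f"image:{path}"

def run_epoch(paths, cache):
    out = []
    for p in paths:
        if p not in cache:            # cold entry: fall through to disk
            cache[p] = slow_read(p)
        out.append(cache[p])
    return out

paths = ["a.png", "b.png"]

# Cache recreated every epoch (like short-lived loader workers): always cold.
for epoch in range(3):
    run_epoch(paths, cache={})
cold_reads = len(reads)  # every epoch hits the disk

# Cache shared across epochs: cold only once, then warm.
reads.clear()
shared = {}
for epoch in range(3):
    run_epoch(paths, shared)
warm_reads = len(reads)  # only the first epoch hits the disk
```

If the cache is rebuilt each epoch, every epoch runs at disk speed and no speed-up ever appears, which would match the behavior described above.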

@Bryce1010

@aiyolo It's an interesting phenomenon. Are the datasets stored on the same kind of hard disk? Maybe your data is stored on an SSD?
By the way, does it matter if the first epoch's speed is a little low?
