Is it normal that the training time per epoch (real time or wall time) increases as training progresses, just like in the attached image?
I have tested 5 models, adjusting several parameter values such as l_max and rcutoff, even though these parameters might not be related. The drops in the line indicate where I restarted the training using the append=true setting.
It seems GPU memory usage increased slightly after epoch 20 but saturated in the end. Could that be the reason? I thought the model complexity is fixed at initialization (fixed memory) and only the weights are trainable. Maybe I don't fully understand exactly how a GNN works.
(attached image: settings)
Thank you for your help in advance!
Sorry if this question has already been discussed; I tried to search but couldn't find it.
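One way to rule out the logging itself is to time each epoch explicitly and record peak GPU memory alongside it. Below is a minimal PyTorch-style sketch; the model, loader, and function names are illustrative placeholders, not the actual training setup:

```python
import time
import torch

def train_with_timing(model, loader, optimizer, loss_fn, epochs, device="cuda"):
    """Log wall time and peak GPU memory for each epoch separately,
    so a cumulative-time readout is not mistaken for per-epoch time.
    Assumes a CUDA device is available."""
    for epoch in range(epochs):
        torch.cuda.reset_peak_memory_stats(device)
        start = time.perf_counter()
        for batch, target in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch.to(device)), target.to(device))
            loss.backward()
            optimizer.step()
        elapsed = time.perf_counter() - start  # this epoch only, not cumulative
        peak_mb = torch.cuda.max_memory_allocated(device) / 1e6
        print(f"epoch {epoch}: {elapsed:.1f} s, peak GPU memory {peak_mb:.0f} MB")
```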
Replies: 2 comments

- Oh, I just noticed it is accumulated time.
- Hi @Matcom274, correct, it is cumulative time; this looks normal. That said, #311 has previously been reported, so if you ever observe that issue, please do post in that discussion.
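For anyone hitting the same confusion: if the logged time column is cumulative, the per-epoch wall time is just the difference between consecutive readings. A minimal sketch, assuming a metrics CSV with a cumulative wall-time column (the file name and column name here are hypothetical; adjust them to your own log):

```python
import numpy as np
import pandas as pd

# Hypothetical log file; adjust the path and column name to your own metrics file.
log = pd.read_csv("metrics.csv")
cumulative = log["wall_time"].to_numpy()

# Per-epoch time is the difference between consecutive cumulative readings.
# Note: if the run was restarted with append=true and the counter reset,
# the diff at the restart point will be negative and should be discarded;
# that reset is also what produces the drop in the plotted curve.
per_epoch = np.diff(cumulative, prepend=0.0)
print(per_epoch)  # should be roughly flat if training speed is constant
```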