Question about training loop #669
Hi! I'm trying to fork the repo and add some functionality for an experiment, but that requires an addition to the training loop. I've read the documentation and the code, but I can't seem to find where the training loop itself is defined. Can somebody point me in the right direction?
Thanks in advance!
Comments
The training loop is defined in trial.py; see the methods at torchbearer/torchbearer/trial.py line 946 (commit 9d97c60) and torchbearer/torchbearer/trial.py line 1019 (commit 9d97c60).
I'd have thought most additions to the training loop could be made via one of the many callback hooks rather than by modifying the source itself, though.
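For example, a one-off hook can be written with the callback decorators and passed straight into the Trial's callbacks list. A minimal sketch (the function and what it prints are purely illustrative):

```python
import torchbearer
from torchbearer.callbacks import on_step_training

# Illustrative hook: runs after every optimiser step, reading the
# mutable state dict, with no changes to trial.py itself.
@on_step_training
def print_grad_norm(state):
    model = state[torchbearer.MODEL]
    total = sum(p.grad.norm().item()
                for p in model.parameters() if p.grad is not None)
    print('grad norm: {:.4f}'.format(total))

# then: Trial(model, optimizer, loss, callbacks=[print_grad_norm])
```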
Thanks for getting back to me! I'm trying to integrate stochastic weight averaging (SWA) as in swa_utils from PyTorch. The way they implemented SWA is as a wrapper on top of the torch optimizer (see their example below). Based on this, it seems I will need to pass the swa_model, the optimizer, and at least the swa_scheduler, and then handle the parameter updates after SWA kicks in during the training loop. Do you have any suggestions on how to go about this? Sorry, I'm new to both Torchbearer and SWA... I really appreciate any suggestions.
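Their example looks roughly like this (paraphrased from the torch.optim.swa_utils documentation; model, optimizer, loader and loss_fn stand in for whatever you already have, and swa_start/300 are just the values from that example):

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# model, optimizer, loader and loss_fn are assumed to exist already
swa_model = AveragedModel(model)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
swa_start = 160
swa_scheduler = SWALR(optimizer, swa_lr=0.05)

for epoch in range(300):
    for input, target in loader:
        optimizer.zero_grad()
        loss_fn(model(input), target).backward()
        optimizer.step()
    if epoch > swa_start:
        # average the current weights into swa_model, then follow the SWA LR
        swa_model.update_parameters(model)
        swa_scheduler.step()
    else:
        scheduler.step()

# recompute batch-norm statistics for the averaged model
update_bn(loader, swa_model)
```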
Really, really (really!) untested, but something like this should be equivalent based on the above, guessing at the correct indentation:
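A minimal sketch, assuming the stock Callback hooks and state keys (SWA is just a name I've made up here, and swa_start/swa_lr mirror the PyTorch example):

```python
import torch
import torchbearer
from torchbearer.callbacks import Callback


class SWA(Callback):
    """Maintain an AveragedModel and an SWALR schedule from inside the trial."""

    def __init__(self, swa_start=160, swa_lr=0.05):
        super().__init__()
        self.swa_start = swa_start
        self.swa_lr = swa_lr
        self.swa_model = None
        self.swa_scheduler = None

    def on_start(self, state):
        # wrap whatever model the trial is training
        self.swa_model = torch.optim.swa_utils.AveragedModel(
            state[torchbearer.MODEL])
        self.swa_scheduler = torch.optim.swa_utils.SWALR(
            state[torchbearer.OPTIMIZER], swa_lr=self.swa_lr)

    def on_end_epoch(self, state):
        # once past swa_start, fold the weights in and step the SWA LR schedule
        if state[torchbearer.EPOCH] > self.swa_start:
            self.swa_model.update_parameters(state[torchbearer.MODEL])
            self.swa_scheduler.step()
```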
@jonhare Thank you so much! I've spent the weekend playing around with it. At first it was behaving strangely: instead of passing the swa_callback alone, I was adding it to my existing list of callbacks (which has other things in it) and passing the whole list. It seems some things don't play well together, but I suspect that's due to some unnecessary schedulers etc. in there. Of course I now have to combine the necessary portions, but it's working! Note that I added the update_bn line before the trial, as it looked like the batch-norm statistics weren't updating properly; see the sketch below. Hopefully this is correct! I'm extremely grateful that you took time out of your day to help me out. You really saved me a ton of headache, and I love Torchbearer so much that I didn't want to switch the whole thing.
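Roughly, the wiring ended up looking like this (just a sketch; swa, train_loader, val_loader and loss_fn are my local names):

```python
# train with the SWA callback attached
swa = SWA(swa_start=160)
trial = torchbearer.Trial(model, optimizer, loss_fn, metrics=['loss'],
                          callbacks=[swa]).with_generators(train_loader,
                                                           val_loader)
trial.run(epochs=300)

# recompute batch-norm statistics before evaluating the averaged model
torch.optim.swa_utils.update_bn(train_loader, swa.swa_model)
```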