This repository has been archived by the owner on Oct 11, 2023. It is now read-only.
I ran train.py for the warp training stage twice (python train.py --name deep_fashion/warp --model warp --dataroot data/deep_fashion).
However, training does not proceed beyond epoch 3. Could you help me with this issue?
I have attached screenshots for reference.
One thing I noticed is that when the loss values are exactly the same, the execution freezes. Have you used callbacks or anything else that stops the training?
I went through the code but could not find anything like that.
The loss values appearing identical may just come from the %.3f printing format.
Please try, in visualizer.py line 242, message += '%s: %.3f ' --> if you increase the precision to %.6f, it may show the difference between iterations.
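A minimal standalone sketch (plain Python, not the repo's code) of why %.3f can make two different losses look frozen, while %.6f reveals the change:

```python
# Two loss values that differ only past the third decimal place.
loss_a = 1.2343
loss_b = 1.2341

# At %.3f precision both print as "1.234", which can look like a freeze.
print('loss: %.3f' % loss_a)
print('loss: %.3f' % loss_b)

# At %.6f the per-iteration difference becomes visible.
print('loss: %.6f' % loss_a)
print('loss: %.6f' % loss_b)
```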
For your issue, maybe you can debug and check the values of opt.start_epoch + 1 and opt.n_epochs + 1.
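A sketch of what to check, with hypothetical values (start_epoch and n_epochs here are stand-ins; the real values come from the option parser or a resumed checkpoint, not from this snippet): if the training loop iterates over range(opt.start_epoch + 1, opt.n_epochs + 1) and start_epoch was restored as equal to n_epochs, the range is empty and no further epochs run.

```python
# Hypothetical values mirroring opt.start_epoch and opt.n_epochs.
start_epoch = 3   # e.g. restored from a checkpoint saved after epoch 3
n_epochs = 3      # total epochs requested on the command line

# With a loop written as range(start_epoch + 1, n_epochs + 1),
# these values produce an empty range: training stops at epoch 3.
epochs = list(range(start_epoch + 1, n_epochs + 1))
print(epochs)  # empty list -> no epochs left to run
```

Printing both values right before the loop is a quick way to tell a genuine hang apart from a loop that simply has nothing left to iterate.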