During the training process, the loss has been rising #84
Comments
I didn't change the training code, so I'm confused about this.
Maybe you can change "disc_loss" in the config.yml to ragan_ls, which is the loss stated in the paper. By the way, could you share the GoPro dataset you downloaded from the link with me? I have no access to it for some reason.
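For reference, a minimal sketch of what that change could look like in config.yml, assuming the discriminator loss sits under the model section as in the DeblurGAN-v2 layout (key names and the exact spelling of the value, e.g. ragan_ls vs. ragan-ls, may differ between repo versions, so verify against your own config):

```yaml
# Hypothetical config.yml fragment -- keys taken from this thread, not verified against a specific commit
model:
  g_name: fpn_inception   # generator discussed in this issue
  disc_loss: ragan_ls     # relativistic average LSGAN loss suggested above
                          # (some versions spell this ragan-ls)
```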
OK, thanks! I will try it immediately.
Did you manage to train it successfully?
Hello, I see that you set train_batches_per_epoch: 1000.
Hi, I'm also wondering why you commented out these two lines. Are they unnecessary for training? BTW, I'm also wondering what "bounds" does, which you commented out as well.
Commenting out train/val_batches_per_epoch ensures that in each epoch the network sees all of the training/validation samples. The "bounds" values are intended to split the train/val set; they should all be set to [0.0, 1.0] if you specify separate training and validation sets.
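To make that concrete, here is a minimal sketch of how those options could look after the change, assuming the key names used in this thread (batches_per_epoch left commented out so the whole set is iterated each epoch, and bounds widened to the full range because train and val point at separate folders):

```yaml
# Hypothetical train/val fragment of config.yml -- adjust key names to your repo version
train:
  # train_batches_per_epoch: 1000   # commented out: see every training sample each epoch
  bounds: [0.0, 1.0]                # no internal split needed when train/val dirs are separate
val:
  # val_batches_per_epoch: 100      # commented out for the same reason
  bounds: [0.0, 1.0]
```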
Have you solved this issue? I have a similar problem: the loss does not improve during training regardless of what I do.
I use the mobilenet and fpn_inception models, the training data is GoPro, and the config is as follows:
Has anyone trained successfully?