Training and Validation Loss #46
Comments
Hi @gopi231091, I got the same result. I want to know your final loss and your result on your test set, if possible. Thanks. @hongsamvo @gopi231091
Besides, if you remember, were there any other tricks? Thank you so much.
@diegolitianfu I did this project a long time ago, so I can't remember whether I solved this issue or not.
I am training YOLO with MobileNetV1 on the VOC07+12 trainval dataset. I loaded MobileNet with ImageNet weights and built the YOLO layers on top of it. I froze MobileNet for the first 50 epochs and trained until the loss stabilized; after that I unfroze all layers to train the network further.
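For reference, the two-stage setup described above (frozen ImageNet backbone, then full fine-tuning) looks roughly like the following. This is a minimal sketch assuming tf.keras; the detection head, anchor count, and loss are placeholders, not the exact layers used in this repo.

```python
# Minimal sketch of the two-stage setup described above (tf.keras assumed).
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_ANCHORS = 3   # anchors per grid cell (placeholder value)
NUM_CLASSES = 20  # Pascal VOC has 20 classes

def build_model(input_shape=(416, 416, 3)):
    # MobileNetV1 backbone initialised with ImageNet weights; the 224x224
    # weights load onto a 416x416 input because the backbone is fully
    # convolutional (Keras only warns about the non-default size).
    base = tf.keras.applications.MobileNet(
        input_shape=input_shape, include_top=False, weights='imagenet')
    base.trainable = False  # stage 1: freeze the backbone
    # Placeholder YOLO-style head: each grid cell predicts
    # (x, y, w, h, objectness, class scores) per anchor.
    x = layers.Conv2D(512, 3, padding='same', activation='relu')(base.output)
    out = layers.Conv2D(NUM_ANCHORS * (5 + NUM_CLASSES), 1)(x)
    return Model(base.input, out)

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='mse')  # loss is a stand-in

# Stage 2: after the frozen stage (~50 epochs), unfreeze everything and
# recompile so the change in trainable weights takes effect.
for layer in model.layers:
    layer.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='mse')
```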
I have trained for 105 epochs so far and am still training. The training loss decreased from 26 to around 12 and the validation loss is around 14; now the losses are no longer decreasing and are fluctuating around 12 and 14 respectively. Because the loss is on a plateau, the learning rate has been reduced from 1e-4 to 1e-8.
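The plateau-driven learning-rate decay mentioned above can be expressed with a standard Keras callback. Only the 1e-4 start and 1e-8 floor come from the post; the factor and patience values below are assumptions.

```python
# Plateau-based LR schedule: start at 1e-4 and decay toward 1e-8 whenever
# val_loss stops improving (factor/patience values are assumptions).
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3,
                      min_lr=1e-8, verbose=1),
    # Optionally stop once val_loss has been flat for a while instead of
    # letting the learning rate shrink to a value too small to matter.
    EarlyStopping(monitor='val_loss', patience=10, verbose=1),
]
```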
I want to know:
Is this normal?
Should I continue training further?
If anyone has trained on the same dataset, what is the optimum loss you could reach? (Each epoch takes me approximately 20 minutes.)
Specifications:
Dataset: VOC 07+12 trainval with a 0.1% validation set
Batch size: 16
Input shape: (416, 416)
Training on: Google Colab GPU
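Putting the specifications above together, a hedged end-to-end training call might look like the sketch below. It reuses `model` and `callbacks` from the earlier sketches, assumes the stated validation figure means a 0.1 fraction of the annotation lines, and `train.txt` / `data_generator` are hypothetical stand-ins for the project's annotation file and batch generator.

```python
# Hypothetical fit call matching the specs above (batch size 16, 416x416
# input, 0.1 of the annotation lines held out for validation).
batch_size = 16
input_shape = (416, 416)
val_split = 0.1

with open('train.txt') as f:  # assumed VOC 07+12 trainval annotation file
    lines = f.readlines()
num_val = int(len(lines) * val_split)
num_train = len(lines) - num_val

model.fit(
    data_generator(lines[:num_train], batch_size, input_shape),  # project's generator (not shown)
    steps_per_epoch=max(1, num_train // batch_size),
    validation_data=data_generator(lines[num_train:], batch_size, input_shape),
    validation_steps=max(1, num_val // batch_size),
    epochs=105,
    callbacks=callbacks,
)
```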