Does model training exist in an additive manner #355
Comments
I don't fully understand the meaning of your question. I also use this option for training each noise level; see waifu2x/appendix/train_cunet_art.sh, lines 10 to 37 at d5171bc.
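For reference, such a script is essentially a loop that runs train.lua once per noise level. A minimal sketch (not the actual contents of train_cunet_art.sh; the only flags used here are the ones that appear elsewhere in this thread, and the paths are placeholders):

```bash
#!/bin/bash
# Hedged sketch of a per-noise-level training loop.
# Paths and the exact set of flags are assumptions; the real
# appendix/train_cunet_art.sh may use additional options.
set -e

MODEL_DIR=models/my_model      # hypothetical output directory
TEST_IMAGE=images/test.png     # hypothetical validation image

for level in 0 1 2 3; do
  th train.lua \
    -model_dir "$MODEL_DIR" \
    -method noise \
    -noise_level "$level" \
    -test "$TEST_IMAGE"
done
```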
Can you explain to me the meaning of "save_history"? I haven't seen a detailed parameter document for train.lua. If the "save_history" parameter is not used, does the learning rate start over when I train this model again?
The learning rate starts from the default value (0.00025) unless you specify the -learning_rate option.
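For example, when restarting training you would pass the last learning rate explicitly so it does not fall back to the default (a sketch; the numeric value is only illustrative):

```bash
# First run: the learning rate starts at the default 0.00025
th train.lua -model_dir models/my_model -method noise -noise_level 2

# Restarted run: pass the learning rate explicitly so it does not reset
# to the default (0.0001 is an arbitrary example value)
th train.lua -model_dir models/my_model -method noise -noise_level 2 \
  -learning_rate 0.0001
```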
This is the output of each round of training:

update [======================================= 134528/134528 ===============================>] Tot: 19m2s | Step: 8ms
validation [======================================= 21120/21120 =================================>] Tot: 1m37s | Step: 4ms

Does the "best" field represent the latest learning rate?
No. It is the lowest MSE (mean squared error).
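For context, the MSE here is the average squared error between the network output and the reference image on the validation set, and "best" is the smallest such value seen so far across epochs:

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{x}_i\right)^2$$

where $x_i$ is a reference pixel, $\hat{x}_i$ is the corresponding network output, and $N$ is the number of pixels; lower is better.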
I didn't see the "learning rate" field. This is the output from the beginning of the script:

th train.lua -model_dir models/my_model -method noise -noise_level 2 -learning_rate 0.00029588596682772 -test images/3688-gigapixel-scale-1_50x.tif

0 small images are removed
make validation-set
load .. 2102
[============================= 100/110 =============================>........] ETA: 1s81ms | Step: 108ms
# 1
resampling [======================================= 2102/2102 ===================================>] Tot: 48s274ms | Step: 23ms
update [======================================= 75969/134528 .................................] ETA: 8m29s | Step: 8ms
validation [======================================= 21120/21120 =================================>] Tot: 1m37s | Step: 4ms
update [======================================= 134528/134528 ===============================>] Tot: 20m17s | Step: 9ms
validation [======================================= 21120/21120 =================================>] Tot: 1m24s | Step: 3ms
update [================>...................... 27969/134528 .................................] ETA: 14m3s | Step: 7ms
Sorry, I think I misunderstood. I thought that each "validation" represented one round, but it seems the next round is really long.
The learning rate is shown after # 2.
Thank you very much for your kind help; I have learned a lot.
For the same model: if some additional images are added, a new image list is generated, and the training image data is regenerated from it, does the model remember the image data from the previous training?
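Concretely, the incremental workflow being asked about might look like this (a sketch under assumptions: the data-generation script name, the image-list path, the model file name, and the existence of a resume-style option in train.lua are not confirmed in this thread; check th train.lua -h):

```bash
# 1. Rebuild the image list after adding new training images
#    (paths are placeholders)
find /path/to/images -name "*.png" > data/image_list.txt

# 2. Regenerate the packed training data from the new list
#    (convert_data.lua is assumed to be the data-generation script)
th convert_data.lua

# 3. Continue training. Whether the previous weights are reused depends on
#    how train.lua initializes the model; if it supports a resume-style
#    option, point it at the last saved model file, e.g.:
th train.lua -model_dir models/my_model -method noise -noise_level 2 \
  -resume models/my_model/noise2_model.t7
```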