Residual Encoders are much slower to train #2296
romainVala started this conversation in General
Hello
First, thanks for this very nice tool!
I recently gave the new presets a try and ran a training with nnUNetResEncUNetLPlans.
Sorry if I missed the information, but I could not find benchmarks for these models.
On the exact same dataset, is it expected that the training takes 4 times longer than the standard 3d_fullres? (I get an epoch time of around 300, whereas it takes only 75 for 3d_fullres.)
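For context, the commands I ran look roughly like this (DATASET_ID is a placeholder; this assumes the standard nnU-Net v2 CLI):

```shell
# Plan and preprocess with the residual-encoder L preset
# (writes nnUNetResEncUNetLPlans alongside the default plans)
nnUNetv2_plan_and_preprocess -d DATASET_ID -pl nnUNetPlannerResEncL

# Train 3d_fullres with the ResEnc(L) plans (fold 0)
nnUNetv2_train DATASET_ID 3d_fullres 0 -p nnUNetResEncUNetLPlans

# Baseline for comparison: same configuration with the default plans
nnUNetv2_train DATASET_ID 3d_fullres 0
```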
I have another (maybe related) question about the model parameters: I was surprised to see very different settings for the encoder and decoder (after running nnUNetPlannerResEncL).
nnUNet found the following parameters for ResEncL,
whereas for 3d_fullres I get something more expected with
Is that expected?
Thanks