Hello,

I am using nnU-Net for a 3D tumor segmentation task. The lesions can vary greatly in size, from a few voxels to large chunks of the input image. What would you recommend?
Loss: I plan to use the standard Dice + CE loss, but with a higher weight on the foreground classes in the CE term (e.g. 100:1), and with a mask for voxels whose label is uncertain (the `ignore_label` parameter in the loss, plus removing `RemoveLabelTransform` from the transforms).
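For the CE part, the class weighting and the uncertain-voxel mask can both be expressed with standard `torch.nn.functional.cross_entropy` arguments. A minimal sketch (the `-1` ignore value and the 100:1 ratio are assumptions for illustration, not nnU-Net defaults):

```python
import torch
import torch.nn.functional as F

IGNORE_LABEL = -1  # assumed marker for uncertain voxels

def weighted_ce(logits, target, fg_weight=100.0):
    """CE with up-weighted foreground classes and masked uncertain voxels.

    logits: (B, C, D, H, W); target: (B, D, H, W), with IGNORE_LABEL
    wherever the ground-truth label is uncertain.
    """
    n_classes = logits.shape[1]
    weights = torch.ones(n_classes)
    weights[1:] = fg_weight  # background stays at 1, foreground at 100
    return F.cross_entropy(logits, target, weight=weights,
                           ignore_index=IGNORE_LABEL)

logits = torch.randn(2, 3, 8, 8, 8)
target = torch.randint(0, 3, (2, 8, 8, 8))
target[0, 0, 0, 0] = IGNORE_LABEL  # this voxel contributes nothing to the loss
loss = weighted_ce(logits, target)
```

Note that the Dice term needs the same mask applied separately; `ignore_index` only affects the CE.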
Optimization: I plan to benchmark several initial learning rates (1e-2, 1e-3, 1e-4), and I wonder whether I should also try AdamW with the default weight decay.
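Sweeping the candidate learning rates with AdamW is a one-line change per run. A sketch, assuming a stand-in module in place of the actual nnU-Net network:

```python
import torch

# Stand-in for the nnU-Net network; the real trainer builds its own model.
model = torch.nn.Conv3d(in_channels=1, out_channels=2, kernel_size=3)

# Candidate initial learning rates to benchmark
for lr in (1e-2, 1e-3, 1e-4):
    # torch.optim.AdamW's default weight_decay is 0.01
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
```

In nnU-Net this would go into a custom trainer class overriding the optimizer setup, rather than a bare loop.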
Data augmentation: I have no intuition here. I saw there are already some alternatives in the repo (noDA, default, moreDA, insaneDA). Since I have a large dataset, I am not sure how useful heavier augmentation would be.
I will test pixel spacing and model architecture in a second step. For the architecture, I see in Table 3 of this paper that SE norm seems effective and popular. Is there an implementation here?
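For reference, the channel squeeze-and-excitation idea underlying SE norm can be sketched as a small PyTorch module (this is plain SE recalibration after Hu et al.; the "SE norm" variant in the paper combines it with normalization, and nnU-Net does not ship either out of the box):

```python
import torch
import torch.nn as nn

class SEBlock3d(nn.Module):
    """Channel squeeze-and-excitation for 3D feature maps (sketch)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # squeeze: global average pool over the spatial dims (D, H, W)
        s = x.mean(dim=(2, 3, 4))
        # excite: per-channel gates in (0, 1), broadcast back over space
        g = self.fc(s).view(x.shape[0], x.shape[1], 1, 1, 1)
        return x * g

x = torch.randn(1, 16, 4, 4, 4)
y = SEBlock3d(16)(x)  # output has the same shape as the input
```

Dropping such a block after each decoder stage is a common pattern, but wiring it into nnU-Net requires a custom network class.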
Any feedback is welcome :)