I'm implementing MALA's network in this pipeline. It saves memory by using convolutions without padding, and can therefore afford a larger input size during training (for example, [64, 268, 268] with batch size 4 on a single GPU).
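For reference, here is a minimal sketch of how unpadded ("valid") convolutions shrink the volume through the network; the 3x3x3 kernel and the number of convolutions are my own illustrative assumptions, not the exact MALA architecture:

```python
# Minimal sketch: output shape after a stack of unpadded ("valid") 3-D
# convolutions. The 3x3x3 kernel and the number of convolutions are
# illustrative assumptions, not the exact MALA architecture.
def valid_conv_output(shape, kernel=(3, 3, 3), num_convs=2):
    out = list(shape)
    for _ in range(num_convs):
        out = [s - (k - 1) for s, k in zip(out, kernel)]
    return out

print(valid_conv_output([64, 268, 268]))  # -> [60, 264, 264]
```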
However, with this input size the data-loading time became prohibitive: about 90% of the training time is spent on data loading. I found that this is caused by SMOOTH, the post-processing applied to the labels after augmentation.
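To make the 90% figure concrete, this is roughly how I separated loader time from compute time; `train_loader` is a placeholder for whatever loader the pipeline actually builds (an assumption on my side):

```python
# Rough sketch: time DataLoader iteration separately from the training step.
# `train_loader` stands in for the loader built by the pipeline.
import time

def profile_loader(train_loader, num_batches=50):
    load_time, compute_time = 0.0, 0.0
    end = time.time()
    for i, batch in enumerate(train_loader):
        load_time += time.time() - end        # time spent waiting for data
        start = time.time()
        # ... forward/backward pass would go here ...
        compute_time += time.time() - start
        end = time.time()
        if i + 1 >= num_batches:
            break
    total = load_time + compute_time
    print(f"data loading: {load_time:.1f}s "
          f"({100 * load_time / max(total, 1e-9):.0f}%), "
          f"compute: {compute_time:.1f}s")
```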
I wonder if you are aware of this. Would discarding SMOOTH affect training much?
Merry Christmas :)
Hi @Levishery, thanks for reporting the performance issue! We will investigate it and get back to you. Basically, without AUGMENTOR.SMOOTH, the object masks produced by nearest-neighbor interpolation will have very coarse boundaries.
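Roughly, the smoothing step behaves like the sketch below (illustrative only, not the exact code in this repo): each object's binary mask is Gaussian-filtered and re-thresholded, which rounds off the blocky boundaries left by nearest-neighbor interpolation. One full 3-D filter pass per object is also why it can dominate data-loading time on a [64, 268, 268] crop.

```python
# Illustrative sketch only (not the exact pytorch_connectomics implementation):
# smooth each object's binary mask and re-threshold it to round off blocky
# boundaries left by nearest-neighbor interpolation.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_label_boundaries(label: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    smoothed = np.zeros_like(label)
    for obj_id in np.unique(label):
        if obj_id == 0:                        # 0 is assumed to be background
            continue
        mask = (label == obj_id).astype(np.float32)
        mask = gaussian_filter(mask, sigma=sigma)   # one 3-D filter pass per object
        smoothed[mask > 0.5] = obj_id          # later objects win in (rare) overlaps
    return smoothed
```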
Thank you very much for your contributions! :)