Heavy Augmentations + More Epochs #119
-
Hello! Thanks!
Replies: 4 comments 4 replies
-
@alexriedel1, increasing the number of epochs would help for cflow and stfpm. You might want to either remove early stopping or increase its patience. The other algorithms don't require CNN training and only perform feature extraction, which is why their number of epochs is set to 1. For the augmentations, I remember that @blakshma ran some experiments to see how augmentations impact performance. Maybe he could comment. @blakshma? Also, @alexriedel1, would you be alright with moving this to Discussions, which we recently enabled?
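To make the patience suggestion concrete, here is a minimal, self-contained sketch of how early-stopping patience changes how long training runs. The loss curve and function are made up for illustration; they are not the anomalib or PyTorch Lightning API, but the stopping rule is the same idea as increasing `patience` in the config.

```python
# Minimal sketch of early stopping with configurable patience.
# A larger patience tolerates a longer plateau before stopping,
# which lets models like cflow/stfpm train for more epochs.

def train_with_early_stopping(losses, patience):
    """Stop once the loss has not improved for `patience` consecutive
    epochs. Returns the number of epochs actually run."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            return epoch
    return len(losses)

# A made-up noisy loss curve: improvement stalls between epochs 4 and 5,
# then resumes at epoch 6.
curve = [1.0, 0.8, 0.7, 0.71, 0.72, 0.69, 0.5, 0.4]

print(train_with_early_stopping(curve, patience=2))  # stops at epoch 5
print(train_with_early_stopping(curve, patience=4))  # survives the plateau, runs all 8
```

With a small patience the run stops during the temporary plateau; with a larger one it survives the plateau and reaches the later, better losses.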
-
@samet-akcay For the feature-extraction methods, I think expanding the dataset with augmented samples will have the same effect as training for multiple epochs.
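A toy sketch of that equivalence, with hypothetical names (not the anomalib API): one pass over k augmented copies of the data shows the model the same number of samples as k epochs over the original set, except each copy is varied by the augmentation.

```python
import random

def augment(sample, rng):
    """Toy augmentation: additive Gaussian jitter on a feature vector."""
    return [x + rng.gauss(0.0, 0.01) for x in sample]

def expanded_dataset(dataset, k, seed=0):
    """Return the original samples plus k-1 augmented copies,
    so a single epoch covers as many samples as k plain epochs."""
    rng = random.Random(seed)
    out = list(dataset)
    for _ in range(k - 1):
        out.extend(augment(s, rng) for s in dataset)
    return out

data = [[0.1, 0.2], [0.3, 0.4]]
print(len(expanded_dataset(data, k=3)))  # 6 samples seen in one epoch
```

The difference from plain multi-epoch training is that the extra passes are over perturbed samples rather than exact repeats, which is exactly what makes it useful for the single-epoch feature-extraction models.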
-
@alexriedel1 we did run some experiments with augmentation. For the MVTec AD dataset, augmentation policies need to be carefully crafted for each category. For example, rotation deteriorates performance for the transistor category but improves it for the hazelnut category. Overall, the performance improvement we get from heavy augmentation is very small for the MVTec case. However, on a more complex dataset, augmentation is warranted and can provide a significant performance improvement.
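One way to encode that per-category crafting is a policy table with a conservative fallback. This is a hypothetical sketch, with made-up transform names, not the anomalib config format; it only illustrates the rotation-hurts-transistor, rotation-helps-hazelnut observation above.

```python
# Hypothetical per-category augmentation policies for MVTec AD.
# Transform names are illustrative placeholders, not a real API.

BASE_POLICY = ["horizontal_flip", "color_jitter"]

CATEGORY_POLICIES = {
    "transistor": BASE_POLICY,                  # no rotation: orientation matters
    "hazelnut": BASE_POLICY + ["rotation_90"],  # rotation-invariant object
}

def policy_for(category):
    """Use the category-specific policy if one was crafted,
    otherwise fall back to the conservative base policy."""
    return CATEGORY_POLICIES.get(category, BASE_POLICY)

print(policy_for("transistor"))  # no rotation
print(policy_for("hazelnut"))    # includes rotation_90
print(policy_for("bottle"))      # falls back to BASE_POLICY
```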
-
Slightly off topic, so I apologize beforehand if I should have started another thread, but did anyone experience GPU memory issues when using augmentations? I am just trying to add some Gaussian noise to my images, and CUDA runs out of memory (I dropped the batch size all the way down to 1 and the error still occurs). Without augmentations, I have no issues running training sessions with larger batch sizes.