Using validation as test for leave one out methodology #2352
noambitton started this conversation in General
Hello,
We are currently training with nnUNet, but with a slight modification of the typical 5-fold ensemble approach: we are exploring the leave-one-out (LOO) methodology to identify outliers in our dataset. Specifically, we have 101 patients with medical images, and our objective is to designate one patient as the test set while using the remaining 100 patients as the training set for each fold.
We have a query regarding the validation set: Can we configure it such that only one patient is designated as the validation set in each iteration, while the remaining 100 patients are used for training? Essentially, we are considering replacing the traditional test set with the validation set for this purpose.
Our main concern is whether using the validation set in this manner would yield comparable results to using a separate test set. Does the validation set play a role in training, or can it be effectively repurposed for these outlier detection purposes?
Additionally, if feasible, could we utilize the splits_final.json file to define the specific fold (patient) designated for validation in each iteration?
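For reference, here is a minimal sketch of how such a LOO splits_final.json could be generated, assuming nnU-Net's format of a JSON list where entry k defines fold k as a dict with "train" and "val" case-identifier lists. The case names below are hypothetical placeholders; they would need to match the actual case identifiers in the dataset.

```python
import json

# Hypothetical case identifiers for the 101 patients; replace with
# the real case names used in your nnU-Net dataset.
cases = [f"patient_{i:03d}" for i in range(101)]

# One split per patient: the held-out patient is the validation set,
# and all remaining patients form the training set (leave-one-out).
splits = [
    {"train": [c for c in cases if c != held_out], "val": [held_out]}
    for held_out in cases
]

with open("splits_final.json", "w") as f:
    json.dump(splits, f, indent=2)
```

With this file in place, running fold k would train on 100 patients and validate on the single held-out patient at index k, giving 101 folds in total.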
Thank you for your insights!
More explanation of LOO: Bachmann, G., et al. (2022). "Generalization through the lens of leave-one-out error." https://arxiv.org/abs/2203.03443