NaN in nifti files breaks inference #2191
Replies: 5 comments
-
Interesting. I guess it's tricky: the more checks we do, the slower we'll be and the more we'll pigeonhole users into a specific way of using MONAI. On the flip side, those sorts of warnings are useful for debugging. Was the whole of your image NaN when only a few of the input voxels were NaN? Perhaps you figured it out already, but you could create a Lambda transform that runs np.nan_to_num for you, e.g.:

```python
transforms = Compose([
    ...,
    Lambdad(["image", "label"], np.nan_to_num),
    ...
])
```
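A plain-NumPy illustration of what that Lambdad call does to an image array containing NaN voxels (np.nan_to_num is the real NumPy function; the array here is just toy data):

```python
import numpy as np

# Toy "image" with a single NaN voxel, like the corrupted nifti files described.
image = np.array([[0.5, np.nan],
                  [1.2, 0.8]], dtype=np.float32)

cleaned = np.nan_to_num(image)  # NaN voxels become 0.0

print(cleaned)
print(np.isnan(cleaned).any())  # no NaNs remain
```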
-
Only a few voxels inside the input images were NaN. If you give the images a quick glance with ITK-SNAP or similar tools, you don't notice anything, yet this caused my whole sliding window inference to output only empty predictions. This has the potential to completely ruin challenge submissions if you don't double-check what you submit. Sure, if one knows about the presence of the NaNs, they can be mitigated easily. My suggestion would be to include a NaN check and throw at least a warning. Those who are worried about computation time could disable the check with a flag?
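A minimal sketch of the opt-out check being suggested here (the helper name `check_finite` and the `enabled` flag are hypothetical, not existing MONAI API):

```python
import warnings

import numpy as np


def check_finite(array, enabled=True):
    """Warn if the array contains NaN or inf voxels.

    ``enabled`` is the suggested opt-out flag, so users worried about the
    extra O(n) scan over the volume can skip the check entirely.
    """
    if enabled and not np.isfinite(array).all():
        warnings.warn(
            "Input contains NaN/inf voxels; inference may produce empty output."
        )
    return array
```

The check is a single `np.isfinite(...).all()` pass, which is cheap compared to a forward pass but not free on large volumes, hence the flag.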
-
Hmm, thinking more about it... I am a fan of leaving data untouched. So probably the sliding window inferer should become robust to NaNs, or at least throw warnings?
-
I don't think this is the job of the inferer, which should simply be inferring. The problem here is the pre-processing of your data, which should set any NaNs to 0 prior to running it through your network. If we start trying to handle this in MONAI, we'll have to handle all edge cases: what do we do for positive and negative infinity, etc.? It seems to me that this isn't something MONAI should be handling, so I still think that using a Lambda transform is the best solution in your case.
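Worth noting in passing: np.nan_to_num already covers the infinity edge cases mentioned above. By default it maps NaN to 0 and clips +inf/-inf to the largest and lowest finite floats, and since NumPy 1.17 the replacement values are configurable:

```python
import numpy as np

arr = np.array([np.nan, np.inf, -np.inf, 1.0])

# Defaults: NaN -> 0.0, +inf -> largest finite float, -inf -> lowest finite float.
print(np.nan_to_num(arr))

# Replacements can also be chosen explicitly (NumPy >= 1.17).
print(np.nan_to_num(arr, nan=0.0, posinf=0.0, neginf=0.0))
```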
-
I understand your argument. In my case I am taking part in a small non-public challenge; the test set features these NaN voxels, which were not present in the training and validation datasets. I was lucky that the challenge organizer asked me why most of my segmentations were empty. Maybe a good compromise would be for the inferer to throw a warning if the returned outputs are empty? In many cases this would be a strong indicator that something is off.
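A sketch of the compromise proposed here: a post-inference check that warns on an all-zero segmentation (the helper name `warn_if_empty` is hypothetical, not existing MONAI API):

```python
import warnings

import numpy as np


def warn_if_empty(prediction):
    """Warn if a predicted segmentation contains no foreground voxels.

    An all-zero output is often a sign that something went wrong upstream,
    e.g. NaN voxels in the input propagating through the network.
    """
    if not np.any(prediction):
        warnings.warn(
            "Predicted segmentation is empty; check inputs for NaN voxels."
        )
    return prediction
```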
-
Is your feature request related to a problem? Please describe.
I just ran inference on a dataset; some of the nifti files contain NaN. My resulting segmentations were empty due to this.

Describe the solution you'd like
MONAI should run np.nan_to_num on the array in the nifti loader, or at least throw a warning.

Describe alternatives you've considered
A warning, or np.nan_to_num.

Thanks :)