Many false positives at the volume boundary #104

Open
JaekwangChung opened this issue Sep 8, 2022 · 2 comments

@JaekwangChung

Describe the bug
I evaluated the BTCV dataset using the SwinUNETR code below and the pre-trained 'Swin UNETR/Base' model (swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt):
https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV

I can see many false positives at the boundary of the volume with some BTCV models (see the screenshots below).

This seems to be a problem similar to the following issue:
#93

How can I remove these false positives?

To Reproduce

  1. Go to https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV

  2. Install the dependencies:
    conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
    pip install git+https://github.com/Project-MONAI/MONAI.git@07de215c
    pip install nibabel==3.1.1
    pip install tqdm==4.59.0
    pip install einops==0.4.1
    pip install tensorboardX==2.1
    pip install scipy

  3. Run:
    test.py --json_list=dataset_0.json --data_dir=../dataset/Abdomen2 --feature_size=48 --infer_overlap=0.7 --workers=8 --sw_batch_size=2 --pretrained_model_name=swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt --exp_name=result
    ==> I added a sliding-window batch size option (sw_batch_size) and set it to 2 because of a GPU memory shortage; see the sketch below.
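
For reference, a minimal sketch of how these flags map onto MONAI's sliding_window_inference. The 96x96x96 ROI, the 14 BTCV output labels, and the placeholder input tensor are assumptions based on the BTCV recipe, not taken from test.py, and loading of the pretrained weights is omitted:

    import torch
    from monai.inferers import sliding_window_inference
    from monai.networks.nets import SwinUNETR

    # Assumed settings: 96^3 ROI and 14 BTCV labels, as in the SwinUNETR BTCV recipe.
    model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=14, feature_size=48)
    model.eval()

    # Placeholder input; in test.py this comes from the test data loader.
    image = torch.randn(1, 1, 128, 128, 128)

    with torch.no_grad():
        logits = sliding_window_inference(
            inputs=image,
            roi_size=(96, 96, 96),
            sw_batch_size=2,   # corresponds to --sw_batch_size=2
            predictor=model,
            overlap=0.7,       # corresponds to --infer_overlap=0.7
        )
        prediction = torch.argmax(logits, dim=1)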

Expected behavior
There should be no false positives, or they should be kept to a minimum.

Screenshots
Inference on case img0035.nii.gz
Mean Organ Dice: 0.771858516264921
Inference on case img0036.nii.gz
Mean Organ Dice: 0.8404263705815952
Inference on case img0037.nii.gz
Mean Organ Dice: 0.8325490601052332
Inference on case img0038.nii.gz
Mean Organ Dice: 0.7960240923146462
Inference on case img0039.nii.gz
Mean Organ Dice: 0.8385704222867443
Inference on case img0040.nii.gz
Mean Organ Dice: 0.822130831278294
Overall Mean Dice: 0.816926548805239

< Inference from img0035.nii.gz >

< Inference from img0039.nii.gz >

< Inference from img0040.nii.gz >

@charlesmoatti

I have exactly the same issue. I trained my own Swin-UNETR on my brain MRI data with 7 tissue classes (keeping the same MONAI data transforms as in the BTCV/BRATS case), and in the segmentation outputs at test time I get these odd out-of-skull predictions. The prediction itself looks good, but the out-of-skull artifacts make it very poor when evaluating metrics like the Dice score.

See pictures:

< Inference on a brain MRI >

< Inference on another brain MRI >
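
One generic post-processing idea for this kind of out-of-skull or boundary artifact is to keep only the largest connected component of each foreground class before computing metrics. A minimal sketch using MONAI's KeepLargestConnectedComponent; the label map and the 1..7 class indices are placeholder assumptions:

    import torch
    from monai.transforms import KeepLargestConnectedComponent

    # Placeholder channel-first label map of shape (1, H, W, D) with integer classes,
    # assuming 0 = background and 1..7 = the seven tissue classes.
    pred_label = torch.randint(0, 8, (1, 96, 96, 96))

    # Keep only the largest connected component of each foreground class, so small
    # disconnected islands outside the skull are discarded.
    postprocess = KeepLargestConnectedComponent(applied_labels=list(range(1, 8)), independent=True)
    cleaned = postprocess(pred_label)

Whether this is appropriate depends on the anatomy: classes that legitimately consist of several components should be left out of applied_labels.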

@emi-dm

emi-dm commented Jul 4, 2024

Hey @JaekwangChung and @charlesmoatti! Try setting the "ignore_empty" parameter to False in DiceMetric!
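
For example, a minimal sketch of that suggestion (the one-hot tensors below are placeholders; in practice they come from the post-processed predictions and labels):

    import torch
    from monai.metrics import DiceMetric

    # With ignore_empty=False, a class that is missing from the ground truth is no longer
    # skipped: it scores 1 if it is also predicted empty, and false positives for that
    # class count against the score.
    dice_metric = DiceMetric(include_background=False, reduction="mean", ignore_empty=False)

    # Placeholder one-hot-style tensors of shape (batch, classes, H, W, D).
    y_pred = torch.randint(0, 2, (1, 14, 96, 96, 96)).float()
    y_true = torch.randint(0, 2, (1, 14, 96, 96, 96)).float()

    dice_metric(y_pred=y_pred, y=y_true)
    score = dice_metric.aggregate().item()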
