test script does not properly normalize denoised movie #21

Open
bantin opened this issue Jan 21, 2025 · 11 comments

@bantin commented Jan 21, 2025

Hi Support team,

I am having an issue where the test script (src/test.py) outputs a denoised movie on a very different scale from the input movie. I suspect this is due to an issue undoing the normalization performed in the dataloader. I have attached a screenshot from FIJI showing the problem. In the screenshot, the top panel is the original movie, the middle panel is the SUPPORT output, and the bottom panel is the residual (original - denoised). You can see that the denoised frame has a much larger scale than the original frame. Can you offer any advice here?

[Screenshot: FIJI view with the original movie (top), SUPPORT output (middle), and residual, original - denoised (bottom)]

@SteveJayH (Member)

Hi bantin, that seems odd; we haven't had any reports about the scale previously.

To better assist you, could you provide the following details?

  1. Did you train the model on your dataset, or are you using a pre-trained model?
  2. Does the scale mismatch occur across all frames, or is it specific to certain ones?
  3. Could you share the parameter settings for src/train.py and src/test.py? It's worth checking if there’s any mismatch between the training and testing phases. Please provide:
  • (1) The training script you used (e.g., python -m src.train --expname XX --noisy_data XX --n_epochs XX --patch_size XX XX XX).
  • (2) The test settings (lines 61 to 65 in [src/test.py](https://github.com/NICALab/SUPPORT/blob/6c24137074bd22f48f9fbf44a1db1a13f3a9d604/src/test.py#L61)).

@bantin (Author) commented Jan 21, 2025

Hi, thanks for the response!

  1. I trained the model on my dataset.
  2. The scale mismatch seems to occur across all frames.
  3. Training command:
    python -m src.train --exp_name cell4_full --noisy_data ../dendrites/data/cell4/cell4_preprocessed.tiff --n_epochs 50 --checkpoint_interval 1

For testing, I modified the testing script to use omegaconf (and to run on only a subset of the frames) but didn't make any other changes. The testing script I used is at this gist.

@SteveJayH (Member)

Thanks. It seems the patch_size differs between training and testing. Could you test by changing the patch size from [61, 64, 64] to [61, 128, 128], and the patch_interval to [1, 64, 64]?
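For reference, a minimal sketch of those settings (around lines 61 to 65 of src/test.py) with the values above; the exact variable names are assumptions, so match them to whatever the script actually defines:

```python
# Sketch of test-time settings matched to training (variable names are assumptions).
patch_size = [61, 128, 128]     # [t, y, x] patch fed to the network, same as training
patch_interval = [1, 64, 64]    # stride used when tiling the movie into patches
```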

@bantin (Author) commented Jan 21, 2025

Hi, you're talking about changing the patch size used in test.py, right? I changed it to the values you suggested, but it didn't fix the problem.

@SteveJayH (Member)

Yes, that's correct. In that case, I'm currently unsure what is causing the scale mismatch.

If you don't mind, please send the raw noisy TIFF file (or a small portion of it if it's too large). I'll check the raw TIFF and test it to see whether this issue occurs on my end. My email is [email protected].

@bantin (Author) commented Jan 22, 2025

Thanks, I'll be happy to send a section of the video later today. In the meantime, I'm trying to do a bit of debugging myself.
In the test script, each patch passed to the network has size [61, 128, 128], corresponding to the patch size. If I run the network on a patch of size [1, 61, 128, 128], I get an output of size [1, 1, 128, 128]. I would have thought the output would be the same size as the input. My best guess is that this is the prediction for one of the middle frames, since the network has a blind spot. Can you explain what's going on here? What are the expected input and output dimensions for the SUPPORT network? I looked through the docs but don't see this.

One thing I've noticed: If I comment out the line that denormalizes:
denoised_stack = denoised_stack * test_dataloader.dataset.std_image.numpy() + test_dataloader.dataset.mean_image.numpy()

then the output looks much closer to the input. Is it possible that the denormalization is being called twice somehow?
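To illustrate that suspicion, here is a minimal NumPy sketch of the normalize/denormalize round trip; only the names mean_image and std_image come from the line quoted above, and everything else is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
movie = rng.normal(loc=500.0, scale=50.0, size=(10, 64, 64)).astype(np.float32)

# Per-pixel normalization, the way a dataloader typically prepares the stack
mean_image = movie.mean(axis=0)
std_image = movie.std(axis=0)
normalized = (movie - mean_image) / std_image

# Undoing the normalization once recovers the original scale ...
denorm_once = normalized * std_image + mean_image
print(np.allclose(denorm_once, movie, atol=1e-2))   # True

# ... but undoing it twice inflates the intensities, which would look exactly like
# a denoised movie on a very different scale from the input
denorm_twice = denorm_once * std_image + mean_image
print(movie.mean(), denorm_twice.mean())             # roughly 500 vs. roughly 25000
```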

@SteveJayH (Member)

Thank you for the detailed exploration.

Our denoising model operates on 3D patches of size [61, 128, 128] as input, but the output is a single 2D denoised frame of size [1, 128, 128]. This is expected behavior due to the model's design.

This is how our model works. The input consists of the front $N$ and back $N$ frames ($30+30$ frames in this example) and the center frame ($1$ frame). The front and back frames are processed using a UNet (without blind spot), while the center frame is processed with a 2D network that incorporates a blind spot at its center. In this design, the blind spot applies only to the center frame. The surrounding frames processed by the UNet do not incorporate a blind spot.

Thus, our model only outputs the center frame, with the assistance of the front and back frames, resulting in a model that processes 3D (x, y, t) information with a blind spot of (1, 1, 1). Put simply, if our model were to output a volume of size [1, 61, 128, 128], every frame except the center frame would contain data identical to the input, with no noise removed, since the model has no blind spot at those locations.
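As a quick sanity check of those dimensions, here is a minimal sketch; the real SUPPORT network is replaced by a stand-in convolution purely to make the input and output shapes concrete:

```python
import torch
import torch.nn as nn

# Stand-in for a loaded SUPPORT network (assumption): any module mapping a
# [batch, 61, y, x] patch to a single denoised center frame [batch, 1, y, x].
model = nn.Conv2d(in_channels=61, out_channels=1, kernel_size=3, padding=1)

patch = torch.randn(1, 61, 128, 128)    # [batch, t, y, x]: 30 front + 1 center + 30 back frames

with torch.no_grad():
    denoised_center = model(patch)

print(patch.shape)             # torch.Size([1, 61, 128, 128])
print(denoised_center.shape)   # torch.Size([1, 1, 128, 128]), only the center frame is predicted
```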


And thank you for your inspection of the denormalization. I'll check that and let you know if I discover any issues.

@bantin (Author) commented Jan 24, 2025

Hi @SteveJayH. I tried the same pipeline on synthetic data and it seemed to work. I probably had a bug in my code -- I will try rerunning my analysis and see if that was the issue.

In the meantime, I do have a different question. My synthetic data is a simulated action potential propagating along a dendritic tree. It is a very small FOV: 176 x 138 pixels. Can you suggest settings for running SUPPORT on this video? I tried using a patch size of [32, 32, 32], but I see some block artifacts when I run the denoising. Can you also tell me how to adjust the patch interval when I adjust the patch size? From a user perspective, it's frustrating that the patch interval does not automatically update when I adjust the patch size.

@SteveJayH (Member)

Hi @bantin, glad to hear it works on synthetic data. I've also trained and tested with the data you sent via email. On my side, there didn't seem to be any brightness issues. While my inspection wasn't thorough, here's how I set it up:

Train

python -m src.train --exp_name benantin --noisy_data /media/user/HDD4_collab/SUPPORT/data/benantin/cell4_preprocessed.tiff --n_epochs 100 --checkpoint_interval 1

The test code is in this gist. For both training and testing, I made no modifications other than specifying the data path.

After inference, I examined both the raw and denoised files in Fiji. I combined the two stacks into a single image (Image > Stacks > Tools > Combine), which puts them on a shared dynamic range, and the result looked normal. If the two images had totally different intensity scales, the combined output would show one image as completely saturated or the other as nearly invisible.

Additionally, the z-profiles of both images appear similar.

[Image: z-profiles of the raw and denoised stacks]


Regarding block artifacts

The block artifacts may be caused by the patch_interval being too close to the patch_size, so that adjacent patches barely overlap. During testing, patches are extracted with a stride of patch_interval, processed, and then stitched back together (see the figure below).

[Figure: patches extracted with stride patch_interval and stitched back into the full frame]

Because our model is a CNN, predictions near the patch boundaries can be less accurate than those at the patch center due to limited contextual information. Note that the model only processes individual patches, so "boundary" and "center" refer to regions within each patch.

To mitigate artifacts, we generally use the "half rule": for a patch_size of 128, set patch_interval to 64; for a patch_size of 64, set patch_interval to 32. This approach works well in most cases.

If you reduce the patch_size to 32 but leave the patch_interval at 32, block artifacts may occur. If artifacts persist even with a patch_interval of 16, you can try reducing it further. Note that a smaller interval increases testing time, as more patches need to be processed.
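If it helps, the "half rule" can be written as a small helper; this is a convenience sketch, not a function from the SUPPORT codebase:

```python
def suggest_patch_interval(patch_size):
    """Half rule: step one frame in time, half the patch size in y and x.

    `patch_size` is [t, y, x]; convenience sketch only, not part of SUPPORT.
    """
    t, y, x = patch_size
    return [1, max(1, y // 2), max(1, x // 2)]

print(suggest_patch_interval([61, 128, 128]))  # [1, 64, 64]
print(suggest_patch_interval([32, 32, 32]))    # [1, 16, 16]
```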


Thanks for checking, and let me know if you have further questions or encounter any other issues.

@bantin (Author) commented Jan 25, 2025

Thanks! That is helpful. And another clarification: I've trained a model with patch size [32, 32, 32] and patch interval [1, 32, 32]. I'd like to reduce the patch interval to [1, 16, 16] as you suggested. In your experience, does that mean I should re-train the network with the new patch interval? Thanks again :)

@SteveJayH (Member)

No, I think just changing the parameter in the evaluation code (32 -> 16) without re-training will be sufficient.
