test script does not properly normalize denoised movie #21
Comments
Hi @bantin, that seems odd, since there haven't been any reports of scale issues before. To better assist you, could you provide some more details about your setup?
Hi, thanks for the response!
For testing, I modified the test script to use omegaconf (and to run on only a subset of the frames), but didn't make any other changes. The test script I used is in this gist.
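For reference, here is a minimal sketch of the kind of modification described above (loading settings with OmegaConf and slicing off a subset of frames). The config path and key names are assumptions for illustration, not the repository's actual interface:

```python
# Minimal sketch; config path and keys ("noisy_data", "max_frames") are hypothetical.
import tifffile
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/test.yaml")       # hypothetical config file
cfg.merge_with(OmegaConf.from_cli())            # allow CLI overrides, e.g. noisy_data=...

movie = tifffile.imread(cfg.noisy_data)         # (T, H, W) raw noisy movie
movie = movie[: cfg.get("max_frames", 500)]     # run on only a subset of frames
print(movie.shape, movie.dtype)
```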
Thanks. It seems that the patch size used in test.py might be the issue; could you try changing it?
Hi, you're talking about changing the patch size used in test.py, right? I changed it to the values you suggested, but that didn't fix the problem.
Yes, that's correct. If so, I'm currently unsure what is causing the scale mismatch. If you don't mind, please send the raw noisy TIFF file (or a small portion of it if it's too large). I'll check the raw TIFF and run the test to see whether this issue occurs on my end. My email is [email protected].
Thanks, I'll be happy to send a section of the video later today. In the meantime, I'm trying to do a bit of debugging myself. One thing I've noticed: if I comment out the line that denormalizes the output, the result looks much closer to the input. Is it possible that the denormalization is being called twice somehow?
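To illustrate that hypothesis, here is a toy sketch (plain NumPy, names hypothetical, not the repository's actual code) of mean/std normalization in a dataloader and denormalization at test time. Applying the denormalization a second time multiplies the signal by the standard deviation again, which would produce exactly the kind of scale blow-up shown in the screenshot:

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.normal(loc=100.0, scale=20.0, size=(64, 64)).astype(np.float32)

mean, std = frame.mean(), frame.std()
normalized = (frame - mean) / std          # what a dataloader typically does
denoised = normalized                      # stand-in for the network output

once = denoised * std + mean               # correct denormalization
twice = once * std + mean                  # accidental second denormalization

print(frame.mean(), once.mean(), twice.mean())   # ~100, ~100, ~2100: a large scale mismatch
```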
Thank you for the detailed exploration. Our denoising model operates on 3D (x, y, t) patches. The input consists of the frames in front of and behind the target frame, and the model outputs only the center frame, with the assistance of those front and back frames. This results in a model that processes 3D (xyt) information with a blind spot at the center. And thank you for your inspection of the denormalization; I'll check it and let you know if I discover any issues.
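As a rough illustration of that input/output arrangement (the window size and function name below are assumptions for the sketch, not SUPPORT's actual values):

```python
import numpy as np

def make_input_window(movie, t, half_window=5):
    """Stack front and back frames around frame t; the center frame is the prediction target."""
    return movie[t - half_window : t + half_window + 1]   # shape: (2*half_window + 1, H, W)

movie = np.zeros((100, 128, 128), dtype=np.float32)        # placeholder noisy movie
window = make_input_window(movie, t=50)
center = window[window.shape[0] // 2]                      # the single frame the model predicts
print(window.shape, center.shape)                          # (11, 128, 128) (128, 128)
```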
Hi @SteveJayH. I tried the same pipeline on synthetic data and it seemed to work, so I probably had a bug in my code. I will rerun my analysis and see if that was the issue. In the meantime, I have a different question. My synthetic data is a simulated action potential propagating along a dendritic tree, with a very small FOV: 176 x 138 pixels. Can you suggest settings for running SUPPORT on this video? I tried a patch size of [32 x 32 x 32], but I see some block artifacts when I run the denoising. Can you also tell me how to adjust the patch interval when I adjust the patch size? From a user perspective, it's frustrating that the patch interval does not automatically update when the patch size changes.
Hi @bantin, glad to hear it works on synthetic data. I've also trained and tested with the data you sent via email, and from my side there didn't seem to be any brightness issues. While my inspection wasn't thorough, here's how I set it up.
The test code is in this gist. For both training and testing, I made no modifications other than specifying the data path. After inference, I examined both the raw and denoised files in Fiji. I combined (Image - Stacks - Tools - Combine) the two images into a single image; this combine tool gives them a shared dynamic range, and the result looked normal. If the two images had totally different intensity scales, the combined output would show one image as completely saturated or the other as nearly invisible. Additionally, the z-profiles of both images appear similar.

Regarding block artifacts: these may be caused by the overlap interval being too close to the patch size. During testing, patches are created with a stride given by the patch interval. Because our model is a CNN, predictions near the patch boundaries can be less accurate than at the patch center due to limited contextual information. Note that the model only processes individual patches, so "boundary" and "center" here refer to regions within each patch. To mitigate artifacts, we generally use the "half rule": set the spatial patch interval to about half the spatial patch size, so that neighboring patches overlap; for a [32, 32, 32] patch, this means an interval of [1, 16, 16]. If you reduce the patch interval accordingly, the block artifacts should be reduced.

Thanks for checking, and let me know if you have further questions or encounter any other issues.
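A small sketch of the half rule described above (pure NumPy; the stitching by averaging overlaps is only illustrative, not the repository's actual implementation). With a stride of half the patch size, most output pixels are covered by several overlapping patches, which suppresses boundary artifacts:

```python
import numpy as np

patch_size = [32, 32, 32]                                  # (t, y, x), as in the discussion above
patch_interval = [1] + [s // 2 for s in patch_size[1:]]    # "half rule" -> [1, 16, 16]

H, W = 138, 176                                            # the small FOV mentioned above
accum = np.zeros((H, W), dtype=np.float32)                 # sum of overlapping patch outputs
count = np.zeros((H, W), dtype=np.float32)                 # how many patches covered each pixel

for y in range(0, H - patch_size[1] + 1, patch_interval[1]):
    for x in range(0, W - patch_size[2] + 1, patch_interval[2]):
        patch_out = np.ones((patch_size[1], patch_size[2]), dtype=np.float32)  # stand-in output
        accum[y : y + patch_size[1], x : x + patch_size[2]] += patch_out
        count[y : y + patch_size[1], x : x + patch_size[2]] += 1

stitched = accum / np.maximum(count, 1)                    # average where patches overlap
print(patch_interval, stitched.shape)
```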
Thanks! That is helpful. One more clarification: I've trained a model with patch size [32 32 32] and patch interval [1 32 32]. I'd like to reduce the patch interval to [1 16 16] as you suggested. In your experience, does that mean I should re-train the network with the new patch interval? Thanks again :)
No, I think just changing the parameter in the evaluation code (32 -> 16) without re-training will be sufficient.
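In other words, only the test-time settings change while the trained weights stay the same (a hypothetical settings fragment; the names are illustrative, not the script's actual flags):

```python
# Hypothetical evaluation settings; names are illustrative, not the test script's actual flags.
patch_size = [32, 32, 32]      # must match what the network was trained with
patch_interval = [1, 16, 16]   # test-time stride only; no re-training needed
```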
Hi Support team,
I am having an issue where the test script (src/test.py) outputs a denoised movie with a very different intensity scale from the input movie. I suspect this is due to an issue undoing the normalization performed in the dataloader. I have attached a screenshot from Fiji showing the problem: the top panel is the original movie, the middle panel is the SUPPORT output, and the bottom panel is the residual (original - denoised). You can see that the denoised frame has a much larger scale than the original frame. Can you offer any advice here?
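For anyone reproducing this, a quick way to quantify the mismatch outside Fiji (a minimal sketch; the file names are placeholders):

```python
import numpy as np
import tifffile

raw = tifffile.imread("noisy_movie.tif").astype(np.float32)          # placeholder path
denoised = tifffile.imread("denoised_movie.tif").astype(np.float32)  # placeholder path

for name, arr in [("raw", raw), ("denoised", denoised)]:
    print(f"{name}: min={arr.min():.1f} max={arr.max():.1f} "
          f"mean={arr.mean():.1f} std={arr.std():.1f}")

# If the denoised statistics are orders of magnitude larger than the raw ones,
# the denormalization step is the likely suspect.
```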