
Asymmetric loss of frames beginning/end #17

Open
lschm opened this issue Jun 19, 2024 · 9 comments

Comments

lschm commented Jun 19, 2024

Hi,

Your FAQ states that the first and last N frames of each time series are removed during inference. However, when I closely compare the raw and processed data, it seems that the first N+1 frames are actually removed and only the last N-1.

Best wishes,
Lena

@EOMMINHO (Member)

Hi Lena,

Could you tell me which version (GUI or command line) of our code you used?
If you could also tell me the parameters you used, that would help us solve the problem.

Best,
Minho

@lschm (Author) commented Jun 19, 2024

Hi,

I used the command line, here is the call I used for training the model (for parameters):

python -m src.train --exp_name xx --noisy_data xx --is_folder --results_dir xx --patch_size 61 22 22 --bs_size 3 3 --bp --n_epochs 40 --logging_interval_batch 5000

Best,
Lena

@EOMMINHO (Member)

It seems like the --bp mode was the problem.
I changed the test.py on GitHub.
Could you download the updated test.py and change bp_mode to True?

@lschm (Author) commented Jun 19, 2024

I had already modified test.py to

model = SUPPORT(in_channels=61, mid_channels=[16, 32, 64, 128, 256], bp=True, depth=5,
                blind_conv_channels=64, one_by_one_channels=[32, 16],
                last_layer_channels=[64, 32, 16], bs_size=bs_size).cuda()

so the change you made to the code doesn't have any effect, as far as I can see.

@EOMMINHO (Member) commented Jun 19, 2024

Oh, I see. Could you walk me through how you checked that the first N+1 and last N-1 frames were removed?

@lschm (Author) commented Jun 19, 2024

I just compared signal onsets and movements between the raw and denoised images. I didn't do any comprehensive testing with synthetic data, but the offset was consistent across two different models, each trained on and applied to different data.
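One way to make this kind of check systematic is to slide the denoised stack's mean-intensity trace along the raw one and take the best-matching lag as the number of frames cropped from the start. This is a minimal sketch; the function name and the correlation approach are mine, not part of SUPPORT:

```python
import numpy as np

def estimate_frame_offset(raw, denoised):
    """Estimate how many frames were cropped from the START of `raw`
    to produce `denoised` (both shaped (T, H, W)) by sliding the
    denoised mean-intensity trace along the raw one and picking the
    lag with the highest correlation."""
    raw_trace = raw.mean(axis=(1, 2))
    den_trace = denoised.mean(axis=(1, 2))
    best_lag, best_corr = 0, -np.inf
    for lag in range(len(raw_trace) - len(den_trace) + 1):
        window = raw_trace[lag:lag + len(den_trace)]
        corr = np.corrcoef(window, den_trace)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag
```

If cropping were symmetric, the estimated lag would equal N//2; a lag of N//2 + 1 would confirm the off-by-one described above.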

@trose-neuro commented Jan 16, 2025

Hi! Great tool! It would also be great for us to understand precisely which frames are cropped so we can do post hoc padding. Also, do I understand correctly that you suggest bypassing the blind-spot network entirely by default (that's my understanding of what bp=True does)? On a different subject: any tips on how to optimize the blind-spot size?

@SteveJayH (Member) commented Jan 21, 2025

@trose-neuro Hi! Thank you for your interest. Basically, the bp option skips part of our model, so we do not recommend using it (and it is not currently set as the default in the code). The option existed for experimental purposes and was mistakenly set as the default at that time. Turning it on means not using our full model and leads to worse performance.


Regarding cropped frames, our SUPPORT model processes a total of N frames and produces the denoised center frame. As a result, the first N//2 frames and the last N//2 frames are discarded because there are not enough neighboring frames to process those regions. You can see this at line 82 of src/test.py, where the first and last (model.in_channels-1)//2 frames are removed before saving.
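As a sketch of that slicing (the names here are mine; this mirrors, rather than copies, what src/test.py reportedly does):

```python
import numpy as np

def crop_edges(denoised_stack, in_channels=61):
    """Drop the leading and trailing frames that lack a full temporal
    window, mirroring the slicing described for src/test.py."""
    n = (in_channels - 1) // 2  # 30 for the default in_channels=61
    return denoised_stack[n:-n]

stack = np.zeros((200, 8, 8))     # stand-in for a 200-frame output stack
print(crop_edges(stack).shape[0])  # 140, i.e. 200 - 2*30
```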

For example, with the default in_channels value of 61, the model uses the 30 frames before, the center frame, and the 30 frames after to produce each denoised center frame. Consequently, the first 30 and last 30 frames of the output stack would be empty, and they are removed in the current code.

If you do not want to discard these frames and wish to save tif stacks with the same number of frames as the input, you can simply remove that slicing as follows: skio.imsave(output_file, denoised_stack, metadata={'axes': 'TYX'})
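For anyone who instead wants the post hoc padding asked about above, a minimal sketch; the function name is mine, and edge replication is one arbitrary choice (zeros or NaNs would work just as well):

```python
import numpy as np

def pad_to_input_length(denoised, n_front, n_back):
    """Pad a cropped denoised stack back to the input frame count by
    replicating the first and last denoised frames."""
    front = np.repeat(denoised[:1], n_front, axis=0)
    back = np.repeat(denoised[-1:], n_back, axis=0)
    return np.concatenate([front, denoised, back], axis=0)
```

With the default window, n_front = n_back = 30 restores the original frame count, which keeps downstream frame indices aligned with the raw recording.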


The blind-spot size largely affects the quality of the output. A smaller size gives a higher-resolution denoised output but sometimes fails to remove noise. This can happen due to characteristics of the microscope or camera (e.g., pixel bleeding), or when denoising motion-corrected video. A larger blind-spot size can address these issues and remove noise more effectively, but it may degrade the resolution of the denoised output, and too large a size may produce blurry results.

Here's our strategy: start with a blind-spot size of [1, 1] and check the output. If noise remains, increase it to [3, 3] and check again; if noise still remains, try [5, 5]. In most cases, [1, 1] or [3, 3] produces a good result.
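That search is easy to script. A sketch, assuming the same flags as the training call earlier in this thread; DATA and OUT are placeholders, and each run still needs a visual check of its output:

```python
def make_cmd(size):
    """Build one training command for a candidate blind-spot size,
    mirroring the flags from the call shown earlier in this thread."""
    return (
        f"python -m src.train --exp_name bs{size} "
        f"--noisy_data DATA --is_folder --results_dir OUT "
        f"--patch_size 61 22 22 --bs_size {size} {size} --n_epochs 40"
    )

# Smallest first; only move up if noise remains in the output.
for size in (1, 3, 5):
    print(make_cmd(size))
```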

Hope this information helps!

@trose-neuro commented Jan 28, 2025

Thanks! Very useful. 3x3 seems to work also for us.
