Asymmetric loss of frames beginning/end #17
Hi Lena, could you tell me which version (GUI or command line) of our code you used? Best,
Hi, I used the command line. Here is the call I used for training the model (for parameters): `python -m src.train --exp_name xx --noisy_data xx --is_folder --results_dir xx --patch_size 61 22 22 --bs_size 3 3 --bp --n_epochs 40 --logging_interval_batch 5000` Best,
It seems like the `--bp` mode was the problem.
I had already modified the test.py to …
Oh, I see.
I just compared signal onset and movements between the raw and denoised data (images). I didn't do any comprehensive testing with synthetic data. However, it was consistent for two different models trained on different data and analyzing different data.
Hi! Great tool! For us it would also be great to understand precisely which frames are cropped, so we can do post-hoc padding. Also, do I understand correctly that you suggest bypassing the blind-spot network entirely by default (that's my understanding of what `bp=True` does)? On a different subject, do you have any tips on how to optimize the blind-spot size?
@trose-neuro Hi! Thank you for your interest. Basically, the …

Regarding cropped frames: our SUPPORT model processes a total of N frames and produces the denoised center frame. As a result, the first N//2 frames and the last N//2 frames are discarded, because there are insufficient frames to process those regions. It can be found at line 82 of src/test.py, where the first and last … frames are not saved. For example, with the default …, …

If you do not want to discard these frames and wish to save tif stacks with the same number of frames as the input, you can simply remove that slicing as follows: …

The … Here's our strategy: start with a blind spot size of [1, 1] and check the output. If noise remains, increase it to [3, 3] and check again; if noise still remains, increase it to [5, 5]. In most cases, either [1, 1] or [3, 3] produces a good result.

Hope this information helps!
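The frame accounting above (and the post-hoc padding asked about earlier) can be sketched as follows. This is a minimal illustration, not SUPPORT's actual code: `N`, `T`, and the stack shapes are assumed values for the example, with `N = 61` taken from the `--patch_size 61 22 22` call in this thread.

```python
import numpy as np

# Assumed frame accounting for a center-frame denoiser that consumes
# N consecutive frames per output frame. N = 61 matches the
# --patch_size 61 22 22 call above; T and the spatial size are made up.
N = 61                 # temporal window length (assumption)
T = 1000               # total frames in the raw stack (assumption)
half = N // 2          # 30 frames cannot be denoised at each end

denoised_len = T - 2 * half          # frames that actually get denoised
assert denoised_len == T - (N - 1)

# Post-hoc padding so the denoised stack aligns frame-for-frame with the
# raw stack: pad `half` frames at each end (here by repeating edge frames).
denoised = np.zeros((denoised_len, 64, 64), dtype=np.float32)  # placeholder
padded = np.pad(denoised, ((half, half), (0, 0), (0, 0)), mode="edge")
assert padded.shape[0] == T
```

With this alignment, denoised frame `i` in `padded` corresponds to raw frame `i`, which makes side-by-side comparison of signal onsets straightforward.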
Thanks! Very useful. 3x3 seems to work for us as well.
Hi,
your FAQ states that the first and last N frames of each timeseries are removed during the inference process. However, when I closely compare the raw and processed data, it seems that actually the first N+1 frames are removed and the last N-1.
Best wishes,
Lena