Regularization level #42

Open
gschramm opened this issue Jul 8, 2024 · 4 comments
gschramm commented Jul 8, 2024

How are the levels of regularization for ranking chosen (in the "training" and "test" data sets)? Or, in other words, is it important that the submitted method converges quickly for a wide range of (reasonable) regularization levels?

@KrisThielemans
Member

Regularisation level is fixed for the challenge data; see also https://discord.com/channels/1242028164105109574/1252651306779152535/1259815364712464455

For all our current data, the penalisation_factor is set to 1/700, thanks to incorporating the kappa images from Tsai's paper. That has worked fine so far. However, theoretically speaking, I'm not 100% convinced this will work for all data (due to different scaling between the log-likelihood and the RDP_with_kappa). We're getting some more data in to test that. At the moment, however, please assume that it will be set to 1/700.
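For concreteness, here is a minimal NumPy sketch (not the challenge implementation) of a kappa-weighted RDP term in 1D with nearest neighbours only, and of where the penalisation factor enters the objective. The `gamma` and `eps` values are placeholders rather than the challenge settings:

```python
import numpy as np

def rdp_with_kappa(x, kappa, gamma=2.0, eps=1e-9):
    # Kappa-weighted Relative Difference Prior, 1D nearest-neighbour sketch:
    #   R(x) = sum_j kappa_j * kappa_{j+1} * (x_j - x_{j+1})**2
    #          / (x_j + x_{j+1} + gamma * |x_j - x_{j+1}| + eps)
    d = x[:-1] - x[1:]          # neighbour differences
    s = x[:-1] + x[1:]          # neighbour sums (x assumed non-negative)
    w = kappa[:-1] * kappa[1:]  # kappa weighting
    return np.sum(w * d**2 / (s + gamma * np.abs(d) + eps))

beta = 1.0 / 700.0  # the penalisation_factor quoted above

# Schematically, the penalised objective to maximise is
#   objective(x) = log_likelihood(x) - beta * rdp_with_kappa(x, kappa)
# The log-likelihood scales with the measured counts while R(x) scales with
# kappa, so a single beta need not suit every data set -- hence the caveat.
```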


gschramm commented Jul 8, 2024

Thanks a lot for the clarification, Kris. Can you say something about the expected range of true counts in the test data? In my experience, the optimal number of subsets and step sizes depend strongly on the quality of the data, and I want to make sure that we don't over-optimize.

@casperdcl added the participant, question and documentation labels on Jul 10, 2024
@KrisThielemans
Member

We've found that the penalisation factor needs adjusting for different noise levels/scanners. Currently, we do this by hand, such that the images look somewhat reasonable.

We ask contributors to provide data at count levels that are "clinically relevant", but this is hard to guarantee. A lot of phantom data are acquired at much higher count levels; if we have the listmode data, we try to cut it down in that case. But again, this is more an "intuitive" process than a very rigorous one. Sorry.
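As an aside, one way to emulate lower count levels from measured data (a sketch only, not the organisers' actual procedure; `prompt_sinogram` is a hypothetical array of measured prompts) is binomial thinning, which for Poisson data is statistically equivalent to a shorter acquisition:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def thin_counts(prompts, fraction):
    # Keep each recorded count independently with probability `fraction`.
    # For Poisson-distributed prompts, the thinned data are statistically
    # equivalent to an acquisition shortened by the same fraction.
    return rng.binomial(prompts.astype(np.int64), fraction)

# e.g. emulate a 4x lower count level:
# low_count_sinogram = thin_counts(prompt_sinogram, 0.25)
```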

@gschramm
Author

I think that is totally fine, as long as the penalty factors and count levels in the "training data" are somewhat representative of the "testing data" ;)
