
Question about normalization #274

Open
Esther-PAN opened this issue Feb 8, 2025 · 1 comment

Comments

@Esther-PAN

Hello, thank you for your excellent work on this series of research.
I noticed that in the "full-spectrum ..." paper, the COVID benchmark uses medical datasets (BIMCV, CT, and so on) with 4 channels and a size of 224×224. I am curious how to handle normalization when using this benchmark for OOD detection comparisons with datasets like CIFAR-10, which have 3 channels and a size of 32×32.
Could you provide some insights or suggestions on the best practices for normalizing these different datasets to ensure a fair and effective OOD detection comparison?

@zjysteven
Collaborator

zjysteven commented Feb 10, 2025

I'm not exactly sure what was done in v1, but a general principle is that you should use the same preprocessing for all incoming data; otherwise you would already be assuming knowledge of whether the current input is ID or OOD.

So in the case described above, I think expanding CIFAR to 4 channels, resizing it to 224×224, and then applying the same normalization parameters would make the most sense.
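A minimal sketch of that idea in plain NumPy, assuming channels-last images and placeholder normalization stats (the real mean/std would come from the 4-channel medical ID dataset; the choice of how to fill the 4th channel is also an assumption here, not something from the paper):

```python
import numpy as np

# Placeholder normalization stats standing in for the 4-channel
# medical ID dataset's statistics (NOT the real benchmark values).
MEAN = np.array([0.5, 0.5, 0.5, 0.5], dtype=np.float32)
STD = np.array([0.25, 0.25, 0.25, 0.25], dtype=np.float32)

def preprocess(img):
    """Push a CIFAR-style 32x32x3 uint8 image through the same
    pipeline as the ID data: pad to 4 channels, upsample to
    224x224, normalize with the ID dataset's stats."""
    x = img.astype(np.float32) / 255.0            # scale to [0, 1]
    # Add a 4th channel; duplicating the RGB mean is one simple choice.
    extra = x.mean(axis=2, keepdims=True)
    x = np.concatenate([x, extra], axis=2)        # now H x W x 4
    # Nearest-neighbor resize: 32 * 7 = 224, so repeat each pixel 7x.
    x = np.repeat(np.repeat(x, 7, axis=0), 7, axis=1)
    return (x - MEAN) / STD                       # same stats for all inputs

out = preprocess(np.zeros((32, 32, 3), dtype=np.uint8))
```

The key point is that every branch of the pipeline (channel padding, resize, normalization) is identical regardless of where the input came from, so the detector never receives a preprocessing hint about ID vs. OOD.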
