Hello, thank you for your excellent work on this series of research.
I noticed that in the "full-spectrum ..." paper, the COVID benchmark uses medical datasets (BIMCV, CT, and so on) with 4 channels and a size of 224×224. I am curious how normalization should be handled when using this benchmark for OOD detection comparisons against datasets like CIFAR-10, which have 3 channels and a size of 32×32.
Could you provide some insights or suggestions on the best practices for normalizing these different datasets to ensure a fair and effective OOD detection comparison?
I'm not exactly sure what was done in v1, but a general principle is that you should apply the same preprocessing to all incoming data; otherwise you are implicitly assuming you already know whether the current input is ID or OOD, which defeats the purpose of the evaluation.
So in the case described above, I think the most sensible approach is to expand CIFAR-10 images to 4 channels, resize them to 224×224, and then apply the same normalization parameters as the in-distribution data, as sketched below.
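A minimal sketch of such a shared pipeline in PyTorch/torchvision. The normalization statistics (`BENCH_MEAN`, `BENCH_STD`) and the way the 4th channel is filled are placeholders here — substitute the values and channel convention the benchmark actually uses.

```python
import torch
from torchvision import transforms

# Hypothetical per-channel statistics -- replace with the mean/std
# actually used for the 4-channel, 224x224 COVID benchmark data.
BENCH_MEAN = [0.5, 0.5, 0.5, 0.5]
BENCH_STD = [0.5, 0.5, 0.5, 0.5]


def expand_to_four_channels(x: torch.Tensor) -> torch.Tensor:
    """Pad a 3-channel (RGB) tensor to 4 channels.

    Here the 4th channel simply repeats the first channel; how the
    extra channel is constructed is a design choice and should match
    the convention of the in-distribution medical data.
    """
    if x.shape[0] == 3:
        x = torch.cat([x, x[:1]], dim=0)
    return x


# One preprocessing pipeline for *all* inputs, ID and OOD alike:
# resize to 224x224, convert to a tensor, pad to 4 channels, then
# normalize with the same statistics used for the ID training data.
shared_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Lambda(expand_to_four_channels),
    transforms.Normalize(mean=BENCH_MEAN, std=BENCH_STD),
])
```

Passing `shared_transform` as the `transform` argument of both the medical-data loader and the CIFAR-10 loader ensures the detector never sees preprocessing differences that would leak the ID/OOD label.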