Hello,
first of all, I want to thank you for all the support that you have already provided.

I'm currently using a DINO-SwinT model, and I'm wondering how the number of dn_queries affects training. I trained the model on two different datasets: $\mathcal{I}$ and $\mathcal{I}_\text{undersampled}$, where $\mathcal{I}_\text{undersampled} \subset \mathcal{I}$.
Does the number of instances per batch/image influence performance? Does the number of dn_queries introduce a hard or soft cap on the number of instances per image?
To be honest, I have read the DINO paper, but I cannot quite put this together.
Every dn group should have a dynamic size (2 × N instances), right? For my two sets, the smaller one is slightly more balanced, yet all AP scores got worse. Neither the model nor the training schedule changed; the only thing that changed is the number of instances in the majority class. However, this doubled the average number of instances per image. How can this possibly be connected?
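To make my question concrete, here is how I currently understand the group arithmetic. This is only a rough sketch based on my reading of the CDN preparation step in DINO-style implementations; the function name and the exact budget/rounding are my own assumptions, not the repository's API.

```python
# Rough sketch of my understanding of DN group sizing (names and exact
# rounding are my own assumptions, not the actual repo code).

def dn_group_layout(dn_queries: int, max_gt_per_image: int):
    """Return (num_groups, queries_per_group, total_dn_slots) for one batch."""
    if max_gt_per_image == 0:
        # No ground truth in the batch: fall back to a single dummy group.
        return 1, 0, 0
    # Each group holds one positive and one negative noised query per GT box,
    # sized by the image with the most GT boxes in the batch.
    queries_per_group = 2 * max_gt_per_image
    # The configured dn_queries acts as a total budget that is split into as
    # many groups as fit, so more instances per image -> fewer DN groups.
    num_groups = max(1, (2 * dn_queries) // queries_per_group)
    return num_groups, queries_per_group, num_groups * queries_per_group

# With dn_queries = 100: a batch whose densest image has 10 GT boxes gets
# 10 groups of 20 queries, while one with 20 GT boxes gets only 5 groups of 40.
print(dn_group_layout(100, 10))  # (10, 20, 200)
print(dn_group_layout(100, 20))  # (5, 40, 200)
```

If this is roughly right, then doubling the average instance count would not cap the number of GT boxes, but it would roughly halve the number of DN groups, and that is the connection I am trying to understand.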
Thank you in advance!
Henrik