Hello everyone,
I would like to ask about the VarifocalNet head architecture. As I understand it, the feature pyramid produces several levels, each with different spatial dimensions. However, the paper shows the input to the head as HxWx256. Is this the same for every level? With a ResNet-50 backbone, the FPN outputs are (batch, 256, 52, 52), (batch, 256, 26, 26), (batch, 256, 13, 13), (batch, 256, 7, 7), and (batch, 256, 4, 4). Does that mean I need to upsample every feature map to a common HxW before feeding it to the head? See the sketch below for what I mean.
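To make the question concrete, here is a minimal sketch using the shapes listed above. The head here is just a generic stack of shared 3x3 convolutions that I wrote as a stand-in, not the actual VarifocalNet head:

```python
import torch
import torch.nn as nn

# FPN outputs from my run (ResNet-50 backbone, 256 channels, 5 levels).
# The spatial size (H, W) differs per level.
fpn_outs = [
    torch.randn(1, 256, 52, 52),
    torch.randn(1, 256, 26, 26),
    torch.randn(1, 256, 13, 13),
    torch.randn(1, 256, 7, 7),
    torch.randn(1, 256, 4, 4),
]

# A single shared conv stack, for illustration only -- a stand-in,
# not the real VarifocalNet head.
head = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

for feat in fpn_outs:
    out = head(feat)  # 3x3 convs with padding=1 keep each level's H and W
    print(feat.shape, '->', out.shape)
```

Is this per-level treatment (each level keeping its own H and W) what the HxWx256 in the paper refers to, or should the levels be resized to one common size first?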
I hope my question gets a reply :D. Have a good day, and good luck with your research.