Hello!
Thank you for the well-organised code repository.
I am currently working on modifying the embedded feature dimensions, based on the original paper and the model-generating code here:
https://github.com/huanghoujing/AlignedReID-Re-Production-Pytorch/blob/master/aligned_reid/model/Model.py#L23
According to the original paper, the dimensions of the feature map generated by ResNet-50 are:
self.base [N, C, H, W] = N(images) x 2048 x 7 x 7,
and those of the local features are [N, c, H] = N x 128 x 7.
However, when I run train.py (with training), these dimensions turn out to be:
self.base [N, C, H, W] = N x 2048 x 8 x 4
and feat_local [N, c, H] = N x 128 x 8.
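For what it's worth, the discrepancy is consistent with the input image size rather than the backbone itself: ResNet-50 downsamples by a total stride of 32 (conv1, the max pool, and the first block of layers 2–4 each halve the resolution). A 224x224 input gives the paper's 7x7 map, while a 256x128 person-ReID crop (which I assume this repo uses — the exact crop size is an assumption here) gives 8x4. A minimal sketch of that arithmetic:

```python
def feature_map_size(h, w, total_stride=32):
    """Spatial size of ResNet-50's final conv feature map.

    ResNet-50 halves the resolution five times (conv1, maxpool,
    and the first block of layer2/layer3/layer4), so the total
    stride is 32. Integer division is exact when h and w are
    multiples of 32; padding makes non-divisible cases round up.
    """
    return h // total_stride, w // total_stride

print(feature_map_size(224, 224))  # (7, 7) -- the paper's input size
print(feature_map_size(256, 128))  # (8, 4) -- a typical re-ID crop size
```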
I wonder where I could modify this (specifically the 'H' value, i.e. the number of local features per image).
Should I set pretrained=False and modify it in the Bottleneck class?
https://github.com/huanghoujing/AlignedReID-Re-Production-Pytorch/blob/master/aligned_reid/model/resnet.py#L55
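One alternative to touching the Bottleneck class (this is just a sketch, not the repo's actual approach): insert an adaptive pool after the backbone to force the local-feature H to a fixed value regardless of input size, e.g. with `torch.nn.AdaptiveAvgPool2d`:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: given the 8x4 feature map observed in train.py,
# pool it to H=7 rows (and collapse W), matching the paper's 7 local
# features per image without retraining the backbone layers.
feat = torch.randn(2, 2048, 8, 4)        # N x C x H x W from self.base
pool = nn.AdaptiveAvgPool2d((7, 1))      # target output size (H=7, W=1)
pooled = pool(feat)                      # N x 2048 x 7 x 1
print(tuple(pooled.shape))               # (2, 2048, 7, 1)
```

The same effect could also be had by simply resizing the input images to a height of 224 (224 / 32 = 7); which is preferable depends on whether you want to keep the current crop aspect ratio.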
Thank you very much! :)