Local feature dimensions #68

Open
wildcat5566 opened this issue Feb 12, 2019 · 0 comments

Hello!

Thank you for the well-organised code repository.
I am currently working on modifying the embedded feature dimensions, based on the original paper and on the model-generating code here:
https://github.com/huanghoujing/AlignedReID-Re-Production-Pytorch/blob/master/aligned_reid/model/Model.py#L23

According to the original paper, the dimensions of the feature map generated by ResNet50 are:
self.base [N, C, H, W] = N (images) x 2048 x 7 x 7,
and those of the local features are [N, c, H] = N x 128 x 7.

However, when I actually run train.py, the dimensions turn out to be:
self.base [N, C, H, W] = N x 2048 x 8 x 4,
and feat_local [N, c, H] = N x 128 x 8.
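
For reference, here is a minimal sketch (using a plain torchvision ResNet-50 trunk rather than this repo's Model class) that reproduces both spatial sizes. Since the backbone downsamples by a total factor of 32, 224x224 inputs give 7x7 maps while 256x128 inputs give 8x4:

```python
import torch
import torchvision

# Plain ResNet-50 trunk up to the last conv block (drop avgpool + fc),
# roughly mirroring what self.base produces before the local-feature branch.
resnet = torchvision.models.resnet50(pretrained=False)
base = torch.nn.Sequential(*list(resnet.children())[:-2])

with torch.no_grad():
    # Paper setting: 224x224 inputs -> 2048x7x7 feature maps (224 / 32 = 7).
    print(base(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2048, 7, 7])
    # A 256x128 resize (presumably this repo's default) -> 2048x8x4
    # (256 / 32 = 8, 128 / 32 = 4).
    print(base(torch.randn(1, 3, 256, 128)).shape)  # torch.Size([1, 2048, 8, 4])
```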

I wonder where I could modify this (specifically the 'H' value, i.e. the number of local features per image).
Should I set pretrained=False and modify it in the Bottleneck class?
https://github.com/huanghoujing/AlignedReID-Re-Production-Pytorch/blob/master/aligned_reid/model/resnet.py#L55
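
To frame the question, here is a hedged sketch of two ways H might be brought back to 7 without editing the Bottleneck class; these are my own assumptions, not this repo's code, and the 1x1 conv below is only a stand-in for the repo's local-feature conv:

```python
import torch
import torch.nn as nn

feat = torch.randn(32, 2048, 8, 4)  # what self.base currently outputs: N x 2048 x 8 x 4

# Option A (hypothetical): resize inputs to height 224 so the backbone itself
# produces 224 / 32 = 7 rows; no model change needed.

# Option B (hypothetical): keep 256x128 inputs and force a fixed number of
# horizontal stripes by adaptive pooling before the 128-dim local conv.
pool = nn.AdaptiveAvgPool2d((7, 1))               # -> N x 2048 x 7 x 1
local_conv = nn.Conv2d(2048, 128, kernel_size=1)  # stand-in for the local 1x1 conv
local_feat = local_conv(pool(feat)).squeeze(-1)   # -> N x 128 x 7
print(local_feat.shape)  # torch.Size([32, 128, 7])
```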

Thank you very much! :)
