Structural downsampling and static token sparsification #5
Hi, thanks for your interest in our work.
Thanks for your quick response! Looking forward to seeing your official code for structural downsampling and static token sparsification after the CVPR deadline.
Hello, do you have the code for generating the probability matrices shown in Figure 6? I have recently been trying to reproduce the results of the paper, but I couldn't find the corresponding code. Looking forward to your reply, thank you.
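To be concrete about what I'm after, here is a minimal sketch of how such a per-token keep-probability heat-map could be plotted, assuming a DeiT-S-style 14×14 patch grid; `keep_prob`, `plot_keep_prob_heatmap`, and the grid size are my own assumptions, and the real probabilities would of course have to come from your model.

```python
# Hypothetical sketch: render a per-token keep-probability vector as a heat-map,
# assuming a 14x14 patch grid (196 patch tokens, CLS token excluded).
import numpy as np
import matplotlib.pyplot as plt

def plot_keep_prob_heatmap(keep_prob, grid_size=14):
    """keep_prob: array of shape [grid_size * grid_size] with values in [0, 1]."""
    prob_map = np.asarray(keep_prob).reshape(grid_size, grid_size)
    plt.imshow(prob_map, cmap="viridis", vmin=0.0, vmax=1.0)
    plt.colorbar(label="keep probability")
    plt.axis("off")
    plt.show()

# Example with random values standing in for the model's predicted probabilities.
plot_keep_prob_heatmap(np.random.rand(196))
```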
Hi, this is quite solid and promising work, but I have some questions.
(1) In the paper, you perform average pooling with kernel size 2 × 2 after the sixth block for structural downsampling. But in Table 3, you report results for both structural downsampling and static token sparsification. What is the difference between structural downsampling and static token sparsification, given that their accuracies are not the same? (A minimal sketch of how I understand the two is at the end of this comment.)
(2) I'm interested in the average pooling with kernel size 2 × 2. Did you run extra experiments on the position of this structural downsampling, e.g. after the seventh block or the tenth block of the ViT?
(3) Could you provide the code for reproducing the results of structural downsampling and static token sparsification in Table 3, and the probability heat-map in Figure 6?
Thanks for your help!
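For question (1), here is a minimal PyTorch sketch of how I currently read the two baselines: structural downsampling replaces the patch-token grid with a 2 × 2 average-pooled grid after the sixth block, while static token sparsification drops a fixed, input-independent subset of patch tokens. The shapes, keep ratio, and helper names below are my assumptions, not your official implementation, so please correct me if this is not what Table 3 compares.

```python
# Minimal sketch of the two token-reduction baselines as I understand them;
# the block index, keep ratio, and tensor shapes are assumptions, not official code.
import torch
import torch.nn.functional as F

def structural_downsample(tokens, grid_size=14):
    """2x2 average pooling over the spatial token grid (CLS token kept as-is).
    tokens: [B, 1 + grid_size**2, C] -> [B, 1 + (grid_size // 2)**2, C]"""
    cls_tok, patch_tok = tokens[:, :1], tokens[:, 1:]
    B, N, C = patch_tok.shape
    x = patch_tok.transpose(1, 2).reshape(B, C, grid_size, grid_size)
    x = F.avg_pool2d(x, kernel_size=2)            # [B, C, grid_size/2, grid_size/2]
    patch_tok = x.flatten(2).transpose(1, 2)      # [B, (grid_size/2)**2, C]
    return torch.cat([cls_tok, patch_tok], dim=1)

def static_token_sparsify(tokens, keep_indices):
    """Keep a fixed, input-independent subset of patch tokens (plus CLS).
    keep_indices: 1-D LongTensor of patch indices shared by every image."""
    cls_tok, patch_tok = tokens[:, :1], tokens[:, 1:]
    patch_tok = patch_tok[:, keep_indices]        # same indices for all inputs
    return torch.cat([cls_tok, patch_tok], dim=1)

# Toy usage: apply either reduction to the token sequence after the sixth block.
tokens = torch.randn(2, 1 + 14 * 14, 384)         # DeiT-S-like shapes (assumed)
down = structural_downsample(tokens)              # [2, 1 + 49, 384]
keep = torch.randperm(14 * 14)[:98]               # e.g. keep 50% of patch tokens
sparse = static_token_sparsify(tokens, keep)      # [2, 1 + 98, 384]
```

If this reading is right, the difference would be that pooling mixes information from neighboring patches, whereas static sparsification discards tokens outright at positions chosen once for all inputs, which might explain the accuracy gap in Table 3.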