
[FEATURE] add t2t_vit #2364

Open
holderhe opened this issue Dec 14, 2024 · 1 comment
Labels: enhancement (New feature or request)

Comments

@holderhe

add t2t_vit

holderhe added the enhancement (New feature or request) label on Dec 14, 2024
@rwightman (Collaborator)

@holderhe at this point I don't have the time to prioritize models like this. If someone created a fully functional and clean PR I'd consider it, but it's a bit of work to pass the tests.

For reference, my explorations here with a model much closer to the original ViT arch (with a few tweaks) and an updated training recipe appear quite a bit better than those models on ImageNet-1k pretraining when you compare img/sec: see the 82.5% top-1 of vit_little vs the 81.5% of t2t_vit_14. Without F.sdpa, vit_little is slightly faster at inference and quite a bit faster in training, and with F.sdpa (enabled by default) it's much faster. https://huggingface.co/timm/vit_little_patch16_reg1_gap_256.sbb_in12k
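The "F.sdpa" mentioned above refers to PyTorch's fused `torch.nn.functional.scaled_dot_product_attention`, which timm's ViT blocks can use in place of a manual softmax(QK^T/sqrt(d))V computation. A minimal sketch of the two equivalent paths (tensor shapes here are illustrative, not taken from either model):

```python
# Sketch: manual attention vs the fused F.scaled_dot_product_attention path.
# Shapes are hypothetical examples, not the actual vit_little configuration.
import math
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim)
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)

# Manual attention: softmax(Q K^T / sqrt(d)) V
attn = (q @ k.transpose(-2, -1)) / math.sqrt(q.shape[-1])
out_manual = attn.softmax(dim=-1) @ v

# Fused path: same math, dispatched to an optimized kernel when available
out_fused = F.scaled_dot_product_attention(q, k, v)
```

Both produce the same result numerically; the fused call is what gives the training/inference speedup cited above.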
