Thanks for sharing the source code!
I noticed that you used the pre-trained weight file of DeiT instead of training from scratch.
However, I see you emphasize the 'efficiency' of your model.
I wonder whether there is some issue with training from scratch.
The pre-trained weights were used because transformers lack the inductive bias of CNNs. However, in the case of STR, since MJSynth and SynthText are both large in number (though lacking diversity in terms of texture), ViTSTR may not need pre-trained weights, but this could result in lower performance. This would be an interesting direction for future work.
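For reference, switching between the two setups is typically a one-flag change when the backbone comes from timm. A minimal sketch, assuming the model is built with timm.create_model (the DeiT model name here is illustrative):

```python
# Minimal sketch, assuming the ViT/DeiT backbone is created via timm.
import timm

# As in the paper: start from DeiT weights pre-trained on ImageNet.
backbone = timm.create_model("deit_tiny_patch16_224", pretrained=True)

# From scratch: random initialization. Without pre-training, expect
# slower convergence and likely lower STR accuracy, even with large
# synthetic datasets such as MJSynth and SynthText.
backbone_scratch = timm.create_model("deit_tiny_patch16_224", pretrained=False)
```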
Hi roatienza,
Currently, there are several Transformer models to choose from (["vitstr_tiny_patch16_224", "vitstr_small_patch16_224", "vitstr_base_patch16_224", "vitstr_tiny_distilled_patch16_224", "vitstr_small_distilled_patch16_224"]). However, these do not support non-Latin languages. Do you have any instructions for training a Transformer model that supports non-Latin languages?
Thank you very much.
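In case it helps as a starting point: a hedged sketch of what a non-Latin training run could look like. The --character flag follows the deep-text-recognition-benchmark convention this repo builds on; all flag names below are assumptions and should be checked against train.py.

```python
# Hypothetical sketch: extend the recognizable alphabet by passing the
# target script's characters via --character (flag names are assumptions;
# verify against this repo's train.py).
import subprocess

# Example: all modern Hangul syllables (U+AC00..U+D7A3).
hangul = "".join(chr(c) for c in range(0xAC00, 0xD7A4))

subprocess.run([
    "python3", "train.py",
    "--train_data", "data_lmdb_release/training",
    "--valid_data", "data_lmdb_release/validation",
    "--Transformer",
    "--TransformerModel", "vitstr_tiny_patch16_224",
    "--character", "0123456789abcdefghijklmnopqrstuvwxyz" + hangul,
], check=True)
```

Note that MJSynth and SynthText are Latin-only, so a non-Latin training set in the same LMDB format would also be needed.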