
Can't find "modules" folder of this project #1

Open
Zhaiyan1996 opened this issue Jul 12, 2023 · 5 comments

Comments

@Zhaiyan1996

This is valuable work; I want to train on my own dataset. Would you be willing to share the core parts, such as the Encoder, Decoder, Masker, and Demasker?

@shhuangcoder

> This is valuable work; I want to train on my own dataset. Would you be willing to share the core parts, such as the Encoder, Decoder, Masker, and Demasker?

https://github.com/CrossmodalGroup/DynamicVectorQuantization

@Zhaiyan1996
Author

> This is valuable work; I want to train on my own dataset. Would you be willing to share the core parts, such as the Encoder, Decoder, Masker, and Demasker?

> https://github.com/CrossmodalGroup/DynamicVectorQuantization

The following files are not included in either project:
modules.masked_quantization.masker_vanilla_refine.VanillaMasker
modules.masked_quantization.demasker_vanilla.VanillaDemasker
modules.masked_quantization.decoder.Decoder
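For anyone hitting the same import errors, here is a minimal stdlib-only sketch that reports which of these dotted paths resolve in a local clone. The entries in `EXPECTED_MODULES` are just the parent modules of the classes quoted above; they are taken from this thread, not verified against the repo:

```python
import importlib.util

# Parent modules of the classes referenced in this thread (assumed layout).
EXPECTED_MODULES = [
    "modules.masked_quantization.masker_vanilla_refine",
    "modules.masked_quantization.demasker_vanilla",
    "modules.masked_quantization.decoder",
]

def missing_modules(names):
    """Return the subset of dotted module paths that cannot be imported."""
    missing = []
    for name in names:
        try:
            if importlib.util.find_spec(name) is None:
                missing.append(name)
        except ModuleNotFoundError:
            # Raised when the parent package itself is absent.
            missing.append(name)
    return missing

if __name__ == "__main__":
    for name in missing_modules(EXPECTED_MODULES):
        print(f"missing: {name}")
```

Run it from the repo root; if any path is printed, the corresponding file is still absent from your checkout.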

@simonZhou86

Thank you for sharing the code of this amazing work. I am also confused about some of the code files/modules, specifically:

  1. Could one of the authors please confirm that the MaskSelfAttention_SquareGrowth class in modules/transformer/mask_attention.py is the self-attention in the adaptive de-mask module (Figure 2c)?
  2. Could you please provide the link to:
    modules.masked_quantization.masker_vanilla_refine.VanillaMasker, and
    modules.masked_quantization.demasker_vanilla.VanillaDemasker

Thank you!

@CrossmodalGroup
Owner

> Thank you for sharing the code of this amazing work. I am also confused about some of the code files/modules, specifically:
>
>   1. Could one of the authors please confirm that the MaskSelfAttention_SquareGrowth class in modules/transformer/mask_attention.py is the self-attention in the adaptive de-mask module (Figure 2c)?
>   2. Could you please provide the link to:
>     modules.masked_quantization.masker_vanilla_refine.VanillaMasker, and
>     modules.masked_quantization.demasker_vanilla.VanillaDemasker
>
> Thank you!

Hi, thanks for your interest. Sorry for the delayed reply and the missing modules folder. I have updated the repository with the modules folder. Feel free to reach out if you have any further questions.

@CrossmodalGroup
Owner

> This is valuable work; I want to train on my own dataset. Would you be willing to share the core parts, such as the Encoder, Decoder, Masker, and Demasker?

Hi, I have uploaded the missing modules folder. Feel free to reach out if you have any further questions.
