I've gone through the code. I learned that the object queries are nn.Embedding weights, and that zero matrices are added if you wish to add a positional embedding.
My confusion/question is: are these embeddings trainable, given that they are the weights of the embedding layer?
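For anyone who wants to check this directly, here is a minimal sketch (assuming the paper's defaults of 100 queries and a hidden dimension of 256) showing that the lookup table of an nn.Embedding is stored as an nn.Parameter, which PyTorch registers as trainable by default:

```python
import torch.nn as nn

# Minimal sketch: the object queries are defined roughly like this
# (100 queries and hidden_dim=256 are the DETR paper's defaults).
query_embed = nn.Embedding(100, 256)

# The lookup table of an nn.Embedding is an nn.Parameter, so it is
# registered with the module and trainable by default.
print(type(query_embed.weight))          # <class 'torch.nn.parameter.Parameter'>
print(query_embed.weight.requires_grad)  # True
```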
In the Colab notebook, it is initialized as an nn.Parameter. Also, this should be learnable, right? That's the point of the paper: they use learnable queries. How did you verify that they are not updated?
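One way to verify it yourself: snapshot the query weights, run a single optimizer step on a dummy loss, and compare. This is just an illustrative check, not DETR's actual training loop; the shapes and the SGD learning rate are arbitrary:

```python
import torch
import torch.nn as nn

query_embed = nn.Embedding(100, 256)
optimizer = torch.optim.SGD(query_embed.parameters(), lr=0.1)

# Snapshot the weights before the update.
before = query_embed.weight.detach().clone()

# Dummy loss that touches every query vector.
loss = query_embed.weight.sum()
loss.backward()
optimizer.step()

# False: the query weights were updated by the step.
print(torch.equal(before, query_embed.weight.detach()))
```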
Since the weights are learned weights, they should be updated at every step.
You said nn.Parameter, so they didn't use an Embedding layer?
Sorry about the confusion, guys. Some days after posting this comment, I revisited DETR and saw that the query weights are indeed changing, and hence learned. In the Colab they used nn.Parameter, but in the actual implementation they used nn.Embedding.
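For completeness, the two forms are interchangeable for this purpose: both register a (num_queries, hidden_dim) tensor that the optimizer updates. A small sketch, with the shapes assumed rather than taken from either codebase:

```python
import torch
import torch.nn as nn

# Colab-style: a raw learnable tensor.
queries_param = nn.Parameter(torch.rand(100, 256))

# Repo-style: an embedding table whose .weight is itself an nn.Parameter.
queries_embed = nn.Embedding(100, 256)

# Both are trainable and will receive gradient updates.
print(queries_param.requires_grad)         # True
print(queries_embed.weight.requires_grad)  # True
```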