I am following the `3.03. Generating BERT embedding.ipynb` notebook to learn how to get embeddings from a BERT model.
I have a question about the result of `hidden_rep, cls_head = model(token_ids, attention_mask=attention_mask)`. When I compare the following two values, they are different.
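Roughly, the comparison I am doing looks like this (a minimal sketch; I am assuming `bert-base-uncased` here, and `"I love Paris"` is just a stand-in sentence):

```python
import torch
from transformers import BertModel, BertTokenizer

# Assumed checkpoint; the notebook may load a different one.
model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("I love Paris", return_tensors="pt")
with torch.no_grad():
    # return_dict=False forces the legacy tuple output so the unpacking works.
    hidden_rep, cls_head = model(**inputs, return_dict=False)

print(hidden_rep.shape)  # (1, seq_len, 768) -- one row per token
print(cls_head.shape)    # (1, 768)

# The [CLS] row of the last hidden state vs. the returned cls_head:
print(torch.allclose(hidden_rep[:, 0], cls_head))  # False
```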
My understanding is that they are both the embeddings for `[CLS]`, so I expect them to be the same. Is my understanding not correct?

I notice that the output type of the model is `transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions`. Does that mean the value of `cls_head` is actually the result of pooling the individual tokens' embeddings?

BTW, I am using `transformers == 4.12.5`, if that matters.
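For what it's worth, my current guess (not something stated in the notebook) is that `cls_head` is the `[CLS]` hidden state passed through the model's pooler, i.e. a dense layer followed by tanh. Continuing from the snippet above:

```python
# Assumption: BertModel's pooler applies dense + tanh to the [CLS] row.
manual_pool = torch.tanh(model.pooler.dense(hidden_rep[:, 0]))
print(torch.allclose(manual_pool, cls_head, atol=1e-6))  # True
```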