The position should be calculated for each word in the sentence, relative to the two target entity words.
An example was given in the paper:
People have been moving back into downtown
The corresponding embedding for "moving", with respect to "people" and "downtown", should be:
[WordVec, 3, -3]
In the code, however, only the positions of the two entity words themselves are used. This rather misses the point of position embeddings, which are meant to extract structural features from the whole sentence.
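A minimal sketch of what the paper describes, assuming whitespace tokenization and known entity indices (the function name and arguments here are illustrative, not taken from this repo's code):

```python
def position_features(tokens, e1_idx, e2_idx):
    """For each token, return its relative distance to the two entity tokens."""
    return [(i - e1_idx, i - e2_idx) for i in range(len(tokens))]

tokens = "People have been moving back into downtown".split()
feats = position_features(tokens, e1_idx=0, e2_idx=6)

# "moving" is tokens[3]; its distances to "people" and "downtown"
# are (3, -3), matching the paper's [WordVec, 3, -3] example.
print(feats[3])  # (3, -3)
```

The key point is that every word in the sentence gets its own pair of distances, so the model can see where each word sits relative to the two entities, rather than a single pair of entity positions per sentence.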