
Position feature embedding seems not right? #10

Open
peidaqi opened this issue Dec 3, 2018 · 1 comment
peidaqi commented Dec 3, 2018

In the paper, it says:
[image: excerpt from the paper defining per-word position features]

The position should be calculated for each word in the sentence, relative to the two target entity words.
An example was given in the paper:

People have been moving back into downtown

The corresponding embedding for "moving", relative to "people" and "downtown", should be:
[WordVec, 3, -3]

However, in the code:
[image: screenshot of the position feature computation in the code]

What's used is the absolute position of the two entity words, not each word's distance to them. This misses the point of position embeddings, which are meant to capture the sentence's structure by encoding every word's position relative to the entities.
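For concreteness, here is a minimal sketch of the per-word relative position feature the paper describes; the function and variable names are illustrative, not taken from this repository:

```python
# Sketch of per-word position features as described in the paper:
# for every token, record its signed distance to each entity word.
def relative_positions(tokens, e1_idx, e2_idx):
    pos1 = [i - e1_idx for i in range(len(tokens))]
    pos2 = [i - e2_idx for i in range(len(tokens))]
    return pos1, pos2

tokens = "People have been moving back into downtown".split()
pos1, pos2 = relative_positions(tokens, e1_idx=0, e2_idx=6)
# "moving" is tokens[3]: pos1[3] == 3 and pos2[3] == -3,
# matching the paper's [WordVec, 3, -3] example.
```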

FrankWork (Owner) commented Dec 4, 2018

The position feature in the code is correct:

position1, position2 = _position_feature(raw_example)
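For reference, implementations of this feature commonly clip each signed distance to a window and shift it to a non-negative id before the embedding lookup; the constant and names below are assumptions for illustration, not this repository's actual code:

```python
MAX_DIST = 60  # assumed clipping window, not taken from the repo

def position_ids(distances, max_dist=MAX_DIST):
    # Clip each signed distance to [-max_dist, max_dist], then shift
    # by max_dist so ids fall in [0, 2 * max_dist] for an embedding table.
    return [min(max(d, -max_dist), max_dist) + max_dist for d in distances]

print(position_ids([3, -3]))  # -> [63, 57]
```

Under that reading, the stored values are shifted distance ids rather than raw entity positions, which may explain the disagreement above.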
