
About negative samples #3

Open
yunzi-94 opened this issue Mar 26, 2021 · 3 comments

Comments


yunzi-94 commented Mar 26, 2021

In your TeRo paper, you said: "In this paper, we use the same loss function as the negative sampling loss proposed in (Sun et al., 2019) for optimizing our model."

But in the code you shared, I can only find a simple random negative sampling method (lines 80-140 in Train.py).

What do you think about the adversarial negative sampling method used in RotatE (Sun et al., 2019)?
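(For context, simple random negative sampling just replaces the head or tail entity of each training triple with a uniformly sampled entity. A minimal PyTorch-style sketch of this idea, not the repository's actual Train.py code; all names and shapes here are illustrative:)

```python
import torch

def random_corrupt(triples, num_entities, neg_per_pos=10):
    """Plain random negative sampling: corrupt the head or tail of each
    triple with a uniformly sampled entity.

    triples: LongTensor of shape (batch, 3) holding (head, relation, tail) ids.
    Returns a LongTensor of shape (batch, neg_per_pos, 3) of corrupted triples.
    """
    batch = triples.size(0)
    neg = triples.unsqueeze(1).repeat(1, neg_per_pos, 1)          # (batch, n_neg, 3)
    rand_ents = torch.randint(0, num_entities, (batch, neg_per_pos))
    corrupt_head = torch.rand(batch, neg_per_pos) < 0.5           # pick head or tail per negative
    neg[..., 0] = torch.where(corrupt_head, rand_ents, neg[..., 0])
    neg[..., 2] = torch.where(corrupt_head, neg[..., 2], rand_ents)
    return neg
```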

@soledad921
Owner

The adversarial negative sampling is actually implemented in the loss function. Please check the log_rank_loss function defined in model.py (lines 120-123). Indeed, adversarial negative sampling has proven helpful for boosting the performance of (T)KGE models in many cases, including ours.
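(For readers unfamiliar with the technique, here is a minimal sketch of the self-adversarial negative sampling loss from RotatE (Sun et al., 2019). It is not the repository's actual log_rank_loss code; the tensor shapes, parameter names, and the distance-based scoring convention are assumptions:)

```python
import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_score, neg_scores, gamma=1.0, alpha=0.5):
    """Self-adversarial negative sampling loss (Sun et al., 2019).

    pos_score:  (batch,)        distances d(h, r, t) for true triples
    neg_scores: (batch, n_neg)  distances for corrupted triples
    gamma:      margin
    alpha:      adversarial sampling temperature
    Scores are distances here, so lower means more plausible (as in TeRo/RotatE).
    """
    # Positive part: -log sigmoid(gamma - d(h, r, t))
    pos_loss = -F.logsigmoid(gamma - pos_score)

    # Adversarial weights: softmax over negative scores, so harder (more plausible,
    # i.e. lower-distance) negatives get larger weight. Detached so no gradient
    # flows through the sampling weights themselves.
    weights = F.softmax(-alpha * neg_scores, dim=1).detach()

    # Negative part: weighted -log sigmoid(d(h', r, t') - gamma)
    neg_loss = -(weights * F.logsigmoid(neg_scores - gamma)).sum(dim=1)

    return (pos_loss + neg_loss).mean()
```

With alpha = 0 the weights become uniform and this reduces to the plain (non-adversarial) negative sampling loss, which is the comparison discussed below.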

@yunzi-94
Author

Thanks for your reply; it helps me a lot.
But I still have a question: without adversarial negative sampling, can we still draw a conclusion about how well TeRo performs? (I have seen the with/without adversarial negative sampling comparisons in the RotatE paper (Sun et al., 2019).)
If you are interested in this question, we could discuss it further.

@soledad921
Owner


Generally, adversarial negative sampling improves MRR by 1-3 points for both TeRo and RotatE.
