Some question about the Result #2
Comments
@ShomyLiu Using the official |
@FrankWork Yeah, at first I just used the F1 metric from sklearn, and the result was quite a bit lower than the paper's. Then I evaluated with scorer.pl instead, and it is right, as you say: |
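For anyone hitting the same gap: the official scorer.pl reports macro-averaged F1 over the relation classes excluding "Other", while sklearn's defaults average over every class. A rough approximation in sklearn (label strings here are illustrative, and this is not byte-for-byte identical to scorer.pl, which also handles directionality specially):

```python
from sklearn.metrics import f1_score

# Toy predictions; real labels are the 18 directional relations plus "Other".
y_true = ["Cause-Effect(e1,e2)", "Other", "Component-Whole(e1,e2)", "Other"]
y_pred = ["Cause-Effect(e1,e2)", "Cause-Effect(e1,e2)", "Component-Whole(e1,e2)", "Other"]

# Macro-F1 over every class *except* "Other" -- closer to the official
# SemEval metric than sklearn's default averaging over all classes.
labels = sorted((set(y_true) | set(y_pred)) - {"Other"})
macro_f1 = f1_score(y_true, y_pred, labels=labels, average="macro")
print(macro_f1)
```

Averaging over all 19 classes (or reporting plain accuracy) will typically look several points different from the scorer.pl number, which matches the gap described above.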
Hi Frank, I ran your code and got the 82.4 with 50d embeddings, but I can't get the same F1 from my re-implementation of your model in PyTorch. My F1 is only 80, and my data processing is the same as yours. I also tried saving your vectors and running them in my model, and still could not get the same F1. I think the difference is in the weight initialization, but I can't find a good initialization. Can I contact you to ask for some advice? |
@JankinXu Hi, I have re-implemented the model in PyTorch as well. However, I cannot reproduce the 82.4 F1 score; mine is a little lower than yours, about 79%, which confused me for a long time. |
@FrankWork my model code is here https://paste.ubuntu.com/26385600/ |
@ShomyLiu Do you normalize your pretrained word embeddings? I didn't normalize them in my code. And make sure you add L2 regularization to your |
@FrankWork Yeah, I have tried adding L2 regularization only to the out_linear layer, but it does not improve the performance... it's really confusing.
@ShomyLiu In my experience, L2 regularization in out_linear layer helps a lot. |
@FrankWork Thanks. I used the following code in PyTorch to add L2 regularization:
|
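(The snippet referenced above was lost in this export. For reference, one common way to apply L2 regularization to the output layer only in PyTorch is through optimizer parameter groups; `weight_decay` implements L2-style decay. The `Net` model, layer names, and hyperparameters below are illustrative, not the original code:)

```python
import torch
import torch.nn as nn

# Placeholder model; `out_linear` stands in for the final classification layer.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(50, 100, kernel_size=3)
        self.out_linear = nn.Linear(100, 19)  # 18 relations + "Other"

net = Net()

# Apply weight decay (L2) only to the output layer's parameters.
decay_params = list(net.out_linear.parameters())
other_params = [p for n, p in net.named_parameters()
                if not n.startswith("out_linear")]
optimizer = torch.optim.Adam(
    [
        {"params": decay_params, "weight_decay": 1e-3},
        {"params": other_params, "weight_decay": 0.0},
    ],
    lr=1e-3,
)
```

The alternative is adding an explicit `lambda * out_linear.weight.pow(2).sum()` term to the loss, which is equivalent L2 for plain SGD but behaves slightly differently from `weight_decay` under Adam.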
Hello, I want to know how to deal with the class imbalance in the SemEval-2010 Task 8 data? Thank you.
@ShomyLiu Hello, did you solve your problem? I'm sorry that I only just saw your message. I achieved 81.7% in the end. I don't use L2, but I add two dropouts: one after the embedding layer and one before the last linear layer.
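For reference, the dropout placement described above (after the embedding lookup and before the final linear layer) can be sketched as follows; the architecture, sizes, and p=0.5 are illustrative stand-ins, not the poster's actual code:

```python
import torch
import torch.nn as nn

class RelationCNN(nn.Module):
    # Minimal sketch: embedding -> dropout -> conv -> max-pool -> dropout -> linear
    def __init__(self, vocab=1000, dim=50, n_filters=100, n_classes=19):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.drop_emb = nn.Dropout(0.5)      # dropout right after the embedding layer
        self.conv = nn.Conv1d(dim, n_filters, kernel_size=3, padding=1)
        self.drop_out = nn.Dropout(0.5)      # dropout right before the last linear layer
        self.out_linear = nn.Linear(n_filters, n_classes)

    def forward(self, x):                    # x: (batch, seq_len) token ids
        e = self.drop_emb(self.emb(x))       # (batch, seq_len, dim)
        h = torch.relu(self.conv(e.transpose(1, 2)))  # (batch, n_filters, seq_len)
        h = torch.max(h, dim=2).values       # max-pool over time
        return self.out_linear(self.drop_out(h))

model = RelationCNN()
logits = model(torch.zeros(2, 7, dtype=torch.long))
```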
@JankinXu Hi. |
Hi,
Thanks for your released code. I have run it and got an accuracy of 0.779 but no F1 score (in fact, the F1 score will be about 4-5% lower than the accuracy); however, the F1 scores in the paper are almost 80-82% (except with the WordNet lexical features).
So I wonder whether there are some tricks in the paper. And have you reached the result reported in the paper?
Thanks.