# (Unofficial) Pytorch Implementation of "Adaptive Co-attention Network for Named Entity Recognition in Tweets" (AAAI 2018)
- python>=3.5
- torch==1.3.1
- torchvision==0.4.2
- pillow==7.0.0
- pytorch-crf==0.7.2
- seqeval==0.0.12
- gdown>=3.10.1
```bash
$ pip3 install -r requirements.txt
```
| | Train | Dev | Test |
| --- | --- | --- | --- |
| # of Data | 4,000 | 1,000 | 3,257 |
- The original code's pretrained word embeddings can be downloaded here.
- Since the full file takes quite a long time to download, I extracted only the word vectors (`word_vector_200d.vec`) for words that appear in the word vocab.
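Such a trimmed `.vec` file can be read like this, assuming the standard word2vec text format (one token followed by 200 floats per line, with an optional `<count> <dim>` header); the function name is mine, not the repo's:

```python
import numpy as np

def load_word_vectors(path, dim=200):
    """Load a word2vec-style text file: one token followed by `dim` floats per line."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) != dim + 1:  # skips an optional "<count> <dim>" header line
                continue
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors
```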
- Image features are extracted from the last pooling layer of `VGG16`.
- If you want to extract the features yourself, follow the steps below.
  - Clone the repo of the original code.
  - Copy `data/ner_img` from the original code to this repo.
  - Run as below. (`img_vgg_features.pt` will be saved in the `data` dir)

```bash
$ python3 save_vgg_feature.py
```
- Extracted features will be downloaded automatically when you run `main.py`.
- There are some differences between the paper and the original code, so I tried to follow the paper's equations as closely as possible.
- Build the vocab with the `train`, `dev`, and `test` datasets. (same as the original code)
  - Making the vocab only with the train dataset decreases performance a lot. (about 5%)
- Use the `Adam` optimizer instead of `RMSProp`.
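The shared-vocab idea above can be sketched like this; the function and the toy sentences are illustrative, not taken from the repo:

```python
from collections import Counter

def build_vocab(sentence_lists, min_count=1):
    """Map every word in any split to an index; 0/1 are reserved for pad/unk."""
    counter = Counter()
    for sentences in sentence_lists:      # e.g. [train, dev, test]
        for tokens in sentences:
            counter.update(tokens)
    vocab = {"<pad>": 0, "<unk>": 1}
    for word, count in counter.most_common():
        if count >= min_count:
            vocab[word] = len(vocab)
    return vocab

train = [["rt", "@user", "hello", "world"]]
dev = [["hello", "twitter"]]
test = [["named", "entity"]]
vocab = build_vocab([train, dev, test])   # covers words from all three splits
```

Building over all three splits means test-only words still get embeddings (initialized from the pretrained vectors) instead of mapping to `<unk>`.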
```bash
$ python3 main.py --do_train --do_eval
```
| | F1 (%) |
| --- | --- |
| Re-implementation | 67.10 |
| Baseline (paper) | 70.69 |