Relationship Extraction

Relationship extraction is the task of extracting semantic relationships from a text. Extracted relationships usually occur between two or more entities of a certain type (e.g. Person, Organisation, Location) and fall into a number of semantic categories (e.g. married to, employed by, lives in).

New York Times Corpus

The standard corpus for distantly supervised relationship extraction is the New York Times (NYT) corpus, introduced in Riedel et al., 2010.

The corpus contains text from the New York Times Annotated Corpus, with named entities extracted using the Stanford NER system and automatically linked to entities in the Freebase knowledge base. Pairs of named entities are labelled with relationship types by aligning them against facts in Freebase. (The process of using a separate database to provide the labels is known as 'distant supervision'.)

Example:

Elevation Partners, the $1.9 billion private equity group that was founded by Roger McNamee

(founded_by, Elevation_Partners, Roger_McNamee)
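
The alignment step behind this labelling can be illustrated with a minimal sketch; the knowledge-base dictionary and entity pairs below are placeholders, not the actual Freebase/NYT data or any particular toolkit's API:

```python
# Minimal sketch of distant-supervision labelling: a sentence mentioning both
# entities of a known knowledge-base fact is labelled with that fact's relation.
# The KB facts here are illustrative placeholders, not real Freebase data.

kb_facts = {
    ("Elevation_Partners", "Roger_McNamee"): "founded_by",
}

def label_sentence(sentence, entity_pairs):
    """Return (relation, head, tail) triples for entity pairs found in the KB."""
    labels = []
    for head, tail in entity_pairs:
        relation = kb_facts.get((head, tail))
        if relation is not None:
            labels.append((relation, head, tail))
    return labels

sentence = ("Elevation Partners, the $1.9 billion private equity group "
            "that was founded by Roger McNamee")
print(label_sentence(sentence, [("Elevation_Partners", "Roger_McNamee")]))
# [('founded_by', 'Elevation_Partners', 'Roger_McNamee')]
```

This heuristic is noisy: a sentence can mention both entities without expressing the relation, which is one reason the NYT benchmark is evaluated with ranked precision-recall metrics rather than plain accuracy.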

Different papers have reported various metrics since the release of the dataset, making it difficult to compare systems directly. The main metrics used are either precision at N results or precision-recall curves. The range of recall reported has increased over the years as systems have improved, with earlier systems having very low precision at 30% recall.

| Model | P@10% | P@30% | Paper / Source | Code |
| ----- | ----- | ----- | -------------- | ---- |
| RESIDE (Vashishth et al., 2018) | 73.6 | 59.5 | RESIDE: Improving Distantly-Supervised Neural Relation Extraction using Side Information | RESIDE |
| PCNN+ATT (Lin et al., 2016) | 69* | 51* | Neural Relation Extraction with Selective Attention over Instances | OpenNRE |
| MIML-RE (Surdeanu et al., 2012) | 61*+ | - | Multi-instance Multi-label Learning for Relation Extraction | Mimlre |
| MultiR (Hoffmann et al., 2011) | 60*+ | - | Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations | MultiR |
| (Mintz et al., 2009) | 40*+ | - | Distant supervision for relation extraction without labeled data | |

(*) Estimated from the precision-recall plots using WebPlotDigitizer. These values are reported to two significant figures because of the limited accuracy of reading values off the plots.

(+) Estimated from results in the paper "Neural Relation Extraction with Selective Attention over Instances"
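
As a point of reference for how the P@10% and P@30% columns are obtained, the sketch below shows one common way to read precision at a fixed recall level off the confidence-ranked predictions; the prediction list and counts are hypothetical:

```python
# Sketch: precision at a fixed recall level, read off the ranked predictions.
# `predictions` is a hypothetical list of (confidence, is_correct) pairs and
# `n_positive` is the number of gold relation facts in the evaluation set.

def precision_at_recall(predictions, n_positive, target_recall):
    """Precision at the first rank where recall reaches target_recall."""
    ranked = sorted(predictions, key=lambda p: p[0], reverse=True)
    correct = 0
    for rank, (_, is_correct) in enumerate(ranked, start=1):
        correct += is_correct
        if correct / n_positive >= target_recall:
            return correct / rank
    return 0.0  # the system never reaches the target recall

preds = [(0.95, True), (0.90, True), (0.85, False), (0.80, True), (0.60, False)]
print(precision_at_recall(preds, n_positive=10, target_recall=0.3))  # 0.75
```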

SemEval-2010 Task 8

SemEval-2010 introduced 'Task 8 - Multi-Way Classification of Semantic Relations Between Pairs of Nominals'. The task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation. The dataset contains nine general semantic relations together with a tenth 'OTHER' relation.

Example:

There were apples, pears and oranges in the bowl.

(content-container, pears, bowl)
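
In the released data the two nominals are marked inline with `<e1>` and `<e2>` tags, and the label encodes the direction, e.g. Content-Container(e1,e2). A minimal sketch of reading off the tagged nominals (the tag placement in this particular sentence is illustrative):

```python
import re

# Sketch: extract the two tagged nominals from a SemEval-2010 Task 8 style
# sentence. The tagging of this example sentence is illustrative.
sentence = "There were apples, <e1>pears</e1> and oranges in the <e2>bowl</e2>."
label = "Content-Container(e1,e2)"  # relation type plus direction

e1 = re.search(r"<e1>(.*?)</e1>", sentence).group(1)
e2 = re.search(r"<e2>(.*?)</e2>", sentence).group(1)
print(label, e1, e2)  # Content-Container(e1,e2) pears bowl
```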

The main evaluation metric used is macro-averaged F1, averaged across the nine proper relationships (i.e. excluding the OTHER relation), taking directionality of the relation into account.
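
Concretely, a simplified version of this scoring (counting a prediction as correct only when both the relation and its direction match, then macro-averaging F1 over the proper relation types) can be sketched as follows; the official SemEval scorer is more elaborate, and the label strings below are illustrative:

```python
from collections import Counter

# Sketch of direction-aware macro-averaged F1, excluding the OTHER class.
# This is a simplification of the official SemEval-2010 Task 8 scorer.
# gold / pred are parallel lists of directed labels, e.g. "Content-Container(e1,e2)".

def macro_f1(gold, pred, relations):
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        g_rel, p_rel = g.split("(")[0], p.split("(")[0]
        if p == g and p_rel in relations:
            tp[p_rel] += 1                      # relation and direction both match
        else:
            if p_rel in relations:
                fp[p_rel] += 1                  # spurious prediction of a proper relation
            if g_rel in relations:
                fn[g_rel] += 1                  # missed gold relation
    f1s = []
    for rel in relations:
        prec = tp[rel] / (tp[rel] + fp[rel]) if tp[rel] + fp[rel] else 0.0
        rec = tp[rel] / (tp[rel] + fn[rel]) if tp[rel] + fn[rel] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(relations)

relations = ["Content-Container", "Cause-Effect"]  # the real task has nine such types
gold = ["Content-Container(e1,e2)", "Other"]
pred = ["Content-Container(e1,e2)", "Cause-Effect(e2,e1)"]
print(macro_f1(gold, pred, relations))  # 0.5
```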

Several papers have used additional data (e.g. pre-trained word embeddings, WordNet) to improve performance. The figures reported here are the highest achieved by the model using any external resources.

End-to-End Models

| Model | F1 | Paper / Source | Code |
| ----- | -- | -------------- | ---- |
| CNN-based Models | | | |
| Multi-Attention CNN (Wang et al., 2016) | 88.0 | Relation Classification via Multi-Level Attention CNNs | lawlietAi's Reimplementation |
| Attention CNN (Huang and Shen, 2016) | 84.3 / 85.9* | Attention-Based Convolutional Neural Network for Semantic Relation Extraction | |
| CR-CNN (dos Santos et al., 2015) | 84.1 | Classifying Relations by Ranking with Convolutional Neural Network | pratapbhanu's Reimplementation |
| CNN (Zeng et al., 2014) | 82.7 | Relation Classification via Convolutional Deep Neural Network | roomylee's Reimplementation |
| RNN-based Models | | | |
| Entity Attention Bi-LSTM (Lee and Seo, 2018) | 85.2 | Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing | |
| Hierarchical Attention Bi-LSTM (Xiao and Liu, 2016) | 84.3 | Semantic Relation Classification via Hierarchical Recurrent Neural Network with Attention | |
| Attention Bi-LSTM (Zhou et al., 2016) | 84.0 | Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification | SeoSangwoo's Reimplementation |
| Bi-LSTM (Zhang et al., 2015) | 82.7 / 84.3* | Bidirectional long short-term memory networks for relation classification | |

*: Uses external lexical resources such as WordNet, part-of-speech tags, dependency tags, and named entity tags.

Dependency Models

| Model | F1 | Paper / Source | Code |
| ----- | -- | -------------- | ---- |
| BRCNN (Cai et al., 2016) | 86.3 | Bidirectional Recurrent Convolutional Neural Network for Relation Classification | |
| DRNNs (Xu et al., 2016) | 86.1 | Improved Relation Classification by Deep Recurrent Neural Networks with Data Augmentation | |
| depLCNN + NS (Xu et al., 2015a) | 85.6 | Semantic Relation Classification via Convolutional Neural Networks with Simple Negative Sampling | |
| SDP-LSTM (Xu et al., 2015b) | 83.7 | Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Path | Sshanu's Reimplementation |
| DepNN (Liu et al., 2015) | 83.6 | A Dependency-Based Neural Network for Relation Classification | |
| FCN (Yu et al., 2014) | 83.0 | Factor-based compositional embedding models | |
| MVRNN (Socher et al., 2012) | 82.4 | Semantic Compositionality through Recursive Matrix-Vector Spaces | pratapbhanu's Reimplementation |

Go back to the README