---
title: SemEval-2021 Awards
---

# SemEval-2021: Best Task, Best Paper!

SemEval-2021 features two overall awards: one for the organizers of a shared task and one for a team participating in a task.

We are very pleased to announce the winners of these awards for SemEval-2021!

## Best Task Paper Award

The Best Task Paper award, for organizers of an individual shared task, recognizes a task that stands out for making an important intellectual contribution to empirical computational semantics, as demonstrated by a creative, interesting, and scientifically rigorous dataset and evaluation design, and a well-written task overview paper.

**Winner:** Corey Harper, Jessica Cox, Curt Kohler, Antony Scerri, Ron Daniel Jr., and Paul Groth (Task 8: MeasEval)

MeasEval is an original information extraction task focused on quantitative measurements in scientific text, with spans for the quantity, unit, measured item, and other mentioned attributes, as well as relations between them. The task setup featured a carefully developed annotation schema, guidelines, and dataset, and an evaluation metric with score components for the various kinds of spans and relations. Nineteen teams participated, and baseline systems developed by the organizers were evaluated as well. The task paper surveys the system papers and includes a strong analysis of results, with breakdowns by span/relation type and genre, a thoughtful investigation of possible artifacts of the evaluation metric, main conclusions, and ideas for future work.
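To make the span-and-relation setup concrete, here is a minimal sketch of how such annotations and a partial-credit span score might be represented. The dataclasses, field names, and the character-overlap F1 below are illustrative assumptions for exposition only; they are not MeasEval's actual data format or its official scorer.

```python
# Purely illustrative sketch: simplified stand-ins for the structures
# described above, NOT the official MeasEval format or evaluation code.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Span:
    start: int   # character offset where the span begins (inclusive)
    end: int     # character offset where the span ends (exclusive)
    text: str    # surface text covered by the span

@dataclass
class Measurement:
    quantity: Span                          # the numeric value, e.g. "300"
    unit: Optional[Span] = None             # e.g. "K"
    measured_entity: Optional[Span] = None  # the item being measured
    qualifiers: List[Span] = field(default_factory=list)  # other attributes

def span_overlap_f1(gold: Span, pred: Span) -> float:
    """Character-overlap F1 between a gold and a predicted span,
    one simple way to give partial credit for inexact boundaries."""
    overlap = max(0, min(gold.end, pred.end) - max(gold.start, pred.start))
    if overlap == 0:
        return 0.0
    precision = overlap / (pred.end - pred.start)
    recall = overlap / (gold.end - gold.start)
    return 2 * precision * recall / (precision + recall)

# Example: annotating "The sample was heated to 300 K."
text = "The sample was heated to 300 K."
gold = Measurement(
    quantity=Span(25, 28, "300"),
    unit=Span(29, 30, "K"),
    measured_entity=Span(4, 10, "sample"),
)
pred_quantity = Span(25, 30, "300 K")  # system predicted too wide a span
print(span_overlap_f1(gold.quantity, pred_quantity))  # -> 0.75
```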

**Honorable mentions:**

John Pavlopoulos, Jeffrey Sorensen, Léo Laugier, and Ion Androutsopoulos

Egoitz Laparra, Xin Su, Yiyun Zhao, Özlem Uzuner, Timothy Miller, and Steven Bethard

## Best System Paper Award

The Best System Paper award, for task participants, recognizes a system description paper that advances our understanding of a problem and available solutions with respect to a task. It need not be the highest-scoring system in the task, but it must have a strong analysis component in the evaluation, as well as a clear and reproducible description of the problem, algorithms, and methodology.

**Winner:** Haoyang Liu, M. Janina Sarol, and Halil Kilicoglu (Task 11: NLPContributionGraph)

This paper tackles the task of extracting structured information from scholarly articles, developing a sophisticated system that blends established ideas from information extraction with more recent neural approaches. Each engineering decision is clearly motivated and discussed, with additional evaluations of the individual components. Limitations of both the dataset and the model are examined, yielding ideas for future work.

**Honorable mentions:**

Yuki Taya, Lis Kanashiro Pereira, Fei Cheng, and Ichiro Kobayashi

Jing Zhang, Yimeng Zhuang, and Yinpei Su