Overview of CheckThat! 2020 English: Automatic Identification and Verification of Claims in Social Media
Date
2020
Author
Shaar, Shaden
Nikolov, Alex
Babulkov, Nikolay
Alam, Firoj
Barrón-Cedeño, Alberto
Elsayed, Tamer
Hasanain, Maram
Suwaileh, Reem
Haouari, Fatima
da San Martino, Giovanni
Nakov, Preslav
Abstract
We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in Arabic and English, and here we focus on the three English tasks. Task 1 challenged participants to predict which tweets from a stream of tweets about COVID-19 are worth fact-checking. Task 2 asked participants to retrieve, from a set of previously fact-checked claims, those verified claims that could help fact-check the claims made in an input tweet. Task 5 asked participants to propose which claims in a political debate or speech should be prioritized for fact-checking. A total of 18 teams participated in the English tasks, and most submissions achieved sizable improvements over the baselines using models based on BERT, LSTMs, and CNNs. In this paper, we describe the data collection process and the task setup, including the evaluation measures used, and we give a brief overview of the participating systems. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and detecting previously fact-checked claims.
DOI/handle
http://hdl.handle.net/10576/52851
Collections
- Computer Science & Engineering [2402 items]