bigIR at CheckThat! 2020: Multilingual BERT for Ranking Arabic Tweets by Check-worthiness
Author | Hasanain, Maram |
Author | Elsayed, Tamer |
Available date | 2024-11-05T06:05:20Z |
Publication Date | 2020 |
Publication Name | CEUR Workshop Proceedings |
Resource | Scopus |
ISSN | 1613-0073 |
Abstract | This paper describes the third-year participation of our bigIR group at Qatar University in the CheckThat! lab at CLEF. This year we participated only in Arabic Task 1, which focuses on detecting check-worthy tweets on a given topic. We submitted four runs using both traditional classification models and a pre-trained language model: multilingual BERT (mBERT). Official results showed that our run using mBERT was the best among all our submitted runs. Furthermore, the bigIR team was ranked third among the eight teams that participated in the lab, with our best run ranked 6th among 28 runs. |
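A minimal sketch of the mBERT-based approach described in the abstract, assuming a standard Hugging Face fine-tuning setup for binary check-worthiness classification followed by ranking on the positive-class score. The model name, hyperparameters, and helper function are illustrative assumptions, not the authors' exact pipeline.

# Sketch: score tweets with an mBERT classifier and rank by check-worthiness.
# Assumptions: "bert-base-multilingual-cased" as the mBERT checkpoint, a
# fine-tuned binary head, and the rank_tweets helper; all are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def rank_tweets(tweets):
    """Score each tweet and sort by the probability of being check-worthy."""
    enc = tokenizer(tweets, padding=True, truncation=True, max_length=128,
                    return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    scores = torch.softmax(logits, dim=-1)[:, 1]  # positive-class probability
    order = torch.argsort(scores, descending=True)
    return [(tweets[i], scores[i].item()) for i in order]

# Usage with hypothetical tweets (Arabic text in the actual task):
print(rank_tweets(["Tweet citing a claimed statistic", "Tweet with no claim"]))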
Sponsor | This work was made possible by NPRP grant# NPRP11S-1204-170060 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors. |
Language | en |
Publisher | CEUR-WS |
Subject | Classification models; Language model; Qatar University |
Type | Conference Paper |
Volume Number | 2696 |
This item appears in the following Collection(s)
- Computer Science & Engineering [2402 items]