Simple item record

Author: Kutlu, Mucahid
Author: McDonnell, Tyler
Author: Elsayed, Tamer
Author: Lease, Matthew
Date available: 2024-02-21T07:47:06Z
Publication date: 2020-09-23
Publication name: Journal of Artificial Intelligence Research
Identifier: http://dx.doi.org/10.1613/jair.1.12012
Citation: Kutlu, M., McDonnell, T., Elsayed, T., & Lease, M. (2020). Annotator rationales for labeling tasks in crowdsourcing. Journal of Artificial Intelligence Research, 69, 143-189.
ISSN: 1076-9757
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85091935817&origin=inward
URI: http://hdl.handle.net/10576/52015
Abstract: When collecting item ratings from human judges, it can be difficult to measure and enforce data quality due to task subjectivity and lack of transparency into how judges make each rating decision. To address this, we investigate asking judges to provide a specific form of rationale supporting each rating decision. We evaluate this approach on an information retrieval task in which human judges rate the relevance of Web pages for different search topics. Cost-benefit analysis over 10,000 judgments collected on Amazon's Mechanical Turk suggests a win-win. First, rationales yield a multitude of benefits: more reliable judgments, greater transparency for evaluating both human raters and their judgments, reduced need for expert gold, the opportunity for dual supervision from ratings and rationales, and added value from the rationales themselves. Second, once experienced in the task, crowd workers provide rationales with almost no increase in task completion time. Consequently, we can realize the above benefits with minimal additional cost.
Sponsor: This work was made possible by generous support from NPRP grant # NPRP 7-1313-1-245 from the Qatar National Research Fund (a member of Qatar Foundation), the National Science Foundation (grant No. 1253413), the Micron Foundation, and UT Austin's Good Systems Grand Challenge Initiative to design a future of responsible AI.
Language: en
Publisher: Elsevier
Subject: Cost benefit analysis
Subject: Data quality
Title: Annotator rationales for labeling tasks in crowdsourcing
Type: Article
Pages: 143-189
Volume: 69
Access type: Open Access

