Browse by publisher
Showing records 1–2 of 2
Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments
(AAAI Press, 2016, Conference) When collecting subjective human ratings of items, it can be difficult to measure and enforce data quality due to task subjectivity and lack of insight into how judges arrive at each rating decision. To address this, we ...
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to Ensure Quality Relevance Annotations
(AAAI Press, 2018, Conference) While peer-agreement and gold checks are well-established methods for ensuring quality in crowdsourced data collection, we explore a relatively new direction for quality control: estimating work quality directly from ...