
AAMAS 2018

CrowdEval: A Cost-Efficient Strategy to Evaluate Crowdsourced Worker's Reliability

Conference Paper · Session 41: Trust and Reputation · Autonomous Agents and Multiagent Systems

Abstract

Crowdsourcing platforms depend on the quality of work provided by a distributed workforce. Yet it is challenging to dependably measure the reliability of these workers, particularly in the face of strategic or malicious behavior. In this paper, we present a dynamic and efficient solution for continually tracking workers' reliability. In particular, we use both gold standard evaluation and peer consistency evaluation to measure each worker's performance, and we adjust the proportion of the two evaluation types according to the estimated distribution of worker behavior (e.g., reliable or malicious). Through experiments on real Amazon Mechanical Turk traces, we find that our approach yields significant gains in both accuracy and cost compared to state-of-the-art algorithms.
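The abstract describes an adaptive policy: spend more of the evaluation budget on (expensive) gold standard questions when the crowd looks adversarial, and fall back to (cheap) peer consistency checks when most workers appear reliable. The paper does not spell out its estimator or update rules here, so the following Python sketch is illustrative only; every name in it (CrowdEvalSketch, the gold-ratio clamp, the moving-average reliability update, the 0.4 malicious threshold) is an assumption, not the authors' actual method.

    import random
    from collections import defaultdict

    class CrowdEvalSketch:
        """Sketch of a mixed gold/peer evaluation strategy (hypothetical)."""

        def __init__(self, min_gold_ratio=0.1, max_gold_ratio=0.9):
            self.min_gold_ratio = min_gold_ratio
            self.max_gold_ratio = max_gold_ratio
            self.est_malicious_fraction = 0.5            # prior: population unknown
            self.reliability = defaultdict(lambda: 0.5)  # per-worker score in [0, 1]

        def gold_ratio(self):
            # Use more gold questions when the crowd looks more adversarial;
            # rely on cheap peer consistency otherwise (assumed clamped rule).
            r = self.est_malicious_fraction
            return min(self.max_gold_ratio, max(self.min_gold_ratio, r))

        def evaluate(self, worker, answer, gold_answer=None, peer_answers=()):
            """Score one answer, choosing the evaluation type stochastically."""
            use_gold = gold_answer is not None and random.random() < self.gold_ratio()
            if use_gold:
                correct = (answer == gold_answer)
            else:
                # Peer consistency: count agreement with the majority answer.
                votes = list(peer_answers) + [answer]
                majority = max(set(votes), key=votes.count)
                correct = (answer == majority)
            # Exponential moving average of per-worker reliability (assumed rule).
            alpha = 0.2
            self.reliability[worker] = (1 - alpha) * self.reliability[worker] + alpha * correct
            # Re-estimate the malicious fraction from low-reliability workers.
            scores = self.reliability.values()
            self.est_malicious_fraction = sum(s < 0.4 for s in scores) / len(scores)
            return correct

    # Example usage (hypothetical workers and tasks):
    ce = CrowdEvalSketch()
    ce.evaluate("w1", answer="cat", gold_answer="cat")             # gold check
    ce.evaluate("w2", answer="dog", peer_answers=["cat", "cat"])   # peer check

The design intent captured here is the cost trade-off: gold questions give trustworthy signal but consume paid labels, so their share of the budget is tied to how malicious the worker population currently appears to be.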

Authors

Keywords

  • Crowdsourcing
  • gold standard evaluation
  • peer consistency evaluation
  • Amazon Mechanical Turk

Context

Venue: International Conference on Autonomous Agents and Multiagent Systems
Archive span: 2002–2025
Indexed papers: 7403
Paper id: 855476482208002571