Introducing the Cognitive-Biases-in-Crowdsourcing Checklist
Tim Draws1, Alisa Rieger1, Oana Inel1, Ujwal Gadiraju1, and Nava Tintarev2
1Delft University of Technology, 2Maastricht University
Cognitive Biases in Crowdsourcing
• Cognitive biases of crowd workers can negatively
affect annotation quality
– Anchoring effect
– Confirmation bias
• Combating cognitive biases is tricky
– Many cognitive biases exist
– Unclear which bias may apply where
References: Eickhoff (2018); Hube, Fetahu, & Gadiraju (2019); Tversky & Kahneman (1974)
Introducing a Checklist
• Starting point: bias checklist for business decisions
• Adaptation to fit the crowdsourcing context
• Result: 12-item checklist to combat cognitive biases
in crowdsourcing (incl. running example)
References: Kahneman, Lovallo, & Sibony (2011)
(3) Groupthink or Bandwagon Effect. Does my task design give crowd
workers some notion of other people’s evaluation of the items they annotate?
For example, crowd workers may judge products as more likely to be relevant to
“paella pan” when they see that a majority of other crowd workers have judged
this product as being relevant or if it has received high ratings from consumers.
Using the Checklist
1. Measure / assess cognitive biases
2. Mitigate cognitive biases
3. Document cognitive biases
Discussion & Conclusion
• Checklist cannot cover all different types of biases in crowdsourcing
• Updated version of checklist available on repository (link below)
• HCOMP paper: case study + retrospective analysis
Preregistration and supplementary material: https://osf.io/rbucj/
Carsten Eickhoff. 2018. Cognitive Biases in Crowdsourcing. In Proceedings of the 11th ACM International Conference on Web Search
and Data Mining (WSDM 2018), 162–170. https://doi.org/10.1145/3159652.3159654
Tim Draws, Alisa Rieger, Oana Inel, Ujwal Gadiraju, and Nava Tintarev. 2021. A Checklist to Combat Cognitive Biases in
Crowdsourcing. In Proceedings of the Ninth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2021).
Christoph Hube, Besnik Fetahu, and Ujwal Gadiraju. 2019. Understanding and Mitigating Worker Biases in the Crowdsourced Collection
of Subjective Judgments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI 2019).
Daniel Kahneman, Dan Lovallo, and Olivier Sibony. 2011. Before you make that big decision... Harvard business review 89, 6 (2011).
Amos Tversky and Daniel Kahneman. 1974. Judgment under Uncertainty: Heuristics and Biases. Science 185 (Sept. 1974), 1124–1131.
1. Cognitive biases of crowd workers are an impactful but often neglected type of systemic bias that can reduce the quality of crowdsourced data labels. 2. These cognitive biases are general human tendencies towards irrational decision-making that often occur subconsciously. 3. For example, crowd workers may fall prey to the *anchoring effect* when they are overly influenced by information they encounter first, or to *confirmation bias* when they judge in line with (possibly false) preexisting beliefs. 4. Previous work has shown that a variety of cognitive biases can occur in different types of crowdsourcing tasks. 5. So why do requesters rarely consider the influence of cognitive biases in the tasks they design? 6. One important reason may be that combating cognitive biases is simply tricky. 7. A large number of different cognitive biases have been identified in the psychological literature, and it may often be unclear which specific cognitive bias may occur in a given crowdsourcing task. To efficiently document, assess, and mitigate cognitive biases in crowdsourcing, requesters need a practical tool to help them navigate this space.
1. Proposing such a practical tool is what we aimed to do in this research. 2. As a starting point, we used a checklist proposed by Kahneman, Lovallo, and Sibony (2011) for the context of business psychology. 3. The big advantage of using a checklist is that it can reduce a complex space (e.g., cognitive biases) to a considerable degree while retaining useful information. 4. Specifically, the checklist proposed by Kahneman, Lovallo, and Sibony aims to help business decision-makers avoid falling prey to cognitive biases using 12 items. 5. These 12 items cover the vast majority of potential reasoning errors by focusing on the most commonly occurring ones and grouping related biases together. 6. To develop a similar tool for our context, we adapted the checklist proposed by Kahneman, Lovallo, and Sibony to cognitive biases in crowdsourcing, which we present in an upcoming paper at HCOMP this year. 7. Our proposed checklist similarly contains 12 items that requesters can consider to identify potential cognitive biases elicited by their crowdsourcing task.
8. Each of the 12 items covers a specific cognitive bias or family of biases that may occur in the crowdsourcing context. 9. For example, the third item in our checklist concerns *groupthink* or the *bandwagon effect*. (read out loud) 10. Going through each of the 12 items in this format should help requesters efficiently combat cognitive biases in crowdsourcing.
What can requesters do with the information they get from the checklist?
1. Having identified one or more cognitive biases that may affect crowd workers in the task at hand (ideally before collecting the data), the requester may use this information for three different purposes. 2. First, they may wish to *measure* the cognitive biases in question to assess whether crowd workers are actually affected by them. This may require adding extra items or metrics to the crowdsourcing task. 3. Second, requesters could *mitigate* the cognitive biases; earlier work has already proposed several solutions for this. 4. Third, requesters may *document* the cognitive biases so that the collected data is put in the right perspective. Pointing out such potential limitations can help others when interpreting results or re-using the data.
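To illustrate the *measure* step, here is a minimal sketch of how a requester might test for a groupthink/bandwagon effect by running the same task in two conditions (with vs. without showing other workers' majority votes) and comparing the share of "relevant" labels with a two-proportion z-test. The function and all counts below are hypothetical and for illustration only, not part of the proposed checklist.

```python
from math import sqrt, erf

def two_proportion_z(pos_a, n_a, pos_b, n_b):
    """Two-sided two-proportion z-test (normal approximation).

    pos_a/n_a: "relevant" labels and total labels in condition A
    (social information shown); pos_b/n_b: same for condition B.
    """
    p_a, p_b = pos_a / n_a, pos_b / n_b
    pooled = (pos_a + pos_b) / (n_a + n_b)          # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical counts: workers who saw the majority vote labeled
# "relevant" more often than workers who did not (made-up numbers).
z, p = two_proportion_z(pos_a=78, n_a=100, pos_b=55, n_b=100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A significant difference between the two conditions would suggest that exposing workers to others' judgments shifts their labels, i.e., evidence of a bandwagon effect worth mitigating or documenting.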
Of course, our checklist has limitations; that is why we also maintain a live version of it. In our upcoming HCOMP paper, we illustrate the use of our checklist with a case study. There, we also present a retrospective analysis of past HCOMP papers, showing that cognitive biases may affect crowd workers in a majority of crowdsourcing tasks but are rarely dealt with. We hope that our proposed checklist can meaningfully contribute to general efforts towards more reliable human-labeled data.