Exploiting disagreement through open ended tasks for capturing interpretation spaces

ESWC PhD Symposium


  1. Exploiting disagreement through open-ended tasks for capturing interpretation spaces. Doctoral Consortium, by Benjamin Timmermans (@8w)
  2. Outline: Introduction, State of the Art, Problem Statement, Methodology, Preliminary Results, Conclusions
  3. Introduction
  4. How many dogs were in the picture?
  5. There is no universal "truth"
  6. For the training, testing and evaluation of machines we rely on a... ground "truth"
  7. State of the Art
  8. Crowdsourcing Approach: 1-3 annotators, evaluate workers, inter-annotator agreement, use test questions, predefined answer choices
  9. The CrowdTruth Approach: 10-15 annotators, evaluate the input, annotations and workers, disagreement-based analytics (sketched in code after the slides)
  10. Problem Statement
  11. Problems with multimedia annotations: they are sparse, homogeneous, and do not represent everything that can be heard or seen
  12. Problems with crowdsourcing tasks: they are designed to stimulate agreement and assume answers are either right or wrong
  13. Closed task: How many beams do you see? (predefined answer choices 1-5)
  14. Open-ended task: How many beams do you see? (free answer; a sketch of aggregating such answers follows the slides)
  15. Goal: gathering the interpretation space of multimedia through open-ended crowdsourcing tasks, for more efficient crowdsourcing, higher-quality ground truth data, and improved search and discovery of multimedia
  16. Research Question: Are open-ended crowdsourcing tasks a feasible method for capturing the interpretation space of multimedia?
  17. Methodology
  18. 1. Improving quality evaluation: comparing closed and open-ended tasks, measuring worker confidence
  19. 2. Improving open-ended task design: combining constraints with open-ended designs, showing known annotations, detecting the distribution of answers (a possible stopping rule is sketched after the slides)
  20. 3. Applying the ground "truth": comparing different contexts, improving the indexing of multimedia
  21. Preliminary Results
  22. Gathering training data for IBM Watson
  23. Range of tasks: passage justification, passage alignment, distributional disambiguation
  24. Sound Interpretations: 2,133 short sounds; the top 5,000 search terms account for 11 million searches
  25. Sound tag overlap (a simple overlap measure is sketched after the slides)
  26. Conclusions: there is no ultimate "truth"; do not stimulate agreement; capture the interpretation space; use open-ended crowdsourcing tasks; evaluation is more difficult
  27. Who we are: Lora Aroyo, Robert-Jan Sips, Chris Welty, Oana Inel, Anca Dumitrache, Benjamin Timmermans
  28. Acknowledgements: supervisor Dr. Lora Aroyo, mentor Dr. Matteo Palmonari
  29. CrowdTruth.org, Benjamin Timmermans, btimmermans.com, b.timmermans@vu.nl, @8w
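The disagreement-based analytics on slide 9 can be illustrated with a minimal sketch in the CrowdTruth spirit: represent each worker's judgments on a media unit as a vector over the observed annotations, and score each worker against the aggregate of the other workers with cosine similarity. The function names, the fixed vocabulary, and the example data below are illustrative assumptions, not the project's actual metrics API.

```python
# Minimal sketch of disagreement-based worker/unit scoring (illustrative assumptions).
from collections import Counter
from math import sqrt

def annotation_vector(answers, vocabulary):
    # Count how often each candidate annotation was given for one media unit.
    counts = Counter(answers)
    return [counts.get(term, 0) for term in vocabulary]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def worker_unit_agreement(worker_answers, other_workers_answers, vocabulary):
    # Cosine between one worker's annotations and the aggregate of all other workers.
    # A low score can indicate a low-quality worker or a genuinely ambiguous unit.
    w = annotation_vector(worker_answers, vocabulary)
    rest = annotation_vector([a for ans in other_workers_answers for a in ans], vocabulary)
    return cosine(w, rest)

# Example: four workers tag one sound fragment.
vocab = ["dog", "bark", "wolf", "wind"]
workers = [["dog", "bark"], ["dog"], ["wolf"], ["dog", "bark"]]
for i, answers in enumerate(workers):
    others = workers[:i] + workers[i + 1:]
    print(f"worker {i}: {worker_unit_agreement(answers, others, vocab):.2f}")
```

With 10-15 annotators per unit, low agreement spread across many workers suggests an ambiguous unit rather than bad workers, which is one reason the approach evaluates the input and the annotations alongside the workers.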
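For the open-ended task on slide 14 and the goal on slide 15, the idea is to keep the full distribution of answers, the interpretation space, instead of a single correct label. A minimal sketch, assuming only light string normalization (the helper name and the normalization rules are mine, not from the slides):

```python
# Minimal sketch: turn free-text answers into a normalized answer distribution.
from collections import Counter

def interpretation_space(free_text_answers):
    # Light normalization; real free-text answers need far more careful cleaning.
    normalized = [a.strip().lower() for a in free_text_answers if a.strip()]
    counts = Counter(normalized)
    total = sum(counts.values())
    return {answer: count / total for answer, count in counts.most_common()}

# "How many beams do you see?" without predefined choices.
answers = ["3", "three", "3", "4", "3 beams", "4"]
print(interpretation_space(answers))
# "3" and "4" each get about 0.33, "three" and "3 beams" about 0.17:
# the spread of interpretations is kept, but evaluation gets harder, as the conclusions note.
```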
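Slide 19 mentions detecting the distribution of answers as part of open-ended task design. One way to use that is a stopping rule that halts collection for a unit once new judgments no longer shift the distribution; the window size, threshold, and total-variation measure below are assumptions made only for illustration.

```python
# Minimal sketch of a stopping rule for open-ended answer collection (assumed parameters).
from collections import Counter

def distribution(answers):
    counts = Counter(answers)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def has_stabilized(answers, window=5, epsilon=0.05):
    # Stop when the last `window` answers barely change the answer distribution.
    if len(answers) <= window:
        return False
    before = distribution(answers[:-window])
    after = distribution(answers)
    keys = set(before) | set(after)
    # Total variation distance between the two distributions.
    shift = sum(abs(after.get(k, 0.0) - before.get(k, 0.0)) for k in keys) / 2
    return shift < epsilon

stream = ["dog", "dog", "bark", "dog", "wolf", "dog", "dog", "bark", "dog", "dog"]
print(has_stabilized(stream))  # False here: the last five answers still shift the distribution
```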
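The sound tag overlap on slide 25 can be approximated with a simple set-overlap measure between workers' free-form tags. Jaccard similarity over lowercased tag sets is used here as an illustrative choice; it is not necessarily the measure used in the experiment.

```python
# Minimal sketch: Jaccard overlap between two workers' tag sets (illustrative measure).
def jaccard(tags_a, tags_b):
    a = {t.strip().lower() for t in tags_a}
    b = {t.strip().lower() for t in tags_b}
    return len(a & b) / len(a | b) if (a | b) else 0.0

worker_1 = ["dog", "barking", "outdoors"]
worker_2 = ["dog", "bark", "garden"]
print(jaccard(worker_1, worker_2))  # 0.2: only "dog" is shared among 5 distinct tags
```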
