Odd Leaf Out (IEEE Social Computing 2011)

Describes a paper presented at the IEEE Social Computing 2011 conference on Odd Leaf Out, a novel “serious game” for identifying errors in classified image sets. See http://www.cs.umd.edu/localphp/hcil/tech-reports-search.php?number=2011-17 for the paper.

  • As data is increasingly collected and tagged via citizen science and other distributed means, it is important to validate the initial classifications in those datasets. Odd Leaf Out is designed to do that, and it also works when the initial classifications are made by automated methods. Once validated, the information can be used to improve tools such as LeafSnap and to contribute to information resources such as the Encyclopedia of Life.
  • Two main challenges of GWAPs: (1) generating useful data (“gaming” the system may elicit poor data) and (2) making them enjoyable to play. Note: existing games have primarily focused on labeling data that is hard to label automatically. In contrast, we are interested in catching errors in already-labeled (in our case, classified) datasets.
  • 5 leaves are pulled from the same classification (i.e., leaf species), along with one “odd leaf” from a different classification. The player must choose the leaf that is different. Each player gets 3 “lives” (i.e., misses), and additional points are awarded for multiple correct guesses. Try it out at http://biotrackers.net/odd_leaf_out/game.php (a minimal sketch of the round logic appears after these notes).
  • Here the player incorrectly chose the bottom-right leaf, so feedback is provided that it was wrong (outlined in red), and the correct “odd leaf” is shown (outlined in green).
  • We have developed several game variations that allow people to skip or contest rounds they believe are incorrect. In this paper we focus on two versions: the regular version and the Skip version.
  • We varied the difficulty so that players of the initial games could answer correctly about 75% of the time.
  • Experts were identified via a registration question that asked how many leaf species they could identify “in the wild”.
  • Two-odd-leaf error sets are generated much more commonly, particularly because of our technique of building each set around the leaf that is most dissimilar from the seed leaf but still in the same species.
  • We can “catch” half of the errors in the top 1% of images identified via the game data and the method described earlier (see the ranking sketch after these notes). The “mean species distance” baseline is the best we can do by sorting images with an automated method and no human/game input; “random” is based on a random sorting of images. Note: there are 600 total leaves under consideration (5 leaves per set × 120 sets), so 4 of the top 6 leaves identified via the game data are erroneous; likewise, 5 of the top 30 leaves are erroneous.
  • Experts do a bit better at playing the game, but not much. The wrong answers experts give are no better or worse at identifying dataset errors than those of novices.
  • One limitation of Odd Leaf Out is that around 10 people need to play each round to generate enough data to be useful.
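
Below is a minimal sketch of the round mechanics described in the notes above: 5 same-species leaves plus one “odd leaf”, 3 lives per player, points for correct guesses, and logging of wrong answers for later analysis. It is an illustration only, not the deployed PHP game; the leaf ids, point value, and log format are hypothetical.

    import random

    LIVES_PER_GAME = 3        # each player gets 3 misses
    POINTS_PER_CORRECT = 10   # hypothetical value; the slides do not give exact scoring

    def make_round(same_species_leaves, odd_leaf):
        """Build one round: 5 leaves from one species plus 1 odd leaf, shuffled for display."""
        leaves = same_species_leaves + [odd_leaf]
        random.shuffle(leaves)
        return leaves

    def score_guess(displayed_leaves, odd_leaf, guess_index, state):
        """Check the player's click, update lives and score, and log the answer."""
        clicked = displayed_leaves[guess_index]
        correct = (clicked == odd_leaf)
        if correct:
            state["score"] += POINTS_PER_CORRECT
        else:
            state["lives"] -= 1   # a miss costs one of the 3 lives
        # Wrong answers are the interesting signal: they are logged and later
        # aggregated to find images that may be misclassified.
        return correct, {"clicked": clicked, "odd_leaf": odd_leaf, "correct": correct}

    # Example: one simulated guess in one round (hypothetical leaf ids)
    state = {"lives": LIVES_PER_GAME, "score": 0}
    displayed = make_round(["acer_01", "acer_02", "acer_03", "acer_04", "acer_05"], "quercus_17")
    score_guess(displayed, "quercus_17", 2, state)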
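
The ranking step referred to in the notes can be sketched in a few lines: count how often each image is wrongly clicked as the “odd leaf”, sort, and inspect the top 1%. This is a schematic reading of the general idea, not the exact statistic from the paper, and it reuses the hypothetical log format from the sketch above. The automated baselines mentioned above (“mean species distance” and random ordering) would replace the wrong-click counts with a distance score or a shuffle.

    from collections import Counter

    def rank_suspect_images(answer_logs, total_images, top_fraction=0.01):
        """Rank images by how often players wrongly clicked them as the odd leaf.

        answer_logs:  dicts like {"clicked": leaf_id, "correct": bool}, one per guess
        total_images: number of leaf images considered (600 in the evaluation)
        Returns the most-suspicious images, i.e. candidates worth re-checking by experts.
        """
        wrong_clicks = Counter(log["clicked"] for log in answer_logs if not log["correct"])
        n_top = max(1, int(total_images * top_fraction))   # top 1% of 600 images = 6
        return [leaf for leaf, _count in wrong_clicks.most_common(n_top)]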

    1. Odd Leaf Out: Improving visual recognition with games
       IEEE Social Computing, Oct 10, 2011
    2. BioTracker Team
       L to R: Arijit Biswas, Jennifer Preece, Cynthia Parr, Dana Rotman, Erin Stewart, Darcy Lewis.
       Front row: David Jacobs, Derek Hansen, Jen Hammond, Anne Bowser.
       Missing: Eric Stevens.
    3. BioTracker’s Research Questions
       How can a socially intelligent system be used to direct human effort and expertise to the most valuable collection and classification tasks?
       What are the most effective strategies for motivating enthusiasts and experts to voluntarily contribute and collaborate?
    4. BioTracker’s Research Questions (same content as slide 3)
    5. Goal: Identify Errors in Image Classification Datasets
       Diagram labels: Citizen science data collection; Scientifically validated information; Augmented plant identification
    6. Games with a Purpose
    7. Odd Leaf Out Game
    8. Odd Leaf Out Game
    9. Key Game Characteristics
       Single player
         + No problems with collusion strategies
         + No need for 2 players at a time
         - Lack of excitement of live interaction
       Learns from the player’s wrong answers
         + “Gaming the system” is harder
         - Player frustration when they are actually right
    10. Game Variations
    11. Constructing Leaf Sets
        Goals: generate useful data; right level of difficulty
        Process (a sketch of this selection process follows the transcript):
          Calculate the distance between each pair of leaves using features identified via curvature-based histograms
          Select an initial leaf at random
          Select 4 others from the same species, including the most dissimilar one
          Select the “odd leaf” from another species with varying levels of distance from the initial leaf
    12. Evaluation
        Seed dataset of 120 image sets with 12 errors
        Difficult errors created by comparing the erroneous leaf to the “mean species distance” of other leaves in the same species (a sketch of this quantity follows the transcript)
        Recruited two groups to play online: family, friends, colleagues, students, and alumni; and experienced botanists, plant scientists, and ecologists
        Players randomly assigned to the regular or Skip version
        After the first game, players rated difficulty and gave suggestions for game improvement
    13. Identifying Errors
        Two Odd Leaf Error Sets (8): find the most incorrectly selected images
        No Odd Leaf Error Sets (4): find odd leaves that were in “hard” sets
    14. Results
    15. Results
    16. Results
    17. Results
        Errors were detected just as well from novices as from experts
        Skipped rounds don’t necessarily include errors
    18. Results
    19. Results
    20. Discussion
        Images other than leaves
        Test other variations of the game
        Education as a motivator
        Other demographic groups? (children)
    21. Questions and Discussion
        Derek L. Hansen
        dlhansen@byu.edu
        www.biotrackers.net
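
The set-construction process on slide 11 can be sketched as follows. It is a schematic reading of the steps listed on the slide, assuming a precomputed symmetric pairwise distance lookup built from the curvature-based histogram features and at least six example leaves per species; the feature extraction itself is not shown, and all names are illustrative.

    import random

    def build_leaf_set(distance, leaves_by_species, odd_distance_rank=0):
        """Construct one Odd Leaf Out round following the steps on slide 11.

        distance:          dict mapping (leaf_a, leaf_b) -> dissimilarity, precomputed
                           from curvature-based histogram features (symmetric keys)
        leaves_by_species: dict mapping species name -> list of leaf ids
        odd_distance_rank: how far the odd leaf is from the seed leaf
                           (0 = closest other-species leaf = hardest round)
        """
        # 1. Select an initial (seed) leaf at random.
        species = random.choice(list(leaves_by_species))
        seed = random.choice(leaves_by_species[species])

        # 2. Select 4 others from the same species, including the most dissimilar one.
        same = [l for l in leaves_by_species[species] if l != seed]
        same.sort(key=lambda l: distance[(seed, l)], reverse=True)
        most_dissimilar, rest = same[0], random.sample(same[1:], 3)

        # 3. Select the odd leaf from another species, at a chosen distance from the seed.
        others = [l for sp, ls in leaves_by_species.items() if sp != species for l in ls]
        others.sort(key=lambda l: distance[(seed, l)])
        odd_leaf = others[odd_distance_rank]

        return [seed, most_dissimilar] + rest, odd_leaf

Varying odd_distance_rank is one way to realize the “varying levels of distance from the initial leaf” that the slide describes, which is how round difficulty can be tuned.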

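Finally, the “mean species distance” referred to on slides 12 and 18-19 and in the notes can be read, under the same assumed distance lookup, as the mean distance from a leaf to the other leaves in its claimed species; sorting all leaves by this score gives the automated baseline that uses no human/game input. The function below is an illustrative reading of that quantity, not code from the paper.

    def mean_species_distance(leaf, claimed_species, leaves_by_species, distance):
        """Mean distance from `leaf` to the other leaves sharing its claimed species.

        A large value suggests the leaf may be misclassified; ranking all leaves by
        this score is the automated (no human/game input) baseline in the results.
        """
        peers = [l for l in leaves_by_species[claimed_species] if l != leaf]
        return sum(distance[(leaf, p)] for p in peers) / len(peers)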