The BioMoby Semantic Annotation Experiment



My presentation to the Canadian Semantic Web Workshop, Kelowna BC, June 2009.

The slideshow describes a portion of Dr. Benjamin Good's PhD thesis work, in which he examines the quality of annotation produced by an open tagging process when the tags are constrained by a controlled vocabulary. The annotation targets were a set of BioMoby Semantic Web Services.



  1. Open Semantic Annotation: an experiment with BioMoby Web Services. Benjamin Good, Paul Lu, Edward Kawas, Mark Wilkinson. University of British Columbia, Heart + Lung Research Institute, St. Paul's Hospital
  2. The Web contains lots of things
  3. But the Web doesn't know what they ARE: text/html, video/mpeg, image/jpg, audio/aiff
  4. The Semantic Web: It's A Duck
  5. Semantic Web Reasoning
     - Add properties to the things we are describing: Walks Like a Duck, Quacks Like a Duck, Looks Like a Duck
     - Logically... It's A Duck
     - Defining the world by its properties helps me find the KINDS of things I am looking for
  6. Asserted vs. Reasoned Semantic Web
     - The ontology spectrum, from lightweight to expressive: Catalog/ID; Terms/glossary; Thesauri ("narrower term" relation); Informal is-a; Formal is-a; Formal instance; Frames (properties); Value restrictions; Selected logical constraints (disjointness, inverse, ...); General logical constraints
     - Originally from the AAAI 1999 Ontologies Panel by Gruninger, Lehmann, McGuinness, Uschold, Welty; updated by McGuinness
  7. Who assigns these properties?
     - Works ~well
     - ...but doesn't scale
  8. When we say "Web" we mean "Scale"
  9. Natural Language Processing
     - Scales well...
     - Works!!
     - ...sometimes...
     - ...sort of...
  10. Natural Language Processing
     - Problem #1: requires text to get the process started
     - Problem #2: low accuracy means it can only support, not replace, manual annotation
  11. Web 2.0 Approach
     - OPEN to all Web users (Scale!)
     - Parallel, distributed "human computation"
  12. Human Computation
     - Getting lots of people to solve problems that are difficult for computers
     - (term introduced by Luis von Ahn, Carnegie Mellon University)
  13. Example: Image Annotation
  14. ESP Game results
     - >4 million images labeled
     - >23,000 players
     - Given 5,000 players online simultaneously, all of the images accessible to Google could be labeled in a month
     - See the "Google image labeling game"
     - Luis von Ahn and Laura Dabbish (2004) "Labeling images with a computer game", ACM Conference on Human Factors in Computing Systems (CHI)
  15. Social Tagging
     - Accepted, widely applied, passive volunteer annotation
     - 2006: surpassed 1 million users
     - Connotea, CiteULike, etc.
     - See also our ED2Connotea extension
     - (Slide image caption: "This is a picture of Japanese traditional wagashi sweets called 'seioubo' which is modeled after a peach")
  16. BUSTED! "I just pulled a bunch of Semantics out of my Seioubo!"
  17. BUSTED! The same image, two readings:
     - "This is a picture of Japanese traditional wagashi sweets called 'seioubo' which is modeled after a peach"
     - "This is a totally sweet picture of peaches grown in the city of Seioubo, in the Wagashi region of Japan"
  18. So tagging isn't enough... We need properties, but the properties need to be semantically grounded in order to enable reasoning (and this ain't gonna happen through NLP, because there is even less context in tags!)
  19. Social Semantic Tagging
     - Q1: Can we design interfaces that assist "the masses" to derive their tags from controlled vocabularies (ontologies)?
     - Q2: How well do "the masses" do when faced with such an interface? Can this data be used "rigorously", e.g. for logical reasoning?
     - Q3: "The masses" seem to be good at tagging things like pictures... no brainer! How do they do at tagging more complex things like bioinformatics Web Services?
  20. Context: BioMoby Web Services
     - BioMoby is a Semantic Web Services framework in which the data objects consumed/produced by BioMoby service providers are explicitly grounded (semantically and syntactically) in an ontology
     - A second ontology describes the analytical functions that a Web Service can perform
  21. Context: BioMoby Web Services
     - BioMoby ontologies suffer from being semantically VERY shallow... thus it is VERY difficult to discover the Web Service that you REALLY want at any given moment
     - Can we improve discovery by improving the semantic annotation of the services?
  22. Experiment
     - Implemented the BioMoby Annotator: a Web interface for annotation, with the myGrid ontology + Freebase as the grounding
     - Recruited volunteers
     - Volunteers annotated BioMoby Web Services
     - Measured: inter-annotator agreement; agreement with a manually constructed standard (individuals and aggregates)
  23. BioMoby Annotator (interface screenshot)
     - Information extracted from the Moby Central Web Service registry
     - Tagging areas
  24. Tagging
     - Type-ahead tag suggestions drawn from the myGrid Web Service Ontology and from Freebase
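The type-ahead mechanism described on this slide can be sketched as a prefix match against a sorted list of vocabulary labels. This is only a minimal illustration, not the Annotator's actual implementation: the `VOCAB` terms and the `suggest` function are invented for the example, standing in for labels pulled from the myGrid ontology and Freebase.

```python
# Minimal sketch of type-ahead suggestion against a controlled vocabulary.
# Keeping the labels sorted means all terms sharing a prefix are contiguous,
# so a binary search finds the start of the matching run.
from bisect import bisect_left

# Hypothetical lowercased ontology term labels (NOT the real myGrid terms)
VOCAB = sorted([
    "alignment", "blast_report", "dna_sequence",
    "dna_sequence_alignment", "protein_sequence", "retrieval",
])

def suggest(prefix, limit=5):
    """Return up to `limit` vocabulary terms starting with `prefix`."""
    prefix = prefix.lower()
    i = bisect_left(VOCAB, prefix)  # index of first term >= prefix
    out = []
    while i < len(VOCAB) and VOCAB[i].startswith(prefix) and len(out) < limit:
        out.append(VOCAB[i])
        i += 1
    return out

print(suggest("dna"))  # ['dna_sequence', 'dna_sequence_alignment']
```

Constraining the suggestions to vocabulary entries is what makes the resulting tags semantically grounded rather than free text.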
  25. Tagging
     - New simple tags can also be created, as in normal tagging
  26. "Gold-Standard" Dataset
     - 27 BioMoby services were hand-annotated by us
     - Typical bioinformatics functions: retrieve database record; perform sequence alignment; identifier-to-identifier mapping
  27. Volunteers
     - Recruited friends and posted on mailing lists
     - Offered a small reward for completing the experiment ($20 Amazon)
     - 19 participants: a mix of BioMoby developers, bioinformaticians, statisticians, and students; the majority had some experience with Web Services; 13 completed annotating all of the selected services
  28. Measurements
     - Inter-annotator agreement: the standard approach for estimating annotation quality; usually measured for small groups of professional annotators (typically 2-4)
     - Agreement with the "gold standard": measured in the same way, but one "annotator" is considered the standard
  29. Inter-annotator Agreement Metric
     - Positive Specific Agreement (PSA): the amount of overlap between all annotations elicited for a particular item, comparing annotators pairwise

       PSA(A, B) = 2I / (2I + a + b)

       where I = the intersection of sets A and B, a = A without I, b = B without I
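The PSA formula above translates directly into code. A minimal sketch with made-up tag sets; the zero-denominator guard (both sets empty) is my own convention, not something the slide specifies.

```python
def psa(A, B):
    """Positive specific agreement between two annotators' tag sets."""
    A, B = set(A), set(B)
    i = len(A & B)          # I: tags both annotators applied
    a = len(A - B)          # tags only in A
    b = len(B - A)          # tags only in B
    if 2 * i + a + b == 0:  # assumption: empty vs. empty counts as 0
        return 0.0
    return (2 * i) / (2 * i + a + b)

# Two annotators tagging the same service (hypothetical tags):
print(psa({"retrieval", "dna_sequence"}, {"retrieval", "alignment"}))  # 0.5
```

In the example, one shared tag and one unmatched tag per annotator gives 2/(2+1+1) = 0.5.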
  30. Gold-standard Agreement Metrics
     - Precision, Recall, F measure:

       Precision(T) = (true tags by T) / (all tags by T)
       Recall(T)    = (true tags by T) / (all true tags)
       F = harmonic mean of P and R = 2PR / (P + R)

     - (F = PSA if one set is considered "true")
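Precision, recall, and F against a gold-standard tag set can be computed the same way. A sketch with hypothetical tags; the zero-denominator guards are my additions:

```python
def precision_recall_f(tags, gold):
    """Precision, recall, and F for one annotator's tags vs. the gold standard."""
    tags, gold = set(tags), set(gold)
    true_tags = len(tags & gold)                 # "true tags by T"
    p = true_tags / len(tags) if tags else 0.0   # out of all tags by T
    r = true_tags / len(gold) if gold else 0.0   # out of all true tags
    f = 2 * p * r / (p + r) if p + r else 0.0    # harmonic mean, 2PR/(P+R)
    return p, r, f

p, r, f = precision_recall_f({"retrieval", "alignment"}, {"retrieval", "mapping"})
print(p, r, f)  # 0.5 0.5 0.5
```

Note that if the gold set is treated as one of the two annotators, F reduces to the PSA value from the previous slide, as the slide points out.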
  31. Metrics
     - Average pairwise agreements reported:
       - across all pairs of annotators
       - by service Operations (e.g. retrieval) and Objects (e.g. DNA sequence)
       - by semantically grounded tags
       - by free-text tags
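"Average pairwise agreement" here just means applying PSA to every pair of annotators for an item and taking the mean. A self-contained sketch with invented annotations (the `psa` helper restates the slide-29 formula so this snippet stands alone):

```python
# Average pairwise PSA across all annotators for one item.
from itertools import combinations

def psa(A, B):
    """Positive specific agreement (as defined on slide 29)."""
    i = len(A & B)
    denom = 2 * i + len(A - B) + len(B - A)
    return (2 * i) / denom if denom else 0.0

# Hypothetical tag sets from three annotators for one service
annotations = {
    "annotator1": {"retrieval", "dna_sequence"},
    "annotator2": {"retrieval"},
    "annotator3": {"alignment"},
}

pairs = list(combinations(annotations.values(), 2))  # all annotator pairs
avg = sum(psa(a, b) for a, b in pairs) / len(pairs)
print(round(avg, 3))  # 0.222
```

In the study this average would be computed separately per tag type (semantic vs. free-text) and per subject (Operations vs. Objects), producing rows like those in the next slide's table.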
  32. Inter-Annotator Agreement

     | Type                | N pairs | mean | median | min  | max  | std. dev. | coeff. of variation |
     |---------------------|---------|------|--------|------|------|-----------|---------------------|
     | Free, Object        | 1658    | 0.09 | 0.00   | 0.00 | 1.00 | 0.25      | 2.79                |
     | Semantic, Object    | 3482    | 0.44 | 0.40   | 0.00 | 1.00 | 0.43      | 0.98                |
     | Free, Operation     | 210     | 0.13 | 0.00   | 0.00 | 1.00 | 0.33      | 2.49                |
     | Semantic, Operation | 2599    | 0.54 | 0.67   | 0.00 | 1.00 | 0.32      | 0.58                |
  33. Agreement with the "Gold" Standard

     | Subject                     | Measure   | mean | median | min  | max  | std. dev. | coeff. of variation |
     |-----------------------------|-----------|------|--------|------|------|-----------|---------------------|
     | Data-types (input & output) | PSA       | 0.52 | 0.51   | 0.32 | 0.71 | 0.11      | 0.22                |
     |                             | Precision | 0.54 | 0.53   | 0.33 | 0.74 | 0.13      | 0.24                |
     |                             | Recall    | 0.54 | 0.54   | 0.30 | 0.71 | 0.12      | 0.21                |
     | Web Service Operations      | PSA       | 0.59 | 0.60   | 0.36 | 0.75 | 0.10      | 0.18                |
     |                             | Precision | 0.81 | 0.79   | 0.52 | 1.0  | 0.13      | 0.16                |
     |                             | Recall    | 0.53 | 0.50   | 0.26 | 0.77 | 0.15      | 0.28                |
  34. Consensus & Correctness: Datatypes
  35. Consensus & Correctness: Operations
  36. Open Annotations are Different
  37. Trust must be earned
     - Can be decided at runtime:
       - by consensus agreement (as described here)
       - by annotator reputation
       - by recency
       - by your favorite algorithm
       - by you!
  38. IT'S ALL ABOUT CONTEXT!! We can get REALLY good semantic annotations IF we provide context!!
  39. Open Semantic Annotation Works
     - IF we provide CONTEXT
     - IF enough volunteers contribute
     - BUT we do not understand why people do or do not contribute without a $$$ incentive
     - SO further research is needed to understand social psychology on the Web
  40. Watch for
     - A forthcoming issue of the International Journal of Knowledge Engineering and Data Mining on "Incentives for Semantic Content Creation"
  41. Acknowledgements
     - Benjamin Good
     - Edward Kawas
     - Paul Lu
     - MSFHR/CIHR Bioinformatics Training Programme @ UBC
     - iCAPTURE Centre @ St. Paul's Hospital
     - NSERC
     - Genome Canada / Genome Alberta