Invited talk @Aberdeen, '07: Modelling and computing the quality of information in e-science
Speaker Notes

  • From traditional data quality (DQ) to the biologist’s problem of defining quality based on data semantics.
  • This data is often being produced for the first time: experimental techniques are still evolving, its production is not streamlined, and there is no agreement on how to define its quality.
  • Searching for “nuggets of quality knowledge”.
  • Here is the compilation model for mapping bound views to a sub-workflow.
  • Embedding the sub-flow requires a deployment descriptor: adapters between the host flow and the quality sub-flow, plus data and control links between host-flow tasks and quality-flow tasks.
  • Interaction is activated during execution of the quality sub-flow and blocks the workflow for the duration of the interaction.
  • Our quality view specification language allows users to define abstract quality processes. Evidence types are ontology classes; evidence values are class individuals, represented by variables. The variables are bound to values at runtime; the values themselves are either fetched from a repository of persistent annotations or computed on demand by annotation functions (our use cases include examples of both). This process step abstracts away the issue of annotation lifetime. Assertions are computed by services, which are also represented by ontology classes; the tagName is the single output of the service (one for each input data item). Finally, the action step contains the condition/action pairs: conditions are expressed over the variables introduced earlier, which define the scope. The semantics of the action step is that the expression is evaluated for each data item and the corresponding action is taken, e.g. the item is sent to a specific channel.
  • Benefits of this model: the ability to share definitions within a community, consistency checking through reasoning (cite previous papers?), and flexibility.
  • From right to left: the data/knowledge layer, framework services, quality views management, and the targeted compiler(s).

Presentation Transcript

  • 1. Modelling and computing the quality of information in e-science. Paolo Missier, Suzanne Embury, Mark Greenwood (School of Computer Science, University of Manchester, UK); Alun Preece, Binling Jin (Department of Computing Science, University of Aberdeen, UK). http://www.qurator.org . Aberdeen, 24/1/07
  • 2. Quality of data
    • Main driver, historically: data cleaning for
    • Integration: use of the same IDs across data sources
    • Warehousing, analytics:
      • restoring completeness
      • reconciling referential constraints
      • cross-validating numeric data by aggregation
    • Focus:
    • Record de-duplication, reconciliation, “linkage”
      • Ample literature: see e.g. the Nov 2006 issue of IEEE TKDE
    • Consistency of data across sources
    • Managing uncertainty in databases (Trio, Stanford)
    The need for data quality control is rooted in data management practice
  • 3. Common quality issues
    • Completeness: not missing any of the results
    • Correctness: each data item should reflect the actual real-world entity it is intended to model
      • The actual address where you live, the correct balance in your bank account…
    • Timeliness: delivered in time for use by a consumer process
      • E.g. stock information
  • 4. Taxonomy for data quality dimensions
  • 5. Our motivation: quality in public e-science data
    • Large volumes of data in many public repositories
    • Increasingly creative uses for this data
    Problem: using third-party data of unknown quality may result in misleading scientific conclusions. (Repositories shown: GenBank, UniProt, EnsEMBL, Entrez, dbSNP.)
  • 6. Some quality issues in biology
    • “Quality” covers a broader spectrum of issues than traditional DQ
    • “X% of database A may be wrong (unreliable), but I have no easy way to test that”
    • “This microarray data looks OK but is testing the wrong hypothesis”
    • “The output from this sequence-matching algorithm produces false positives”
    Each of these issues calls for a separate testing procedure; they are difficult to generalize
  • 7. Correctness in biology - examples
    Data type | Creation process | Correctness
    UniProt protein annotation | Manual curation | Functional annotation f for protein p is correct if function f can reliably be attributed to p
    Transcriptomics: gene expression report (up/down-regulation) | Microarray data analysis | No false positives, no false negatives
    Qualitative proteomics: protein identification | Generate peptide peak lists, match peak lists (e.g. Imprint) | No false positives: every protein in the output is actually present in the cell sample
  • 8. Defining quality in e-science is challenging
    • In-silico experiments express cutting-edge research
      • Experimental data liable to change rapidly
      • Definitions of quality are themselves experimental
    • Scientists’ quality requirements often just a hunch
      • Quality tests missing or based on experimental heuristics
      • Definitions of quality criteria are personal and subjective
    • Quality controls tightly coupled to data processing
      • Often implicit and embedded in the experiment
      • Not reusable
  • 9. Research goals
    • Make personal definitions of quality explicit and formal
      • Identify a common denominator for quality concepts
      • Expressed as a conceptual model for Information Quality
    • Make existing data processing quality-aware
      • Define an architectural framework that accommodates personal definitions of quality
      • Compute quality levels and expose them to the user
    Elicit “nuggets” of latent quality knowledge from the experts
  • 10. Example: protein identification
    Pipeline: “wet lab” experiment → protein identification algorithm → data output (protein hitlist) → protein function prediction. A correct entry is a true positive.
    Evidence: mass coverage (MC) measures the amount of protein sequence matched; hit ratio (HR) gives an indication of the signal-to-noise ratio in a mass spectrum; ELDP reflects the completeness of the digestion that precedes the peptide mass fingerprinting.
    This evidence is independent of the algorithm / SW package, and it is readily available and inexpensive to obtain.
  • 11. Correctness of protein identification
    Estimator function (computes a score rather than a probability):
    PMF score = (HR × 100) + MC + (ELDP × 10)
    Prediction performance: comparing 3 models via ROC curves (true positives vs. false positives).
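    As a reading aid, here is the estimator in runnable form; a minimal sketch in which the function name and the sample evidence values are illustrative, not taken from the talk.

        def pmf_score(hr: float, mc: float, eldp: float) -> float:
            # PMF score as defined on the slide: a score, not a probability
            return (hr * 100) + mc + (eldp * 10)

        # Illustrative evidence values only:
        print(pmf_score(hr=0.4, mc=25.0, eldp=3.0))  # 0.4*100 + 25 + 30 = 95.0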
  • 12. Quality process components
    Pipeline: “wet lab” experiment → protein identification algorithm → data output (protein hitlist) → quality filtering → protein function prediction. Goal: to automatically add the additional filtering step in a principled way.
    • Evidence:
    • mass coverage (MC)
    • hit ratio (HR)
    • ELDP
    Quality assertion: PMF score = (HR × 100) + MC + (ELDP × 10)
  • 13. Quality Assertions
    • QA(D): any function of evidence (metadata for D) that computes a partial order on D
    • Score model (total or partial order)
    • Classification model with class ordering:
    Example: ordered classes reject < analyze < accept partition the dataset D into quality regions; an action is associated to each region.
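    A minimal sketch of a Quality Assertion under the classification model described above. The class ordering comes from the slide; the score thresholds and the actions are assumptions for illustration.

        def quality_class(score: float, low: float = 50.0, high: float = 100.0) -> str:
            # Ordered classes: reject < analyze < accept; thresholds are hypothetical
            if score < low:
                return "reject"
            return "analyze" if score < high else "accept"

        # Actions associated to regions (illustrative):
        actions = {
            "reject":  lambda item: None,                    # drop the item
            "analyze": lambda item: ("manual-check", item),  # route for inspection
            "accept":  lambda item: ("downstream", item),    # pass through
        }
        tag = quality_class(95.0)          # -> "analyze"
        routed = actions[tag]({"id": 1})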
  • 14. Abstract quality views
    • An operational definition for personal quality:
    • Formulate a quality assertion on the dataset:
      • i.e. a ranking of proteins by PMF score
    • Identify underlying evidence necessary to compute the assertion
      • the variables used to compute the score (HR, MC, ELDP)
    • Define annotation functions that compute evidence values
      • Functions that compute HR, MC, ELDP
    • Define quality regions on the ranked dataset
      • In this case, intervals of acceptability
    • Associate actions to each region
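    The five steps above suggest a simple data structure for an abstract quality view. A sketch with hypothetical field names; the actual Qurator specification language is XML-based (see slide 22).

        from dataclasses import dataclass, field
        from typing import Callable, Dict, List, Tuple

        @dataclass
        class QualityView:
            evidence: List[str]                    # e.g. ["HR", "MC", "ELDP"]
            annotators: Dict[str, Callable]        # annotation functions computing evidence
            assertion: Callable[[Dict[str, float]], float]  # evidence values -> score
            regions: List[Tuple[str, Callable]]    # (region name, membership predicate)
            actions: Dict[str, str] = field(default_factory=dict)  # region -> action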
  • 15. Computable quality views as commodities
    • Cost-effective quality-awareness for data processing:
    • Reuse of high-level definitions of quality views
    • Compilation of abstract quality views into quality components
    Abstract quality views → binding and compilation → executable quality process
    The Qurator architectural framework provides:
    • the runtime environment
    • data-specific quality services
  • 16. Quality hypotheses discovery and testing
    Cycle: quality model definition → abstract quality view → targeted compilation → target-specific quality components → deployment into quality-enhanced user environments → execution on test data → performance assessment, which feeds back into the quality model.
    • Multiple target environments:
    • workflow
    • query processor
  • 17. Experimental quality
    • Making data processing quality-aware using Quality Views
      • Query, browsing, retrieval, data-intensive workflows
    Discovery and validation of “quality nuggets”: Quality View + test datasets → model testing → embedding quality views and flow-through testing
  • 18. Execution model for Quality views
    • Binding → compilation → executable component
      • Sub-flow of an existing workflow
      • Query processing interceptor
    Host workflow: D → D’. The QV compiler turns the abstract quality view into an embedded quality workflow operating on D’, inserted into the host workflow; the Qurator quality framework supplies the services registry and the service implementations.
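    One way to picture the embedding, as a sketch: the compiled quality view becomes a sub-flow spliced after a host task, consuming D and producing the filtered D’. The method names annotate and apply_actions are hypothetical.

        def embed_quality_view(host_task, view):
            # Wrap a host-workflow task so its output D passes through
            # the compiled quality sub-flow before the next task (D -> D')
            def wrapped(inputs):
                d = host_task(inputs)                  # host task produces D
                annotated = view.annotate(d)           # collect evidence per item
                return view.apply_actions(annotated)   # conditions/actions yield D'
            return wrapped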
  • 19. Example: original proteomics workflow (a Taverna workflow), showing the quality-flow embedding point
  • 20. Example: embedded quality workflow
  • 21. Interactive conditions / actions
  • 22. Generic quality process pattern
    Step 1. Collect evidence: fetch persistent annotations, compute on-the-fly annotations.
      <variables>
        <var variableName="Coverage" evidence="q:Coverage"/>
        <var variableName="PeptidesCount" evidence="q:PeptidesCount"/>
      </variables>
    Step 2. Compute assertions (classifiers over the evidence):
      <QualityAssertion serviceName="PIScoreClassifier"
                        serviceType="q:PIScoreClassifier"
                        tagSemType="q:PIScoreClassification"
                        tagName="ScoreClass"/>
    Step 3. Evaluate conditions, execute actions:
      <action>
        <filter>
          <condition>ScoreClass in {"q:high", "q:mid"} and Coverage > 12</condition>
        </filter>
      </action>
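    Per the speaker notes, the condition is evaluated once per data item over the bound variables. A sketch of that evaluation in Python; the dictionary representation of an item is an assumption.

        def passes_filter(item: dict) -> bool:
            # The condition from the slide, evaluated per data item
            return item["ScoreClass"] in {"q:high", "q:mid"} and item["Coverage"] > 12

        items = [{"ScoreClass": "q:high", "Coverage": 20.0},
                 {"ScoreClass": "q:low",  "Coverage": 30.0}]
        accepted = [it for it in items if passes_filter(it)]  # keeps only the first item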
  • 23. A semantic model for quality concepts: a quality “upper ontology” (OWL) defines the quality evidence types; evidence annotations are class instances, held in an evidence metadata model (RDF).
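    A sketch of what one evidence annotation might look like in the RDF metadata model, using rdflib; the namespace URI and the hasValue property are placeholders, not Qurator’s actual schema.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF

        Q = Namespace("http://www.qurator.org/ontology#")        # placeholder namespace
        g = Graph()
        hit = URIRef("http://example.org/data/protein-hit-42")   # a data entity
        mc = URIRef("http://example.org/annot/mc-42")            # one evidence annotation

        g.add((mc, RDF.type, Q.MassCoverage))    # the annotation is a class instance
        g.add((mc, Q["is-evidence-for"], hit))   # evidence linked to its data entity
        g.add((mc, Q.hasValue, Literal(27.5)))   # hypothetical value property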
  • 24. Main taxonomies and properties
    Properties:
      assertion-based-on-evidence: QualityAssertion → QualityEvidence
      is-evidence-for: QualityEvidence → DataEntity
    Class restrictions:
      MassCoverage ⊑ ∃ is-evidence-for . ImprintHitEntry
      PIScoreClassifier ⊑ ∃ assertion-based-on-evidence . HitScore
      PIScoreClassifier ⊑ ∃ assertion-based-on-evidence . MassCoverage
  • 25. The ontology-driven user interface. Detecting inconsistencies: (a) no annotators exist for a given evidence type; (b) unsatisfied input requirements for a Quality Assertion.
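    The two checks above reduce to comparing the evidence an assertion requires with the evidence the registered annotators can produce. A minimal sketch, assuming both sides are available as plain sets (in Qurator the check is performed through ontology reasoning).

        def check_assertion(required: set, available: set) -> bool:
            # required: evidence types the Quality Assertion needs as input
            # available: evidence types some registered annotator can produce
            missing = required - available
            if missing:
                print("Unsatisfied input requirements:", sorted(missing))
            return not missing

        # Illustrative: no annotator produces q:ELDP, so the assertion is flagged
        check_assertion({"q:MassCoverage", "q:HitRatio", "q:ELDP"},
                        {"q:MassCoverage", "q:HitRatio"})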
  • 26. Qurator architecture
  • 27. Quality-aware query processing
  • 28. Research issues
    • Quality modelling:
    • Provenance as evidence
      • Can data/process provenance be turned into evidence?
    • Experimental elicitation of new Quality Assertions
      • Seeking new collaborations with biologists!
    • Classification with uncertainty
      • Data elements belong to a quality class with some probability
    • Computing Quality Assertions with limited evidence
      • Evidence may be expensive and sometimes unavailable
      • Robust classification / score models
    • Architecture:
    • Metadata management model
      • Quality Evidence is a type of metadata with known features…
  • 29. Summary
    • For complex data types, often no single “correct” and agreed-upon definition of quality of data
    • Qurator provides an environment for fast prototyping of quality hypotheses
      • Based on the notion of “evidence” supporting a quality hypothesis
      • With support for an incremental learning cycle
    • Quality views offer an abstract model for making data processing environments quality-aware
      • To be compiled into executable components and embedded
      • Qurator provides an invocation framework for Quality Views
    Publications: http://www.qurator.org
    Qurator is registered with OMII-UK