Resource Description Framework Approach to Data Publication and Federation
Bob Stanley, CEO of IO Informatics, explains the utility of RDF as a standard way of defining and redefining data for managing life science information.

  • RDF is very simple. It provides a framework for describing things according to their relationships. A key value proposition for semantic technologies is to make it easy to describe and connect related data sources, so that we can search them as deeply as we care to. Although there is much useful effort in the formal terminology space, the broad focus is on practical outcomes – specifically, to provide a standard set of methods and a framework for describing and linking data. This is NOT a standard for what the minimum or perfect data definition should be – it is a standard way of defining and redefining data.
  • Lowering barriers to interoperability using standards does NOT mean using standardized, approved terms in an ontology or API. It means using a common framework for resource description, in which terms can easily be surfaced, visualized, tested and merged, thanks to the common approach.
  • SPARQL is an open-standard query language and query-service API. It is supported by a variety of visual query interfaces and takes advantage of growing linked open data resources. SPARQL APIs can be exposed over SOAP and REST.
  • Agile resource description is a key benefit of RDF and SPARQL. Although they can support highly formal resource description, RDF and SPARQL lower the barrier to interoperability caused by excessive concern with perfect specification of reference content. RDF opens up this conversation by providing a standard framework and practices for mixing, mapping and updating resource descriptions by data-model providers AND consumers. RDF can also manage provenance and links to original data, full experiment records, etc. SPARQL endpoints provide visible data models that can be visually managed by data providers. They can also be rapidly adapted by local users mapping to more or less well-formed APIs. Depending on user need, these can be narrowly practical application ontologies or formal OWL models. Application ontologies can be rapidly mapped to OWL ontologies and vice versa.
  • Broadly, semantic technologies provide a standard framework and methods for describing and querying data resources and the connections between data elements. The technology space has been slowly growing and maturing over the past decade. People were writing papers and doing work in the area in the late ’90s. Tim Berners-Lee wrote an article in 2001, published in Scientific American, that began popularizing the concepts. These methods make it possible to visualize and connect data in a way that makes sense both to humans and to computers. Using RDF and SPARQL, data descriptions can be modified by end users without refactoring the database or databases.
  • Clockwise from 6 o'clock: Species, (protein description), Protein, Gene, Gene Location. Terminology around new methods can cause confusion. For ease, I find it useful at times to refer to “assertions” instead of “triples”, and to “data structure” instead of “ontologies”.
  • These methods have been developed by the W3C, MIT, Stanford, Dublin Core, DERI and elsewhere for just about a decade. Vendor and consumer adoption and technical maturity have grown over the past 8 years or so, with exponential growth over the past 4 years. This compares very favorably to “proprietary” methods and standards invented by single vendors.
  • Agile method: human-readable ontologies, no more over-specification, easy to manage and update, and end users can extend the API.
  • First we linked all of the data of interest to help the customer understand what happens biologically in organ-transplant rejection scenarios. Next we applied SPARQL to run searches across clinical, protein and gene-expression databases to screen for patients at risk.
  • Not a new data standard, but a standard way of publishing and linking data. Extensible, so you can start and get to a ‘good enough’ state quickly. Created with federation in mind. Out-of-box benefits for project completion, maintenance and extension to new data sources.

Resource Description Framework Approach to Data Publication and Federation Presentation Transcript

  • 1. Resource Description Framework Approach to Data Publication and Federation Robert Stanley IO Informatics, Inc. IO Informatics © 2011 June 6 2011 Pistoia Alliance – Open Technical Committee Meeting
  • 2. Agenda
    • Introduction:
      • Requirements, Background, Use Cases
    • Technical Example / Use Case:
      • Requirements for creating a data service and invoking a query
    • Q & A
  • 3. Recap - ELN Query Functional Requirements
    • The ELN Query Services team has produced a rich set of functional requirements/user stories representing common questions scientists have that can be satisfied with an ELN query. Most, if not all, of those requirements have the conceptual form:
    • SELECT <selection> FROM Experiment WHERE <constraints>
    • … A common approach to the problem would be preferable, ideally leveraging existing standards, rather than building a solution that works just for Experiment.
    • … the Pistoia Technical Committee may wish to constrain such query services, by aligning to existing standards, to ensure consistency in approach.
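As a hedged illustration of how the conceptual form above could be expressed in SPARQL: the `eln:` namespace and property names below are hypothetical placeholders, not an agreed Pistoia vocabulary, but a query of this shape selects experiments subject to constraints.

```python
# Hypothetical SPARQL rendering of "SELECT <selection> FROM Experiment
# WHERE <constraints>". The eln: prefix and property names are made up
# for illustration; a real service would use whatever ontology the
# Pistoia Technical Committee eventually adopts.
query = """
PREFIX eln: <http://example.org/eln#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?experiment ?date
WHERE {
  ?experiment a eln:Experiment ;
              eln:performedOn ?date .
  FILTER (?date >= "2011-01-01"^^xsd:date)
}
"""
print(query)
```

Because the selection and constraints live in the query rather than the service API, new constraints do not require new service methods.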
  • 4. Example ELN Query Workflow
    • Scientist researching a class of Agents
      • small molecules (or biologics) intended to hit a target or targets
      • links to…
    • Assays
      • test to determine activity, affinity, binding, promiscuity
      • determine potential toxicity, adverse events, etc.
      • links to…
    • Targets
      • sites where compounds bind -- can be locations on a protein, locations on a gene, active centers on an enzyme, etc.
      • links to…
    • Disease/ Gene relationships
      • e.g. biology, can be from TMO / LODD resources
      • pathways, proteins, catalysts, immunology defense mechanisms, potential for adverse events, etc. can be included
  • 5. Recap - Query Service API
    • As part of the phase two deliverable, the ELN Query Services team produced a prototype SOAP-based service outlining the methods that would be required to support the query service.
    • There are existing protocols and standards for querying structured data over the web. Aligning to an existing approach will avoid re-inventing a query language and will provide confidence in the stability of the query interface. Examples:
    • OData (http://www.odata.org/) is a RESTful query protocol …
    • GData (http://code.google.com/apis/gdata/). Very similar to OData, but published by Google…
    • SPARQL (http://www.w3.org/TR/rdf-sparql-query/) is an RDF query protocol published by the W3C for querying linked data. A triple store is [not] required, and there may be synergies with existing Pistoia projects (e.g. VSI, SESL) by adopting SPARQL.
  • 6. Questions for Tech Committee
    • Ontology content and format
      • Regarding reference content
        • How comprehensive and refined does it need to be?
        • How can it relate to records and different detail?
        • Who’s going to maintain the ontology?
        • Is there a practical, agile approach to creating, managing, extending?
      • Standards-driven? (UML, OWL, ERD, etc.)
      • Investigate service versioning and how backward compatibility might be implemented
    • API Mechanism (OData, GData, SPARQL?)
      • Is there a standards-based approach?
  • 7. Moving Forward? Resource Description Framework
    • Semantic Technologies provide a unified, standard framework for:
      • Ontology / API representation, modification, merging
      • Query framework / SOA SPARQL API
      • Rich, extensible Query Federation
        • (Does not require ETL)
    • Agile ontology development & maintenance…
        • Customers can extend the ontologies themselves
    • Broadly – these are standards-driven methods that will make the ELN Query Federation lifecycle much easier
  • 8. What is RDF?
    • Resource Description Framework (“RDF”) defines and links data for more effective discovery, federation, integration and re-use across applications
    • RDF is a fundamentally simple, standard and extensible way of specifying data and data relationships
    • Ontologies describe resources and relationships according to their explicit meaning
    • SPARQL Protocol and RDF Query Language (“SPARQL”) supports federated queries, with a SPARQL API for publication
  • 9.
    • RDF graphs are collections of triples
    • Triples are made up of a subject , a predicate , and an object
    • Resources and relationships (metadata) are stored
    Resource Description Framework (RDF) is… a labeled, directed graph of relations between resources and literal values, in which each edge links a subject to an object via a predicate.
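The subject/predicate/object model above can be mimicked in a few lines of ordinary code. A minimal sketch using plain Python tuples (real RDF identifies resources with URIs; the short names here are illustrative only):

```python
# Triples as (subject, predicate, object) tuples; a set of them is a graph.
triples = {
    ("TP53", "encodes", "p53"),
    ("p53", "is_a", "tumor suppressor protein"),
    ("TP53", "located_on", "chromosome 17"),
}

def match(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard,
    much like a variable in a SPARQL basic graph pattern."""
    return {
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    }

# Everything asserted about TP53:
print(match(subject="TP53"))
```

Adding a new fact is just adding a tuple; no schema change is needed, which is the agility argument made throughout these slides.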
  • 10.
    • “TP53 encodes Human p53”
    • “p53 is a tumor suppressor protein”
    • “TP53 gene is located on the short arm of chromosome 17”
    Example RDF Triples
  • 11. Triples Connect to Form Graphs (graph linking TP53, p53, tumor suppressor protein, chromosome 17 and Human via encodes, is a, located and part of relations)
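Because triples share resources, a query can follow edges across the graph. A toy sketch of the kind of join a SPARQL engine performs over the TP53 example (plain tuples stand in for real RDF terms):

```python
# The TP53 graph from the slides, as (subject, predicate, object) tuples.
triples = [
    ("TP53", "encodes", "p53"),
    ("p53", "is_a", "tumor suppressor protein"),
    ("TP53", "located", "chromosome 17"),
    ("chromosome 17", "part_of", "Human"),
]

# Two-step join, roughly what a SPARQL pattern like
#   ?gene :encodes :p53 . ?gene :located ?loc
# would compute: find genes encoding p53, then where those genes sit.
genes = {s for (s, p, o) in triples if p == "encodes" and o == "p53"}
locations = {o for (s, p, o) in triples if p == "located" and s in genes}
print(locations)  # the chromosome(s) carrying the p53-encoding gene
```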
  • 12. Why RDF? What’s Different Here?
    • Triples act as a human- and machine-readable least common denominator for expressing data and relationships
    • Ontologies organize data according to their human-readable meaning, to make defining and merging data intuitive…
    • RDF supports inference and disambiguation, so merging data and adding new data and relationships without shared identifiers becomes possible…
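The point about merging data without shared identifiers can be illustrated with a sameAs-style equivalence link. A hedged sketch with made-up identifiers (the `uniprot:` and `local:` names are illustrative, not real records):

```python
# Two triple sets that name the same protein differently, plus an
# equivalence assertion linking the two identifiers.
lab_a = [("uniprot:P04637", "function", "tumor suppression")]
lab_b = [("local:p53", "observed_in", "biopsy 42")]
links = [("local:p53", "sameAs", "uniprot:P04637")]

# Canonicalise identifiers using the equivalence links, then merge.
canon = {s: o for (s, p, o) in links if p == "sameAs"}

def canonical(term):
    return canon.get(term, term)

merged = {(canonical(s), p, canonical(o)) for (s, p, o) in lab_a + lab_b}
print(sorted(merged))
```

After merging, both facts hang off one identifier, so a single query can reach data from both sources.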
  • 13. Maturation of a Standard Framework for Resource Description
    • This standard method and framework for extensible open data publication is maturing:
      • Standards and practices, ontologies, query and data model specification:
        • W3C, ISCB, IEEE, NCBO, OBO
        • SKOS, LOD / LODD and related resources
      • Federation methods:
        • Ontology Resources, URIs, SPARQL endpoints, ontology alignment, inference, D2R, SWObjects, …
      • Scalability:
        • Security, support for transactional processing
      • Practice:
        • Expertise, training, larger projects (FDA, DOD, NASA, World Cup, Data.gov, Chevron, etc.)
  • 14. Exponentially Growing Resources (charts: 2007, 2008, 2009, 2011)
  • 15. Healthcare / Life Sciences remains the largest data sector for SPARQL APIs
  • 16. Integration / Federation Options (diagram: Data Sources – RDBMS / REST / Web Services; Basic Transformation; Query-Based Transformation – SPARQL-to-SQL (D2R, SWObjects); Federated Semantic Access / Datastore(s) – SPARQL APIs / SOAP / REST; Provenance, Versioning, Governance, Meaning)
  • 17. RDF / SPARQL Solution Stack (stack diagram: DBs, Services; Other Data Sources; Public Data, Services; SPARQL API, SPARQL conversion, ETL=>RDF; Extensible Standards-based Framework; Federated Access / Integrated Semantic Datastore(s); Prediction and Simulation Apps; Query by Meaning; Rich Browsing; Other Apps)
  • 18. Example Use Cases
    • Manufacturing: Link data sources across imprecise connections to verify reports
    • Animal Safety: Knowledge Network for discovery, qualification and validation of cross-species biomarkers
    • Personalized Medicine: Knowledge Network and screening application for personalized medicine
  • 19. Example Questions
    • What data sources support this specific manufacturing report about product purity and shelf-life?
    • What toxicity biomarkers are common to most animals?
    • What patients are showing combinations of indicators that predict risk?
  • 20. Qualitative Benefits
    • Link internal data with public resources
      • e.g. “out of box” linking of ELN data with LODD sources
      • Growing public resources provide cost-effective enrichment and hypothesis generation
      • Reduce dependencies on expensive commercial databases
    • Emergent Properties
      • ELN data enrichment and knowledge building made easy!
      • Supports inference, rich interrogation, serendipitous discovery
    • R&D concepts can now be translated to immediate consumer benefits
      • Previous “out of reach” integration and applications become practical
  • 21. Quantitative Benefits
    • Reduced effort, time and cost to federate, update, extend to new datasets
    • Start sooner - reduce initial design and deployment burden
      • Ontologies are explicit, can be engineered from or decoupled from resources, can be altered without refactoring
    • Finish sooner - reduced time to create and test integrations
      • Agile modeling, integration and testing
    • Extend more easily - add new data sources and applications
      • Building blocks to add new data, create new applications
      • End Users can adapt, modify, extend ontologies
  • 22. UBC: Knowledge Network for Organ Failure; “ASK” for Personalized Medicine
  • 23. UBC: Knowledge Network for Organ Failure; “ASK” for Personalized Medicine
    • Outcome
      • Knowledge network for enrichment, visualization and qualification of patterns indicating risk of organ failure
      • Web-based deployment of SPARQL-based screening patterns across multiple data sources, indicating patients-at-risk
  • 24.
    • Screening of transplant patients for likelihood of transplant failure, based on combined biomarker patterns
    Personalized Medicine Knowledge Network
    • Web-based Knowledge Application
    • Applies patterns for predictive screening
    • Weighting, scoring of results
    • Bring “hits” back into Knowledge Network for validation of hypotheses and algorithms
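The screening step described above, with patterns applied across sources and results weighted and scored, might look roughly like this. The patient records, indicator names and weights are entirely illustrative, not taken from the UBC project:

```python
# Each screening pattern is a set of (observation, value) indicators with
# a weight; a patient's score sums the weights of indicators they match.
patients = {
    "patient1": {("creatinine", "elevated"), ("biomarkerX", "present")},
    "patient2": {("creatinine", "normal")},
}

pattern = {  # (observation, value) -> weight, all made up for illustration
    ("creatinine", "elevated"): 2.0,
    ("biomarkerX", "present"): 1.5,
}

def score(observations):
    """Sum weights of pattern indicators found in a patient's observations."""
    return sum(w for ind, w in pattern.items() if ind in observations)

scores = {pid: score(obs) for pid, obs in patients.items()}
at_risk = [pid for pid, s in scores.items() if s >= 2.0]
print(scores, at_risk)
```

In the federated setting the observations would come back from SPARQL queries across the clinical, protein and gene-expression sources rather than from an in-memory dict.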
  • 25. ASK™
    • Extended SPARQL-based Knowledge Application
    • Visualization, Testing, Validation of Systems-oriented Hypotheses
    • Automation, Dashboards
  • 26. UBC / PROOF: Quantitative…
      • Integration and analysis time reduced from estimated 2 years to about 8 months FTE equivalent
      • Time to capture and apply patterns reduced from days to hours
      • Knowledge base can be / has been extended to include new public sources in hours (days / weeks with curation)
        • Expensive commercial database no longer needed due to ease of integrating public resources
  • 27. UBC / PROOF: Qualitative…
      • Visual SPARQL presents queries as hypotheses
        • Makes it possible for researchers to iteratively create, test and refine hypotheses
        • Research queries were published to a web service for easy scale-up
        • Extended SPARQL delivers practically useful classifiers with scoring
      • Use of RDF makes enrichment with public sources practical
      • Provenance, reference annotation, original data accessible for review
    • “ [The] ability to consume and intuitively represent a wide variety of data-types - from images to quantitative data - and more importantly, display that data in ways that make the significant features immediately obvious to our biologist end-users, has allowed us to move to a completely new level of data analysis….”
  • 28. Recap – Core Benefit Drivers
    • Semantic technologies provide a standard method and framework for data publication and interoperability… not a new “data standard”!
    • Low barrier to entry for data publishing – extensible building blocks for agile, growing integrations
    • Reduced effort, time and cost to deliver and maintain data definitions and applications, particularly those that depend on federation
    • Growing public resources are a catalyst and value-add
    • Projects that were impractical become practical to achieve and maintain
  • 29. Thank you! Next, W3C… For questions:
    • Email: [email_address]
    • Website: www.io-informatics.com