US EPA OSWER Linked Data Workshop 1-Feb-2013


Overview of US EPA's Linked Data Service to launch in early 2013. Open data published using the Linked Data model increases search engines' ability to find and display high value data sets. Linked Data enables policy makers, analysts and developers to more readily access and re-use data.

  1. Linked Data at US EPA, 1-Feb-2013. Linked Data Workshop with EPA OSWER, by Bernadette Hyland, David Wood & Luke Ruth. These slides will walk us through a common workflow use case comparing the Linked Data Service to Envirofacts.
  2. Agenda • Intros • Trends in data management • Government data publication • Update on EPA Linked Data Service • EPA OSWER moving towards Linked Data? • Review next steps
  3. Trends in government data management
  4. Headlines and agency memos about government transparency with open data appear across various government websites... innovation challenges based on open government data... High-energy datapaloozas are emerging, with awards ranging from a couple of thousand dollars to $100k+. These challenges open the doors to innovation for better healthcare solutions and more efficient use of energy, to name but a few. They all require access to and re-use of HIGH QUALITY DATA. In 2012, we read many headlines about big data and the world's search engines and social media sites.
  5. However, while there is lots of gold to be mined from public data, it is an uncomfortable time for government IT and business managers who are tasked with data management programs. Most people are having a difficult time keeping up. If you feel like you are hanging on while the world changes too fast, you are not alone. Photo credit:
  6. Who is sharing their data as Linked Data? Small and large commercial and government organizations, NGOs, non-profits... plus many universities. Governments in the last few years have been responding to Open Government initiatives that mandate publishing open government data. Some are careful, slow-moving entities who simply needed to find real solutions to real problems.
  7. Governments. Goals: governmental transparency and/or improved internal efficiencies (data warehouses)
  8. Big Data. Simple data. Complex data. Legacy data. KEY POINT: Search, discovery and data access approaches have evolved over the last decade, and techniques are beginning to come together. GoPubMed was launched in 2002 as the first semantic search portal. Later, Microsoft's Bing and Google's Knowledge Graph became two of the other well-known search engines employing semantic techniques. Semantic search systems generally consider the context of search, location, intent, variation of words, synonyms and concepts. Semantic search has roots in linguistic research and NLP. Big data research has grown to include the MapReduce algorithm for handling really large data sets, often measured in terabytes or greater. This is the kind of data that people at the Large Hadron Collider at CERN are working on to provide insights into how the universe works, including the recent discovery of the Higgs boson, the particle that gives mass to matter. Under the big top tent of semantic search we're dealing with different types of content: big, public, complex and legacy data. Simple, complex and legacy data comes in small, medium and large sizes. Many government agencies, by contrast, have lots of small to medium data sets in structured databases, like Oracle. These databases (and the systems that depend upon them) are not going away; however, fewer new data warehouse projects are likely to be started. Data warehouses are widely recognized to be costly to create and maintain, and they change SLOWLY. The biggest win for governments worldwide who adopt a Web architecture for data publishing is combining data sets to discover new or previously uncontemplated relationships.
  9. "Big Data Is Important, but Open Data Is More Valuable." As change agents, enterprise architects can help their organizations become richer through strategies such as open data. David Newman, VP Research, Gartner. Open data refers to the idea that certain data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other forms of control. The term "open data" has gained popularity with open data initiatives and government data catalog sites. Enterprise architects are playing an important role in fostering information-sharing practices. Access to, and use of, open data will be particularly critical for businesses that operate using the Web; organizations should focus on using open data to enhance business practices that generate growth and innovation.
  10. Open data + open standards + open platforms. Highly scalable computing & hosting via the Cloud. International data exchange standards. 5 Star Data (Linked Data). Open Source tools. A Web-oriented approach to information sharing has impacted how scientists, researchers, regulators and the public interact with government. Linked Data lowers the barriers to re-use and interoperability among multiple, distributed and heterogeneous data sources. Access to high-quality Linked Open Data via the Web means millions of researchers and developers will be able to shorten the time-consuming research process involving data cleansing and modeling.
  11. How do we get a loose coupling of shared data over Web architectures? By using the structured data model for the Web: RDF. There is a project to create freely available data on the Web in this way, known as the Linked Open Data project. W3C sees Linked Data as the set of best practices and technologies to support worldwide data access, integration and creative re-use of authoritative data.
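To make the RDF model concrete, here is a minimal sketch (not any EPA or W3C code; the names are invented) that represents triples as plain tuples, with a tiny wildcard matcher standing in for graph queries:

```python
# Minimal sketch of RDF's subject-predicate-object model. The "ex:" names are
# invented for illustration; real Linked Data uses full URIs.
triples = {
    ("ex:HansonPermanente", "rdf:type", "ex:Facility"),
    ("ex:HansonPermanente", "ex:locatedInZip", "95014"),
    ("ex:HansonPermanente", "ex:released", "ex:Mercury"),
}

def match(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

facility_facts = match(triples, s="ex:HansonPermanente")
```

Because every statement has the same three-part shape, graphs from different sources can be merged by simple set union, which is the property the Linked Open Data project relies on.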
  12. [image slide]
  13. The mission of the Government Linked Data (GLD) Working Group is to provide standards and other information which help governments around the world publish their data as effective and usable Linked Data using Semantic Web technologies. We are 16 months into the Government Linked Data Working Group's two-year charter.
  14. A sound government information management strategy requires providing CONTEXT and CONFIDENCE to those accessing and potentially re-using your data. When people have timely access to information, for disaster preparedness, scientific research, policy and research, the network effect of people helping people is our greatest hope. On the heels of the recent East Coast hurricane that devastated parts of New York and New Jersey, government executives suggested that fear of cyber-doom scenarios may be taking up too much of our thinking & planning. According to Secretary Panetta, it may be driving us to unrealistic and potentially dangerous responses to threats that don't exist. The reality is that when disasters strike, people come together and help one another. We don't see paralysis, panic and social collapse. During today's session, I'll describe how several agencies and private sector organizations are using Web technologies and semantics to improve information access and discovery. Simply put, semantic technologies provide CONTEXT.
  15. Open Government Data
  16. Growing chorus... "We're moving from managing documents to managing discrete pieces of open data and content which can be tagged, shared, secured, mashed up and presented in the way that is most useful for the consumer of that information." -- Report on Digital Government: Building a 21st Century Platform to Better Serve the American People. The Digital Government Strategy sets out to accomplish three things: access to high-quality digital information & services; procuring and managing devices, applications, and data in smart, secure and affordable ways; and unlocking the power of government data to spur innovation. Governments around the world are defining detailed digital services plans based on open data, open APIs and open source data platforms. They are defining how governments publish data with an eye towards improving access and re-use. Administrators and program managers are committing to delivery of digital services using semantic technologies broadly, and Linked Data specifically.
  17. Big data. Integrating... • Simple data • Complex data • Legacy data. We need to find ways to fit things together that weren't originally intended to fit together. NB: This is the Musée du Louvre, which has evolved from a late 12th-century fortress under Philip II, extended over centuries to incorporate the landmark Inverted Pyramid architected by I.M. Pei and completed in 1993. Its new galleries for Islamic art, the result of a recent competition, opened this year, 2012. It continues to accommodate new works of art & galleries in new & previously unanticipated ways. Today, we need to bring big data + complex data + public data + legacy data together into one consistent whole.
  18. September 2011: 295 datasets meet the LOD Cloud criteria, consisting of over 31 billion RDF triples interlinked by around 504 million links.
  19. THERE IS A PROCESS: Identify, Model, Name, Describe, Convert, Publish, Maintain. Take comfort in the fact that there is a familiar process. It is similar to the process & roles of traditional data modeling. Creating Linked Data requires that we identify the data and model exemplar records -- what you are going to carry forward & what you are going to leave behind. Name all of the NOUNs; turn the records into URIs. Next, describe RESOURCES with vocabularies. Write a script or process to convert from canonical form to RDF. Then publish. Maintain over time.
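As a sketch of the Name and Convert steps above, the snippet below turns one tabular record into N-Triples. The base URI and predicate names are invented for illustration, not EPA's actual vocabulary:

```python
# Hypothetical conversion of a canonical (tabular) record to N-Triples.
# The URIs below are illustrative placeholders.
def to_ntriples(record, base="http://example.org/id/facility/"):
    subject = "<%s%s>" % (base, record["id"])
    lines = []
    for field, value in record.items():
        if field == "id":  # the id becomes the subject URI, not a property
            continue
        predicate = "<http://example.org/vocab#%s>" % field
        lines.append('%s %s "%s" .' % (subject, predicate, value))
    return "\n".join(lines)

row = {"id": "110000123", "name": "Hanson Permanente Cement", "zip": "95014"}
ntriples = to_ntriples(row)
```

Once records are named with URIs like this, statements about the same facility produced by different scripts or agencies naturally merge, because they share the same subject.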
  20. 3 Round Stones produces the leading platform for the publication of reusable data on the Web. Our commercially supported Open Source platform is used by the Fortune 2000 and US Government agencies to collect, publish and reuse data, both on the public Internet and behind institutional firewalls. Our goal is to produce the leading platform for the publication of reusable data on the Web.
  21. Callimachus. Callimachus is that platform. It is available commercially and as Open Source.
  22. Content Management System: unstructured text. Linked Data Management System: structured data. Callimachus may be compared to a distributed CMS. CMSs manage mostly unstructured information. Callimachus, by contrast to a CMS, manages primarily structured Linked Data. We call this a Linked Data management system.
  23. Callimachus started in 2009 as a simple online RDF editor. Users could fill out HTML forms, which would create RDF behind the scenes. The resulting RDF could be viewed as HTML and, of course, shared and combined with other RDF data. Since then, HTML5 has allowed us to hack the browser environment much less and extend its capabilities.
  24. Data-driven Web apps using Callimachus: US Legislation + enterprise data; Clinical Trials + DBpedia + linked enterprise datasets. Callimachus integrates (very) well with other enterprise systems as well as Web content. It can form an entire application or part of one. NB: Mention Documentum, Oracle via HTTP.
  25. • US HHS committed to making a vast array of open data more readily available to improve health care delivery & reduce costs in 2013 and beyond. • In 2012, Sentara created a Web application that integrates authoritative data from 5 different sources, including content from NLM, NOAA, EPA and DBpedia. • This application utilizes open data, open standards and an open source data platform.
  26. User → US EPA AirNow, US EPA SunWise, NOAA, National Library of Medicine, DBpedia
  27. US EPA Linked Data • Cloud-based Linked Data provision of 3 core programs: 2.9M facilities, 100K substances, 25 years of toxic pollution reports • FISMA compliant • 16 Callimachus templates • Official launch Feb 2013
  28. Envirofacts, EPA's older system.
  29. EPA's new Linked Data system. Cooperation without coordination. Data reuse breaks the back of API gridlock. Clay Shirky stole that from me :)
  30. This data is exactly the same data used to create the interface. Unlike traditional database-driven applications, the data is immediately accessible for reuse by third parties. This prevents data duplication, allows for tracking of provenance and avoids reinventing the wheel.
  31. We've Seen This Before. Like HTML and RDF, credit cards have a human-readable side and a machine-readable side.
  32. Linked Data management system located at a Tier 1 Cloud Provider (FISMA compliant): RDF database, resource URIs, REST API, SPARQL endpoint; public Web browser; application, script or automated client; registered developer. Introduce Callimachus, an open source, open data platform based on open standards. 3 Round Stones provides commercial support for Callimachus and is a major contributor to the OS project. Users of Callimachus see a generated Web interface, but can also directly access the data via REST or SPARQL. SPARQL Named Queries (like stored procedures) allow for automated conversion to different formats for reuse in non-RDF environments.
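REST access to Linked Data is conventionally done with HTTP content negotiation: the same resource URI can return HTML or RDF depending on the Accept header. A hypothetical sketch (the URI is invented, and no request is actually sent):

```python
# Build (but don't send) a GET request asking a Linked Data server for an RDF
# serialization of a resource instead of its HTML view. The URI is illustrative.
from urllib.request import Request

def data_request(resource_uri, rdf_type="text/turtle"):
    """Same URI the browser uses; only the Accept header differs."""
    return Request(resource_uri, headers={"Accept": rdf_type})

req = data_request("http://example.org/id/facility/110000123")
```

A browser sending `Accept: text/html` would get the generated page, while a script sending `Accept: text/turtle` or `Accept: application/rdf+xml` would get the underlying data from the same address.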
  33. From EPA. From Wikipedia. OpenStreetMap. Data may be easily combined from several sources.
  34. HOW IT IS DONE TODAY ...
  35. Audience for EPA Data • Middle school student doing a science project • Concerned citizen worried about local pollution • Environmental Science PhD from EPA • Doctor from NIH writing a research paper. To understand the advantages and disadvantages of both systems, we need to know the audience for the system. That presents a problem, though: it's nearly impossible to know your audience at any given moment. Even if it were possible, the audience is so varied that it would be unwise to cater to a single group at the expense of another. The audience could be a middle school student, a concerned local citizen, a PhD collecting information for a report, or a doctor writing a research paper. We just don't know. For example, if the system were designed to accommodate a 6th grader, the system would be oversimplified and thin. If it were designed with a PhD or a doctor in mind, the average citizen could be overwhelmed and find the system complicated and verbose. That's why the goal should be to make the simple things easy and the complicated things possible.
  36. How much mercury did Hanson Permanente Cement release in 2004? With that in mind, let's walk through our example, trying to keep all our audience members in mind. Let's pretend our audience members live in Cupertino, California and want to know about the local cement plant. The question is: how much mercury did Hanson Permanente Cement release in 2004?
  37. The process starts out much the same. The user enters their zip code into the search field, which in this case is 95014, and is presented with the results. From here, though, the workflow is quite different.
  38. Envirofacts
  39. Rather than immediately returning a list of the facilities in that zip code, Envirofacts gives users the option to verify their location and drag, drop, and resize a map to match their request. While this provides a high level of granularity, it makes a simple thing harder than it needs to be for a lot of users. While some people may know the physical boundary of their zip code, the average user would most likely trust the application to take care of that.
  40. Envirofacts then returns a page that looks like this (with some rows obviously cut from the table). It's great that, right up front, the user is able to both see the results and have access to the data. There is a link they can paste into their browser to get the raw data, as well as a button to click that will download the results as CSV. The big problem here, though, is that the data comes back with formatting that renders it nearly unusable and content that adds very little value above what is present on the screen. Copying and pasting the link at the top of the screen gives us the following data:
  41. It comes back in what *appears* to be CSV. However, there is no actual indication of that. There are no column headers, no descriptions of what fields represent, the text is difficult to partition and understand, and there are no links to documentation on the use of the API or how the query is structured. All we know is that it is data from "this report". The other link - the CSV Table link - returns the following data:
  42. Having the table as CSV is theoretically useful, but it too comes back without the necessary structure to understand and use the data. This time there are column headers, but it is unclear what exactly they represent. Many of the fields are just the URL links found in the HTML table from the website. There doesn't appear to be raw data - just links to where the raw data could potentially be found. Moving on from accessing the data, we will try to find the facility we are looking for - Hanson Permanente.
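To make the contrast concrete, here is a small invented example (these are not EPA's actual field names or values) of why a bare CSV row is hard to reuse while labeled data is self-describing:

```python
# An invented row in the style described above: without headers, the consumer
# must guess what each column means; attaching field names restores the meaning.
import csv, io

raw = "110000123,HANSON PERMANENTE CEMENT,95014\r\n"
row = next(csv.reader(io.StringIO(raw)))
# Which value is the registry ID and which is a report number? The file can't say.

labeled = dict(zip(["frs_id", "facility_name", "zip"], row))
```

Self-describing formats like RDF carry the equivalent of those labels with every value, so no out-of-band documentation is needed to interpret a download.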
  43. Finding Hanson Permanente. Finding Hanson Permanente can only be done in this table by scrolling down and finding "Hanson" in the alphabetically sorted list of facilities. There are no options to sort on the contents of another column or search within the table. The unfortunate side effect of this is that by the time you scroll down you can no longer see the column headers. Simply looking at this screen, you can see that there are 8 separate reports that can be viewed, but it is unclear how they are differentiated and what each contains. The key here is that the data reflects internal EPA systems - which are unknown to the majority of users. By doing this, Envirofacts is implicitly asking users to become experts on internal EPA systems, which they either are not capable of, or do not have the time for.
  44. Finding Mercury Released in 2004. Because most users do not have this knowledge, the first report they'll most likely click is the Summary Report. The "Summary Report" brings us to a long page where, after quite a bit of scrolling, we can see the Toxic Releases for 2011. However, unlike the previous search results, this data is not available for download or retrieval by any means other than screen-scraping or re-keying. It is also a limited dataset and does not have the data for 2004.
  45. Compliance Report. The Summary, Facility, AFS, BR, RCRA, TRI, and TSCA Reports at their top level do not have the data about mercury either. It is actually contained in the Compliance Report. However, like the other tables, there is no way to download this data and repurpose it for other applications. The other source of confusion is that this data can be found in multiple places depending on its originating report, and it can be unclear whether the data is in fact the same. For example, this data can also be found by drilling down in the TRI Report by clicking "View Report" -> "P2 Report (Report)" -> "P2 Report" -> and then manipulating the view based on the year and view you want. These graphs and charts ultimately contain very interesting and relevant data, but they are so obscured and inaccessible that it becomes extremely difficult to create anything new.
  46. Potential Audience: ✗ Middle school student doing a science project; ✗ Concerned citizen worried about local pollution; ✔ Environmental Science PhD from EPA; ✗ Doctor from NIH writing a research paper. Who did we cater to? The middle school student? Probably not. The concerned citizen? Unless that citizen happens to have specific knowledge of the EPA system and a great deal of experience navigating technology, most likely not. What about the Environmental Science PhD and the doctor from NIH? They may have the knowledge to understand column names, chemical compounds, and reporting a bit better, but only the Environmental Science PhD with a working knowledge of EPA's system can determine enough information to make use of it. The doctor, on the other hand, is still working against the system itself to find the data behind it.
  47. Linked Data. Now let's look at the same workflow in the Linked Data Service.
  48. Finding Hanson Permanente. By keeping the application simple - and letting the results be viewed either as a table or a map - the user can adjust their search as they see fit without extra navigation. Also, by having the data in a table that can be searched or sorted however the user sees fit, finding a specific facility is as easy as typing the name in or sorting on relevant criteria. This is made possible by exposing the data, rather than containing it in a standard HTML table. I fully recognize that Envirofacts could offer identical functionality by tweaking their application, but the key underlying point is that this application was created very cheaply and quickly *because* the data is modeled as Linked Data. When the development environment is a Web browser, and the data is described and linked, an application can be a simple XHTML page with JavaScript, instead of a heavyweight dedicated application.
  49. Finding Mercury Released in 2004. There are two very important things to note on this page. 1: on any facility's page, there is always an option to download the data. This data is available in two formats (RDF/XML and Turtle). With the click of a button, a user can have all of the data that was used to drive the creation of the current page, which means he or she can repurpose that data into any new application. Note here that this download is not an extract, summary, or recreation of the data - it is literally the *same* data that was used to drive that page. 2: because this page is "data-driven", navigation relies on exploring the data, not the system that contains it. On the same page where we get information like the facility's latitude and longitude, we can also find a link to a report detailing exactly how much mercury was released in 2004. We could easily do an in-page search for 2004 or Mercury to identify the releases associated with those terms.
  50. TRI Report. Rather than aggregating the data for presentation, the actual report is presented with the raw data continuously available in the top right of the page. A subtle difference to be pointed out here is the difference in the name of the facility. Previously it was identified as Hanson Permanente, but now it is known as Lehigh Southwest Cement Co. During the modeling phase, the Linked Data was created to implicitly include this relationship (which is known via the mapping of EPA FRS identifiers). On the other hand, pulling down the CSV files would not give the user any obvious way of understanding this relationship.
  51. Data Reuse. Lastly, giving users the ability to grab the data off any page, at any time during navigation, strongly facilitates the reuse of data. These graphs are not natively embedded in the webpage of a given facility. Rather, by downloading the data the user can quickly and easily make new and different visualizations for a report or presentation. For example, this history of air stack pollution reports was made with a single parameterized SPARQL query and a single JavaScript pattern. This could very easily be applied to any number of facilities, changed to a bar graph, or altered in any number of other ways with very little effort, thanks to the fact that it was modeled using Linked Data.
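The parameterized-query idea can be sketched as one query template reused across facilities. The predicates and URI below are invented placeholders, not the service's actual vocabulary:

```python
# One SPARQL query template, many facilities: substitute a different facility
# URI to drive a different chart. The "ex:" predicates are illustrative.
from string import Template

RELEASES = Template("""SELECT ?year ?amount
WHERE {
  <$facility> ex:stackRelease ?release .
  ?release ex:year ?year ;
           ex:amount ?amount .
}
ORDER BY ?year""")

def releases_query(facility_uri):
    return RELEASES.substitute(facility=facility_uri)

query = releases_query("http://example.org/id/facility/110000123")
```

The same template with a different facility URI yields the data behind a different graph, which is why switching facilities, or chart types, takes so little effort.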
  52. Potential Audience: ✔ Middle school student doing a science project; ✔ Concerned citizen worried about local pollution; ✔ Environmental Science PhD from EPA; ✔ Doctor from NIH writing a research paper. Linked Data allowed us to reach all the members of our potential audience by giving the user options, aggregating based on relevance rather than data source, and by exposing the data that drives the service for reuse. The middle school student or concerned citizen who wants to know the location of a facility, the amount of a particular chemical it released, and the year it was released never has to click any of the options in the Linked Data box. They can simply use the interface, explore the data, and find what they need in a read-only experience. The Environmental Science PhD is still able to find what he is looking for with Linked Data, but can do so in a much more intuitive way. The doctor from NIH is now able to find the data they're interested in and, if they choose to take the next step, download the actual data behind the page. By quickly and easily obtaining the raw data, anyone from scientists to journalists can generate their own applications without any knowledge of the Linked Data Service itself.
  53. What Callimachus is
  54. Subject, Predicate, Object. The heart of Callimachus is a template engine used to navigate, visualize and build applications upon Linked Data. Here we see some typical RDF data, with a subject, a predicate and an object.
  55. Subject, Object (predicate is defined in a template). Callimachus can use that data to build complex Web pages.
  56. Subject, Predicate (object gets filled in when the template is evaluated). It does this with a template language that is simply XHTML with RDFa markup. There are some extensions for syntactic convenience.
  57. Templates • Written in XHTML+RDFa (declarative pattern) • Parsed to create SPARQL queries • Query results are filled into the same template.
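A toy illustration of that parse-and-fill cycle (this is not Callimachus's actual parser; the regex approach and names are invented): collect the RDFa property slots from a template and emit a query whose results would fill those same slots:

```python
import re

def template_to_query(template_html):
    """Turn each property="..." slot into a triple pattern on ?this; the
    projected variables correspond one-to-one with the template's slots."""
    props = re.findall(r'property="([\w:]+)"', template_html)
    vars_ = ["?v%d" % i for i in range(len(props))]
    where = " ".join("?this %s %s ." % (p, v) for p, v in zip(props, vars_))
    return "SELECT %s WHERE { %s }" % (" ".join(vars_), where)

tpl = '<div><span property="rdfs:label"/><span property="ex:zip"/></div>'
query = template_to_query(tpl)
# query == 'SELECT ?v0 ?v1 WHERE { ?this rdfs:label ?v0 . ?this ex:zip ?v1 . }'
```

The key design point is that one artifact, the template, serves as both the query definition and the rendering target, so the view can never drift out of sync with the data it asks for.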
  58. [Architecture diagram: HTTP GET request → Web server controller → RDF store (SPARQL query, RDF response) and template engine (XHTML+RDFa template, apply.xsl) → HTML in the HTTP response.] Callimachus is implemented as a Web MVC architecture with an underlying RDF DB. The process shown demonstrates how a view is generated from a Web request.
  59. Create/edit templates are HTML forms. Callimachus templates may also be used to create or edit RDF data by generating HTML forms.
  60. Callimachus provides a pseudo file system that is used to store and represent content, including RDF/OWL data, named SPARQL queries, schemata, templates, etc. The pseudo file system provides a common view of content that is abstracted from its actual storage location; RDF data is stored in an RDF store, whereas file-oriented content is stored in a BLOB store.
  61. Documents, including data and ontologies, can be uploaded via drag-and-drop when using an HTML5-compliant browser. File upload via a separate interface is available for older browsers.
  62. Linked Data management system located at a Tier 1 Cloud Provider (FISMA compliant): RDF database, resource URIs, REST API, SPARQL endpoint; public Web browser; application, script or automated client; registered developer. Users of Callimachus see a generated Web interface, but can also directly access the data via REST or SPARQL. SPARQL Named Queries (like stored procedures) allow for automated conversion to different formats for reuse in non-RDF environments.
  63. Callimachus can associate SPARQL queries with URLs, so that they are executed when their URL is resolved. We call these "named queries" and they are analogous to stored procedures in a relational database. Named queries can accept parameters, which allows them to be a very flexible way to manage routine access to queries that can drive visualizations.
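A hypothetical sketch of the named-query idea: the query lives at a URL, and parameters travel in the query string, much like arguments to a stored procedure. The host, path, and parameter names below are invented:

```python
# Resolving a named query is just an HTTP GET; a client only needs to build
# the URL. All names here are illustrative, not a real endpoint.
from urllib.parse import urlencode

def named_query_url(base, query_name, **params):
    """Compose the URL at which a hypothetical named query would resolve."""
    return "%s/queries/%s?%s" % (base, query_name, urlencode(params))

url = named_query_url("http://example.org", "releases-by-year",
                      facility="110000123", substance="mercury")
```

Because the result is just a URL, a chart widget or spreadsheet can consume the query's output directly, with no SPARQL knowledge on the client side.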
  64. The view of a named query displays its results. Results, like template results, are naturally cached to increase performance.
  65. The results of named queries may be formatted in a variety of ways, or arbitrarily transformed via XProc pipelines and XSLT. This screenshot shows the results of a named query being used to drive a Google Chart widget. Callimachus also has stock transforms for d3 visualizations.
  66. Leather tags holding metadata. Papyrus rolls.
  67. Credits: Gartner, "Innovation Insight: Linked Data Drives Innovation Through Information-Sharing Network Effects," David Newman, published 15 December 2011. Linking Government Data, Springer (2011), David Wood, ed. Digital Government Strategy: Building a 21st Century Platform to Better Serve the American People, US Executive Branch. W3C Linked Data Cookbook. All other photos and images © 2010-2012 3 Round Stones, Inc., released under a CC-BY-SA license.
  68. This work is Copyright © 2011-2012 3 Round Stones Inc. It is licensed under the Creative Commons Attribution 3.0 Unported License. Full details at: You are free: to Share -- to copy, distribute and transmit the work; to Remix -- to adapt the work. Under the following conditions: Attribution -- you must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). Share Alike -- if you alter, transform, or build upon this work, you may distribute the resulting work only under the same or a similar license to this one. This presentation is licensed under a Creative Commons BY-SA license, allowing you to share and remix its contents as long as you give us attribution and share alike.