Gulf of Mexico Hydrocarbon Database: Integrating Heterogeneous Data for Improved Model Development

Presented on January 26, 2014 in Mobile, Alabama, USA at the 2014 Gulf of Mexico Oil Spill & Ecosystem Conference

  • Hello, my name is Anne Thessen and I’m going to speak to you about some model development that we’ve been doing. First, I would like to acknowledge my coauthors, Sean McGinnis, Elizabeth North and Ian Mitchell. We are part of the Gulf Integrated Spill Research Consortium. Our work was funded by the Gulf of Mexico Research Initiative, and we received institutional support from Arizona State University and the University of Maryland Center for Environmental Science. I recently started my own business, The Data Detektiv, which does the type of data work I’m about to present. If you like what you see and have need of this sort of expertise in your project, please see me after the talk. These slides will be posted to SlideShare later today.
  • This talk is primarily about building a database from multiple data sets. Here is a list of all the data providers. As you can see, we have many. It takes a village to build a database. We have a fantastic product and we are just starting to scratch the surface of what it can tell us, so we really appreciate all these folks sharing data and answering questions about their data.
  • The goal of our project is to modify an existing Lagrangian transport model, called LTRANS, so that it can be effectively used to understand the processes that determine transport and fate of hydrocarbons in the Gulf of Mexico. I won’t say much more about the model itself, but if you are interested, here is where you can learn more. The figure shows model output and field data together. The small points are model output and the large circles are field data. You can see there is a good match here, but we are dealing with geographic sampling bias.
  • To determine the efficacy of the model, we are comparing the output to field data collected after the Deepwater Horizon explosion. To accomplish this, we are compiling a database of oceanographic and hydrocarbon field measurements called the GISR Deepwater Horizon database. It can be queried to get the output we need for analysis. Currently, it is over 13 GB in size and contains over 8 million georeferenced data points gathered from published and unpublished sources, industry, government databases, volunteer networks and individual researchers.
  • The database contains multiple types of oceanographic and chemistry data. This plot is an example of database content: it shows naphthalene data from the beginning of August 2010. We have well over 10,000 naphthalene data points.
  • We encountered four major challenges while building and using this database. I will talk about each of them in turn.
  • The first challenge was finding and accessing data sets. A significant number of data sets were not in a repository or part of the published literature - and we expected this. To discover data we looked through project directories, databases of awarded projects, the literature, and the internet. That gave us a list of contacts. We ended up identifying 146 potentially relevant projects. We approached each contact via email to find out if they had data and if they were relevant. At the end of the process we identified 95 relevant data sets.
  • Once the data sets were discovered, they had to be accessed. Some were freely available and were simply downloaded. Some data sets were in repositories, which sometimes meant working with the data manager to gain access. Others were published as a table in supplementary material. Most data sets involved communicating with the provider to get the complete data set and the metadata. There were a few instances where the provider instructed us to take the data from a figure, but we tried to avoid doing that. Out of those 95 relevant data sets, we received responses to 58% of our inquiries and were able to obtain 40% of the data sets. This chart is a breakdown of the 95 data sets. The dark orange represents the data sets we asked for and received, along with a response. The dark purple represents the data sets that were freely available online, so no communication was necessary. The light orange represents the data sets that were denied to us. The light purple represents the inquiries that went completely unanswered. You can look at this another way: the orange represents communication and the purple represents no communication, while the dark colors represent data and the light colors represent no data. This is quite good compared to sharing rates in some communities, which can be as low as 10%.
  • Then came the process of integrating the data sets, which brings us to our second challenge: heterogeneity. We encountered heterogeneity in terms, units, formats, structures and quality codes. We normalized terms, units and codes algorithmically, and some of the formats and structures were normalized algorithmically as well. Terms were normalized using a Google Fusion Table that lists a “preferred name” and all of the synonyms for that name. An algorithm generates a table relating each synonym to its preferred name. This is connected to the database such that the preferred name can be pulled from the table. That way, when the database is queried for a particular analyte, we don’t miss data because the original data set used one synonym instead of another; a sketch of this lookup is given below. For example, benzoic acid has five synonyms in the table. We had over 2,000 terms before reconciliation and 1,367 terms after reconciliation.
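    A minimal sketch of this kind of synonym lookup, assuming a hypothetical in-memory table rather than the actual Fusion Table or database schema (the benzoic acid entries are the ones shown on slide 9):

        using System;
        using System.Collections.Generic;

        // Illustrative synonym reconciliation: every known synonym maps to a
        // preferred analyte name. Only the benzoic acid synonyms from slide 9
        // are shown; the real table holds thousands of terms.
        class TermReconciler
        {
            private readonly Dictionary<string, string> _preferred =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
                {
                    { "Benzoic Acid",   "Benzoic Acid" },
                    { "Carboxybenzene", "Benzoic Acid" },
                    { "E210",           "Benzoic Acid" },
                    { "Dracylic Acid",  "Benzoic Acid" },
                    { "C7H6O2",         "Benzoic Acid" },
                };

            // Returns the preferred name, or the original term if it is unknown,
            // so unrecognized terms can be flagged for manual reconciliation.
            public string Normalize(string term) =>
                _preferred.TryGetValue(term.Trim(), out var name) ? name : term.Trim();
        }

    With a lookup like this, a query for “Benzoic Acid” also matches rows that originally used “E210” or “Carboxybenzene”, without changing the original data sets.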
  • The units are handled similarly, except there is a transformation step wherein some math is done to convert the value to the “preferred unit”; a sketch of this follows below. This allows us to normalize terms and units without changing the original data set. For example, n-Decane was represented by six different units. The number of different units in the database decreased substantially after reconciliation. Formats varied from Access databases and shapefiles to PDF tables. All data sets, except for the databases, were normalized to our schema and then imported into an SQL database. The databases were transformed to SQL and then joined. Sometimes this had to be done manually; sometimes we were able to write scripts to help. We are in the process of normalizing the quality codes, but we will probably handle them in the same manner as the terms and units.
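    A minimal sketch of the unit transformation step, assuming a hypothetical preferred unit of μg/kg for mass-based concentrations; the factors are standard mass-ratio conversions, and volume-based units such as ppbv (which would need the analyte’s molar mass) are left out:

        using System;
        using System.Collections.Generic;

        // Illustrative unit reconciliation: each reported unit maps to a factor
        // that converts the value to an assumed preferred unit of ug/kg (mass/mass).
        static class UnitReconciler
        {
            private static readonly Dictionary<string, double> ToMicrogramsPerKg =
                new Dictionary<string, double>(StringComparer.OrdinalIgnoreCase)
                {
                    { "mg/kg", 1000.0 },  // ppm by mass
                    { "ug/g",  1000.0 },  // ppm by mass
                    { "ug/kg", 1.0 },     // ppb by mass
                    { "ng/g",  1.0 },     // ppb by mass
                    { "ppb",   1.0 },     // assumed to be by mass
                    { "ppt",   0.001 },   // parts per trillion by mass
                };

            // Converts a reported value to the preferred unit; unknown units raise
            // an error so they can be reviewed rather than silently passed through.
            public static double ToPreferred(double value, string unit)
            {
                if (ToMicrogramsPerKg.TryGetValue(unit.Trim(), out var factor))
                    return value * factor;
                throw new ArgumentException($"No conversion defined for unit '{unit}'.");
            }
        }

    For example, UnitReconciler.ToPreferred(0.5, "mg/kg") returns 500 μg/kg, while the original value and unit remain untouched in the source data set.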
  • The third challenge was metadata (or lack thereof). Metadata was often missing, in a separate location or in a separate format from the actual data. At a minimum, for the database to work, we needed to know the basics of what, where and when.
  • Ideally, we were also able to get more information, such as methods and uncertainty. Compiling the metadata was an exercise in detective work that involved searching through multiple files and contacting data providers, and it was often a very time-consuming process. (A sketch of the fields involved is given below.)
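    As a rough illustration (the field names are hypothetical, not the actual schema), each data point needs at least the what/where/when fields plus attribution, with method and uncertainty captured when available:

        using System;

        // Illustrative record for one georeferenced data point: the first block of
        // fields is the minimum needed for the database to work; Method and
        // Uncertainty are the optional metadata recovered where possible.
        public record DataPoint(
            string Name,           // what was measured, e.g. "Naphthalene"
            double Value,
            string Unit,
            double Latitude,       // where
            double Longitude,
            double DepthMeters,
            DateTime TimeStamp,    // when
            string Attribution,    // who provided the data / how to cite it
            string? Method = null,        // optional: analytical method
            double? Uncertainty = null);  // optional: reported uncertainty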
  • The final challenge was in actually putting the database to use, and we have only scratched the surface in this regard. We developed the “nearest neighbor” algorithm to connect a data point in the model output to its partner in the database (or vice versa) based on space and time. This is accomplished via a C# script that takes as input a link to each data set and the names of the fields to be considered in the distance function between two points. The distance function is currently implemented as a stepped function, in which candidate points are filtered first by date, then by geospatial distance, and finally by depth (a sketch follows below). The output is given in SQL and links data points via their data point ID in the database.
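    A minimal sketch of that stepped filtering, reusing the DataPoint record sketched above; the thresholds, the haversine distance, and the method names are illustrative assumptions, not the actual C# script:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative stepped nearest-neighbor matching between model output and
        // field data. Limits are supplied by the caller; the real script reads its
        // field names and limits from its input.
        static class NearestNeighbor
        {
            public static IEnumerable<(DataPoint Model, DataPoint Field)> Match(
                IEnumerable<DataPoint> modelOutput,
                IReadOnlyList<DataPoint> fieldData,
                TimeSpan maxTimeGap,
                double maxDistanceKm,
                double maxDepthGapMeters)
            {
                foreach (var m in modelOutput)
                {
                    // Step 1: keep only field points close enough in time.
                    var candidates = fieldData
                        .Where(f => (f.TimeStamp - m.TimeStamp).Duration() <= maxTimeGap)
                        // Step 2: then filter by horizontal (great-circle) distance.
                        .Where(f => HaversineKm(m.Latitude, m.Longitude,
                                                f.Latitude, f.Longitude) <= maxDistanceKm)
                        // Step 3: finally filter by depth.
                        .Where(f => Math.Abs(f.DepthMeters - m.DepthMeters) <= maxDepthGapMeters);

                    // A point may have many neighbors, or none at all.
                    foreach (var f in candidates)
                        yield return (m, f);
                }
            }

            // Great-circle distance in kilometres between two latitude/longitude points.
            private static double HaversineKm(double lat1, double lon1,
                                              double lat2, double lon2)
            {
                const double R = 6371.0; // mean Earth radius in km
                double dLat = ToRad(lat2 - lat1);
                double dLon = ToRad(lon2 - lon1);
                double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                         + Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2))
                         * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
                return 2 * R * Math.Asin(Math.Sqrt(a));
            }

            private static double ToRad(double degrees) => degrees * Math.PI / 180.0;
        }

    The matched pairs would then be written back by data point ID, which is how the actual output links points in the database.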
  • Some important features of nearest-neighbor matching include: we set limits on what is considered a nearest neighbor, not all data points have to be matched, data points can have many neighbors, and matching is done before the query.
  • An important part of reusing other people’s data is citing them appropriately. Data set citation is still a relatively new concept, but it’s starting to gain momentum through tools like ImpactStory, which summarizes sharing activity, and repositories like FigShare and Dryad. We worked with each of the data providers to find out how they wanted to be cited. Typically, if the data had a publication, the provider wanted the publication to be cited, but not all data sets had a publication. Data sets in repositories often had a citation already developed and provided by the repository. There were plenty of data sets that were unpublished and not in a repository; for these we worked with the provider to generate a citation, which involved encouraging the provider to deposit the data and receive a citable, unique identifier for it. If the data set was already online, like on a personal website, the access URL was given in the citation. We also plan to develop a citation for the database as a whole with all of the providers as authors. In the future, when a user executes a query, they will also be presented with a list of citations for the data sets that appear in the query results, so they can cite the database as a whole or the individual data sets they actually use (a sketch of that planned behavior follows below).
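    A small sketch of that planned behavior, assuming a hypothetical mapping from data set identifiers to citation strings:

        using System.Collections.Generic;
        using System.Linq;

        // Illustrative citation list builder: given the data set identifiers
        // attached to the rows of a query result, return one citation per
        // distinct contributing data set.
        static class CitationBuilder
        {
            public static IReadOnlyList<string> CitationsFor(
                IEnumerable<string> dataSetIdsInResult,
                IReadOnlyDictionary<string, string> citationByDataSet)
            {
                return dataSetIdsInResult
                    .Distinct()
                    .Select(id => citationByDataSet.TryGetValue(id, out var citation)
                                  ? citation
                                  : $"Data set {id}: citation pending deposit")
                    .OrderBy(c => c)
                    .ToList();
            }
        }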
  • We have accomplished a lot, but we still have much to do to fulfill our goals. There will be a lot of additional data released over the next year that will be added to the database. We will be giving web access to the contributors and plan to incorporate their feedback to improve usability before opening to a wider audience. We are currently drafting and refining a users’ guide. There will be manuscripts published on the process of gathering data that I just described and a more technical paper on the database itself. At the top of my wish list is building a more semantically-intelligent structure for improved query. We don’t currently have funds for this, but an ontology of terms would enable users to query for classes of parameters instead of single parameters and to use terms of their choosing. As I said before, this is a really great resource that we have only just begun to use. We look forward to getting many insights out of this database and making it available for others to get even more.
  • With that, I can take questions now, or if you want to speak in more detail about this project or The Data Detektiv, I would be happy to sit down over coffee, food or beer. I have worked on many different data types, including oceanographic, chemical, ecological and taxonomic data sets. I can help you solve your data problems so you can spend more time on research.
  • Most of the responses we received were quite timely. 40% were received within the first 24 hours and 27% were received within 2-7 days.
  • This process can be quite labor intensive. Some data sets required up to 24 email exchanges to get all of the data and metadata situated. The average was 7.8 emails.
  • We were actively denied data by 24% of the 95 contacts made. The other 36% did not respond to our requests at all, so we know nothing about those data sets or why they weren’t shared. For the 24% that did give us a reason, “paper not published yet” was the primary one, and all of these folks expressed willingness to share after publication, so the sharing rate will increase dramatically once those papers come out. 17% directed me to another person who did not respond at all. Only 9% told me they were too busy. Another 9% said the data or the samples got messed up in some way and were not useful. Interestingly, medical problems were also cited as a reason for not sharing. As an aside, we know there are large data sets that are not available to us for legal reasons because they were part of the Natural Resource Damage Assessment; those data sets were not included in any of these statistics. These percentages do not add up to 100: the last 26% was a combination of random, one-off reasons or folks being hesitant and not really giving a reason.
  • Transcript of "Gulf of Mexico Hydrocarbon Database: Integrating Heterogeneous Data for Improved Model Development"

    1. 1. Gulf of Mexico Hydrocarbon Database: Integrating Heterogeneous Data for Improved Model Development Anne E. Thessen, Sean McGinnis, Elizabeth North, and Ian Mitchell http://www.slideshare.net/athessen
    2. 2. Thank You to Data Providers • • • • • • • • • • • • • • • • • • • • • NOAA/NOS Office of Response and Restoration Commonwealth Scientific and Industrial Research Organization Environmental Protection Commission of Hillsborough County National Estuarine Research Reserves Sarah Allan Kim Anderson Jamie Pierson Nan Walker Ed Overton Richard Aronson Ryan Moody Charlotte Brunner William Patterson Kyeong Park Kendra Daly Liz Kujawinski Jana Goldman Jay Lunden Samuel Georgian Leslie Wade British Petroleum • • • • • • • • • • • • • • • • • • • • • • • Joe Montoya Terry Hazen Mandy Joye Richard Camilli Chris Reddy John Kessler David Valentine Tom Soniat Matt Tarr Tom Bianchi Tom Miller Elise Gornish Terry Wade Steven Lohrenz Dick Snyder Paul Montagna Patrick Bieber Wei Wu Mitchell Roffer Dongjoo Joung Mark Williams Don Blake Jordan Pino • • • • • • • • • • • • • • • • • • • • • • • John Valentine Jeffrey Baguely Gary Ervin Erik Cordes Michaeol Perdue Bill Stickle Andrew Zimmerman Andrew Whitehead Alice Ortmann Alan Shiller Laodong Guo A. Ravishankara Ken Aikin Tom Ryerson Prabhakar Clement Christine Ennis Eric Williams Ed Sherwood Julie Bosch Wade Jeffrey Chet Pilley Just Cebrian Ambrose Bordelon
    3. 3. LTRANS • Lagrangian Transport Model • Open Source • http://northweb.hpl.umces.edu/LTRANS.htm • Used to predict transport of particles, subsurface hydrocarbons, and surface oil slicks (in development)
    4. 4. GISR Deepwater Horizon Database • Over 8 million georeferenced data points • Over 13 GB • Over 2000 analytes and parameters [Chart: Number of Data Points]
    5. 5. Database Contents • Oceanographic Data – Salinity – Temperature – Oxygen – More • Air • Water • Tissue • Sediment/Soil • Chemistry Data – Hydrocarbons – Heavy metals – Nutrients – More [Plot: naphthalene example, n > 10,000]
    6. 6. Challenges • Obtaining the data • Heterogeneity • Metadata • Comparison
    7. 7. The Great Data Hunt • Discovery – Project directory – Funding agency records – Literature – Internet search [Chart: Total Data Sets Discovered, n = 146, with relevant subset highlighted]
    8. 8. The Great Data Hunt • Access – Online – Ask directly – Literature [Chart legend: data and response; no data and response; no data, no response; data, no response] We received responses to 58% of our inquiries and obtained 40% of the identified data sets
    9. 9. Heterogeneity • Heterogeneity – Terms – Units – Format – Structure – Quality Codes [Example: Carboxybenzene, E210, Dracylic Acid, C7H6O2 and Benzoic Acid all map to the preferred name Benzoic Acid; 2,212 terms before reconciliation, 1,367 after]
    10. 10. Heterogeneity • Heterogeneity – Terms – Units – Format – Structure – Quality Codes [Example: n-Decane reported in parts per trillion, ppbv, μg/g, ng/g, ppt, mg/kg, μg/kg and ppb; 122 units before reconciliation, 37 after]
    11. 11. Metadata • Metadata – Missing – Not computable [Diagram: Data Point linked to Name, Unit, Location, Time and Attribution]
    12. 12. Metadata • Metadata – Missing – Not computable [Diagram: Data Point linked to Name, Unit, Method, Location, Time, Attribution and Uncertainty]
    13. 13. Comparing to Model Output [Diagram: model output in netCDF format (Parameter, Depth, Latitude, Longitude, TimeStamp) matched by the Nearest Neighbor Algorithm to the database in SQL (Parameter, Depth, Latitude, Longitude, TimeStamp)]
    14. 14. Comparing to Model Output • Set limits on what is considered nearest neighbor • Not all data points have to be matched • Data points can have many neighbors • Matching is done before query
    15. 15. Attribution and Citation • Literature citation • Repository identifier • Generate new
    16. 16. Future Work • More data • User feedback • Web Access • Users’ Guide • Manuscripts • Improved query
    17. 17. Questions?
    18. 18. The Great Data Hunt • Discovery • Access – Online – Ask directly – Literature We received responses to 58% of our inquiries and obtained 40% of the identified data sets. 40% of those responses were received within 24 hours and 27% were received within the first week. [Histogram: Number of Responses vs. Time to First Response (Days), bins from First Day to 151-180 days]
    19. 19. The Great Data Hunt • Discovery • Access – Online – Ask directly – Literature 0-24 email exchanges per data set. We received responses to 58% of our inquiries and obtained 40% of the identified data sets. 40% of those responses were received within 24 hours and 27% were received within the first week. [Histogram: Number of Data Sets vs. Number of Emails (0-24)]
    20. 20. Why didn’t people share? • Paper not published yet – 30% • Passed the buck – 17% • Too busy – 9% • Medical problems – 9% • Poor quality – 9%