2004-11-13 Supersite Relational Database Project: (Data Portal?)




1. Supersite Relational Database Project: (Data Portal?)
   A sub-project of the St. Louis Midwest Supersite Project
   Draft of the November 16, 2001 presentation to the Supersite Program
   Nov 13, 2001
2. Purpose of the Supersite Relational Database System
   - Design, populate and maintain a database which:
     - Includes monitoring data from Supersites and auxiliary projects
     - Facilitates cross-Supersite [regional or comparative] data analyses
     - Supports analyses by a variety of research groups
3. Stated Features of the Relational Data System
   - Data Input:
     - Data input electronically through FTP, Web browser (CD, if necessary)
     - Modest amount of metadata on sites, instruments, data sources/versions, contacts, etc.
     - Data structures, formats and submission procedures kept simple for the submitters
   - Data Storage and Maintenance:
     - Data stored in relational database(s), possibly distributed over multiple servers
     - Maintenance of a data holdings catalog and request logs
     - Data updated quarterly
   - Data Access:
     - Access method: user-friendly web access by multiple authorized users
     - Data finding: metadata catalog of datasets
     - Data query: by parameter, method, location, date/time, or other metadata
     - Data output format: ASCII, spreadsheet, other (dbf, XML)
4. Database Schema Design
   - Fact Table: the fact table (yellow) contains the main data of interest, i.e. the pollutant concentration by location, day, pollutant and measurement method.
   - Star Schema: a central fact table surrounded by de-normalized dimension tables (blue) describing the sites, parameters, methods, etc.
   - Snowflake Schema: an extension of the star schema in which each point of the star 'explodes' into further fully normalized tables, expanding the description of each dimension.
   - A snowflake schema can capture all the key data content and relationships in full detail. It is well suited for capturing and encoding complex monitoring data into a robust relational database.
5. Abstract (Minimal) Star Schema for Integrative, Cross-Supersite, Spatio-Temporal Analysis
   - The minimal Site table includes SiteID, Name and Lat/Lon.
   - The minimal Parameter table consists of ParameterID, Description and Unit.
   - The time dimension table is usually skipped, since time is self-describing.
   - The minimal Fact (Data) table consists of Obs_Value and the three dimensional codes: Obs_DateTime, Site_ID and Parameter_ID.
   For integrative, cross-Supersite analysis, with data queries by time, location and parameter, the database has to have time, location and parameter as dimensions.
   This minimal (multidimensional) schema was used in the CAPITA data exploration software, Voyager, for the past 22 years, encoding 1000+ datasets. Most Supersite data require a more elaborate schema to fully capture the content.
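The minimal star schema above can be sketched concretely. The table and column names (Site, Parameter, Fact, SiteID, Obs_Value, etc.) follow the slide; the SQLite engine, sample site and sample values are illustrative assumptions, not project data.

```python
# Minimal star schema: one fact table plus Site and Parameter dimensions.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE Site      (SiteID INTEGER PRIMARY KEY, Name TEXT, Lat REAL, Lon REAL);
CREATE TABLE Parameter (ParameterID INTEGER PRIMARY KEY, Description TEXT, Unit TEXT);
CREATE TABLE Fact      (Obs_DateTime TEXT, Site_ID INTEGER, Parameter_ID INTEGER,
                        Obs_Value REAL,
                        FOREIGN KEY (Site_ID) REFERENCES Site(SiteID),
                        FOREIGN KEY (Parameter_ID) REFERENCES Parameter(ParameterID));
""")
cur.execute("INSERT INTO Site VALUES (1, 'St. Louis', 38.63, -90.20)")
cur.execute("INSERT INTO Parameter VALUES (10, 'PM2.5 mass', 'ug/m3')")
cur.execute("INSERT INTO Fact VALUES ('2001-11-13T00:00', 1, 10, 17.4)")

# A spatio-temporal query joins the fact table to its dimension tables.
row = cur.execute("""
    SELECT s.Name, p.Description, f.Obs_DateTime, f.Obs_Value
    FROM Fact f
    JOIN Site s      ON s.SiteID = f.Site_ID
    JOIN Parameter p ON p.ParameterID = f.Parameter_ID
    WHERE p.Description = 'PM2.5 mass'
""").fetchone()
print(row)  # ('St. Louis', 'PM2.5 mass', '2001-11-13T00:00', 17.4)
```

Every query by time, location or parameter reduces to the same join pattern, which is what makes the minimal schema suitable for cross-Supersite analysis.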
6. Extended Star Schema for SRDS
   - The Supersite program employs a variety of instruments, sampling methods and procedures.
   - Hence, at least one additional dimension table is needed for Methods.
   - An example extended star schema encodes the IMPROVE relational database (B. Schichtel).
7. Snowflake Example: Central Calif. AQ Study, CCAQS
   - The CCAQS schema incorporates a rich set of parameters needed for QA/QC (e.g. sample tracking) as well as for data analysis.
   - The fully relational CCAQS schema permits the enforcement of integrity constraints, and it has been demonstrated to be useful for data entry/verification.
   - However, no two snowflakes are identical. A rich snowflake schema for one sampling/analysis environment cannot easily be transplanted elsewhere.
   - More importantly, many of the recorded parameters 'on the fringes' are not particularly useful for integrative, cross-Supersite, regional analyses.
8. Data Portal: Features
   - Data reside in their respective home environments. 'Uprooted' data in separate databases are not easily updated, maintained or enriched.
   - Abstract (universal) query/retrieval facilitates integration and comparison along the key dimensions (space, time, parameter, method).
   - The open-architecture data portal (based on Web Services) promotes the building of further value chains: data viewers, data integration programs, automatic report generators, etc.
9. From Heterogeneous to Homogeneous Schema
   - Individual Supersite SQL databases can be queried along the spatial, temporal and parameter dimensions. However, the query needed to retrieve the same information depends on the schema of the particular database.
   - A way to homogenize the distributed data is to access all the data through a Data Adapter, using only a subset of the tables/fields from any particular database (red).
   - The proposed extracted (abstract) schema is the Minimal Star Schema (possibly expanded). The final form of the extracted data schema will be arrived at by consensus.
   [Diagram: Data Adapter extracting homogeneous data from heterogeneous sources; subset of each source schema mapped to the abstract-schema fact table]
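The Data Adapter idea can be sketched as follows: each provider keeps its own schema, and a per-provider adapter maps a generic (site, parameter, time-range) query onto that schema, returning rows in the minimal star-schema form. The class names, field names and sample record here are illustrative assumptions, not part of the project.

```python
# Per-provider adapters homogenize heterogeneous schemas into one fact-table form.
from dataclasses import dataclass

@dataclass
class Observation:
    """The abstract (minimal star schema) fact-table record."""
    obs_datetime: str
    site_id: str
    parameter_id: str
    obs_value: float

class DataAdapter:
    """Base class: one generic dimensional query; one subclass per provider."""
    def query(self, site_id, parameter_id, t0, t1):
        raise NotImplementedError

class ProviderA(DataAdapter):
    # Provider A happens to store rows with its own field names ("when", "loc", ...).
    rows = [{"when": "2001-11-13", "loc": "STL01", "param": "PM25", "val": 17.4}]

    def query(self, site_id, parameter_id, t0, t1):
        # Map the provider's fields onto the abstract schema, filtering by dimensions.
        return [Observation(r["when"], r["loc"], r["param"], r["val"])
                for r in self.rows
                if r["loc"] == site_id and r["param"] == parameter_id
                and t0 <= r["when"] <= t1]

obs = ProviderA().query("STL01", "PM25", "2001-01-01", "2001-12-31")
```

A user issues the same `query(site, parameter, t0, t1)` call against every provider; only the adapter body differs, which is the homogenization the slide describes.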
10. Federated Data Warehouse Architecture
   - Three-tier architecture consisting of:
     - Provider Tier: back-end servers containing heterogeneous data, maintained by the federation members
     - Proxy Tier: retrieves designated Provider data and homogenizes it into common, uniform datasets
     - User Tier: accesses the Proxy Server and uses the uniform data for presentation, integration or processing
   - The Provider servers interact only with the Proxy Server, in accordance with the Federation Contract:
     - The contract sets the rules of interaction (accessible data subsets, types of queries)
     - Strong server security measures are enforced, e.g. through Secure Sockets Layer
   - The data User interacts only with the generic Proxy Server, using a flexible Web Services interface:
     - Generic data queries, applicable to all data in the Warehouse (e.g. a space/time/parameter data sub-cube)
     - The data query is addressed to a Web Service provided by the Proxy Server of the Federation
     - Uniformly formatted, self-describing data packages are handed to the user for presentation or further processing
   [Diagram: Provider Tier (SQLServer1, SQLServer2, LegacyServer behind SQLDataAdapter1, SQLDataAdapter2, CustomDataAdapter) connects through a firewall under the Federation Contract to the Proxy Tier (Proxy Server, data homogenization), which serves the User Tier (data consumption, processing, integration) via a Web Service with uniform query and data]
11. Universal Query/Response from SQL Servers
   - A common feature of all SQL databases for AQ data is that they can be queried along the spatial, temporal and parameter dimensions.
   - However, the query needed to retrieve the same information depends on the schema of the particular database.
   - A way to homogenize the distributed data is to access all the data through an abstract virtual schema.
12. Summary of Proposed Database Schema Design
   - The starting point for the design of the Supersite Relational Database schema will be the Minimal Star Schema for fixed-location monitoring data.
   - Extensions will be made if they clearly benefit regional analyses and cross-Supersite comparisons.
   - The possible extensions, based on user needs, may include the addition of:
     - A 'Methods' dimension table to identify the sampling/analysis method of each observation
     - Additional attributes (columns) in the Site and Parameter tables
   - The Supersite data are not yet ready for submission to the NARSTO archive. Thus, there is still time to develop an agreed-upon schema for the Supersite data in SRDS.
   - The schema modifications and the consensus-building will be conducted through the SRDS website.
13. Data Entry to the Supersite Relational Data System
   - Automatic translation and transfer of NARSTO-archived DES data to SQL
   - Web submission of relational tables by the data producers/custodians
   - Batch transfer of large auxiliary datasets to the SQL server
   [Diagram: EPA Supersite data and coordinated Supersite relational tables pass through NARSTO ORNL DES data ingest and the EOSDIS data archive; a DES-SQL transformer, a manual-SQL transformer, auxiliary batch data and direct web data input feed the Supersite SQL Server, which serves data queries and table output]
14. Data Preparation Procedures
   - Data gathering, QA/QC and standard formatting are to be done by the individual projects.
   - The data exchange standards, data ingest and archives are handled by ORNL and NASA.
   - Data ingest is to be automated, aided by tools and procedures supplied by this project:
     - NARSTO DES-SQL translator
     - Web submission tools and procedures
     - Metadata catalog and I/O facilities
   - Data submissions and access will be password-protected, as set by the community.
   - Submitted data will be retained in a temporary buffer space and, following verification, transferred to the shared SQL database.
   - Data access, submissions, etc. will be automatically recorded and summarized in human-readable reports.
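The buffer-then-verify ingest step can be sketched as below: submitted records land in a temporary buffer, and only rows that pass verification move on to the shared table. The record layout and the verification rule are illustrative assumptions; the actual NARSTO DES format is not reproduced here.

```python
# Sketch of buffered ingest with verification before transfer to the shared database.
import csv, io

SUBMISSION = """site_id,parameter_id,obs_datetime,obs_value
STL01,PM25,2001-11-13T00:00,17.4
STL01,PM25,2001-11-13T01:00,not_a_number
"""

def verify(row):
    """Accept a row only if its observation value parses as a number."""
    try:
        float(row["obs_value"])
        return True
    except ValueError:
        return False

# Stage 1: everything lands in the temporary buffer.
buffer_rows = list(csv.DictReader(io.StringIO(SUBMISSION)))

# Stage 2: verified rows go to the shared table; the rest are reported back.
accepted = [r for r in buffer_rows if verify(r)]
rejected = [r for r in buffer_rows if not verify(r)]
print(len(accepted), len(rejected))  # 1 1
```

The `accepted`/`rejected` split is also the natural place to generate the human-readable submission reports the slide mentions.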
15. Data Catalog
   - Data catalog and discussion page of the CAPITA Xsystem
16. Related CAPITA Projects
   - EPA Network Design Project (~$150K/yr - April 2003). Development of novel quantitative methods of network optimization. The network performance evaluation is conducted using the complete PM FRM data set in AIRS, which will be available for input into the SRDS.
   - EPA WebVis Project (~$120K/yr - April 2003). Delivery of current visibility data to the public through a web-based system. The surface met data are being transferred into the SQL database (since March 2001) and will be available to SRDS.
   - NSF Collaboration Support Project (~$140K/yr - Dec 2004). Continuing development of interactive websites for community discussions and for web-based data sharing; directly applicable to this project.
   - NOAA ASOS Analysis Project (~$50K/yr - May 2002). Evaluation of the potential utility of the ASOS visibility sensors (900 sites, one-minute resolution) as a PM surrogate. Data now available for April-October 2001 can be incorporated into the Supersite Relational Data System.
   - St. Louis Supersite Project website (~$50K/yr - Dec 2003). The CAPITA group maintains the St. Louis Supersite website and some auxiliary data. It will also be used for this project.
17. Federated Data Warehouse Architecture
   - Distributed data of multiple types (spatial, temporal, text)
   - Data are rendered by linked data views (map, time, text)
   - The Broker handles the views, connections, data access and cursor
   [Diagram: Data Warehouse Tier (satellite imagery, vector GIS data, XDim data, OLAP cube, SQL table, web-page text data) connects through XML Web Services, HTTP services and OpenGIS services to a Data View & Process Tier (time chart, scatter chart, text/table, layered map, cursor), coordinated by data view, connection, data access and cursor-query managers]
18. Example Data Viewer (to be made more Supersite-relevant)
   - Views: map view, variable view, time view, webcam view
   - The views are linked, so that making a change in one view, such as selecting a different location in the map view, updates the other views.
19. Supersite Relational Data System: Schedule
   - First four months: design of the relational database, associated data transformers and I/O; submitted to the Supersite workgroups for comment
   - In six months: Supersite data preparation and entry begins
   - In Year 2 and Year 3: data sets updated by providers as needed; system accessible to the data user community
   [Timeline: Year 1 (2002) - RDMS design, feedback, implementation and SQL testing; Year 2 (2003) - Supersite, auxiliary and other coordinated data entry; Year 3 (2004) - Supersite, coordinated and auxiliary data updates]
20. Personnel, Management and Facilities
   - Personnel:
     - PI R. B. Husar (10%), Kari Hoijarvi (25%). Software experience at CAPITA, Microsoft, Visala.
     - 20% of the project budget ($12K/yr) to consultants: J. Watson, DRI; W. White and J. Turner, WU.
     - Collaborators (CAPITA associates): B. Schichtel, CIRA; S. Falke, EPA; M. Bezic, Microsoft.
   - Management:
     - This project is a sub-project of the St. Louis-Midwest Supersite project, Dr. Jay Turner, PI.
     - Special focus on supporting large-scale, crosscutting and integrative analysis.
     - This project will leverage the other CAPITA data-sharing projects.
   - Resources and Facilities:
     - CAPITA has the largest known privately held collection of air quality, meteorological and emission data, available in uniform Voyager format and extensively accessed from the CAPITA website.
     - The computing and communication facilities include two servers, ten workstations and laptops, connected internally and externally through high-speed networks.
     - Software development tools, including Visual Studio, part of the .NET development environment.
21. Miscellaneous Stuff
   - The remaining pages are potentially reusable material, not yet organized.
22. OpenGIS Web Services
   - Mission: definition and specification of geospatial web services.
   - A web service is an application that can be published, located, and dynamically invoked across the Web.
   - Applications and other web services can discover and invoke the service.
   - The sponsors of the Web Services initiative include:
     - Federal Geographic Data Committee
     - Natural Resources Canada
     - Lockheed Martin
     - National Aeronautics and Space Administration
     - U.S. Army Corps of Engineers Engineer Research and Development Center
     - U.S. Environmental Protection Agency EMPACT Program
     - U.S. Geological Survey
     - US National Imagery and Mapping Agency
   - Phase I - February 2002:
     - Common Architecture: OGC Services Model, OGC Registry Services, and Sensor Model Language
     - Web Mapping: Map Server (raster), Feature Server (vector), Coverage Server (image), Coverage Portrayal Services
     - Sensor Web: OpenGIS Sensor Collection Service for accessing data from a variety of land, water, air and other sensors
23. Distributed Data Analysis & Dissemination System: D-DADS
   - Specifications:
     - Uses standardized forms of data, metadata and access protocols
     - Supports distributed data archives, each run by its own provider
     - Provides tools for data exploration, analysis and presentation
   - Features:
     - Data are structured as relational tables and multidimensional data cubes
     - Dimensional data cubes are distributed but shared
     - Analysis is supported by built-in and user functions
     - Supports other data types, such as images, GIS data layers, etc.
24. D-DADS Architecture
25. The D-DADS Components
   - Data Providers supply primary data to the system, through SQL or other data servers.
   - Standardized Description & Format: the data cubes and other data types are populated and described using standard metadata.
   - Data Access and Manipulation: tools providing a unified interface to data cubes, GIS data layers, etc., for accessing and processing (filtering, aggregating, fusing) data and integrating data into virtual data cubes.
   - Users are the analysts who access the D-DADS and produce knowledge from the data.
   The multidimensional data access and manipulation component of D-DADS will be implemented using OLAP.
26. Interoperability
   One requirement for an effective distributed environmental data system is interoperability, defined as "the ability to freely exchange all kinds of spatial information about the Earth and about objects and phenomena on, above, and below the Earth's surface; and to cooperatively, over networks, run software capable of manipulating such information." (Buehler & McKee, 1996)
   Such a system has two key elements:
   - Exchange of meaningful information
   - Cooperative and distributed data management
27. On-line Analytical Processing: OLAP
   - A multidimensional data model making it easy to select, navigate, integrate and explore the data.
   - An analytical query language providing the power to filter, aggregate and merge data, as well as explore complex data relationships.
   - The ability to create calculated variables from expressions based on other variables in the database.
   - Pre-calculation of frequently queried aggregate values, e.g. monthly averages, enables fast response to ad hoc queries.
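The pre-calculation point can be illustrated with a minimal roll-up: frequently queried aggregates (here, monthly averages) are computed once into a cube, and ad hoc queries are then answered from the cube instead of rescanning the raw observations. The sample data and cube layout are illustrative assumptions.

```python
# OLAP-style pre-aggregation: roll daily observations up to monthly averages.
from collections import defaultdict

raw = [  # (site, parameter, ISO date, value) - illustrative observations
    ("STL01", "PM25", "2001-07-03", 20.0),
    ("STL01", "PM25", "2001-07-18", 10.0),
    ("STL01", "PM25", "2001-08-02", 30.0),
]

# Pre-calculate: accumulate sums and counts per (site, parameter, month) cell.
sums = defaultdict(lambda: [0.0, 0])
for site, param, date, value in raw:
    cell = (site, param, date[:7])        # month key = 'YYYY-MM'
    sums[cell][0] += value
    sums[cell][1] += 1
cube = {cell: total / n for cell, (total, n) in sums.items()}

# An ad hoc query against the cube is now a lookup, not a table scan.
print(cube[("STL01", "PM25", "2001-07")])  # 15.0
```

The same pattern extends to other roll-ups (annual means, site maxima); the trade-off is storage and refresh cost for the pre-computed cells against query speed.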
28. User Interaction with D-DADS
   [Diagram: the user's query goes to the distributed database, which returns XML data rendered in a data view (table, map, etc.)]
29. Metadata Standardization
   Metadata standards for describing air quality data are currently being actively pursued by several organizations, including:
   - The Supersite Data Management Workgroup
   - NARSTO
   - FGDC
30. Potential D-DADS Nodes
   The following organizations are potential nodes in a distributed data analysis and dissemination system:
   - CAPITA
   - NPS-CIRA
   - EPA Supersites:
     - California
     - Texas
     - St. Louis
31. Summary
   - In the past, data analysis has been hampered by data flow resistances. However, the tools and framework to overcome each of these resistances now exist, including:
     - World Wide Web
     - XML
     - OLAP
     - OpenGIS
     - Metadata standards
   - Incorporating these tools will initiate a distributed data analysis and dissemination system.
32. 'Global' and 'Local' AQ Analysis
   - AQ data analysis needs to be performed at both global and local levels.
   - 'Global' refers to regional, national, and global analysis. It establishes the larger-scale context.
   - 'Local' analysis focuses on specific and detailed local features.
   - Both global and local analyses are needed for full understanding.
   - Global-local interaction (information flow) needs to be established for effective management.
   [Figure: National and Local AQ Analysis]
33. Data Re-Use and Synergy
   - Data producers maintain their own workspaces and resources (data, reports, comments).
   - Part of the resources are shared by creating a common virtual resource.
   - Web-based integration of the resources can span several dimensions:
     - Spatial scale: local-global data sharing
     - Data content: combination of data generated internally and externally
   - The main benefits of sharing are data re-use, data complementing and synergy.
   - The goal of the system is to have the benefits of sharing outweigh the costs.
   [Diagram: local users and content connect through virtual shared resources (data, knowledge, tools, methods) to global users; only part of each resource is shared]
34. Integration for Global-Local Activities
   Global Activity => Local Benefit:
   - Global data, tools => improved local productivity
   - Global data analysis => spatial context; initial analysis
   - Analysis guidance => standardized analysis, reporting
   Local Activity => Global Benefit:
   - Local data, tools => improved global productivity
   - Local data analysis => elucidate, expand initial analysis
   - Identify relevant issues => responsive, relevant global work
   Global and local activities are both needed - e.g. 'think global, act local'.
   'Global' and 'local' here refer to relative, not absolute, scale.
35. Content Integration for Multiple Uses (Reports)
   - Data from multiple measurements are shared by their providers or custodians.
   - Data are integrated, filtered, aggregated and fused in the process of analysis.
   - Reports use the analysis for status and trends, exposure assessment, compliance, etc.
   - The creation of the needed reports requires data sharing and integration from multiple sources.