This document discusses efforts to standardize data products from three NASA laser altimeter missions - ICESat, ICESat-2, and MABEL. It describes designing similar data products for all three missions to promote interoperability. Products are being developed for ICESat data using lessons from MABEL. Code is also being generated to help create products from specifications to reduce development time. The goal is to make the multi-rate point data from all three missions easily accessible and usable.
This document discusses using HDF4 file content maps to enable cloud computing capabilities for HDF4 files. HDF4 files contain scientific data but their large size and legacy format pose challenges. The document proposes creating XML maps that describe HDF4 file structure and contents, including chunk locations and sizes. These maps could then be indexed and searched to locate relevant data chunks. Only those chunks would need to be extracted to the cloud, avoiding unnecessary data transfers. This would allow HDF4 files to be queried and analyzed using cloud-based tools while reducing storage costs.
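To make the idea concrete, here is a minimal sketch of map-driven access, assuming a simplified, hypothetical map schema (the actual HDF4 map XML produced by The HDF Group's mapping tools differs in detail): given a dataset name and chunk index, the map supplies a byte offset and length, and an HTTP range request retrieves just that chunk.

```python
# Hypothetical sketch: the <dataset>/<chunk offset= size=> schema below is
# illustrative, not the real HDF4 map format.
import xml.etree.ElementTree as ET
import requests

def read_chunk(map_xml, dataset_name, chunk_index, file_url):
    """Fetch one data chunk of a remote HDF4 file using only its map."""
    root = ET.fromstring(map_xml)
    ds = root.find(f".//dataset[@name='{dataset_name}']")
    chunk = ds.findall("chunk")[chunk_index]
    offset, size = int(chunk.get("offset")), int(chunk.get("size"))
    # HTTP range request: only these bytes leave the archive.
    resp = requests.get(file_url,
                        headers={"Range": f"bytes={offset}-{offset + size - 1}"})
    resp.raise_for_status()
    return resp.content  # bytes exactly as stored (possibly compressed)
```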
The document discusses the current status and schedule of HDF-EOS development. HDF-EOS 2 is the current storage format for EOS standard products, using HDF4, and is supported by NASA. HDF-EOS 5 is a rewrite using HDF5 that is designed to resemble HDF-EOS 2. It will be used by the EOS Aura mission. The document outlines current archive holdings using HDF-EOS 2 and functionality included in HDF-EOS 5.
The document discusses view_hdf, a visualization and analysis tool developed to access data from HDF products generated by NASA's CERES Data Management System. view_hdf allows users to select and plot variables from CERES Science Data Sets without needing knowledge of HDF formats. It provides capabilities such as 2D and 3D graphics, geographic mapping, statistics computation, and saving/printing plots. Contact information is provided for accessing the CERES data center and documentation for view_hdf.
Twitter's Data Replicator for Google Cloud Storage (lohitvijayarenu)
Twitter replicates petabytes of data from its Hadoop clusters to Google Cloud Storage daily using a data replicator architecture. The replicator copies data in hourly/daily partitions from source clusters to destination clusters and Google Cloud Storage, maintaining consistent access for users through a unified file system abstraction and data access layer. This replication enables unlocking analytics tools on Google Cloud Platform and provides a backup of Twitter data on cloud object storage.
The HDF-EOS to GeoTIFF (HEG) conversion tool is a versatile tool: besides converting HDF-EOS to GeoTIFF, it subsets, resamples, converts swaths to grids and/or grids to different projections, subsamples, and performs mosaicking. The HDF-EOS plug-in is a library that extends HDFView, a visual tool for browsing and editing HDF4 and HDF5 files, for EOS applications. These tools will be discussed in detail and a demo will be presented.
This document provides information about HDF (Hierarchical Data Format) tools and resources for working with Earth observation data. It summarizes HDF's focus on helping users at different stages of working with data, from initial product design to long-term archiving. It also describes specific HDF tools for viewing, comparing, converting between formats and adding metadata to scientific data files.
This document discusses accessing Earth observation data through the OGC Web Coverage Service (WCS) 2.0 with an Earth Observation Application Profile (EO AP). It describes how the WCS EO AP maps Earth observation terminology to the WCS model, outlines the implementation of the WCS EO AP including supported data formats and products, and discusses future work such as adding more data support and integrating with other OGC services.
The status of HDF-EOS and access tools will be summarized. Updates on HDF-EOS, the TOOLKIT, the HDFView plug-in, and the HDF-EOS to GeoTIFF (HEG) conversion tool will be discussed, including recent changes to the software, ongoing maintenance, upcoming releases, future plans, and open issues.
This tutorial is designed for anyone who needs to work with data stored in HDF and HDF5 files.
The first part of the tutorial will focus on the HDF5 utilities used to display the contents of HDF5 files, to extract and import data from and to HDF5 files, to compare two HDF5 files, and more. Participants will be guided through hands-on examples and will learn about the different tool options. New changes and advanced features will be covered in a separate session (Updates on HDF tools) on Wednesday.
The second part of the tutorial includes a hands-on session with the HDF (4 & 5) Java browsing tool, HDFView. The tool and special plug-ins will be used to work with existing HDF, HDF-EOS, and netCDF-4 files, and to create a new HDF5 file. The tutorial will cover the basic features of HDFView.
This document provides an overview of the status of HDF-EOS software and tools. It describes HDF-EOS5, a rewrite of HDF-EOS2 based on HDF5, which is used operationally by EOS instrument teams. The document also outlines software releases, major developments including bug fixes, and future plans, and provides contact information for support.
GeoPackage, OWS Context and the OGC Interoperability Program (Raj Singh)
Overview of GeoPackage, OWS Context and the OGC Interoperability Program Testbed process with details on how OGC testbeds work and the time commitment.
The document discusses recent and upcoming improvements to parallel HDF5 for improved I/O performance on HPC systems. Recent improvements include reducing file truncations, distributing metadata writes across processes, and improved selection matching. Upcoming work includes a high-level HPC API, funding for Exascale-focused enhancements, and future improvements like asynchronous I/O and auto-tuning to parallel file systems. Performance tips are also provided like passing MPI hints and using collective I/O.
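As a concrete illustration of the collective-I/O tip, here is a minimal parallel HDF5 sketch in Python, assuming h5py was built against an MPI-enabled HDF5 (file and dataset names are illustrative):

```python
# Minimal parallel-HDF5 sketch; requires h5py built with MPI support.
# Run with, e.g.: mpiexec -n 4 python write_parallel.py
from mpi4py import MPI
import numpy as np
import h5py

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

with h5py.File("parallel.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("data", shape=(nprocs, 1000), dtype="f8")
    # Collective I/O: all ranks coordinate on one write, which typically
    # outperforms each rank writing independently.
    with dset.collective:
        dset[rank, :] = np.full(1000, rank, dtype="f8")
```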
Introduction to GeoPackage and OWS Context (Raj Singh)
GeoPackage is the modern alternative to formats like SDTS and Shapefile. At its core, GeoPackage is simply a SQLite database schema. If you know SQLite, you are close to knowing GeoPackage. Install Spatialite, the premier spatial extension to SQLite, and you get all the performance of a spatial database along with the convenience of a file-based data set that can be emailed, shared on a USB drive or burned to a DVD.
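Because a GeoPackage is just a SQLite file, even the Python standard library can inspect one; a small sketch (file name hypothetical) that lists the layers recorded in the required gpkg_contents table:

```python
# A GeoPackage is a SQLite database; the file name here is hypothetical.
import sqlite3

conn = sqlite3.connect("example.gpkg")
# gpkg_contents is the required table listing every layer in the package.
for table_name, data_type, srs_id in conn.execute(
        "SELECT table_name, data_type, srs_id FROM gpkg_contents"):
    print(f"{table_name}: {data_type} (SRS {srs_id})")
conn.close()
```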
A ‘context document’ specifies a fully configured service set which can be exchanged (with a consistent interpretation) among clients supporting the standard. The OGC Web Services Context Document (OWS Context) was created to allow a set of configured information resources (service set) to be passed between applications primarily as a collection of services. OWS Context is developed to support in-line content as well. The goal is to support use cases such as the distribution of search results, the exchange of a set of resources such as OGC Web Feature Service (WFS), Web Map Service (WMS), Web Map Tile Service (WMTS), Web Coverage Service (WCS) and others in a ‘common operating picture’. Additionally OWS Context can deliver a set of configured processing services (Web Processing Service (WPS)) parameters to allow the processing to be reproduced on different nodes.
In this talk, we will give an update on the HDF5 OPeNDAP project. We will update the new features inside OPeNDAP HDF5 data handler. We will also introduce a new HDF5-Friendly OPeNDAP client library and demonstrate how it can help users to view and analyze remote HDF-EOS5 data served by OPeNDAP HDF5 handler. A demo will be presented with a customized OPeNDAP visualization client (GrADS) that uses the library.
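For readers unfamiliar with the client side, here is a hedged sketch of what OPeNDAP access looks like from Python with Pydap; the server URL and variable name are hypothetical stand-ins for data served by the HDF5 handler:

```python
# Server URL and variable name are hypothetical.
from pydap.client import open_url

ds = open_url("http://example.org/opendap/OMI-Aura_L3_sample.he5")
var = ds["ColumnAmountO3"]
print(var.shape)           # shape comes from the DAP metadata response
subset = var[0:10, 0:10]   # only this slab is transferred over the network
```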
ENVI and IDL software support HDF and HDF-EOS. Capabilities and the HDF tools built on ENVI and IDL will be reviewed. The current development will be discussed and demonstrated.
Data produced by the Ozone PEATE from the Ozone Mapping and Profiler Suite (OMPS) instruments are to be stored in HDF5, not HDF-EOS, but will still need some features similar to those in HDF-EOS. In particular, a mechanism for handling dimension names will be needed. This poster proposes a method to handle dimension names for arrays in HDF5 in a manner commensurate with HDF-EOS5.
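For comparison, plain HDF5 already offers one mechanism for dimension names, dimension scales (the convention netCDF-4 uses); the sketch below uses it via h5py and is not necessarily the method the poster proposes (file and names are hypothetical):

```python
# Dimension scales in plain HDF5 via h5py; all names are hypothetical.
import numpy as np
import h5py

with h5py.File("omps_like.h5", "w") as f:
    nscan, npix = 100, 36
    scans = f.create_dataset("nScans", data=np.arange(nscan, dtype="f8"))
    scans.make_scale("nScans")                 # declare a named dimension
    rad = f.create_dataset("Radiance", shape=(nscan, npix), dtype="f4")
    rad.dims[0].attach_scale(scans)            # dimension 0 is "nScans"
    rad.dims[0].label = "nScans"
```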
Accessibility and usability of NPP/NPOESS data in HDF5 can be enhanced by providing tools that simplify and standardize how data is accessed and presented. In this project, The HDF Group is creating such tools in the form of software to read and write certain key data types and data aggregates used in NPP/NPOESS data products, and extending HDFView to extract, present and export these data effectively. In particular, the work will focus on NPP/NPOESS use of HDF5 region references and quality flags. The HDF Group will also provide high quality user support for the project.
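A minimal h5py sketch of HDF5 region references, the feature this work targets; the group and dataset names are hypothetical, not actual NPP/NPOESS product paths:

```python
# HDF5 region references with h5py; paths are hypothetical.
import numpy as np
import h5py

with h5py.File("npp_like.h5", "w") as f:
    data = f.create_dataset("All_Data/Radiance",
                            data=np.arange(100.0).reshape(10, 10))
    ref = data.regionref[2:5, :]          # "rows 2-4 of that dataset"
    refs = f.create_dataset("Data_Products/Gran_0", (1,),
                            dtype=h5py.regionref_dtype)
    refs[0] = ref
    # Dereference: recover both the target dataset and the region.
    stored = refs[0]
    print(f[stored][stored])              # the referenced 3x10 block
```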
The document discusses the transition of the CFD General Notation System (CGNS) to using HDF5 as its main storage format instead of ADF. CGNS provides a standard for storing computational fluid dynamics (CFD) simulation data. It is switching to HDF5 to take advantage of HDF5's capabilities like parallel I/O and availability in many tools, though initial HDF5 implementations have larger file sizes and slower I/O performance than ADF. The CGNS steering committee is evaluating the HDF5 implementation and investigating performance problems to further improve the transition.
The document provides an overview of National Polar-orbiting Operational Satellite System (NPOESS) HDF5 files. Key points include:
1) NPOESS is a satellite system that collects environmental data related to weather, atmosphere, oceans, land, and near-space. Data products are distributed in HDF5 format.
2) NPOESS HDF5 files contain raw data records, sensor data records, intermediate products, application related products, and environmental data records.
3) Data is organized into granules and aggregations. Granules contain a segment of data and are referenced by aggregations, which group granules over a temporal range.
This is a slide from HDF AND HDF-EOS WORKSHOP V, February 26 - 28, 2002. Source: http://hdfeos.org/workshops/ws05/presentations/Ullman/11c-Discussion_notes.ppt
HDF-EOS is a software library designed to support NASA Earth Observing System (EOS) science data. HDF is the Hierarchical Data Format developed by The HDF Group. Specific data structures in HDF-EOS5 which are containers for science data are: Grid, Point, Zonal Average and Swath. These data structures are constructed from standard HDF5 data objects, using EOS conventions, through the use of a software library. This presentation is intended to familiarize current HDF-EOS users with the structure of HDF-EOS5 files and the Grid, Swath, Point and Zonal Average structures used in these files.
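Since HDF-EOS5 files are standard HDF5 underneath, a generic HDF5 tool can reveal these conventions directly; a hedged h5py sketch (the file name is hypothetical; the group names reflect the usual HDF-EOS5 layout):

```python
# Inspecting an HDF-EOS5 file with plain h5py; file name is hypothetical.
import h5py

with h5py.File("OMI-Aura_L2_sample.he5", "r") as f:
    # EOS structures live under /HDFEOS; zonal averages are usually "ZAS".
    for kind in ("SWATHS", "GRIDS", "POINTS", "ZAS"):
        group = f.get(f"/HDFEOS/{kind}")
        if group is not None:
            print(kind, "->", list(group))
    # StructMetadata.0 is the ODL text describing every structure.
    print(f["/HDFEOS INFORMATION/StructMetadata.0"][()][:120])
```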
The document discusses migrating from HDF5 1.6 to HDF5 1.8. It provides an overview of new features in HDF5 1.8, including a revised file format, improvements to group storage, new link types like external links, and enhanced error handling. The document recommends helping with the transition to HDF5 1.8 by discussing beneficial new features and awareness of compatibility issues when moving from 1.6 to 1.8.
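External links are easy to demonstrate; a minimal h5py sketch of this 1.8 feature:

```python
# External links (new in HDF5 1.8) with h5py.
import h5py

with h5py.File("detail.h5", "w") as f:
    f.create_dataset("temperature", data=[1.0, 2.0, 3.0])

with h5py.File("master.h5", "w") as f:
    f["temp"] = h5py.ExternalLink("detail.h5", "/temperature")

with h5py.File("master.h5", "r") as f:
    print(f["temp"][:])   # transparently resolved into detail.h5
```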
Are you curious what is coming next? Are you willing to try some prototype software? If so, come to this talk to learn about new HDF5 features such as the HDF5 Fortran 2003 APIs, metadata journaling, and more, and how your application can benefit from using them.
An update on HDF, including a status report on the HDF Group, an overview of recent changes to the HDF4 and HDF5 libraries and tools, plans for future releases, HDF Group projects and collaborations, and future plans.
The current status of HDF-EOS and access tools will be summarized. Updates on HDF-EOS, the HDFView plug-in, and the HDF-EOS to GeoTIFF (HEG) conversion tool will be discussed, including recent changes to the software, ongoing maintenance, upcoming releases, future plans, and open issues.
This tutorial will introduce the three levels of the HDF-Java products: the HDF-Java wrapper (the Java Native Interfaces to the standard HDF libraries), the HDF-Java object package, and HDFView. The Java wrapper provides standard Java APIs that allow applications to call the C HDF libraries from Java. The HDF-Java object package implements HDF data objects, e.g. Groups and Datasets, in an object-oriented form and makes it easy for applications to use the libraries. HDFView is a visual tool for browsing and editing HDF4 and HDF5 files.
As the volume and complexity of data from myriad Earth observing platforms, both remote sensing and in-situ, increase, so does the demand for access to both the data and the information products derived from them. The audience is no longer restricted to investigator teams with specialist science credentials. Non-specialist users, from scientists in other disciplines and the science-literate public to teachers, the general public, and decision makers, want access. What prevents them from accessing these resources? It is the complexity of specialist-developed data formats, data set organizations, and specialist terminology. What can be done in response? We must shift the burden from the user to the data provider. To achieve this, our data infrastructures will likely need greater internal code and data-structure complexity in order to achieve (relatively) simpler end-user experiences. Evidence from numerous technical and consumer markets supports this scenario. We will cover the elements of modern data environments, the new use cases, and how we can respond to them.
The document provides an overview and status update of the Earth Observing System Data and Information System (EOSDIS). It describes how EOSDIS supports EOS missions by ingesting, processing, archiving, and distributing their data. It notes that the volume of archived data has grown to over 4.9 petabytes spanning more than 2700 datasets. It also outlines plans to transition to new systems and complete updates to the EOSDIS code by 2009.
HDF5 is a powerful and feature-rich creature, and getting the most out of it requires powerful tools. The MathWorks provides a "low-level" interface to the HDF5 library that closely corresponds to the C API and exposes much of its richness. This short tutorial will present ways to use the low-level MATLAB interface to build those tools and tackle such topics as subsetting, chunking, and compression.
NetCDF-Java is an open source Java library for reading scientific data formats like NetCDF, HDF5, HDF4, and OPeNDAP. It has been used as a component in many software projects. The library provides an object-oriented API for reading data from these file formats and exposing it to Java programs. It works by providing format readers for specific file types that can read data into the Common Data Model used by the library. The library has been tested against many file examples but could benefit from more systematic testing. Proper use of dimensions, variables, units, and metadata is important for self-documenting scientific data files.
This one-year research project, funded by the NOAA Climate Program Office (CPO) Scientific Data Stewardship (SDS) program, provides a solution for migrating data to a single standards-based archive format. Specifically, we investigate how to store NASA ECS data and metadata in HDF5 Archival Information Packages (AIP). To achieve this, the HDF4 to HDF5 conversion tool has been enhanced so that converted ECS data can be read through the NetCDF4/CDM interface. In addition, metadata tools will be developed that convert ECS collection- and granule-level metadata to NOAA's collection-level and NARA's METS standards. The enhanced HDF4 to HDF5 conversion tool was released in May 2008; its new functionality allows converted ECS data to be read through the NetCDF4 interface. We have tested 33 typical HDF-EOS2 swath, grid, and point products at the National Snow and Ice Data Center (NSIDC). We also demonstrate initial work to develop METS-compliant metadata from granule metadata held in NASA's Earth Observing System (EOS) Data and Information System (EOSDIS) Core System (ECS).
MODIS (Moderate Resolution Imaging Spectroradiometer) sensor data are highly useful for field research. However, the volume of MODIS data and the complexity of its data format make MODIS data less usable for some communities. To expand the use of MODIS data beyond traditional remote sensing specialists, the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC) prepares and distributes subsets of selected Land Products in a scale and format useful for undergraduate students and field researchers. MODIS subsets are provided for more than 1,000 sites across the globe. The subsets are offered in tabular ASCII format and in GIS-compatible GeoTIFF format. Time series plots and grid visualizations to help characterize field sites are also provided. In addition to offering subsets for fixed sites, the ORNL DAAC also offers the capability to create user-defined subsets for any location worldwide. The MODIS Global Subsetting Tool provides subsets from a single pixel up to 201 x 201 km for a user-defined time range. Statistics, time series plots, and GIS-compatible files for the customized subsets are also distributed through this tool. Users can also programmatically retrieve the subsets through a SOAP-based Web Service.
This tutorial is designed for HDF5 users with some HDF5 experience.
It will cover advanced features of the HDF5 library for achieving better I/O performance and efficient storage. The following HDF5 features will be discussed: partial I/O, chunked storage layout, compression and other filters including new n-bit and scale+offset filters. Significant time will be devoted to the discussion of complex HDF5 datatypes such as strings, variable-length datatypes, array and compound datatypes.
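A compact h5py sketch touching the listed features: chunked storage, compression, the scale+offset filter, a compound datatype, and partial I/O (all names and sizes are illustrative):

```python
# Chunking, compression, scale+offset, compound types, partial I/O.
import numpy as np
import h5py

with h5py.File("advanced.h5", "w") as f:
    d = f.create_dataset("temps", shape=(365, 720, 1440), dtype="f4",
                         chunks=(1, 720, 1440),     # one day per chunk
                         compression="gzip", compression_opts=4,
                         scaleoffset=2)             # keep 2 decimal digits
    d[0] = np.random.rand(720, 1440).astype("f4")   # partial I/O: one slab
    # Compound datatype: one record per station.
    station = np.dtype([("id", "i4"),
                        ("name", h5py.string_dtype()),
                        ("elev", "f4")])
    f.create_dataset("stations",
                     data=np.array([(1, "alpha", 120.5)], dtype=station))
```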
HDFLook is a software tool developed jointly by NASA GSFC and LOA USTL, France that allows users to view and analyze Earth science datasets. It provides capabilities to access, visualize, remap, reproject, subset, mosaic and convert files for MODIS, AIRS, and CERES products. HDFLook can be run in interactive, operational and batch modes and supports accessing geophysical and ancillary data for scientific analysis and operational uses. Over 3000 copies of HDFLook have been distributed worldwide to support the user community.
The document discusses GES DISC's experience supporting 7 MEaSUREs projects, which create long-term Earth science data records. It recommends file formats like HDF-EOS5 and netCDF4 to promote consistency and interoperability. While HDF5 was initially recommended, ad-hoc implementations have caused issues. The document provides information on GES DISC's recommendations for file naming, formats and metadata, as well as problems encountered and resources for learning more.
The document provides an overview and status update of the Earth Science Data and Information System (ESDIS). ESDIS has successfully supported numerous Earth science satellite missions and currently manages over 2 petabytes of science data. In fiscal year 2002, ESDIS delivered over 16 million data products to more than 1.8 million users. ESDIS is working to enhance its capabilities through initiatives like Data Pools and the EOS ClearingHOuse (ECHO) metadata broker. HDF-EOS 5 development is nearly complete and the workshop will discuss next steps for HDF-EOS tools and community adoption of HDF-EOS 5.
This document describes the Web Hierarchical Ordering Mechanism (WHOM), a tool for ordering HDF and HDF-EOS data from the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC). WHOM provides a web-based interface for navigating, searching, subsetting, and ordering NASA Earth science data held by the GES DAAC. It allows users to easily identify, subset, and order data of interest through a hierarchical browse of data products, calendar and map search tools, and a shopping cart system. The document provides an overview of WHOM's capabilities and interface, using MODIS and ocean data products as examples, and describes other HDF data processing and visualization tools developed by the GES DAAC.
The document discusses the National Snow and Ice Data Center's (NSIDC) use of HDF and HDF-EOS file formats to manage and distribute scientific cryosphere data. NSIDC collects data from satellites like MODIS, AMSR-E and aircraft missions, processes the data, and makes it available in HDF(5) formats. The HDF formats allow for efficient storage of multi-dimensional scientific datasets along with metadata. NSIDC develops tools to allow users to access, analyze and visualize data stored in HDF files.
This document provides an overview of the Earth Observing System Data and Information System (EOSDIS) including its components, metrics, trends in data access, and plans for the upcoming year. Key points include that EOSDIS is a distributed system consisting of data centers and processing systems that have archived over 4 petabytes of data and distributed over 254 million data products in fiscal year 2009. Popular data products distributed include MODIS and AIRS data. Plans for the upcoming year focus on improving data access and search capabilities.
The document discusses MODIS Land products and their distribution format. MODIS Land products include radiation budget, ecosystem, and land cover variables. Products are distributed in HDF-EOS format with fine resolution grids in Integerized Sinusoidal or Lambert Azimuthal Equal-Area projections and coarse grids in a geographic Climate Modeling Grid. HDF-EOS allows for collaboration and standard geolocation representation but toolkit support is still limited.
The document describes a tool called the HDF-EOS to GeoTIFF conversion tool (HEG) that was developed to convert science data files in HDF-EOS format to GeoTIFF files. GeoTIFF files can be directly accessed by many GIS tools, unlike HDF-EOS files. HEG allows individual users to convert a wide variety of EOS data products, including MODIS, MISR, and ASTER data. It provides functionality like mosaicking, subsetting, reprojection, and metadata creation. HEG has both a graphical and command line interface and runs on Windows, Linux, Sun, and SGI platforms. It has been integrated into online NASA data archives to help make earth science data more accessible.
The document describes HDF-EOS tools developed to help users work with Earth Observing System (EOS) data stored in HDF-EOS format. It outlines libraries and tools created, including the HDF-EOS to GeoTIFF Converter (HEG) tool, which can convert HDF-EOS files to GeoTIFF and perform subsetting, stitching, and reprojection. It also describes the HDF-EOS plugin for HDFView, which allows browsing of HDF-EOS data and objects. The tools were created to make HDF-EOS data more accessible to users and reduce the need to learn HDF/HDF-EOS libraries.
The document presents guidelines for standardizing the HDF-EOS file format for the Aura satellite mission. It proposes organizing data fields and attributes in a consistent way, using standard dimension and field names, and reporting data on a common pressure grid. This will make files easier to understand and cross-platform compatible, and will simplify software development. The guidelines cover topics such as data ordering, units, and attribute names. Standardizing these aspects aims to reduce mismatches between instruments.
The document discusses the NSIDC DAAC's efforts to facilitate access to Earth Observing System (EOS) data in HDF-EOS format. It summarizes tools developed by NSIDC to help users access, visualize, and process HDF-EOS data, including the PHDIS tool for viewing metadata and subsetting data, a swath-to-grid tool for gridding MODIS data, and a bit viewer for MODIS quality flags. It also discusses future plans to enhance these tools and develop additional capabilities like output in HDF-EOS and GIS formats.
This document summarizes the fifth annual HDF workshop sponsored by ESDIS and NCSA. It provides an overview of the status of ESDIS, HDF/HDF-EOS, and plans for the future. Over 750 terabytes of Terra and Landsat 7 data have been processed and made available. Some instruments like ASTER and CERES now have validated data while others like MODIS are still being reprocessed. Future plans include installing data pools at DAACs and procuring an EMD contract to support ongoing EOS operations. The community advisory process involves groups like UWGs, DAWG, and SWGD to provide feedback. HDF is a file format for scientific data, while HDF-EOS is the set of EOS conventions and the library built on top of it.
HDF-EOS is a standard developed by NASA for storing Earth science data in the Hierarchical Data Format (HDF). It defines structures for gridded data, swaths, points, and other arrays. The HDF-EOS APIs provide functions for reading, writing, and manipulating these data structures and attributes in both HDF4 and HDF5 formats. Tools like HDFView and the HDF-EOS to GeoTIFF Converter (HEG) allow users to browse, subset, and convert HDF-EOS data to other formats.
HDF-EOS 5 is a new format based on HDF5 that is designed to resemble the existing HDF-EOS 2 format. It supports the same data structures as HDF-EOS 2 but uses HDF5 functionality like compression and chunking. A conversion tool allows data to be changed from HDF-EOS 2 to HDF-EOS 5 format. While HDF-EOS 2 will still be supported, HDF-EOS 5 will be used by new missions like Aura and provides more options for data storage and filtering.
WE1.L10 - IMPLEMENTATION OF THE LAND, ATMOSPHERE NEAR-REAL-TIME CAPABILITY FO... (grssieee)
The LANCE system provides near real-time satellite data from NASA instruments within 3 hours of observation for applications such as weather forecasting, monitoring natural hazards, and agricultural monitoring. It leverages existing EOS processing and distribution capabilities. Products include MODIS imagery, AIRS temperature and moisture profiles, and OMI measurements of ozone and sulfur dioxide. The system aims to improve latency and provide a one-stop shop for users through the LANCE web portal.
This slide will demonstrate how to use OPeNDAP Java clients such as IDV and Panoply, via the HDF OPeNDAP data handlers, to access various NASA HDF products such as AIRS, OMI, MLS, MODIS, TRMM, CERES, SeaWiFS, etc. Various features of these tools that can help users easily access HDF data will also be explored.
The document summarizes the EOS MLS software and data processing levels. The EOS MLS instrument measures atmospheric temperature, water, ozone and other molecules from 5-80km on the AURA spacecraft. The software processes the instrument data through four levels: Level 1 converts telemetry to radiances, Level 2 converts radiances to abundances, Level 3 produces daily and monthly mean maps and zonal means. The data flows between levels using HDF4 and HDF-EOS formats, with products archived and described by metadata to enable ordering of Level 1-3 products. Future plans include transitioning to HDF5 and HDF-EOS5.
The document discusses updates to OPeNDAP handlers for HDF4 and HDF5 data formats. The HDF4 handler was improved to allow more NASA HDF and HDF-EOS data to be visualized by translating the data structure to follow climate and forecast metadata conventions. The HDF5 handler was updated to support additional HDF-EOS5 data from NASA missions. The handlers address issues that previously prevented data visualization and make the data more interoperable. Limitations include unsupported additional HDF4 objects and untested HDF4 products.
1) The document discusses mapping seismic data stored in the SEG-Y format to the DAOS object storage system to improve storage and processing efficiency.
2) Currently, seismic processing copies data for each step, but DAOS snapshots and versioning can reduce copies by storing only updates.
3) The DAOS-SEIS mapping represents seismic data as a graph with trace headers and data mapped to DAOS objects to allow random access and filtering.
4) Early benchmarking shows the DAOS-SEIS API outperforms seismic processing libraries on large datasets, though further optimization is needed.
This document describes tools for working with HDF-EOS data formats: the HDF-EOS to GeoTIFF Conversion Tool (HEG) and an HDF-EOS plugin for the HDFView browser. HEG allows conversion between HDF-EOS and GeoTIFF formats and supports subsetting, reprojection, stitching and other functions. The HDFView plugin allows browsing and conversion between HDF-EOS2 and HDF-EOS5 formats within HDFView. Both tools are available for download and support a variety of Earth science datasets.
Similar to HDF and HDF-EOS Experiences and Applications
This document discusses how to optimize HDF5 files for efficient access in cloud object stores. Key optimizations include using large dataset chunk sizes of 1-4 MiB, consolidating internal file metadata, and minimizing variable-length datatypes. The document recommends creating files with paged aggregation and storing file content information in the user block to enable fast discovery of file contents when stored in object stores.
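A hedged h5py sketch of those recommendations, assuming h5py 3.x, which exposes the file-space strategy options; names and sizes are illustrative:

```python
# Cloud-optimized file creation with h5py 3.x; sizes are illustrative.
import h5py

with h5py.File("cloud_optimized.h5", "w",
               fs_strategy="page",                  # paged aggregation
               fs_page_size=4 * 1024 * 1024) as f:  # 4 MiB metadata pages
    f.create_dataset("data", shape=(10000, 10000), dtype="f4",
                     chunks=(1024, 1024),           # 4 MiB per chunk
                     compression="gzip")
```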
This document provides an overview of HSDS (Highly Scalable Data Service), which is a REST-based service that allows accessing HDF5 data stored in the cloud. It discusses how HSDS maps HDF5 objects like datasets and groups to individual cloud storage objects to optimize performance. The document also describes how HSDS was used to improve access performance for NASA ICESat-2 HDF5 data on AWS S3 by hyper-chunking datasets into larger chunks spanning multiple original HDF5 chunks. Benchmark results showed that accessing the data through HSDS provided over 2x faster performance than other methods like ROS3 or S3FS that directly access the cloud storage.
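For orientation, h5pyd (the Python SDK for HSDS) mirrors the h5py API; a hedged sketch in which the endpoint, domain, and dataset path are hypothetical:

```python
# h5pyd sketch; endpoint, domain, and dataset path are hypothetical.
import h5pyd

f = h5pyd.File("/shared/example/atl03_sample.h5", "r",
               endpoint="http://hsds.example.org")
dset = f["/gt1l/heights/h_ph"]
print(dset.shape)
photons = dset[0:100000]   # HSDS fetches only the chunks this slice needs
```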
This document summarizes the current status and focus of the HDF Group. It discusses that the HDF Group is located in Champaign, IL and is a non-profit organization focused on developing and maintaining HDF software and data formats. It provides an overview of recent HDF5, HDF4 and HDFView releases and notes areas of focus for software quality improvements, increased transparency, strengthening the community, and modernizing HDF products. It invites support and participation in upcoming user group meetings.
This document provides an overview of HSDS (HDF Server and Data Service), which allows HDF5 files to be stored and accessed from the cloud. Key points include:
- HSDS maps HDF5 objects like datasets and groups to individual cloud storage objects for scalability and parallelism.
- Features include streaming support, fancy indexing for complex queries, and caching for improved performance.
- HSDS can be deployed on Docker, Kubernetes, or AWS Lambda depending on needs.
- Case studies show HSDS is used by organizations like NREL and NSF to make petabytes of scientific data publicly accessible in the cloud.
This document discusses creating cloud-optimized HDF5 files by rearranging internal structures for more efficient data access in cloud object stores. It describes cloud-native and cloud-optimized storage formats, with the latter involving storing the entire HDF5 file as a single object. The benefits of cloud-optimized HDF5 include fast scanning and using the HDF5 library. Key aspects covered include using optimal chunk sizes, compression, and minimizing variable-length datatypes.
This document discusses updates and performance improvements to the HDF5 OPeNDAP data handler. It provides a history of the handler since 2001 and describes recent updates including supporting DAP4, new data types, and NetCDF data models. A performance study showed that passing compressed HDF5 data through the handler without decompressing/recompressing led to speedups of around 17-30x by leveraging HDF5 direct I/O APIs. This allows outputting HDF5 files as NetCDF files much faster through the handler.
This document provides instructions for using the Hyrax software to serve scientific data files stored on Amazon S3 using the OPeNDAP data access protocol. It describes how to generate ancillary metadata files called DMR++ files using the get_dmrpp tool that provide information about the data file structure and locations. The document explains how to run get_dmrpp inside a Docker container to process data files on S3 and generate customized DMR++ files that the Hyrax server can use to serve the files to clients.
This document provides an overview and examples of accessing cloud data and services using the Earthdata Login (EDL), Pydap, and MATLAB. It discusses some common problems users encounter, such as being unable to access HDF5 data on AWS S3 using MATLAB or read data from OPeNDAP servers using Pydap. Solutions presented include using EDL to get temporary AWS tokens for S3 access in MATLAB and providing code examples on the HDFEOS website to help users access S3 data and OPeNDAP services. The document also notes some limitations, such as tokens being valid for only 1 hour, and workarounds like requesting new tokens or using the MATLAB HDF5 API instead of the netCDF API.
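A hedged sketch of the Pydap pattern described, using pydap's Earthdata Login (URS) helper; the URL and credentials are placeholders:

```python
# Pydap with Earthdata Login; URL and credentials are placeholders.
from pydap.client import open_url
from pydap.cas.urs import setup_session

url = "https://opendap.example.nasa.gov/data/sample_granule.h5"
session = setup_session("my_edl_username", "my_edl_password", check_url=url)
ds = open_url(url, session=session)
print(list(ds.keys()))
```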
The HDF5 Roadmap and New Features document outlines upcoming changes and improvements to the HDF5 library. Key points include:
- HDF5 1.13.x releases will include new features like selection I/O, the Onion VFD for versioned files, improved VFD SWMR for single-writer multiple-reader access, and subfiling for parallel I/O.
- The Virtual Object Layer allows customizing HDF5 object storage and introduces terminal and pass-through connectors.
- The Onion VFD stores versions of HDF5 files in a separate onion file for versioned access.
- VFD SWMR improves on legacy SWMR by implementing single-writer multiple-reader capabilities at the virtual file driver level.
This document discusses user analysis of the HDFEOS.org website and plans for future improvements. It finds that the majority of the site's 100 daily users are "quiet", not posting on forums or other interactive elements. The main user types are locators, who search for examples or data; mergers, who combine or mosaic datasets; and converters, who change file formats. The document outlines recent updates focused on these user types, like adding Python examples for subsetting and calculating latitude and longitude. It proposes future work on artificial intelligence/machine learning uses of HDF files and examples for processing HDF data in the cloud.
This document summarizes a presentation about the current status and future directions of the Hierarchical Data Format (HDF) software. It provides updates on recent HDF5 releases, development efforts including new compression methods and ways to access HDF5 data, and outreach resources. It concludes by inviting the audience to share wishes for future HDF development.
The document describes H5Coro, a new C++ library for reading HDF5 files from cloud storage. H5Coro was created to optimize HDF5 reading for cloud environments by minimizing I/O operations through caching and efficient HTTP requests. Performance tests showed H5Coro was 77-132x faster than the standard HDF5 library at reading HDF5 data from Amazon S3 for NASA's SlideRule project. H5Coro supports common HDF5 elements but does not support writing or some complex HDF5 datatypes and messages, a trade-off made to focus on optimized read-only performance for time series data stored sequentially in memory.
This document summarizes MathWorks' work to modernize MATLAB's support for HDF5. Key points include:
1) MATLAB now supports HDF5 1.10.7 features like single-writer/multiple-reader access and virtual datasets through new and updated low-level functions.
2) Performance benchmarks show some improvements but also regressions compared to the previous HDF5 version, and work continues to optimize code and support future versions.
3) There are compatibility considerations for Linux filter plugins, but interim solutions are provided until MathWorks can ship a single HDF5 version.
HSDS provides HDF as a service through a REST API that can scale across nodes. New releases will enable serverless operation using AWS Lambda or direct client access without a server. This allows HDF data to be accessed remotely without managing servers. HSDS stores each HDF object separately, making it compatible with cloud object storage. Performance on AWS Lambda is slower than a dedicated server but has no management overhead. Direct client access has better performance but limits collaboration between clients.
HDF5 and Zarr are data formats that can be used to store and access scientific data. This presentation discusses approaches to translating between the two formats. It describes how HDF5 files were translated to the Zarr format by creating a separate Zarr store to hold HDF5 file chunks, and storing chunk location metadata. It also discusses an implementation that translates Zarr data to the HDF5 format by using a special chunking layout and storing chunk information in an HDF5 compound dataset. Limitations of the translations include lack of support for some HDF5 dataset properties in Zarr, and lack of support for some Zarr compression methods in the HDF5 implementation.
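For contrast with the chunk-referencing translations described above, the simplest conversion is a plain data copy, which zarr (v2) can do directly from an open h5py file; note this duplicates the data rather than referencing the original chunks:

```python
# Plain HDF5 -> Zarr data copy with zarr v2; file names are hypothetical.
import h5py
import zarr

with h5py.File("source.h5", "r") as src:
    dst = zarr.open_group("copy.zarr", mode="w")
    zarr.copy_all(src, dst)   # datasets become arrays, groups become groups
```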
The document discusses HDF for the cloud, including new features of the HDF Server and what's next. Key points:
- HDF Server uses a "sharded schema" that maps HDF5 objects to individual storage objects, allowing parallel access and updates without transferring entire files.
- Implementations include HSDS software that uses the sharded schema with an API and SDKs for different languages like h5pyd for Python.
- New features of HSDS 0.6 include support for POSIX, Azure, AWS Lambda, and role-based access control.
- Future work includes direct access to storage without a server intermediary for some use cases.
This document compares different methods for accessing HDF and netCDF files stored on Amazon S3, including Apache Drill, THREDDS Data Server (TDS), and HDF5 Virtual File Driver (VFD). A benchmark test of accessing a 24GB HDF5/netCDF-4 file on S3 from Amazon EC2 found that TDS performed the best, responding within 2 minutes, while Apache Drill failed after 7 minutes. The document concludes that TDS 5.0 is the clear winner based on performance and support for role-based access control and HDF4 files, but the best solution depends on use case and software.
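For reference, the HDF5 read-only S3 (ros3) VFD from the comparison is exposed through h5py when HDF5 is built with it; a hedged sketch with a hypothetical bucket and object:

```python
# Read-only S3 access via the ros3 VFD; requires an HDF5 build with ros3.
import h5py

# Anonymous access to a (hypothetical) public object; credentials can be
# supplied via the aws_region/secret_id/secret_key keywords if required.
f = h5py.File("https://my-bucket.s3.amazonaws.com/big.h5", "r",
              driver="ros3")
print(list(f))
```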
This document discusses STARE-PODS, a proposal to NASA/ACCESS-19 to develop a scalable data store for earth science data using the SpatioTemporal Adaptive Resolution Encoding (STARE) indexing scheme. STARE allows diverse earth science data to be unified and indexed, enabling the data to be partitioned and stored in a Parallel Optimized Data Store (PODS) for efficient analysis. The HDF Virtual Object Layer and Virtual Data Set technologies can then provide interfaces to access the data in STARE-PODS in a familiar way. The goal is for STARE-PODS to organize diverse data for alignment and parallel/distributed storage and processing to enable integrative analysis at scale.
This document provides an overview and update on HDF5 and its ecosystem. Key points include:
- HDF5 1.12.0 was recently released with new features like the Virtual Object Layer and external references.
- The HDF5 library now supports accessing data in the cloud using connectors like S3 VFD and REST VOL without needing to modify applications.
- Projects like HDFql and H5CPP provide additional interfaces for querying and working with HDF5 files from languages like SQL, C++, and Python.
- The HDF5 community is moving development to GitHub and improving documentation resources on the HDF wiki site.
This document summarizes new features in HDF5 1.12.0, including support for storing references to objects and attributes across files, new storage backends using a virtual object layer (VOL), and virtual file drivers (VFDs) for Amazon S3 and HDFS. It outlines the HDF5 roadmap for 2019-2022, which includes continued support for HDF5 1.8 and 1.10, and new features in future 1.12.x releases like querying, indexing, and provenance tracking.
More from The HDF-EOS Tools and Information Center
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
1. GES DISC DAAC - The Goddard DAAC (http://daac.gsfc.nasa.gov)
HDF and HDF-EOS Experiences and Applications
Presented by: James Johnson, SSAI
February 28, 2002, HDF-EOS Workshop V
2. Science Disciplines
Disciplines served by the GES DISC DAAC, with associated missions and instruments (colored in the original slide: black = completed, red = active, green = future):
- Ocean Color: CZCS, OCTS, SeaWiFS, Terra MODIS, Aqua MODIS
- Atmospheric Dynamics: TOVS Pathfinder, Data Assimilation, Aqua AIRS
- Global Biosphere: Terra MODIS, Aqua MODIS
- Hydrology: rainfall climatology (TRMM, TRMM Field Experiments, Aqua AIRS)
- Land Biosphere: AVHRR Pathfinder
- Upper Atmosphere: Heritage BUV/SBUV, Heritage LIMS, Heritage TOMS, UARS, EP TOMS, Aura HIRDLS, Aura MLS, Aura OMI, SORCE
[Slide figures: TOVS 1000 hPa Monthly Mean Specific Humidity; Monthly Ocean Chlorophyll and NDVI from SeaWiFS; Hurricane Mitch as seen by TRMM; 1999 Antarctic Ozone Hole as seen by TOMS]
3. Primary Data Sets

Data Set           Format                     Temporal Coverage
AVHRR Pathfinder   HDF (subsets in binary)    Jul 1981 to Oct 2001
CZCS               Binary                     Oct 1978 to Jun 1986
DAO                Binary                     Mar 1980 to Nov 1993
MODIS (Terra)      HDF-EOS                    Dec 1999 to Present
SeaWiFS            HDF                        Dec 1996 to Present
TOMS               HDF                        Nov 1978 to Present
TOVS Pathfinder    HDF (subsets in binary)    Nov 1978 to Jul 1995
TRMM               HDF                        Dec 1997 to Present
UARS               Binary                     Sep 1991 to Sep 2001
DAS                HDF-EOS                    (soon)
AIRS               HDF-EOS                    Mar 2002 (launch)
Aura               HDF5-EOS                   Jul 2003 (launch)
SORCE              HDF5                       Jul 2002 (launch)
4. HDF & HDF-EOS Applications
- Universal Data Reduction Server (UDRS)
- Distributed Oceanographic Data System (DODS)
- Web Mapping Testbed (WMT-DODS)/OpenGIS
- Live Access Server (LAS)/Ferret
- Gridded Analysis and Display System (GrADS-DODS)
- Online data Analysis (OASIS)
- read_hdf generic reader
- Other data-set-specific read software, including MODIS
5. Universal Data Reduction Server
A virtual server consisting of:
- DODS server
- WMT-DODS server
- GrADS-DODS server
- LAS/Ferret
- (others can be added)
Allows a variety of discipline, interdisciplinary, and applications users to access DAAC data.
6. Distributed Oceanographic Data System (DODS)
- Developed by URI and UCAR
- A protocol for requesting and transporting data across the web
- Transparently supports multiple formats
- Subsetting performed at the server end (see the example request below)
- Supports various servers (netCDF, HDF, GrADS, MatLAB, FreeForm, ...)
- Supports various clients (IDL, MatLAB, Ferret, LAS, GrADS, ...)
- Various DAAC data sets served by DODS (see http://daac.gsfc.nasa.gov/DAAC_DOCS/DODS.html)
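As a concrete illustration of server-side subsetting, a DODS (OPeNDAP) request encodes the variable and index ranges directly in the URL, and the server returns only that slice. The dataset path and variable name below are hypothetical:

    http://daac.gsfc.nasa.gov/dods/example_dataset.hdf.ascii?sst[0:1:0][100:1:120][200:1:220]

Each bracket is a [start:stride:stop] range over one dimension; the .ascii suffix requests a plain-text response, while .dods returns the binary DAP encoding.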
7. Web Mapping Testbed (WMT-DODS Server)
- OpenGIS Consortium
- Imports HDF & HDF-EOS data into GIS packages
- Supports geolocated images
- Interfaces to many DODS servers, both DAAC and external
(see http://daac.gsfc.nasa.gov/daac-bin/viewer/viewer.cgi)
8. GrADS-DODS Server
- Developed by the Center for Ocean-Land-Atmosphere Studies (COLA)
- Supports data analysis functions (statistical, smoothing, correlation, ...)
- Subsets data
- Works on single or multiple files
- Supports several data formats (HDF, netCDF, GRIB, binary, ...)
- Interfaces with DODS
- DAAC server to go operational later this year
9. Live Access Server / Ferret
- LAS developed by NOAA
- Web GUI interface (Ferret)
- Interfaces with DODS
- Visualizes data
- Subsets data
- Saves to various formats
- Custom data-set-specific templates added by the DAAC
- Support for MODIS and SeaWiFS coming soon
10. Atmospheric Dynamics OASIS
- Web interface
- Uses Java applets
- Performs data analysis online
- Intercomparison
- Visualizes data
- Animations
- Supports DAAC atmospheric dynamics data
- HDF & HDF-EOS support coming soon
(see http://daac.gsfc.nasa.gov/CAMPAIGN_DOCS/atmospheric_dynamics/online_analysis/OASIS/html/)
11. read_hdf
- Interactive command-line C program
- Generic; supports any HDF file
- Displays a hierarchical tree of useful objects (SDS, Vdata, Vgroup, Raster Images, Annotations)
- Subsets data
- Outputs to ASCII or binary
- Also dumps any obscure HDF object (DFTAG_NT, DFTAG_VERSION, etc.)
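A generic reader of this kind typically walks the file with the HDF4 SD interface. The sketch below (not the actual read_hdf source) lists every SDS in a file with its rank and dimensions; the filename is a placeholder:

    #include <stdio.h>
    #include "mfhdf.h"   /* HDF4 SD interface */

    int main(void)
    {
        /* Placeholder filename; read_hdf itself takes the file interactively. */
        int32 sd_id = SDstart("example.hdf", DFACC_READ);
        int32 n_datasets, n_file_attrs;

        if (sd_id == FAIL) {
            fprintf(stderr, "cannot open file\n");
            return 1;
        }
        SDfileinfo(sd_id, &n_datasets, &n_file_attrs);

        for (int32 i = 0; i < n_datasets; i++) {
            int32 sds_id = SDselect(sd_id, i);
            char  name[256];                  /* HDF4 name-length limit */
            int32 rank, dimsizes[32], ntype, nattrs;

            SDgetinfo(sds_id, name, &rank, dimsizes, &ntype, &nattrs);
            printf("SDS %ld: %s (rank %ld)\n", (long)i, name, (long)rank);
            SDendaccess(sds_id);
        }
        SDend(sd_id);
        return 0;
    }

Vdata, Vgroup, and raster traversal go through the separate V and GR interfaces, which is part of why fully generic HDF tools like read_hdf are non-trivial to write.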
12. Other DAAC HDF Applications
- MODIS readers and visualization (IDL based): geoview, modis_atmos, simap (see http://daac.gsfc.nasa.gov/CAMPAIGN_DOCS/MODIS/)
- HDFLook-MODIS (a collaboration between the DAAC and the Laboratoire d’Optique Atmosphérique, France)
- SeaWiFS data are best used with the SeaWiFS SeaDAS package
- TRMM data reader and IDL-based TSDIS orbit viewer
- Other DAAC data sets include C, Fortran, and/or IDL readers
13. HDF & HDF-EOS Issues
- Large file sizes (MODIS, AVHRR)
  - require lots of bandwidth for downloading
  - end users need lots of disk space
  - non-standard Grid projections
- User frustration
  - reluctance to accept HDF (preference for ASCII, binary, and other formats)
  - need to download and install libraries (two for HDF-EOS)
  - confusion among HDF, HDF-EOS, and now HDF5
- Poor HDF layout/implementation (not self-documenting)
  - cryptic field and file names
  - no field-level attributes or descriptions of file contents
  - too many fields
  - internal compression rarely utilized (a sketch of enabling it follows)
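That last point is addressable at product-generation time: the HDF4 SD interface can compress each SDS internally as it is written, so files shrink with no change for readers. A minimal sketch, assuming the deflate coder in the standard HDF4 library (the dataset name and grid size are illustrative):

    #include "mfhdf.h"

    /* Create a 2-D SDS with internal deflate compression enabled
       before any data are written. */
    int32 create_compressed_sds(int32 sd_id)
    {
        int32 dimsizes[2] = {180, 360};      /* illustrative global grid */
        int32 sds_id = SDcreate(sd_id, "specific_humidity",
                                DFNT_FLOAT32, 2, dimsizes);

        comp_info c_info;
        c_info.deflate.level = 6;            /* moderate compression */
        SDsetcompress(sds_id, COMP_CODE_DEFLATE, &c_info);

        /* Subsequent SDwritedata() calls are compressed transparently;
           readers need no changes. */
        return sds_id;
    }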