The document summarizes updates from The HDF Group, including recent releases of HDF5, HDF4, and related tools. It provides an overview of The HDF Group, which supports HDF users through services like helpdesk, consulting, and training. The Group is working on new features for HDF5 like single-writer/multiple-reader access and improved multi-threaded concurrency. It is also partnering with others to improve HPC performance and add exascale-ready capabilities to HDF5.
This tutorial is designed for new HDF5 users. We will cover HDF5 abstractions such as datasets, groups, attributes, and datatypes. Simple C examples will cover the programming model and basic features of the API, and will give new users the knowledge they need to navigate through the rich collection of HDF5 interfaces. Participants will be guided through an interactive demonstration of the fundamentals of HDF5.
This tutorial is for new HDF5 users.
Update on HDF, including recent changes to the software, new releases, THG collaborations, and future plans. Session will include an overview of the HDF4.2r2, HDF5 1.6.6, and 1.8.0 releases, as well as updates on completed and on-going THG projects including crash-proofing HDF5, efficient append to HDF5 datasets, and indexing in HDF5.
An update on HDF, including a status report on The HDF Group, an overview of recent changes to the HDF4 and HDF5 libraries and tools, plans for future releases, and HDF Group projects and collaborations.
The document summarizes updates to HDF5 command line tools and HDF-Java products presented by Peter Cao of The HDF Group on September 28-30, 2010. Key updates include new features for tools like h5ls, h5dump, h5diff, h5repack, and h5copy to handle soft/external links and exclude paths. A new tool h5watch was presented to monitor dataset growth. Planned h5edit will allow editing HDF5 file attributes. HDF-Java 2.7 will add features like moving objects and support HDF5 1.8 APIs, with 117 new functions added to the Java Native Interface.
This document provides an overview of HDF5 (Hierarchical Data Format version 5) and introduces its core concepts. HDF5 is an open source file format and software library designed for storing and managing large amounts of numerical data. It supports a data model with objects such as datasets, groups, attributes, and datatypes. HDF5 files can be accessed through its software library and APIs from languages like C, Fortran, C++, Python and more. The document covers HDF5's data model, file format, programming interfaces, tools and example code.
The document discusses migrating from HDF5 1.6 to HDF5 1.8. It provides an overview of new features in HDF5 1.8, including a revised file format, improvements to group storage, new link types such as external links, and enhanced error handling. The document aims to ease the transition to HDF5 1.8 by highlighting beneficial new features and raising awareness of compatibility issues when moving from 1.6 to 1.8.
The document summarizes updates from The HDF Group. It notes that The HDF Group was established in 1988 and owns the HDF4 and HDF5 formats and libraries. It provides services such as helpdesk support, consulting, and training to users. The HDF Group aims to ensure long-term accessibility of HDF data through development and support of HDF technologies. Recent improvements include new HDF5 and HDF4 releases, tool updates, HDF-Java work, and SWMR file access work. Future work involves parallel I/O, indexing methods, and EOS, OPeNDAP, and NPP/NPOESS support.
This tutorial is designed for new HDF5 users. We will go over a brief history of HDF and HDF5 software, and will cover basic HDF5 Data Model objects and their properties; we will give an overview of the HDF5 Libraries and APIs, and discuss the HDF5 programming model. Simple C and Fortran examples, and Java tool HDFView will be used to illustrate HDF5 concepts.
The document discusses the HDF4 Mapping Project which aims to ensure long-term access to Earth Observing System (EOS) data stored in HDF4 files. It provides an overview of the project scope, including developing a proof of concept prototype and production quality mapping tools. It also describes verification studies conducted with NASA data centers to identify requirements for verifying correctness of HDF4 file content maps produced by the mapping tools. The project aims to generate content maps for HDF4 files containing valuable EOS data before the HDF4 library and tools are no longer maintained.
Update on HDF, including recent changes to the software, upcoming releases, collaborations, and future plans. Will include an overview of the upcoming HDF5 1.8 release, and updates on the netCDF4/HDF5 merge, HDF5 support for indexing, BioHDF, the HDF5-Storage Resource Broker project, and the HDF spin-off THG.
The document summarizes HDF5 advanced topics presented at an HDF5 workshop. It discusses HDF5 groups and links which organize data objects in a file. It also covers HDF5 datasets and datatypes like compound and reference datatypes. HDF5 references allow accessing specific regions of datasets or objects in other files. The document provides examples in Python to demonstrate HDF5 groups, links, datatypes and references.
This document summarizes Mike Folk's presentation at the Science Data Processing Workshop from February 26-28, 2002. The presentation provided updates on HDF4 and HDF5, including recent releases and future plans. HDF4 and HDF5 are open source data formats and software libraries for scientific data that support efficient storage of arrays, images, and tables. The presentation outlined ongoing work to improve performance, add new features, and facilitate the transition from HDF4 to HDF5.
This document summarizes discussions from the HDF Group's HDF/HDF-EOS Workshop XIV about data interoperability. It covered topics like enabling one set of APIs to handle multiple data formats through projects like netCDF4 and CDM. It also discussed format conversions and translations between formats like HDF4, HDF5, netCDF and others. Finally, it addressed semantic and content interoperability challenges like representing latitude and longitude in different formats and how standards like CF conventions help with interpretation of metadata across tools and applications. Interoperability issues that can arise from simultaneous access of HDF5 files via HDF5 and netCDF-4 libraries were also presented.
The document provides an update on the HDF software projects. It discusses recent releases of HDF4, HDF5, and HDF Java products. It highlights new features, platforms supported, and organizations contributing to development. Upcoming work includes improvements to parallel I/O, data indexing and viewing tools, and harmonization with netCDF and OPeNDAP formats.
The document describes five tools developed by The HDF Group to improve the usability of NASA HDF data formats. The tools allow conversion between HDF4, HDF-EOS2, HDF-EOS5, and netCDF formats. They include h4cf for converting HDF4 to netCDF while following the Climate and Forecast (CF) conventions, h4toh5 for converting any HDF4 file to HDF5, eos52nc4 for converting HDF-EOS5 to netCDF-4, and aug_eos5 for augmenting HDF-EOS5 files so they are readable by both the HDF-EOS5 and netCDF APIs. The HDF-EOS2 dumper tool extracts latitude and longitude values from HDF-EOS2 files into ASCII format.
The document discusses HDF command line tools that can be used to view, modify, and manipulate HDF5 files. It provides examples of using tools like h5dump to view file structure and dataset information, h5repack to optimize file layout and compression, h5diff to compare files and datasets, and h5copy to copy objects between files. The tutorial was presented at the 15th HDF and HDF-EOS workshop from April 17-19, 2012.
This document discusses interoperability between HDF5 files and the netCDF-4 format. It begins with background on netCDF-3, netCDF-4, and the Climate and Forecast (CF) metadata conventions. It then demonstrates different use cases for accessing HDF5 data via netCDF-4, including when the HDF5 file follows the netCDF data model and CF conventions compared to when it does not. The document shares experiences working with HDF-EOS5 and JPSS data products in HDF5 format through this netCDF-4 interface. In particular, it finds that following the netCDF data model and CF conventions improves visualization of HDF5 data in tools like IDV that expect netCDF files.
NetCDF and HDF5 are data formats and software libraries used for scientific data. NetCDF began in 1989 and allows for array-oriented data with dimensions, variables, and attributes. NetCDF-4 introduced new features while maintaining backward compatibility. It uses HDF5 for data storage and can read HDF4/HDF5 files. NetCDF provides APIs for C, Fortran, Java, and is widely used for earth science and climate data. It supports conventions, parallel I/O, and reading many data formats.
This document discusses assigning Digital Object Identifiers (DOIs) to data products from NASA's Earth Observing System Data and Information System (EOSDIS). It reviews different identification schemes and recommends DOIs for their persistence and ability to provide unique, citable identifiers. The document outlines a pilot process to assign DOIs to specific EOSDIS data products, including embedding DOIs in metadata and registering them with the DataCite registration agency. Guidelines are provided for constructing the DOI suffix to make identifiers descriptive and recognizable to researchers.
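As one worked illustration of a descriptive suffix scheme like the one discussed above, the snippet below builds a DOI string from a product's instrument, short name, and version. The prefix value and the exact suffix pattern are assumptions made for this example, not the pilot's actual rules.

```python
# Illustrative sketch of constructing a descriptive, citable DOI for a data
# product. The prefix and suffix pattern are assumptions for the example;
# the actual EOSDIS guidelines may differ.
def make_doi(prefix, instrument, short_name, version):
    # Keep the suffix human-recognizable: instrument, product short name,
    # and a zero-padded version, joined with '/' and '.' separators.
    suffix = f"{instrument}/{short_name}.{version:03d}"
    return f"{prefix}/{suffix}"
```

A researcher seeing such an identifier can recognize the product at a glance, while the registration agency guarantees the identifier resolves persistently.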
This document summarizes HDF software activities in 2002, including support and funding sources for HDF, recent and upcoming releases of HDF4 and HDF5 libraries and tools, and other HDF-related projects. The HDF5 library saw improvements to performance, compilers supported, and tools like HDFView and converters. The next major HDF5 release in 2003 will focus on new features, performance enhancements, and special platform support. High level APIs and the parallel HDF5 programming model were also under development.
The document discusses using HDF5 technologies to represent complex terrain data. It explores a unified data model for information retrieval, visualization, and analysis using HDF5. The goals are to identify HDF5's role in managing battlefield operations data and demonstrate using web-based tools with HDF5 to organize a wide range of operational data. A demo will focus on heterogeneous information layers, dynamic data management, multidimensional scales, and scalable data structures.
The document discusses a project to improve long-term preservation of Earth Observing System (EOS) data by creating independent maps of HDF4 data objects. The project aims to map HDF4 files to allow simple readers to access data without relying on HDF software. It involves categorizing NASA HDF4 data, prototyping an XML mapping file format, and building tools to create maps and read data based on maps. The project will investigate integrating the mapping schema with standards, address HDF-EOS2 requirements, redesign the schema, implement production mapping and reading tools, and deploy the tools at NASA data centers.
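The mapping idea above can be pictured with a small, hypothetical fragment of such an XML layout map. All element and attribute names here are invented for illustration and do not come from the project's actual schema:

```xml
<!-- Hypothetical layout map for one array stored in an HDF4 file -->
<hdf4map file="example.hdf">
  <dataset name="surface_temperature" datatype="int32" rank="2">
    <dimensions>
      <dim name="rows" size="2030"/>
      <dim name="cols" size="1354"/>
    </dimensions>
    <!-- The array's bytes are located by absolute offset and length -->
    <byteStream offset="294912" nBytes="10994480"/>
  </dataset>
</hdf4map>
```

Given such a map, a reader that understands only byte offsets and lengths could recover the array with ordinary file I/O, without linking against the HDF4 library.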
The document discusses updates to OPeNDAP handlers for HDF4 and HDF5 data formats. The HDF4 handler was improved to allow more NASA HDF and HDF-EOS data to be visualized by translating the data structure to follow climate and forecast metadata conventions. The HDF5 handler was updated to support additional HDF-EOS5 data from NASA missions. The handlers address issues that previously prevented data visualization and make the data more interoperable. Limitations include unsupported additional HDF4 objects and untested HDF4 products.
The HDF Group presented on their support for the National Polar-orbiting Partnership/Joint Polar Satellite System (NPP/JPSS) program. Their goals included providing HDF5 support for distributing data from the Visible Infrared Imaging Radiometer Suite (VIIRS) and other sensors. They outlined priorities for testing software on critical platforms, developing tools to access and manage NPP/JPSS products, and providing rapid support. The presentation described software released or under development like h5edit, h5augjpss, and nagg, an aggregation tool.
In this talk, we will give an update on the HDF5 OPeNDAP project. We will describe the new features of the OPeNDAP HDF5 data handler. We will also introduce a new HDF5-friendly OPeNDAP client library and demonstrate how it can help users view and analyze remote HDF-EOS5 data served by the OPeNDAP HDF5 handler. A demo will be presented with a customized OPeNDAP visualization client (GrADS) that uses the library.
The HDF Group provides support for the National Polar-orbiting Partnership/National Polar-orbiting Operational Environmental Satellite System (NPP/NPOESS) through developing tools and libraries to help users access and work with NPP/NPOESS data. Key areas of focus include making data access intuitive, allowing data viewing and conversion, and providing helpdesk support. The HDF Group develops high-level APIs, tests HDF5 on NASA systems, enhances command line tools like h5dump to support NPP/NPOESS data, and is working on a new tool called h5edit to edit HDF5 files. Future work includes finishing h5edit, maintaining software releases, testing compatibility with netCDF, and providing user support.
Mike Folk from the National Center for Supercomputing Applications gave an update on HDF software in 2003. HDF is supported by several government agencies for applications in earth science, simulations, and data-intensive computing. Version 4.2 Release 1 was planned for October 2003 with bug fixes and new features. HDF5 1.6.0 was released in July 2003 with new filters, properties, and performance improvements. Work was also being done on high-level APIs, parallel HDF5, tools, and collaborations with other projects.
The document provides an update from The HDF Group on their activities related to Earth science data. The HDF Group maintains HDF software and provides services to users. They work on projects for NASA, NOAA, and other government agencies to help manage large Earth science data using HDF formats. Recent activities include support for EOSDIS, JPSS, and other missions through tool development, web services, and data standards work.
See Data Differently: Applying the Design Process to Data Science
This is a condensed version of some other talks I've done, presented at Mission Measurement on June 9, 2014. In this 10-minute presentation, I give a high-level introduction to the design process, how it's used as a means to solve ambiguous problems, and how it improves on more traditional linear methods. I walk through a case study of a recent project in which we used the design process: asking the end user for specific problems and use cases, brainstorming with quantity over quality, prototyping with throwaway mockups and minimum viable products, and most importantly, iterating in a continuous feedback loop.
This document discusses the challenges of long-term preservation of earth science data and information. It outlines threats to preservation such as hardware and software failures. It also describes the Open Archival Information System reference model for representing data in layers, from bits up to scientific objects. Mechanisms such as templates and delimiters help identify digital artifacts and their structure within representation networks. Archival transformations must demonstrate that the scientific content of old and new formats is equivalent.
The Emerging Landscape Of The Software Industry Presentation (June), by Anand Deshpande
The document discusses how the software industry is at a crossroads due to economic pressures forcing cuts to IT budgets. It predicts that chief information officers will increasingly demand software as a service (SaaS) models from vendors to reduce costs. For vendors to provide competitive SaaS offerings, they will need to build virtual private clouds that span legacy and new applications across on-premise and cloud infrastructures. This transition will significantly change the economics of the software industry.
This document discusses how open data is changing Los Angeles. It outlines that open data benefits many industries including financial services, healthcare, communications, energy, charities, government, retail, insurance, manufacturing and technology. It also notes that open data in Los Angeles is being used by governments, developers, businesses, customers, suppliers, journalists and researchers. The document describes Los Angeles' open data governance strategy which includes opening data in useful ways, empowering citizens to use data, hosting hackathons and partnerships, building flexible infrastructure, developing analytics programs, and participating in open data initiatives.
The document summarizes updates on Hierarchical Data Formats (HDF) software releases and tools. It discusses the latest releases of HDF5 1.8.19 and 1.10.1, compatibility issues when moving to newer versions, updates on tools like HDF-Java and HDFView 3.0, supported compilers and systems, and a new compression library for interoperability. It invites readers to provide feedback on their needs.
A preponderance of data from NASA's Earth Observing System (EOS) is archived in the HDF Version 4 (HDF4) format. The long-term preservation of these data is critical for climate and other scientific studies going many decades into the future. HDF4 is very effective for working with the large and complex collection of EOS data products. Unfortunately, because of the complex internal byte layout of HDF4 files, future readability of HDF4 data depends on preserving a complex software library that can interpret that layout. Having a way to access HDF4 data independent of a library could improve its viability as an archive format, and consequently give confidence that HDF4 data will be readily accessible forever, even if the HDF4 library is gone.
To address the need to simplify long-term access to EOS data stored in HDF4, a collaborative project between The HDF Group and NASA Earth Science Data Centers is implementing an approach to accessing data in HDF4 files based on the use of independent maps that describe the data in HDF4 files and tools that can use these maps to recover data from those files. With this approach, relatively simple programs will be able to extract the data from an HDF4 file, bypassing the need for the HDF4 library.
A demonstration project has shown that this approach is feasible. The project included an assessment of NASA's HDF4 data holdings, along with the development of a prototype XML-based layout mapping language and tools that read layout maps and use them to read HDF4 files. Future plans call for a second phase of the project, in which the mapping tools and XML schema are made production quality, the mapping schema is integrated with existing XML metadata files in several data centers, and outreach activities are carried out to encourage and facilitate acceptance of the technology.
This tutorial is designed for anyone who needs to work with data stored in HDF and HDF5 files.
The first part of the tutorial will focus on the HDF5 utilities used to display the contents of HDF5 files, to export data from and import data to HDF5 files, to compare two HDF5 files, and more. Participants will be guided through hands-on examples and will learn about the different tool options. New changes and advanced features will be covered in a separate session (Updates on HDF tools) on Wednesday.
The second part of the tutorial includes a hands-on session on HDFView, the Java browsing tool for HDF4 and HDF5. The tool and special plug-ins will be used to work with existing HDF, HDF-EOS, and netCDF-4 files, and to create a new HDF5 file. The tutorial will cover the basic features of HDFView.
This document summarizes activities related to the HDF project. It discusses the status of The HDF Group as a nonprofit organization dedicated to supporting HDF. It reviews ESDIS activities including maintenance of HDF and HDF-EOS code and user support. It provides statistics on downloads, helpdesk requests, and forum usage. It also outlines maintenance and testing of HDF4, HDF5 and related software releases. Platform support issues are also addressed. The document covers recent improvements and previews future work for the HDF software suite.
This tutorial is designed for users who would like to move to the new version of HDF5 (version 1.8.0). The tutorial will cover new features of the HDF5 1.8.0 release, as well as forward/backward file format and API compatibility considerations. We will discuss how to change applications to take advantage of the new HDF5 library's features without major source code modifications.
This document provides an overview and update on HDF5 and its ecosystem. Key points include:
- HDF5 1.12.0 was recently released with new features like the Virtual Object Layer and external references.
- The HDF5 library now supports accessing data in the cloud using connectors like S3 VFD and REST VOL without needing to modify applications.
- Projects like HDFql and H5CPP provide additional interfaces for querying and working with HDF5 files from languages like SQL, C++, and Python.
- The HDF5 community is moving development to GitHub and improving documentation resources on the HDF wiki site.
The HDF Group provides support for NPP/NPOESS in a number of ways, including development and maintenance of software capabilities in HDF5 libraries and tools that help NPP/NPOESS data producers and users, software testing on platforms of importance to NPP/NPOESS, high quality rapid response user support for NPP/NPOESS, and performance of special projects. The purposes of this presentation are to apprise attendees of the areas of emphasis for FY 2010, and to solicit ideas and opinions that will help the project understand how best to use its resources in order to best serve the needs of NPP/NPOESS.
This document provides information about HDF (Hierarchical Data Format) tools and resources for working with Earth observation data. It summarizes HDF's focus on helping users at different stages of working with data, from initial product design to long-term archiving. It also describes specific HDF tools for viewing, comparing, converting between formats and adding metadata to scientific data files.
Modular HDFView is an improved HDFView with replaceable I/O and GUI modules. It provides a set of interfaces that enable users to write and plug in alternative implementations of the I/O and GUI components in place of the default modules. The HDF-EOS plugin will be used as an example.
Update on HDF, including recent changes to the software, upcoming releases, collaborations, and future plans. The session will include an overview of the upcoming HDF5 1.8 release, and updates on the netCDF4/HDF5 merge, HDF5 support for indexing, BioHDF, the HDF5-Storage Resource Broker project, the NPOESS BAA, the HDF5-OPeNDAP project, HDF-EOS library and website support, and the HDF spin-off, THG.
The HDF-Java products include three components: the HDF4 and HDF5 Java wrappers, the HDF-Java object package, and HDFView. The Java wrappers provide standard Java APIs that allow applications to call the C HDF4 and HDF5 libraries from Java. The HDF-Java object package implements HDF data objects, e.g. Groups and Datasets, in an object-oriented form and makes it easy for applications to use the libraries. HDFView is a visual tool for browsing and editing HDF4 and HDF5 files.
This presentation will include recent work on supporting HDF5 1.8 APIs and new features. As part of the HDF-NPOESS project, enhancements have been added to HDFView to support region references and quality flags. The presentation will show these features along with other new features added to HDFView since the HDF-Java 2.5 release.
The HDF Group is in the process of updating the HDF-EOS website. During the workshop, we would like to share useful information from the new website that can help users gain easy access to NASA HDF and HDF-EOS data.
The presentation includes three parts:
EOS User Forum: will introduce the EOS user forum and how users can benefit from this forum.
Tools: will present information on how to use several widely-used tools to access NASA HDF and HDF-EOS data.
Examples: will present several examples on how to use C, Fortran and IDL to access NASA HDF and HDF-EOS data.
This presentation will demonstrate how to use OPeNDAP Java clients such as IDV and Panoply, via the HDF OPeNDAP data handlers, to access various NASA HDF products such as AIRS, OMI, MLS, MODIS, TRMM, CERES, and SeaWiFS. Various features of these tools that help users easily access HDF data will also be explored.
The goal of this talk is to educate HDF5 users about backward and forward compatibility issues across releases of the HDF5 Library and versions of the HDF5 file format. We will discuss changes to the file format made to support new HDF5 features such as object creation order, compact groups, efficient access to variable-length data, UTF-8 encoding, external links, etc., and their implications for the HDF5 Library and users' applications.
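In code, these compatibility bounds can be controlled per file. A minimal sketch using h5py (an assumption here; the abstract does not prescribe an API, and the C library exposes the same control through H5Pset_libver_bounds):

```python
import h5py
import numpy as np

# Cap the file format at the 1.8 feature set so the file stays readable
# by 1.8-era applications.
with h5py.File("compat.h5", "w", libver=("earliest", "v108")) as f:
    f["data"] = np.arange(10)

# Opt in to the newest format features (compact groups, UTF-8 links,
# external links, ...) at the cost of readability by older releases.
with h5py.File("latest.h5", "w", libver="latest") as f:
    f["data"] = np.arange(10)
```

The trade-off is exactly the one the talk describes: the looser the upper bound, the more new features the library may use, and the fewer older readers can open the result.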
Accessibility and usability of NPP/NPOESS data in HDF5 can be enhanced by providing tools that simplify and standardize how data is accessed and presented. In this project, The HDF Group is creating such tools in the form of software to read and write certain key data types and data aggregates used in NPP/NPOESS data products, and extending HDFView to extract, present and export these data effectively. In particular, the work will focus on NPP/NPOESS use of HDF5 region references and quality flags. The HDF Group will also provide high quality user support for the project.
The document discusses recent and upcoming improvements to parallel HDF5 for improved I/O performance on HPC systems. Recent improvements include reducing file truncations, distributing metadata writes across processes, and improved selection matching. Upcoming work includes a high-level HPC API, funding for Exascale-focused enhancements, and future improvements like asynchronous I/O and auto-tuning to parallel file systems. Performance tips are also provided like passing MPI hints and using collective I/O.
This tutorial will introduce the three levels of the HDF-Java products: the HDF-Java wrapper (the Java Native Interfaces to the standard HDF libraries), the HDF-Java object package, and HDFView. The Java wrapper provides standard Java APIs that allow applications to call the C HDF libraries from Java. The HDF-Java object package implements HDF data objects, e.g. Groups and Datasets, in an object-oriented form and makes it easy for applications to use the libraries. HDFView is a visual tool for browsing and editing HDF4 and HDF5 files.
In this presentation, we will give an update on the HDF OPeNDAP project. We will describe new features in the HDF5 OPeNDAP data handler. We will also introduce the enhanced HDF4 OPeNDAP data handler and demonstrate how it can help users view and analyze remote HDF-EOS2 data. A demo that uses OPeNDAP client tools to handle AIRS and MODIS Grid/Swath data with the enhanced handler will be presented.
This document discusses how to optimize HDF5 files for efficient access in cloud object stores. Key optimizations include using large dataset chunk sizes of 1-4 MiB, consolidating internal file metadata, and minimizing variable-length datatypes. The document recommends creating files with paged aggregation and storing file content information in the user block to enable fast discovery of file contents when stored in object stores.
This document provides an overview of HSDS (Highly Scalable Data Service), which is a REST-based service that allows accessing HDF5 data stored in the cloud. It discusses how HSDS maps HDF5 objects like datasets and groups to individual cloud storage objects to optimize performance. The document also describes how HSDS was used to improve access performance for NASA ICESat-2 HDF5 data on AWS S3 by hyper-chunking datasets into larger chunks spanning multiple original HDF5 chunks. Benchmark results showed that accessing the data through HSDS provided over 2x faster performance than other methods like ROS3 or S3FS that directly access the cloud storage.
This document summarizes the current status and focus of the HDF Group. It discusses that the HDF Group is located in Champaign, IL and is a non-profit organization focused on developing and maintaining HDF software and data formats. It provides an overview of recent HDF5, HDF4 and HDFView releases and notes areas of focus for software quality improvements, increased transparency, strengthening the community, and modernizing HDF products. It invites support and participation in upcoming user group meetings.
This document provides an overview of HSDS (HDF Server and Data Service), which allows HDF5 files to be stored and accessed from the cloud. Key points include:
- HSDS maps HDF5 objects like datasets and groups to individual cloud storage objects for scalability and parallelism.
- Features include streaming support, fancy indexing for complex queries, and caching for improved performance.
- HSDS can be deployed on Docker, Kubernetes, or AWS Lambda depending on needs.
- Case studies show HSDS is used by organizations like NREL and NSF to make petabytes of scientific data publicly accessible in the cloud.
This document discusses creating cloud-optimized HDF5 files by rearranging internal structures for more efficient data access in cloud object stores. It describes cloud-native and cloud-optimized storage formats, with the latter involving storing the entire HDF5 file as a single object. The benefits of cloud-optimized HDF5 include fast scanning and using the HDF5 library. Key aspects covered include using optimal chunk sizes, compression, and minimizing variable-length datatypes.
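Two of those aspects, chunked compression and avoiding variable-length datatypes, can be sketched with h5py as follows (dataset names, sizes, and the station-ID strings are illustrative, not from the document):

```python
import h5py
import numpy as np

with h5py.File("packed.h5", "w") as f:
    # Compress each chunk with gzip; a chunked, compressed dataset still
    # reads efficiently from object storage because every chunk is one
    # contiguous byte span that a single ranged GET can fetch.
    f.create_dataset("obs",
                     data=np.zeros((2048, 2048), dtype="f4"),
                     chunks=(512, 512),
                     compression="gzip",
                     compression_opts=4)
    # Prefer fixed-length strings over variable-length ones: fixed-length
    # values are stored inline in the chunk, while variable-length data
    # adds a heap indirection that is costly to follow in an object store.
    f.create_dataset("station_ids",
                     data=np.array([b"GHCND:USW00094846",
                                    b"GHCND:USW00014819"], dtype="S20"))
```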
This document discusses updates and performance improvements to the HDF5 OPeNDAP data handler. It provides a history of the handler since 2001 and describes recent updates including supporting DAP4, new data types, and NetCDF data models. A performance study showed that passing compressed HDF5 data through the handler without decompressing/recompressing led to speedups of around 17-30x by leveraging HDF5 direct I/O APIs. This allows outputting HDF5 files as NetCDF files much faster through the handler.
This document provides instructions for using the Hyrax software to serve scientific data files stored on Amazon S3 using the OPeNDAP data access protocol. It describes how to generate ancillary metadata files called DMR++ files using the get_dmrpp tool that provide information about the data file structure and locations. The document explains how to run get_dmrpp inside a Docker container to process data files on S3 and generate customized DMR++ files that the Hyrax server can use to serve the files to clients.
This document provides an overview and examples of accessing cloud data and services using the Earthdata Login (EDL), Pydap, and MATLAB. It discusses some common problems users encounter, such as being unable to access HDF5 data on AWS S3 using MATLAB or read data from OPeNDAP servers using Pydap. Solutions presented include using EDL to get temporary AWS tokens for S3 access in MATLAB and providing code examples on the HDFEOS website to help users access S3 data and OPeNDAP services. The document also notes some limitations, such as tokens being valid for only 1 hour, and workarounds like requesting new tokens or using the MATLAB HDF5 API instead of the netCDF API.
The HDF5 Roadmap and New Features document outlines upcoming changes and improvements to the HDF5 library. Key points include:
- HDF5 1.13.x releases will include new features like selection I/O, the Onion VFD for versioned files, improved VFD SWMR for single-writer multiple-reader access, and subfiling for parallel I/O.
- The Virtual Object Layer allows customizing HDF5 object storage and introduces terminal and pass-through connectors.
- The Onion VFD stores versions of HDF5 files in a separate onion file for versioned access.
- VFD SWMR improves on legacy SWMR by implementing single-writer multiple-reader capabilities.
This document discusses user analysis of the HDFEOS.org website and plans for future improvements. It finds that the majority of the site's 100 daily users are "quiet", not posting on forums or other interactive elements. The main user types are locators, who search for examples or data; mergers, who combine or mosaic datasets; and converters, who change file formats. The document outlines recent updates focused on these user types, like adding Python examples for subsetting and calculating latitude and longitude. It proposes future work on artificial intelligence/machine learning uses of HDF files and examples for processing HDF data in the cloud.
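A subsetting example of the kind the site provides might look like the following h5py sketch; the granule layout, dataset names, and latitude range are all hypothetical stand-ins, since a real example would open a downloaded NASA granule:

```python
import h5py
import numpy as np

# Build a small stand-in file; a real workflow would open an HDF granule.
with h5py.File("granule.h5", "w") as f:
    f["Latitude"] = np.linspace(-90.0, 90.0, 180)
    f["SST"] = np.arange(180, dtype="f4")

# Subset: keep only the samples between 30N and 60N, using the
# geolocation dataset as a boolean mask over the data dataset.
with h5py.File("granule.h5", "r") as f:
    lat = f["Latitude"][:]
    mask = (lat >= 30.0) & (lat <= 60.0)
    subset = f["SST"][mask]
# subset now holds only the mid-latitude samples
```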
This document summarizes a presentation about the current status and future directions of the Hierarchical Data Format (HDF) software. It provides updates on recent HDF5 releases, development efforts including new compression methods and ways to access HDF5 data, and outreach resources. It concludes by inviting the audience to share wishes for future HDF development.
The document describes H5Coro, a new C++ library for reading HDF5 files from cloud storage. H5Coro was created to optimize HDF5 reading for cloud environments by minimizing I/O operations through caching and efficient HTTP requests. Performance tests showed H5Coro was 77-132x faster than the previous HDF5 library at reading HDF5 data from Amazon S3 for NASA's SlideRule project. H5Coro supports common HDF5 elements but does not support writing or some complex HDF5 data types and messages to focus on optimized read-only performance for time series data stored sequentially in memory.
This document summarizes MathWorks' work to modernize MATLAB's support for HDF5. Key points include:
1) MATLAB now supports HDF5 1.10.7 features like single-writer/multiple-reader access and virtual datasets through new and updated low-level functions.
2) Performance benchmarks show some improvements but also regressions compared to the previous HDF5 version, and work continues to optimize code and support future versions.
3) There are compatibility considerations for Linux filter plugins, but interim solutions are provided until MathWorks can ship a single HDF5 version.
HSDS provides HDF as a service through a REST API that can scale across nodes. New releases will enable serverless operation using AWS Lambda or direct client access without a server. This allows HDF data to be accessed remotely without managing servers. HSDS stores each HDF object separately, making it compatible with cloud object storage. Performance on AWS Lambda is slower than a dedicated server but has no management overhead. Direct client access has better performance but limits collaboration between clients.
HDF5 and Zarr are data formats that can be used to store and access scientific data. This presentation discusses approaches to translating between the two formats. It describes how HDF5 files were translated to the Zarr format by creating a separate Zarr store to hold HDF5 file chunks, and storing chunk location metadata. It also discusses an implementation that translates Zarr data to the HDF5 format by using a special chunking layout and storing chunk information in an HDF5 compound dataset. Limitations of the translations include lack of support for some HDF5 dataset properties in Zarr, and lack of support for some Zarr compression methods in the HDF5 implementation.
The document discusses HDF for the cloud, including new features of the HDF Server and what's next. Key points:
- HDF Server uses a "sharded schema" that maps HDF5 objects to individual storage objects, allowing parallel access and updates without transferring entire files.
- Implementations include HSDS software that uses the sharded schema with an API and SDKs for different languages like h5pyd for Python.
- New features of HSDS 0.6 include support for POSIX, Azure, AWS Lambda, and role-based access control.
- Future work includes direct access to storage without a server intermediary for some use cases.
This document compares different methods for accessing HDF and netCDF files stored on Amazon S3, including Apache Drill, THREDDS Data Server (TDS), and HDF5 Virtual File Driver (VFD). A benchmark test of accessing a 24GB HDF5/netCDF-4 file on S3 from Amazon EC2 found that TDS performed the best, responding within 2 minutes, while Apache Drill failed after 7 minutes. The document concludes that TDS 5.0 is the clear winner based on performance and support for role-based access control and HDF4 files, but the best solution depends on use case and software.
This document discusses STARE-PODS, a proposal to NASA/ACCESS-19 to develop a scalable data store for earth science data using the SpatioTemporal Adaptive Resolution Encoding (STARE) indexing scheme. STARE allows diverse earth science data to be unified and indexed, enabling the data to be partitioned and stored in a Parallel Optimized Data Store (PODS) for efficient analysis. The HDF Virtual Object Layer and Virtual Data Set technologies can then provide interfaces to access the data in STARE-PODS in a familiar way. The goal is for STARE-PODS to organize diverse data for alignment and parallel/distributed storage and processing to enable integrative analysis at scale.
This document summarizes new features in HDF5 1.12.0, including support for storing references to objects and attributes across files, new storage backends using a virtual object layer (VOL), and virtual file drivers (VFDs) for Amazon S3 and HDFS. It outlines the HDF5 roadmap for 2019-2022, which includes continued support for HDF5 1.8 and 1.10, and new features in future 1.12.x releases like querying, indexing, and provenance tracking.
The document discusses leveraging cloud resources like Amazon Web Services to improve software testing for the HDF group. Currently HDF software is tested on various in-house systems, but moving more testing to the cloud could provide better coverage of operating systems and distributions at a lower cost. AWS spot instances are being used to run HDF5 build and regression tests across different Linux distributions in around 30 minutes for approximately $0.02 per hour.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Â
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Â
HDF Update
1. The HDF Group
HDF Update
Mike Folk
The HDF Group
The 14th HDF and HDF-EOS Workshop
September 28-30, 2010
HDF/HDF-EOS Workshop XIV
www.hdfgroup.org
2. Topics
What's up with The HDF Group?
Library Update
Tools update
HDF Java Products
Library development in the works
Other activities
3. The HDF Group
What's up with The HDF Group?
4. The HDF Group
What is
The HDF Group
And why does it exist?
5. The HDF Group
• A company dedicated to supporting HDF and
its users
• 18 years at University of Illinois National
Center for Supercomputing Applications
• 5 years non-profit “The HDF Group”
• The HDF Group owns HDF4 and HDF5
6. Data challenges addressed by HDF
• Need to organize complex collections of data
lat | lon | temp
----|-----|-----
 12 |  23 | 3.1
 15 |  24 | 4.2
 17 |  21 | 3.6
• Long term data preservation
• Efficient, scalable storage and access
7. The HDF Group Services
• Helpdesk and Mailing Lists
• Available to all users as a first level of support
• Standard Support
• Rapid issue resolution and advice
• Consulting
• Needs assessment, troubleshooting, design reviews, etc.
• Training
• Tutorials and hands-on practical experience
• Enterprise Support
• Supporting many HDF activities across organizations
• Special Projects
• Adapting customer applications to HDF
• New features and tools
• Research and Development
8. Members of the HDF support community
Army Test and Evaluation Command
9. Some areas of increased recent interest
• Improvements
• Concurrent access
• Remote access
• Parallel I/O performance
• Real-time write performance
• High level language support
• Life sciences
• Sequencing
• Biomedical imaging
• Database integration
• Microsoft products (HPC, .NET, others)
10. Topics
What's up with The HDF Group?
Library Update
Tools Update
HDF Java Products
Library development in the works
Other activities
11. The HDF Group
Software Releases
Highlights
HDF4
13. HDF5 1.8.4 minor release (Nov 09)
• New features
• Embedded library information in executable
• UNIX “strings” command pulls the info
• h5diff: Added system “epsilon” for comparing floating-point
datasets
• h5diff: Infinity is treated as a number (vs. NaN); a dataset
compared to itself is always “the same” now
• Bugs
• Corrected a problem where the library would touch a file
opened with R/W permissions even when no changes
were made
• HDF5 configure no longer modifies CFLAGS set by a user
• Corrected a problem with deleting many objects in a heap
that caused a file to become unreadable.
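The "system epsilon" idea above (h5diff treating two floating-point values as equal when they differ only by rounding noise) can be sketched in pure Python. This is only a conceptual illustration of the comparison rule, not h5diff's actual C implementation; the function name `close_enough` is made up for the demo.

```python
import sys

def close_enough(a, b, ulps=1):
    """Treat two floats as equal if they differ by at most a few
    machine epsilons relative to their magnitude. This mirrors the
    idea behind h5diff's "system epsilon" mode (illustrative only)."""
    eps = sys.float_info.epsilon  # ~2.22e-16 for IEEE 754 doubles
    return abs(a - b) <= ulps * eps * max(abs(a), abs(b))

# A value that picked up rounding error compares as "the same",
# while an exact == comparison would report a difference:
x = 0.1 + 0.2
print(close_enough(x, 0.3))    # True
print(x == 0.3)                # False
print(close_enough(1.0, 1.1))  # False: a real difference
```

This is why a dataset compared to itself (or to a bit-identical copy) is now always reported as "the same" while genuine differences are still caught.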
14. HDF5 1.8.4-patch1 (Feb 10)
• Bug reported by netCDF-4 users: some files
created on big-endian machines could not be
read on little-endian systems
• A problem with encoding fractal heap IDs for
attributes and shared object header messages in
releases 1.8.0-1.8.4
• Only files created according to the scenario described at
http://www.hdfgroup.org/HDF5/release/known_problems/
are affected
• Please contact help@hdfgroup.org if you need
help with such files
15. HDF5 1.8.5 minor release (Jun 10)
• New features
• CMake support is added for Windows and Linux
• Configure adds appropriate defines for supporting
large (64-bit) files on all systems, instead of
only Linux (e.g., Solaris 32-bit)
• h5dump: added display of packed bits (a.k.a.
quality flags)
• h5diff: better support for symbolic and external
links
• Added support for AIX 6.1
• Bugs
• Enabled -O3 optimization with gcc
16. HDF5 1.8.5-patch1 (Feb 10)
• Potential file corruption problem reported by the
SMHI (Swedish Meteorological and Hydrological
Institute) developers
• Introduced in 1.8.5
• Occurs when using non-default sizes of
addresses and/or lengths for file creation
• Switch to 1.8.5-patch1 immediately if you use
such creation properties for the files
• THG is working with SMHI to get access to the
files and to include them in backward/forward
compatibility testing
17. Preview: HDF5 1.8.6 minor release (Oct 10)
• New features
• Added support for thread safety on Windows
using the Windows threads library.
• Improved I/O performance on datasets with the
“same shape” but different ranks (e.g., writing
from 2D array to a 2D plane in 3D dataset in a
file)
• Added support for Sun C and C++ 5.10 and Sun
Fortran 95 8.4
• h5ls: added new feature to follow symbolic links
• Bugs
• Fixed numerous memory leak problems
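The "same shape but different ranks" improvement above refers to transfers like copying a 2D memory buffer into one 2D plane of a 3D file dataset, which HDF5 does via a hyperslab selection. A stdlib-only Python sketch of what that selection means (the nested lists and the `write_plane` helper are illustrative stand-ins, not the HDF5 API):

```python
# Conceptual sketch: write a 2D array into plane k of a 3D dataset.
# In HDF5 this is H5Sselect_hyperslab on the file dataspace with a
# 2D memory buffer; here we model the dataset as nested lists.
def write_plane(dataset3d, k, plane2d):
    """Copy a 2D array into plane k of a 3D nested-list 'dataset'."""
    for i, row in enumerate(plane2d):
        for j, value in enumerate(row):
            dataset3d[k][i][j] = value

# 3 planes of a 2x2 dataset, initially zero
ds = [[[0, 0], [0, 0]] for _ in range(3)]
write_plane(ds, 1, [[1, 2], [3, 4]])
print(ds[1])  # [[1, 2], [3, 4]]
print(ds[0])  # untouched plane: [[0, 0], [0, 0]]
```

The 1.8.6 work made this rank-mismatched transfer path faster; the selection semantics are unchanged.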
18. HDF 4.2.5 minor release (Feb 10)
• Enhanced the library to handle long Vgroup names and
class names
• Many files use name lengths greater than the
64-character default
• Added new functions Vgetnamelen and
Vgetclassnamelen to find the name lengths
• Enhanced hdp to display SDSs in a specified order
(vs. index order)
• Added support for AIX 6.1, Mac Intel 64-bit with GNU
and Intel compilers
• Added all User’s Guide examples to the source code
for better support and regression testing
• Cleaned up a lot of obsolete code
19. Preview: HDF 4.2.6 minor release (Feb 11)
• CMake to build on Windows, Linux and Mac
• New functions added to support H4 mapping
project
• Applications can find the location of data in
HDF4 files; the map can be used to read data without
the HDF4 library, e.g., using a C program to seek to and
read the data back
• Functions to return location and sizes of metadata for
SDSs, Images, Vgroups and Vdatas, Labels and
Annotations
• Functions to return location and sizes of raw data for SDSs,
Images, Vgroups and Vdatas, Labels and Annotations
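The mapping idea above can be illustrated without any HDF4 code: once a map supplies the byte offset and length of an object's raw data, any program can seek and read it directly. A minimal stdlib Python sketch; the file layout and the offset/length values here are fabricated for the demo, a real map would come from the new HDF4 functions:

```python
import os, struct, tempfile

# Build a fake data file: 16 bytes of "metadata", then 4 int32 values.
payload = struct.pack("<4i", 10, 20, 30, 40)
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * 16)  # stand-in for file metadata
    f.write(payload)

# A layout map would tell a reader: offset=16, length=16, type=int32.
offset, length = 16, 16
with open(path, "rb") as f:
    f.seek(offset)        # jump straight to the raw data...
    raw = f.read(length)  # ...no HDF4 library needed
values = struct.unpack("<4i", raw)
print(values)             # (10, 20, 30, 40)
os.remove(path)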
20. H4-H5 Conversion Software 2.1.1 (Feb 10)
• Based on HDF 4.2.5 and HDF5-1.8.5
• Added support for Windows 64-bit
• A new release in Oct 2010 will have a
minor bug fix and use the HDF5 1.8.6 release
• Future work: move to CMake for better
Windows support
21. H5check (Apr 10)
• Many bug fixes
• Added support for Solaris 64-bit
• Improved configuration step
• Future releases depend on bug/enhancement
requests and possible file format changes in
future HDF5 versions
22. Lessons learned or what we do
• Testing, testing, testing
• Regression testing on major platforms
• Linux, Solaris, FreeBSD, Windows, Mac, AIX,
SGI Altix
• Little-endian and big-endian platforms
• 32 and 64-bit
• Variety of compilers (e.g., gcc 4.3.*, 4.4,*,
Intel, PGI, Absoft, IBM, Sun)
23. Lessons learned or what we do
• Backward/forward compatibility testing (file format and
APIs)
• Third-party software testing (netCDF-4 and HDF-EOS2/5)
• Performance testing
• Assure that fixes and new features do not harm
performance
• Software quality analysis with special tools – Coverity
sessions
• In the process of revising current regression tests
• Adding more tests for the libraries and tools
• Adding different levels of tests (current tests take too much
time already)
• Enabling regression tests with valgrind
24. Lessons learned or how you can help
• Need your help
• Participate in the pre-release testing
• Announced on hdf-forum mailing list
• Let us know if you are interested, we will contact you
individually
• Give us your files to include in backward/forward
compatibility testing
• Tell us about your applications or send us examples
of your HDF code (both HDF4 and HDF5)
• Tell us how you use command line tools,
HDFView, documentation, APIs, etc.
• Send email to help@hdfgroup.org
• Post on hdf-forum mailing list
25. Topics
What's up with The HDF Group?
Library Update
Tools update
HDF Java Products
Library development in the works
Other activities
26. Command line tools
• Peter will cover these in detail in the tools update.
• Improvements to
• h5repack
• h5copy
• h5diff
• h5ls
• New tools in development
• h5watch - allows user to monitor growth of a dataset
• h5edit - add/remove/modify data or metadata
• Give us feedback!
27. Topics
What's up with The HDF Group?
Library Update
Tools update
HDF Java Products
Library development in the works
Other activities
28. Support HDF5 1.8
• HDF5 JNI
• Over 100 new functions added to Java Interface (JHI5)
• Unit tests added for new functions & some HDF5 1.6 functions
• Many features added to Object Layer & HDFView, such as:
• Support for external links
• Attribute renaming
• Some features removed, including:
• Setting link creation order and link storage type
• Showing groups and attributes in creation order (Object Layer)
• Creating soft and external links
• Retrieving link information
• Renaming attributes
29. Topics
What's up with The HDF Group?
Library Update
Tools update
HDF Java Products
Library development in the works
Other activities
30. New capabilities in the works
• Single-Writer/Multiple-Reader (SWMR) Access
• Allows simultaneous reading of HDF5 file while
the file is being modified by another process
• Better Multi-Threaded Concurrency
• Improve ability to have multiple threads
performing HDF5 operations simultaneously
• Recent parallel I/O improvements
• Changes to reduce redundancy and
communication (available in 1.8.6 release)
31. Other Library Features
• Saving space
• Persistent File Free Space tracking/recovery
• Allow a group’s link info to be compressed
• Saving time
• New chunk indexing methods
• Aggregate metadata for faster metadata I/O
• Asynchronous metadata I/O operations
• Preserving file in case of crash
• Separately journal metadata changes to file
• Re-order updates to metadata
32. Parallel I/O Improvement - Partnerships
• Improve performance on parallel apps
• Add features anticipating exascale systems
33. Future Parallel I/O Improvements
• High-level “HPC” API
• Fast indexing for HDF5 files (FastBit)
• I/O performance tracking, testing and tuning
• HPC specific “fast-tracking”
• Virtual file driver enhancements
• Auto-tuning to underlying parallel file system
34. Recent NSF proposals for new features
• New built-in datatypes
• Boolean, complex, C99 types, etc.
• Expand coverage of attributes
• Attributes for individual fields of compound type
• Attributes for regions within dataspace
• Store compound datatypes in columns (per field)
• Allow shared dataspaces in file
• Improve HPC performance
• Facilitate remote access
35. Topics
What's up with The HDF Group?
Library Update
Tools update
HDF Java Products
Library development in the works
Other activities
36. The HDF Group
HDF-EOS Support
37. EOS support
• HDF-EOS2 and HDF-EOS5
• Continue testing daily with HDF4 and HDF5
development code
• Updated and maintained the HDF-EOS
website
38. The Updated HDF-EOS website
• Software
• Evaluating many packages
• Examples
• Adding examples for many
NASA products
• Forums
• Moderating the forum
http://hdfeos.org
39. NCL/IDL/MATLAB examples
• Many examples from different NASA data centers
• Example codes and plots
40. An example to access AIRS Swath
• Directly read the lat/lon and use polar view
…
data=eos_file->radiances_L2_Standard_cloud_cleared_radiance_product(:,:,0) ; read specific subset of data field
; In order to read the radiances data field from the HDF-EOS2 file, the group
; under which the data field is placed must be appended to the data field in NCL.
; For more information, visit section 4.3.2 of http://hdfeos.org/software/ncl.php.
data@lat2d=eos_file->Latitude_L2_Standard_cloud_cleared_radiance_product ; associate latitude
data@lon2d=eos_file->Longitude_L2_Standard_cloud_cleared_radiance_product ; associate longitude
data@_FillValue=-9999
…
res@gsnCenterString="radiances at Channel=567"
plot(2)=gsn_csm_contour_map_polar(xwks,data_2,res)
res@gsnCenterString="radiances at Channel=1339"
plot(3)=gsn_csm_contour_map_polar(xwks,data_3,res)
delete(plot) ; clean up resources
delete(data)
NCL
43. HDF-EOS5 and NetCDF-4
• Enabling NetCDF4 to access HDF-EOS5 data
• One file can be used for both EOS5 and NetCDF-4.
• Note that EOS5 users are not affected at all.
Diagram: the Augmentation tool converts an HDF-EOS5 file (HDF-EOS5 on HDF5) into an augmented HDF-EOS5 file that can also be read as a NetCDF-4 file (NetCDF-4 on HDF5).
44. The Main Challenge
• Would like netCDF-4 applications to be able
to read and understand HDF-EOS 5 files
• Problem: NetCDF-4 model follows the HDF5
dimension scale model but HDF-EOS5 does
not.
Example: the variables CloudFraction and CloudPressure under HDFEOS/GRIDS/CloudFractionAndPressure/Data Fields have no HDF5 dimension scales associated with them.
45. The HDF Group
HDF-EOS2 dumper
46. HDF-EOS2 dumper - motivation
• HDF-EOS2 Grid
• Latitude and longitude values are not stored
inside the file.
• It is not straightforward for users to calculate
the latitude and longitude for some
projections.
• HDF-EOS2 Swath using dimension map
• Latitude/longitude values are provided either
in a separate HDF-EOS2 file or need to be
interpolated.
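The swath case above (geolocation stored on a coarser grid than the data, related to it by a dimension map's offset and increment) is what makes lat/lon non-trivial to recover. A stdlib Python sketch of the linear interpolation involved; the offset/increment values and the function name are illustrative, not read from a real HDF-EOS2 file:

```python
def interpolate_geolocation(coarse, offset, increment, n):
    """Expand a coarse 1D geolocation track to n data-grid positions.

    Per the HDF-EOS2 dimension-map idea, coarse sample i sits at data
    index offset + i*increment; positions in between are linearly
    interpolated (offset/increment here are illustrative).
    """
    out = []
    for j in range(n):
        # position of data index j in coarse-sample coordinates
        t = (j - offset) / increment
        i = max(0, min(len(coarse) - 2, int(t)))  # clamp so edges extrapolate
        frac = t - i
        out.append(coarse[i] + frac * (coarse[i + 1] - coarse[i]))
    return out

lats = [10.0, 12.0, 14.0]  # coarse latitudes, one every 2 data pixels
print(interpolate_geolocation(lats, offset=0, increment=2, n=5))
# [10.0, 11.0, 12.0, 13.0, 14.0]
```

The dumper does this kind of work for the user and writes the resulting lat/lon values out as ASCII.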
47. HDF-EOS2 dumper
• This EOS2 dumper can be used to quickly
obtain the latitude and longitude data
• It is a command-line tool only supported on
Linux
• The output is in ASCII format
• The dumper is used to generate some HDF-EOS2 plots via IDL, NCL, and MATLAB
48. More information
• Augmentation tool
http://hdfeos.org/software/aug_hdfeos5.php
• HDF-EOS2 dumper
http://www.hdfeos.org/software/eosdump.php
50. OPeNDAP Update
• HDF4-OPeNDAP handler
• Access many NASA HDF-EOS and HDF4
products
• HDF5-OPeNDAP handler
• Access MLS/HIRDLS Swath data and bug fixes
• More information in the afternoon session
51. The HDF Group
HDF Group Support for
NPP/NPOESS/JPSS
52. 2009-2010 Priorities
• Implement software to simplify working with
NPOESS data
• Include changes in mainstream
• Begin work on an h5edit tool
• Testing on NASA mini-IDPS system
• Regular meetings with NPOESS community
• High priority helpdesk support
53. 2010-2011 Priorities
• Deploy/maintain new software for working
with HDF5 objects used by NPOESS
• Implement h5edit tool
• Help facilitate access to NPOESS data by
netCDF applications
• Streamline testing on NASA mini-IDPS
• User support
See Presentation Thursday
55. HDF4 Layout Map Project
• Problem
• Long-term readability of HDF data depends
on long-term availability of software
• Proposed solution
• Create a map of the layout of data objects in
an HDF file, allowing a simple reader to be
written to access the data
See Presentation Thursday
56. EXPLOITING HDF5 TO REPRESENT GEO-INFORMATION: AN EXAMPLE WITH COMPLEX TERRAIN DATA
58. NIH STTR with Geospiza, Seattle WA
BioHDF™: TOWARD SCALABLE BIOINFORMATICS INFRASTRUCTURES
59. BioHDF Project
• Goal: Reduce need to organize and structure
data, so researchers can focus on asking
questions and visualizing data
• Develop data models and tools to work with
sequence data in HDF5
• Integrate BioHDF technologies into Geospiza
products
• Deliver core BioHDF technologies to the
community as open-source software
60. The HDF Group
Thank You All
and
Thank You NASA!
61. Acknowledgements
This report is based on work supported by
cooperative agreement number NNX08AO77A from
the National Aeronautics and Space Administration
(NASA).
Any opinions, findings, conclusions, or
recommendations expressed in this material are
those of the author[s] and do not necessarily reflect
the views of the National Aeronautics and Space
Administration.
Why: Increasing need for support, services, and quick response; not a good model for a University R&D project.
Who: 11 software engineers and several students who develop and maintain HDF software, work on special projects, and manage projects; 3 tech support staff for helpdesk, documentation, and sysadmin.
Management team: President; Director of Technical Services and Operations; Director of Software Development; Director of Business Operations; managers responsible for tools and applications.
Other THG staff include seven full-time software engineers who develop and maintain the HDF software, as well as working on special projects, and three technical support staff who provide helpdesk support, documentation, and system administration. The HDF Group also generally employs students from the University's Computer Science and Engineering departments.
NASA (EOS); NOAA/NASA/Riverside Tech (NPOESS/JPSS); Army Geospatial Center; a leading U.S. aerospace company; NIH/Geospiza (bio software company); University of Illinois/NCSA; Sandia National Laboratory; Lawrence Berkeley National Lab; projects in the petroleum industry, chip design, finance, and others; "in kind" support.
1.8 had two patch releases along with the scheduled ones. 1.8.6 will come a month earlier due to the office move. A new release of h5check depends on future file format changes and reported bugs in the tool; it doesn't depend on the HDF5 library at all, only on the file format.
Store partial edge chunks more efficiently: allow the application to control whether partially used chunks at the edges of datasets are compressed and/or allocated as full chunks in the file. Persistent file free space tracking: no more "forgetting where all the free space in the file is" when the file is closed. Allow a group's heaps (which store link info) to be compressed.
Examples for fast-tracking: (1) you know you will never do partial I/O. (2) You know you will never want to reclaim space in a file.
In general, for all the following slides: specify the tool name under the plot, and the font should not be too small. Add another code section. Amplify data@lat2d=eos_file->Latitude_L2_Standard_cloud_cleared_radiance_product. Amplify plot(3)=gsn_csm_contour_map_polar(xwks,data_3,res…). Replace the plot with channel 567.
Basically it's a tool for getting lat/lon from HDF-EOS2 files. The tool was created a couple of years ago for internal use by Choonhwan. Kent found it convenient for finding lat/lon, and had a student test and improve it and fix bugs. Now it's available.