ENVI and IDL software support HDF and HDF-EOS. Capabilities and the HDF tools built on ENVI and IDL will be reviewed. The current development will be discussed and demonstrated.
Domain Specific Languages: An introduction (DSLs)Pedro Silva
Domain Specific Languages (DSLs) are special-purpose programming languages developed for a specific domain.
Some of its most interesting benefits include:
● increasing productivity
○ by reducing
■ the lines of code that have to be written manually
■ the number of coding errors
● (due to automatic domain restrictions)
● test generation
● formal verification
(Check my books at https://beacons.ai/tagido)
The Development QA Reports for PeopleSoft enables an organization to perform smarter quality assurance of developed objects and code, by automating the detection of QA issues during release management. The reports also enable corrective actions to be made more efficiently by partitioning the language-dependent aspects of PeopleSoft development to those resources whose skills in a particular language are strongest,
Non-Comparable Object Tools for PeopleSoftLeandro Baca
The Non-Comparable Object Tools enable you to use PIA to generate Compare Reports of non-comparable PeopleTools objects (Roles, Trees, Message Catalog Entries, etc.), copy them between environments, and export/import them through file.
The document discusses source control implementation goals for a large distributed PowerBuilder development project. It proposes a two-tiered archive structure with both file-level and object-level source control to address the needs of the PowerBuilder development model. A key part of the model is a "staircase methodology" involving development, unit testing, and other stages that allows for isolated environments and reliable releases while supporting the entire application lifecycle.
There is a huge amount of data out there and a great deal of power and insight that we can gain from it — if we can just bring it all into focus and make it more manageable. Many industrial organizations are accomplishing this by building sophisticated HMI, SCADA, and MES projects with the Ignition Perspective Module.
This document provides an agenda and overview for a two-day training on software architecture. Day 1 will cover defining software architecture, decomposition strategies like layers and tiers, and service-level requirements. Day 2 will discuss technologies used in different tiers, integration, security, and other topics. Ground rules are provided for the training. The document then defines software architecture and the differences between architecture, design, and coding. Common decomposition strategies and architectural drivers are also outlined.
Five Pain Points of Agile Development (And How Software Version Management Ca...Perforce
The latest research on Software Configuration Management suggests that developers are struggling in five key areas: latency, far-flung teams, ad-hoc workflows, administrative overhead, and integration nightmares.
This webcast will help you understand how these five factors are undermining developer productivity and performance.
As modern practices strain some tools to their limits, companies are revisiting their approaches to version management. We will share with you...
* How the market is evolving to address these critical issues
* How innovative SCM tools can take your versioning to new levels.
Domain Specific Languages: An introduction (DSLs)Pedro Silva
Domain Specific Languages (DSLs) are special-purpose programming languages developed for a specific domain.
Some of its most interesting benefits include:
● increasing productivity
○ by reducing
■ the lines of code that have to be written manually
■ the number of coding errors
● (due to automatic domain restrictions)
● test generation
● formal verification
(Check my books at https://beacons.ai/tagido)
The Development QA Reports for PeopleSoft enables an organization to perform smarter quality assurance of developed objects and code, by automating the detection of QA issues during release management. The reports also enable corrective actions to be made more efficiently by partitioning the language-dependent aspects of PeopleSoft development to those resources whose skills in a particular language are strongest,
Non-Comparable Object Tools for PeopleSoftLeandro Baca
The Non-Comparable Object Tools enable you to use PIA to generate Compare Reports of non-comparable PeopleTools objects (Roles, Trees, Message Catalog Entries, etc.), copy them between environments, and export/import them through file.
The document discusses source control implementation goals for a large distributed PowerBuilder development project. It proposes a two-tiered archive structure with both file-level and object-level source control to address the needs of the PowerBuilder development model. A key part of the model is a "staircase methodology" involving development, unit testing, and other stages that allows for isolated environments and reliable releases while supporting the entire application lifecycle.
There is a huge amount of data out there and a great deal of power and insight that we can gain from it — if we can just bring it all into focus and make it more manageable. Many industrial organizations are accomplishing this by building sophisticated HMI, SCADA, and MES projects with the Ignition Perspective Module.
This document provides an agenda and overview for a two-day training on software architecture. Day 1 will cover defining software architecture, decomposition strategies like layers and tiers, and service-level requirements. Day 2 will discuss technologies used in different tiers, integration, security, and other topics. Ground rules are provided for the training. The document then defines software architecture and the differences between architecture, design, and coding. Common decomposition strategies and architectural drivers are also outlined.
Five Pain Points of Agile Development (And How Software Version Management Ca...Perforce
The latest research on Software Configuration Management suggests that developers are struggling in five key areas: latency, far-flung teams, ad-hoc workflows, administrative overhead, and integration nightmares.
This webcast will help you understand how these five factors are undermining developer productivity and performance.
As modern practices strain some tools to their limits, companies are revisiting their approaches to version management. We will share with you...
* How the market is evolving to address these critical issues
* How innovative SCM tools can take your versioning to new levels.
Unlike other mobile file access and collaborative file sharing solutions, Micro Focus Filr (formerly Novell Filr) has been designed with the enterprise in mind, resulting in less administration, better security, and more productive users.
Arkadiy Kogan has over 15 years of experience as a senior software engineer developing tools and applications to improve productivity and automation. He has a background in languages like Perl, Java, and databases like PostgreSQL and Oracle. The document provides details on his work history at EMC2 and ClearStory Systems where he developed custom applications, user interfaces, and databases to support testing, release management, and digital asset management systems.
Design Like a Pro: How to Pick the Right System ArchitectureInductive Automation
Whether your automation project has only a few tags or hundreds of thousands of tags, you need to make sure that it will work properly now and that it has enough room to grow in the future. Having the right architecture and server sizes are absolutely essential in reaching this goal.
The document provides an overview of a webinar comparing the CGM and SVG file formats. The webinar agenda includes introductions, a presentation by a guest speaker, interactive polls, and a summary. The document outlines details about the guest presenter and their background. It also provides background information on CGM and SVG, comparing their file sizes. Interactive polls are presented to engage participants. The CGM and SVG sections provide overviews of the properties and features of each file format.
The document discusses 5 common pain points of agile development: 1) Latency caused by global teams and continuous integration, 2) Workflow challenges of tracking versions across components and non-code assets, 3) Governance issues around versioning for legal/compliance purposes and reconstructing historic builds, 4) High administrative overhead of some SCM systems, and 5) Integration nightmares between the SCM and the rest of the ALM stack. It provides examples from companies like Accelrys, NVIDIA, NYSE Euronext, Trend Micro, and NetApp of how they addressed these pain points with Perforce's SCM system.
Codiad is a web-based IDE framework that has a small footprint and minimal requirements. It was built with simplicity and fast, interactive development in mind without the massive overhead of larger desktop editors. Codiad supports over 40 programming languages, has features like auto-complete, error checking, collaborative editing, customizable syntax highlighting, and can be run on a user's own server.
Docker Sydney: 5 Patterns for App Transformation with ContainersElton Stoneman
How to package legacy monoliths in contaienrs so they behave like new cloud-native apps - without changing code. Covers logging, configuration, dependency checking, healthchecks and monitoring with .NET Windows and Java Linux apps.
Getting started with CI involves setting up a connection to a version control repository, a build script, and a feedback mechanism. Key features of CI include continuously compiling source code changes, integrating database changes, running tests, inspecting code quality, and continuously deploying updates while enabling automatic rollbacks. Providing timely feedback through documentation is also critical for good CI systems.
Real-World Case Study: For Connecting CompactRIO's to Microsoft Azure IoTDMC, Inc.
The world is exploding with more connected devices and a growing need to store, share, and present data in increasingly powerful ways. Learn how to use Microsoft Azure IoT with CompactRIO to enable remote data collection stations with web access to both high-speed raw data and processed results.
This document provides an overview of Android application development. It introduces key concepts like the Android system architecture with multiple application components running on top of an Linux kernel. It demonstrates a simple "Hello World" application and covers major application components like Activities, Services, BroadcastReceivers and ContentProviders. It also discusses practical matters like storage, packaging, resources and application lifecycle. Finally, it introduces the Android development toolchain including the emulator, Eclipse plugin and debugging tools.
The document provides an agenda and summaries of presentations for the Chicago LabVIEW User Group meeting on June 20, 2019. The agenda includes presentations on NIWeek recap, NI Package Manager, and the JKI VI Package Manager. The summaries describe recent updates to NI products including LabVIEW 2019, NXG 3.1, and SystemLink as well as new CompactDAQ and FieldDAQ hardware.
This document discusses centralized logging in Lync Server 2013. It provides the following key points:
1. Lync Server 2013 introduces a centralized logging system (CLS) that allows administrators to start, stop, and search trace logs from all machines in a deployment from a single centralized location.
2. CLS consists of a CLSController PowerShell module that sends commands to CLSAgents running on each Lync Server, and the CLSAgents control the local logging and manage log files.
3. The document provides examples of using CLS commands to configure logging scenarios and providers, and search logs across the deployment.
Design Like a Pro: Basics of Building Mobile-Responsive HMIsInductive Automation
This document discusses responsive design principles for mobile applications. It covers topics like mobile design patterns, touch optimization, levels of depth, mobile-first design, and content as UI. It also describes common responsive layout patterns like mostly fluid, column drop, layout shifter, tiny tweaks, and off canvas. The document emphasizes that responsive design results in less work, more usability, and enables a mobile-focused mindset when building applications in Ignition 8.
Common Project Mistakes: Visualization, Alarms, and SecurityInductive Automation
Whether you’re a seasoned professional or new to industrial automation, everyone makes development mistakes now and then. But some mistakes are more common than others. Understanding how to avoid these integration issues will not only improve your current projects, but equip you with the tools and techniques necessary to streamline development and reduce rework in the future.
Developer Conference 1.4 - Customer In Focus- Nationwide (NY)Micro Focus
The document summarizes a project to upgrade two legacy applications at Nationwide from older development software, databases, and platforms to more modern versions. Key aspects of the project included upgrading from Micro Focus Net Express 3.1 to Visual COBOL 2.0, Oracle 9i to 11g, and Unix Solaris to Windows 2008 R2. The project involved over 1,200 programs and faced issues regarding interfaces, data conversion, and vendor support. After testing, the project implemented successfully in February 2013 and provided benefits like supported platforms, increased developer knowledge, and application simplification.
HDF-EOS is a software library designed to support NASA Earth Observing System (EOS) science data. HDF is the Hierarchical Data Format developed by The HDF Group. Specific data structures in HDF-EOS5 which are containers for science data are: Grid, Point, Zonal Average and Swath. These data structures are constructed from standard HDF5 data objects, using EOS conventions, through the use of a software library. This presentation is intended to familiarize current HDF-EOS users with the structure of HDF-EOS5 files and the Grid, Swath, Point and Zonal Average structures used in these files.
The document discusses migrating from HDF5 1.6 to HDF5 1.8. It provides an overview of new features in HDF5 1.8, including a revised file format, improvements to group storage, new link types like external links, and enhanced error handling. The document recommends helping with the transition to HDF5 1.8 by discussing beneficial new features and awareness of compatibility issues when moving from 1.6 to 1.8.
Are you curious what is coming next? Are you willing to try some prototyped software? If so, come to this talk to learn about new HDF5 features such as HDF5 FORTARN2003 APIs, metadata journaling etc. and how your application can benefit from using them.
As the volume and complexity of data from myriad Earth Observing platforms, both remote sensing and in-situ increases so does the demand for access to both data and information products from these data. The audience no longer is restricted to an investigator team with specialist science credentials. Non-specialist users from scientists from other disciplines, science-literate public, to teachers, to the general public and decision makers want access. What prevents them from this access to resources? It is the very complexity and specialist developed data formats, data set organizations and specialist terminology. What can be done in response? We must shift the burden from the user to the data provider. To achieve this our developed data infrastructures are likely to need greater degrees of internal code and data structure complexity to achieve (relatively) simpler end-user complexity. Evidence from numerous technical and consumer markets supports this scenario. We will cover the elements of modern data environments, what the new use cases are and how we can respond to them.
Data produced by the Ozone PEATE from the Ozone Mapping and Profiler Suite (OMPS) instruments are to be stored in HDF5, not HDF-EOS, but will still need some features similar to those in HDF-EOS. In particular, a mechanism for handling dimension names will be needed. This poster proposes a method to handle dimension names for arrays in HDF5 in a manner commensurate with HDF-EOS5.
Current status of HDF-EOS and access tools will be summarized. Update on HDF-EOS, HDFView plug-in and The HDF-EOS to GeoTIFF (HEG) conversion tool, including recent changes to the software, ongoing maintenance, upcoming releases, future plans, and issues will be discussed.
The document provides an overview of National Polar-orbiting Operational Satellite System (NPOESS) HDF5 files. Key points include:
1) NPOESS is a satellite system that collects environmental data related to weather, atmosphere, oceans, land, and near-space. Data products are distributed in HDF5 format.
2) NPOESS HDF5 files contain raw data records, sensor data records, intermediate products, application related products, and environmental data records.
3) Data is organized into granules and aggregations. Granules contain a segment of data and are referenced by aggregations, which group granules over a temporal range.
Unlike other mobile file access and collaborative file sharing solutions, Micro Focus Filr (formerly Novell Filr) has been designed with the enterprise in mind, resulting in less administration, better security, and more productive users.
Arkadiy Kogan has over 15 years of experience as a senior software engineer developing tools and applications to improve productivity and automation. He has a background in languages like Perl, Java, and databases like PostgreSQL and Oracle. The document provides details on his work history at EMC2 and ClearStory Systems where he developed custom applications, user interfaces, and databases to support testing, release management, and digital asset management systems.
Design Like a Pro: How to Pick the Right System ArchitectureInductive Automation
Whether your automation project has only a few tags or hundreds of thousands of tags, you need to make sure that it will work properly now and that it has enough room to grow in the future. Having the right architecture and server sizes are absolutely essential in reaching this goal.
The document provides an overview of a webinar comparing the CGM and SVG file formats. The webinar agenda includes introductions, a presentation by a guest speaker, interactive polls, and a summary. The document outlines details about the guest presenter and their background. It also provides background information on CGM and SVG, comparing their file sizes. Interactive polls are presented to engage participants. The CGM and SVG sections provide overviews of the properties and features of each file format.
The document discusses 5 common pain points of agile development: 1) Latency caused by global teams and continuous integration, 2) Workflow challenges of tracking versions across components and non-code assets, 3) Governance issues around versioning for legal/compliance purposes and reconstructing historic builds, 4) High administrative overhead of some SCM systems, and 5) Integration nightmares between the SCM and the rest of the ALM stack. It provides examples from companies like Accelrys, NVIDIA, NYSE Euronext, Trend Micro, and NetApp of how they addressed these pain points with Perforce's SCM system.
Codiad is a web-based IDE framework that has a small footprint and minimal requirements. It was built with simplicity and fast, interactive development in mind without the massive overhead of larger desktop editors. Codiad supports over 40 programming languages, has features like auto-complete, error checking, collaborative editing, customizable syntax highlighting, and can be run on a user's own server.
Docker Sydney: 5 Patterns for App Transformation with ContainersElton Stoneman
How to package legacy monoliths in contaienrs so they behave like new cloud-native apps - without changing code. Covers logging, configuration, dependency checking, healthchecks and monitoring with .NET Windows and Java Linux apps.
Getting started with CI involves setting up a connection to a version control repository, a build script, and a feedback mechanism. Key features of CI include continuously compiling source code changes, integrating database changes, running tests, inspecting code quality, and continuously deploying updates while enabling automatic rollbacks. Providing timely feedback through documentation is also critical for good CI systems.
Real-World Case Study: For Connecting CompactRIO's to Microsoft Azure IoTDMC, Inc.
The world is exploding with more connected devices and a growing need to store, share, and present data in increasingly powerful ways. Learn how to use Microsoft Azure IoT with CompactRIO to enable remote data collection stations with web access to both high-speed raw data and processed results.
This document provides an overview of Android application development. It introduces key concepts like the Android system architecture with multiple application components running on top of an Linux kernel. It demonstrates a simple "Hello World" application and covers major application components like Activities, Services, BroadcastReceivers and ContentProviders. It also discusses practical matters like storage, packaging, resources and application lifecycle. Finally, it introduces the Android development toolchain including the emulator, Eclipse plugin and debugging tools.
The document provides an agenda and summaries of presentations for the Chicago LabVIEW User Group meeting on June 20, 2019. The agenda includes presentations on NIWeek recap, NI Package Manager, and the JKI VI Package Manager. The summaries describe recent updates to NI products including LabVIEW 2019, NXG 3.1, and SystemLink as well as new CompactDAQ and FieldDAQ hardware.
This document discusses centralized logging in Lync Server 2013. It provides the following key points:
1. Lync Server 2013 introduces a centralized logging system (CLS) that allows administrators to start, stop, and search trace logs from all machines in a deployment from a single centralized location.
2. CLS consists of a CLSController PowerShell module that sends commands to CLSAgents running on each Lync Server, and the CLSAgents control the local logging and manage log files.
3. The document provides examples of using CLS commands to configure logging scenarios and providers, and search logs across the deployment.
Design Like a Pro: Basics of Building Mobile-Responsive HMIsInductive Automation
This document discusses responsive design principles for mobile applications. It covers topics like mobile design patterns, touch optimization, levels of depth, mobile-first design, and content as UI. It also describes common responsive layout patterns like mostly fluid, column drop, layout shifter, tiny tweaks, and off canvas. The document emphasizes that responsive design results in less work, more usability, and enables a mobile-focused mindset when building applications in Ignition 8.
Common Project Mistakes: Visualization, Alarms, and SecurityInductive Automation
Whether you’re a seasoned professional or new to industrial automation, everyone makes development mistakes now and then. But some mistakes are more common than others. Understanding how to avoid these integration issues will not only improve your current projects, but equip you with the tools and techniques necessary to streamline development and reduce rework in the future.
Developer Conference 1.4 - Customer In Focus- Nationwide (NY)Micro Focus
The document summarizes a project to upgrade two legacy applications at Nationwide from older development software, databases, and platforms to more modern versions. Key aspects of the project included upgrading from Micro Focus Net Express 3.1 to Visual COBOL 2.0, Oracle 9i to 11g, and Unix Solaris to Windows 2008 R2. The project involved over 1,200 programs and faced issues regarding interfaces, data conversion, and vendor support. After testing, the project implemented successfully in February 2013 and provided benefits like supported platforms, increased developer knowledge, and application simplification.
HDF-EOS is a software library designed to support NASA Earth Observing System (EOS) science data. HDF is the Hierarchical Data Format developed by The HDF Group. Specific data structures in HDF-EOS5 which are containers for science data are: Grid, Point, Zonal Average and Swath. These data structures are constructed from standard HDF5 data objects, using EOS conventions, through the use of a software library. This presentation is intended to familiarize current HDF-EOS users with the structure of HDF-EOS5 files and the Grid, Swath, Point and Zonal Average structures used in these files.
The document discusses migrating from HDF5 1.6 to HDF5 1.8. It provides an overview of new features in HDF5 1.8, including a revised file format, improvements to group storage, new link types like external links, and enhanced error handling. The document recommends helping with the transition to HDF5 1.8 by discussing beneficial new features and awareness of compatibility issues when moving from 1.6 to 1.8.
Are you curious what is coming next? Are you willing to try some prototyped software? If so, come to this talk to learn about new HDF5 features such as HDF5 FORTARN2003 APIs, metadata journaling etc. and how your application can benefit from using them.
As the volume and complexity of data from myriad Earth Observing platforms, both remote sensing and in-situ increases so does the demand for access to both data and information products from these data. The audience no longer is restricted to an investigator team with specialist science credentials. Non-specialist users from scientists from other disciplines, science-literate public, to teachers, to the general public and decision makers want access. What prevents them from this access to resources? It is the very complexity and specialist developed data formats, data set organizations and specialist terminology. What can be done in response? We must shift the burden from the user to the data provider. To achieve this our developed data infrastructures are likely to need greater degrees of internal code and data structure complexity to achieve (relatively) simpler end-user complexity. Evidence from numerous technical and consumer markets supports this scenario. We will cover the elements of modern data environments, what the new use cases are and how we can respond to them.
Data produced by the Ozone PEATE from the Ozone Mapping and Profiler Suite (OMPS) instruments are to be stored in HDF5, not HDF-EOS, but will still need some features similar to those in HDF-EOS. In particular, a mechanism for handling dimension names will be needed. This poster proposes a method to handle dimension names for arrays in HDF5 in a manner commensurate with HDF-EOS5.
Current status of HDF-EOS and access tools will be summarized. Update on HDF-EOS, HDFView plug-in and The HDF-EOS to GeoTIFF (HEG) conversion tool, including recent changes to the software, ongoing maintenance, upcoming releases, future plans, and issues will be discussed.
The document provides an overview of National Polar-orbiting Operational Satellite System (NPOESS) HDF5 files. Key points include:
1) NPOESS is a satellite system that collects environmental data related to weather, atmosphere, oceans, land, and near-space. Data products are distributed in HDF5 format.
2) NPOESS HDF5 files contain raw data records, sensor data records, intermediate products, application related products, and environmental data records.
3) Data is organized into granules and aggregations. Granules contain a segment of data and are referenced by aggregations, which group granules over a temporal range.
An update on HDF, including a status report on the HDF Group, an overview of recent changes to the HDF4 and HDF5 libraries and tools, plans for future releases, HDF Group projects and collaborations, and future plans.
The document discusses the transition of the CFD General Notation System (CGNS) to using HDF5 as its main storage format instead of ADF. CGNS provides a standard for storing computational fluid dynamics (CFD) simulation data. It is switching to HDF5 to take advantage of HDF5's capabilities like parallel I/O and availability in many tools, though initial HDF5 implementations have larger file sizes and slower I/O performance than ADF. The CGNS steering committee is evaluating the HDF5 implementation and investigating performance problems to further improve the transition.
The document provides an overview and status update of the Earth Observing System Data and Information System (EOSDIS). It discusses that EOSDIS supports EOS missions by ingesting, processing, archiving, and distributing their data. It notes that the volume of archived data has grown to over 4.9 petabytes containing over 2700 datasets. It also outlines plans to transition to new systems and complete updates to the EOSDIS code by 2009.
Accessibility and usability of NPP/NPOESS data in HDF5 can be enhanced by providing tools that simplify and standardize how data is accessed and presented. In this project, The HDF Group is creating such tools in the form of software to read and write certain key data types and data aggregates used in NPP/NPOESS data products, and extending HDFView to extract, present and export these data effectively. In particular, the work will focus on NPP/NPOESS use of HDF5 region references and quality flags. The HDF Group will also provide high quality user support for the project.
This is a slide from
HDF AND HDF-EOS WORKSHOP V
February 26 - 28, 2002
Source: http://hdfeos.org/workshops/ws05/presentations/Ullman/11c-Discussion_notes.ppt
This tutorial will introduce the three levels of the HDF-Java products: the HDF-Java wrapper (or Java Native Interfaces to the standard HDF libraries), the HDF-Java object package, and the HDFView. The Java wrapper provides standard Java APIs that allow applications to call the C HDF libraries from Java. The HDF-Java object package implements HDF data objects, e.g. Groups and Datasets, in an object-oriented form and makes it easy for applications to use the libraries. The HDFView is a visual tool for browsing and editing HDF4 and HDF5 files.
In this talk, we will give an update on the HDF5 OPeNDAP project. We will update the new features inside OPeNDAP HDF5 data handler. We will also introduce a new HDF5-Friendly OPeNDAP client library and demonstrate how it can help users to view and analyze remote HDF-EOS5 data served by OPeNDAP HDF5 handler. A demo will be presented with a customized OPeNDAP visualization client (GrADS) that uses the library.
HDF5 is a powerful and feature-rich creature, and getting the most out of it requires powerful tools. The MathWorks provides a "low-level" interface to the HDF5 library that closely corresponds to the C API and exposes much of its richness. This short tutorial will present ways to use the low-level MATLAB interface to build those tools and tackle such topics as subsetting, chunking, and compression.
NetCDF-Java is an open source Java library for reading scientific data formats like NetCDF, HDF5, HDF4, and OPeNDAP. It has been used as a component in many software projects. The library provides an object-oriented API for reading data from these file formats and exposing it to Java programs. It works by providing format readers for specific file types that can read data into the Common Data Model used by the library. The library has been tested against many file examples but could benefit from more systematic testing. Proper use of dimensions, variables, units, and metadata is important for self-documenting scientific data files.
MODIS (Moderate Resolution Imaging Spectroradiometer) sensor data are highly useful for field research. However, the volume of MODIS data and the complexity in data format makes MODIS data less usable for some communities. To expand the use of MODIS data beyond traditional remote sensing specialists, the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC) prepares and distributes subsets of selected Land Products in a scale and format useful for undergraduate students and field researchers. MODIS subsets are provided for more than 1,000 sites across the globe. The subsets are offered in tabular ASCII format and in GIS compatible GeoTIFF format. Time series plots and grid visualizations to help characterize field sites are also provided. In addition to offering subsets for fixed sites, the ORNL DAAC also offers the capability to create user-defined subsets for any location worldwide. The MODIS Global subsetting tool provides subsets from a single pixel up to 201 x 201 km for user-defined time range. Statistics, time series plots and GIS compatible files for the customized subsets are also distributed through this tool. Users can also programmatically retrieve the subsets through a SOAP based Web Service.
This one-year research project, funded by NOAA Climate Program Office (CPO) Scientific Data Stewardship (SDS), provides a solution to migrate data to a single standards-based archive format. Specifically, we investigate on how to store NASA ECS data and metadata into HDF5 Archival Information Packages (AIP). To achieve this, the HDF4 to HDF5 conversion tool has been enhanced so that converted ECS data can be read through the NetCDF4/CDM interface. In addition, metadata tools will be developed that convert ECS collection and granule level metadata to NOAA's collection level and NARA's METS standard. The enhanced HDF4 to HDF5 conversion tool has been released in May 2008 and it includes new functionality as the converted ECS data can be read through the NetCDF4 interface. We have tested 33 typical HDF-EOS2 swath, grid and point products at the National Snow and Ice Data Center (NSIDC). We also demonstrate the initial effort of the work to develop METS compliant metadata from granule metadata held in NASA's Earth Observing System (EOS) Data and Information System (EOSDIS) Core System (ECS).
This tutorial is designed for new HDF5 users. We will go over a brief history of HDF and HDF5 software, and will cover basic HDF5 Data Model objects and their properties; we will give an overview of the HDF5 Libraries and APIs, and discuss the HDF5 programming model. Simple C and Fortran examples, and Java tool HDFView will be used to illustrate HDF5 concepts.
GWAVACon 2013: Vibe Hudson and NetCB Success Story 2GWAVA
The document provides an overview of Novell Vibe, an electronic records and document management (eRDMS) system. It discusses where the implementation is at the National Research Foundation (NRF), recaps why eRDMS is being used, and outlines next steps which include end user training, a phased rollout, and future improvements to the system like enhanced mobile access and productivity features. Contact information is provided for the eRDMS team members available to assist with the implementation.
Cincom provided an update on their Smalltalk product line. Recent releases of Cincom Smalltalk, ObjectStudio, and VisualWorks included improvements to the virtual machine, Store, internationalization, and 64-bit support. Future plans include enhancements to mapping, modeling, encryption, performance, and new features like skins and fluid positioning. Cincom is focusing on maintenance releases, online updates, and gathering customer requirements to further improve their products.
The Deep Learning Application Builder for Developers
Fully functional deep learning application with just a few clicks.
Support for a wide range of deep learning frameworks and platforms.
Provide a fully opened source, not a library
Lalit Kumar Choudhary has over 5 years of experience in embedded software development including experience developing Linux device drivers, applications, and bootloaders. He has worked on projects involving WLAN firmware, power monitoring systems, and webpage development. His technical skills include C programming, Linux, networking protocols, and version control systems.
CloudFest 2018 Hackathon Project Results Presentation - CFHack18Jeffrey J. Hardy
Our third annual hackathon at CloudFest (formerly WHDglobal) was a great success. Developers from all over Europe came to participate - including noted experts from the WordPress and Joomla communities. Six technology projects were completed with two building on last year's success. Topics included IoT, secure FPTD, Domain Connect, WordPress updates, and more. And a special thanks to our sponsors who made it all possible. Cheers!
Recording: https://pan.news/20210422
Abstract: An HCL & panagenda joint webinar! Learn how to streamline your client upgrades.
The v12 release is on the horizon and many companies are looking forward to the great new features and functionality. If you have been waiting to upgrade your Notes clients, then this is a perfect time. Join us for this co-webinar with the experts from HCL to see what treasures await in Domino and Notes 12 and learn how you can automate a smooth upgrade process.
During the webinar, you will get a tour of the new interface and see some of the new features and improvements for the coming version.
Other key topics that will be discussed include:
• Auditing Existing Notes Client Deployments
• Centralized Tracking and Reporting for Client Upgrades
• Scheduling Methods for Upgrade Activities
Speakers: Barry Rosen (HCL), Kim Greene (Kim Greene Consulting), Christoph Adler (panagenda)
The document provides an introduction to the Android operating system. It discusses that Android is an open-source software stack for mobile devices created by the Open Handset Alliance. The architecture of Android includes components like the Linux kernel, middleware, and key applications. Developers can create Android applications using Java and tools provided in the Android SDK.
This document discusses different types of computer software, including system software and application software. System software consists of operating systems and utility programs that control computer operations and interface between hardware, users, and application software. Application software includes productivity programs like word processors and spreadsheets, as well as multimedia, home, and business programs. Productivity software is bundled into integrated packages, suites, and web-based applications for ease of use.
The document discusses new tools and enhancements for analyzing geospatial and scientific data using ENVI and IDL. Key points include:
- The MODIS Conversion Toolkit allows ingesting, processing, and georeferencing MODIS data in ENVI.
- The ENVI Plugin for Ocean Color converts ocean color data sets for use in ENVI. There are plans to merge its functionality into the MODIS toolkit.
- Other tools discussed include ones for Hyperion data, ASTER orthorectification, NOAA AVHRR data, and a prototype for AIRS radiance data.
Thomas Rock has over 20 years of experience developing and administering applications on IBM i (AS/400, iSeries) platforms. He is highly skilled in RPG, ILE, SQL, and DB2 and has extensive experience designing and implementing modular applications, database structures, security practices, and integration projects. His background includes roles in project management, system administration, database administration, and programming for order entry, distribution, POS, and other business systems.
Foundry Management System Desktop Application Dharmendra Sid
Presentation of Industrial Project Final Semester Department of Computer Science, Shivaji University, Kolhapur in the year March-2012.
Designed & Developed at Kadam Software & Services
Mark Cooper is a senior DevOps cloud engineer with over 15 years of experience in cloud development, DevOps, and cloud infrastructure. He has extensive skills in Java, Python, cloud technologies like OpenStack, and DevOps tools like Chef and Ansible. His background includes roles providing automated provisioning for IBM's cloud services, developing microservices for BlueMix, and managing deployments and pipelines. He aims to deliver high-quality solutions through an agile approach and effective communication skills.
The document provides information about an upcoming Neo4j GraphDay event in Italy and new Neo4j product developments. It introduces two Neo4j employees in Italy and discusses Neo4j's native graph database capabilities. It also summarizes Neo4j's cloud and on-premises deployment options, development tools, and integrations with data science and analytics platforms.
This document discusses architectures for enabling business intelligence and analytics on NoSQL data. It begins by outlining common questions around enabling ad-hoc reporting, improving dashboard performance, integrating data, and balancing simple and complex queries. It then reviews several architectures: using only NoSQL for reports, treating NoSQL as a data source, writing programs to access NoSQL in BI tools, and enabling SQL access to NoSQL data. Examples are provided of companies using different architectures, such as only NoSQL, NoSQL with MySQL, or NoSQL via a SQL database.
Amplexor Drupal for the Enterprise seminar - evaluating Drupal for the Enterp...Amplexor
Drupal is an open source content management framework that is flexible, extendable and user-centric. When deploying Drupal in an enterprise environment, key considerations include selecting reliable contributed modules, ensuring security best practices, and having the necessary Drupal expertise for infrastructure, support and product life cycles. Drupal 8 will improve the platform with new configuration management, multilingual and REST capabilities.
Sencha Tooling and Framework brings enterprise-grade development tools to Ext JS including visual application builders, theme designers, and debugging tools to help developers quickly build performant and beautiful applications. The document demonstrates using Sencha Architect to visually build a news application, and highlights new features in Architect 4.1 like support for premium components, grid enhancements, and importing themes from Themer. Sencha's tools help developers improve productivity and adopt Ext JS frameworks easily.
Whether you are an AI, HPC, IoT, Graphics, Networking or Media developer, visit the Intel Developer Zone today to access the latest software products, resources, training, and support. Test-drive the latest Intel hardware and software products on DevCloud, our online development sandbox, and use DevMesh, our online collaboration portal, to meet and work with other innovators and product leaders. Get started by joining the Intel Developer Community @ software.intel.com.
Presentation delivered at LinuxCon China 2017.
Zephyr is an upstream open source project for places where Linux is too big to fit. This talk will overview the progress we've made in the first year towards the projects goals around incorporating best of breed technologies into the code base, and building up the community to support multiple architectures and development environments. We will share our roadmap, plans and the challenges ahead of the us and give an overview of the major technical challenges we want to tackle in 2017.
This document discusses an accessibility framework for Android that aims to make applications more accessible by default. It provides accessible resources, containers, widgets and components that can be easily integrated by developers. The framework includes an accessibility service, accessible behavior extensions for widgets, and new accessible layouts that enhance screen reading and navigation. It also describes a GUI design assistant tool that helps developers visually create accessible user interfaces using the framework's components and generate the necessary XML files. The framework is meant to avoid the expense and fragmentation of individual accessibility implementations by different developers.
This document discusses how to optimize HDF5 files for efficient access in cloud object stores. Key optimizations include using large dataset chunk sizes of 1-4 MiB, consolidating internal file metadata, and minimizing variable-length datatypes. The document recommends creating files with paged aggregation and storing file content information in the user block to enable fast discovery of file contents when stored in object stores.
This document provides an overview of HSDS (Highly Scalable Data Service), which is a REST-based service that allows accessing HDF5 data stored in the cloud. It discusses how HSDS maps HDF5 objects like datasets and groups to individual cloud storage objects to optimize performance. The document also describes how HSDS was used to improve access performance for NASA ICESat-2 HDF5 data on AWS S3 by hyper-chunking datasets into larger chunks spanning multiple original HDF5 chunks. Benchmark results showed that accessing the data through HSDS provided over 2x faster performance than other methods like ROS3 or S3FS that directly access the cloud storage.
This document summarizes the current status and focus of the HDF Group. It discusses that the HDF Group is located in Champaign, IL and is a non-profit organization focused on developing and maintaining HDF software and data formats. It provides an overview of recent HDF5, HDF4 and HDFView releases and notes areas of focus for software quality improvements, increased transparency, strengthening the community, and modernizing HDF products. It invites support and participation in upcoming user group meetings.
This document provides an overview of HSDS (HDF Server and Data Service), which allows HDF5 files to be stored and accessed from the cloud. Key points include:
- HSDS maps HDF5 objects like datasets and groups to individual cloud storage objects for scalability and parallelism.
- Features include streaming support, fancy indexing for complex queries, and caching for improved performance.
- HSDS can be deployed on Docker, Kubernetes, or AWS Lambda depending on needs.
- Case studies show HSDS is used by organizations like NREL and NSF to make petabytes of scientific data publicly accessible in the cloud.
This document discusses creating cloud-optimized HDF5 files by rearranging internal structures for more efficient data access in cloud object stores. It describes cloud-native and cloud-optimized storage formats, with the latter involving storing the entire HDF5 file as a single object. The benefits of cloud-optimized HDF5 include fast scanning and using the HDF5 library. Key aspects covered include using optimal chunk sizes, compression, and minimizing variable-length datatypes.
This document discusses updates and performance improvements to the HDF5 OPeNDAP data handler. It provides a history of the handler since 2001 and describes recent updates including supporting DAP4, new data types, and NetCDF data models. A performance study showed that passing compressed HDF5 data through the handler without decompressing/recompressing led to speedups of around 17-30x by leveraging HDF5 direct I/O APIs. This allows outputting HDF5 files as NetCDF files much faster through the handler.
This document provides instructions for using the Hyrax software to serve scientific data files stored on Amazon S3 using the OPeNDAP data access protocol. It describes how to generate ancillary metadata files called DMR++ files using the get_dmrpp tool that provide information about the data file structure and locations. The document explains how to run get_dmrpp inside a Docker container to process data files on S3 and generate customized DMR++ files that the Hyrax server can use to serve the files to clients.
This document provides an overview and examples of accessing cloud data and services using the Earthdata Login (EDL), Pydap, and MATLAB. It discusses some common problems users encounter, such as being unable to access HDF5 data on AWS S3 using MATLAB or read data from OPeNDAP servers using Pydap. Solutions presented include using EDL to get temporary AWS tokens for S3 access in MATLAB and providing code examples on the HDFEOS website to help users access S3 data and OPeNDAP services. The document also notes some limitations, such as tokens being valid for only 1 hour, and workarounds like requesting new tokens or using the MATLAB HDF5 API instead of the netCDF API.
The HDF5 Roadmap and New Features document outlines upcoming changes and improvements to the HDF5 library. Key points include:
- HDF5 1.13.x releases will include new features like selection I/O, the Onion VFD for versioned files, improved VFD SWMR for single-writer multiple-reader access, and subfiling for parallel I/O.
- The Virtual Object Layer allows customizing HDF5 object storage and introduces terminal and pass-through connectors.
- The Onion VFD stores versions of HDF5 files in a separate onion file for versioned access.
- VFD SWMR improves on legacy SWMR by implementing single-writer multiple-reader capabilities
This document discusses user analysis of the HDFEOS.org website and plans for future improvements. It finds that the majority of the site's 100 daily users are "quiet", not posting on forums or other interactive elements. The main user types are locators, who search for examples or data; mergers, who combine or mosaic datasets; and converters, who change file formats. The document outlines recent updates focused on these user types, like adding Python examples for subsetting and calculating latitude and longitude. It proposes future work on artificial intelligence/machine learning uses of HDF files and examples for processing HDF data in the cloud.
This document summarizes a presentation about the current status and future directions of the Hierarchical Data Format (HDF) software. It provides updates on recent HDF5 releases, development efforts including new compression methods and ways to access HDF5 data, and outreach resources. It concludes by inviting the audience to share wishes for future HDF development.
The document describes H5Coro, a new C++ library for reading HDF5 files from cloud storage. H5Coro was created to optimize HDF5 reading for cloud environments by minimizing I/O operations through caching and efficient HTTP requests. Performance tests showed H5Coro was 77-132x faster than the previous HDF5 library at reading HDF5 data from Amazon S3 for NASA's SlideRule project. H5Coro supports common HDF5 elements but does not support writing or some complex HDF5 data types and messages to focus on optimized read-only performance for time series data stored sequentially in memory.
This document summarizes MathWorks' work to modernize MATLAB's support for HDF5. Key points include:
1) MATLAB now supports HDF5 1.10.7 features like single-writer/multiple-reader access and virtual datasets through new and updated low-level functions.
2) Performance benchmarks show some improvements but also regressions compared to the previous HDF5 version, and work continues to optimize code and support future versions.
3) There are compatibility considerations for Linux filter plugins, but interim solutions are provided until MathWorks can ship a single HDF5 version.
HSDS provides HDF as a service through a REST API that can scale across nodes. New releases will enable serverless operation using AWS Lambda or direct client access without a server. This allows HDF data to be accessed remotely without managing servers. HSDS stores each HDF object separately, making it compatible with cloud object storage. Performance on AWS Lambda is slower than a dedicated server but has no management overhead. Direct client access has better performance but limits collaboration between clients.
HDF5 and Zarr are data formats that can be used to store and access scientific data. This presentation discusses approaches to translating between the two formats. It describes how HDF5 files were translated to the Zarr format by creating a separate Zarr store to hold HDF5 file chunks, and storing chunk location metadata. It also discusses an implementation that translates Zarr data to the HDF5 format by using a special chunking layout and storing chunk information in an HDF5 compound dataset. Limitations of the translations include lack of support for some HDF5 dataset properties in Zarr, and lack of support for some Zarr compression methods in the HDF5 implementation.
The document discusses HDF for the cloud, including new features of the HDF Server and what's next. Key points:
- HDF Server uses a "sharded schema" that maps HDF5 objects to individual storage objects, allowing parallel access and updates without transferring entire files.
- Implementations include HSDS software that uses the sharded schema with an API and SDKs for different languages like h5pyd for Python.
- New features of HSDS 0.6 include support for POSIX, Azure, AWS Lambda, and role-based access control.
- Future work includes direct access to storage without a server intermediary for some use cases.
This document compares different methods for accessing HDF and netCDF files stored on Amazon S3, including Apache Drill, THREDDS Data Server (TDS), and HDF5 Virtual File Driver (VFD). A benchmark test of accessing a 24GB HDF5/netCDF-4 file on S3 from Amazon EC2 found that TDS performed the best, responding within 2 minutes, while Apache Drill failed after 7 minutes. The document concludes that TDS 5.0 is the clear winner based on performance and support for role-based access control and HDF4 files, but the best solution depends on use case and software.
This document discusses STARE-PODS, a proposal to NASA/ACCESS-19 to develop a scalable data store for earth science data using the SpatioTemporal Adaptive Resolution Encoding (STARE) indexing scheme. STARE allows diverse earth science data to be unified and indexed, enabling the data to be partitioned and stored in a Parallel Optimized Data Store (PODS) for efficient analysis. The HDF Virtual Object Layer and Virtual Data Set technologies can then provide interfaces to access the data in STARE-PODS in a familiar way. The goal is for STARE-PODS to organize diverse data for alignment and parallel/distributed storage and processing to enable integrative analysis at scale.
This document provides an overview and update on HDF5 and its ecosystem. Key points include:
- HDF5 1.12.0 was recently released with new features like the Virtual Object Layer and external references.
- The HDF5 library now supports accessing data in the cloud using connectors like S3 VFD and REST VOL without needing to modify applications.
- Projects like HDFql and H5CPP provide additional interfaces for querying and working with HDF5 files from languages like SQL, C++, and Python.
- The HDF5 community is moving development to GitHub and improving documentation resources on the HDF wiki site.
This document summarizes new features in HDF5 1.12.0, including support for storing references to objects and attributes across files, new storage backends using a virtual object layer (VOL), and virtual file drivers (VFDs) for Amazon S3 and HDFS. It outlines the HDF5 roadmap for 2019-2022, which includes continued support for HDF5 1.8 and 1.10, and new features in future 1.12.x releases like querying, indexing, and provenance tracking.
More from The HDF-EOS Tools and Information Center (20)
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
2. Toolkits & Enhancements For ENVI/IDL
•
•
•
•
MODIS Conversion Toolkit
ENVI Plugin for Ocean Color
Hyperion Tools
ASTER Level 2 RPC
Orthorectification
• NOAA CoastWatch AVHRR
• AIRS Level 1B Radiance
(prototype)
Devin White – Senior
Programming Consultant, PhD
dwhite@ittvis.com
• H5_Browser GUI to open and
browse HDF5 files
• H5_Parse automatically reads
entire file into an IDL structure
2
3. The MODIS Conversion Toolkit
• The MODIS Conversion Toolkit (MCTK) is a plugin for ENVI
that can ingest, process, and georeference every known
MODIS product (currently 143) through your choice of an
easy-to-use interactive widget interface or a fully-accessible
programmatic interface. Supported products include:
- Level 1A Uncalibrated Radiance
- Level 1B Calibrated Radiance
- Level 2 Swath
- Level 2G, Level 3, and Level 4 Grid
• Works with MODIS data from the Land,
Cryosphere, Atmosphere,
and Calibration groups.
3
5. ENVI 4.6 Themes
This release expands the value of ENVI in two primary areas:
• Automated Workflows:
– Even more workflows to support a variety of tasks for all
–
of their panchromatic, MSI and HSI data needs.
Customers of all ability levels and backgrounds will find
that the new workflows provide an intuitive way to
perform rigorous image analysis.
• Feature Extraction:
– Enhanced results by adding additional datasets and
–
ground truth data.
Prepare results for sharing by adding pertinent
annotations to the extracted features of interest.
• Release scheduled for January 12, 2009
Enhanced Results • Faster Workflows • Share Results
5
6. IDL Product Goals
Key problems for today’s IDL user
– Cannot read even the most common formats of data into IDL
without using commands or programming
– Cannot interact with visualizations and also control them
using simple procedural commands or programs
• Improve overall usability via Workbench integration
– Make it easier to read data & interact with graphics
– Provide easy-to-use modern, widget controls & behavior
– Modernize the user experience
• Enhance core functionality
6
7. IDL 7.1 Feature Highlights
• IDL Workbench Visualization Manager
– Open common file formats from File->Open
– Drag & drop to create visualizations
– Extensible – user-defined categories and action bars
• iTools Enhancements
– Procedural API with simplified ID strings
– Generate code to recreate visualizations
– Better margins and zooming
• Internationalization phase 2
– Fix File I/O issues
– Multibyte text for object graphics
– Support Unicode input for widgets
• IDL Workbench
– Plug-in Update Site Wizard
– Improvements & bug fixes
7
9. IDL 7.1 Feature Highlights (cont.)
• Data Access
–
–
–
–
Dataminer upgrade to latest version - MySQL
DICOMex for 64-bit Windows
AVI Read/Write Plugin – 3rd party Ronn Kling
CSV reader
• Image Processing – new algorithms TBD
• Integration
– IDL Python Bridge – Jacquette Consulting
• Cross Platform Support
–
–
–
–
Windows Command Line
Solaris 10 x86 64-bit, IDL command-line only
OMB FDCC non-admin mandate for Windows XP & Vista
Mac OS X 64-bit Intel (IDL 7.0.4)
• Scientific Data Formats
– CDF-3.2, HDF4-r2.3, HDF5-1.6.7 (IDL 7.0.3)
9