Science platforms are made up of (at least) four planks: data formats, services, tools and conventions. I focus here on formats and conventions, specifically the HDF5 format, already used in many disciplines, and the Climate-Forecast and HDF-EOS Conventions. Many science disciplines have already agreed on HDF as the preferred format for storing and sharing data. It is well established in high performance computing and supports arbitrary grouping and annotation. Community conventions are critical for useful data on top of the format. The Climate-Forecast (CF) conventions were created for relatively simple gridded data types while the HDF-EOS conventions originally considered more complex data (swaths). Making simple conventions more complex makes adoption more difficult. Community input and the need for stable data processing systems must be balanced in governance of conventions.
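To make the format-plus-convention idea concrete, here is a minimal sketch of writing a CF-style annotated variable into an HDF5 file. It uses h5py rather than the C API, and the file name, variable name, and attribute values are illustrative only, not taken from any particular product:

```python
# Sketch: an HDF5 file whose usefulness comes from convention attributes.
# All names and values here are illustrative.
import h5py
import numpy as np

def write_cf_example(path):
    """Create an HDF5 file with one gridded variable carrying
    CF-style metadata (Conventions, units, standard_name, fill value)."""
    with h5py.File(path, "w") as f:
        f.attrs["Conventions"] = "CF-1.6"          # file-level convention marker
        temp = f.create_dataset("air_temperature",
                                data=np.full((2, 3), 288.15),
                                fillvalue=-9999.0)  # sentinel for missing cells
        temp.attrs["units"] = "K"                   # CF requires physical units
        temp.attrs["standard_name"] = "air_temperature"

def read_units(path):
    """Read the units attribute back (bytes vs. str varies by h5py version)."""
    with h5py.File(path, "r") as f:
        u = f["air_temperature"].attrs["units"]
        return u.decode() if isinstance(u, bytes) else u

write_cf_example("cf_demo.h5")
```

The format itself imposes none of this; it is the agreed attribute names (`units`, `standard_name`) that let generic tools interpret the data.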
The Earth System Grid Federation (ESGF) is a large international collaboration that operates a global infrastructure for management and access of Earth System data. Some of the most valuable data collections served by ESGF include the output of global climate models used for the IPCC reports on climate change (CMIP3, CMIP5 and the upcoming CMIP6), regional climate model output (CORDEX), and observational data from several American and European agencies (Obs4MIPs). This talk will present a brief introduction to ESGF, describe the data access and analysis methods currently available or planned for the future, and conclude with some ideas on how this infrastructure could be used as a testbed for executing distributed analytics on a global scale.
While much of the recent literature in spatial statistics has revolved around addressing the big data issue, practical implementations of these methods on high performance computing systems for truly large data are still rare. We discuss our explorations in this area at the National Center for Atmospheric Research for a range of applications that can benefit from large scale computing infrastructure. These applications include extreme value analysis, approximate spatial methods, spatial localization methods and statistically based data compression, and are implemented in different programming languages. We will focus on timing results and practical considerations, such as speed vs. memory trade-offs, limits of scaling and ease of use.
The HDF Group provides NCL/IDL/MATLAB example code and plots for many NASA HDF-EOS2 and HDF4 products. The examples and plots can be found at http://hdfeos.org/zoo. This slide addresses some common issues in using these tools to visualize NASA HDF-EOS2 and HDF4 products.
The HDF Group is in the process of updating the HDF-EOS web site. During the workshop, we would like to share with the audience some useful information on the new website that can help users gain easy access to NASA HDF and HDF-EOS data.
The presentation includes three parts:
EOS User Forum: introduces the EOS User Forum and how users can benefit from it.
Tools: presents information on how to use several widely used tools to access NASA HDF and HDF-EOS data.
Examples: presents several examples of how to use C, Fortran and IDL to access NASA HDF and HDF-EOS data.
This is an introductory slide on accessing NASA HDF/HDF-EOS data for beginners. NASA distributes much of its Earth Science data in the HDF/HDF-EOS file formats, and new users often struggle to understand the format and use NASA HDF/HDF-EOS data properly. This brief presentation will help new users understand the basic concepts of HDF/HDF-EOS and learn about the available tools that make the NASA data easy to access.
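The first step most new users take with an unfamiliar granule is simply listing what is inside it. A hedged sketch in Python with h5py (the document's own examples use C, Fortran and IDL, so this is an assumed equivalent); the file written here is a stand-in, since real granule names and group layouts vary by product:

```python
# Sketch: discover the contents of an HDF5 file by walking its hierarchy.
# The granule written here is a made-up stand-in for a real NASA product.
import h5py
import numpy as np

with h5py.File("granule_demo.h5", "w") as f:     # stand-in for a real granule
    f.create_dataset("Geolocation/Latitude", data=np.zeros(3))
    f.create_dataset("Data Fields/Radiance", data=np.ones(3))

names = []
with h5py.File("granule_demo.h5", "r") as f:
    f.visit(names.append)                         # collects every group/dataset path
```

The same inventory can be done from the command line with `h5dump -n`, which is often the quickest way to orient yourself before writing any code.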
An update on HDF, including a status report on The HDF Group, an overview of recent changes to the HDF4 and HDF5 libraries and tools, plans for future releases, HDF Group projects and collaborations, and future plans.
The Earth System Grid Federation: Origins, Current State, Evolution (Ian Foster)
I describe the origins, current state and potential future directions for the Earth System Grid Federation, an international consortium that develops infrastructure for sharing of climate simulation and related datasets.
Deep Learning on Apache Spark at CERN’s Large Hadron Collider with Intel Tech... (Databricks)
In this session, you will learn how CERN easily applied end-to-end deep learning and analytics pipelines on Apache Spark at scale for High Energy Physics using BigDL and Analytics Zoo open source software running on Intel Xeon-based distributed clusters.
Technical details and development learnings will be shared using an example of topology classification to improve real-time event selection at the Large Hadron Collider experiments. The classifier has demonstrated very good performance figures for efficiency, while also reducing the false positive rate compared to the existing methods. It could be used as a filter to improve the online event selection infrastructure of the LHC experiments, where one could benefit from a more flexible and inclusive selection strategy while reducing the amount of downstream resources wasted in processing false positives.
This is part of CERN’s research on applying deep learning and analytics using open source and industry-standard technologies as an alternative to the existing customized rule-based methods. We show how we could quickly build and implement distributed deep learning solutions and data pipelines at scale on Apache Spark using Analytics Zoo and BigDL, open source frameworks that unify analytics and AI on Spark with easy-to-use APIs and development interfaces seamlessly integrated with big data platforms.
This tutorial is designed for new HDF5 users. We will cover HDF5 abstractions such as datasets, groups, attributes, and datatypes. Simple C examples will cover the programming model and basic features of the API, and will give new users the knowledge they need to navigate through the rich collection of HDF5 interfaces. Participants will be guided through an interactive demonstration of the fundamentals of HDF5.
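The four abstractions the tutorial covers can be sketched in a few lines. The tutorial itself uses C, so this h5py version is an assumed equivalent with illustrative names, not the tutorial's actual example:

```python
# Sketch of the core HDF5 abstractions: groups nest like directories,
# a dataset's datatype and shape are fixed at creation, and attributes
# hold small named metadata. Names here are illustrative only.
import h5py
import numpy as np

with h5py.File("tutorial_demo.h5", "w") as f:
    grp = f.create_group("experiment/run1")       # group hierarchy, made in one call
    dset = grp.create_dataset("counts",
                              shape=(4,),
                              dtype=np.int32)     # datatype chosen at creation
    dset[:] = [1, 2, 3, 4]                        # write via array-style slicing
    dset.attrs["detector"] = "A"                  # attribute attached to the dataset

with h5py.File("tutorial_demo.h5", "r") as f:
    total = int(f["experiment/run1/counts"][:].sum())
```

The C API mirrors this structure one call at a time (H5Fcreate, H5Gcreate, H5Dcreate, H5Acreate), which is why learning the object model first makes the function catalog much easier to navigate.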
One of the guiding concepts of the Reference Model for an Open Archival Information System, commonly referred to as the OAIS Reference Model, is the concept of an Archive Information Package (AIP) containing not just the data to be preserved for future access, but also the reference information needed to ensure that the data is understandable by its target audience, and the preservation description information, which contains the lineage of the data and ensures that an accurate, unaltered copy is retrieved at any point in the future. While creating AIPs is simple in principle, it is not necessarily obvious that it will be as simple in practice. In this talk, the results of an experiment to develop AIPs for data in NASA's Earth Observing System (EOS) Data and Information System (EOSDIS) are reported.
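The AIP components named above can be pictured as a nested record. This is a hypothetical sketch: the field names follow the OAIS component names, but every value (the DOI, checksum, file name) is invented for illustration:

```python
# Hypothetical sketch of an OAIS Archive Information Package layout.
# Field names follow OAIS terminology; all values are invented examples.
import json

def make_aip(content, reference, provenance, fixity, context):
    """Bundle the data object with its Preservation Description
    Information, as the OAIS Reference Model prescribes."""
    return {
        "content_information": content,           # the data + its format docs
        "preservation_description_information": {
            "reference": reference,               # persistent identifiers
            "provenance": provenance,             # lineage of the data
            "fixity": fixity,                     # checksum proving an unaltered copy
            "context": context,                   # relation to other holdings
        },
    }

aip = make_aip(
    content={"payload": "granule.hdf", "representation": "HDF-EOS2 format spec"},
    reference={"doi": "10.0000/example"},                 # invented DOI
    provenance=["ingested from EOSDIS", "reprocessed v2"],
    fixity={"md5": "d41d8cd98f00b204e9800998ecf8427e"},   # placeholder checksum
    context={"collection": "example EOS collection"},
)
serialized = json.dumps(aip)
```

The practical difficulty the talk alludes to is not this structure but populating it: gathering complete representation information and lineage for existing holdings after the fact.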
This slide will demonstrate how to use OPeNDAP Java clients such as IDV and Panoply, via the HDF OPeNDAP data handlers, to access various NASA HDF products such as AIRS, OMI, MLS, MODIS, TRMM, CERES and SeaWiFS. Various features of these tools that help users easily access the HDF data will also be explored.
3. Changes in The HDF Group
• New Staff
  • Earth Science Program Director (Habermann)
  • Earth Science Project Manager (Plutchak)
  • Project Management Office Coordinator
  • Quality Engineer
7/9/2013 ESIP Summer 2013
4. Earth Science Program
[Organization chart: Director Ted Habermann; Project Manager Joel Plutchak; Earth Science Team members Larry Knox, Joe Lee, Elena Pourmal, Kent Yang, Albert Cheng. Work areas: ESDIS HDF; JPSS HDF (maintenance, QA, IDPS support); JPSS tools; tools and applications; high-level libraries; studies and analyses; operations support; NASA metadata; outreach.]
5. Mailing lists and archives
• news@lists.hdfgroup.org (archive: http://hdfgroup.org/news/)
• hdf-forum@lists.hdfgroup.org (archive: http://mail.hdfgroup.org/pipermail/hdfforum_hdfgroup.org/)
• New mailing list for NASA DAACs: hdf-nasa-daac@lists.hdfgroup.org
7. Maintenance Releases 2012–2013
[Month-by-month release timeline:]
• 2012: HDF4 4.2.7, 4.2.8; HDF5 1.8.9, 1.8.10; HDFJava 2.9; h4h5 tools 2.2.1
• 2013: HDF4 4.2.9; HDF5 1.8.11, 1.8.12; HDFJava 2.10; h4CF 1.0 beta
8. HDF4 maintenance releases
HDF 4.2.9 (February 2013)
• Support for Mac 10.8 with Intel and Clang compilers
• Support for Cygwin version 1.7.7 and higher
9. HDF5 maintenance releases
HDF5 1.8.10 (Nov 2012) and patch1 (Jan 2013)
• Interoperability between h5dump and h5import
• Performance improvements in h5diff for files with many attributes
• Support for I/O bigger than 2GB on Mac OS X
10. HDF5 maintenance releases
Future releases
• Request to support wide character filenames (MathWorks)
• Request to support UTF-32 encoding (H5Py)
• Request to support parallel compression
11. New OSs and Compilers
HDF software is now supported on
• SunOS 5.11 (Sparc) with Studio 12 compilers
• CentOS 6 with GCC and Intel compilers
• Mac OS X 10.8.* with Clang and Fortran, Java 1.7
• Cygwin 1.7.7
• Windows 7 with VS 12 and Intel 13
• Windows 8 with VS 12 and Intel 13
12. Java maintenance releases
2.9 release (December 2012)
• Show groups/attributes in creation order
• Export data to a binary/ASCII file without having to open the object in the TableView
• Reload feature to close/open file
• Improvements for installation
13. Java maintenance releases
2.10 release (December 2013)
• 0- or 1-based indexing when displaying arrays
• Displaying long names of files (“…” in names)
• Ability to modify HDF4 compressed datasets
• Support netCDF-4 files with VL attributes
15. HDF and netCDF interoperability tools
• HDF4/HDF-EOS2 to CF conversion toolkit - June
• HDF-EOS5 augmentation tool (maint) - Dec 2013
• HDF-EOS2 dumper tool (maint) - every other year
• HDF-EOS5 to netCDF-4 conversion tool (retired)
• HDF4 & HDF5 Handlers - May, to synchronize w/ Hyrax release
16. HDF Visualization tool assessment
• To evaluate the HDF Group’s data viewing tools and user needs, and to explore, recommend, and prioritize improvements.
18. Prototype Studies
• Apache Open Source Incubator Pilot Project
• Digital Object Identifier (DOI) support in HDF5
19. HPC R&D
• HDF5 Virtual Object Layer
  • Allows apps to store and access HDF5 objects in arbitrary storage methods and formats
  • Allows HDF5 apps to migrate to future storage systems with no source code modifications
• HDF5: Asynchronous I/O
  • Application doesn’t wait for I/O
• Fault Tolerance
  • Prevent crash from corrupting HDF5 file
• End-to-End Data Integrity
  • Verify integrity of data from birth to death of file
• I/O Autotuning
  • Runtime framework that dynamically determines optimal application I/O strategy
20. Parallel I/O and Analysis of a Trillion Particle VPIC Simulation
Problem: Support I/O and analysis needs for a state-of-the-art plasma physics code
Novel Accomplishments:
• Ran trillion-particle VPIC simulation on 120,000 hopper cores and generated a 350 TB dataset
• Parallel HDF5 obtained a peak 35 GB/s I/O rate and 80% sustained bandwidth
• Developed hybrid parallel FastQuery using FastBit to utilize multicore hardware
• FastQuery took 10 minutes to index and 3 seconds to query energetic particles
• SC12 paper, XLDB 2012 poster
CS Impact:
• Demonstrated software scalability for writing and analyzing ~40TB HDF5 files
• Enabled novel discoveries in plasma physics (next slide)
[Figures: I/O bandwidth utilization for parallel writes with HDF5 on 120,000 cores; a comparison of indexing and query times for hybrid and MPI-FastQuery.]
21. Science Impact: Multiple Scientific Discoveries in Plasma Physics
• Preferential acceleration along magnetic field
• Discovered power-law in energy spectrum
• Energetic particles are correlated with flux ropes
• Discovered agyrotropy near the reconnection hot-spot
22. Other projects of interest
• ITER - international fusion research project
  • Architecture for HDF5 for ITER data life cycle
• Particle accelerators and instrument vendors
• Faster I/O for compressed data
  • Let apps send pre-compressed chunks directly to file
• Dynamic filter loading in HDF5
  • Let apps read data compressed with a non-standard filter
• SWMR (Single Writer/Multiple Readers)
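Both chunk-oriented items above (pre-compressed chunk writes, loadable filters) build on HDF5's chunked storage layout, where each chunk passes through the filter pipeline independently. A minimal h5py sketch of a chunked, gzip-compressed dataset, with illustrative file and dataset names:

```python
# Sketch: chunked storage with a standard compression filter. Each 8x8
# tile is stored and compressed as an independent chunk, which is what
# makes per-chunk features (direct writes, custom filters) possible.
# Names are illustrative only.
import h5py
import numpy as np

data = np.arange(1024, dtype=np.float64).reshape(32, 32)

with h5py.File("chunk_demo.h5", "w") as f:
    f.create_dataset("grid", data=data,
                     chunks=(8, 8),          # 8x8 tiles, each one chunk
                     compression="gzip",     # built-in deflate filter
                     compression_opts=4)     # moderate compression level

with h5py.File("chunk_demo.h5", "r") as f:
    roundtrip = f["grid"][:]                 # decompressed transparently on read
```

A non-standard filter would slot into the same pipeline; dynamic filter loading is what lets a reader decode such data without recompiling the library.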
23. Other projects of interest
• Digital Twin
  • “Digital Twin integrates ultra-high fidelity simulation with the vehicle’s on-board integrated vehicle health management system, maintenance history and all available historical and fleet data to mirror the life of its flying twin and enable unprecedented levels of safety and reliability.”
HDF5 1.8.7 – 1.8.9: Fortran 2003 support, support for Fortran dimension scales. HDF4 releases in support of the H4 mapping project. Support for the PowerPC64 platform (big-endian). Java: addressed all ESDIS requests; based on the latest available HDF4 and HDF5. h4h5tools: updated to 1.8 APIs; no 1.8 features were added.
HDFView is more than 10 years old. Since it was first implemented, new technologies and techniques have emerged that could help improve HDFView. We surveyed HDFView users last year, and a lot of good ideas came out of that. We will not just look at Java, but at other alternatives such as Qt. This is an internally funded project led by Cao, Heber, Readey (Amazon). This group will:
• Review our vision for vis tools and how they are aligned with our mission.
• Review company goals as regards support for vis tools.
• Identify needs and opportunities based on current and potential customers and their needs and desires.
• Review technologies and tools currently available that can help us develop new tools if needed, how the new tools compare with current HDF tools, and what they might offer in terms of improvements.
• Develop a set of guiding principles for going forward.
• Recommend activities, perhaps leading to a roadmap to long-term goals for the visualization tool(s).
The slide highlights recent accomplishments from the ExaHDF5 project, funded by a DOE/ASCR Exascale Scientific Data Management award.
1) Parallel I/O with HDF5
• We ran a trillion-particle simulation on 120K cores on hopper. The code produced 30 TB of particle data per timestep, and produced over 350 TB of data in total.
• To the best of our knowledge, this is the first time that anyone has demonstrated writes to a single, shared 30 TB HDF5 file.
• We hit peak I/O rates on hopper (~35 GB/s) during the run and sustained an average ~23 GB/s, which is a new record for parallel HDF5 performance.
2) FastBit-based analysis
• We developed a novel hybrid parallel version of FastBit to do the indexing/querying on the dataset.
• This was the first time that we used FastBit and FastQuery to index and query a dataset with a trillion entries.
• We were able to index the dataset in 10 minutes and query the dataset in 3 seconds.
DOE researchers: Prabhat (PI), Suren Byna, Oliver Rubel and John Wu (LBNL). Scientific collaborators: Homa Karimabadi (UCSD), Vadim Roytershteyn (UCSD) and Bill Daughton (LANL). The simulation code used in the study is VPIC, developed at LANL. Please address any questions to Prabhat (prabhat@lbl.gov).
3) Scientific insights
• This is the first time that our science collaborators have been able to examine the trillion-particle dataset. They had largely ignored the particle data, or looked at a coarse-grained version earlier.
• Our collaborators discovered a power-law distribution in the energy spectrum of the particles. This is the first kinetic plasma physics simulation to demonstrate a power-law distribution; our analysis capabilities directly facilitated this discovery.
• Our collaborators had made a number of conjectures and hypotheses regarding the interplay between particles and the magnetic fields, and the multi-dimensional phase-space distribution of particles. Using these new tools, they were able to confirm these hypotheses quantitatively. More specifically, the scientists found:
  • a preferential acceleration of particles in a direction parallel to the magnetic field;
  • a predominant distribution of energetic particles in the current sheet, suggesting that flux ropes can confine these particles;
  • an agyrotropic (asymmetric) distribution of particles near the magnetic reconnection event.