5th Multicore World
15-17 February 2016 – Shed 6, Wellington, New Zealand
http://openparallel.com/multicore-world-2016/
We start by dividing applications into data plus model components and classifying each component (whether from Big Data or Big Simulations) in the same way. This leads to 64 properties divided into 4 views, which are Problem Architecture (Macro pattern); Execution Features (Micro patterns); Data Source and Style; and finally the Processing (runtime) View.
We discuss convergence software built around HPC-ABDS (High Performance Computing enhanced Apache Big Data Stack) http://hpc-abds.org/kaleidoscope/ and show how one can merge Big Data and HPC (Big Simulation) concepts into a single stack.
We give examples of data analytics running on HPC systems including details on persuading Java to run fast.
Some details can be found at http://dsc.soic.indiana.edu/publications/HPCBigDataConvergence.pdf
Next Generation Grid: Integrating Parallel and Distributed Computing Runtimes... – Geoffrey Fox
“Next Generation Grid – HPC Cloud” proposes a toolkit capturing the current capabilities of Apache Hadoop, Spark, Flink and Heron, as well as MPI and Asynchronous Many-Task systems from HPC. This supports a Cloud-HPC-Edge (Fog, Device) Function-as-a-Service architecture. Note this "new grid" is focused on data and IoT, not computing. It uses interoperable common abstractions but multiple polymorphic implementations, as sketched below.
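A minimal Java sketch of that last principle follows; the interface and class names are invented for illustration, and a real toolkit would provide MPI-, Spark-, or Flink-backed classes implementing the same contract without changing application code.

```java
import java.util.List;
import java.util.function.BinaryOperator;

// One common abstraction: a collective "allReduce" that application code programs against.
interface Collective<T> {
    T allReduce(List<T> perWorkerValues, BinaryOperator<T> combine);
}

// One of many possible polymorphic implementations; an MPI- or dataflow-backed
// version would implement the same interface.
final class SerialCollective<T> implements Collective<T> {
    @Override
    public T allReduce(List<T> values, BinaryOperator<T> combine) {
        return values.stream().reduce(combine)
                     .orElseThrow(() -> new IllegalArgumentException("no values"));
    }
}

class CollectiveDemo {
    public static void main(String[] args) {
        Collective<Integer> c = new SerialCollective<>();
        System.out.println(c.allReduce(List.of(1, 2, 3, 4), Integer::sum)); // prints 10
    }
}
```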
High Performance Processing of Streaming Data – Geoffrey Fox
Describes two parallel robot planning algorithms implemented with Apache Storm on OpenStack: SLAM (Simultaneous Localization and Mapping) and collision avoidance. Performance (response time) is studied and improved as an example of the HPC-ABDS (High Performance Computing enhanced Apache Big Data Software Stack) concept.
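To make the pattern concrete, below is a minimal Apache Storm topology sketch in Java (assuming the Storm 1.x API). It is not the talk's SLAM code: the spout, bolt, and field names are invented stand-ins showing how a scan stream can be broadcast to parallel bolt tasks, the structure such parallel planning algorithms exploit.

```java
import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class ScanTopology {
    // Hypothetical spout standing in for a robot's laser-scan feed.
    public static class ScanSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        public void open(Map conf, TopologyContext ctx, SpoutOutputCollector c) { collector = c; }
        public void nextTuple() {
            double[] scan = new double[180]; // fake 180-beam range scan
            collector.emit(new Values(scan, System.currentTimeMillis()));
        }
        public void declareOutputFields(OutputFieldsDeclarer d) { d.declare(new Fields("scan", "ts")); }
    }

    // Each bolt task processes its own share of the work (e.g., a subset of SLAM particles).
    public static class ParticleBolt extends BaseBasicBolt {
        public void execute(Tuple in, BasicOutputCollector out) {
            double[] scan = (double[]) in.getValue(0);
            // ... update this task's particles against the scan here ...
            out.emit(new Values(in.getLongByField("ts"), scan.length));
        }
        public void declareOutputFields(OutputFieldsDeclarer d) { d.declare(new Fields("ts", "beams")); }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder b = new TopologyBuilder();
        b.setSpout("scans", new ScanSpout(), 1);
        b.setBolt("particles", new ParticleBolt(), 8).allGrouping("scans"); // broadcast to 8 tasks
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("slam-sketch", new Config(), b.createTopology());
        Thread.sleep(10_000);
        cluster.shutdown();
    }
}
```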
Visualizing and Clustering Life Science Applications in Parallel – Geoffrey Fox
HiCOMB 2015, 14th IEEE International Workshop on High Performance Computational Biology at IPDPS 2015, Hyderabad, India. This talk covers parallel data analytics for bioinformatics. The messages are:
Always run MDS: it gives insight into the data and into the performance of machine learning.
It leads to a data browser, as GIS does for spatial data.
3D is better than 2D.
~20D is better than MSA?
Clustering observations:
Do you care about quality, or are you just cutting up space into parts?
Deterministic clustering is always more robust.
Continuous clustering enables hierarchy.
Trimmed clustering cuts off tails.
There are distinct O(N) and O(N²) algorithms.
Use conjugate gradient (a minimal solver sketch follows).
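The conjugate gradient advice can be illustrated with a minimal dense solver for a symmetric positive-definite system Ax = b. This is a generic textbook sketch in Java, not code from the talk; the small test system is invented.

```java
// Minimal conjugate gradient for SPD Ax = b; the kind of inner solve used in MDS kernels.
public final class ConjugateGradient {
    static double[] solve(double[][] a, double[] b, int maxIter, double tol) {
        int n = b.length;
        double[] x = new double[n];
        double[] r = b.clone();      // residual r = b - A*x with x = 0
        double[] p = r.clone();      // initial search direction
        double rsOld = dot(r, r);
        for (int it = 0; it < maxIter && Math.sqrt(rsOld) > tol; it++) {
            double[] ap = multiply(a, p);
            double alpha = rsOld / dot(p, ap);
            for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * ap[i]; }
            double rsNew = dot(r, r);
            double beta = rsNew / rsOld;                 // Fletcher-Reeves update
            for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i];
            rsOld = rsNew;
        }
        return x;
    }
    static double dot(double[] u, double[] v) {
        double s = 0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }
    static double[] multiply(double[][] a, double[] v) {
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) out[i] = dot(a[i], v);
        return out;
    }
    public static void main(String[] args) {
        double[][] a = {{4, 1}, {1, 3}};
        double[] b = {1, 2};
        double[] x = solve(a, b, 100, 1e-10);   // expected ~ [0.0909, 0.6364]
        System.out.println(x[0] + " " + x[1]);
    }
}
```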
We present a software model built on the Apache software stack (ABDS) that is well used in modern cloud computing, which we enhance with HPC concepts to derive HPC-ABDS.
We discuss layers in this stack
We give examples of integrating ABDS with HPC
We discuss how to implement this in a world of multiple infrastructures and evolving software environments for users, developers and administrators
We present Cloudmesh as supporting Software-Defined Distributed System as a Service or SDDSaaS with multiple services on multiple clouds/HPC systems.
We explain the functionality of Cloudmesh as well as the 3 administrator and 3 user modes supported
Comparing Big Data and Simulation Applications and Implications for Software ... – Geoffrey Fox
At eScience in the Cloud 2014, Redmond WA, April 30 2014
There is perhaps a broad consensus as to important issues in practical parallel computing as applied to large scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development.
However, the same is not so true for data-intensive computing, even though commercial clouds devote many more resources to data analytics than supercomputers devote to simulations.
We look at a sample of over 50 big data applications to identify characteristics of data intensive applications and to deduce needed runtime and architectures.
We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks.
Our analysis builds on combining HPC and the Apache software stack that is well used in modern cloud computing.
Initial results on Azure and HPC Clusters are presented
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C... – Geoffrey Fox
Describes relations between Big Data and Big Simulation applications and how these can guide a Big Data - Exascale (Big Simulation) convergence (as in the National Strategic Computing Initiative) and lead to a "complete" set of benchmarks. The basic idea is to view use cases as "Data" + "Model".
Matching Data Intensive Applications and Hardware/Software Architectures – Geoffrey Fox
There is perhaps a broad consensus as to important issues in practical parallel computing as applied to large scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development. However, the same is not so true for data-intensive problems, even though commercial clouds presumably devote more resources to data analytics than supercomputers devote to simulations. We try to establish some principles that allow one to compare data-intensive architectures and decide which applications fit which machines and which software.
We use a sample of over 50 big data applications to identify characteristics of data intensive applications and propose a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks. We consider hardware from clouds to HPC. Our software analysis builds on the Apache software stack (ABDS) that is well used in modern cloud computing, which we enhance with HPC concepts to derive HPC-ABDS.
We illustrate issues with examples including kernels like clustering, and multi-dimensional scaling; cyberphysical systems; databases; and variants of image processing from beam lines, Facebook and deep-learning.
What is the "Big Data" version of the Linpack Benchmark? – Geoffrey Fox
Advances in high-performance/parallel computing in the 1980s and 1990s were spurred by the development of quality high-performance libraries, e.g., ScaLAPACK, as well as by well-established benchmarks, such as Linpack.
Similar efforts to develop libraries for high-performance data analytics are underway. In this talk we argue that such benchmarks should be motivated by frequent patterns encountered in high-performance analytics, which we call Ogres.
Based upon earlier work, we propose that doing so will enable adequate coverage of the "Apache" big data stack as well as most common application requirements, whilst building upon parallel computing experience.
Given the spectrum of analytic requirements and applications, there are multiple "facets" that need to be covered, and thus we propose an initial set of benchmarks - by no means currently complete - that covers these characteristics.
We hope this will encourage debate
Classification of Big Data Use Cases by different Facets – Geoffrey Fox
Ogres classify Big Data applications by multiple facets – each with several exemplars and features. This gives a guide to the breadth and depth of Big Data and allows one to examine which Ogres a particular architecture/software supports.
High Performance Data Analytics and a Java Grande Run Time – Geoffrey Fox
There is perhaps a broad consensus as to important issues in practical parallel computing as applied to large scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development.
However, the same is not so true for data-intensive computing, even though commercial clouds devote many more resources to data analytics than supercomputers devote to simulations.
Here we use a sample of over 50 big data applications to identify characteristics of data intensive applications and to deduce needed runtime and architectures.
We propose a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks.
Our analysis builds on the Apache software stack that is well used in modern cloud computing.
We give some examples including clustering, deep-learning and multi-dimensional scaling.
One suggestion from this work is the value of a high-performance Java (Grande) runtime that supports both simulations and big data.
HPC-ABDS: The Case for an Integrating Apache Big Data Stack with HPC – Geoffrey Fox
This proposes an integration of HPC and Apache technologies, HPC-ABDS. Integration areas include:
File systems
Cluster resource management
File and object data management
Inter-process and thread communication
Analytics libraries
Workflow
Monitoring
Big Graph: Tools, Techniques, Issues, Challenges and Future Directions – csandit
Analyzing interconnection structures among data through the use of graph algorithms and graph analytics has been shown to provide tremendous value in many application domains (such as social networks, protein networks, transportation networks, bibliographic networks, knowledge bases and many more). Nowadays, graphs with billions of nodes and trillions of edges have become very common. In principle, graph analytics is an important big data discovery technique. Therefore, with the increasing abundance of large scale graphs, designing scalable systems for processing and analyzing large scale graphs has become one of the timeliest problems facing the big data research community. In general, distributed processing of big graphs is a challenging task due to their size and the inherent irregular structure of graph computations. In this paper, we present a comprehensive overview of the state of the art to better understand the challenges of developing highly scalable graph processing systems. In addition, we identify a set of current open research challenges and discuss some promising directions for future research.
The Matsu Project - Open Source Software for Processing Satellite Imagery Data – Robert Grossman
The Matsu Project is an Open Cloud Consortium project that is developing open source software for processing satellite imagery data using Hadoop, OpenStack and R.
Big data analytics beyond Hadoop: the "7 giants" categorization of computing/ML problems. Hadoop is good for giant 1, whereas Spark is good for giants 2, 3 and 4. GraphLab is appropriate for giant 5, while Storm is good for real-time processing.
These slides outline the common distributed computing abstractions necessary to implement data science at scale. They start with a characterization of the computations required to realize common machine learning at scale. Introductions to Hadoop MapReduce, Spark and GraphLab are currently covered. Going forward, we shall add Flink, Titan and TensorFlow, how to realize machine learning/deep learning algorithms on top of these frameworks, and the trade-offs between them.
GraphLab: Large-Scale Machine Learning on Graphs (BDT204) | AWS re:Invent 2013 – Amazon Web Services
GraphLab is like Hadoop for graphs in that it enables users to easily express and execute machine learning algorithms on massive graphs. In this session, we illustrate how GraphLab leverages Amazon EC2 and advances in graph representation, asynchronous communication, and scheduling to achieve orders-of-magnitude performance gains over systems like Hadoop on real-world data.
Data systems in NASA's Earth Science Division are primarily focused on providing stewardship of the products of remote sensing and are manifested as Digital Active Archive Systems. Each Instrument Team has a related Science Team which defines the algorithms, monitors the processing of the instruments' output to produce the related data products, and ensures their format and standards compliance. These teams are also influenced by the research and applied sciences components of the programs, but the primary focus is on proving the ongoing validity of the products. Across the distributed system, every product is different; however, this is not conducive to analytics. NASA's Advanced Information Systems Technology (AIST) program is developing an entirely new approach to creating Analytic Centers, which focus on the scientific investigation and harmonize the data, computing resources and tools to enable and accelerate scientific discovery. Stay tuned to find out how. A major element of today's science interests is the comparison of multi-dimensional datasets; this warrants considerable experimentation in trying to understand how to do so meaningfully and quantitatively; asked another way, "What do you mean by similar?" Uncertainty quantification has evolved considerably in the arenas of data reduction and full physics models; however, the emerging demand for machine learning and other artificial intelligence techniques has failed to keep uncertainty quantification and error propagation in mind, and there is considerable work to be done.
This deck covers some of the open problems in the big data analytics space, starting with a discussion of state-of-the-art analytics using Spark and Hadoop YARN. It examines whether each of these is an appropriate technology and explores alternatives wherever possible. It ends with an important problem discussion: how to build a single system to handle big data pipelines without explicit data transfers.
High Performance Computing and Big Data – Geoffrey Fox
We propose a hybrid software stack with large-scale data systems for both research and commercial applications running on the commodity (Apache) Big Data Stack (ABDS), using High Performance Computing (HPC) enhancements typically to improve performance. We give several examples taken from bio- and financial informatics.
We look in detail at parallel and distributed run-times including MPI from HPC and Apache Storm, Heron, Spark and Flink from ABDS stressing that one needs to distinguish the different needs of parallel (tightly coupled) and distributed (loosely coupled) systems.
We also study "Java Grande", or the principles that allow Java codes to perform as fast as those written in more traditional HPC languages. We also note the differences between capability (individual jobs using many nodes) and capacity (many independent jobs) computing.
We discuss how this HPC-ABDS concept allows one to discuss convergence of Big Data, Big Simulation, Cloud and HPC Systems. See http://hpc-abds.org/kaleidoscope/
SPIDAL Java: High Performance Data Analytics with Java on Large Multicore HPC... – Geoffrey Fox
Within the last few years, there have been significant contributions to Java-based big data frameworks and libraries such as Apache Hadoop, Spark, and Storm. While these systems are rich in interoperability and features, developing high performance big data analytic applications is challenging. Also, the study of performance characteristics and high performance optimizations is lacking in the literature for these applications. By contrast, these features are well documented in the High Performance Computing (HPC) domain, and some of the techniques have potential performance benefits in the big data domain as well. This paper identifies a class of machine learning applications with significant computation and communication as a yardstick and presents five optimizations to yield high performance in Java big data analytics. It also incorporates these optimizations in developing SPIDAL Java, a highly optimized suite of Global Machine Learning (GML) applications. The optimizations include intra-node messaging through memory maps rather than network calls, improved cache utilization, reliance on processes over threads, zero garbage collection, and off-heap buffers to load and communicate data. SPIDAL Java demonstrates significant performance gains and scalability with these techniques when running on up to 3072 cores on one of the latest Intel Haswell-based multicore clusters.
http://dsc.soic.indiana.edu/publications/hpc2016-spidal-high-performance-submit-18-public.pdf
http://dsc.soic.indiana.edu/presentations/SPIDALJava.pptx
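One of the five optimizations, intra-node messaging through memory maps with off-heap buffers, can be sketched with plain Java NIO. This is an illustrative reconstruction, not SPIDAL Java source; the file path and buffer size are invented.

```java
import java.io.IOException;
import java.nio.ByteOrder;
import java.nio.DoubleBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MmapExchange {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("/tmp/node-shared.bin"); // hypothetical per-node scratch file
        int doubles = 1024;
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // The mapped region lives outside the Java heap, so the garbage
            // collector never scans or moves it (the "zero GC" idea).
            DoubleBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, doubles * Double.BYTES)
                                 .order(ByteOrder.LITTLE_ENDIAN)
                                 .asDoubleBuffer();
            // "Rank 0" publishes its partial results; other ranks on the same node
            // mapping the same file read them without a socket round trip.
            for (int i = 0; i < doubles; i++) buf.put(i, (double) i);
            System.out.println("wrote " + doubles + " doubles via mmap; buf[10] = " + buf.get(10));
        }
    }
}
```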
Estimating the Evolution Direction of Populations to Improve Genetic Algorithms – Annibale Panichella
Meta-heuristics have been successfully used to solve a wide variety of problems. However, one issue many techniques share is the risk of being trapped in local optima, or of creating a limited variety of solutions (a problem known as "population drift"). In recent and past years, different kinds of techniques have been proposed to deal with population drift, for example hybridizing genetic algorithms with local search techniques or using niche techniques.
This paper proposes a technique, based on Singular Value Decomposition (SVD), to enhance Genetic Algorithms (GAs) population diversity. SVD helps to estimate the evolution direction and drive next generations towards orthogonal dimensions.
The proposed SVD-based GA has been evaluated on 11 benchmark problems and compared with a simple GA and a GA with a distance-crowding scheme. Results indicate that the SVD-based GA achieves significantly better solutions and exhibits quicker convergence than the alternative techniques.
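A hedged sketch of the general idea (not the paper's exact operator) is below, assuming Apache Commons Math for the decomposition: individuals are perturbed along a trailing right-singular vector of the population matrix, i.e., a direction the current population under-explores.

```java
import java.util.Random;
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.SingularValueDecomposition;

public class SvdDiversity {
    // Perturb each individual along the least-explored direction in gene space.
    public static double[][] diversify(double[][] population, double step, Random rnd) {
        RealMatrix p = new Array2DRowRealMatrix(population);
        SingularValueDecomposition svd = new SingularValueDecomposition(p);
        RealMatrix v = svd.getV();                     // columns = directions in gene space
        double[] dir = v.getColumn(v.getColumnDimension() - 1); // smallest singular value
        double[][] next = new double[population.length][];
        for (int i = 0; i < population.length; i++) {
            next[i] = population[i].clone();
            double scale = step * rnd.nextGaussian();
            for (int j = 0; j < dir.length; j++) next[i][j] += scale * dir[j];
        }
        return next;
    }

    public static void main(String[] args) {
        double[][] pop = {{1, 2}, {2, 4.1}, {3, 6.2}}; // nearly collinear population
        double[][] next = diversify(pop, 0.5, new Random(42));
        System.out.println(java.util.Arrays.deepToString(next));
    }
}
```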
DTW: 2015 Data Teaching Workshop – 2nd IEEE STC CC and RDA Workshop on Curricula and Teaching Methods in Cloud Computing, Big Data, and Data Science
as part of CloudCom 2015 (http://2015.cloudcom.org/), Vancouver, Nov 30-Dec 3, 2015.
Discusses the Indiana University Data Science Program and experience with online education; the program is available in both online and residential modes. We end by discussing two classes taught both online and residentially by Geoffrey Fox. One is BDAA: Big Data Applications & Analytics; the other is BDOSSP: Big Data Open Source Software and Projects. Links are
http://openedx.scholargrid.org/ BDAA Fall 2015
http://datascience.scholargrid.org/ BDOSSP Spring 2016
http://bigdataopensourceprojects.soic.indiana.edu/ Spring 2015
Knowledge Collaboration: Working with Data and Web Specialists – Olivier Serrat
Organizing helps achieve—and even amplify—common purpose but often succumbs in time to organizational silos, teaming for the sake of teaming, and the obstacle course of organizational learning. In libraries as elsewhere, the three Ss of Strategy—Structure—Systems must give way to the three Ps of Purpose—Process—People. Thence, with entrepreneurship and knowledge behaviors, data and web specialists can synergize in mutually supportive relationships of shared destiny.
Managing and Deploying High Performance Computing Clusters using Windows HPC ... – Saptak Sen
The new management features built into Windows HPC Server 2008 R2 are the foundation for deploying and managing HPC clusters at scales of up to 1,000 nodes. Join us for a deep dive into monitoring and diagnostic tools and a review of the updated heat map and template-based deployment. We also cover the new PowerShell-based scripting capabilities: the basics of the management shell as well as its underlying design and key concepts, new reporting capabilities, and a discussion of network boot.
Author: Aggarat Jaisuk (Romeo Art)
Big Data Trends - Big Data Thailand Meetup #1
Global Perspective - Gartner Hype Cycle for Emerging Tech 2015
Machine Learning - Citizen Datascience
Big Data on AWS
Big Data on Google Cloud Platform
In-memory Big Data open source highlight projects
New sources of data; computing power (GPGPU)
Lean into Deep Learning
A talk about using Ganglia and other tools to store all kinds of web application metrics for both operations and business purposes. Presented at Cambridge Geek Night.
Big Data HPC Convergence and a bunch of other things – Geoffrey Fox
This talk supports the Ph.D. in Computational & Data-Enabled Science & Engineering at Jackson State University. It describes related educational activities at Indiana University, the Big Data phenomenon, jobs, and HPC and Big Data computations. It then describes how HPC and Big Data can be converged into a single theme.
As the volume and complexity of data from myriad Earth Observing platforms, both remote sensing and in-situ, increase, so does the demand for access to both the data and the information products derived from them. The audience is no longer restricted to an investigator team with specialist science credentials. Non-specialist users, from scientists in other disciplines and the science-literate public to teachers, the general public and decision makers, want access. What prevents them from accessing these resources? It is the very complexity of specialist-developed data formats, data set organizations and specialist terminology. What can be done in response? We must shift the burden from the user to the data provider. To achieve this, our data infrastructures will likely need greater internal code and data-structure complexity in order to achieve (relatively) lower end-user complexity. Evidence from numerous technical and consumer markets supports this scenario. We will cover the elements of modern data environments, what the new use cases are, and how we can respond to them.
EarthCube Monthly Community Webinar – Nov. 22, 2013 – EarthCube
This webinar features project overviews of all EarthCube Awards (Building Blocks, Research Coordination Networks, Conceptual Designs, and Test Governance), followed by a call for involvement, and a Q&A session.
Agenda:
EarthCube Awards – Project Overviews
1. EarthCube Web Services (Building Block)
2. EC3: Earth-Centered Community for Cyberinfrastructure (RCN)
3. GeoSoft (Building Block)
4. Specifying and Implementing ODSIP (Building Block)
5. A Broker Framework for Next Generation Geoscience (BCube) (Building Block)
6. Integrating Discrete and Continuous Data (Building Block)
7. EAGER: Collaborative Research (Building Block)
8. A Cognitive Computer Infrastructure for Geoscience (Building Block)
9. Earth System Bridge (Building Block)
10. CINERGI – Community Inventory of EC Resources for Geoscience Interoperability (BB)
11. Building a Sediment Experimentalist Network (RCN)
12. C4P: Collaboration and Cyberinfrastructure for Paleogeosciences (RCN)
13. Developing a Data-Oriented Human-centric Enterprise for Architecture (CD)
14. Enterprise Architecture for Transformative Research and Collaboration (CD)
15. EC Test Enterprise Governance: An Agile Approach (Test Governance)
A Call for Involvement!
A keynote at the Web Science Conference, 2018, held at the VU Amsterdam [1]. It mainly describes the output of the Semantic Technology Institute International (STI2) Summit (for senior researchers in the Semantic Web field) held in Crete in September 2017 [2].
1. https://websci18.webscience.org/
2. https://www.sti2.org/events/2017-sti2-semantic-summit
How to expand the Galaxy from genes to Earth in six simple steps (and live sm... – Raffaele Montella
FACE-IT is an effort to develop a new IT infrastructure to accelerate existing disciplinary research and enable information transfer among traditionally separate fields. At present, finding data and processing it into usable form can dominate research efforts. By providing ready access not only to data but also to the software tools used to process it for specific uses (e.g., climate impact and economic model inputs), FACE-IT allows researchers to concentrate their efforts on analysis. Lowering barriers to data access allows researchers to stretch in new directions and to learn and respond to the needs of other fields. FACE-IT builds on the Globus Galaxies platform, which has been developed over the past several years at the University of Chicago. FACE-IT also benefits from substantial software development undertaken by the communities who have developed most of the domain-specific tools required to populate FACE-IT with useful capabilities. The FACE-IT Galaxy manages earth system datatypes (such as NetCDF), new tool parameters (dates, map, opendap), aggregated datatypes (RAFT), service providers and cool map visualizers.
Keynote presentation at GlobusWorld 2021. Highlights product updates and roadmap, as well as user success stories in research data management. Presented by Ian Foster, Rachana Ananthakrishnan, Kyle Chard and Vas Vasiliadis.
Challenges and Issues of Next Cloud Computing Platforms – Frederic Desprez
Cloud computing has now crossed the frontiers of research to reach industry. It is used every day, whether to exchange emails or to make reservations on web sites. However, much research remains to be done to improve the performance and functionality of these platforms of tomorrow. In this talk, I will give an overview of some of the theoretical and applied research done at INRIA, particularly around cloud distribution, energy monitoring and management, massive data processing and exchange, and resource management.
"From Big Data to Smart data"
Jie (Jack) Yang, Associate Research Fellow, SMART Infrastructure Facility, presented a summary of his research as part of the SMART Seminar Series on 28 April 2016.
For more information, visit the event page at: http://smart.uow.edu.au/events/UOW212890.html.
A Rules-Based Service for Suggesting Visualizations to Analyze Earth Science ... – Anirudh Prabhu
Current Earth Science Information Systems lack support for researchers who are new to the field, or may be unfamiliar with the domain vocabulary or the breadth of relevant data available. Hence, there is a need to evolve the current information systems, to reduce the time required for data preparation, processing and analysis. Salvaging and leveraging “dark” resources (information resources that organizations collect, process, and store for regular business or operational activities but fail to utilize for other purposes) is an effective way to do this. We assert that Earth science metadata assets are dark resources. These dark resources can be effectively used for data processing and visualization, but they require a combination of domain, data product and processing knowledge, i.e. a knowledge base from which specific data operations can be performed. This paper describes a semantic, rules based approach to provide a service to visualize Earth Science phenomena, based on the data variables extracted using the dark metadata resources.
We use a rule-based language, in our case Apache Jena rules, to make assertions about the compatibility between a phenomenon and various visualizations based on multiple factors. We created separate orthogonal rulesets to map each of these factors to the various phenomena. Some of the factors we have considered include measurements, spatial resolution and time intervals. This approach enables easy additions and deletions based on newly obtained domain knowledge or phenomenon-related information, thus improving the accuracy of the rules service overall. We have also created a "scoring function" that ranks the suggested visualizations by assigning them an importance score. In our scoring function, we take into account the strength of the compatibilities asserted in the rules, the confidence metrics set by the experts, and the number of assertions, to rank the recommendations made by the information system.
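A hedged sketch of this rules-based pairing follows, using the Apache Jena rule engine. The vocabulary (ex:hasMeasurement, ex:suggestedVis) and the single rule are invented for illustration and are not the paper's actual ruleset.

```java
import java.util.List;
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class VisRules {
    public static void main(String[] args) {
        String ns = "http://example.org/vis#"; // invented namespace
        // One invented rule: a phenomenon measured as sea-surface temperature
        // is compatible with a contour-plot visualization.
        String rules = "@prefix ex: <" + ns + ">. "
                + "[suggestContour: (?ph ex:hasMeasurement ex:SeaSurfaceTemp) "
                + "   -> (?ph ex:suggestedVis ex:ContourPlot)]";
        Model base = ModelFactory.createDefaultModel();
        Property hasMeasurement = base.createProperty(ns + "hasMeasurement");
        Property suggestedVis = base.createProperty(ns + "suggestedVis");
        Resource phenom = base.createResource(ns + "ElNino2015");
        phenom.addProperty(hasMeasurement, base.createResource(ns + "SeaSurfaceTemp"));
        List<Rule> ruleList = Rule.parseRules(rules);
        InfModel inf = ModelFactory.createInfModel(new GenericRuleReasoner(ruleList), base);
        // The reasoner derives the suggestion; print the inferred statement.
        System.out.println(inf.getProperty(phenom, suggestedVis));
    }
}
```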
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput... – Geoffrey Fox
Most things are dominated by Artificial Intelligence (AI). Technology Companies like Amazon, Google, Facebook, and Microsoft are AI First organizations.
Engineering achievement today is highlighted by the AI buried in a vehicle or machine. Industry (Manufacturing) 4.0 focuses on the AI-driven future of the Industrial Internet of Things.
Software is eating the world.
We can describe much computer-systems work as designing, building and using the Global AI and Modeling Supercomputer, which is itself autonomously tuned by AI. We suggest that this is not just a bunch of buzzwords but has profound significance, and we examine the consequences for education and research.
Naively, high-performance computing should be relevant for the AI supercomputer, but somehow the corporate juggernaut is not making much use of it. We discuss how to change this.
Lessons from Data Science Program at Indiana University: Curriculum, Students... – Geoffrey Fox
Invited talk at the NSF/TCPP Workshop on Parallel and Distributed Computing Education (EduPar) at IPDPS 2015, Hyderabad, May 25, 2015.
Discusses the Indiana University Data Science Program and experience with online education; the program is available in both online and residential modes. We end by discussing two classes taught both online and residentially by Geoffrey Fox. One is BDAA: Big Data Applications & Analytics https://bigdatacourse.appspot.com/course. The other is BDOSSP: Big Data Open Source Software and Projects http://bigdataopensourceprojects.soic.indiana.edu/
Experience with Online Teaching with Open Source MOOC Technology – Geoffrey Fox
This memo describes experiences with online teaching in Spring Semester 2014. We discuss the technologies used and the approach to teaching/learning.
This work is based on Google Course Builder for a Big Data overview course
Big Data and Clouds: Research and Education – Geoffrey Fox
Presentation at PPAM 2013, Warsaw, September 9, 2013
Economic Imperative: There are a lot of data and a lot of jobs
Computing Model: Industry adopted clouds, which are attractive for data analytics. HPC is also useful in some cases
Progress in scalable robust Algorithms: new data need different algorithms than before
Progress in Data Intensive Programming Models
Progress in Data Science Education: opportunities at universities
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ... – Geoffrey Fox
Keynote at the Sixth International Workshop on Cloud Data Management (CloudDB 2014), Chicago, March 31, 2014.
Abstract: We introduce the NIST collection of 51 use cases and describe their scope over industry, government and research areas. We look at their structure from several points of view, or facets, covering problem architecture, analytics kernels, micro-system usage such as flops/bytes, application class (GIS, expectation maximization) and, very importantly, data source.
We then propose that in many cases it is wise to combine the well-known commodity best-practice (often Apache) Big Data Stack (with ~120 software subsystems) with high performance computing technologies.
We describe this and give early results based on clustering running with different paradigms.
We identify key layers where HPC Apache integration is particularly important: File systems, Cluster resource management, File and object data management, Inter process and thread communication, Analytics libraries, Workflow and Monitoring.
See
[1] A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures, Shantenu Jha, Judy Qiu, Andre Luckow, Pradeep Mantha and Geoffrey Fox, accepted in IEEE BigData 2014, available at: http://arxiv.org/abs/1403.1528
[2] High Performance High Functionality Big Data Software Stack, G Fox, J Qiu and S Jha, in Big Data and Extreme-scale Computing (BDEC), 2014. Fukuoka, Japan. http://grids.ucs.indiana.edu/ptliupages/publications/HPCandApacheBigDataFinal.pdf
FutureGrid Computing Testbed as a Service – Geoffrey Fox
Describes FutureGrid and its role as a Computing Testbed as a Service. FutureGrid is user-customizable, accessed interactively, and supports Grid, Cloud and HPC software with and without VMs. Lessons learned and example use cases are described.
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp... – Geoffrey Fox
Motivating Introduction to MOOC on Big Data from an applications point of view https://bigdatacoursespring2014.appspot.com/course
Course says:
Geoffrey motivates the study of X-informatics by describing data science and clouds. He starts with striking examples of the data deluge, drawn from research, business and the consumer. The growing number of jobs in data science is highlighted. He describes industry trends in both clouds and big data.
He introduces the cloud computing model developed at amazing speed by industry. The four paradigms of scientific research are described, with the growing importance of the data-oriented version. He covers three major X-informatics areas: Physics, e-Commerce and Web Search, followed by a broad discussion of cloud applications. Parallel computing in general, and particular features of MapReduce, are described. He comments on data science education and the benefits of using MOOCs.
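For readers new to the MapReduce features the course describes, here is the canonical Hadoop word-count skeleton in Java; it follows the standard Hadoop MapReduce API rather than any course-specific code.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map: emit (word, 1) for every token in the input split.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: sum the counts for each word; also usable as a combiner.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```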
Big Data HPC Convergence
1. Big Data HPC Convergence
5th Multicore World
15-17 February 2016 – Shed 6, Wellington, New Zealand
http://openparallel.com/multicore-world-2016/
Geoffrey Fox
February 16, 2016
gcf@indiana.edu
http://www.dsc.soic.indiana.edu/, http://spidal.org/ http://hpc-abds.org/kaleidoscope/
Department of Intelligent Systems Engineering
School of Informatics and Computing, Digital Science Center
Indiana University Bloomington
2. Abstract
• Two major trends in computing systems are the growth in high performance computing
(HPC) with an international exascale initiative, and the big data phenomenon with an
accompanying cloud infrastructure of well publicized dramatic and increasing size and
sophistication.
• In studying and linking these trends one needs to consider multiple aspects: hardware,
software, applications/algorithms and even broader issues like business model and
education.
• In this talk we study in detail a convergence approach for software and applications /
algorithms and show what hardware architectures it suggests.
• We start by dividing applications into data plus model components and classifying each
component (whether from Big Data or Big Simulations) in the same way. This leads to 64
properties divided into 4 views, which are Problem Architecture (Macro pattern); Execution
Features (Micro patterns); Data Source and Style; and finally the Processing (runtime) View.
• We discuss convergence software built around HPC-ABDS (High Performance Computing
enhanced Apache Big Data Stack) http://hpc-abds.org/kaleidoscope/ and show how one can
merge Big Data and HPC (Big Simulation) concepts into a single stack.
• We give examples of data analytics running on HPC systems including details on
persuading Java to run fast.
• Some details can be found at
http://dsc.soic.indiana.edu/publications/HPCBigDataConvergence.pdf
3. NIST Big Data Initiative
Led by Chaitan Baru, Bob Marcus, Wo Chang
And
Big Data Application Analysis
4. NBD-PWG (NIST Big Data Public Working Group)
Subgroups & Co-Chairs
• There were 5 Subgroups
– Note mainly industry
• Requirements and Use Cases Sub Group
– Geoffrey Fox, Indiana U.; Joe Paiva, VA; Tsegereda Beyene, Cisco
• Definitions and Taxonomies SG
– Nancy Grady, SAIC; Natasha Balac, SDSC; Eugene Luster, R2AD
• Reference Architecture Sub Group
– Orit Levin, Microsoft; James Ketner, AT&T; Don Krapohl, Augmented
Intelligence
• Security and Privacy Sub Group
– Arnab Roy, CSA/Fujitsu; Nancy Landreville, U. MD; Akhil Manchanda, GE
• Technology Roadmap Sub Group
– Carl Buffington, Vistronix; Dan McClary, Oracle; David Boyd, Data
Tactics
• See http://bigdatawg.nist.gov/usecases.php
• and http://bigdatawg.nist.gov/V1_output_docs.php
5. Use Case Template
• 26 fields completed for 51 apps
• Government Operation: 4
• Commercial: 8
• Defense: 3
• Healthcare and Life Sciences: 10
• Deep Learning and Social Media: 6
• The Ecosystem for Research: 4
• Astronomy and Physics: 5
• Earth, Environmental and Polar Science: 10
• Energy: 1
• Now an online form
8. 51 Detailed Use Cases: Contributed July-September 2013
Covers goals, data features such as 3 V’s, software, hardware
• http://bigdatawg.nist.gov/usecases.php
• https://bigdatacoursespring2014.appspot.com/course (Section 5)
• Government Operation(4): National Archives and Records Administration, Census Bureau
• Commercial(8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search,
Digital Materials, Cargo shipping (as in UPS)
• Defense(3): Sensors, Image surveillance, Situation Assessment
• Healthcare and Life Sciences(10): Medical records, Graph and Probabilistic analysis,
Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity
• Deep Learning and Social Media(6): Driving Car, Geolocate images/cameras, Twitter, Crowd
Sourcing, Network Science, NIST benchmark datasets
• The Ecosystem for Research(4): Metadata, Collaboration, Language Translation, Light source
experiments
• Astronomy and Physics(5): Sky Surveys including comparison to simulation, Large Hadron
Collider at CERN, Belle Accelerator II in Japan
• Earth, Environmental and Polar Science(10): Radar Scattering in Atmosphere, Earthquake,
Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate simulation
datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry (microbes to
watersheds), AmeriFlux and FLUXNET gas sensors
• Energy(1): Smart grid
26 Features for each use case
Biased to science
10. 51 Use Cases: What is Parallelism Over?
• People: either the users (but see below) or subjects of application and often both
• Decision makers like researchers or doctors (users of application)
• Items such as Images, EMR, Sequences below; observations or contents of online
store
– Images or “Electronic Information nuggets”
– EMR: Electronic Medical Records (often similar to people parallelism)
– Protein or Gene Sequences;
– Material properties, Manufactured Object specifications, etc., in custom dataset
– Modelled entities like vehicles and people
• Sensors – Internet of Things
• Events such as detected anomalies in telescope or credit card data or atmosphere
• (Complex) Nodes in RDF Graph
• Simple nodes as in a learning network
• Tweets, Blogs, Documents, Web Pages, etc.
– And characters/words in them
• Files or data to be backed up, moved or assigned metadata
• Particles/cells/mesh points as in parallel simulations
11. Features of 51 Use Cases I
• PP (26) “All” Pleasingly Parallel or Map Only
• MR (18) Classic MapReduce MR (add MRStat below for full count)
• MRStat (7) Simple version of MR where key computations are simple
reduction as found in statistical averages such as histograms and
averages
• MRIter (23) Iterative MapReduce or MPI (Spark, Twister)
• Graph (9) Complex graph data structure needed in analysis
• Fusion (11) Integrate diverse data to aid discovery/decision making;
could involve sophisticated algorithms or could just be a portal
• Streaming (41) Some data comes in incrementally and is processed
this way
• Classify (30) Classification: divide data into categories
• S/Q (12) Index, Search and Query
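The MRStat pattern above is simple enough to sketch in a few lines. A minimal illustration in Java (the language whose performance the talk examines later; names and sizes are ours, not from the talk): the map phase assigns each value to a histogram bin, and the reduce phase is a bare counting reduction, which is exactly what distinguishes MRStat from general MapReduce.

```java
// MRStat sketch: map each value to a bin key, reduce by counting.
import java.util.Map;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.DoubleStream;

public class MRStatHistogram {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double binWidth = 0.1;
        Map<Integer, Long> histogram = DoubleStream
                .generate(rng::nextDouble).limit(1_000_000)
                .parallel()                           // the map side is pleasingly parallel
                .boxed()
                .collect(Collectors.groupingBy(
                        x -> (int) (x / binWidth),    // map: value -> histogram bin
                        Collectors.counting()));      // reduce: a simple count
        histogram.forEach((bin, count) -> System.out.println(bin + ": " + count));
    }
}
```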
12. Features of 51 Use Cases II
• CF (4) Collaborative Filtering for recommender engines
• LML (36) Local Machine Learning (Independent for each parallel entity) –
application could have GML as well
• GML (23) Global Machine Learning: Deep Learning, Clustering, LDA, PLSI,
MDS,
– Large Scale Optimizations as in Variational Bayes, MCMC, Lifted Belief
Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt. Can call this
EGO or Exascale Global Optimization with scalable parallel algorithms
• Workflow (51) Universal
• GIS (16) Geotagged data and often displayed in ESRI, Microsoft Virtual
Earth, Google Earth, GeoServer etc.
• HPC(5) Classic large-scale simulation of cosmos, materials, etc. generating
(visualization) data
• Agent (2) Simulations of models of data-defined macroscopic entities
represented as agents
13. Local and Global Machine Learning
• Many applications use LML or Local machine Learning where machine
learning (often from R) is run separately on every data item such as on every
image
• But others are GML Global Machine Learning where machine learning is a
single algorithm run over all data items (over all nodes in computer)
– maximum likelihood or χ² with a sum over the N data items –
documents, sequences, items to be sold, images etc. and often links
(point-pairs).
– Graph analytics is typically GML
• Covering clustering/community detection, mixture models, topic
determination, Multidimensional scaling, (Deep) Learning Networks
• PageRank is “just” parallel linear algebra
• Note many Mahout algorithms are sequential – partly as MapReduce
limited; partly because parallelism unclear
– MLLib (Spark based) better
• SVM and Hidden Markov Models do not use large scale parallelization in
practice?
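The LML/GML split can be made concrete with a small sketch (class and method names are illustrative, not SPIDAL or Mahout APIs): LML fits an independent model per data item with no communication, while GML evaluates one objective summed over all N items, so on a cluster every iteration would need a global reduction.

```java
// LML versus GML, in miniature.
import java.util.Arrays;

public class LocalVsGlobalML {
    // LML: an independent "model" per item (e.g. per image) -- map only.
    static double fitLocalModel(double[] item) {
        return Arrays.stream(item).average().orElse(0.0); // stand-in for per-item learning
    }

    public static void main(String[] args) {
        double[][] items = new double[10_000][64];

        // LML: embarrassingly parallel over items, no communication needed.
        double[] perItem = Arrays.stream(items).parallel()
                .mapToDouble(LocalVsGlobalML::fitLocalModel).toArray();

        // GML: a single objective summed over all N items (as in maximum
        // likelihood); distributed, this sum becomes an allreduce per iteration.
        double globalObjective = Arrays.stream(perItem).parallel().sum();
        System.out.println("global objective = " + globalObjective);
    }
}
```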
14. 13 Image-based Use Cases
• 13-15 Military Sensor Data Analysis/ Intelligence PP, LML, GIS, MR
• 7:Pathology Imaging/ Digital Pathology: PP, LML, MR for search
becoming terabyte 3D images, Global Classification
• 18&35: Computational Bioimaging (Light Sources): PP, LML Also
materials
• 26: Large-scale Deep Learning: GML Stanford ran 10 million images and
11 billion parameters on a 64 GPU HPC; vision (drive car), speech, and
Natural Language Processing
• 27: Organizing large-scale, unstructured collections of photos: GML
Fit position and camera direction to assemble 3D photo ensemble
• 36: Catalina Real-Time Transient Synoptic Sky Survey (CRTS): PP,
LML followed by classification of events (GML)
• 43: Radar Data Analysis for CReSIS Remote Sensing of Ice Sheets: PP,
LML to identify glacier beds; GML for full ice-sheet
• 44: UAVSAR Data Processing, Data Product Delivery, and Data
Services: PP to find slippage from radar images
• 45, 46: Analysis of Simulation visualizations: PP LML ?GML find paths,
classify orbits, classify patterns that signal earthquakes, instabilities,
climate, turbulence
15. Internet of Things and Streaming Apps
• It is projected that there will be 24 (Mobile Industry Group) to 50 (Cisco)
billion devices on the Internet by 2020.
• The cloud is the natural controller of, and resource provider for, the Internet of Things.
• Smart phones/watches, Wearable devices (Smart People), “Intelligent River”
“Smart Homes and Grid” and “Ubiquitous Cities”, Robotics.
• The majority of use cases are streaming – experimental science gathers data in
a stream, sometimes batched as in a field trip. Below is a sample:
• 10: Cargo Shipping Tracking as in UPS, Fedex PP GIS LML
• 13: Large Scale Geospatial Analysis and Visualization PP GIS LML
• 28: Truthy: Information diffusion research from Twitter Data PP MR for
Search, GML for community determination
• 39: Particle Physics: Analysis of LHC Large Hadron Collider Data:
Discovery of Higgs particle PP for event Processing, Global statistics
• 50: DOE-BER AmeriFlux and FLUXNET Networks PP GIS LML
• 51: Consumption forecasting in Smart Grids PP GIS LML
17. Big Data and Big Simulations
Patterns – the Convergence
Diamonds
18. Big Data - Big Simulation (Exascale) Convergence
• Let's distinguish Data and Model (e.g. machine learning
analytics) in Big data problems
• Then almost always Data is large but Model varies
– e.g. LDA with many topics or deep learning has large model
– Clustering or Dimension reduction can be quite small
• Simulations can also be considered as Data and Model
– Model is solving particle dynamics or partial differential equations
– Data could be small when just boundary conditions or
– Data large with data assimilation (weather forecasting) or when data
visualizations produced by simulation
• Data often static between iterations (unless streaming), model
varies between iterations
19. Classifying Big Data and Big Simulation Applications
• “Benchmarks” “kernels” “algorithm” “mini-apps” can serve multiple
purposes
• Motivate hardware and software features
– e.g. a collaborative filtering algorithm parallelizes well with
MapReduce and suggests using Hadoop on a cloud
– e.g. deep learning on images dominated by matrix operations; needs
CUDA&MPI and suggests HPC cluster
• Benchmark sets can be designed to cover key features of systems in terms
of features and sizes of “important” applications
• Take the 51 use cases and derive specific features; each use case has multiple
features
• Generalize and systematize with features termed “facets”
• 50 Facets (Big Data) or 64 Facets (Big Simulation and Data) divided
into 4 sets or views where each view has “similar” facets
– Allow one to study coverage of benchmark sets and architectures
• Discuss Data and Model together as built around problems which combine
them but we can get insight by separating and this allows better
understanding of Big Data - Big Simulation “convergence”
20. 7 Computational Giants of
NRC Massive Data Analysis Report
1) G1: Basic Statistics e.g. MRStat
2) G2: Generalized N-Body Problems
3) G3: Graph-Theoretic Computations
4) G4: Linear Algebraic Computations
5) G5: Optimizations e.g. Linear Programming
6) G6: Integration e.g. LDA and other GML
7) G7: Alignment Problems e.g. BLAST
http://www.nap.edu/catalog.php?record_id=18374 Big Data Models?
21. HPC (Simulation) Benchmark Classics
• Linpack or HPL: Parallel LU factorization
for solution of linear equations
• NPB version 1: Mainly classic HPC solver kernels
– MG: Multigrid
– CG: Conjugate Gradient
– FT: Fast Fourier Transform
– IS: Integer sort
– EP: Embarrassingly Parallel
– BT: Block Tridiagonal
– SP: Scalar Pentadiagonal
– LU: Lower-Upper symmetric Gauss Seidel
Simulation Models
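Conjugate gradient recurs throughout the talk – as the NPB CG kernel here and in the clustering and MDS codes later – so a compact version may help. This is a sketch for a dense symmetric positive-definite system (the NPB kernel is sparse; dense keeps the example short):

```java
// Conjugate gradient for A x = b, A symmetric positive definite.
public class ConjugateGradient {
    static double[] matVec(double[][] a, double[] x) {
        int n = x.length;
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) y[i] += a[i][j] * x[j];
        return y;
    }

    static double dot(double[] u, double[] v) {
        double s = 0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }

    // Each iteration costs one matrix-vector product plus a few dots and axpys.
    static double[] solve(double[][] a, double[] b, int maxIter, double tol) {
        int n = b.length;
        double[] x = new double[n];
        double[] r = b.clone(), p = b.clone();
        double rr = dot(r, r);
        for (int k = 0; k < maxIter && Math.sqrt(rr) > tol; k++) {
            double[] ap = matVec(a, p);
            double alpha = rr / dot(p, ap);
            for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * ap[i]; }
            double rrNew = dot(r, r);
            for (int i = 0; i < n; i++) p[i] = r[i] + (rrNew / rr) * p[i];
            rr = rrNew;
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] a = {{4, 1}, {1, 3}};
        double[] b = {1, 2};
        double[] x = solve(a, b, 100, 1e-10);
        System.out.println(x[0] + ", " + x[1]); // ~0.0909, ~0.6364
    }
}
```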
22. 13 Berkeley Dwarfs
1) Dense Linear Algebra
2) Sparse Linear Algebra
3) Spectral Methods
4) N-Body Methods
5) Structured Grids
6) Unstructured Grids
7) MapReduce
8) Combinational Logic
9) Graph Traversal
10) Dynamic Programming
11) Backtrack and
Branch-and-Bound
12) Graphical Models
13) Finite State Machines
First 6 of these correspond to Colella’s
original. (Classic simulations)
Monte Carlo dropped.
N-body methods are a subset of
Particle in Colella.
Note the list is a little inconsistent in that
MapReduce is a programming model
and spectral method is a numerical
method.
Need multiple facets!
Largely Models for Data or Simulation
24. Convergence Diamonds: Views and Facets
[Figure: the 64 convergence facets arranged in four views, annotated by their balance of Data (D) and Model (M).
Problem Architecture View (macro patterns 1–12, nearly all Data+Model): Pleasingly Parallel; Classic MapReduce; Map-Collective; Map Point-to-Point; Map Streaming; Shared Memory; Single Program Multiple Data; Bulk Synchronous Parallel; Fusion; Dataflow; Agents; Workflow.
Execution View (micro patterns, a mix of Data and Model): Performance Metrics; Flops per Byte / Memory IO / Flops per watt; Execution Environment and core libraries; Data Volume and Model Size; Data Velocity; Data and Model Variety; Veracity; Communication Structure; Dynamic=D/Static=S and Regular=R/Irregular=I for Data and Model; Iterative/Simple; Data and Model Abstraction; Data Metric=M/Non-Metric=N; model algorithm O(N²)=NN or O(N)=N.
Data Source and Style View (nearly all Data): SQL/NoSQL/NewSQL; Enterprise Data Model; Files/Objects; HDFS/Lustre/GPFS; Archived/Batched/Streaming – S1, S2, S3, S4, S5; Shared/Dedicated/Transient/Permanent; Metadata/Provenance; Internet of Things; HPC Simulations; Geospatial Information System.
Processing View (all Model), split into Big Data Processing Diamonds – Pr-1M Micro-benchmarks; Pr-2M Local and Pr-3M Global (Analytics/Informatics/Simulations); Pr-4M Base Data Statistics; Pr-5M Recommender Engine; Pr-6M Data Search/Query/Index; Pr-7M Data Classification; Pr-8M Learning; Pr-9M Optimization Methodology; Pr-10M Streaming Data Algorithms; Pr-11M Data Alignment; Pr-12M Linear Algebra Kernels and many subclasses; Pr-13M Graph Algorithms; Pr-14M Visualization; Pr-15M Core Libraries – and Simulation (Exascale) Processing Diamonds – Pr-16M Iterative PDE Solvers; Pr-17M Multiscale Method; Pr-18M Spectral Methods; Pr-19M N-body Methods; Pr-20M Particles and Fields; Pr-21M Evolution of Discrete Systems; Pr-22M Nature of mesh if used.]
25. Dwarfs and Ogres give
Convergence Diamonds
• Macropatterns or Problem Architecture View:
Unchanged from Ogres
• Execution View: Significant changes to separate Data
and Model and add characteristics of Simulation models
• Data Source and Style View: Same for Ogres and
Diamonds – present but less important for Simulations
compared to big data
• Processing View is a mix of Big Data Processing View
and Big Simulation Processing View and includes
some facets like “uses linear algebra” needed in both:
has specifics of key simulation kernels and in particular
includes NAS Parallel Benchmarks and Berkeley Dwarfs
26. Facets of the Convergence
Diamonds
Problem Architecture
Meta or Macro Aspects of Diamonds
Valid for Big Data or Big Simulations as describes Problem
which is Model-Data combination
27. Problem Architecture View (Meta or MacroPatterns)
i. Pleasingly Parallel – as in BLAST, Protein docking, some (bio-)imagery including
Local Analytics or Machine Learning – ML or filtering pleasingly parallel, as in bio-
imagery, radar images (pleasingly parallel but sophisticated local analytics)
ii. Classic MapReduce: Search, Index and Query and Classification algorithms like
collaborative filtering (G1 for MRStat in Features, G7)
iii. Map-Collective: Iterative maps + communication dominated by “collective” operations
as in reduction, broadcast, gather, scatter. Common datamining pattern
iv. Map-Point to Point: Iterative maps + communication dominated by many small point to
point messages as in graph algorithms
v. Map-Streaming: Describes streaming, steering and assimilation problems
vi. Shared Memory: Some problems are asynchronous and are easier to parallelize on
shared rather than distributed memory – see some graph algorithms
vii. SPMD: Single Program Multiple Data, common parallel programming feature
viii. BSP or Bulk Synchronous Processing: well-defined compute-communication phases
ix. Fusion: Knowledge discovery often involves fusion of multiple methods.
x. Dataflow: Important application features often occurring in composite Ogres
xi. Use Agents: as in epidemiology (swarm approaches) This is Model only
xii. Workflow: All applications often involve orchestration (workflow) of multiple
components
Most (11 of total 12) are properties of Data+Model
28. Relation of Problem and Machine Architecture
• Problem is Model plus Data
• In my old papers (especially book Parallel Computing Works!), I discussed
computing as multiple complex systems mapped into each other
Problem → Numerical formulation → Software → Hardware
• Each of these 4 systems has an architecture that can be described in
similar language
• One gets an easy programming model if architecture of problem matches
that of Software
• One gets good performance if architecture of hardware matches that of
software and problem
• So “MapReduce” can be used as architecture of software (programming
model) or “Numerical formulation of problem”
29. 6 Forms of MapReduce cover “all” circumstances
Describes Problem (Model reflecting data), Machine, and Software architecture
30. Data Analysis Problem Architectures
1) Pleasingly Parallel PP or “map-only” in MapReduce
BLAST Analysis; Local Machine Learning
2A) Classic MapReduce MR, Map followed by reduction
High Energy Physics (HEP) Histograms; Web search; Recommender Engines
2B) Simple version of classic MapReduce MRStat
Final reduction is just simple statistics
3) Iterative MapReduce MRIter
Expectation maximization Clustering Linear Algebra, PageRank
4A) Map Point to Point Communication
Classic MPI; PDE Solvers and Particle Dynamics; Graph processing Graph
4B) GPU (Accelerator) enhanced 4A) – especially for deep learning
5) Map + Streaming + some sort of Communication
Images from Synchrotron sources; Telescopes; Internet of Things IoT
Apache Storm is (Map + Dataflow) +Streaming
Data assimilation is (Map + Point to Point Communication) + Streaming
6) Shared memory allowing parallel threads which are tricky to program but
lower latency
Difficult to parallelize asynchronous parallel Graph Algorithms
32. View for Micropatterns or Execution Features
i. Performance Metrics; property found by benchmarking Diamond
ii. Flops per byte; memory or I/O
iii. Execution Environment; Core libraries needed: matrix-matrix/vector algebra, conjugate
gradient, reduction, broadcast; Cloud, HPC etc.
iv. Volume: property of a Diamond instance: a) Data Volume and b) Model Size
v. Velocity: qualitative property of Diamond with value associated with instance. Only Data
vi. Variety: important property especially of composite Diamonds; Data and Model separately
vii. Veracity: important property of applications but not kernels;
viii. Model Communication Structure; Interconnect requirements; Is communication BSP,
Asynchronous, Pub-Sub, Collective, Point to Point?
ix. Is Data and/or Model (graph) static or dynamic?
x. Much Data and/or Models consist of a set of interconnected entities; is this regular as a set
of pixels or is it a complicated irregular graph?
xi. Are Models Iterative or not?
xii. Data Abstraction: key-value, pixel, graph(G3), vector, bags of words or items; Model can
have same or different abstractions e.g. mesh points, finite element, Convolutional Network
xiii. Are data points in metric or non-metric spaces? Data and Model separately?
xiv. Is Model algorithm O(N²) or O(N) (up to logs) for N points per iteration (G2)
33. Comparison of Data Analytics with Simulation I
• Simulations produce big data as visualization of results – they are data
source
– Or consume often smallish data to define a simulation problem
– HPC simulation in (weather) data assimilation is data + model
• Pleasingly parallel often important in both
• Both are often SPMD and BSP
• Non-iterative MapReduce is major big data paradigm
– not a common simulation paradigm except where “Reduce” summarizes
pleasingly parallel execution as in some Monte Carlos
• Big Data often has large collective communication
– Classic simulation has a lot of smallish point-to-point messages
– Motivates Map-Collective model
• Simulations characterized often by difference or differential operators
• Simulation dominantly sparse (nearest neighbor) data structures
– Some important data analytics involves full matrix algorithm but
– “Bag of words (users, rankings, images..)” algorithms are sparse, as is
PageRank
35. Comparison of Data Analytics with Simulation II
• There are similarities between some graph problems and particle
simulations with a strange cutoff force.
– Both Map-Communication
• Note many big data problems are “long range force” (as in gravitational
simulations) as all points are linked.
– Easiest to parallelize. Often full matrix algorithms
– e.g. in DNA sequence studies, distance (i, j) defined by BLAST, Smith-
Waterman, etc., between all sequences i, j.
– Opportunity for “fast multipole” ideas in big data. See NRC report
• In image-based deep learning, neural network weights are block sparse
(corresponding to links to pixel blocks) but can be formulated as full matrix
operations on GPUs and MPI in blocks.
• In HPC benchmarking, Linpack is being challenged by a new sparse conjugate
gradient benchmark HPCG, while I am diligently using non-sparse
conjugate gradient solvers in clustering and Multi-dimensional scaling.
36. Convergence Diamond Facets
Big Data and Big Simulation
Processing View
All Model Properties but differences
between Big Data and Big Simulation
37. Diamond Facets in Processing (runtime) View I
used in Big Data and Big Simulation
• Pr-1M Micro-benchmarks: Ogres that exercise simple features of hardware
such as communication, disk I/O, CPU, memory performance
• Pr-2M Local Analytics executed on a single core or perhaps node
• Pr-3M Global Analytics requiring iterative programming models (G5,G6)
across multiple nodes of a parallel system
• Pr-12M Uses Linear Algebra common in Big Data and simulations
– Subclasses like Full Matrix
– Conjugate Gradient, Krylov, Arnoldi iterative subspace methods
– Structured and unstructured sparse matrix methods
• Pr-13M Graph Algorithms (G3) Clear important class of algorithms -- as
opposed to vector, grid, bag of words etc. – often hard especially in parallel
• Pr-14M Visualization is key application capability for big data and
simulations
• Pr-15M Core Libraries Functions of general value such as Sorting, Math
functions, Hashing
38. Diamond Facets in Processing (runtime) View II
used in Big Data
• Pr-4M Basic Statistics (G1): MRStat in NIST problem features
• Pr-5M Recommender Engine: core to many e-commerce, media businesses;
collaborative filtering key technology
• Pr-6M Search/Query/Index: Classic database which is well studied (Baru, Rabl
tutorial)
• Pr-7M Data Classification: assigning items to categories based on many methods
– MapReduce good in Alignment, Basic statistics, S/Q/I, Recommender, Classification
• Pr-8M Learning of growing importance due to Deep Learning success in speech
recognition etc.
• Pr-9M Optimization Methodology: overlapping categories including
– Machine Learning, Nonlinear Optimization (G6), Maximum Likelihood or χ² least
squares minimizations, Expectation Maximization (often Steepest descent),
Combinatorial Optimization, Linear/Quadratic Programming (G5), Dynamic
Programming
• Pr-10M Streaming Data or online Algorithms. Related to DDDAS (Dynamic Data-
Driven Application Systems)
• Pr-11M Data Alignment (G7) as in BLAST compares samples with repository
39. Diamond Facets in Processing (runtime) View III
used in Big Simulation
• Pr-16M Iterative PDE Solvers: Jacobi, Gauss Seidel etc.
• Pr-17M Multiscale Method? Multigrid and other variable
resolution approaches
• Pr-18M Spectral Methods as in Fast Fourier Transform
• Pr-19M N-body Methods as in Fast multipole, Barnes-Hut
• Pr-20M Both Particles and Fields as in Particle in Cell method
• Pr-21M Evolution of Discrete Systems as in simulation of
Electrical Grids, Chips, Biological Systems, Epidemiology.
Needs Ordinary Differential Equation solvers
• Pr-22M Nature of Mesh if used: Structured, Unstructured,
Adaptive
Covers NAS Parallel Benchmarks and Berkeley Dwarfs
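Pr-16M is easy to illustrate: a Jacobi sweep for the 2D Laplace equation, sketched below with illustrative sizes (a real solver would add convergence tests and, in parallel, halo exchange between ranks):

```java
// Jacobi iteration for the 2D Laplace equation on a square grid.
public class JacobiLaplace {
    public static void main(String[] args) {
        int n = 128;
        double[][] u = new double[n][n], next = new double[n][n];
        for (int i = 0; i < n; i++) { u[i][0] = 1.0; next[i][0] = 1.0; } // fixed edge

        for (int iter = 0; iter < 1000; iter++) {
            for (int i = 1; i < n - 1; i++)
                for (int j = 1; j < n - 1; j++)   // each point becomes the average
                    next[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]);
            double[][] t = u; u = next; next = t; // swap buffers for the next sweep
        }
        System.out.println(u[n / 2][n / 2]);
    }
}
```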
40. Facets of the Diamonds (same as
Ogres here)
Data Source and Style Aspects
Present but often less important for
Simulations (that use and produce data)
41. Data Source and Style Diamond View I
i. SQL, NewSQL or NoSQL: NoSQL includes Document,
Column, Key-value, Graph, Triple store; NewSQL is SQL redone to exploit
NoSQL performance
ii. Other Enterprise data systems: 10 examples from NIST integrate
SQL/NoSQL
iii. Set of Files or Objects: as managed in iRODS and extremely common in
scientific research
iv. File systems, Object, Blob and Data-parallel (HDFS) raw storage:
Separated from computing or colocated? HDFS vs. Lustre vs. OpenStack
Swift vs. GPFS
v. Archive/Batched/Streaming: Streaming is incremental update of datasets
with new algorithms to achieve real-time response (G7); Before data gets
to compute system, there is often an initial data gathering phase which is
characterized by a block size and timing. Block size varies from month
(Remote Sensing, Seismic) to day (genomic) to seconds or lower (Real
time control, streaming)
• Streaming divided into categories overleaf
42. Data Source and Style Diamond View II
• Streaming divided into 5 categories depending on event size and
synchronization and integration
• Set of independent events where precise time sequencing unimportant.
• Time series of connected small events where time ordering important.
• Set of independent large events where each event needs parallel processing with time sequencing not
critical
• Set of connected large events where each event needs parallel processing with time sequencing critical.
• Stream of connected small or large events to be integrated in a complex way.
vi. Shared/Dedicated/Transient/Permanent: qualitative property of data; Other
characteristics are needed for permanent auxiliary/comparison datasets and these
could be interdisciplinary, implying nontrivial data movement/replication
vii. Metadata/Provenance: Clear qualitative property but not for kernels as important
aspect of data collection process
viii. Internet of Things: 24 to 50 Billion devices on Internet by 2020
ix. HPC simulations: generate major (visualization) output that often needs to be
mined
x. Using GIS: Geographical Information Systems provide attractive access to
geospatial data
43. 2. Perform real time analytics on data source streams and notify users when specified events occur
[Figure: streaming data sources feed a filter (built on Storm, Kafka, HBase, Zookeeper) that identifies events; users specify the filter, identified events are posted to users, selected events are posted to a repository, the full stream is archived, and streamed data can be fetched back from the archive.]
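The pattern in this figure can be sketched in plain Java (illustrative only; the production stack in the figure is Storm and Kafka): a consumer thread takes events off a queue, posts the ones matching a user-specified filter, and would archive the rest.

```java
// Real-time filter: notify users when specified events occur in a stream.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Predicate;

public class StreamEventFilter {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> stream = new LinkedBlockingQueue<>();
        Predicate<String> filter = e -> e.contains("ALERT");    // "specify filter"

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String event = stream.take();               // fetch streamed data
                    if (filter.test(event))
                        System.out.println("posted: " + event); // identified event
                    // else: write to the archive/repository (omitted)
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.setDaemon(true);
        consumer.start();

        stream.put("heartbeat");
        stream.put("ALERT: threshold crossed");
        Thread.sleep(200); // let the filter drain the queue before the demo exits
    }
}
```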
44. 5. Perform interactive analytics on data in analytics-optimized database
[Figure: data – streaming or batch – lands in data storage (HDFS, HBase) and is analyzed with Hadoop, Spark, Giraph, Pig and analytics libraries such as Mahout and R.]
45. 5A. Perform interactive analytics on observational scientific data
[Figure: scientific data is recorded in the “field” (e.g. streaming Twitter data for social networking), locally accumulated with initial computing, then moved by batch transport or direct transfer to the primary analysis system – Grid or Many Task software, Hadoop, Spark, Giraph, Pig over HDFS, HBase or file collections – where science analysis codes, Mahout and R run. NIST examples include LHC, Remote Sensing, Astronomy and Bioinformatics.]
48. Big Data and (Exascale) Simulation Convergence II
Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies
Cross-Cutting Functions:
1) Message and Data Protocols: Avro, Thrift, Protobuf
2) Distributed Coordination: Google Chubby, Zookeeper, Giraffe, JGroups
3) Security & Privacy: InCommon, Eduroam, OpenStack Keystone, LDAP, Sentry, Sqrrl, OpenID, SAML, OAuth
4) Monitoring: Ambari, Ganglia, Nagios, Inca
17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad,
Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA),
Jitterbit, Talend, Pentaho, Apatar, Docker Compose, KeystoneML
16) Application and Analytics: Mahout , MLlib , MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, PLASMA MAGMA,
Azure Machine Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL(Intel), Caffe, Torch, Theano, DL4j,
H2O, IBM Watson, Oracle PGX, GraphLab, GraphX, IBM System G, GraphBuilder(Intel), TinkerPop, Parasol, Dream:Lab, Google Fusion Tables,
CINET, NWB, Elasticsearch, Kibana, Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree, DC.js, TensorFlow, CNTK
15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure, Cloud
Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi, HUBzero, OODT,
Agave, Atmosphere
15A) High level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq,
Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird
14B) Streams: Storm, S4, Samza, Granules, Neptune, Google MillWheel, Amazon Kinesis, LinkedIn, Twitter Heron, Databus, Facebook
Puma/Ptail/Scribe/ODS, Azure Stream Analytics, Floe, Spark Streaming, Flink Streaming, DataTurbine
14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, MR-MPI, Stratosphere (Apache Flink), Reef, Disco,
Hama, Giraph, Pregel, Pegasus, Ligra, GraphChi, Galois, Medusa-GPU, MapGraph, Totem
13) Inter process communication Collectives, point-to-point, publish-subscribe: MPI, HPX-5, Argo BEAST HPX-5 BEAST PULSAR, Harp, Netty,
ZeroMQ, ActiveMQ, RabbitMQ, NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT, Marionette Collective, Public Cloud: Amazon
SNS, Lambda, Google Pub Sub, Azure Queues, Event Hubs
12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan, VoltDB,
H-Store
12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC
12) Extraction Tools: UIMA, Tika
11C) SQL(NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal
Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB, Spark SQL
11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, ZHT, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB,
Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J,
graphdb, Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame
Public Cloud: Azure Table, Amazon Dynamo, Google DataStore
11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet
10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST
9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm,
Torque, Globus Tools, Pilot Jobs
8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, FUSE, Gluster, Lustre, GPFS, GFFS
Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage
7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis
6) DevOps: Docker (Machine, Swarm), Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat,
Sahara, Rocks, Cisco Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes,
Buildstep, Gitreceive, OpenTOSCA, Winery, CloudML, Blueprints, Terraform, DevOpSlang, Any2Api
5) IaaS Management from HPC to hypervisors: Xen, KVM, QEMU, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula,
Eucalyptus, Nimbus, CloudStack, CoreOS, rkt, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public Clouds
Networking: Google Cloud DNS, Amazon Route 53
21 layers, over 350 software packages. January 29, 2016
49. Functionality of 21 HPC-ABDS Layers
1) Message Protocols:
2) Distributed Coordination:
3) Security & Privacy:
4) Monitoring:
5) IaaS Management from HPC to hypervisors:
6) DevOps:
7) Interoperability:
8) File systems:
9) Cluster Resource Management:
10) Data Transport:
11) A) File management
B) NoSQL
C) SQL
12) In-memory databases&caches / Object-relational mapping / Extraction Tools
13) Inter process communication Collectives, point-to-point, publish-subscribe, MPI:
14) A) Basic Programming model and runtime, SPMD, MapReduce:
B) Streaming:
15) A) High level Programming:
B) Frameworks
16) Application and Analytics:
17) Workflow-Orchestration:
There are 21 functionalities (counting the 11, 14 and 15 subparts): 4 cross-cutting functions at the top, and 17 in the order of the layered diagram, starting at the bottom.
51. Java Grande
Revisited on 3 data analytics codes
Clustering
Multidimensional Scaling
Latent Dirichlet Allocation
all sophisticated algorithms
52. Java MPI performs better than Threads I
• 48 24-core Haswell nodes; 200K DA-MDS dataset
• Default MPI much worse than threads
• Optimized MPI using shared memory node-based messaging is much better than
threads
53. MPI versus OpenMPI Intranode 400K DA-MDS
• Note, the default Java + OMPI could NOT handle 400K data
• Java + SM + OMPI is our zero intra-node messaging
implementation using Java shared memory maps
• SPIDAL Java is above plus other optimizations
• Input 400K x 400K binary Short (2 bytes) matrix ~300GB
[Chart: runtime, comparing the OpenMPI intranode and OpenMP intranode configurations.]
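The “Java shared memory maps” used for zero intra-node messaging can be sketched with standard java.nio memory mapping (a simplified illustration on Linux tmpfs; SPIDAL Java's actual transport differs in detail): two ranks on one node map the same file, so data written by one is visible to the other without copies or serialization.

```java
// Two processes on one node exchange doubles through a shared memory map.
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedMemoryMap {
    public static void main(String[] args) throws Exception {
        // /dev/shm is Linux tmpfs, so the "file" lives in memory.
        try (RandomAccessFile f = new RandomAccessFile("/dev/shm/rankdata", "rw");
             FileChannel ch = f.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8 * 1024);
            buf.putDouble(0, 3.14);      // one rank writes into the shared region
            double v = buf.getDouble(0); // another process mapping the same file
            System.out.println(v);       // would read it back with no copying
        }
    }
}
```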
54. Java MPI performs better than Threads II
128 24-core Haswell nodes
[Chart: 200K dataset speedup, best threads versus best MPI.]
55. DA-MDS Speedup versus Problem Sizes
• All tests were done with the SPIDAL Java version
• Cores 48 to 1152 were done on 48 nodes by varying the number of
cores used within a node from 1 to 24
• Cores 2304 and 3072 were done on 96 and 128 nodes while using 24
cores per node
56. DA-MDS Scaling on 32 36 core Haswell Nodes
• Core counts of 32 to 1152 were done on 32 nodes by varying the number of
cores used within a node from 1 to 36
• Scales up to 32 cores per node (16 cores per chip)
MPI both internode and intranode
57. Java compared to Fortran and C
• NAS Java is the old Java Grande benchmark
• NAS serial benchmark total times in
seconds
• SPIDAL Java optimizations were
incomplete and included Zero-Garbage
Collection and Cache Optimizations only
• This is our machine learning code
• DA-MDS block matrix multiplication
times in milliseconds
• 1x1* is the serial version and does the
computation of a single process in
the 24x48 pattern.
• SPIDAL Java includes all optimizations
SPIDAL Java
1. Cache and memory optimizations - these include blocked loops, 1D arrays to
represent 2D data, and loop re-ordering
2. Minimal memory and zero full Garbage Collection (GC) - these include using statically
allocated arrays to reduce GC and using the minimum number of arrays possible
3. Off-heap data structures - these were used to load initial data, for intra-node messaging,
and for MPI communications.
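Optimizations 1 and 2 above can be sketched as follows (sizes are illustrative): the 2D matrices are stored as flat 1D arrays, multiplied with blocked loops so a small working set stays in cache, and every buffer is statically allocated so the steady-state loop allocates nothing and triggers no garbage collection.

```java
// Blocked matrix multiply over 1D arrays with zero steady-state allocation.
public class BlockedMatMul {
    static final int N = 1024, B = 64;            // matrix size and cache block size
    static final double[] a = new double[N * N];  // 1D arrays represent 2D data
    static final double[] b = new double[N * N];
    static final double[] c = new double[N * N];  // preallocated: no GC in the loop

    static void multiply() {
        for (int ii = 0; ii < N; ii += B)          // blocked loops keep a BxB
            for (int kk = 0; kk < N; kk += B)      // working set resident in cache
                for (int jj = 0; jj < N; jj += B)
                    for (int i = ii; i < ii + B; i++)
                        for (int k = kk; k < kk + B; k++) {
                            double aik = a[i * N + k];
                            for (int j = jj; j < jj + B; j++)
                                c[i * N + j] += aik * b[k * N + j];
                        }
    }

    public static void main(String[] args) {
        java.util.Arrays.fill(a, 1.0);
        java.util.Arrays.fill(b, 1.0);
        multiply();
        System.out.println(c[0]); // 1024.0, since every a and b entry is 1
    }
}
```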
59. [Figure: 10-year US stock daily price time series mapped to 3D (work in progress); 3400 stocks with sector groupings, showing July 21 2007 positions and end-2008 positions.]
62. Big Data and (Exascale) Simulation Convergence I
• Our approach to Convergence is built around two ideas that avoid addressing
the hardware directly, since with modern DevOps technology it isn't hard to
retarget applications between different hardware systems.
• Rather we approach Convergence through applications and software.
• Convergence Diamonds unify Big Simulation and Big Data
applications and so allow one to more easily identify good approaches to
implementing Big Data and Exascale applications in a uniform fashion.
• Software convergence builds on the HPC-ABDS High Performance
Computing enhanced Apache Big Data Software Stack concept
(http://dsc.soic.indiana.edu/publications/HPC-ABDSDescribed_final.pdf,
http://hpc-abds.org/kaleidoscope/ )
– This arranges key HPC and ABDS software together in 21 layers showing
where HPC and ABDS overlap. It, for example, introduces a communication
layer to allow ABDS runtimes like Hadoop, Storm, Spark and Flink to use the
richest high performance capabilities shared with MPI. Generally it
proposes how to use HPC and ABDS software together.
– The layered architecture offers some protection against rapid ABDS technology
change
63. Dual Convergence Architecture
• Running same HPC-ABDS across all platforms but data management
machine has different balance in I/O, Network and Compute from “model”
machine
[Figure: two clusters running the same HPC-ABDS stack – a Data Management cluster whose nodes pair compute (C) and data (D) units, and a Model cluster of compute-only (C) nodes – serving Big Data and Big Simulation.]
64. Things to do for Big Data and (Exascale)
Simulation Convergence II
• Converge Applications: Separate data and model to classify
Applications and Benchmarks across Big Data and Big
Simulations to give Convergence Diamonds with many
facets
– Indicated how to extend Big Data Ogres to Big Simulations
by looking separately at model and data in Ogres
– Diamonds have four views or collections of facets: Problem
Architecture; Execution; Data Source and Style; Big Data
and Big Simulation Processing
– Facets cover data, model or their combination – the
problem or application
– Note Simulation Processing View has by construction,
similarities to old parallel computing benchmarks
65. Things to do for Big Data and (Exascale)
Simulation Convergence III
• Convergence Benchmarks: we will use benchmarks that cover the facets of the
convergence diamonds i.e. cover big data and simulations;
– As we separate data and model, compute intensive simulation benchmarks (e.g.
solve partial differential equation) will be linked with data analytics (the model in
big data)
– IU focus SPIDAL (Scalable Parallel Interoperable Data Analytics Library) with
high performance clustering, dimension reduction, graphs, image processing as
well as MLlib will be linked to core PDE solvers to explore the communication
layer of parallel middleware
– Maybe integrating data and simulation is an interesting idea in benchmark sets
• Convergence Programming Model
– Note parameter servers used in machine learning will be mimicked by collective
operators invoked on distributed parameter (model) storage
– E.g. Harp as Hadoop HPC Plug-in
– There should be interest in using Big Data software systems to support exascale
simulations
– Streaming solutions from IoT to analysis of astronomy and LHC data will drive
high performance versions of Apache streaming systems
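The parameter-server-as-collective idea above can be sketched as follows, assuming the Open MPI Java bindings (the mpi package shipped with Open MPI; the update logic is ours, not Harp's): each rank computes a local gradient over its data partition, and one in-place allreduce sums it, giving every rank the same global update that a synchronous parameter server would.

```java
// Synchronous "parameter server" via an MPI allreduce (Open MPI Java bindings).
import mpi.MPI;

public class AllreduceParameterUpdate {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int modelSize = 1000;
        double[] gradient = new double[modelSize];
        // ... each rank fills `gradient` from its own data partition ...
        // In-place allreduce: sums the per-rank gradients across COMM_WORLD.
        MPI.COMM_WORLD.allReduce(gradient, modelSize, MPI.DOUBLE, MPI.SUM);
        // `gradient` now holds the global sum on every rank; apply the model update.
        MPI.Finalize();
    }
}
```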
66. Things to do for Big Data and (Exascale)
Simulation Convergence IV
• Converge Language: Make Java run as fast as C++ (Java
Grande) for computing and communication
• It is surprising that there is so much Big Data work in industry, yet basic
high performance Java methodology and tools are missing
– Needs some work as no agreed OpenMP for Java parallel
threads
– OpenMPI supports Java but needs enhancements to get
best performance on needed collectives (For C++ and
Java)
– Convergence Language Grande should support Python,
Java (Scala), C/C++ (Fortran)