Slides from Strata+Hadoop Singapore 2016 presenting how deep learning can be scaled both vertically and horizontally, and when to use CPUs versus GPUs.
How Spark is Enabling the New Wave of Converged Cloud Applications - MapR Technologies
Apache Spark has become the de-facto compute engine of choice for data engineers, developers, and data scientists because of its ability to run multiple analytic workloads with a single, general-purpose compute engine.
But is Spark alone sufficient for developing cloud-based big data applications? What are the other required components for supporting big data cloud processing? How can you accelerate the development of applications which extend across Spark and other frameworks such as Kafka, Hadoop, NoSQL databases, and more?
Open Source Innovations in the MapR Ecosystem Pack 2.0 - MapR Technologies
Over the summer, we introduced the MapR Ecosystem Pack (MEP), which is a natural evolution of our existing software update program that decouples open source ecosystem updates from core platform updates. MEP gives our customers quick access to the latest open source innovations while also ensuring cross-project compatibility in any given MEP version.
Streaming in the Extreme
Jim Scott, Director, Enterprise Strategy & Architecture, MapR
Have you ever heard of Kafka? Are you ready to start streaming all of the events in your business? What happens to your streaming solution when you outgrow your single data center? What happens when you are at a company that is already running multiple data centers and you need to implement streaming across data centers? And what about when you need to scale to a trillion events per day? I will discuss technologies like Kafka that can be used to accomplish real-time, lossless messaging that works in both single and multiple globally dispersed data centers. I will also describe how to handle the data coming in through these streams in both batch processes as well as real-time processes.
Video Presentation:
https://youtu.be/Y0vxLgB1u9o
The increasing availability of mobile phones with embedded GPS devices and sensors has spurred the use of vehicle telematics in recent years. Telematics provides detailed and continuous information of a vehicle such as the location, speed, and movement. Vehicle telematics can be further linked with other spatial data to provide context to understand driving behaviors at the detailed level. However, the collection of high-frequency telematics data results in huge volumes of data that must be processed efficiently. And the raw sensor and GPS data must be properly pre-processed and transformed to extract signal relevant to downstream processes. In addition, driving behavior often depends on the spatial context, and the analysis of telematics must be contextualized using spatial and real-time traffic data.
Our talk covers the promises and challenges of telematics data. We present a framework for large-scale telematics data analysis using Apache big data tools (Hadoop, Hive, Spark, Kafka, etc.). We discuss common techniques to load and transform telematics data. We then present how to use machine learning on telematics data to derive insights about driving safety.
Speakers
Yanwei Zhang, Senior Data Scientist II, Uber
Neil Parker, Senior Software Engineer, Uber
Realtime streaming architecture in INFINARIO - Jozo Kovac
About our experience with real-time analyses on a never-ending stream of user events. We discuss the Lambda architecture, Kappa, Apache Kafka, and our own approach.
Improving Traffic Prediction Using Weather Data with Ramya Raghavendra - Spark Summit
As common sense would suggest, weather has a definite impact on traffic. But how much? And under what circumstances? Can we improve traffic (congestion) prediction given weather data? Predictive traffic is envisioned to significantly impact how drivers plan their day by alerting users before they travel, finding the best times to travel, and, over time, learning from new IoT data such as road conditions, incidents, etc. This talk will cover the traffic prediction work conducted jointly by IBM and the traffic data provider. As a part of this work, we conducted a case study over five large metropolitan areas in the US, 2.58 billion traffic records, and 262 million weather records to quantify the boost in accuracy of traffic prediction using weather data. We will provide an overview of our lambda architecture, with Apache Spark being used to build prediction models with weather and traffic data, and Spark Streaming used to score the model and provide real-time traffic predictions. This talk will also cover a suite of extensions to Spark to analyze geospatial and temporal patterns in traffic and weather data, as well as the suite of machine learning algorithms that were used with the Spark framework. Initial results of this work were presented at the National Association of Broadcasters meeting in Las Vegas in April 2017, and there is work to scale the system to provide predictions in over 100 cities. The audience will learn about our experience scaling with Spark in offline and streaming modes, building statistical and deep-learning pipelines with Spark, and techniques for working with geospatial and time-series data.
Big data real time architectures -
How to do big data processing in real time?
What architectures are out there to support this paradigm?
Which one should we choose?
What advantages and pitfalls do they contain?
Virgin Hyperloop One is the leader in realizing a Hyperloop mass transportation system (VHOMTS), which will bring the cities and people closer together than ever before while reducing pollution, emission of greenhouse gases, transit times, etc. To build a safe and user friendly Hyperloop, we need to answer key technical and business questions, including: – ‘What is the safe maximum speed the hyperloop can go?’ – ‘How many pods (the vehicles that carry people) do we need to fulfill a given demand?’
Building Identity Graphs over Heterogeneous Data - Databricks
In today’s world, customers and service providers (e.g., Social networks, ad targeting, retail, etc.) interact in a variety of modes and channels such as browsers, apps, devices, etc. In each such interaction, users are identified using a token (possibly different token for each mode/channel). Examples of such identity tokens include cookies, app IDs etc. As the user engages more with these services, linkages are generated between tokens belonging to the same user; linkages connect multiple identity tokens together.
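The linking described above is essentially connected components over identity tokens, which can be sketched with a union-find structure. The token names and linkages below are purely illustrative, not from any real dataset:

```python
# Minimal sketch: grouping identity tokens into per-user identity graphs
# with union-find. Token names and linkages are hypothetical examples.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

uf = UnionFind()
# Observed linkages: the same user seen under different tokens/channels.
linkages = [("cookie:abc", "appid:123"), ("appid:123", "device:xyz"),
            ("cookie:def", "appid:456")]
for a, b in linkages:
    uf.union(a, b)

# cookie:abc and device:xyz now belong to the same identity graph.
print(uf.find("cookie:abc") == uf.find("device:xyz"))  # True
print(uf.find("cookie:abc") == uf.find("cookie:def"))  # False
```

At scale this same grouping is typically done as a distributed connected-components job rather than a single in-memory structure.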
Real-World Machine Learning - Leverage the Features of MapR Converged Data Pl... - Mathieu Dumoulin
Examine the unique features of the MapR Converged Data Platform and how they can support production-grade enterprise machine learning - Ends with a live demo using H2O - Presented at Hadoop Summit Tokyo 2016
How Spark Enables the Internet of Things: Efficient Integration of Multiple ... - sparktc
IBM researchers in Haifa, together with partners from the COSMOS EU-funded project, are using Spark to analyze the new wave of IoT data and solve problems in a way that is generic, integrated, and practical.
Indexing 3-dimensional trajectories: Apache Spark and Cassandra integration - Cesare Cugnasco
Data visualization can be a tricky problem, even more so if the dataset is made of several billion 3-dimensional particles moving over time. The talk will focus on some simple indexing and data thinning techniques and how (and how not) to implement them with Cassandra and Spark.
Stream Processing is emerging as a popular paradigm for data processing architectures, because it handles the continuous nature of most data and computation and gets rid of artificial boundaries and delays. In this talk, we are going to look at some of the most common misconceptions about stream processing and debunk them.
- Myth 1: Streaming is approximate and exactly-once is not possible.
- Myth 2: Streaming is for real-time only.
- Myth 3: You need to choose between latency and throughput.
- Myth 4: Streaming is harder to learn than Batch Processing.
We will look at these and other myths and debunk them using the example of Apache Flink. We will discuss Apache Flink's approach to high-performance stream processing with state, strong consistency, low latency, and sophisticated handling of time. With such building blocks, Apache Flink can handle classes of problems previously considered out of reach for stream processing. We will also take a sneak peek at the next steps for Flink.
We describe an application of CEP using a microservice-based streaming architecture. We use the Drools business rule engine to apply rules in real time to an event stream from IoT traffic sensor data.
[9Lenses + CSC] – Transforming the Way you Discover Organizational Insights - 9Lenses
Incite Discovery through scaled insight discovery. This is the presentation Edwin Miller, CEO, 9Lenses made along with Jerry Overton, Head of Advanced Analytics Research in CSC’s ResearchNetwork and founder of CSC’s FutureTense competency. This presentation was made on December 2nd, 2014 at the Innovation Enterprise’s Chief Innovation Officer Summit, in New York City.
Edwin advocated for the automation of the middle-management layer of a business so that machine learning can more rapidly connect all the complexities of a business.
Jerry Overton spoke about how Data Science + Human Data allows companies to simulate new business models.
This presentation deals with how one can utilize multiple cores while working with C/C++ applications, using an API called OpenMP. It is a shared-memory programming model built on top of POSIX threads. The fork-join model and parallel design patterns are also discussed using Petri nets.
MALT: Distributed Data-Parallelism for Existing ML Applications (Distributed ... - asimkadav
Machine learning methods, such as SVM and neural networks, often improve their accuracy by using models with more parameters trained on large numbers of examples. Building such models on a single machine is often impractical because of the large amount of computation required.
We introduce MALT, a machine learning library that integrates with existing machine learning software and provides data parallel machine learning. MALT provides abstractions for fine-grained in-memory updates using one-sided RDMA, limiting data movement costs during incremental model updates. MALT allows machine learning developers to specify the dataflow and apply communication and representation optimizations. Through its general-purpose API, MALT can be used to provide data-parallelism to existing ML applications written in C++ and Lua and based on SVM, matrix factorization and neural networks. In our results, we show MALT provides fault tolerance, network efficiency and speedup to these applications.
Alex Smola, Professor in the Machine Learning Department, Carnegie Mellon Uni... - MLconf
Fast, Cheap and Deep – Scaling Machine Learning: Distributed high throughput machine learning is both a challenge and a key enabling technology. Using a Parameter Server template we are able to distribute algorithms efficiently over multiple GPUs and in the cloud. This allows us to design very fast recommender systems, factorization machines, classifiers, and deep networks. This degree of scalability allows us to tackle computationally expensive problems efficiently, yielding excellent results e.g. in visual question answering.
My talk from SICS Data Science Day, describing FlinkML, the Machine Learning library for Apache Flink.
I talk about our approach to large-scale machine learning and how we utilize state-of-the-art algorithms to ensure FlinkML is a truly scalable library.
You can watch a video of the talk here: https://youtu.be/k29qoCm4c_k
Distributed DNN training: Infrastructure, challenges, and lessons learned - Wee Hyong Tok
Deep learning is revolutionizing a wide range of applications across various industries and in organizations of all sizes. Scalable DNN training is critical to the success of large-scale deep learning. The methodologies, tools, and infrastructure in this space are rapidly evolving. Drawing on their experiences building a multitenant, distributed DNN training infrastructure that uses familiar OSS components to execute Docker container-based deep learning workloads from hundreds of AI applications on clusters with thousands of GPUs, Kaarthik Sivashanmugam and Wee Hyong Tok share recommendations to address the common challenges in enabling scalable and efficient distributed DNN training and the lessons learned in building and operating a large-scale training infrastructure. Kaarthik and Wee Hyong introduce the challenges in distributed DNN training and provide an overview of the components that can enable distributed training on bare metal infrastructure, virtual machines, and containers. In addition, they outline practical tips for running deep learning workloads on Kubernetes clusters on Azure and explain how you can leverage deep learning toolkits (e.g., CNTK, TensorFlow) on these clusters to do distributed training.
An Introduction to TensorFlow architecture - Mani Goswami
Introduces you to the internals of TensorFlow and deep dives into distributed version of TensorFlow. Refer to https://github.com/manigoswami/tensorflow-examples for examples.
Netflix's success is credited to pioneering ways in which the company introduced AI and ML into its products, services, and infrastructure. ML is applied to solve a wide range of problems at Netflix.
This presentation introduces the Big Data topic to Software Quality Assurance Engineers. It can also be useful for Software Developers and other software professionals.
We live in an era where the atomic building elements of silicon computers, e.g., transistors and wires, are no longer visible using traditional optical microscopes and their sizes are measured in just tens of angstroms. In addition, power dissipation per unit volume is bounded by the laws of physics, which has resulted, among other things, in stagnating processor clock frequencies. Adding more and more processor cores that perform simpler and simpler tasks, in an attempt to efficiently fill the available on-chip area, seems to be the current trend taken by the industry.
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... - Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by a large submission of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
Techniques to optimize the PageRank algorithm usually fall into two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance; the final ranks of chain nodes can be easily calculated. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in PageRank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
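As a point of reference for these optimizations, here is a minimal sketch of the standard (monolithic) power-iteration PageRank they build on. The tiny graph, damping factor, and tolerance are the usual textbook choices, not values from the report; note the example graph has no dead ends, matching the Levelwise precondition:

```python
# Minimal PageRank power-iteration sketch on a tiny hypothetical graph.
# out[u] lists the vertices u links to; every vertex has out-links,
# so there are no dead ends (dangling nodes).

def pagerank(out, d=0.85, tol=1e-10):
    n = len(out)
    rank = {v: 1.0 / n for v in out}
    while True:
        new = {v: (1.0 - d) / n for v in out}   # teleport term
        for u, links in out.items():
            share = d * rank[u] / len(links)    # u's rank, split evenly
            for v in links:
                new[v] += share
        err = sum(abs(new[v] - rank[v]) for v in out)
        rank = new
        if err < tol:                           # converged
            return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
print(sum(ranks.values()))  # ~1.0; "c" gets the highest rank
```

Levelwise PageRank runs essentially this loop per strongly connected component, in topological order, instead of over the whole graph at once.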
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will come present about related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly the Milvus Meetup, and is sponsored by Zilliz, maintainers of Milvus.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, AI, big data, real-time, robots, and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
The Building Blocks of QuestDB, a Time Series Database - javier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone through over the past two years to deal with late and unordered data, non-blocking writes, read replicas, or faster batch ingestion.
1. Deep Learning at scale
Mateusz Dymczyk
Software Engineer
H2O.ai
Strata+Hadoop Singapore
08.12.2016
2. About me
• M.Sc. in CS @ AGH UST, Poland
• CS Ph.D. dropout
• Software Engineer @ H2O.ai
• Previously ML/NLP and distributed
systems @ Fujitsu Laboratories and
en-japan inc.
3. • Deep learning - brief introduction
• Why scale
• Scaling models + implementations
• Demo
• A peek into the future
Agenda
5. Text classification (item prediction)
Use Cases
Fraud detection (11% accuracy boost in production)
Image classification
Machine translation
Recommendation systems
6. Deep Learning
Deep learning is a branch of machine learning based on a
set of algorithms that attempt to model high level
abstractions in data by using a deep graph with multiple
processing layers, composed of multiple linear and non-linear
transformations.*
*https://en.wikipedia.org/wiki/Deep_learning
8. SGD
• Memory efficient
• Fast
• Not easy to parallelize without speed
degradation
Loop until converged: (1) initialize parameters, (2) get training sample i, (3) update the parameters.
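The loop on this slide can be sketched as follows, here for a toy one-parameter linear model with squared loss (the model and data are illustrative, not from the talk):

```python
import random

# Minimal SGD sketch: initialize parameters, draw a training sample,
# update, repeat. Fits y = 2x with a single weight; values illustrative.

random.seed(0)
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

w = 0.0                           # 1) initialize parameters
lr = 0.1
for step in range(1000):
    x, y = random.choice(data)    # 2) get training sample i
    grad = 2.0 * (w * x - y) * x  # 3) gradient of squared loss on i
    w -= lr * grad                #    update parameters

print(round(w, 3))  # converges to ~2.0
```

Because each update touches only one sample, SGD needs little memory per step, which is exactly the "memory efficient / fast" trade-off on this slide; the sequential dependence between steps is what makes it hard to parallelize without degradation.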
9. Deep Learning
PRO
• Relatively simple concept
• Non-linear
• Versatile and flexible
• Features can be extracted
• Great with big data
• Very promising results in
multiple fields
CON
• Hard to interpret
• Not well understood theory
• Lots of architectures
• A lot of hyper-parameters
• Slow training, data hungry
• CPU/GPU hungry
• Overfits
10. Why scale?
PRO
• Relatively simple concept
• Non-linear
• Versatile and flexible
• Features can be extracted
• Great with big data
• Very promising results in
multiple fields
CON
• Hard to interpret
• Not well understood theory
• Lots of architectures
• A lot of hyper-parameters
• Slow training, data hungry
• CPU/GPU hungry
• Overfits
Grid search?
11. Why not to scale?
• Distribution isn’t free: overhead due to network traffic,
synchronization etc.
• Small neural network (not many layers and/or neurons)
• small+shallow network = not much computation per iteration
• Small data
• Very slow network communication
13. Distribution models
• Model parallelism
• Data parallelism
• Mixed/composed
• Parameter server vs *peer-to-peer
• Communication: Asynchronous vs Synchronous
*http://nikkostrom.com/publications/interspeech2015/strom_interspeech2015.pdf
14. Model parallelism
Node 1 Node 2 Node 3
• Each node computes a
different part of network
• Potential to scale to large
models
• Rather difficult to implement
and reason about
• Originally designed for large
convolutional layer in
GoogleNet
Parameter
Server
Parameters or deltas
15. Data parallelism
Node 1
Parameter
Server
Node 3
Node 2
Node 4
• Each node computes all the
parameters
• Using part (or all) of local data
• Results have to be combined
Parameters or deltas
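A minimal sketch of the data-parallel scheme on this slide, with simple gradient averaging standing in for the parameter-server exchange (shard contents, learning rate, and step count are illustrative):

```python
# Each "node" computes a gradient on its own shard of data; the
# parameter server averages the gradients and updates shared weights.
# Toy one-weight linear model fitting y = 2x.

def node_gradient(w, shard):
    # squared-loss gradient for y = w * x on this node's local data
    g = 0.0
    for x, y in shard:
        g += 2.0 * (w * x - y) * x
    return g / len(shard)

shards = [[(1.0, 2.0), (2.0, 4.0)],   # node 1's local data
          [(0.5, 1.0), (1.5, 3.0)]]   # node 2's local data

w, lr = 0.0, 0.1
for step in range(200):
    grads = [node_gradient(w, s) for s in shards]  # computed in parallel
    w -= lr * sum(grads) / len(grads)              # server combines them

print(round(w, 3))  # converges to ~2.0
```

The combine step is where the "results have to be combined" bullet bites: it is a synchronization point, and its frequency trades communication cost against staleness.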
16. Mixed/composed
• Node distribution (model or data)
• In-node, per gpu/cpu/thread
concurrent/parallel computation
• Example: learn whole model on each
multi CPU/GPU machine, where each
CPU/GPU trains a different layer or
works with different part of data
Node
T T T
Node
CPU/GPU CPU/GPU
CPU/GPU CPU/GPU
17. Sync vs. Async
Sync
• Slower
• Reproducible
• Might overfit
• In some cases more
accurate and faster*
At some point parameters need to be collected within the
nodes and between them.
Async
• Race conditions possible
• Not reproducible
• Faster
• Might help with overfitting
(you make mistakes)
*https://arxiv.org/abs/1604.00981
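The sync/async contrast above can be simulated in a few lines. This toy "server" either waits for all workers and averages their gradients (sync) or applies each worker's update immediately (async); real async systems additionally interleave reads and writes nondeterministically, which this sequential sketch deliberately avoids for reproducibility:

```python
# Toy parameter-server simulation contrasting the two update modes.
# Gradients minimize (w - 2)^2, i.e. grad(w) = 2 * (w - 2).

def grad(w):
    return 2.0 * (w - 2.0)

def train_sync(workers=4, steps=50, lr=0.1):
    w = 0.0
    for _ in range(steps):
        gs = [grad(w) for _ in range(workers)]  # all read the same w
        w -= lr * sum(gs) / len(gs)             # barrier, then average
    return w

def train_async(workers=4, steps=50, lr=0.1):
    w = 0.0
    for _ in range(steps):
        for _ in range(workers):
            # no barrier: each worker's update is applied immediately,
            # so later workers see the already-updated parameters
            w -= lr * grad(w)
    return w

print(round(train_sync(), 4), round(train_async(), 4))
```

Both modes converge here; the slide's points about races, reproducibility, and speed come from the fact that in a real system the async reads can be arbitrarily stale and arbitrarily ordered.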
18. H2O’s architecture
H2O in-memory
Non-blocking
hash map
(resides on all nodes)
Initial model
(weights & biases)
Node
Computation
(threads, async)
Node communication
MAP
each node trains a copy
of the whole network with
its local data (part or all) using
async F/J framework
19. H2O Frame
Node 1 data
Node 2-N data
Thread 1 data
* Each row is fully stored on the same node
* Each chunk contains thousands/millions of rows
* All the data is compressed (sometimes 2-4x) in a
lossless fashion
20. Inside a node
• Each thread works on a
part of data
• All threads update
weights/biases
concurrently
• Race conditions possible
(hard to reproduce,
good for overfitting)
21. H2O’s architecture
Updated model
(weights & biases)
* Communication frequency is auto-tuned
and user-controllable (affects
convergence)
H2O in-memory
Non-blocking
hash map
(resides on all nodes)
Initial model
(weights & biases)
Node
Computation
(threads, async)
Node communication
MAP
each node trains a copy
of the whole network with
its local data (part or all) using
async F/J framework
REDUCE
Model averaging:
Average weights
and biases from
all the nodes
Here: averaging
New: Elastic averaging
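The MAP/REDUCE pattern on this slide can be sketched as follows: each node trains a full copy of the model on its local data, then the reduce step averages the weights and biases across nodes. This is a toy one-weight model with made-up per-node data, not H2O's actual implementation:

```python
# Sketch of model averaging: each node trains a copy of the model on
# local data (MAP), then parameters are averaged across nodes (REDUCE).

def train_local(w, shard, lr=0.1, epochs=50):
    # plain SGD on this node's local data (toy model: y = w * x)
    for _ in range(epochs):
        for x, y in shard:
            w -= lr * 2.0 * (w * x - y) * x
    return w

shards = [[(1.0, 2.1)], [(1.0, 1.9)], [(1.0, 2.0)]]  # per-node data

w = 0.0                      # initial model, broadcast to all nodes
for comm_round in range(3):  # communication rounds
    local_ws = [train_local(w, s) for s in shards]   # MAP
    w = sum(local_ws) / len(local_ws)                # REDUCE: average

print(round(w, 2))  # ~2.0
```

How often the REDUCE happens is the communication frequency the slide mentions: averaging rarely saves network traffic but lets node models drift apart, affecting convergence.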
24. Airline Delays
• Data:
• airline data
• 116mln rows (6GB)
• 800+ predictors (numeric & categorical)
• Problem: predict if a flight is delayed
• Hardware: 10* Dual E5-2650 (32 cores, 2.6GHz), ~11Gb
• Platform: H2O
25. GPUs and other networks
• What if you want to use GPUs?
• What if you want to train arbitrary networks?
• What if you want to compare different frameworks?
29. Other frameworks
• Data parallelism (default)
• Model parallelism also available
• for example for multi-layer LSTM
• Supports both sync and async
communication
• Can perform updates in the GPU or CPU
*http://mxnet.io/how_to/model_parallel_lstm.html
• Data and model parallelism
• Both sync and async updates
supported
*https://ischlag.github.io/2016/06/12/async-distributed-tensorflow/
mxnet TensorFlow
30. Summary
Multi Node
• data/model doesn’t fit on one node
• computation too long
Single Node
• small NN/small data
Multi cpu/gpu
• data fits on single node but too
much for single processing unit
Model parallelism
• network parameters don’t fit on a
single machine
• faster computation
Data parallelism
• data doesn’t fit on a single node
• faster computation
Async
• faster convergence
• ok with potentially lower accuracy
Sync
• best accuracy
• lots of workers or ok with slower
training