Converged and Containerized Distributed Deep Learning With TensorFlow and Kub... (Mathieu Dumoulin)
Docker containers running on Kubernetes, combined with the MapR Converged Data Platform, allow any company to enjoy the same sophisticated data infrastructure, enabling teams to engage in transformative machine learning and deep learning in production at scale.
Talk presented by Pedro Mário Cruz e Silva, Solution Architect at NVIDIA, as part of the program of the VIII Geophysics Winter Week, on 19/07/2017.
State of the Art Robot Predictive Maintenance with Real-time Sensor Data (Mathieu Dumoulin)
Our Strata Beijing 2017 presentation slides, showing how to use data from a movement sensor, in real time, to do anomaly detection at scale using standard enterprise big data software.
BigDL: Bringing Ease of Use of Deep Learning for Apache Spark with Jason Dai ... (Databricks)
BigDL is a distributed deep learning framework for Apache Spark, open sourced by Intel. BigDL helps make deep learning more accessible to the Big Data community by allowing them to continue using familiar tools and infrastructure to build deep learning applications. With BigDL, users can write their deep learning applications as standard Spark programs, which can then run directly on top of existing Spark or Hadoop clusters.
In this session, we will introduce BigDL, show how our customers use BigDL to build end-to-end ML/DL applications and the platforms on which BigDL is deployed, provide an update on the latest improvements in BigDL v0.1, and discuss further developments and upcoming features of the BigDL v0.2 release (e.g., support for TensorFlow models, 3D convolutions, etc.).
MapR is an ideal scalable platform for data science, and specifically for operationalizing machine learning in the enterprise. This presentation gives specific reasons why.
BigDL: A Distributed Deep Learning Library on Spark: Spark Summit East talk b... (Spark Summit)
BigDL is a distributed deep learning framework built for Big Data platforms using Apache Spark. It combines the benefits of "high performance computing" and "Big Data" architectures, providing native support for deep learning functionality in Spark, an orders-of-magnitude speedup over out-of-the-box open source DL frameworks (e.g., Caffe/Torch) with respect to single-node performance (by leveraging Intel MKL), and the scale-out of deep learning workloads based on the Spark architecture. We'll also share how our users adopt BigDL for their deep learning applications (such as image recognition, object detection, NLP, etc.), which allows them to use their Big Data (e.g., Apache Hadoop and Spark) platform as the unified data analytics platform for data storage, data processing and mining, feature engineering, traditional (non-deep) machine learning, and deep learning workloads.
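The scale-out model described above is data-parallel: each Spark partition computes a gradient on its shard of the mini-batch, the gradients are averaged in a reduce step, and every replica applies the same update. A minimal pure-Python sketch of that pattern (the tiny linear model and the round-robin partitioning are illustrative assumptions, not BigDL's API):

```python
# Minimal sketch of data-parallel training, the pattern BigDL runs on Spark:
# each "partition" computes a gradient on its shard of the mini-batch, the
# gradients are averaged (a reduce step), and every replica applies the
# same synchronized update.

def local_gradient(w, shard):
    """Gradient of mean squared error for the model y = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, partitions, lr=0.01):
    grads = [local_gradient(w, p) for p in partitions]  # map over partitions
    avg_grad = sum(grads) / len(grads)                  # reduce: average
    return w - lr * avg_grad                            # broadcast new weight

# Data follows y = 3x, split round-robin across 4 equal partitions.
data = [(float(x), 3.0 * x) for x in range(1, 9)]
partitions = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = train_step(w, partitions)
print(round(w, 3))  # converges toward 3.0
```

The single scalar weight stands in for the millions of parameters a real model would synchronize each step; it is that synchronization step which the frameworks below optimize.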
Scaling out Driverless AI with IBM Spectrum Conductor - Kevin Doyle - H2O AI ... (Sri Ambati)
This talk was recorded in London on Oct 30, 2018 and can be viewed here: https://youtu.be/lk2NXurrwAA
This talk highlights the integration of Driverless AI with IBM Spectrum Conductor. The integration demonstrates how you can deploy, manage, and scale out multiple Driverless AI instances per user within your cluster, to help maximize the efficiency and security of the cluster. The integration includes failover for Driverless AI instances, so that users can continue to work without needing to find another host on which to start Driverless AI. In addition, the integration of H2O Sparkling Water with IBM Spectrum Conductor as a notebook is highlighted, as well as the benefits of running H2O Sparkling Water within the cluster to maximize cluster utilization across different workloads. For both Driverless AI and H2O Sparkling Water, a demo is provided and future plans for the integrations are highlighted.
Bio: Kevin Doyle is the lead architect of IBM Spectrum Conductor at IBM, where he works with customers to deploy and manage all workloads, especially Spark and deep learning workloads, on on-premises clusters. Kevin has been working on distributed computing, grid, cloud, and big data for the past five years, with a focus on the management and lifecycle of workloads.
Deep Learning to Big Data Analytics on Apache Spark Using BigDL with Xianyan ... (Databricks)
With the continued success of deep learning techniques, there’s been a rapid growth in applications for perception in many modalities, such as image classification, object detection and speech recognition. In response, Intel’s BigDL is an open source distributed deep learning framework for Apache Spark that includes rich deep learning support and Intel Math Kernel Library acceleration, allowing users to quickly develop deep learning applications with extremely high performance on their existing Hadoop ecosystems.
This session will explore several key deep learning applications that Intel successfully built on top of Apache Spark with BigDL. Hear about the technologies they developed and what they learned from building such applications, including: the tool stack in the system and design considerations; an application on image recognition and object detection (Faster R-CNN using VGG and PVANet); and an application on speech recognition with Deep Speech and acoustic feature transformers. The speakers will also share other insights and experiences Intel gained while building a unified data analytics platform with Apache Spark MLlib and BigDL.
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
The Future of Computing is Distributed
Professor Ion Stoica, UC Berkeley RISELab
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
Accelerating Deep Learning Training with BigDL and Drizzle on Apache Spark wi... (Databricks)
The BigDL framework scales deep learning for large data sets using Apache Spark. However, there is significant scheduling overhead from Spark when running BigDL at large scale. In this talk we propose a new parameter manager implementation that, along with coarse-grained scheduling, can provide significant speedups for deep learning models like Inception, VGG, etc. Aggregation functions like reduce or treeReduce that are used for parameter aggregation in Apache Spark (and the original MapReduce) are slow, as centralized scheduling and driver network bandwidth become a bottleneck, especially in large clusters.
To reduce the overhead of parameter aggregation and allow for near-linear scaling, we introduce a new AllReduce operation, a part of the parameter manager in BigDL which is built directly on top of the BlockManager in Apache Spark. AllReduce in BigDL uses a peer-to-peer mechanism to synchronize and aggregate parameters. During parameter synchronization and aggregation, all nodes in the cluster play the same role and the driver's overhead is eliminated, enabling near-linear scaling. To address the scheduling overhead we use Drizzle, a recently proposed scheduling framework for Apache Spark. Currently, Spark uses a BSP computation model and notifies the scheduler at the end of each task. Invoking the scheduler at the end of each task adds overhead and results in decreased throughput and increased latency.
Drizzle introduces group scheduling, where a group of multiple iterations is scheduled at once. This helps decouple the granularity of task execution from scheduling and amortizes the costs of task serialization and launch. Finally, we will present results from using the new AllReduce operation and Drizzle on a number of common deep learning models, including VGG and Inception. Our benchmarks, run on Amazon EC2 and Google Dataproc, will show the speedups and scalability of our implementation.
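The peer-to-peer aggregation described above can be simulated without Spark: split the parameter vector into one shard per node, let node i sum shard i across all peers, then have every node read back the concatenated result, so no single driver handles the full vector. A toy sketch (the node/shard layout is an illustrative simulation, not BigDL's BlockManager code):

```python
# Simulated peer-to-peer AllReduce over N nodes: parameters are split into
# N shards; node i is responsible for reducing shard i from every peer,
# then all nodes read back the reduced shards. No driver bottleneck.

def all_reduce(node_params):
    n = len(node_params)                      # number of nodes
    dim = len(node_params[0])                 # parameter vector length
    shard = (dim + n - 1) // n                # shard size per node
    reduced = []
    for i in range(n):                        # node i reduces its own shard
        lo, hi = i * shard, min((i + 1) * shard, dim)
        reduced.extend(
            sum(p[j] for p in node_params) for j in range(lo, hi)
        )
    # every node now fetches the concatenated reduced vector
    return [list(reduced) for _ in range(n)]

params = [[1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]]
out = all_reduce(params)
print(out[0])  # [11.0, 22.0, 33.0, 44.0] on every node
```

The point of the shard-per-node layout is that each node's inbound traffic is only 1/N of the parameter vector during the reduce phase, rather than the driver receiving all N full copies.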
CI/CD for Machine Learning with Daniel Kobran (Databricks)
What we call the public cloud was developed primarily to manage and deploy web servers. The target audience for these products is DevOps. While this is a massive and exciting market, the world of data science and deep learning is very different, and possibly even bigger. Unfortunately, the tools available today are not designed for this new audience, and the cloud needs to evolve. This talk covers what the next 10 years of cloud computing will look like.
Accelerating Inference in the Data Center with Malini Bhandaru and Karol Zale... (Databricks)
To realize the vision of autonomous vehicles, the research and development platform will need to handle multiple petabytes of drive data daily. The sheer volume of data mandates automation to home in on regions of significance, such as lane changes, application of brakes, arrival at intersections, snow/rain days, glare, and more. These richer segments are then analyzed more carefully, and may even be used to refine machine learning models or test actuation plans.
Within images, for instance, to meet government privacy regulations, we may need to detect faces and license plates and blur them to preserve privacy, possibly before the data can even be stored, and definitely before exchanging it with partner research facilities. Intel's Movidius Neural Compute Stick (NCS) (https://developer.movidius.com/), a deep neural network inference accelerator, would be ideal to help cope with these inference tasks.
In this talk we demonstrate a proof-of-concept enhancement to Spark on Kubernetes that selectively launches inference workloads on machines with hardware acceleration capability, such as the NCS accelerator or FPGAs, to gain a performance speedup. We contrast the speedup obtained when the data is local versus remote, and illustrate how to provide scheduler hints during job submission to trade off inference speed against data movement, given that both are influenced by the actual inference task at hand and the sensor data type.
We illustrate how to launch workloads in the data center with heterogeneous hardware, including how to push custom graphs onto the accelerator. While our domain of interest is autonomous vehicles, the techniques illustrated are more widely applicable, such as detecting network anomalies, new events in Twitter feeds, and more.
Attendees will learn about:
– Spark on Kubernetes
– Intel Movidius Neural Compute Stick and its use
– FPGAs
– Leveraging hardware acceleration for improved performance on inference workloads
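The inference-speed-versus-data-movement trade-off behind those scheduler hints can be captured in a toy cost model: shipping data to an accelerator node pays off only when the per-frame speedup amortizes the transfer time. A sketch with made-up numbers (none of these figures are measured NCS/FPGA results):

```python
# Toy cost model for the placement hint described above: run inference
# locally on CPU, or ship the data to a node with an accelerator.
# All numbers are illustrative assumptions.

def placement(data_mb, cpu_ms_per_frame, accel_ms_per_frame,
              link_mb_per_s, frames):
    local = frames * cpu_ms_per_frame                   # ms, all on CPU
    transfer = data_mb / link_mb_per_s * 1000.0         # ms to move the data
    remote = transfer + frames * accel_ms_per_frame     # ms, ship + accelerate
    return "accelerator" if remote < local else "local-cpu"

# Small batch: transfer cost dominates, stay local.
print(placement(data_mb=50, cpu_ms_per_frame=40, accel_ms_per_frame=5,
                link_mb_per_s=100, frames=10))      # local-cpu
# Large batch: the accelerator wins despite the data movement.
print(placement(data_mb=50, cpu_ms_per_frame=40, accel_ms_per_frame=5,
                link_mb_per_s=100, frames=1000))    # accelerator
```

A real scheduler hint would express the same decision declaratively (e.g., a node label for accelerator capability), but the break-even arithmetic is the same.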
Tackle more data science challenges than ever before without the need for discrete acceleration with the 3rd Gen Intel® Xeon® Scalable processors. Learn about the built-in AI acceleration and performance optimizations for popular AI libraries, tools and models.
Performance Analysis of Apache Spark and Presto in Cloud Environments (Databricks)
Today, users have multiple options for big data analytics in terms of open-source and proprietary systems as well as in cloud computing service providers. In order to obtain the best value for their money in a SaaS cloud environment, users need to be aware of the performance of each service as well as its associated costs, while also taking into account aspects such as usability in conjunction with monitoring, interoperability, and administration capabilities.
We present an independent analysis of two mature and well-known data analytics systems, Apache Spark and Presto. Both run on the Amazon EMR platform, but in the case of Apache Spark we also analyze the Databricks Unified Analytics Platform and its associated runtime and optimization capabilities. Our analysis is based on running the TPC-DS benchmark and thus focuses on SQL performance, which is still indispensable for data scientists and engineers. In our talk we will present quantitative results that we expect to be valuable for end users, accompanied by an in-depth look at the advantages and disadvantages of each alternative.
Thus, attendees will be better informed about the current big data analytics landscape and find themselves in a better position to avoid common pitfalls in deploying data analytics at scale.
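The kind of price-performance comparison such an analysis produces reduces to a small calculation: total benchmark runtime times the hourly cluster price. A sketch with placeholder numbers (the runtimes and prices below are invented, not results from the talk):

```python
# Sketch of the price-performance figure the talk describes: given a
# TPC-DS-style total runtime and an hourly per-node price, compute the
# dollar cost of one full benchmark run. All numbers are placeholders.

def cost_per_run(runtime_s, usd_per_node_hour, nodes):
    hours = runtime_s / 3600.0
    return hours * usd_per_node_hour * nodes

systems = {
    "spark-emr":  {"runtime_s": 5400, "usd_per_node_hour": 0.40, "nodes": 8},
    "presto-emr": {"runtime_s": 4800, "usd_per_node_hour": 0.40, "nodes": 8},
}
for name, cfg in systems.items():
    print(name, round(cost_per_run(**cfg), 2))  # USD per benchmark run
```

Comparing systems on cost per run rather than raw runtime is what lets a slower-but-cheaper configuration win, which is the pitfall the talk warns about.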
1) NVIDIA-Iguazio Accelerated Solutions for Deep Learning and Machine Learning (30 mins):
About the speaker:
Dr. Gabriel Noaje, Senior Solutions Architect, NVIDIA
http://bit.ly/GabrielNoaje
2) GPUs in Data Science Pipelines (30 mins)
- GPU as a Service for enterprise AI
- A short demo on the usage of GPUs for model training and model inferencing within a data science workflow
About the speaker:
Anant Gandhi, Solutions Engineer, Iguazio Singapore. https://www.linkedin.com/in/anant-gandhi-b5447614/
Deep Learning on Apache Spark at CERN’s Large Hadron Collider with Intel Tech... (Databricks)
In this session, you will learn how CERN easily applied end-to-end deep learning and analytics pipelines on Apache Spark at scale for High Energy Physics using BigDL and Analytics Zoo open source software running on Intel Xeon-based distributed clusters.
Technical details and development learnings will be shared using an example of topology classification to improve real-time event selection at the Large Hadron Collider experiments. The classifier has demonstrated very good performance figures for efficiency, while also reducing the false positive rate compared to the existing methods. It could be used as a filter to improve the online event selection infrastructure of the LHC experiments, where one could benefit from a more flexible and inclusive selection strategy while reducing the amount of downstream resources wasted in processing false positives.
This is part of CERN’s research on applying deep learning and analytics using open source and industry-standard technologies as an alternative to the existing customized rule-based methods. We show how we could quickly build and implement distributed deep learning solutions and data pipelines at scale on Apache Spark using Analytics Zoo and BigDL, open source frameworks that unify analytics and AI on Spark with easy-to-use APIs and development interfaces seamlessly integrated with Big Data platforms.
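The two figures of merit quoted for the classifier, efficiency and false positive rate, are straightforward to compute for any score-threshold event filter. A toy sketch (the scores and labels are made-up data, not LHC events):

```python
# Sketch of the two figures of merit for an event-selection filter:
# efficiency = fraction of interesting events kept, and false positive
# rate = fraction of background events wrongly kept. Toy data below.

def filter_metrics(scores, labels, threshold):
    kept = [lab for s, lab in zip(scores, labels) if s >= threshold]
    signal = sum(labels)                          # total interesting events
    background = len(labels) - signal             # total background events
    efficiency = sum(kept) / signal               # signal that survived
    fpr = (len(kept) - sum(kept)) / background    # background that survived
    return efficiency, fpr

scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.2, 0.1, 0.3]
labels = [1,   1,   1,   1,   0,   0,   0,   0  ]   # 1 = interesting event
eff, fpr = filter_metrics(scores, labels, threshold=0.5)
print(eff, fpr)  # 0.75 0.25
```

Lowering the false positive rate at fixed efficiency is exactly what reduces the downstream resources wasted on processing background, as the abstract notes.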
Axel Koehler from Nvidia presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
“Accelerated computing is transforming the data center, delivering unprecedented throughput and enabling new discoveries and services for end users. This talk will give an overview of the NVIDIA Tesla accelerated computing platform, including the latest developments in hardware and software. In addition, it will show how deep learning on GPUs is changing how we use computers to understand data.”
In related news, the GPU Technology Conference takes place April 4-7 in Silicon Valley.
Watch the video presentation: http://insidehpc.com/2016/03/tesla-accelerated-computing/
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
In this deck from FOSDEM'19, Christoph Angerer from NVIDIA presents: Rapids - Data Science on GPUs.
"The next big step in data science will combine the ease of use of common Python APIs, but with the power and scalability of GPU compute. The RAPIDS project is the first step in giving data scientists the ability to use familiar APIs and abstractions while taking advantage of the same technology that enables dramatic increases in speed in deep learning. This session highlights the progress that has been made on RAPIDS, discusses how you can get up and running doing data science on the GPU, and provides some use cases involving graph analytics as motivation.
GPUs and GPU platforms have been responsible for the dramatic advancement of deep learning and other neural net methods in the past several years. At the same time, traditional machine learning workloads, which comprise the majority of business use cases, continue to be written in Python with heavy reliance on a combination of single-threaded tools (e.g., Pandas and Scikit-Learn) or large, multi-CPU distributed solutions (e.g., Spark and PySpark). RAPIDS, developed by a consortium of companies and available as open source code, allows for moving the vast majority of machine learning workloads from a CPU environment to GPUs. This allows for a substantial speedup, particularly on large data sets, and affords rapid, interactive work that previously was cumbersome to code or very slow to execute.
Many data science problems can be approached using a graph/network view, and, much like traditional machine learning workloads, this has been either local (e.g., Gephi, Cytoscape, NetworkX) or distributed on CPU platforms (e.g., GraphX). We will present GPU-accelerated graph capabilities that, with minimal conceptual code changes, allow both graph representations and graph-based analytics to achieve similar speedups on a GPU platform. By keeping all of these tasks on the GPU and minimizing redundant I/O, data scientists are enabled to model their data quickly and frequently, affording a higher degree of experimentation and more effective model generation. Further, keeping all of this in compatible formats allows quick movement from feature extraction, graph representation, graph analytics, and enrichment back to the original data, and visualization of results.
RAPIDS has a mission to build a platform that allows data scientists to explore data, train machine learning algorithms, and build applications while primarily staying on the GPU and GPU platforms."
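As a concrete example of the kind of graph analytic those GPU capabilities accelerate, here is a pure-Python CPU reference for PageRank by power iteration; on a GPU platform the inner loop becomes a handful of sparse-matrix operations (this is an illustrative reference implementation, not RAPIDS code):

```python
# CPU reference for a simple graph analytic of the kind GPU graph libraries
# accelerate: PageRank by power iteration over an edge list. Assumes every
# node has at least one outgoing edge (no dangling-node handling).

def pagerank(edges, n, damping=0.85, iters=50):
    out_deg = [0] * n
    for src, _ in edges:
        out_deg[src] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - damping) / n] * n
        for src, dst in edges:                 # scatter rank along edges
            nxt[dst] += damping * rank[src] / out_deg[src]
        rank = nxt
    return rank

edges = [(0, 1), (1, 2), (2, 0), (2, 1)]       # tiny directed graph
ranks = pagerank(edges, n=3)
print([round(r, 3) for r in ranks])            # node 1 ranks highest
```

The edge-list loop is embarrassingly parallel, which is why moving it to a GPU with compatible in-memory formats (rather than round-tripping through files) gives the speedups described above.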
Learn more: https://rapids.ai/
and
https://fosdem.org/2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Learn about what technologies enable a new, modern Stream-based architecture to connect everything within application modules or across data centers and public clouds. Combine Kafka-style streaming and stream processing frameworks like Spark and Flink with Microservices and completely rethink your big data architecture away from state and into data flows.
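The stream-first idea can be miniaturized: modules communicate through an append-only log of events instead of shared state, and each processing stage consumes the stream and emits a transformed one. A sketch where an in-memory deque stands in for a Kafka-style topic (an assumption for illustration, not a real broker client):

```python
# Minimal sketch of a stream-based architecture: producers append events to
# an append-only log, and processing stages consume the log and emit
# transformed events, instead of modules sharing mutable state.

from collections import deque

topic = deque()                       # stand-in for a Kafka-style topic

def produce(event):
    topic.append(event)               # append-only: events are never edited

def process(stream, transform):
    """A stream-processing stage: consume events, emit transformed ones."""
    return [transform(e) for e in stream]

for t in range(5):
    produce({"sensor": "s1", "temp": 20 + t})

alerts = process(topic, lambda e: {**e, "alert": e["temp"] > 22})
print(sum(a["alert"] for a in alerts))  # 2 events exceed the threshold
```

In a real deployment the `process` stage would be a Spark or Flink job subscribed to the topic, but the data-flow shape, immutable events in, derived events out, is the same.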
GPU-Accelerating UDFs in PySpark with Numba and PyGDF (Keith Kraus)
With advances in computer hardware such as 10-gigabit network cards, InfiniBand, and solid state drives all becoming commodity offerings, the new bottleneck in big data technologies is very commonly the processing power of the CPU. To meet the computational demand desired by users, enterprises have had to resort to extreme scale-out approaches just to get the processing power they need. One of the most well-known technologies in this space, Apache Spark, has numerous enterprises publicly talking about the challenges of running multiple 1000+ node clusters to give their users the processing power they need. This talk is based on work completed by NVIDIA’s Applied Solutions Engineering team. Attendees will learn how they were able to GPU-accelerate UDFs in PySpark using open source technologies such as Numba and PyGDF, the lessons they learned in the process, and how they were able to accelerate workloads with a fraction of the hardware footprint.
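The key to GPU-accelerating UDFs is batching: a row-at-a-time UDF crosses the Python boundary once per row, while a vectorized UDF receives a whole column at once and can then be compiled by Numba or dispatched to the GPU. A pure-Python sketch of the two shapes (no Spark, Numba, or PyGDF required; both functions are stand-ins for illustration):

```python
# Sketch of why GPU-friendly UDFs are batched: a row-at-a-time UDF incurs
# Python call overhead once per row, while a vectorized UDF receives a
# whole column and does one pass, the shape that JIT compilers and GPUs
# can actually accelerate.

def row_udf(x):                       # called once per row (slow path)
    return x * 2 + 1

def vectorized_udf(column):           # called once per batch (fast path)
    return [x * 2 + 1 for x in column]

column = list(range(5))
assert [row_udf(x) for x in column] == vectorized_udf(column)
print(vectorized_udf(column))  # [1, 3, 5, 7, 9]
```

The two paths compute identical results; the difference is that the batched form gives the runtime one large, regular unit of work to compile or offload, which is what recovers the lost CPU headroom described above.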
Deep Learning to Big Data Analytics on Apache Spark Using BigDL with Xianyan ...Databricks
With the continued success of deep learning techniques, there’s been a rapid growth in applications for perception in many modalities, such as image classification, object detection and speech recognition. In response, Intel’s BigDL is an open source distributed deep learning framework for Apache Spark that includes rich deep learning support and Intel Math Kernel Library acceleration, allowing users to quickly develop deep learning applications with extremely high performance on their existing Hadoop ecosystems.
This sessions will explore several key deep learning applications that Intel successfully built on top of Apache Spark with BigDL. Hear about the technologies they developed and what they learned from building such applications, including: the tool stack in the system and design considerations; an application on image recognition and object detection (faster-rcnn using VGG and PVANET); and an application on speech recognition with deep speech and acoustic feature transformers. He’ll also share other insights and experiences Intel gained while building a unified data analytics platform with Apache Spark MLlib and BigDL.
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
The Future of Computing is Distributed
Professor Ion Stoica, UC Berkeley RISELab
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
Accelerating Deep Learning Training with BigDL and Drizzle on Apache Spark wi...Databricks
The BigDL framework scales deep learning for large data sets using Apache Spark. However there is significant scheduling overhead from Spark when running BigDL at large scale. In this talk we propose a new parameter manager implementation that along with coarse-grained scheduling can provide significant speedups for deep learning models like Inception, VGG etc. Aggregation functions like reduce or treeReduce that are used for parameter aggregation in Apache Spark (and the original MapReduce) are slow as the centralized scheduling and driver network bandwidth become a bottleneck especially in large clusters.
To reduce the overhead of parameter aggregation and allow for near-linear scaling, we introduce a new AllReduce operation, a part of the parameter manager in BigDL which is built directly on top of the BlockManager in Apache Spark. AllReduce in BigDL uses a peer-to-peer mechanism to synchronize and aggregate parameters. During parameter synchronization and aggregation, all nodes in the cluster play the same role and driver’s overhead is eliminated thus enabling near-linear scaling. To address the scheduling overhead we use Drizzle, a recently proposed scheduling framework for Apache Spark. Currently, Spark uses a BSP computation model, and notifies the scheduler at the end of each task. Invoking the scheduler at the end of each task adds overheads and results in decreased throughput and increased latency.
Drizzle introduces group scheduling, where multiple iterations (or a group) of iterations are scheduled at once. This helps decouple the granularity of task execution from scheduling and amortizes the costs of task serialization and launch. Finally we will present results from using the new AllReduce operation and Drizzle on a number of common deep learning models including VGG and Inception. Our benchmarks run on Amazon EC2 and Google DataProc will show the speedups and scalability of our implementation.
CI/CD for Machine Learning with Daniel KobranDatabricks
What we call the public cloud was developed primarily to manage and deploy web servers. The target audience for these products is Dev Ops. While this is a massive and exciting market, the world of Data Science and Deep Learning is very different — and possibly even bigger. Unfortunately, the tools available today are not designed for this new audience and the cloud needs to evolve. This talk would cover what the next 10 years of cloud computing will look like.
Accelerating Inference in the Data Center with Malini Bhandaru and Karol Zale...Databricks
To realize the vision of Autonomous Vehicles, the research and development platform will need to handle multiple peta bytes of drive data daily. The sheer volume of data mandates automation to hone in on regions of significance, such as lane changes, application of breaks, arrival at intersections, snow/rain days, glare, and more. These richer segments are then analyzed more carefully, probably even used to refine machine learning models or test actuation plans.
Within images, for instance, to meet government privacy regulations, we may need to detect faces and license plates and blur them to preserve privacy, possibly before the data can even be stored, and definitely before exchanging with partner research facilities. Intel’s Movidius Neural Compute Stick (NCS) (https://developer.movidius.com/) a Deep Neural Network interference accelerator would be ideal to help cope with the data inferencing tasks.
In this talk we demonstrate a proof-of-concept enhancement to Spark on Kubernetes to selectively launch inference workloads on machines with hardware acceleration capability such as the NCS hardware accelerator or FPGAs to gain performance speedup. We contrast the speedup obtained when the data is local versus remote and illustrate how to provide scheduler hints during job submission to trade-off between inference speed versus data movement, given that these are influenced by the actual inference task at hand the sensor data type.
We illustrate how to launch workloads in the data center with heterogenous hardware, including how to push custom graphs onto the accelerator. While our domain of interest is Autonomous Vehicles, the techniques illustrated are more widely applicable, such as detecting network anomalies, new events in twitter feeds, and more.
Attendees will learn about:
– Spark on Kubernetes
– Intel Movidius Neural Compute Stick and its use
– FPGAs
– Leveraging hardware acceleration for improved performance on inference workloads
Tackle more data science challenges than ever before without the need for discrete acceleration with the 3rd Gen Intel® Xeon® Scalable processors. Learn about the built-in AI acceleration and performance optimizations for popular AI libraries, tools and models.
Performance Analysis of Apache Spark and Presto in Cloud EnvironmentsDatabricks
Today, users have multiple options for big data analytics in terms of open-source and proprietary systems as well as in cloud computing service providers. In order to obtain the best value for their money in a SaaS cloud environment, users need to be aware of the performance of each service as well as its associated costs, while also taking into account aspects such as usability in conjunction with monitoring, interoperability, and administration capabilities.
We present an independent analysis of two mature and well-known data analytics systems, Apache Spark and Presto. Both running on the Amazon EMR platform, but in the case of Apache Spark, we also analyze the Databricks Unified Analytics Platform and its associated runtime and optimization capabilities. Our analysis is based on running the TPC-DS benchmark and thus focuses on SQL performance, which still is indispensable for data scientists and engineers. In our talk we will present quantitative results that we expect to be valuable for end users, accompanied by an in depth look into the advantages and disadvantages of each alternative.
Thus, attendees will be better informed of the current big data analytics landscape and find themselves in a better position to avoid common pitfalls in deploying data analytics at a scale.
1) NVIDIA-Iguazio Accelerated Solutions for Deep Learning and Machine Learning (30 mins):
About the speaker:
Dr. Gabriel Noaje, Senior Solutions Architect, NVIDIA
http://bit.ly/GabrielNoaje
2) GPUs in Data Science Pipelines ( 30 mins)
- GPU as a Service for enterprise AI
- A short demo on the usage of GPUs for model training and model inferencing within a data science workflow
About the speaker:
Anant Gandhi, Solutions Engineer, Iguazio Singapore. https://www.linkedin.com/in/anant-gandhi-b5447614/
Deep Learning on Apache Spark at CERN’s Large Hadron Collider with Intel Tech...Databricks
In this session, you will learn how CERN easily applied end-to-end deep learning and analytics pipelines on Apache Spark at scale for High Energy Physics using BigDL and Analytics Zoo open source software running on Intel Xeon-based distributed clusters.
Technical details and development learnings will be shared using an example of topology classification to improve real-time event selection at the Large Hadron Collider experiments. The classifier has demonstrated very good performance figures for efficiency, while also reducing the false positive rate compared to the existing methods. It could be used as a filter to improve the online event selection infrastructure of the LHC experiments, where one could benefit from a more flexible and inclusive selection strategy while reducing the amount of downstream resources wasted in processing false positives.
This is part of CERN’s research on applying Deep Learning and Analytics using open source and industry standard technologies as an alternative to the existing customized rule based methods. We show how we could quickly build and implement distributed deep learning solutions and data pipelines at scale on Apache Spark using Analytics Zoo and BigDL, which are open source frameworks unifying Analytics and AI on Spark with easy to use APIs and development interfaces seamlessly integrated with Big Data Platforms.
Axel Koehler from Nvidia presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
“Accelerated computing is transforming the data center, delivering unprecedented throughput and enabling new discoveries and services for end users. This talk will give an overview of the NVIDIA Tesla accelerated computing platform, including the latest developments in hardware and software. In addition, it will be shown how deep learning on GPUs is changing how we use computers to understand data.”
In related news, the GPU Technology Conference takes place April 4-7 in Silicon Valley.
Watch the video presentation: http://insidehpc.com/2016/03/tesla-accelerated-computing/
See more talks in the Swiss Conference Video Gallery:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter:
http://insidehpc.com/newsletter
In this deck from FOSDEM'19, Christoph Angerer from NVIDIA presents: Rapids - Data Science on GPUs.
"The next big step in data science will combine the ease of use of common Python APIs, but with the power and scalability of GPU compute. The RAPIDS project is the first step in giving data scientists the ability to use familiar APIs and abstractions while taking advantage of the same technology that enables dramatic increases in speed in deep learning. This session highlights the progress that has been made on RAPIDS, discusses how you can get up and running doing data science on the GPU, and provides some use cases involving graph analytics as motivation.
GPUs and GPU platforms have been responsible for the dramatic advancement of deep learning and other neural net methods in the past several years. At the same time, traditional machine learning workloads, which comprise the majority of business use cases, continue to be written in Python with heavy reliance on a combination of single-threaded tools (e.g., Pandas and Scikit-Learn) or large, multi-CPU distributed solutions (e.g., Spark and PySpark). RAPIDS, developed by a consortium of companies and available as open source code, allows for moving the vast majority of machine learning workloads from a CPU environment to GPUs. This allows for a substantial speed-up, particularly on large data sets, and affords rapid, interactive work that previously was cumbersome to code or very slow to execute.

Many data science problems can be approached using a graph/network view, and much like traditional machine learning workloads, this has been either local (e.g., Gephi, Cytoscape, NetworkX) or distributed on CPU platforms (e.g., GraphX). We will present GPU-accelerated graph capabilities that, with minimal conceptual code changes, allow both graph representations and graph-based analytics to achieve similar speed-ups on a GPU platform. By keeping all of these tasks on the GPU and minimizing redundant I/O, data scientists are enabled to model their data quickly and frequently, affording a higher degree of experimentation and more effective model generation. Further, keeping all of this in compatible formats allows quick movement from feature extraction, graph representation, graph analytics, and enrichment back to the original data, and visualization of results. RAPIDS has a mission to build a platform that allows data scientists to explore data, train machine learning algorithms, and build applications while primarily staying on the GPU and GPU platforms."
Learn more: https://rapids.ai/
and
https://fosdem.org/2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Learn about what technologies enable a new, modern Stream-based architecture to connect everything within application modules or across data centers and public clouds. Combine Kafka-style streaming and stream processing frameworks like Spark and Flink with Microservices and completely rethink your big data architecture away from state and into data flows.
GPU-Accelerating UDFs in PySpark with Numba and PyGDFKeith Kraus
With advances in computer hardware such as 10 gigabit network cards, infiniband, and solid state drives all becoming commodity offerings, the new bottleneck in big data technologies is very commonly the processing power of the CPU. In order to meet the computational demand desired by users, enterprises have had to resort to extreme scale out approaches just to get the processing power they need. One of the most well known technologies in this space, Apache Spark, has numerous enterprises publicly talking about the challenges in running multiple 1000+ node clusters to give their users the processing power they need. This talk is based on work completed by NVIDIA’s Applied Solutions Engineering team. Attendees will learn how they were able to GPU-accelerate UDFs in PySpark using open source technologies such as Numba and PyGDF, the lessons they learned in the process, and how they were able to accelerate workloads in a fraction of the hardware footprint.
How Data Volume Affects Spark Based Data Analytics on a Scale-up ServerAhsan Javed Awan
The sheer increase in the volume of data over the last decade has triggered research in cluster computing frameworks that enable web enterprises to extract big insights from big data. While Apache Spark is gaining popularity for exhibiting superior scale-out performance on commodity machines, the impact of data volume on the performance of Spark-based data analytics in a scale-up configuration is not well understood. We present a deep-dive analysis of Spark-based applications on a large scale-up server machine. Our analysis reveals that Spark-based data analytics are DRAM-bound and do not benefit from using more than 12 cores for an executor. As input data size grows, application performance degrades significantly due to a substantial increase in wait time during I/O operations and garbage collection, despite a 10% better instruction retirement rate (due to lower L1 cache misses and higher core utilization). We match memory behavior with the garbage collector to improve application performance by between 1.6x and 3x.
BigDL webinar - Deep Learning Library for SparkDESMOND YUEN
BigDL is a distributed deep learning library for Apache Spark* and a unified Big Data Platform driving analytics and data science.
Apache Spark 2.0 set the architectural foundations of Structure in Spark, Unified high-level APIs, Structured Streaming, and the underlying performant components like Catalyst Optimizer and Tungsten Engine. Since then the Spark community has continued to build new features and fix numerous issues in releases Spark 2.1 and 2.2.
Continuing forward in that spirit, the upcoming release of Apache Spark 2.3 has made similar strides, introducing new features and resolving over 1300 JIRA issues. In this talk, we want to share with the community some salient aspects of the soon-to-be-released Spark 2.3 features:
• Kubernetes Scheduler Backend
• PySpark Performance and Enhancements
• Continuous Structured Streaming Processing
• DataSource v2 APIs
• Structured Streaming v2 APIs
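As a taste of the continuous processing item above, switching a Structured Streaming query from micro-batch to the new continuous mode in Spark 2.3 is a one-line trigger change. This is a hedged configuration sketch, not a complete program; `events` stands for any streaming DataFrame and the sink is a placeholder:

```python
# Sketch only: assumes `events` is an existing streaming DataFrame
# (e.g., from spark.readStream). The continuous trigger requests
# millisecond-latency processing with 1-second checkpoints.
query = (events.writeStream
         .format("console")                 # placeholder sink
         .trigger(continuous="1 second")    # continuous mode, not micro-batch
         .start())
```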
Streamline End-to-End AI Pipelines with Intel, Databricks, and OmniSciIntel® Software
Preprocess, visualize, and build AI faster at scale on Intel architecture. Develop end-to-end AI pipelines for inferencing, including data ingestion, preprocessing, and model inferencing with tabular, NLP, RecSys, video, and image data, using the Intel oneAPI AI Analytics Toolkit and other optimized libraries. Build performant pipelines at scale with Databricks and end-to-end Xeon optimizations. Learn how to visualize with the OmniSci Immerse Platform and experience a live demonstration of the Intel Distribution of Modin and OmniSci.
sudoers: Benchmarking Hadoop with ALOJANicolas Poggi
Presentation for the sudoers Barcelona group, Oct 06 2015, on benchmarking Hadoop with the ALOJA open source benchmarking platform. The presentation was mostly a live demo; posting some slides for the people who could not attend.
http://lanyrd.com/2015/sudoers-barcelona-october/
GOAI: GPU-Accelerated Data Science DataSciCon 2017Joshua Patterson
The GPU Open Analytics Initiative (GOAI) is accelerating data science like never before. CPUs are not improving at the same rate as networking and storage, and by leveraging GPUs, data scientists can analyze more data than ever with less hardware. Learn more about how GPUs are accelerating data science (not just deep learning), and how to get started.
Performance advantages of Hadoop ETL offload with the Intel processor-powered...Principled Technologies
High-level Hadoop analysis requires custom solutions to deliver the data that you need, and the faster these jobs run the better. What if ETL jobs created by an entry-level employee after only a few days of training could run even faster than the same jobs created by a Hadoop expert with 18 years of database experience?
This is exactly what we found in our testing with the Dell | Cloudera | Syncsort solution. Not only was this solution faster, easier, and less expensive to implement, but the ETL use cases our beginner created with it ran up to 60.3 percent more quickly than those our expert created with open-source tools.
Using the Dell | Cloudera | Syncsort solution means that your organization can compensate a lower-level employee for half as much time as a senior engineer doing less-optimized work. That is a clear path to savings.
Making Hadoop Realtime by Dr. William Bain of Scaleout SoftwareData Con LA
Hadoop has been widely embraced for its ability to economically store and analyze large data sets. Using parallel computing techniques like MapReduce, Hadoop can reduce long computation times to hours or minutes. This works well for mining large volumes of historical data stored on disk, but it is not suitable for gaining real-time insights from live operational data. Still, the idea of using Hadoop for real-time data analytics on live data is appealing because it leverages existing programming skills and infrastructure – and the parallel architecture of Hadoop itself. This presentation will describe how real-time analytics using Hadoop can be performed by combining an in-memory data grid (IMDG) with an integrated, stand-alone Hadoop MapReduce execution engine. This new technology delivers fast results for live data and also accelerates the analysis of large, static data sets.
The relationships between data sets matter. Discovering, analyzing, and learning those relationships is central to expanding our understanding, and is a critical step toward being able to predict and act upon the data. Unfortunately, these are not always simple or quick tasks.
To help the analyst we introduce RAPIDS, a collection of open-source libraries, incubated by NVIDIA and focused on accelerating the complete end-to-end data science ecosystem. Graph analytics is a critical piece of the data science ecosystem for processing linked data, and RAPIDS is pleased to offer cuGraph as our accelerated graph library.
Simply accelerating algorithms addresses only a portion of the problem. To address the full problem space, RAPIDS cuGraph strives to be feature-rich, easy to use, and intuitive. Rather than limiting the solution to a single graph technology, cuGraph supports Property Graphs, Knowledge Graphs, Hyper-Graphs, Bipartite graphs, and basic directed and undirected graphs.
A Python API allows the data to be manipulated as a DataFrame, similar and compatible with Pandas, with inputs and outputs being shared across the full RAPIDS suite, for example with the RAPIDS machine learning package, cuML.
This talk will present an overview of RAPIDS and cuGraph, discuss and show examples of how to manipulate and analyze bipartite and property graphs, and show how data can be shared with machine learning algorithms. The talk will include some performance and scalability metrics, then conclude with a preview of upcoming features, like graph query language support, and the general RAPIDS roadmap.
Best Practice of Compression/Decompression Codes in Apache Spark with Sophia...Databricks
Nowadays, people are creating, sharing, and storing data at a faster pace than ever before, and effective data compression/decompression can significantly reduce the cost of data usage. Apache Spark is a general distributed computing engine for big data analytics, and it stores and shuffles large amounts of data across the cluster at runtime, so the data compression/decompression codecs can impact end-to-end application performance in many ways.
However, there’s a trade-off between the storage size and compression/decompression throughput (CPU computation). Balancing the data compress speed and ratio is a very interesting topic, particularly while both software algorithms and the CPU instruction set keep evolving. Apache Spark provides a very flexible compression codecs interface with default implementations like GZip, Snappy, LZ4, ZSTD etc. and Intel Big Data Technologies team also implemented more codecs based on latest Intel platform like ISA-L(igzip), LZ4-IPP, Zlib-IPP and ZSTD for Apache Spark; in this session, we’d like to compare the characteristics of those algorithms and implementations, by running different micro workloads as well as end to end workloads, based on different generations of Intel x86 platform and disk.
The aim is to establish best practices for big data software engineers choosing the proper compression/decompression codecs for their applications, and we will also present methodologies for measuring and tuning the performance bottlenecks of typical Apache Spark workloads.
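To get a feel for the size-versus-throughput trade-off the session measures, here is a small single-node sketch using Python's stdlib codecs as stand-ins (zlib approximates GZip; Snappy, LZ4, and ZSTD are not in the stdlib). The data and timings are illustrative only:

```python
# Compare compression ratio vs. time on repetitive, CSV-like data.
import time
import zlib
import lzma

data = b"user_id,item_id,price\n" + b"42,1001,19.99\n" * 50_000

for name, compress in [("zlib level 1", lambda d: zlib.compress(d, 1)),
                       ("zlib level 9", lambda d: zlib.compress(d, 9)),
                       ("lzma", lzma.compress)]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: ratio {len(data) / len(out):.0f}x in {elapsed * 1e3:.1f} ms")
```

In Spark itself the shuffle/IO codec is selected with the spark.io.compression.codec setting (e.g., lz4, snappy, zstd), which is the knob this kind of measurement informs.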
This lecture aims to give some food for thought on how current High Performance Computing systems (hardware and software) tend to merge with Big Data ones (Machine Learning, Analytics, and Enterprise workloads) in order to meet both workloads' demands while sharing the same clusters.
Ultra Fast Deep Learning in Hybrid Cloud Using Intel Analytics Zoo & AlluxioAlluxio, Inc.
Alluxio Global Online Meetup
Apr 23, 2020
For more Alluxio events: https://www.alluxio.io/events/
Speakers:
Jiao (Jennie) Wang, Intel
Tsai Louie, Intel
Bin Fan, Alluxio
Today, many people run deep learning applications with training data in separate storage such as object storage or remote data centers. This presentation will demo the Intel Analytics Zoo + Alluxio stack, an architecture that enables high performance while keeping cost and resource efficiency balanced, without the network becoming an I/O bottleneck.
Intel Analytics Zoo is a unified data analytics and AI platform open-sourced by Intel. It seamlessly unites TensorFlow, Keras, PyTorch, Spark, Flink, and Ray programs into an integrated pipeline, which can transparently scale from a laptop to large clusters to process production big data. Alluxio, as an open-source data orchestration layer, accelerates data loading and processing in Analytics Zoo deep learning applications.
In this talk, we will go over:
- What is Analytics Zoo and how it works
- How to run Analytics Zoo with Alluxio in deep learning applications
- Initial performance benchmark results using the Analytics Zoo + Alluxio stack
The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind.
Intel Blockscale ASICs are built for the demanding environment of cryptocurrency mining. Each ASIC has built-in temperature and voltage sensor capabilities. The accelerator can be operated across a range of frequencies, enabling system designers to balance performance and efficiency.
Cryptography Processing with 3rd Gen Intel Xeon Scalable ProcessorsDESMOND YUEN
Cryptographic operations are amongst the most compute intensive and critical operations applied to data as it is stored, moved, and processed. Comprehending Intel's cryptography processing acceleration is essential to optimizing overall platform workload, and service performance.
At Intel, security comes first both in the way we work and in what we work on. Our culture and practices guide everything we build, with the goal of delivering the highest performance and optimal protections. As with previous reports, the 2021 Intel Product Security Report demonstrates our Security First Pledge and our endless efforts to proactively seek out and mitigate security issues.
How can regulation keep up as transformation races ahead? 2022 Global regulat...DESMOND YUEN
As the pandemic drags into its third year, financial services firms face a range of challenges, from increased operational complexity and an evolving regulatory directive to address environmental and social issues to new forms of competition and evolving technologies, such as digital assets and cryptocurrencies. Banks, insurers, asset managers and other financial services firms (collectively referred to as “firms” in the rest of this document) must innovate more effectively — and rapidly — to keep up with the pace of change while still identifying emerging risks and building appropriate governance and controls.
NASA Spinoffs Help Fight Coronavirus, Clean Pollution, Grow Food, MoreDESMOND YUEN
NASA's mission of exploration requires new technologies, software, and research – which show up in daily life. The agency’s Spinoff 2022 publication tells the stories of companies, start-ups, and entrepreneurs transforming these innovations into cutting-edge products and services that boost the economy, protect the planet, and save lives.
“The value of NASA is not confined to the cosmos but realized throughout our country – from hundreds of thousands of well-paying jobs to world-leading climate science, understanding the universe and our place within it, to technology transfers that make life easier for folks around the world,” NASA Administrator Bill Nelson said. “As we combat the coronavirus pandemic and promote environmental justice and sustainability, NASA technology is essential to address humanity’s greatest challenges.”
Spinoff 2022 features more than 45 companies using NASA technology to advance manufacturing techniques, detoxify polluted soil, improve weather forecasting, and even clean the air to slow the spread of viruses, including coronavirus.
"NASA's technology portfolio contains many innovations that not only enable exploration but also address challenges and improve life here at home," said Jim Reuter, associate administrator of the agency’s Space Technology Mission Directorate (STMD) in Washington. "We’ve captured these examples of successful commercialization of NASA technology and research, not only to share the benefits of the space program with the public, but to inspire the next generation of entrepreneurs."
This year in Spinoff, readers will learn more about:
How companies use information from NASA’s vertical farm to sustainably grow fresh produce
New ways that technology developed for insulation in space keeps people warm in the great outdoors
How a system created for growing plants in space now helps improve indoor air quality and reduces the spread of airborne viruses like coronavirus
How phase-change materials – originally developed to help astronauts wearing spacesuits – absorb, hold, and release heat to help keep race car drivers cool
A Survey on Security and Privacy Issues in Edge Computing-Assisted Internet o...DESMOND YUEN
Internet of Things (IoT) is an innovative paradigm envisioned to provide massive applications that are now part of our daily lives. Millions of smart devices are deployed within complex networks to provide vibrant functionalities including communications, monitoring, and controlling of critical infrastructures. However, this massive growth of IoT devices and the corresponding huge data traffic generated at the edge of the network created additional burdens on the state-of-the-art centralized cloud computing paradigm due to bandwidth and resource scarcity. Hence, edge computing (EC) is emerging as an innovative strategy that brings data processing and storage near to the end users, leading to what is called EC-assisted IoT. Although this paradigm provides unique features and enhanced quality of service (QoS), it also introduces huge risks in data security and privacy aspects. This paper conducts a comprehensive survey on security and privacy issues in the context of EC-assisted IoT. In particular, we first present an overview of EC-assisted IoT including definitions, applications, architecture, advantages, and challenges. Second, we define security and privacy in the context of EC-assisted IoT. Then, we extensively discuss the major classifications of attacks in EC-assisted IoT and provide possible solutions and countermeasures along with the related research efforts. After that, we further classify some security and privacy issues as discussed in the literature based on security services and based on security objectives and functions. Finally, several open challenges and future research directions for secure EC-assisted IoT paradigm are also extensively provided.
PUTTING PEOPLE FIRST: ITS IS SMART COMMUNITIES AND CITIESDESMOND YUEN
The report covers the benefits, goals, challenges, and success factors associated with smart cities and communities and gives a glimpse of a path forward.
BUILDING AN OPEN RAN ECOSYSTEM FOR EUROPEDESMOND YUEN
Five companies—Deutsche Telekom, Orange, Telecom Italia, Telefónica, and Vodafone—published a report outlining why they feel Europe as a whole is lagging behind other regions such as the U.S. and Japan in developing Open RAN. The companies point both to a lack of companies developing key components, notably silicon chips, for Open RAN technologies, and to the need to get incumbent equipment vendors Ericsson and Nokia on board with Open RAN development.
An Introduction to Semiconductors and IntelDESMOND YUEN
Did you know that...
The average American adult spends over 12 hours a day engaged with electronics — computers, mobile devices, TVs, cars, to name just a few — powered by semiconductors.
A common chip the size of your smallest fingernail is only about 1-millimeter thick but contains roughly 30 different layers of components and wires (called interconnects) that make up its complex circuitry.
Intel owns nearly 70,000 active patents worldwide. Its first — “Resistor for Integrated Circuit,” #3,631,313 — was granted to Gordon Moore on Dec. 28, 1971.
Those are a few fun facts in a high-level presentation that provides an easy-to-understand look at the world of semiconductors, why they matter and the role Intel plays in their creation.
Changing demographics and economic growth bloomDESMOND YUEN
“Demography is destiny” is an oft-cited phrase that suggests the size, growth, and structure of a nation’s population determines its long-term social, economic, and political fabric. The phrase highlights the role of demographics in shaping many complex challenges and opportunities societies face, including several pertinent to economic growth and development. Nevertheless, it is an overstatement to say that demography determines all, as it downplays the fact that both demographic trajectories and their development implications are responsive to economic incentives; to policy and institutional reforms; and to changes in technology, cultural norms, and behavior. The world is undergoing a major demographic upheaval with three key components: population growth, changes in fertility and mortality, and associated changes in population age structure.
Intel Corporation (“Intel”) designs and manufactures advanced integrated digital technology platforms that power an increasingly connected world. A platform consists of a microprocessor and chipset, and may be enhanced by additional hardware, software, and services. The platforms are used in a wide range of applications, such as PCs, laptops, servers, tablets, smartphones, automobiles, automated factory systems, and medical devices. Intel is also in the midst of a corporate transformation that has seen its data-centric businesses capture an increasing share of its revenue.
This report provides economic impact estimates for Intel in terms of employment, labor income, and gross domestic product (“GDP”) for the most recent historical year, 2019.
Discover how private 5G networks can give enterprises options to enhance services and deliver new use cases with the level of control and investment they want.
The document describes how the latest Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions and Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI), enabled in the latest Intel® 3rd Generation Xeon® Scalable Processors, are used to achieve 1 Tb/s of IPsec throughput.
"Life and Learning After One-Hundred Years: Trust Is The Coin Of The Realm."DESMOND YUEN
The former secretary of state George Shultz passed away last weekend. He was one of the most influential secretaries of state in US history. Around the time of his hundredth birthday this past December, he published a short book on Trust and Effective Relationships.
Telefónica views on the design, architecture, and technology of 4G/5G Open RA...DESMOND YUEN
This whitepaper is a blueprint for developing an Open RAN solution. It provides an overview of the main technology elements that Telefónica is developing in collaboration with selected partners in the Open RAN ecosystem. It describes the architectural elements, design criteria, technology choices, and key chipsets employed to build a complete portfolio of radio units and baseband equipment capable of a full 4G/5G RAN rollout in any market of interest.
The field of machine programming — the automation of the development of software — is making notable research advances. This is, in part, due to the emergence of a wide range of novel techniques in machine learning. In today’s technological landscape, software is integrated into almost everything we do, but maintaining software is a time-consuming and error-prone process. When fully realized, machine programming will enable everyone to express their creativity and develop their own software without writing a single line of code. Intel realizes the pioneering promise of machine programming, which is why it created the Machine Programming Research (MPR) team in Intel Labs. The MPR team’s goal is to create a society where everyone can create software, but machines will handle the “programming” part.
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. for more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
3. BigDL: Bringing Deep Learning To Big Data Platform
• Distributed deep learning framework for Apache Spark*
• Make deep learning more accessible to big data users and data scientists
• Write deep learning applications as standard Spark programs
• Run on existing Spark/Hadoop clusters (no changes needed)
• Feature parity with popular deep learning frameworks
• E.g., Caffe, Torch, TensorFlow, etc.
• High performance
• Powered by Intel MKL and multi-threaded programming
• Efficient scale-out
• Leveraging Spark for distributed training & inference
https://github.com/intel-analytics/BigDL
https://bigdl-project.github.io/
[Diagram: Apache Spark stack — Spark Core underlying SQL, SparkR, Streaming, MLlib, GraphX, ML Pipeline, and DataFrame]
5. BigDL: Answering The Needs
Make deep learning more accessible to big data and data science communities
• Continue the use of familiar SW tools and HW infrastructure to build deep learning applications
• Analyze “big data” using deep learning on the same Hadoop/Spark cluster where the data are stored
• Add deep learning functionalities to the Big Data (Spark) programs and/or workflow
• Leverage existing Hadoop/Spark clusters to run deep learning applications
• Shared with other workloads (e.g., ETL, data warehouse, feature engineering, statistical machine learning, graph analytics, etc.) in a dynamic and elastic fashion
6. Fraud Detection for UnionPay
[Diagram: UnionPay fraud-detection Spark pipeline — data flows from a Hive table into a Spark DataFrame, then through pre-processing, feature engineering, and feature selection*; the training data (normal vs. fraud) is split into sampled partitions, each used to train one candidate neural network model with BigDL (model training, model evaluation & fine-tuning); the candidate models are combined via model ensemble and post-processing, and test data runs through the same Spark pipeline to produce predictions]
https://mp.weixin.qq.com/s?__biz=MzI3NDAwNDUwNg==&mid=2648307335&idx=1&sn=8eb9f63eaf2e40e24a90601b9cc03d1f
8. 8
Distributed Training on BigDL
Training loop (pseudocode):
for (i <- 1 to N) {
  batch = next_batch()
  output = model.forward(batch.input)
  loss = criterion.forward(output, batch.target)
  error = criterion.backward(output, batch.target)
  model.backward(batch.input, error)
  optimMethod.optimize(model.weight, model.gradient)
}
Iterative, data-parallel, synchronous SGD over mini-batches.
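The loop above can be made concrete with a self-contained sketch: a toy one-weight linear model (y = w * x, MSE loss) trained by mini-batch SGD. The forward/backward/optimize structure mirrors the pseudocode, but the model, criterion, and optimizer here are hand-rolled stand-ins, not BigDL's API.

```scala
object MiniBatchSgd {
  // Toy training set for y = 2 * x
  val data: Array[(Double, Double)] = Array.tabulate(32)(i => (i.toDouble, 2.0 * i))

  def train(epochs: Int, batchSize: Int, lr: Double): Double = {
    var w = 0.0 // the single model weight
    for (_ <- 1 to epochs; batch <- data.grouped(batchSize)) {
      // forward + backward for each sample in the mini-batch
      val grads = batch.map { case (x, y) =>
        val output = w * x      // model.forward
        val error  = output - y // criterion.backward (dLoss/dOutput for MSE/2)
        error * x               // model.backward -> dLoss/dw
      }
      // optimMethod.optimize: averaged synchronous gradient step
      w -= lr * grads.sum / batch.length
    }
    w
  }
}
```

After enough epochs w converges to the true slope 2.0.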
9. 9
Run as a Standard Spark Program
[Diagram: a BigDL program runs as a standard Spark program — the DL app on the Spark driver submits standard Spark jobs; each Spark executor (JVM) runs Spark tasks that call into the BigDL library, whose worker threads use Intel MKL]
BigDL Program
Standard Spark jobs
• No changes to the Spark or Hadoop clusters needed
Iterative
• Each iteration of the training runs as a Spark job
Data parallel
• Each Spark task runs the same model on a subset of the data (batch)
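The data-parallel step can be sketched without Spark: plain Scala sequences stand in for RDD partitions, each simulated task computes a gradient for the same toy model (y = w * x, MSE loss) on its own subset, and the driver averages the gradients for the synchronous update. All names here are illustrative, not BigDL's API.

```scala
object DataParallelStep {
  // Gradient of the MSE/2 loss w.r.t. w on one partition of the batch
  def gradient(w: Double, part: Seq[(Double, Double)]): Double =
    part.map { case (x, y) => (w * x - y) * x }.sum / part.length

  // One synchronous iteration: every "task" runs the same model on its
  // partition; the averaged gradient drives a single weight update.
  def step(w: Double, partitions: Seq[Seq[(Double, Double)]], lr: Double): Double = {
    val grads = partitions.map(p => gradient(w, p)) // one Spark task each, conceptually
    w - lr * grads.sum / partitions.length          // synchronous averaged update
  }
}
```

Iterating `step` plays the role of the per-iteration Spark job.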
14. 14
Task Scheduling Overheads
[Chart: Spark overheads (task scheduling, task serde, task fetch) as a fraction of average compute time for Inception v1 training]
For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.
15. 15
Drizzle
*Collaboration with Shivaram Venkataraman from UC Berkeley RISELab
• Low-latency execution engine for Apache Spark
• Fine-grained execution with coarse-grained scheduling
• Group scheduling
• Schedule a group of iterations at once
• Fault tolerance and scheduling at group boundaries
• Coordinating shuffles: pre-scheduling
• Pre-schedule tasks on executors
• Trigger tasks once dependencies are met
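A toy cost model (the numbers below are illustrative assumptions, not measurements from the deck) shows why scheduling a group of iterations at once helps: the per-job scheduling overhead is paid once per group instead of once per iteration.

```scala
object GroupScheduling {
  // Total wall-clock time for `iterations` training iterations when the
  // scheduler launches them in groups of `groupSize`: scheduling cost is
  // paid once per group, compute cost once per iteration.
  def totalMs(iterations: Int, groupSize: Int, schedMs: Double, computeMs: Double): Double = {
    val groups = math.ceil(iterations.toDouble / groupSize)
    groups * schedMs + iterations * computeMs
  }
}
```

With hypothetical costs of 5 ms scheduling and 10 ms compute per iteration, 1000 iterations take 15,000 ms when scheduled one at a time but only 10,050 ms in groups of 100.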
23. 23
Challenges of Large-Scale Processing in GPU Solutions
• Reading images out takes a very long time
• Image pre-processing on HBase is very complex
• No existing software frameworks can be leveraged
• E.g., resource management, distributed data processing, fault tolerance, etc.
• Very challenging to scale out to a massive number of images
• Due to SW and HW infrastructure constraints
24. 24
Upgrading to BigDL Solutions
• Reuse existing Hadoop/Spark clusters for deep learning with no changes
• Efficiently scale out on Spark with superior performance
• Reading HBase data is no longer a bottleneck
• Very easy to build the end-to-end pipeline in BigDL
• Image transformation and augmentation based on OpenCV on Spark
val preProcessor = BytesToMat() -> Resize(300, 300) -> …
val transformed = preProcessor(dataRdd)
• Directly load pre-trained models (BigDL/Caffe/Torch/TensorFlow) into BigDL
val model = Module.loadCaffeModel(caffeDefPath, caffeModelPath)
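The `->` operator above chains pre-processing stages into a single transformer. A minimal sketch of that composition pattern in plain Scala (the Transformer trait and the stages here are simplified stand-ins, not BigDL's actual classes):

```scala
// Each stage maps one representation to the next; `->` composes two
// stages into a single pipeline that applies them in order.
trait Transformer[A, B] { self =>
  def apply(in: A): B
  def ->[C](next: Transformer[B, C]): Transformer[A, C] =
    new Transformer[A, C] { def apply(in: A): C = next(self(in)) }
}

// Hypothetical stages: decode raw bytes to an image "size" (a stand-in
// for a decoded matrix), then resize it to a fixed target.
case class BytesToSize() extends Transformer[Array[Byte], Int] {
  def apply(in: Array[Byte]): Int = in.length
}
case class ResizeTo(target: Int) extends Transformer[Int, Int] {
  def apply(in: Int): Int = target // a real Resize would rescale pixels
}
```

Usage mirrors the slide: `val pipeline = BytesToSize() -> ResizeTo(300)` yields one transformer that runs both stages.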
25. 25
Model Quantization for Efficient Inference in BigDL
• Local quantization scheme converting floats to integers
• Faster compute and smaller models
• Takes advantage of SSE and AVX instructions on Xeon servers
• Supports pre-trained models (BigDL/Caffe/Torch/TensorFlow)
val model = Module.loadCaffeModel(caffeDefPath, caffeModelPath)
val quantizedModel = model.quantize()
• Quantized SSD model
• ~4x model size reduction
• >2x inference speedup
• ~0.001 mAP (mean average precision) loss
For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.
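A generic sketch of per-tensor linear quantization of the kind described above (an illustration, not BigDL's actual quantize() internals): floats are mapped to 8-bit integers with a shared per-tensor scale, which is where the ~4x size reduction over Float32 storage comes from.

```scala
object Quantize {
  // Map floats to signed 8-bit integers using a per-tensor scale so that
  // the largest magnitude maps to ±127; 1 byte per weight vs. 4 for Float32.
  def quantize(weights: Array[Float]): (Array[Byte], Float) = {
    val maxAbs = weights.map(math.abs(_)).max
    val scale  = if (maxAbs == 0f) 1f else maxAbs / 127f
    (weights.map(w => math.round(w / scale).toByte), scale)
  }

  // Approximate reconstruction: each int8 value times the stored scale.
  def dequantize(q: Array[Byte], scale: Float): Array[Float] =
    q.map(_ * scale)
}
```

The reconstruction error per weight is bounded by half the scale, which is why accuracy losses like the ~0.001 mAP above can stay small.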
26. 26
Try BigDL Out
Running BigDL, Deep Learning for Apache Spark, on AWS* (Amazon* Web Services)
https://aws.amazon.com/blogs/ai/running-bigdl-deep-learning-for-apache-spark-on-aws/
Use BigDL on Microsoft* Azure* HDInsight*
https://azure.microsoft.com/en-us/blog/use-bigdl-on-hdinsight-spark-for-distributed-deep-learning/
Intel's BigDL on Databricks*
https://databricks.com/blog/2017/02/09/intels-bigdl-databricks.html
BigDL on Alibaba* Cloud E-MapReduce*
https://yq.aliyun.com/articles/73347
BigDL on CDH* and Cloudera* Data Science Workbench*
http://blog.cloudera.com/blog/2017/04/bigdl-on-cdh-and-cloudera-data-science-workbench/
BigDL Distribution in Cray Urika-XC Analytics Suite
http://www.cray.com/products/analytics/urika-xc