Machine Learning for Big Data Analytics: Scaling In with Containers while Scaling Out on Clusters

Watch On Demand Anytime via http://www.univa.com/resources/webinar-machine-learning.php

Armed with nothing more than an Apache Spark-toting laptop, you have all the trappings required to prototype the application of Machine Learning to your data-science needs. From programmability in Scala, Java, or Python to built-in support for Machine Learning via MLlib, Spark is an exceedingly effective enabler that allows you to produce results rapidly.
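
To make the laptop-prototyping claim concrete, here is a minimal, hedged PySpark sketch (not taken from the webinar): it assumes only a local PySpark installation, and the application name and sample points are invented for illustration. MLlib ships with Spark, so nothing beyond Spark itself is needed.

    from pyspark import SparkContext
    from pyspark.mllib.clustering import KMeans

    # Run entirely on the laptop: "local[*]" uses all local cores.
    sc = SparkContext("local[*]", "mllib-laptop-prototype")

    # A tiny, invented data set: two obvious clusters of 2-D points.
    points = sc.parallelize([
        [0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
        [9.0, 9.0], [9.1, 9.2], [8.9, 9.1],
    ])

    # Train a clustering model with MLlib's built-in KMeans.
    model = KMeans.train(points, k=2, maxIterations=10)
    print(model.clusterCenters)

    sc.stop()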

Of course, as soon as your prototyping proves successful, you'll want to scale out to embrace the volume, variety, and velocity that characterize today's Big Data demands... in production. Because Spark is as comfortable on an isolated laptop as it is in a distributed-computing environment, addressing Big Data requirements in production boils down to embracing containers and clusters for Big Data Analytics effectively and efficiently.

This is where Univa's offerings shine: they make the transition from prototype to production seamless. For some use cases, it makes sense to scale in Spark-based applications within Docker containers via Univa Grid Engine Container Edition or Navops by Univa; in others, Spark is interfaced (as a Mesos-compliant framework) with Univa Universal Resource Broker to permit scaling out on a cluster. In both scenarios, your production Spark applications are scheduled alongside other classes of workload, without a need for dedicated resources.
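
One reason the transition can be seamless is that a Spark application chooses its execution environment through its master URL rather than through code changes. The sketch below is illustrative only: "local[*]" is Spark's standard laptop mode, while "mesos://urb-master:5050" is a placeholder for whatever Mesos-compatible endpoint a given Universal Resource Broker deployment exposes (the host name, port, and application name are assumptions, not Univa documentation).

    from pyspark import SparkConf, SparkContext

    # Prototype on the laptop ...
    laptop_conf = SparkConf().setAppName("spark-prototype").setMaster("local[*]")

    # ... then point the same application at a Mesos-compatible master
    # (placeholder URL; substitute your site's actual URB endpoint).
    cluster_conf = SparkConf().setAppName("spark-prototype") \
                              .setMaster("mesos://urb-master:5050")

    # Swap in cluster_conf when moving from prototype to production.
    sc = SparkContext(conf=laptop_conf)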

Agenda:

• Overview of Apache Spark as a platform for Deep Learning - from Python-based Jupyter Notebooks to Spark's Machine Learning library MLlib
• Overview of prototyping Machine Learning via Apache Spark on a laptop - without and within Docker containers
• Introductions to Univa Grid Engine Container Edition and Univa Universal Resource Broker plus Navops by Univa
• Overview of production Big Data Analytics platforms for Machine Learning
• Docker-containerized Apache Spark and Univa Grid Engine Container Edition
• Docker-containerized Apache Spark and Navops by Univa
• Apache Spark plus Univa Universal Resource Broker
• Introducing support for GPUs without and within Docker containers
• Use case example - using Machine Learning to classify data from Twitter without and within Docker containers
• Summary and next steps

Watch On Demand Anytime via http://www.univa.com/resources/webinar-machine-learning.php

Machine Learning for Big Data Analytics: Scaling In with Containers while Scaling Out on Clusters

  1. Machine Learning for Big Data Analytics: Scaling In with Containers while Scaling Out on Clusters. Presenter: Ian Lumb (www.univa.com). Watch On Demand Anytime. Note: Includes demos
  2. Agenda
       • Introduction
       • Use case example
       • Scaling …
         - Out with Apache Spark via Univa Universal Resource Broker
         - Up with NVIDIA GPUs and Univa Grid Engine
         - In/Down with Univa container solutions
       • Summary
  3. Introduction
  4. Machine Learning Defined: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." T. M. Mitchell, Machine Learning, WCB/McGraw-Hill, 1997
  5. Deep Learning Defined: "… a modern refinement of 'machine learning', in which computers teach themselves tasks by crunching large sets of data." http://www.economist.com/news/briefing/21650526-artificialintelligence-scares-peopleexcessively-so-rise-machines
  6. Use Case Example: Earthquakes and Tsunamis
  7. Use Case: Context. http://credit.pvamu.edu/MCBDA2016/Slides/Day2_Lumb_MCBDA1_Twitter_Tsunami.pdf
  8. Use Case: Motivation
       • Non-deterministic cause
       • Uncertainty inherent in any attempt to predict earthquakes
         - In situ measurements may reduce uncertainty
       • Lead times
       • Availability of actionable observations
       • Communication of situation - advisories, warnings, etc.
       • Cause-effect relationship
       • Energy transfer - inputs ... coupling ... outputs
         - 'Geometry' - bathymetry and topography
       • Other factors - e.g., tides
       • Established effect
       • Far-field estimates of tsunami propagation (pre-computed) and coastal inundation (real-time) have proven to be extremely accurate ... requires
         - Distributed array of deep-ocean tsunami detection buoys + forecasting model
       http://credit.pvamu.edu/MCBDA2016/Slides/Day2_Lumb_MCBDA1_Twitter_Tsunami.pdf
  9. Use Case: Traditional Data Sources. http://www.gitews.org/en/concept/ http://credit.pvamu.edu/MCBDA2016/Slides/Day2_Lumb_MCBDA1_Twitter_Tsunami.pdf
  10. Use Case: Deep Learning from Twitter? http://credit.pvamu.edu/MCBDA2016/Slides/Day2_Lumb_MCBDA1_Twitter_Tsunami.pdf
  11. Use Case: Machine Learning Pipeline. Karau et al., Learning Spark, O'Reilly, 2015
  12. Use Case: Deep Learning from Twitter?
       • Represent data
         - Twitter data manually curated into 'ham' and 'spam'
         - In-memory representation via Spark RDDs
       • Extract features
         - Frequency-based usage via Spark MLlib HashingTF ⇒ feature vectors
       • Develop model object
         - Spark MLlib LogisticRegressionWithSGD used for classification
       • Evaluate model
       (See the hedged PySpark sketch of this pipeline after the slide transcript.)
       http://credit.pvamu.edu/MCBDA2016/Slides/Day2_Lumb_MCBDA1_Twitter_Tsunami.pdf
  13. Use Case: Laptop Prototype. http://credit.pvamu.edu/MCBDA2016/Slides/Day2_Lumb_MCBDA1_Twitter_Tsunami.pdf
  14. Use Case: Next Steps … http://credit.pvamu.edu/MCBDA2016/Slides/Day2_Lumb_MCBDA1_Twitter_Tsunami.pdf
  15. Next Steps: Scaling … OUT, IN, UP, DOWN
  16. Apache Spark via Univa Universal Resource Broker
  17. Machine Learning via Apache Spark. http://img.deusm.com/informationweek/2015/03/1319660/Spark-2015-Vision.jpg
  18. URB: Product Overview. What is Universal Resource Broker (URB)? URB extends Univa Grid Engine to handle Service and Custom distributed applications in a Univa Grid Engine Cluster.
       • An API for developing distributed applications
         - Compatible with the Apache Mesos API
         - Bindings for Python, Java, and C++
       • A runtime environment for hosting distributed applications
         - Supports frameworks developed against the Mesos API
         - Supports frameworks developed against the URB API
         - Uses Univa Grid Engine to place and run work
  19. URB: Architecture Overview. (Diagram: Spark framework running Thunder.)
  20. URB: Web User Interface
  21. HPC & Spark Workloads Together
  22. URB: Solution Summary (Universal Resource Broker)
       • For the end user there is no change in application workflow
       • For the admins there is increased control and policy capability over compute resources
       • The solution provides the ability to share resources across big data and traditional batch workloads
       • Single resource allocation policy defined by business goals
       • Single accounting repository to track resource consumption
       • Full workload lifecycle management for heterogeneous workloads
  23. GPUs
  24. GPUs for Deep Learning. http://image.slidesharecdn.com/nvidiateslap100-160621104058/95/announcing-the-nvidia-tesla-p100-gpu-for-pcie-servers-9-638.jpg?cb=1466505803
  25. CUDA Load Sensor
       • Post-installation check: qhost -F <hostname>
         hl:cuda.verstr=270.41.06
         hl:cuda.0.name=GeForce 8400 GS
         hl:cuda.0.totalMem=511.312M
         hl:cuda.0.freeMem=500.480M
         hl:cuda.0.usedMem=10.832M
         hl:cuda.0.eccEnabled=0
         hl:cuda.0.temperature=44.000000
         hl:cuda.1.name=GeForce 8400 GS
         hl:cuda.1.totalMem=511.312M
         hl:cuda.1.freeMem=406.066M
         hl:cuda.1.usedMem=20.274M
         hl:cuda.1.eccEnabled=0
         hl:cuda.1.temperature=43.000000
         hl:cuda.devices=2
  26. GPU Job Submission
       • CUDA complexes can be used for:
         - Setting the alarm state of a host based on ECC errors (load_threshold in the queue configuration)
         - Sorting hosts (load_formula)
         - Job submission
       • Requesting a host with GPUs: qsub -l cuda.devices=2 ...
       • The complex can be made consumable (complex configuration) in order to limit the number of CUDA jobs per host
  27. Resource Maps
       (Diagram: a UGE cluster of hosts A through N, with e.g. GPUs (IDs 0 & 1) as host resources and e.g. scratch storage A-E as global resources; jobs 123 and 124 are shown.)
       • Two host resources: 0, 1
       • Five global resources: A, B, C, D, E
       • Job 123 got assigned ID 0 of the GPU resource on host N and resource C of the global resource scratch
       • Job 124 got assigned ID 1 of the GPU resource on host B and resource E of the global resource scratch
  28. Containers
  29. Containerized PySpark Example
  30. Univa Grid Engine – Container Edition (1)
       • Launch Docker containers on the best machine in the cluster
         - Reduces time wasted (it can be minutes … or longer)
           - attempting to launch on an improperly serviced execution host
           - waiting for the Docker image to download from the Docker registry
         - Ensures the container runs faster, increasing throughput in the cluster
       • Run Docker containers in a Univa Grid Engine cluster
         - Business-critical containers are prioritized over other containers, increasing the efficiency of the overall organization
         - Containers can be orchestrated alongside other critical workloads such as batch jobs and frameworks
       Example submission:
       $ qsub -o /home/jdoe -j y -xdv "/home:/home" -l docker,docker_images="*centos:latest*" my_job.sh
  31. Univa Grid Engine – Container Edition (2)
       • Job control and limits for Docker containers
         - Provides user and administrator control over containers running on Grid Engine hosts
       • Accounting for Docker containers
         - Keeps track of containers; share policies require accounting
       • Data file management for Docker containers
         - Transparent access to input, output, and error files; simplifies the management of input and output files for Docker containers and ensures any output or error files are moved to a location where the user can access them
       • Interactive Docker containers
         - Good for debugging when containers don't work correctly!
       • Parallel jobs in Docker containers
         - Message-passing parallel jobs can each run a set of tasks in a container on a machine
  32. Containerized GPUs. https://github.com/NVIDIA/nvidia-docker
  33. Navops by Univa
       • Easy installation: preconfigured solution including pre-integration with cloud services. Build a container cluster on premise or in the cloud. The fastest way to build a container cluster!
       • Respond Quickly: easy to resize, adapt, dynamic provisioning
       • Orchestrate and Optimize: best use of resources and keep track of containers. The most advanced container orchestration!
       http://navops.io/
  34. Navops orchestration solution
  35. Summary
       • Scaling Machine Learning from prototype to production …
         - Out with Apache Spark via Univa Universal Resource Broker
         - Up with NVIDIA GPUs via Univa Grid Engine
         - In/Down via Univa container solutions
           - Univa Grid Engine – Container Edition
           - Navops Launch and Command
  36. Thank You. Ian Lumb, Solutions Architect, ilumb@univa.com, +1 630 303-9068, www.univa.com. Watch On Demand Anytime. Note: Includes demos
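
For readers who want to see the pipeline from slide 12 in code, here is a hedged PySpark sketch of the same steps (HashingTF features, a LogisticRegressionWithSGD classifier). It is a reconstruction under stated assumptions, not the presenter's actual notebook: the input files ham.txt and spam.txt, the feature count, and the test tweet are invented placeholders.

    from pyspark import SparkContext
    from pyspark.mllib.feature import HashingTF
    from pyspark.mllib.classification import LogisticRegressionWithSGD
    from pyspark.mllib.regression import LabeledPoint

    sc = SparkContext("local[*]", "twitter-ham-spam")

    # Represent data: manually curated tweets, one per line (file names are placeholders).
    ham = sc.textFile("ham.txt")
    spam = sc.textFile("spam.txt")

    # Extract features: term frequencies hashed into fixed-length vectors.
    tf = HashingTF(numFeatures=10000)
    ham_features = ham.map(lambda tweet: tf.transform(tweet.split(" ")))
    spam_features = spam.map(lambda tweet: tf.transform(tweet.split(" ")))

    # Label the examples (1.0 = ham, 0.0 = spam) and combine them for training.
    training = ham_features.map(lambda v: LabeledPoint(1.0, v)).union(
        spam_features.map(lambda v: LabeledPoint(0.0, v))).cache()

    # Develop the model object: logistic regression trained with SGD.
    model = LogisticRegressionWithSGD.train(training, iterations=100)

    # Evaluate the model on a new (invented) tweet.
    test = tf.transform("strong earthquake reported offshore tsunami warning".split(" "))
    print(model.predict(test))

    sc.stop()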
