Apache Spark and Tensorflow as a Service with Jim Dowling

In Sweden, from the RISE ICE Data Center at www.hops.site, we provide researchers with both Spark-as-a-Service and, more recently, TensorFlow-as-a-Service as part of the Hops platform. In this talk, we examine the different ways in which TensorFlow can be included in Spark workflows, from batch to streaming to structured streaming applications. We analyse the different frameworks for integrating Spark with TensorFlow, from TensorFrames to TensorFlowOnSpark to Databricks' Deep Learning Pipelines. We introduce the different programming models supported and highlight the importance of cluster support for managing different versions of Python libraries on behalf of users. We also present cluster management support for sharing GPUs, including Mesos and YARN (in Hops Hadoop). Finally, we perform a live demonstration of training and inference for a TensorFlowOnSpark application, written in Jupyter, that can read data from either HDFS or Kafka, transform the data in Spark, and train a deep neural network in TensorFlow. We show how to debug the application using both the Spark UI and TensorBoard, and how to examine logs and monitor training.

  1. 1. Jim Dowling Assoc Prof, KTH Senior Researcher, RISE SICS CEO, Logical Clocks AB SPARK & TENSORFLOW AS-A-SERVICE #EUai8 Hops
  2. 2. Newton confirmed what many suspected • In August 1684, Halley visited Newton: “What type of curve does a planet describe in its orbit about the sun, assuming an inverse square law of attraction?” 2#EUai8
  3. 3. • In June 2017, Facebook showed how to reduce training time on ImageNet for a Deep CNN from 2 weeks to 1 hour by scaling out to 256 GPUs. 3#EUai8 https://arxiv.org/abs/1706.02677 Facebook confirmed what many suspected
  4. 4. AI Hierarchy of Needs 5 DDL (Distributed Deep Learning) Deep Learning, RL, Automated ML A/B Testing, Experimentation, ML B.I. Analytics, Metrics, Aggregates, Features, Training/Test Data Reliable Data Pipelines, ETL, Unstructured and Structured Data Storage, Real-Time Data Ingestion [Adapted from https://hackernoon.com/the-ai-hierarchy-of-needs-18f111fcc007?gi=7e13a696e469 ]
  5. 5. AI Hierarchy of Needs 6 DDL (Distributed Deep Learning) Deep Learning, RL, Automated ML A/B Testing, Experimentation, ML B.I. Analytics, Metrics, Aggregates, Features, Training/Test Data Reliable Data Pipelines, ETL, Unstructured and Structured Data Storage, Real-Time Data Ingestion [Adapted from https://hackernoon.com/the-ai-hierarchy-of-needs-18f111fcc007?gi=7e13a696e469 ] Analytics Prediction
  6. 6. AI Hierarchy of Needs 7 DDL (Distributed Deep Learning) Deep Learning, RL, Automated ML A/B Testing, Experimentation, ML B.I. Analytics, Metrics, Aggregates, Features, Training/Test Data Reliable Data Pipelines, ETL, Unstructured and Structured Data Storage, Real-Time Data Ingestion Hops [Adapted from https://hackernoon.com/the-ai-hierarchy-of-needs-18f111fcc007?gi=7e13a696e469 ]
  7. 7. Deep Learning Hierarchy of Scale 8#EUai8 [Pyramid diagram: training time for ImageNet by setup, shrinking from weeks on a Single GPU down through days and hours to minutes as you climb the hierarchy: Single GPU; Many GPUs on a Single GPU Server; Parallel Experiments on GPU Servers; DDL with GPU Servers and Parameter Servers; DDL AllReduce on GPU Servers.]
  8. 8. Deep Learning Hierarchy of Scale 9#EUai8 [Diagram contrasting Public Clouds and On-Premise setups. Single Host DL: Single GPU; Multiple/Many GPUs on a Single GPU Server. Distributed DL: Parallel Experiments on GPU Servers; DDL with GPU Servers and Parameter Servers; DDL AllReduce on GPU Servers.]
  9. 9. DNN Training Time and Researcher Productivity • Distributed Deep Learning – Interactive analysis! – Instant gratification! • Single Host Deep Learning – Google-Envy 10 “My Model’s Training.” Training
  10. 10. What Hardware do you Need? • SingleRoot PCI Complex Server* – 10 Nvidia GTX 1080Ti • 11 GB Memory – 256 GB Ram – 2 Intel Xeon CPUs – 2x56 Gb Infiniband 15K Euro • Nvidia DGX-1 – 8 Nvidia Tesla P100/V100 • 16 GB Memory – 512 GB Ram – 2 Intel Xeon CPUs – 4x100 Gb Infiniband – NVLink** up to 150K Euro *https://www.servethehome.com/single-root-or-dual-root-for-deep-learning-gpu-to-gpu-systems **https://www.microway.com/hpc-tech-tips/comparing-nvlink-vs-pci-e-nvidia-tesla-p100-gpus-openpower-servers/
  11. 11. 12#EUai8 SingleRoot Complex Server with 10 GPUs [Images from: https://www.microway.com/product/octoputer-4u-10-gpu-server-single-root-complex/ ]
  12. 12. Tensorflow GAN Training Example* 13#EUai8 *https://www.servethehome.com/deeplearning11-10x-nvidia-gtx-1080-ti-single-root-deep-learning-server-part-1/
  13. 13. Cluster of Commodity GPU Servers 14#EUai8 InfiniBand Max 1-2 GPU Servers per Rack (2-4 KW per server)
  14. 14. Spark and TF – Cluster Integration 15#EUai8 [Diagram: a Cluster Manager and a Training Data and Model Store serving three workload types: Single GPU Experiment, Parallel Experiments (HyperParam Tuning), and Distributed Training Job.] A mix of commodity GPUs and more powerful GPUs is good for (1) parallel experiments and (2) distributed training.
  15. 15. GPU Resource Requests in Hops 16#EUai8 HopsYARN (Supports GPUs-as-a-Resource) 4 GPUs on any host 10 GPUs on 1 host 100 GPUs on 10 hosts with ‘Infiniband’ 20 GPUs on 2 hosts with ‘Infiniband_P100’ Hops HopsFS
  16. 16. HopsFS: Next Generation HDFS* 17 [Results: 16x higher throughput and 37x more files than HDFS; improved small-file performance**; Scale Challenge Winner (2017).] *https://www.usenix.org/conference/fast17/technical-sessions/presentation/niazi **https://eurosys2017.github.io/assets/data/posters/poster09-Niazi.pdf
  17. 17. TensorFlow Spark API Integration • Tight Integration – Databricks’ Tensorframes and Deep Learning Pipelines • Loose Integration – TensorFlow-on-Spark, Hops TfLauncher • PySpark as a wrapper for TensorFlow 18#EUai8
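To make the "PySpark as a wrapper for TensorFlow" idea above concrete, here is a minimal sketch (not from the slides) of the loose-integration pattern: the Spark driver only schedules tasks, and each executor runs plain TensorFlow code. The SparkSession `spark`, the function name and the toy graph are all illustrative assumptions; it only requires TensorFlow to be installed on every executor.

    # Loose-integration sketch: run one TensorFlow "job" per executor via mapPartitions.
    def train_partition(_):
        import tensorflow as tf            # imported inside the task: runs on the executor
        x = tf.placeholder(tf.float32, shape=[None, 1])
        y = tf.layers.dense(x, 1)          # trivial stand-in for a real model
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            result = sess.run(tf.reduce_mean(y), feed_dict={x: [[1.0], [2.0]]})
        yield result                       # one value back per partition/executor

    num_executors = 2
    results = (spark.sparkContext
               .parallelize(range(num_executors), num_executors)  # one task per executor
               .mapPartitions(train_partition)
               .collect())
    print(results)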
  18. 18. Deep Learning Pipelines 19#EUai8

      graph = tf.Graph()
      with tf.Session(graph=graph) as sess:
          image_arr = utils.imageInputPlaceholder()
          frozen_graph = tfx.strip_and_freeze_until(…)
          transformer = TFImageTransformer(…)

      image_df = readImages("/data/myimages")
      processed_image_df = transformer.transform(image_df)
      …

      select image, driven_by_007(image) as probability
      from car_examples
      order by probability desc limit 6

      Inferencing possible with SparkSQL
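For context, a hedged sketch of how a SQL prediction UDF such as driven_by_007 above can be wired up with Databricks' Deep Learning Pipelines (sparkdl) so that inference runs from SparkSQL. The UDF name, model and table are placeholders mirroring the slide, a SparkSession `spark` is assumed, and the exact sparkdl import paths and signatures may differ between versions; the slide's query also assumes the registered model returns a single probability.

    # Sketch only: register a Keras image model as a Spark SQL UDF via sparkdl.
    from keras.applications import InceptionV3
    from sparkdl import readImages
    from sparkdl.udf.keras_image_model import registerKerasImageUDF   # import path may vary by version

    registerKerasImageUDF("driven_by_007", InceptionV3(weights="imagenet"))  # placeholder model

    image_df = readImages("/data/myimages")            # DataFrame with an image column
    image_df.createOrReplaceTempView("car_examples")

    spark.sql("""
        select image, driven_by_007(image) as probability
        from car_examples
        order by probability desc limit 6
    """).show()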
  19. 19. Hops TfLauncher – TF in Spark 20

      def model_fn(learning_rate, dropout):
          import tensorflow as tf
          from hops import tensorboard, hdfs, devices
          …..

      from hops import tflauncher
      args_dict = {'learning_rate': [0.001], 'dropout': [0.5]}
      tflauncher.launch(spark, model_fn, args_dict)

      Launch TF jobs as Mappers in Spark. “Pure” TensorFlow code in the Executor.
  20. 20. Hops TfLauncher – Parallel Experiments 21#EUai8

      def model_fn(learning_rate, dropout):
          …..

      from hops import tflauncher
      args_dict = {'learning_rate': [0.001, 0.005, 0.01], 'dropout': [0.5, 0.6, 0.7]}
      tflauncher.launch(spark, model_fn, args_dict)

      Launches 3 Executors with 3 different Hyperparameter settings. Each Executor can have 1-N GPUs.
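For concreteness, a hedged sketch of what a complete model_fn for the call above might look like. Only the tflauncher.launch call and args_dict come from the slide (which states that this launches one executor per hyperparameter setting); everything inside model_fn is illustrative: a tiny fully-connected network trained on dummy data.

    def model_fn(learning_rate, dropout):
        # Illustrative body only: a small network on random data.
        import numpy as np
        import tensorflow as tf
        x = tf.placeholder(tf.float32, [None, 784])
        y = tf.placeholder(tf.int64, [None])
        hidden = tf.layers.dropout(tf.layers.dense(x, 128, activation=tf.nn.relu),
                                   rate=dropout, training=True)
        logits = tf.layers.dense(hidden, 10)
        loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
        train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)
        batch_x = np.random.rand(32, 784).astype(np.float32)   # dummy batch
        batch_y = np.random.randint(0, 10, 32)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for _ in range(100):
                _, l = sess.run([train_op, loss], feed_dict={x: batch_x, y: batch_y})
        return l

    from hops import tflauncher
    args_dict = {'learning_rate': [0.001, 0.005, 0.01], 'dropout': [0.5, 0.6, 0.7]}
    tflauncher.launch(spark, model_fn, args_dict)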
  21. 21. New TensorFlow APIs: tf.data.Dataset, tf.estimator.Estimator, tf.data.Iterator 22#EUai8

      def model_fn(features, labels, mode, params):
          …

      dataset = tf.data.TFRecordDataset(["/v/f1.tfrecord", "/v/f2.tfrecord"])
      dataset = dataset.map(...)
      dataset = dataset.shuffle(buffer_size=10000)
      dataset = dataset.batch(32)
      iterator = Iterator.from_dataset(dataset)
      ….

      nn = tf.estimator.Estimator(model_fn=model_fn, params=dict_hyp_params)

      Prefer over RDDs-to-feed_dict.
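A self-contained sketch of the tf.data + Estimator pattern outlined above, in the TF 1.4+ style where the input pipeline lives in an input_fn and the iterator is created with make_one_shot_iterator() rather than the slide's older Iterator.from_dataset call. The TFRecord paths, feature schema and model are placeholders.

    import tensorflow as tf

    def input_fn():
        # Placeholder paths and schema: parsing depends on how the records were written.
        dataset = tf.data.TFRecordDataset(["/v/f1.tfrecord", "/v/f2.tfrecord"])
        dataset = dataset.map(lambda rec: tf.parse_single_example(
            rec, {"x": tf.FixedLenFeature([784], tf.float32),
                  "label": tf.FixedLenFeature([], tf.int64)}))
        dataset = dataset.shuffle(buffer_size=10000).batch(32)
        batch = dataset.make_one_shot_iterator().get_next()
        return {"x": batch["x"]}, batch["label"]

    def model_fn(features, labels, mode, params):
        logits = tf.layers.dense(features["x"], 10)            # placeholder model
        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
        train_op = tf.train.AdamOptimizer(params["learning_rate"]).minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    nn = tf.estimator.Estimator(model_fn=model_fn, params={"learning_rate": 0.001})
    nn.train(input_fn=input_fn, steps=1000)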
  22. 22. Distributed TensorFlow • AllReduce – Horovod by Uber with MPI/NCCL – Baidu AllReduce/MPI in TensorFlow/contrib • Distributed Parameter Servers – TensorFlow-on-Spark – Distributed TensorFlow 23#EUai8 DDL AllReduce on GPU Servers DDL with GPU Servers and Parameter Servers
  23. 23. Asynchronous SGD vs Synchronous SGD • Synchronous Stochastic Gradient Descent (SGD) now dominant, due to improved convergence guarantees: – “Revisiting Synchronous SGD”, Chen et al, ICLR 2016 https://research.google.com/pubs/pub45187.html 24
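In symbols (not from the slide): with N workers, mini-batch B_i on worker i and learning rate \eta, one synchronous data-parallel step applies a single averaged update,

    w_{t+1} = w_t - \eta \cdot \frac{1}{N} \sum_{i=1}^{N} \nabla L(w_t; B_i)

whereas asynchronous SGD applies each worker's gradient as soon as it arrives, typically computed against stale weights w_{t-\tau} with staleness \tau \ge 0; those stale gradients are what weaken its convergence guarantees.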
  24. 24. Distributed TF with Parameter Servers 25 Synchronous SGD with Data Parallelism
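A condensed sketch of what synchronous SGD with data parallelism looks like in plain Distributed TensorFlow (1.x) with parameter servers, as on the slide above. The cluster addresses, job name and task index are placeholders that would differ per process, the model is a stand-in, and the training loop is elided.

    import tensorflow as tf

    # Placeholder cluster: 1 parameter server, 2 workers (addresses are illustrative).
    cluster = tf.train.ClusterSpec({"ps": ["ps0:2222"],
                                    "worker": ["worker0:2222", "worker1:2222"]})
    server = tf.train.Server(cluster, job_name="worker", task_index=0)  # per-process values

    # Variables are placed on the parameter servers, ops on this worker.
    with tf.device(tf.train.replica_device_setter(cluster=cluster,
                                                  worker_device="/job:worker/task:0")):
        x = tf.placeholder(tf.float32, [None, 784])
        y = tf.placeholder(tf.int64, [None])
        logits = tf.layers.dense(x, 10)                       # placeholder model
        loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
        global_step = tf.train.get_or_create_global_step()

        opt = tf.train.GradientDescentOptimizer(0.01)
        # Synchronous SGD: aggregate gradients from all workers before one update.
        opt = tf.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=2,
                                             total_num_replicas=2)
        train_op = opt.minimize(loss, global_step=global_step)

    hooks = [opt.make_session_run_hook(is_chief=True)]
    with tf.train.MonitoredTrainingSession(master=server.target, is_chief=True,
                                           hooks=hooks) as sess:
        pass  # sess.run(train_op, feed_dict=...) in the real training loop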
  25. 25. Tensorflow-on-Spark (Yahoo!) • Rewrite TensorFlow apps to Distributed TensorFlow • Two modes: 1. feed_dict: RDD.mapPartitions() 2. TFReader + queue_runner: direct HDFS access from Tensorflow 26[Image from https://www.slideshare.net/Hadoop_Summit/tensorflowonspark-scalable-tensorflow-learning-on-spark-clusters]
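For reference, a sketch of the TensorFlowOnSpark driver side for the feed_dict mode described above (mode 1). It assumes an existing SparkContext `sc` and an RDD of examples; `map_fun` is the user's Distributed TensorFlow function (abridged here to a stub), and argument names and defaults may differ across TensorFlowOnSpark versions.

    # Abridged TensorFlowOnSpark driver sketch (feed_dict / InputMode.SPARK).
    from tensorflowonspark import TFCluster

    def map_fun(args, ctx):
        # ctx exposes job_name, task_index, cluster_spec, etc.;
        # ordinary Distributed TensorFlow code goes here.
        pass

    num_executors, num_ps = 4, 1
    cluster = TFCluster.run(sc, map_fun, None, num_executors, num_ps,
                            False, TFCluster.InputMode.SPARK)
    cluster.train(images_and_labels_rdd, 1)   # placeholder RDD; partitions are fed via feed_dict
    cluster.shutdown()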
  26. 26. TFonSpark with Spark Streaming 27#EUai8 [Image from https://www.slideshare.net/Hadoop_Summit/tensorflowonspark-scalable-tensorflow-learning-on-spark-clusters]
  27. 27. All-Reduce/MPI 28 GPU 0 GPU 1 GPU 2 GPU 3 send send send send recv recv recv recv
  28. 28. AllReduce: Minimize Inter-Host B/W 29 Only one slow worker or comms link is needed to bottleneck DNN training time.
  29. 29. AllReduce Algorithm • AllReduce sums all Gradients in N Layers (L1..LN) using N GPUs in parallel (simplified steps shown). GPU 0 GPU 1 GPU 2 GPU 3 L1 L2 L3 L4 L1 L2 L3 L4 L1 L2 L3 L4 L1 L2 L3 L4 Backprop
  30. 30. AllReduce Algorithm GPU 0 GPU 1 GPU 2 GPU 3 L10+L11+L12+L13 L2 L3 L4 Backprop L10+L11+L12+L13 L2 L3 L4 L10+L11+L12+L13 L2 L3 L4 L10+L11+L12+L13 L2 L3 L4 • Aggregate Gradients from the first layer (L1) while sending Gradients for L2
  31. 31. AllReduce Algorithm GPU 0 GPU 1 GPU 2 GPU 3 Backprop L10+L11+L12+L13 L20+L21+L22+L23 L3 L4 L10+L11+L12+L13 L20+L21+L22+L23 L3 L4 L10+L11+L12+L13 L20+L21+L22+L23 L3 L4 L10+L11+L12+L13 L20+L21+L22+L23 L3 L4 • Broadcast Gradients from higher layers while computing Gradients at lower layers.
  32. 32. AllReduce Algorithm GPU 0 GPU 1 GPU 2 GPU 3 Backprop L10+L11+L12+L13 L20+L21+L22+L23 L30+L31+L32+L33 L4 L10+L11+L12+L13 L20+L21+L22+L23 L30+L31+L32+L33 L4 L10+L11+L12+L13 L20+L21+L22+L23 L30+L31+L32+L33 L4 L10+L11+L12+L13 L20+L21+L22+L23 L30+L31+L32+L33 L4 • Nearly there.
  33. 33. AllReduce Algorithm GPU 0 GPU 1 GPU 2 GPU 3 L10+L11+L12+L13 L20+L21+L22+L23 L30+L31+L32+L33 L40+L41+L42+L43 L10+L11+L12+L13 L20+L21+L22+L23 L30+L31+L32+L33 L40+L41+L42+L43 L10+L11+L12+L13 L20+L21+L22+L23 L30+L31+L32+L33 L40+L41+L42+L43 L10+L11+L12+L13 L20+L21+L22+L23 L30+L31+L32+L33 L40+L41+L42+L43 • Finished an iteration.
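The per-layer pipelining in the slides above reduces, conceptually, to summing each layer's gradient across all GPUs so that every GPU ends an iteration with L1_0+L1_1+L1_2+L1_3, L2_0+…, and so on. A tiny NumPy sketch of just that arithmetic (no real communication, array sizes made up):

    import numpy as np

    num_gpus, num_layers = 4, 4
    # Hypothetical per-GPU gradients: grads[g][l] is GPU g's gradient for layer l.
    grads = [[np.random.randn(3) for _ in range(num_layers)] for _ in range(num_gpus)]

    # AllReduce result: the same summed gradient per layer, replicated on every GPU.
    reduced = [sum(grads[g][l] for g in range(num_gpus)) for l in range(num_layers)]
    allreduced = [[reduced[l] for l in range(num_layers)] for _ in range(num_gpus)]

    # In practice each sum is computed ring-wise and layer by layer, overlapping with
    # backprop: communication for layer l proceeds while gradients for layer l+1 are
    # still being computed, as illustrated in the slides.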
  34. 34. Hops AllReduce/Horovod/TensorFlow 35#EUai8

      import horovod.tensorflow as hvd

      def conv_model(feature, target, mode):
          …..

      def main(_):
          hvd.init()
          opt = hvd.DistributedOptimizer(opt)
          if hvd.local_rank() == 0:
              hooks = [hvd.BroadcastGlobalVariablesHook(0), ..]
              …..
          else:
              hooks = [hvd.BroadcastGlobalVariablesHook(0), ..]
              …..

      from hops import allreduce
      allreduce.launch(spark, 'hdfs:///Projects/…/all_reduce.ipynb')

      “Pure” TensorFlow code
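A slightly fuller, hedged version of the Horovod pattern on the slide, showing where the optimizer wrapping, the initial variable broadcast and rank-0-only checkpointing usually go; the model, learning rate and checkpoint path are placeholders, and the Hops-side launch call is the one shown on the slide.

    import tensorflow as tf
    import horovod.tensorflow as hvd

    def main(_):
        hvd.init()                                          # one process per GPU
        x = tf.placeholder(tf.float32, [None, 784])
        y = tf.placeholder(tf.int64, [None])
        logits = tf.layers.dense(x, 10)                     # placeholder model
        loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)

        opt = tf.train.AdagradOptimizer(0.01 * hvd.size())  # scale LR with worker count
        opt = hvd.DistributedOptimizer(opt)                 # allreduce gradients across GPUs
        global_step = tf.train.get_or_create_global_step()
        train_op = opt.minimize(loss, global_step=global_step)

        hooks = [hvd.BroadcastGlobalVariablesHook(0),       # sync initial weights from rank 0
                 tf.train.StopAtStepHook(last_step=1000)]
        ckpt_dir = "/tmp/train_logs" if hvd.rank() == 0 else None  # checkpoint only on rank 0

        with tf.train.MonitoredTrainingSession(checkpoint_dir=ckpt_dir, hooks=hooks) as sess:
            pass  # sess.run(train_op, feed_dict=...) in the real training loop

    if __name__ == "__main__":
        tf.app.run()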
  35. 35. Parameter Server vs AllReduce (Uber)* 36 *https://github.com/uber/horovod Setup: 16 servers with 4 P100 GPUs each connected by 40 Gbit/s network (synthetic data). VGG model is larger
  36. 36. Distributed Synchronous SGD: N/W is the Bottleneck 37 [Chart: amount of work completed over time for 1 GPU vs 4 GPUs, with network (N/W) communication phases interleaved between bursts of computation.] Reduce N/W Comms Time, Increase Computation Time (Amdahl's Law).
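A back-of-the-envelope illustration of the Amdahl's Law point above: if the network phase of each synchronous step cannot be parallelised, it caps the speedup no matter how many GPUs are added. The 90/10 split below is made up for illustration.

    # Amdahl's Law sketch for distributed synchronous SGD.
    # Assume a fraction p of each step is parallelisable computation and
    # (1 - p) is serial network communication (gradient exchange).
    def speedup(p, n_gpus):
        return 1.0 / ((1.0 - p) + p / n_gpus)

    p = 0.90                       # 90% compute, 10% comms per step (illustrative)
    for n in (1, 4, 16, 64, 256):
        print(n, round(speedup(p, n), 2))
    # Even with 256 GPUs the speedup stays below 1 / (1 - p) = 10x, which is why
    # reducing comms time, or overlapping it with computation, matters so much.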
  37. 37. Hopsworks: Tensorflow/Spark-as-a-Service 38#EUai8
  38. 38. Hopsworks: Full AI Hierarchy of Needs 39 [Architecture diagram of Hopsworks: Develop / Train / Test / Deploy; Projects, Datasets, Users; MySQL Cluster, Hive, InfluxDB, ElasticSearch, Kafka; HopsFS / YARN; Spark, Flink, Tensorflow; Jupyter, Zeppelin; Jobs, Kibana, Grafana; REST API.]
  39. 39. Hopsworks Abstractions 40 A Project is a Grouping of Users and Data. [Diagram: projects Proj-42, Proj-X and Proj-AllCompanyDB sharing a Topic and the dataset /Projs/My/Data.] Ismail et al, Hopsworks: Improving User Experience and Development on Hadoop with Scalable, Strongly Consistent Metadata, ICDCS 2017
  40. 40. Per-Project Conda Libs in Hopsworks 41#EUai8
  41. 41. Dela* 42 Peer-to-Peer Search and Download for Huge DataSets (ImageNet, YouTube8M, MsCoCo, Reddit, etc) *http://ieeexplore.ieee.org/document/7980225/ (ICDCS 2017)
  42. 42. DEMO 43#EUai8 Register and Play for today: http://spark.hops.site
  43. 43. Conclusions • Many good frameworks for TF and Spark – TensorFlowOnSpark, Deep Learning Pipelines • Hopsworks support for TF and Spark – GPUs-as-a-Resource in HopsYARN – TfLauncher, TensorFlow-on-Spark, Horovod – Jupyter with Conda Support • More on GPU-Servers at www.logicalclocks.com 44#EUai8
  44. 44. Hops Heads. Active: Jim Dowling, Seif Haridi, Gautier Berthou, Salman Niazi, Mahmoud Ismail, Theofilos Kakantousis, Ermias Gebremeskel, Antonios Kouzoupis, Alex Ormenisan, Fabio Buso, Robin Andersson, August Bonds, Filotas Siskos, Mahmoud Hamed. Alumni: Roberto Bampi, ArunaKumari Yedurupaka, Tobias Johansson, Fanti Machmount Al Samisti, Braulio Grana, Adam Alpire, Zahin Azher Rashid, Vasileios Giannokostas, Johan Svedlund Nordström, Rizvi Hasan, Paul Mälzer, Bram Leenders, Juan Roca, Misganu Dessalegn, K “Sri” Srijeyanthan, Jude D’Souza, Alberto Lorente, Andre Moré, Ali Gholami, Davis Jaunzems, Stig Viaene, Hooman Peiro, Evangelos Savvidis, Steffen Grohsschmiedt, Qi Qi, Gayana Chandrasekara, Nikolaos Stanogias, Daniel Bali, Ioannis Kerkinos, Peter Buechler, Pushparaj Motamari, Hamid Afzali, Wasif Malik, Lalith Suresh, Mariano Valles, Ying Lieu. Please Follow Us! @hopshadoop Please Star Us! http://github.com/hopshadoop/hopsworks
