
High Performance Distributed TensorFlow with GPUs - NYC Workshop - July 9 2017

http://pipeline.io

Title

PipelineAI Distributed Spark ML + TensorFlow AI + GPU Workshop

*A GPU-based cloud instance will be provided to each attendee as part of this event

Highlights

We will each build an end-to-end, continuous TensorFlow AI model training and deployment pipeline on our own GPU-based cloud instance.

At the end, we will combine our cloud instances to create the LARGEST Distributed TensorFlow AI Training and Serving Cluster in the WORLD!

Pre-requisites

Just a modern browser, internet connection, and a good night's sleep! We'll provide the rest.

Agenda

Spark ML

TensorFlow AI

Storing and Serving Models with HDFS

Trade-offs of CPU vs. GPU, Scale Up vs. Scale Out

CUDA + cuDNN GPU Development Overview

TensorFlow Model Checkpointing, Saving, Exporting, and Importing

Distributed TensorFlow AI Model Training (Distributed TensorFlow)

TensorFlow's Accelerated Linear Algebra Framework (XLA)

TensorFlow's Just-in-Time (JIT) and Ahead-of-Time (AOT) Compilers

Centralized Logging and Visualization of Distributed TensorFlow Training (TensorBoard)

Distributed TensorFlow AI Model Serving/Predicting (TensorFlow Serving)

Centralized Logging and Metrics Collection (Prometheus, Grafana)

Continuous TensorFlow AI Model Deployment (TensorFlow, Airflow)

Hybrid Cross-Cloud and On-Premise Deployments (Kubernetes)

High-Performance and Fault-Tolerant Micro-services (NetflixOSS)

Bio

Chris Fregly is Founder and Research Engineer at PipelineIO, a Streaming Machine Learning and Artificial Intelligence Startup based in San Francisco. He is also an Apache Spark Contributor, a Netflix Open Source Committer, founder of the Global Advanced Spark and TensorFlow Meetup, and author of the O’Reilly Training and Video Series titled "High Performance TensorFlow in Production."

Previously, Chris was a Distributed Systems Engineer at Netflix, a Data Solutions Engineer at Databricks, and a Founding Member and Principal Engineer at the IBM Spark Technology Center in San Francisco.


Github Repo

https://github.com/fluxcapacitor/pipeline

Video

https://youtu.be/oNf3I1fVmg8



  1. 1. HIGH PERFORMANCE TENSORFLOW IN PRODUCTION WITH GPUS!! CHRIS FREGLY, FOUNDER @ PIPELINE.IO PIPELINE.AI TENSORFLOW GPU WORKSHOP NEW YORK - JULY 8, 2017
  2. 2. INTRODUCTIONS: ME § Chris Fregly, Research Engineer @ PipelineIO § Formerly Netflix and Databricks § Advanced Spark and TensorFlow Meetup -- Please Join Our 18,000+ Members Globally!! * San Francisco * Chicago * Washington DC * London
  3. 3. INTRODUCTIONS: YOU § Software Engineer or Data Scientist interested in optimizing and deploying TensorFlow models to production § Assume you have a working knowledge of TensorFlow
  4. 4. CONTENT SUMMARY § 50% Training Optimizations (TensorFlow, XLA, Tools) § 50% Deployment and Inference Optimizations (Serving) § Why Heavy Focus on Inference? § Training: boring batch, O(num_data_scientists) § Inference: exciting realtime, O(num_users_of_app) § We Use Simple Models to Highlight Optimizations § Warning: This is not introductory TensorFlow material!
  5. 5. 100% OPEN SOURCE CODE § https://github.com/fluxcapacitor/pipeline/ § Please Star this Repo! § Slides, code, notebooks, Docker images available here: https://github.com/fluxcapacitor/pipeline/ gpu.ml
  6. 6. HANDS-ON EXERCISES § Combo of Jupyter Notebooks and Command Line § Command Line through Jupyter Terminal § Some Exercises Based on Experimental Features. In Other Words, You Will See Errors. It’s OK!!
  7. 7. YOU WILL LEARN… § To Inspect and Debug Models § Training and Predicting Best Practices § To Distribute Training Across a Cluster § To Optimize Training with Queue Feeders § To Optimize Training with XLA JIT Compiler § To Optimize Inference with AOT and Graph Transform Tool (GTT) § Key Components of TensorFlow Serving § To Deploy Models with TensorFlow Serving § To Optimize Inference by Tuning TensorFlow Serving
  8. 8. AGENDA § GPUs and TensorFlow § Train and Debug TensorFlow Model § Train with Distributed TensorFlow Cluster § Optimize Model with XLA JIT Compiler § Optimize Model with XLA AOT and Graph Transforms § Deploy Model to TensorFlow Serving Runtime § Optimize TensorFlow Serving Runtime § Wrap-up and Q&A
  9. 9. EVERYBODY GETS A GPU!
  10. 10. SETUP ENVIRONMENT § Step 1: Browse to the following: http://allocator.community.pipeline.io/allocate § Step 2: Browse to the following: http://<ip-address> Need Help? Use the Chat!
  11. 11. VERIFY SETUP http://<ip-address> Any username, Any password!
  12. 12. LET’S EXPLORE OUR ENVIRONMENT § Navigate to the following notebook: 01_Explore_Environment § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  13. 13. PULSE CHECK
  14. 14. BREAK § https://github.com/fluxcapacitor/pipeline/ § Slides, code, notebooks, Docker images available here: https://github.com/fluxcapacitor/pipeline/ gpu.ml Need Help? Use the Chat!
  15. 15. SETTING UP TENSORFLOW WITH GPUS § Very Painful! § Especially inside Docker § Use nvidia-docker § Especially on Kubernetes! § Use Kubernetes 1.7+ § Check Out Pipeline.IO for Links to GitHub + DockerHub
  16. 16. GPU HALF-PRECISION SUPPORT § FP16, INT8 are “Half Precision” § Supported by Pascal P100 (2016) and Volta V100 (2017) § Flexible FP32 GPU Cores Can Fit 2 FP16’s for 2x Throughput! § Half-Precision is OK for Approximate Deep Learning Use Cases
  17. 17. VOLTA V100 RECENTLY ANNOUNCED § 84 Streaming Multiprocessors (SM’s) § 5,376 GPU Cores § 672 Tensor Cores (ie. Google TPU) § Mixed FP16/FP32 Precision § More Shared Memory § New L0 Instruction Cache § Faster L1 Data Cache § V100 vs. P100 Performance § 12x TFLOPS @ Peak Training § 6x Inference Throughput
  18. 18. V100 AND CUDA 9 § Independent Thread Scheduling - Finally!! § Similar to CPU fine-grained thread synchronization semantics § Allows GPU to yield execution of any thread § Still Optimized for SIMT (Single Instruction, Multiple Threads) § SIMT units automatically scheduled together § Explicit Synchronization P100 V100
  19. 19. GPU CUDA PROGRAMMING § Barbaric, But Fun! § Must Know Underlying Hardware Very Well § Many Great Debuggers/Profilers § Hardware Changes are Painful! § Newer CUDA versions automatically JIT-compile old CUDA code to new NVPTX § Not optimal, of course
  20. 20. CUDA STREAMS § Asynchronous I/O Transfer § Overlap Compute and I/O § Keeps GPUs Saturated § Fundamental to Queue Framework in TensorFlow
  21. 21. LET’S SEE WHAT THIS THING CAN DO! § Navigate to the following notebook: 01a_Explore_GPU 01b_Explore_Numba § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  22. 22. PULSE CHECK
  23. 23. BREAK § https://github.com/fluxcapacitor/pipeline/ § Slides, code, notebooks, Docker images available here: https://github.com/fluxcapacitor/pipeline/ gpu.ml Need Help? Use the Chat!
  24. 24. AGENDA § GPUs and TensorFlow § Train and Debug TensorFlow Model § Train with Distributed TensorFlow Cluster § Optimize Model with XLA JIT Compiler § Optimize Model with XLA AOT and Graph Transforms § Deploy Model to TensorFlow Serving Runtime § Optimize TensorFlow Serving Runtime § Wrap-up and Q&A
  25. 25. TRAINING TERMINOLOGY § Tensors: N-Dimensional Arrays § ie. Scalar, Vector, Matrix § Operations: MatMul, Add, SummaryLog,… § Graph: Graph of Operations (DAG) § Session: Contains Graph(s) § Feeds: Feed inputs into Operation § Fetches: Fetch output from Operation § Variables: What we learn through training § aka “weights”, “parameters” § Devices: Hardware device on which we train (diagram: User feeds inputs and fetches outputs; TensorFlow flows tensors, performs operations, and trains variables) with tf.device(“worker:0/device/gpu:0,worker:1/device/gpu:0”)
  26. 26. TRAINING DEVICES § cpu:0 § By default, all CPUs § Requires extra config to target a CPU § gpu:0..n § Each GPU has a unique id § TF usually prefers a single GPU § xla_cpu:0, xla_gpu:0..n § “JIT Compiler Device” § Hints TensorFlow to attempt JIT Compile with tf.device(“/cpu:0”): with tf.device(“/gpu:0”): with tf.device(“/gpu:1”):
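
For reference, a minimal device-placement sketch along the lines of the fragments above (the shapes and variable names are illustrative, not taken from the workshop notebooks):

    import tensorflow as tf

    # Pin the variable to the CPU and the heavy math to a GPU
    with tf.device("/cpu:0"):
        W = tf.Variable(tf.random_normal([1000, 1000]), name="W")

    with tf.device("/gpu:0"):
        x = tf.random_normal([1000, 1000])
        y = tf.matmul(x, W)

    # log_device_placement prints where each op actually landed;
    # allow_soft_placement falls back to CPU if a GPU is unavailable
    config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(y)
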
  27. 27. TRAINING METRICS: TENSORBOARD § Summary Ops § Event Files /root/tensorboard/linear/<version>/events… § Tags § Organize data within Tensorboard UI loss_summary_op = tf.summary.scalar('loss', loss) merge_all_summary_op = tf.summary.merge_all() summary_writer = tf.summary.FileWriter( '/root/tensorboard/linear/<version>', graph=sess.graph)
  28. 28. TRAINING ON EXISTING INFRASTRUCTURE § Data Processing § HDFS/Hadoop § Spark § Containers § Docker § Schedulers § Kubernetes § Mesos <dependency> <groupId>org.tensorflow</groupId> <artifactId>tensorflow-hadoop</artifactId> <version>1.0-SNAPSHOT</version> </dependency> https://github.com/tensorflow/ecosystem
  29. 29. TRAINING PIPELINES: QUEUE + DATASET § Don’t Use feed_dict for Production Workloads!! § feed_dict Requires Python <-> C++ Serialization § Retrieval is Single-threaded, Synchronous, SLOW! § Can’t Retrieve Until Current Batch is Complete § CPUs/GPUs Not Fully Utilized! § Use Queue or Dataset API
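
As a rough sketch of the Dataset alternative (in TF 1.4+ this lives under tf.data; in TF 1.2/1.3 the same API is under tf.contrib.data; the file name and feature spec are placeholders):

    import tensorflow as tf

    def parse_example(serialized):
        # Placeholder parser -- adapt the feature spec to your TFRecord schema
        features = tf.parse_single_example(
            serialized,
            features={"image": tf.FixedLenFeature([784], tf.float32),
                      "label": tf.FixedLenFeature([], tf.int64)})
        return features["image"], features["label"]

    dataset = tf.data.TFRecordDataset(["train.tfrecords"])       # hypothetical file
    dataset = dataset.map(parse_example, num_parallel_calls=4)   # multi-threaded pre-processing
    dataset = dataset.shuffle(buffer_size=10000).batch(64).repeat()

    iterator = dataset.make_one_shot_iterator()
    images, labels = iterator.get_next()   # feed these tensors directly into the model
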
  30. 30. QUEUES § More than Just a Traditional Queue § Perform I/O, pre-processing, cropping, shuffling § Pulls from HDFS, S3, Google Storage, Kafka, ... § Combine many small files into large TFRecord files § Typically use CPUs to focus GPUs on compute § Uses CUDA Streams
  31. 31. DATA MOVEMENT WITH QUEUES § GPU Pulls Batch from Queue (CUDA Streams) § GPU pulls next batch while processing current batch GPUs Stay Fully Utilized!
  32. 32. QUEUE CAPACITY PLANNING § batch_size § # examples / batch (ie. 64 jpg) § Limited by GPU RAM § num_processing_threads § CPU threads pull and pre-process batches of data § Limited by CPU Cores § queue_capacity § Limited by CPU RAM (ie. 5 * batch_size)
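
A sketch of how those three knobs map onto the queue-based input API (the numbers and file name are illustrative):

    import tensorflow as tf

    batch_size = 64                   # limited by GPU RAM
    num_processing_threads = 4        # limited by CPU cores
    queue_capacity = 5 * batch_size   # limited by CPU RAM

    filename_queue = tf.train.string_input_producer(["train.tfrecords"])  # hypothetical file
    reader = tf.TFRecordReader()
    _, serialized = reader.read(filename_queue)

    # `num_threads` CPU threads read and pre-process examples to keep the queue full
    example_batch = tf.train.shuffle_batch(
        [serialized],
        batch_size=batch_size,
        num_threads=num_processing_threads,
        capacity=queue_capacity,
        min_after_dequeue=batch_size)

    # At runtime, tf.train.start_queue_runners(sess) starts the feeder threads
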
  33. 33. DETECT UNDERUTILIZED CPUS, GPUS § Instrument training code to generate “timelines” § Analyze with Google Web Tracing Framework (WTF) § Monitor CPU with `top`, GPU with `nvidia-smi` http://google.github.io/tracing-framework/ from tensorflow.python.client import timeline trace = timeline.Timeline(step_stats=run_metadata.step_stats) with open('timeline.json', 'w') as trace_file: trace_file.write( trace.generate_chrome_trace_format(show_memory=True))
  34. 34. LET’S FEED DATA WITH A QUEUE § Navigate to the following notebook: 02_Feed_Queue_HDFS § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  35. 35. PULSE CHECK
  36. 36. BREAK § https://github.com/fluxcapacitor/pipeline/ § Slides, code, notebooks, Docker images available here: https://github.com/fluxcapacitor/pipeline/ gpu.ml Need Help? Use the Chat!
  37. 37. TENSORFLOW MODEL § MetaGraph § Combines GraphDef and Metadata § GraphDef § Architecture of your model (nodes, edges) § Metadata § Asset: Accompanying assets to your model § SignatureDef: Maps external : internal tensors § Variables § Stored separately during training (checkpoint) § Allows training to continue from any checkpoint § Variables are “frozen” into Constants when deployed for inference GraphDef x W mul add b MetaGraph Metadata Assets SignatureDef Tags Version Variables: “W” : 0.328 “b” : -1.407
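
A small sketch of checkpointing Variables during training and restoring them later (the paths and step counts are illustrative):

    import tensorflow as tf

    W = tf.Variable(0.0, name="W")
    b = tf.Variable(0.0, name="b")
    saver = tf.train.Saver()   # checkpoints Variables; the GraphDef itself stays static

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... run training steps ...
        saver.save(sess, "/root/models/linear/model.ckpt", global_step=1000)

    # Later (or on another worker), restore and continue training from the checkpoint
    with tf.Session() as sess:
        saver.restore(sess, "/root/models/linear/model.ckpt-1000")
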
  38. 38. TENSORFLOW SESSION Session graph: GraphDef Variables: “W” : 0.328 “b” : -1.407 Variables are Periodically Checkpointed GraphDef is Static
  39. 39. TENSORFLOW DEBUGGER § Step through Operations § Inspect Inputs and Outputs § Wrap Session in Debug Session sess = tf.Session(config=config) sess = tf_debug.LocalCLIDebugWrapperSession(sess)
  40. 40. LET’S DEBUG A MODEL § Navigate to the following notebook: 04_Debug_Model § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  41. 41. BATCH NORMALIZATION § Each Mini-Batch May Have Wildly Different Distributions § Normalize per batch (and layer) § Speeds up Training!! § Weights are Learned Quicker § Final Model is More Accurate § Final mean and variance will be folded into Graph later -- Always Use Batch Normalization! -- z = tf.matmul(a_prev, W) a = tf.nn.relu(z) a_mean, a_var = tf.nn.moments(a, [0]) scale = tf.Variable(tf.ones([depth/channels])) beta = tf.Variable(tf.zeros([depth/channels])) bn = tf.nn.batch_normalization(a, a_mean, a_var, beta, scale, 0.001)
  42. 42. AGENDA § GPU Environment § Train and Debug TensorFlow Model § Train with Distributed TensorFlow Cluster § Optimize Model with XLA JIT Compiler § Optimize Model with XLA AOT and Graph Transforms § Deploy Model to TensorFlow Serving Runtime § Optimize TensorFlow Serving Runtime § Wrap-up and Q&A
  43. 43. MULTI-GPU TRAINING (SINGLE NODE) § Variables stored on CPU (cpu:0) § Model graph (aka “replica”, “tower”) is copied to each GPU(gpu:0, gpu:1, …) Multi-GPU Training Steps: 1. CPU transfers model to each GPU 2. CPU waits on all GPUs to finish batch 3. CPU copies all gradients back from all GPUs 4. CPU synchronizes + AVG all gradients from GPUs 5. CPU updates GPUs with new variables/weights 6. Repeat Step 1 until reaching stop condition (ie. max_epochs)
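
A condensed sketch of that loop -- one model "tower" per GPU, gradients averaged and applied on the CPU (the toy model and input tensors are placeholders, not the workshop's model):

    import tensorflow as tf

    num_gpus = 2
    optimizer = tf.train.GradientDescentOptimizer(0.01)

    with tf.device("/cpu:0"):
        W = tf.Variable(tf.random_normal([10, 1]), name="W")   # variables live on the CPU

    def tower_loss(batch):
        # Hypothetical replica/"tower": same weights, different slice of the data
        return tf.reduce_mean(tf.square(tf.matmul(batch, W)))

    tower_grads = []
    for i in range(num_gpus):
        with tf.device("/gpu:%d" % i):
            batch = tf.random_normal([64, 10])   # stand-in for a real input pipeline
            tower_grads.append(optimizer.compute_gradients(tower_loss(batch)))

    # Average each variable's gradient across towers, then apply the update once
    with tf.device("/cpu:0"):
        avg_grads = []
        for grads_and_vars in zip(*tower_grads):
            grads = tf.stack([g for g, _ in grads_and_vars])
            avg_grads.append((tf.reduce_mean(grads, axis=0), grads_and_vars[0][1]))
        train_op = optimizer.apply_gradients(avg_grads)
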
  44. 44. DISTRIBUTED, MULTI-NODE TRAINING § TensorFlow Automatically Inserts Send and Receive Ops into Graph § Parameter Server Synchronously Aggregates Updates to Variables § Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS (diagram: Worker0-Worker2, each with one to four GPUs, plus a Parameter Server)
  45. 45. SYNCHRONOUS VS. ASYNCHRONOUS § Synchronous § Nodes compute gradients § Nodes update Parameter Server (PS) § Nodes sync on PS for latest gradients § Asynchronous § Some nodes delay in computing gradients § Nodes don’t update PS § Nodes get stale gradients from PS § May not converge due to stale reads!
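
For the synchronous case, TF 1.x ships a wrapper optimizer that holds back the update until enough replicas have reported their gradients; a minimal sketch with a toy loss:

    import tensorflow as tf

    W = tf.Variable(3.0, name="W")
    loss = tf.square(W)   # toy loss
    global_step = tf.train.get_or_create_global_step()

    # The Parameter Server applies an update only after `replicas_to_aggregate`
    # workers have contributed gradients for the current step
    base_opt = tf.train.GradientDescentOptimizer(0.01)
    sync_opt = tf.train.SyncReplicasOptimizer(
        base_opt, replicas_to_aggregate=3, total_num_replicas=3)
    train_op = sync_opt.minimize(loss, global_step=global_step)

    # The chief passes this hook to its training session to initialize the sync queues
    sync_hook = sync_opt.make_session_run_hook(is_chief=True)
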
  46. 46. DATA PARALLEL VS MODEL PARALLEL § Data Parallel (“Between-Graph Replication”) § Send exact same model to each device § Each device operates on its partition of data § ie. Spark sends same function to many workers § Each worker operates on their partition of data § Model Parallel (“In-Graph Replication”) § Send different partition of model to each device § Each device operates on all data Very Difficult!! Required for Large Models. (GPU RAM Limitation)
  47. 47. DISTRIBUTED TENSORFLOW CONCEPTS § Client § Program that builds a TF Graph, constructs a session, interacts with the cluster § Written in Python, C++ § Cluster § Set of distributed nodes executing a graph § Nodes can play any role § Jobs (“Roles”) § Parameter Server (“ps”) stores and updates variables § Worker (“worker”) performs compute-intensive tasks (stateless) § Assigned 0..* tasks § Task (“Server Process”) “ps” and “worker” are conventional names
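
A minimal sketch of describing the cluster and starting one task (host names, ports, and task indices are placeholders):

    import tensorflow as tf

    # The same ClusterSpec is handed to every process; "ps" and "worker" are the conventional job names
    cluster = tf.train.ClusterSpec({
        "ps":     ["ps0.example.com:2222"],
        "worker": ["worker0.example.com:2222",
                   "worker1.example.com:2222"]})

    # Each process starts one Server for its own job_name / task_index
    server = tf.train.Server(cluster, job_name="worker", task_index=0)
    # A "ps" task would simply call server.join() and serve variables

    # Workers pin variables to the ps job and computation to themselves
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:0", cluster=cluster)):
        W = tf.Variable(0.0, name="W")
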
  48. 48. CHIEF WORKER § Worker Task 0 is Chosen by Default § Task 0 is guaranteed to exist § Implements Maintenance Tasks § Writes checkpoints § Initializes parameters at start of training § Writes log summaries § Parameter Server health checks
  49. 49. NODE AND PROCESS FAILURES § Checkpoint to Persistent Storage (HDFS, S3) § Use MonitoredTrainingSession and Hooks § Use a Good Cluster Orchestrator (ie. Kubernetes, Mesos) § Understand Failure Modes and Recovery States Stateless, Not Bad: Training Continues Stateful, Bad: Training Must Stop Dios Mio! Long Night Ahead…
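
A sketch of the recommended pattern, assuming the `server` from the ClusterSpec sketch above (the toy train_op and the HDFS checkpoint path are illustrative):

    import tensorflow as tf

    is_chief = True   # by convention, worker task 0

    global_step = tf.train.get_or_create_global_step()
    W = tf.Variable(3.0, name="W")
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        tf.square(W), global_step=global_step)

    # MonitoredTrainingSession handles initialization, checkpoint save/restore,
    # summaries, and recovery after a worker is preempted and restarted
    with tf.train.MonitoredTrainingSession(
            master=server.target,
            is_chief=is_chief,
            checkpoint_dir="hdfs://namenode:8020/models/linear",   # persistent storage
            hooks=[tf.train.StopAtStepHook(last_step=100000)]) as sess:
        while not sess.should_stop():
            sess.run(train_op)
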
  50. 50. VALIDATING DISTRIBUTED MODEL § Separate Training and Validation Clusters § Validate using Saved Checkpoints from Parameter Servers § Avoids Resource Contention Training Cluster Validation Cluster Parameter Server Cluster
  51. 51. LET’S TRAIN WITH DISTRIBUTED CPU § Navigate to the following notebook: 05_Train_Model_Distributed_CPU § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  52. 52. LET’S TRAIN WITH DISTRIBUTED GPU § Navigate to the following notebook: 05a_Train_Model_Distributed_GPU § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  53. 53. NEW(‘ISH): EXPERIMENT + ESTIMATOR § Higher-Level APIs Simplify Distributed Training § Picks Up Configuration from Environment § Supports Custom Models (ie. Keras) § Used for Training, Validation, and Prediction § API is Changing, but Patterns Remain the Same § Works Well with Google Cloud ML (Surprised?!)
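
A bare-bones Estimator sketch (TF 1.2-era API; the model_fn is a toy linear model, not the workshop's):

    import tensorflow as tf

    def model_fn(features, labels, mode):
        # Toy linear model: y = W*x + b
        W = tf.Variable(0.0, name="W")
        b = tf.Variable(0.0, name="b")
        predictions = W * features["x"] + b
        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode, predictions=predictions)
        loss = tf.reduce_mean(tf.square(predictions - labels))
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    def input_fn():
        # Stand-in for a real Dataset/Queue input pipeline
        return {"x": tf.constant([1.0, 2.0, 3.0, 4.0])}, tf.constant([0.0, -1.0, -2.0, -3.0])

    estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir="/tmp/linear")
    estimator.train(input_fn=input_fn, steps=1000)
    print(estimator.evaluate(input_fn=input_fn, steps=1))
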
  54. 54. PULSE CHECK
  55. 55. BREAK § https://github.com/fluxcapacitor/pipeline/ § Slides, code, notebooks, Docker images available here: https://github.com/fluxcapacitor/pipeline/ gpu.ml Need Help? Use the Chat!
  56. 56. AGENDA § GPUs and TensorFlow § Train and Debug TensorFlow Model § Train with Distributed TensorFlow Cluster § Optimize Model with XLA JIT Compiler § Optimize Model with XLA AOT and Graph Transforms § Deploy Model to TensorFlow Serving Runtime § Optimize TensorFlow Serving Runtime § Wrap-up and Q&A
  57. 57. XLA FRAMEWORK § Accelerated Linear Algebra (XLA) § Goals: § Reduce reliance on custom operators § Improve execution speed § Improve memory usage § Reduce mobile footprint § Improve portability § Helps TF Stay Flexible and Performant
  58. 58. XLA HIGH LEVEL OPTIMIZER (HLO) § Compiler Intermediate Representation (IR) § Independent of source and target language § Define Graphs using HLO Language § XLA Step 1 Emits Target-Independent HLO § XLA Step 2 Emits Target-Dependent LLVM § LLVM Emits Native Code Specific to Target § Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
  59. 59. JIT COMPILER § Just-In-Time Compiler § Built on XLA Framework § Goals: § Reduce memory movement – especially useful on GPUs § Reduce overhead of multiple function calls § Similar to Operator Fusing in Spark 2.0 § Unroll Loops, Fuse Operators, Fold Constants, … § Scope to session, device, or `with jit_scope():`
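
Two ways to switch the XLA JIT on in TF 1.x, sketched below (the contrib import path may move between versions):

    import tensorflow as tf
    from tensorflow.contrib.compiler import jit

    # 1) Session-wide: JIT-compile eligible subgraphs via XLA
    config = tf.ConfigProto()
    config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
    sess = tf.Session(config=config)

    # 2) Scoped: only ops built inside this scope are clustered for XLA compilation
    with jit.experimental_jit_scope():
        x = tf.random_normal([1000, 1000])
        y = tf.matmul(x, x)
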
  60. 60. VISUALIZING JIT COMPILER IN ACTION Before After Google Web Tracing Framework: http://google.github.io/tracing-framework/ from tensorflow.python.client import timeline trace = timeline.Timeline(step_stats=run_metadata.step_stats) with open('timeline.json', 'w') as trace_file: trace_file.write( trace.generate_chrome_trace_format(show_memory=True))
  61. 61. VISUALIZING FUSING OPERATORS pip install graphviz dot -Tpng /tmp/hlo_graph_1.w5LcGs.dot -o hlo_graph_1.png GraphViz: http://www.graphviz.org hlo_*.dot files generated by XLA
  62. 62. LET’S TRAIN WITH XLA CPU § Navigate to the following notebook: 06_Train_Model_XLA_CPU § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  63. 63. LET’S TRAIN WITH XLA GPU § Navigate to the following notebook: 06a_Train_Model_XLA_GPU § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  64. 64. PULSE CHECK
  65. 65. BREAK § https://github.com/fluxcapacitor/pipeline/ § Slides, code, notebooks, Docker images available here: https://github.com/fluxcapacitor/pipeline/ gpu.ml Need Help? Use the Chat!
  66. 66. IT’S WORTH HIGHLIGHTING… § From Now On, We Optimize Trained Models For Inference § In Other Words… We’re Done with Training Optimizations! Let’s Move to Predicting Optimizations!!
  67. 67. AGENDA § GPUs and TensorFlow § Train and Debug TensorFlow Model § Train with Distributed TensorFlow Cluster § Optimize Model with XLA JIT Compiler § Optimize Model with XLA AOT and Graph Transforms § Deploy Model to TensorFlow Serving Runtime § Optimize TensorFlow Serving Runtime § Wrap-up and Q&A
  68. 68. AOT COMPILER § Standalone, Ahead-Of-Time (AOT) Compiler § Built on XLA framework § tfcompile § Creates executable with minimal TensorFlow Runtime needed § Includes only dependencies needed by subgraph computation § Creates functions with feeds (inputs) and fetches (outputs) § Packaged as cc_library header and object files to link into your app § Commonly used for mobile device inference graph § Currently, only CPU x86-64 and ARM are supported - no GPU
  69. 69. GRAPH TRANSFORM TOOL (GTT) § Optimize Trained Models for Inference § Remove training-only Ops (checkpoint, dropout, logs) § Remove unreachable nodes between given feed -> fetch § Fuse adjacent operators to improve memory bandwidth § Fold final batch norm mean and variance into variables § Rounding weights/variables improves compression (ie. 70%) § Quantizing weights and activations simplifies the model § FP32 down to INT8
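
The same transforms can be scripted through the Graph Transform Tool's Python API; a sketch, assuming a frozen GraphDef on disk (file names and input/output node names are placeholders):

    import tensorflow as tf
    from tensorflow.tools.graph_transforms import TransformGraph

    graph_def = tf.GraphDef()
    with tf.gfile.GFile("unoptimized_model.pb", "rb") as f:   # hypothetical frozen graph
        graph_def.ParseFromString(f.read())

    transforms = [
        "strip_unused_nodes",
        "remove_nodes(op=Identity, op=CheckNumerics)",
        "fold_constants(ignore_errors=true)",
        "fold_batch_norms",
        "quantize_weights",
    ]

    optimized_graph_def = TransformGraph(
        graph_def,
        inputs=["x_observed"],    # placeholder feed name
        outputs=["add"],          # placeholder fetch name
        transforms=transforms)

    with tf.gfile.GFile("optimized_model.pb", "wb") as f:
        f.write(optimized_graph_def.SerializeToString())
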
  70. 70. BEFORE OPTIMIZATIONS
  71. 71. AFTER STRIPPING UNUSED NODES § Optimizations § strip_unused_nodes § Results § Graph much simpler § File size much smaller
  72. 72. AFTER REMOVING UNUSED NODES § Optimizations § strip_unused_nodes § remove_nodes § Results § Pesky nodes removed § File size a bit smaller
  73. 73. AFTER FOLDING CONSTANTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § Results § Placeholders (feeds) -> Variables* (*Why Variables and not Constants?)
  74. 74. AFTER FOLDING BATCH NORMS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § Results § Graph remains the same § File size approximately the same
  75. 75. WEIGHT QUANTIZATION § FP16 and INT8 Are Smaller and Computationally Simpler § Weights/Variables are Constants § Easy to Linearly Quantize
  76. 76. AFTER QUANTIZING WEIGHTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § Results § Graph is same, file size is smaller, compute is faster
  77. 77. LET’S OPTIMIZE FOR INFERENCE § Navigate to the following notebook: 07_Optimize_Model* (*Why just CPU version? Why not both CPU and GPU?) § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  78. 78. BUT WAIT, THERE’S MORE!
  79. 79. ACTIVATION QUANTIZATION § Activations Not Known Ahead of Time § Depends on input, not easy to quantize § Requires Additional Calibration Step § Use a “representative” dataset § Per Neural Network Layer… § Collect histogram of activation values § Generate many quantized distributions with different saturation thresholds § Choose threshold to minimize… KL_divergence(ref_distribution, quant_distribution) § Not Much Time or Data is Required (Minutes on Commodity Hardware)
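
To make the threshold search concrete, a numpy/scipy-only sketch of the calibration idea described above (the bin counts and quantization levels are illustrative; production toolchains implement this far more carefully):

    import numpy as np
    from scipy.stats import entropy

    def choose_saturation_threshold(activations, num_bins=2048, num_levels=128):
        # Histogram of activation magnitudes collected from a "representative" dataset
        hist, edges = np.histogram(np.abs(activations), bins=num_bins)
        best_kl, best_threshold = np.inf, edges[-1]

        # Try progressively larger saturation thresholds
        for i in range(num_levels, num_bins + 1):
            ref = hist[:i].astype(np.float64).copy()
            ref[-1] += hist[i:].sum()   # everything above the threshold saturates

            # Simulate quantization: collapse i fine bins into num_levels coarse bins,
            # then spread each coarse bin's mass back over its non-empty fine bins
            quant = np.zeros_like(ref)
            for chunk in np.array_split(np.arange(i), num_levels):
                mass, nonzero = ref[chunk].sum(), ref[chunk] > 0
                if nonzero.any():
                    quant[chunk] = nonzero * (mass / nonzero.sum())

            kl = entropy(ref + 1e-10, quant + 1e-10)   # KL(ref || quant)
            if kl < best_kl:
                best_kl, best_threshold = kl, edges[i]
        return best_threshold
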
  80. 80. ACTIVATION QUANTIZATION GRAPH Create Conversion Subgraph Produces QuantizedMatMul, QuantizedRelu Eliminate Adjacent Dequantize + Quantize
  81. 81. AFTER ACTIVATION QUANTIZATION § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes (activations) § Results § Larger graph, needs calibration! Requires additional freeze_requantization_ranges
  82. 82. LET’S OPTIMIZE FOR INFERENCE (2/2) § Navigate to the following notebook: 08_Optimize_Model_Activations § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  83. 83. PULSE CHECK
  84. 84. BREAK § https://github.com/fluxcapacitor/pipeline/ § Slides, code, notebooks, Docker images available here: https://github.com/fluxcapacitor/pipeline/ gpu.ml Need Help? Use the Chat!
  85. 85. AGENDA § GPUs and TensorFlow § Train and Debug TensorFlow Model § Train with Distributed TensorFlow Cluster § Optimize Model with XLA JIT Compiler § Optimize Model with XLA AOT and Graph Transforms § Deploy Model to TensorFlow Serving Runtime § Optimize TensorFlow Serving Runtime § Wrap-up and Q&A
  86. 86. MODEL SERVING TERMINOLOGY § Inference § Only Forward Propagation through Network § Predict, Classify, Regress, … § Bundle § GraphDef, Variables, Metadata, … § Assets § ie. Map of ClassificationID -> String § {9283: “penguin”, 9284: “bridge”} § Version § Every Model Has a Version Number (Integer) § Version Policy § ie. Serve Only Latest (Highest), Serve Both Latest and Previous, …
  87. 87. TENSORFLOW SERVING FEATURES § Supports Auto-Scaling § Custom Loaders beyond File-based § Tune for Low-latency or High-throughput § Serve Diff Models/Versions in Same Process § Customize Models Types beyond HashMap and TensorFlow § Customize Version Policies for A/B and Bandit Tests § Support Request Draining for Graceful Model Updates § Enable Request Batching for Diff Use Cases and HW § Supports Optimized Transport with GRPC and Protocol Buffers
  88. 88. PREDICTION SERVICE § Predict (Original, Generic) § Input: List of Tensor § Output: List of Tensor § Classify § Input: List of tf.Example (key, value) pairs § Output: List of (class_label: String, score: float) § Regress § Input: List of tf.Example (key, value) pairs § Output: List of (label: String, score: float)
  89. 89. PREDICTION INPUTS + OUTPUTS § SignatureDef § Defines inputs and outputs § Maps external (logical) to internal (physical) tensor names § Allows internal (physical) tensor names to change from tensorflow.python.saved_model import utils from tensorflow.python.saved_model import signature_constants from tensorflow.python.saved_model import signature_def_utils graph = tf.get_default_graph() x_observed = graph.get_tensor_by_name('x_observed:0') y_pred = graph.get_tensor_by_name('add:0') inputs_map = {'inputs': x_observed} outputs_map = {'outputs': y_pred} predict_signature = signature_def_utils.predict_signature_def(inputs=inputs_map, outputs=outputs_map)
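
Continuing the snippet above, the signature gets attached to a SavedModel export that TensorFlow Serving can load; a sketch (export path and version number are illustrative; `sess` and `predict_signature` come from the code above):

    from tensorflow.python.saved_model import builder as saved_model_builder
    from tensorflow.python.saved_model import tag_constants

    export_path = "/root/models/linear/1"   # .../<model_name>/<version>

    builder = saved_model_builder.SavedModelBuilder(export_path)
    builder.add_meta_graph_and_variables(
        sess,                                  # session holding the trained variables
        [tag_constants.SERVING],
        signature_def_map={
            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: predict_signature})
    builder.save()
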
  90. 90. MULTI-HEADED INFERENCE § Multiple “Heads” of Model § Return class and scores to be fed into another model § Inputs Propagated Forward Only Once § Optimizes Bandwidth, CPU, Latency, Memory, Coolness
  91. 91. BUILD YOUR OWN MODEL SERVER (?!) § Adapt GRPC (Google) <-> HTTP (REST of the World) § Perform Batch Inference vs. Request/Response § Handle Requests Asynchronously § Support Mobile, Embedded Inference § Customize Request Batching § Add Circuit Breakers, Fallbacks § Control Latency Requirements § Reduce Number of Moving Parts #include "tensorflow_serving/model_servers/server_core.h" class MyTensorFlowModelServer { ServerCore::Options options; // set options (model name, path, etc) std::unique_ptr<ServerCore> core; TF_CHECK_OK( ServerCore::Create(std::move(options), &core) ); } Compile and Link libtensorflow.so
  92. 92. FREEZING MODEL FOR DEPLOYMENT § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes § freeze_graph § Results § Variables -> Constants Finally! We’re Ready to Deploy!!
  93. 93. LET’S DEPLOY OPTIMIZED MODEL § Navigate to the following notebook: 09_Deploy_Optimized_Model § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  94. 94. PULSE CHECK
  95. 95. BREAK § https://github.com/fluxcapacitor/pipeline/ § Slides, code, notebooks, Docker images available here: https://github.com/fluxcapacitor/pipeline/ gpu.ml Need Help? Use the Chat!
  96. 96. AGENDA § GPUs and TensorFlow § Train and Debug TensorFlow Model § Train with Distributed TensorFlow Cluster § Optimize Model with XLA JIT Compiler § Optimize Model with XLA AOT and Graph Transforms § Deploy Model to TensorFlow Serving Runtime § Optimize TensorFlow Serving Runtime § Wrap-up and Q&A
  97. 97. REQUEST BATCH TUNING § max_batch_size § Enables throughput/latency tradeoff § Bounded by RAM § batch_timeout_micros § Defines batch time window, latency upper-bound § Bounded by RAM § num_batch_threads § Defines parallelism § Bounded by CPU cores § max_enqueued_batches § Defines queue upper bound, throttling § Bounded by RAM Reaching either threshold will trigger a batch
  98. 98. BATCH SCHEDULER STRATEGIES § BasicBatchScheduler § Best for homogeneous request types (ie. always classify or always regress) § Async callback upon max_batch_size or batch_timeout_micros § BatchTask encapsulates unit of work to be batched § SharedBatchScheduler § Best for heterogeneous request types, multi-step inference, ensembles, … § Groups BatchTasks into separate queues to form homogeneous batches § Processes batches fairly through interleaving § StreamingBatchScheduler § Mixed CPU/GPU/IO-bound workloads § Provides fine-grained control for complex, multi-phase inference logic You Must Experiment to Find the Best Strategy for You!! Co-locate and Isolate Homogeneous Workloads
  99. 99. LET’S OPTIMIZE THE MODEL SERVER § Navigate to the following notebook: 10_Optimize_Model_Server § https://github.com/fluxcapacitor/pipeline/gpu.ml/ notebooks/
  100. 100. AGENDA § GPUs and TensorFlow § Train and Debug TensorFlow Model § Train with Distributed TensorFlow Cluster § Optimize Model with XLA JIT Compiler § Optimize Model with XLA AOT and Graph Transforms § Deploy Model to TensorFlow Serving Runtime § Optimize TensorFlow Serving Runtime § Wrap-up and Q&A
  101. 101. YOU JUST LEARNED… § To Inspect and Debug Models § Training and Predicting Best Practices § To Distribute Training Across a Cluster § To Optimize Training with Queue Feeders § To Optimize Training with XLA JIT Compiler § To Optimize Inference with AOT and Graph Transform Tool (GTT) § Key Components of TensorFlow Serving § To Deploy Models with TensorFlow Serving § To Optimize Inference by Tuning TensorFlow Serving
  102. 102. Q&A § Thank you!! § https://github.com/fluxcapacitor/pipeline/ § Slides, code, notebooks, Docker images available here: https://github.com/fluxcapacitor/pipeline/ gpu.ml Contact Me @ Email: chris@pipeline.io Twitter: @cfregly
