
Optimizing, Profiling, and Deploying TensorFlow AI Models in Production with GPUs - GPU Tech Conference - Munich, Oct 2017


Using the latest advancements in TensorFlow, including the Accelerated Linear Algebra (XLA) framework, the JIT and AOT compilers, and the Graph Transform Tool, I'll demonstrate how to optimize, profile, and deploy TensorFlow models in a GPU-based production environment.

This talk is 100% demo-based, uses open source tools, and is completely reproducible through Docker on your own GPU cluster.

In addition, I spin up a GPU cloud instance for every attendee in the audience. We go through the notebooks together as I demonstrate the process of continuously training, optimizing, deploying, and serving a TensorFlow model on a large, distributed cluster of Nvidia GPUs managed by the attendees.

http://pipeline.ai


  1. 1. OPTIMIZING, PROFILING, AND TUNING TENSORFLOW + GPUS NVIDIA GPU TECH CONF MUNICH, GERMANY OCTOBER 11, 2017 CHRIS FREGLY, FOUNDER @ PIPELINE.AI
  2. 2. INTRODUCTIONS: ME § Chris Fregly, Research Engineer @ PipelineAI § Formerly Netflix and Databricks § Advanced Spark and TensorFlow Meetup: Please Join Our 40,000+ Members Globally! * San Francisco * Chicago * Washington DC * London § Contact Me: chris@pipeline.ai @cfregly
  3. 3. INTRODUCTIONS: YOU § Software Engineer or Data Scientist interested in optimizing and deploying TensorFlow models to production § Assume you have a working knowledge of TensorFlow
  4. 4. CONTENT BREAKDOWN § 50% Training Optimizations (GPUs, Pipeline, XLA+JIT) § 50% Prediction Optimizations (XLA+AOT, TF Serving) § Why Heavy Focus on Predicting? § Training: boring batch O(num_data_scientists) § Inference: exciting real-time O(num_users_of_app)
  5. 5. 100% OPEN SOURCE CODE § https://github.com/PipelineAI/pipeline/ § Please 🌟 this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu
  6. 6. AGENDA § Optimize TensorFlow Training § GPUs § Ingestion Pipeline § XLA JIT Compiler § Optimize TensorFlow Inference § XLA AOT Compiler § Graph Transform Tool (GTT) § TensorFlow Serving § Compare Optimizations in Production
  7. 7. SETTING UP TENSORFLOW WITH GPUS § Very Painful! § Especially inside Docker § Use nvidia-docker § Especially on Kubernetes! § Use Kubernetes 1.7+ § http://pipeline.ai for GitHub + DockerHub Links
  8. 8. GPU HALF-PRECISION SUPPORT § FP16 ("half precision") and INT8 are reduced-precision formats § Supported by Pascal P100 (2016) and Volta V100 (2017) § Flexible FP32 GPU Cores Can Fit Two FP16s for 2x Throughput! § Half-Precision is OK for Approximate Deep Learning Use Cases
  9. 9. VOLTA V100 RECENTLY ANNOUNCED § 84 Streaming Multiprocessors (SMs) § 5,376 GPU Cores § 672 Tensor Cores (similar in spirit to Google's TPU) § Mixed FP16/FP32 Precision § More Shared Memory § New L0 Instruction Cache § Faster L1 Data Cache § V100 vs. P100 Performance § 12x TFLOPS @ Peak Training § 6x Inference Throughput
  10. 10. V100 AND CUDA 9 § Independent Thread Scheduling - Finally!! § Similar to CPU fine-grained thread synchronization semantics § Allows GPU to yield execution of any thread § Still Optimized for SIMT (Single Instruction, Multiple Thread) § SIMT units automatically scheduled together § Explicit Synchronization
  11. 11. CUDA STREAMS § Asynchronous I/O Transfer § Overlap Compute and I/O § Keeps GPUs Saturated § Fundamental to Queue Framework in TensorFlow
  12. 12. AGENDA § Optimize TensorFlow Training § GPUs § Ingestion Pipeline § XLA JIT Compiler § Optimize TensorFlow Inference § XLA AOT Compiler § Graph Transform Tool (GTT) § TensorFlow Serving § Compare Optimizations in Production
  13. 13. EXISTING DATA PIPELINES § Data Processing § HDFS/Hadoop § Spark § Containers § Docker § Google Container § Container Orchestrators § Kubernetes § Mesos § https://github.com/tensorflow/ecosystem

        <dependency>
          <groupId>org.tensorflow</groupId>
          <artifactId>tensorflow-hadoop</artifactId>
        </dependency>
  14. 14. DON’T USE FEED_DICT § Not Optimized for Production Pipelines § feed_dict Requires Python <-> C++ Serialization § Single-threaded, Synchronous, SLOW! § Can’t Retrieve Until Current Batch is Complete § CPUs/GPUs Not Fully Utilized! § Use Queue or Dataset API
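
A minimal sketch of the Dataset alternative (tf.data as of TF 1.4; tf.contrib.data.Dataset before that). File names, feature keys, and shapes here are illustrative assumptions, not part of the original deck:

        import tensorflow as tf

        def parse_fn(record):
            # Assumed TFRecord schema: JPEG-encoded 'image' bytes + int64 'label'
            parsed = tf.parse_single_example(record, features={
                'image': tf.FixedLenFeature([], tf.string),
                'label': tf.FixedLenFeature([], tf.int64),
            })
            image = tf.image.decode_jpeg(parsed['image'], channels=3)
            image = tf.image.resize_images(image, [224, 224])
            return image, parsed['label']

        dataset = tf.data.TFRecordDataset(['train.tfrecord'])  # hypothetical file
        dataset = dataset.map(parse_fn).shuffle(10000).batch(64).repeat()
        images, labels = dataset.make_one_shot_iterator().get_next()
        # images/labels are graph tensors -- no Python <-> C++ feed_dict round trip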
  15. 15. QUEUES § More than just a traditional Queue § Perform I/O, pre-processing, cropping, shuffling § Pulls from HDFS, S3, Google Storage, Kafka, ... § Combine many small files into large TFRecord files § Use CPUs to free GPUs for compute § Uses CUDA Streams § Helps saturate CPUs and GPUs
  16. 16. QUEUE CAPACITY PLANNING § batch_size § # examples / batch (e.g., 64 JPEGs) § Limited by GPU RAM § num_processing_threads § CPU threads pull and pre-process batches of data § Limited by CPU Cores § queue_capacity § Limited by CPU RAM (e.g., 5 * batch_size)
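
A hedged sketch of how these three knobs map onto the classic queue-based pipeline via tf.train.shuffle_batch; image and label are assumed to be single parsed example tensors with static shapes:

        import tensorflow as tf

        batch_size = 64
        images, labels = tf.train.shuffle_batch(
            [image, label],                # one parsed example (assumed defined)
            batch_size=batch_size,         # examples per batch -- limited by GPU RAM
            num_threads=4,                 # pre-processing threads -- limited by CPU cores
            capacity=5 * batch_size,       # queue capacity -- limited by CPU RAM
            min_after_dequeue=batch_size)  # floor kept in queue so shuffling stays random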
  17. 17. DETECT UNDERUTILIZED CPUS, GPUS § Instrument training code to generate "timelines" § Analyze with Google Web Tracing Framework (WTF): http://google.github.io/tracing-framework/ § Monitor CPU with `top`, GPU with `nvidia-smi`

        from tensorflow.python.client import timeline

        trace = timeline.Timeline(step_stats=run_metadata.step_stats)
        with open('timeline.json', 'w') as trace_file:
            trace_file.write(
                trace.generate_chrome_trace_format(show_memory=True))
  18. 18. SINGLE NODE, MULTI-GPU TRAINING § cpu:0 § By default, all CPUs § Requires extra config to target a specific CPU § gpu:0..n § Each GPU has a unique id § TF usually prefers a single GPU § xla_cpu:0, xla_gpu:0..n § "JIT Compiler Device" § Hints TensorFlow to attempt JIT Compile

        with tf.device("/cpu:0"): ...
        with tf.device("/gpu:0"): ...
        with tf.device("/gpu:1"): ...
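
A common single-node pattern behind these device pins is "towers": one model replica per GPU, with gradients averaged on the CPU. A sketch in which build_loss, next_batch, and average_gradients are hypothetical helpers, not TensorFlow APIs:

        import tensorflow as tf

        optimizer = tf.train.GradientDescentOptimizer(0.01)
        tower_grads = []
        for i in range(2):                                       # one tower per GPU
            with tf.device('/gpu:%d' % i):
                with tf.variable_scope('model', reuse=(i > 0)):  # share weights
                    loss = build_loss(next_batch())              # hypothetical helpers
                    tower_grads.append(optimizer.compute_gradients(loss))
        with tf.device('/cpu:0'):                                # aggregate on the CPU
            train_op = optimizer.apply_gradients(average_gradients(tower_grads))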
  19. 19. MULTI-NODE DISTRIBUTED TRAINING § TensorFlow Automatically Inserts Send and Receive Ops into Graph § Parameter Server Synchronously Aggregates Updates to Variables § Nodes with Multiple GPUs will Pre-Aggregate Before Sending to PS § (Diagram: parameter-server cluster feeding workers with 1-4 GPUs each)
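
A minimal sketch of the between-graph setup described above; host names and task indices are placeholders. replica_device_setter pins variables to the ps job, and TensorFlow inserts the Send/Recv ops automatically:

        import tensorflow as tf

        cluster = tf.train.ClusterSpec({
            'ps':     ['ps0.example.com:2222'],
            'worker': ['worker0.example.com:2222', 'worker1.example.com:2222'],
        })
        server = tf.train.Server(cluster, job_name='worker', task_index=0)

        with tf.device(tf.train.replica_device_setter(
                worker_device='/job:worker/task:0', cluster=cluster)):
            w = tf.Variable(tf.zeros([784, 10]))  # variables land on /job:ps
            # ...compute ops built here stay on this worker...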
  20. 20. SYNCHRONOUS VS. ASYNCHRONOUS § Synchronous § Nodes compute gradients § Nodes update Parameter Server (PS) § Nodes sync on PS for latest variables § Asynchronous § Nodes compute gradients on their own schedule § Nodes don't wait on each other to update the PS § Nodes may read stale variables from the PS § May not converge due to stale reads!
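
For the synchronous case, TF 1.x ships a wrapper that waits for a fixed number of gradient updates before applying them; a sketch with illustrative replica counts (loss and global_step assumed defined):

        import tensorflow as tf

        opt = tf.train.SyncReplicasOptimizer(
            tf.train.GradientDescentOptimizer(0.01),
            replicas_to_aggregate=3,   # gradients to collect per step
            total_num_replicas=3)
        # train_op = opt.minimize(loss, global_step=global_step)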
  21. 21. SEPARATE TRAINING + VALIDATION § Separate Training and Validation Clusters § Validate using Saved Checkpoints from Parameter Servers § Avoids Resource Contention Training Cluster Validation Cluster Parameter Server Cluster
  22. 22. ALWAYS USE BATCH NORMALIZATION § Each Mini-Batch May Have Wildly Different Distributions § Normalize per batch (and layer) § Speeds up Training!! § Weights are Learned Quicker § Final Model is More Accurate § Final mean and variance will be folded into Graph later

        z = tf.matmul(a_prev, W)
        a = tf.nn.relu(z)
        a_mean, a_var = tf.nn.moments(a, [0])
        scale = tf.Variable(tf.ones([depth]))   # depth = number of channels
        beta = tf.Variable(tf.zeros([depth]))
        bn = tf.nn.batch_normalization(a, a_mean, a_var, beta, scale, 0.001)
  23. 23. OPTIMIZE GRAPH EXECUTION ORDER § Linearize execution order to minimize peak graph memory usage § https://github.com/yaroslavvb/stuff
  24. 24. AGENDA § Optimize TensorFlow Training § GPUs § Ingestion Pipeline § XLA JIT Compiler § Optimize TensorFlow Inference § XLA AOT Compiler § Graph Transform Tool (GTT) § TensorFlow Serving § Compare Optimizations in Production
  25. 25. XLA FRAMEWORK § Accelerated Linear Algebra (XLA) § Goals: § Reduce reliance on custom operators § Improve execution speed § Improve memory usage § Reduce mobile footprint § Improve portability § Helps TensorFlow Stay Both Flexible and Performant
  26. 26. XLA HIGH LEVEL OPTIMIZER (HLO) § Compiler Intermediate Representation (IR) § Independent of Source and Target Language § Define Graphs using HLO Operations § XLA Step 1 Emits Target-Independent HLO § XLA Step 2 Emits Target-Dependent LLVM § LLVM Emits Native Code Specific to Target § Supports x86-64, ARM64 (CPU), and NVPTX (GPU)
  27. 27. JIT COMPILER § Just-In-Time Compiler § Built on XLA Framework § Goals: § Reduce memory movement – especially useful on GPUs § Reduce overhead of multiple function calls § Similar to operator fusing in Spark 2.0 § Unroll Loops, Fuse Operators, Fold Constants, … § Scope to session, device, or `with jit_scope():` (see the sketch below)
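
Two ways to turn the JIT on in TF 1.x: session-wide, or scoped to part of the graph (jit_scope lived under tf.contrib at the time). x and w below are assumed graph tensors:

        import tensorflow as tf

        # 1) Session-wide: let XLA JIT-compile eligible subgraphs.
        config = tf.ConfigProto()
        config.graph_options.optimizer_options.global_jit_level = \
            tf.OptimizerOptions.ON_1

        # 2) Scoped: hint JIT compilation for specific ops only.
        jit_scope = tf.contrib.compiler.jit.experimental_jit_scope
        with jit_scope():
            y = tf.nn.relu(tf.matmul(x, w))

        sess = tf.Session(config=config)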
  28. 28. VISUALIZING JIT COMPILER IN ACTION § Timelines Before vs. After JIT § Google Web Tracing Framework: http://google.github.io/tracing-framework/

        from tensorflow.python.client import timeline

        trace = timeline.Timeline(step_stats=run_metadata.step_stats)
        with open('timeline.json', 'w') as trace_file:
            trace_file.write(
                trace.generate_chrome_trace_format(show_memory=True))
  29. 29. VISUALIZING FUSING OPERATORS § hlo_*.dot files generated by XLA § GraphViz: http://www.graphviz.org

        pip install graphviz
        dot -Tpng /tmp/hlo_graph_1.w5LcGs.dot -o hlo_graph_1.png
  30. 30. AGENDA § Optimize TensorFlow Training § GPUs § Ingestion Pipeline § XLA JIT Compiler § Optimize TensorFlow Inference § XLA AOT Compiler § Graph Transform Tool (GTT) § TensorFlow Serving § Compare Optimizations in Production
  31. 31. AOT COMPILER § Standalone, Ahead-Of-Time (AOT) Compiler § Built on XLA framework § tfcompile § Creates executable with minimal TensorFlow Runtime needed § Includes only dependencies needed by subgraph computation § Creates functions with feeds (inputs) and fetches (outputs) § Packaged as cc_library header and object files to link into your app § Commonly used for mobile device inference graphs § Currently, only CPU x86-64 and ARM are supported - no GPU
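
tfcompile is typically driven from a Bazel BUILD rule; a sketch along the lines of the TensorFlow tfcompile tutorial, where the target, graph, config, and class names are placeholders:

        load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")

        tf_library(
            name = "my_graph_aot",                   # placeholder target name
            graph = "my_graph.pb",                   # frozen GraphDef
            config = "my_graph.config.pbtxt",        # declares feeds + fetches
            cpp_class = "mynamespace::MyGraphComp",  # generated C++ class to link
        )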
  32. 32. AGENDA § Optimize TensorFlow Training § GPUs § Ingestion Pipeline § XLA JIT Compiler § Optimize TensorFlow Inference § XLA AOT Compiler § Graph Transform Tool (GTT) § TensorFlow Serving § Compare Optimizations in Production
  33. 33. GRAPH TRANSFORM TOOL (GTT) § Optimize Trained Models for Inference § Remove training-only Ops (checkpoint, dropout, logs) § Remove unreachable nodes between given feed -> fetch § Fuse adjacent operators to improve memory bandwidth § Fold final batch-norm mean and variance into variables § Round weights/variables to improve compression (e.g., ~70%) § Quantize (FP32 -> INT8) to speed up math operations
  34. 34. BEFORE OPTIMIZATIONS
  35. 35. GRAPH TRANSFORM TOOL

        transform_graph
          --in_graph=tensorflow_inception_graph.pb    <-- Original Graph
          --out_graph=optimized_inception_graph.pb    <-- Transformed Graph
          --inputs='Mul'                              <-- Feed (Input)
          --outputs='softmax'                         <-- Fetch (Output)
          --transforms='                              <-- List of Transforms
            strip_unused_nodes
            remove_nodes(op=Identity, op=CheckNumerics)
            fold_constants(ignore_errors=true)
            fold_batch_norms
            fold_old_batch_norms
            quantize_weights
            quantize_nodes'
  36. 36. AFTER STRIPPING UNUSED NODES § Optimizations § strip_unused_nodes § Results § Graph much simpler § File size much smaller
  37. 37. AFTER REMOVING UNUSED NODES § Optimizations § strip_unused_nodes § remove_nodes § Results § Pesky nodes removed § File size a bit smaller
  38. 38. AFTER FOLDING CONSTANTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § Results § Placeholders (feeds) -> Variables* (*Why Variables and not Constants?)
  39. 39. AFTER FOLDING BATCH NORMS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § Results § Graph remains the same § File size approximately the same
  40. 40. WEIGHT QUANTIZATION § FP16 and INT8 Are Smaller and Computationally Simpler § Weights/Variables are Constants § Easy to Linearly Quantize
  41. 41. AFTER QUANTIZING WEIGHTS § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § Results § Graph is same, file size is smaller, compute is faster
  42. 42. BUT WAIT, THERE’S MORE!
  43. 43. ACTIVATION QUANTIZATION § Activations Not Known Ahead of Time § Depends on input, not easy to quantize § Requires Additional Calibration Step § Use a “representative” dataset § Per Neural Network Layer… § Collect histogram of activation values § Generate many quantized distributions with different saturation thresholds § Choose threshold to minimize… KL_divergence(ref_distribution, quant_distribution) § Not Much Time or Data is Required (Minutes on Commodity Hardware)
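
A toy NumPy/SciPy sketch of the threshold search described above; the production implementation is NVIDIA's TensorRT calibrator, and this only illustrates the idea:

        import numpy as np
        from scipy.stats import entropy  # entropy(p, q) = KL(p || q)

        def best_threshold(activations, num_bins=2048, num_quant_bins=128):
            """Pick a saturation threshold minimizing KL(ref || quantized)."""
            hist, edges = np.histogram(np.abs(activations), bins=num_bins)
            best_edge, best_kl = edges[-1], np.inf
            for i in range(num_quant_bins, num_bins + 1):
                ref = hist[:i].astype(np.float64)
                ref[-1] += hist[i:].sum()  # fold the clipped tail into the last bin
                # Coarsen to num_quant_bins levels, then expand back for comparison.
                chunks = np.array_split(ref, num_quant_bins)
                quant = np.concatenate([np.full(len(c), c.mean()) for c in chunks])
                kl = entropy(ref + 1e-9, quant + 1e-9)
                if kl < best_kl:
                    best_kl, best_edge = kl, edges[i]
            return best_edge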
  44. 44. AFTER ACTIVATION QUANTIZATION § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes (activations) § Results § Larger graph, needs calibration! Requires additional freeze_requantization_ranges
  45. 45. FREEZING MODEL FOR DEPLOYMENT § Optimizations § strip_unused_nodes § remove_nodes § fold_constants § fold_batch_norms § quantize_weights § quantize_nodes § freeze_graph § Results § Variables -> Constants Finally! We’re Ready to Deploy!!
  46. 46. AGENDA § Optimize TensorFlow Training § GPUs § Ingestion Pipeline § XLA JIT Compiler § Optimize TensorFlow Inference § XLA AOT Compiler § Graph Transform Tool (GTT) § TensorFlow Serving § Compare Optimizations in Production
  47. 47. TENSORFLOW SERVING OVERVIEW § Inference § Only Forward Propagation through Network § Predict, Classify, Regress, … § Bundle § GraphDef, Variables, Metadata, … § Assets § e.g., Map of ClassificationID -> String § {9283: "penguin", 9284: "bridge"} § Version § Every Model Has a Version Number (Integer) § Version Policy § e.g., Serve Only Latest (Highest), Serve Both Latest and Previous, …
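
Exporting a Serving-ready bundle in TF 1.x uses the SavedModelBuilder API; the path, tensor names, and signature key below are placeholders, and sess, x, y are assumed to exist:

        import tensorflow as tf

        builder = tf.saved_model.builder.SavedModelBuilder('/models/mnist/1')  # version dir
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={'images': x}, outputs={'scores': y})
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={'predict': signature})
        builder.save()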
  48. 48. MULTI-HEADED INFERENCE § Multiple “heads” (aka “responses”) from 1 model prediction § Response includes both class and scores § Inputs sent only once § Feed scores into ensemble models § Use model for feature engineering § Optimizes bandwidth, CPU, latency, memory, coolness
  49. 49. REQUEST BATCHING § max_batch_size § Enables throughput/latency tradeoff § Bounded by RAM § batch_timeout_micros § Defines batch time window, latency upper-bound § Bounded by RAM § num_batch_threads § Defines parallelism § Bounded by CPU cores § max_enqueued_batches § Defines queue upper bound, throttling § Bounded by RAM Reaching either threshold will trigger a batch
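
With TensorFlow Serving, these knobs live in a BatchingParameters text proto passed to the model server; the values below are illustrative, not recommendations:

        # batching_parameters.txt (illustrative values)
        max_batch_size { value: 128 }
        batch_timeout_micros { value: 5000 }
        num_batch_threads { value: 8 }
        max_enqueued_batches { value: 1000000 }

        tensorflow_model_server --port=9000 --enable_batching=true \
          --batching_parameters_file=batching_parameters.txt \
          --model_name=mnist --model_base_path=/models/mnist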
  50. 50. TENSORRT RUNTIME (NVIDIA) § Post-Training Model Optimizations § Alternative to Graph Transform Tool § GPU-Optimized Prediction Runtime § Alternative to TensorFlow Serving
  51. 51. AGENDA § Optimize TensorFlow Training § GPUs § Ingestion Pipeline § XLA JIT Compiler § Optimize TensorFlow Inference § XLA AOT Compiler § Graph Transform Tool (GTT) § TensorFlow Serving § Compare Optimizations in Production
  52. 52. COMPARE OPTIMIZATIONS IN PROD
  53. 53. FAST AND EASY MODEL EXPERIMENTS § Create Experiments with Drag n’ Drop § Deploy Safely into Production § Control Traffic Routing
  54. 54. 360º MODEL COMPARISON § Compare Models Offline and Online § Offline Training and Validation Accuracy § Real-Time Prediction Precision § Real-Time Prediction Response Time
  55. 55. AUTOMATIC TRAFFIC SHIFTING § Dynamically Route Traffic to MAX(Revenue)
  56. 56. AUTOMATIC TRAFFIC SHIFTING § Auto-Shift Traffic to MIN(Cost) § Real-Time Cost Per Prediction § Across Clouds and On-Premise
  57. 57. PREDICTION PROFILING + TUNING § Pinpoint Performance Bottlenecks § Fine-Grained Prediction Metrics § 3 Logic Steps in a Prediction 1. transform_request() 2. predict() 3. transform_response()
  58. 58. LIVE PREDICTION STREAMS § Visually Compare Real-Time Predictions
  59. 59. CONTINUOUS MODEL TRAINING § Identify and Fix Borderline Predictions (50-50% Confidence) § Fix Along Class Boundaries § Enables Crowd Sourcing § Game-ify Tedious Process § Retrain on New Labeled Data
  60. 60. AUTOMATIC MODEL OPTIMIZATIONS § Generate Optimized Model Versions § Weight + Activation Quantization § CPU + GPU Runtime Optimizations § TensorFlow, TensorRT (Nvidia), etc
  61. 61. OPTIMIZE BOTH MODEL + RUNTIME § Build Model + Runtime into Immutable Docker Image § Same Runtime: Local, Dev, Test, Prod § No Dependency Surprises in Production § Tune Model + Runtime Together as One § Hyper-Parameter Tuning includes Runtime Config and Metrics
  62. 62. AGENDA § Optimize TensorFlow Training § GPUs § Ingestion Pipeline § XLA JIT Compiler § Optimize TensorFlow Inference § XLA AOT Compiler § Graph Transform Tool (GTT) § TensorFlow Serving § Compare Optimizations in Production
  63. 63. DANKE SCHÖN! HABEN SIE FRAGEN? (THANK YOU! ANY QUESTIONS?) § https://github.com/PipelineAI/pipeline/ § Please 🌟 this GitHub Repo! § All slides, code, notebooks, and Docker images here: https://github.com/PipelineAI/pipeline/tree/master/gpu § Contact Me: chris@pipeline.ai @cfregly
