
Malo Denielou - No shard left behind: Dynamic work rebalancing in Apache Beam


http://flink-forward.org/kb_sessions/no-shard-left-behind-dynamic-work-rebalancing-in-apache-beam/

The Apache Beam (incubating) programming model is designed to support several advanced data processing features such as autoscaling and dynamic work rebalancing. In this talk, we will first explain how dynamic work rebalancing not only provides a general and robust solution to the problem of stragglers in traditional data processing pipelines, but also allows autoscaling to be truly effective. We will then present how dynamic work rebalancing works as implemented in Google Cloud Dataflow, and which path other Apache Beam runners like Apache Flink can follow to benefit from it.

Published in: Data & Analytics


  1. No shard left behind: Dynamic Work Rebalancing and other adaptive features in Apache Beam (incubating). Malo Denielou, malo@google.com
  2. Apache Beam is a unified programming model designed to provide efficient and portable data processing pipelines.
  3. Apache Beam (incubating): 1. the Beam Programming Model; 2. SDKs for writing Beam pipelines, starting with Java; 3. runners for existing distributed processing backends: Apache Flink (thanks to data Artisans), Apache Spark (thanks to Cloudera and PayPal), Google Cloud Dataflow (fully managed service), and a local runner for testing. (Diagram: pipeline construction in Beam Java, Beam Python, and other languages feeds the Beam model, which executes on Apache Flink, Apache Spark, or Cloud Dataflow.)
  4. Apache Beam abstractions. PCollection: a parallel collection of timestamped elements that are in windows. Sources & Readers: produce PCollections of timestamped elements and a watermark. ParDo: flatmap over elements of a PCollection. (Co)GroupByKey: shuffle & group {K: V} → {K: [V]}. Side inputs: global view of a PCollection used for broadcast / joins. Window: reassign elements to zero or more windows; may be data-dependent. Triggers: user flow control based on window, watermark, element count, lateness.
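The (Co)GroupByKey semantics above ({K: V} → {K: [V]}) can be sketched in plain Java. This is an illustration of the model only, not the Beam SDK API; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of GroupByKey semantics: shuffle {K: V} pairs into {K: [V]}.
public class GroupByKeySketch {
    public static <K, V> Map<K, List<V>> groupByKey(List<Map.Entry<K, V>> pairs) {
        Map<K, List<V>> grouped = new HashMap<>();
        for (Map.Entry<K, V> kv : pairs) {
            // Each key accumulates the list of all values seen for it.
            grouped.computeIfAbsent(kv.getKey(), k -> new ArrayList<>()).add(kv.getValue());
        }
        return grouped;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> pairs = List.of(
                Map.entry("a", 1), Map.entry("b", 2), Map.entry("a", 3));
        System.out.println(groupByKey(pairs)); // e.g. {a=[1, 3], b=[2]}
    }
}
```

In Beam, this shuffle additionally respects windows and triggers; the sketch ignores both for brevity.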
  5. 1. Classic Batch; 2. Batch with Fixed Windows; 3. Streaming; 4. Streaming with Speculative + Late Data; 5. Streaming with Retractions; 6. Streaming with Sessions.
  6. Data processing for realistic workloads: streaming pipelines have variable input; batch pipelines have stages of different sizes. (Chart: workload over time.)
  7. The curse of configuration: over-provisioning resources? Under-provisioning on purpose? A considerable effort is spent to finely tune all the parameters of the jobs. (Charts: workload over time.)
  8. Ideal case: a system that adapts. (Chart: workload over time.)
  9. The straggler problem in batch: tasks do not finish evenly on the workers. Data is not evenly distributed among tasks; processing time is uneven between tasks; runtime constraints. Effects are cumulative per stage! (Chart: workers vs. time.)
  10. Common straggler mitigation techniques: split files into equal sizes? Pre-emptively over-split? Detect slow workers and re-execute? Sample the data and split based on partial execution? All have major costs, but none completely solves the problem. (Chart: workers vs. time.)
  11. No amount of upfront heuristic tuning (be it manual or automatic) is enough to guarantee good performance: the system will always hit unpredictable situations at run-time. A system that's able to dynamically adapt and get out of a bad situation is much more powerful than one that heuristically hopes to avoid getting into it.
  12. Beam abstractions empower runners. A bundle is a group of elements of a PCollection processed and committed together. APIs (ParDo/DoFn): startBundle(), processElement() n times, finishBundle(). Streaming runner: small bundles, low-latency pipelining across stages, overhead of frequent commits. Classic batch runner: large bundles, fewer large commits, more efficient, long synchronous stages. Other runner strategies may strike a different balance.
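The bundle lifecycle can be sketched with a toy harness. The DoFn method names mirror the slide; the harness itself is hypothetical, not Beam runner code:

```java
import java.util.List;

// Toy model of how a runner drives the bundle lifecycle. The runner's choice
// of bundle size is the tradeoff from the slide: small bundles give low
// latency but frequent commits; large bundles give fewer, larger commits.
public class BundleSketch {
    interface DoFn<T> {
        void startBundle();
        void processElement(T element);
        void finishBundle();
    }

    // Process elements in bundles of the given size; return the commit count.
    static <T> int runInBundles(List<T> elements, int bundleSize, DoFn<T> fn) {
        int commits = 0;
        for (int i = 0; i < elements.size(); i += bundleSize) {
            fn.startBundle();
            for (T e : elements.subList(i, Math.min(i + bundleSize, elements.size()))) {
                fn.processElement(e);
            }
            fn.finishBundle(); // commit point
            commits++;
        }
        return commits;
    }

    public static void main(String[] args) {
        DoFn<Integer> noop = new DoFn<Integer>() {
            public void startBundle() {}
            public void processElement(Integer e) {}
            public void finishBundle() {}
        };
        // 5 elements, bundles of 2 -> 3 commits (a "streaming-like" choice);
        // bundles of 5 -> 1 commit (a "batch-like" choice).
        System.out.println(runInBundles(List.of(1, 2, 3, 4, 5), 2, noop)); // 3
        System.out.println(runInBundles(List.of(1, 2, 3, 4, 5), 5, noop)); // 1
    }
}
```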
  13. Beam abstractions empower runners: efficiency at runner's discretion. "Read from this source, splitting it 1000 ways" ➔ user decides. "Read from this source" ➔ runner decides. APIs: long getEstimatedSize(); List<Source> splitIntoBundles(size).
  14.-17. (Animation: the runner asks the source gs://logs/* for its size; the source answers 50TB; taking into account cluster utilization, quota, bandwidth, throughput, and concurrent stages, the runner requests a split in chunks of 500GB; the source returns a List<Source>.)
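The sizing/splitting dialogue can be sketched for a source over a byte range. The class and field names here are illustrative stand-ins; only the two API shapes, getEstimatedSize() and splitIntoBundles(size), come from the slides:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the runner/source dialogue: the runner asks for an
// estimated size, then asks the source to split itself into bundles of a
// desired size that the runner chose (based on utilization, quota, etc.).
public class SourceSplitSketch {
    static class RangeSource {
        final long start, end;
        RangeSource(long start, long end) { this.start = start; this.end = end; }

        long getEstimatedSize() { return end - start; }

        List<RangeSource> splitIntoBundles(long desiredBundleSize) {
            List<RangeSource> bundles = new ArrayList<>();
            for (long s = start; s < end; s += desiredBundleSize) {
                bundles.add(new RangeSource(s, Math.min(s + desiredBundleSize, end)));
            }
            return bundles;
        }
    }

    public static void main(String[] args) {
        long GB = 1L << 30;
        RangeSource logs = new RangeSource(0, 50_000 * GB); // ~50TB, as in the slides
        // The runner decides on 500GB chunks -> 100 bundles.
        System.out.println(logs.splitIntoBundles(500 * GB).size()); // 100
    }
}
```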
  18. Solving the straggler problem: Dynamic Work Rebalancing.
  19.-22. Solving the straggler problem: Dynamic Work Rebalancing. (Animation: workers vs. time charts showing done work, active work, and predicted completion relative to "now" and the average, before and after work is rebalanced off the stragglers.)
  23.-25. Dynamic Work Rebalancing in the wild. A classic MapReduce job (read from Google Cloud Storage, GroupByKey, write to Google Cloud Storage), 400 workers. First chart: Dynamic Work Rebalancing disabled to demonstrate stragglers; X axis: time (total ~20 min.), Y axis: workers. Second chart: same job, Dynamic Work Rebalancing enabled; X axis: time (total ~15 min.), Y axis: workers. The difference is the savings.
  26. Dynamic Work Rebalancing with Autoscaling. Initial allocation of 80 workers, based on size; multiple rounds of upsizing, enabled by dynamic work rebalancing; upscales to 1000 workers. Tasks are balanced; no oversplitting or manual tuning.
  27. Apache Beam enables dynamic adaptation. Beam Readers provide simple progress signals, which enable runners to take action based on execution-time characteristics. All Beam runners can implement autoscaling and Dynamic Work Rebalancing. APIs for how much work is pending: bounded: double getFractionConsumed(); unbounded: long getBacklogBytes(). APIs for splitting: bounded: Source splitAtFraction(double); int getParallelismRemaining().
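The two bounded-reader signals, getFractionConsumed() and splitAtFraction(), can be sketched for a reader over an offset range. The class and fields are hypothetical; only the method names and shapes come from the slide:

```java
// Illustrative sketch of a bounded reader's progress and split signals.
// getFractionConsumed() tells the runner how far along this reader is;
// splitAtFraction() truncates the reader's range and returns the residual
// range as a new source that another worker can pick up.
public class ReaderSplitSketch {
    long start, position, end;

    ReaderSplitSketch(long start, long end) {
        this.start = start;
        this.position = start;
        this.end = end;
    }

    double getFractionConsumed() {
        return (double) (position - start) / (end - start);
    }

    ReaderSplitSketch splitAtFraction(double fraction) {
        long splitPos = start + (long) (fraction * (end - start));
        if (splitPos <= position) return null; // already past the split point
        ReaderSplitSketch residual = new ReaderSplitSketch(splitPos, end);
        this.end = splitPos; // this reader keeps only the primary range
        return residual;
    }

    public static void main(String[] args) {
        ReaderSplitSketch reader = new ReaderSplitSketch(0, 100);
        reader.position = 30; // 30% consumed
        System.out.println(reader.getFractionConsumed()); // 0.3
        ReaderSplitSketch residual = reader.splitAtFraction(0.5);
        System.out.println(reader.end + " " + residual.start); // 50 50
    }
}
```

Because the split point must lie ahead of the current position, a split can always be honored safely while the reader keeps making progress; this is what makes the rebalancing in the earlier slides possible without re-reading data.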
  28. Apache Beam is a unified programming model designed to provide efficient and portable data processing pipelines.
  29. To learn more, read our blog posts: "No shard left behind: Dynamic work rebalancing in Google Cloud Dataflow" (https://cloud.google.com/blog/big-data/2016/05/no-shard-left-behind-dynamic-work-rebalancing-in-google-cloud-dataflow) and "Comparing Cloud Dataflow autoscaling to Spark and Hadoop" (https://cloud.google.com/blog/big-data/2016/03/comparing-cloud-dataflow-autoscaling-to-spark-and-hadoop). Join the Apache Beam community! https://beam.incubator.apache.org/
