
Ludicrous Scale with Load Balancers


Scaling Load Balancers

Published in: Software

  1. Ludicrous Scaling of SSL Traffic: Increase Application Capacity, Reliability, and Scale (Anurag Palsule)
  2. Why do we need ludicrous scale?
     • Cashless transactions have grown from 5% to 25% in a matter of a few months!
     • IRCTC bookings have grown from 29 tickets per day to 13 lakh (1.3 million) tickets per day!
     • IHS forecasts that the IoT market will grow from an installed base of 15.4 billion devices in 2015 to 30.7 billion in 2020 and 75.4 billion in 2025, putting huge scalability requirements on IoT applications
  3. Load Balancer Scalability: New Considerations
     • SSL/TLS traffic is seeing explosive growth
     • Performance myth: ultra-expensive, inflexible hardware appliances are the only solution
     • Moore's law: advances in Intel x86 servers (processors, memory, and networking)
     • Crypto advances: RSA 2K vs. ECC encryption keys
     • Software-defined architectural advances enable significant elasticity
  4. Architectural Approaches to Scale Load Balancers
     • Hierarchical load balancers
     • DNS + proxy load balancers
     • Route injection/anycast load balancers
  5. Hierarchical Load Balancers
     Concept:
     • Chaining of load balancing services
     • Tier 1: Layer 4 (TCP/UDP) load balancing
     • Tier 2: Layer 7 load balancing
     Pros:
     • Simplest approach
     • May suffice for small-scale environments
     Cons:
     • Limited by the performance of the Tier 1 LB
     (Diagram: Users → Tier 1 Load Balancer → Tier 2 Load Balancers → Application Instances)
  6. DNS + Proxy Load Balancers
     Concept:
     • DNS redirection with server mirroring
     • Dynamic mapping of hostname to IP addresses
     Pros:
     • Easy to configure
     • Scales well
     Cons:
     • DNS caches can become stale
     (Diagram: Users → DNS load balancer returning IP1–IP4 → Application Instances)
  7. Route Injection/Anycast Load Balancers
     Concept:
     • DNS resolves to a single IP
     • The upstream router holds the IP address
     • The router performs flow-based ECMP to next-hop load balancers
     Pros:
     • Can scale significantly; most routers support at least 64 next hops
     Cons:
     • Access to an upstream router is needed
     (Diagram: Users → Router → Load Balancers → Application Instances)
  8. The State of Load Balancing/Application Delivery
     Web-scale computing is here, but load balancing is still a bottleneck!
     Rigid legacy ADC/LBs (legacy 90s architecture, box approach):
     • Proprietary hardware
     • Manage each device
     • No automation
     • No telemetry
     • Static capacity
     Web-scale load balancers (takeaways from AWS/FB/Microsoft):
     • Commodity x86
     • Manage as one
     • Highly automated
     • Built-in telemetry
     • Elastic, flexible, fluid capacity
  9. Modern Distributed Architecture (build slide)
     • Separate control and data plane
     • Manage as one, not many devices
     (Diagram: Controller (management plane: UI/CLI) managing load balancers (data plane) across compute, virtualized, containers, and public cloud)
  10. Modern Distributed Architecture (build slide, continued)
  11. Modern Distributed Architecture
     • Separate control and data plane; manage as one, not many devices
     • Multi-cloud fabric: single solution, any environment (bare metal, virtualized, containers, public cloud)
     • Automation: highly programmable, plug-n-play (REST API; Mesos management & orchestration)
     • Built-in visibility & analytics: actionable insights are key to automation and innovation
  12. 1 Million TPS on Google Compute Engine: Setup
     Avi Networks elastic application services fabric:
     • 320x test clients (ab on n1-highcpu-16 instances)
     • 40x Avi Service Engines (load balancers)
     (Diagram: test clients → GCP router → Service Engines (with Controller) → Application Instances)
  13. Key Stats
     • Total cost for the setup in Google Compute Engine: under $50
     • SSL TPS: 0 to 1 million TPS in a few seconds
     • Data plane: 40 VM instances with 32 hyperthreaded cores each
     • Traffic generators: 320 VM instances with 16 hyperthreaded cores each
  14. Test Setup and Methodology
     Setup in Google Compute Engine:
     • Bootstrap instance: 1 g1-small instance
     • Avi Controller: 1 n1-standard-4 instance
     • Avi Service Engines (load balancers): 40 n1-highcpu-32 instances
     • Pool server: 1 g1-small instance
     • Test clients (load/traffic generators): 320 n1-highcpu-16 instances
     Running the test:
     • This public repo has all the scripts required for anyone to perform the scalability test
  15. Scale Performance Up and Out, Managed as One: Elastic Load Balancer Fabric
     Scale-up (more cores & IO): LB performance scales with CPUs (Moore's law) and IO (40 Gbps NICs)
     • 1 LB, 1 core: 5 Gbps, 2,500 SSL TPS
     • 1 LB, 2 cores: 10 Gbps, 5,000 SSL TPS
     • 1 LB, 24 cores (2 sockets): 20 Gbps (10 Gbps NICs), 60,000 SSL TPS
     Scale-out (more LBs): fabric performance scales horizontally with LBs
     • 2 LBs, 1 core each: 10 Gbps, 5,000 SSL TPS
     • Single app performance: 640 Gbps, 1.9M SSL TPS
     • Scale to 200 LBs: 4 Tbps, 12M SSL TPS
     Centralized API: management, monitoring
     (Avi Networks Proprietary and Confidential, 2017)
  16. Beyond Google Compute Engine: Any Data Center or Public Cloud
     (Diagram: Clients → GCP router → Load Balancers (with Controller) → Application Instances)
  17. DEMO: Real-time Insights for Elastic Application Services
  18. The New Rules of Elastic, Cost-effective Load Balancing
     1. Take advantage of web-scale architectures
     2. Use analytics-driven decisions for on-demand elasticity
     3. Automate L4–L7 services with APIs
     4. Leverage load balancers for application intelligence
     5. Eliminate hardware overprovisioning
  19. Thank You!
     Anurag Palsule
     Avi Networks (India) Pvt Ltd., JB House, 110, 4th Cross, 5th Block, Koramangala Industrial Layout, Bangalore 560 095, Karnataka
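The two-tier chaining on slide 5 can be sketched in Python. Everything here is illustrative: the LB names, backend pools, and hashing choices are assumptions, not details from the deck.

```python
import hashlib
from collections import defaultdict

# Hypothetical two-tier fabric: all names and pools are made up for illustration.
TIER2_LBS = ["l7-lb-a", "l7-lb-b"]                             # Tier 2: L7 load balancers
POOLS = {"/api": ["app-1", "app-2"], "/": ["web-1", "web-2"]}  # path prefix -> backend pool

_rr = defaultdict(int)  # per-pool round-robin counters for Tier 2

def tier1_pick(src_ip: str, src_port: int) -> str:
    """Tier 1 (L4): hash the TCP flow source to pick a Tier 2 LB."""
    h = int(hashlib.sha256(f"{src_ip}:{src_port}".encode()).hexdigest(), 16)
    return TIER2_LBS[h % len(TIER2_LBS)]

def tier2_pick(path: str) -> str:
    """Tier 2 (L7): round-robin inside the pool with the longest matching prefix."""
    prefix = max((p for p in POOLS if path.startswith(p)), key=len)
    i = _rr[prefix]
    _rr[prefix] += 1
    return POOLS[prefix][i % len(POOLS[prefix])]
```

The con listed on slide 5 shows up directly in this sketch: every request funnels through `tier1_pick`, so the fabric can never exceed what the Tier 1 hop can forward.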
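The DNS approach on slide 6 amounts to rotating the A-record answer on every query. A minimal sketch, using a made-up hostname and RFC 5737 documentation IPs:

```python
from collections import deque

class RoundRobinDNS:
    """Toy authoritative resolver: rotate the A-record list on each query
    so successive clients connect to different load balancer IPs."""
    def __init__(self, records):
        self.zones = {host: deque(ips) for host, ips in records.items()}

    def resolve(self, host):
        ips = self.zones[host]
        answer = list(ips)   # full rotated A-record set for this query
        ips.rotate(-1)       # the next query leads with a different IP
        return answer

dns = RoundRobinDNS({"app.example.com": ["203.0.113.1", "203.0.113.2", "203.0.113.3"]})
```

Slide 6's con is visible here too: once a downstream resolver caches an answer, the rotation stops helping until the TTL expires, which is why such setups usually pair round-robin records with low TTLs.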
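Slide 7's router behavior, flow-based ECMP, keys the next-hop choice on the connection 5-tuple so that every packet of a flow reaches the same load balancer. A sketch under stated assumptions: the hash function and hop names are mine, and real routers use their own hardware hash.

```python
import hashlib

def ecmp_next_hop(next_hops, proto, src_ip, src_port, dst_ip, dst_port):
    """Flow-based ECMP: hash the 5-tuple, then index into the next-hop list.
    The same flow always maps to the same hop, so TCP state on the chosen
    load balancer stays valid for the life of the connection."""
    key = f"{proto}|{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    h = int(hashlib.sha256(key).hexdigest(), 16)
    return next_hops[h % len(next_hops)]

# Slide 7 notes most routers support at least 64 next hops.
hops = [f"lb-{i}" for i in range(64)]
```

Note a limitation of plain modulo hashing: adding or removing a hop changes `len(next_hops)` and reshuffles most flows, which is why production anycast fabrics often layer consistent hashing on top.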
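The test clients on slide 12 run `ab` (ApacheBench). The deck's actual scripts live in the public repo it mentions, which is not shown here, so this is only a hedged guess at the shape of one client invocation, built in Python:

```python
import shlex

def ab_command(url, requests=10_000, concurrency=100):
    """Build an ApacheBench invocation for one traffic-generator VM.
    -n is the total request count and -c the concurrency; the numbers
    here are illustrative, not the deck's actual parameters."""
    return f"ab -n {requests} -c {concurrency} {shlex.quote(url)}"
```

With 320 client VMs each driving concurrent `ab` runs against the fabric VIP, the aggregate load reaches the 1M SSL TPS target from slide 13.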
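The scale-out numbers on slide 15 follow from multiplying the per-LB figures, assuming linear horizontal scaling. A quick arithmetic check:

```python
def fabric_capacity(n_lbs, gbps_per_lb=20, tps_per_lb=60_000):
    """Aggregate fabric capacity, using slide 15's per-LB figures
    (a 24-core LB: 20 Gbps, 60,000 SSL TPS) and assuming the fabric
    scales linearly with the number of LBs."""
    return n_lbs * gbps_per_lb, n_lbs * tps_per_lb

# 200 LBs -> 4,000 Gbps (4 Tbps) and 12,000,000 SSL TPS, matching the slide.
full_fabric = fabric_capacity(200)
# 32 LBs -> 640 Gbps and 1,920,000 SSL TPS, i.e. the "single app" 640 Gbps / ~1.9M TPS line.
single_app = fabric_capacity(32)
```

The same arithmetic covers slide 13: 1M TPS spread over 40 service engines is 25,000 TPS per engine, well under the 60,000 TPS a 24-core LB is rated for on slide 15.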