How does the Cloud Foundry
Diego Project Run at Scale?
and updates on .NET Support
Who’s this guy?
• Amit Gupta
• https://akgupta.ca
• @amitkgupta84
Who’s this guy?
• Berkeley math grad school… dropout
• Rails consulting… deserter
• now I do BOSH, Cloud Foundry, Diego, etc.
Testing Diego Performance at Scale
• current Diego architecture
• performance testing approach
• test specifications
• test implementation and tools
• results
• bottom line
• next steps
Current Diego Architecture
[figure: Diego architecture diagram]
Current Diego Architecture
What’s new-ish?
• consul for service discovery
• receptor (API) to decouple from CC
• SSH proxy for container access
• NATS-less auction
• garden-windows for .NET applications
Current Diego Architecture
Main components:
• etcd ephemeral data store
• consul service discovery
• receptor Diego API
• nsync sync CC desired state w/Diego
• route-emitter sync with gorouter
• converger health mgmt & consistency
• garden containerization
• rep sync garden actual state w/Diego
• auctioneer workload scheduling
Performance Testing Approach
• full end-to-end tests
• do a lot of stuff:
– is it correct, is it performant?
• kill a lot of stuff:
– is it correct, is it performant?
• emit logs and metrics (business as usual)
• plot & visualize
• fix stuff, repeat at higher scale*
Test Specifications
[figure: four test-specification workloads (#1-#4), run at scale factors ×1, ×2, ×5, ×10, and a general ×n]
Test Specifications
• Diego does tasks and long-running processes
• launch 10n, …, 400n tasks:
– workload distribution?
– scheduling time distribution?
– running time distribution?
– success rate?
– growth rate?
• launch 10n, …, 400n-instance LRP:
– same questions…
Test Specifications
• Diego+CF stages and runs apps
• > cf push
• upload source bits
• fetch buildpack and stage droplet (task)
• fetch droplet and run app (LRP)
• dynamic routing
• streaming logs
Test Specifications
• bring up n nodes in parallel
– from each node, push a apps in parallel
– from each node, repeat this for r rounds
• a is always ≈ 20
• r is always = 40
• n starts out = 1
Test Specifications
• the pushed apps have varying characteristics:
– 1-4 instances
– 128M-1024M memory
– 1M-200M source code payload
– 1-20 log lines/second
– crash never vs. every 30 s
Test Specifications
• starting with n=1:
– app instances ≈ 1k
– instances/cell ≈ 100
– memory utilization across cells ≈ 90%
– app instances crashing (by-design) ≈ 10%
Test Specifications
• evaluate:
– workload distribution
– success rate of pushes
– success rate of app routability
– times for all the things in the push lifecycles
– crash recovery behaviour
– all the metrics!
Test Specifications
• kill 10% of cells
– watch metrics for recovery behaviour
• kill moar cells… and etcd
– does system handle excess load gracefully?
• revive everything with > bosh cck
– does system recover gracefully…
– with no further manual intervention?
Test Specifications
– Figure Out What’s Broke –
– Fix Stuff –
– Move On: Scale Up & Repeat –
Test Implementation and Tools
• S3 log, graph, plot backups
• ginkgo & gomega testing DSL
• BOSH parallel test-lab deploys
• tmux & ssh run test suites remotely
• papertrail log archives
• datadog metrics visualizations
• cicerone (custom) log visualizations
Results
400 tasks’ lifecycle timelines, dominated by container creation
Results
Maybe some cells’ gardens were running slower?
Results
Grouping by cell shows uniform container creation slowdown
Results
So that’s not it…
Also, what’s with the blue steps?
Let’s visualize logs a couple more ways
Then take stock of the questions raised
Results
Let’s just look at scheduling (ignore container creation, etc.)
Results
Scheduling again, grouped by which API node handled the request
Results
And how about some histograms of all the things?
Results
From the 400-task request from “Fezzik”:
• only 3-4 (out of 10) API nodes handle reqs?
• recording task reqs take increasing time?
• submitting auction reqs sometimes slow?
• later auctions take so long?
• outliers wtf?
• container creation takes increasing time?
Results
• only 3-4 (out of 10) API nodes handle reqs?
– Go's resolver coalesces concurrent DNS lookups for the same host and hands
the one response to every waiting caller; the whole set of tasks therefore
shared only 3-4 distinct API endpoint lookups
• recording task reqs take increasing time?
– API servers use an etcd client with throttling on # of concurrent
requests
• submitting auction reqs sometimes slow?
– auction requests require the API node to look up the auctioneer address
in etcd, using the same throttled etcd client
Results
• later auctions take so long?
– reps were taking longer to report their state to auctioneer,
because they were making expensive calls to garden,
sequentially, to determine current resource usage
• outliers wtf?
– combination of missing logs due to papertrail lossiness, +
cicerone handling missing data poorly
• container creation takes increasing time?
– garden team tasked with investigation
Results
Problems can come from:
• our software
– throttled etcd client
– sequential calls to garden
• software we consume
– garden container creation
• “experiment apparatus” (tools and services):
– papertrail lossiness
– cicerone sloppiness
• language runtime
– Golang’s DNS behaviour
Results
Fixed what we could control, and now it’s all garden
Results
Okay, so far, that’s just been
[figure: the four-specification grid across scales ×1, ×2, ×5, ×10, with only the first runs complete]
Results
Next, the timelines of pushing 1k app instances
Results
• for the fastest pushes
– dominated by red, blue, gold
– i.e. upload source & CC emit “start”, staging process,
upload droplet
• pushes get slower
– growth in green, light blue, fuchsia, teal
– i.e. schedule staging, create staging container,
schedule running, create running container
• main concern: why is scheduling slowing down?
Results
• we had a theory (blame app log chattiness)
• reproduced experiment in BOSH-Lite
– with chattiness turned on
– with chattiness turned off
• appeared to work better
• tried it on AWS
• no improvement ☹
Results
• spelunked through more logs
• SSH’d onto nodes and tried hitting services
• eventually pinpointed it:
– auctioneer asks cells for state
– cell reps ask garden for usage
– garden gets container disk usage → bottleneck
Results
Garden stops sending disk usage stats, scheduling time disappears
Results
Let’s let things stew between
and
Results
Right after all app pushes, decent workload distribution
Results
… an hour later, something pretty bad happened
Results
• cells heartbeat their presence to etcd
• if ttl expires, converger reschedules LRPs
• cells may reappear after their workloads have
been reassigned
• they remain underutilized
• but why do cells disappear in the first place?
• added more logging, hope to catch in n=2 round
Results
With one lingering question about cell disappearance, on to n=2
[figure: progress grid with the ×1 round's specifications checked off and one open question, before starting ×2]
Results
With 800 concurrent task reqs, found a garden container-cleanup bug
Results
With an 800-instance LRP, found the API node scheduling requests serially
Results
• we added a story to the garden backlog
• the serial request issue was an easy fix
• then, with n=2 parallel test-lab nodes, we
pushed 2x the apps
– things worked correctly
– system was performant as a whole
– but individual components showed signs of scaling
issues
Results
Our “bulk durations” doubled
Results
• nsync fetches state from CC and etcd to make
sure CC desired state is reflected in Diego
• converger fetches desired and actual state
from etcd to make sure things are consistent
• route-emitter fetches state from etcd to keep
gorouter in sync
• bulk loop times doubled from n=1
Results
… and this happened again
Results
– the etcd and consul story –
Results
Fast-forward to today
[figure: progress grid with the ×1, ×2, and ×5 rounds checked off, each carrying the one lingering question, and the ×10 round partially complete with open questions]
Bottom Line
At the highest scale:
• 4000 concurrent tasks ✓
• 4000-instance LRP ✓
• 10k “real app” instances @ 100 instances/cell:
– etcd (ephemeral data store) ✓
– consul (service discovery) ? (… it’s a long story)
– receptor (Diego API) ? (bulk JSON)
– nsync (CC desired state sync) ? (because of receptor)
– route-emitter (gorouter sync) ? (because of receptor)
– garden (containerizer) ✓
– rep (garden actual state sync) ✓
– auctioneer (scheduler) ✓
Next Steps
• Security
– mutual SSL between all components
– encrypting data-at-rest
• Versioning
– handle breaking API changes gracefully
– production hardening
• Optimize data models
– hand-in-hand with versioning
– shrink payload for bulk reqs
– investigate faster encodings; protobufs > JSON
– initial experiments show 100x speedup
Updates on .NET Support
Updates on .NET Support
• what’s currently supported?
– ASP.NET MVC
– nothing too exotic
– most CF/Diego features, e.g. security groups
– Visual Studio plugin, similar to the Eclipse CF plugin for
Java
• what are the limitations?
– some newer Diego features, e.g. SSH
– in α/β stage, dev-only
Updates on .NET Support
• what’s coming up?
– make it easier to deploy Windows cell
– more Visual Studio plugin features
– hardening testing/CI
• further down the line?
– remote debugging
– the “Spring experience”
Updates on .NET Support
• shout outs
– CenturyLink
– HP
• feedback & questions?
– Mark Kropf (PM): mkropf@pivotal.io
– David Morhovich (Lead): dmorhovich@pivotal.io
More Related Content

What's hot

Effective testing for spark programs Strata NY 2015
Effective testing for spark programs   Strata NY 2015Effective testing for spark programs   Strata NY 2015
Effective testing for spark programs Strata NY 2015
Holden Karau
 

What's hot (20)

Resilient Applications with Akka Persistence - Scaladays 2014
Resilient Applications with Akka Persistence - Scaladays 2014Resilient Applications with Akka Persistence - Scaladays 2014
Resilient Applications with Akka Persistence - Scaladays 2014
 
Building production spark streaming applications
Building production spark streaming applicationsBuilding production spark streaming applications
Building production spark streaming applications
 
Evolving Streaming Applications
Evolving Streaming ApplicationsEvolving Streaming Applications
Evolving Streaming Applications
 
Chris Fregly, Research Scientist, PipelineIO at MLconf ATL 2016
Chris Fregly, Research Scientist, PipelineIO at MLconf ATL 2016Chris Fregly, Research Scientist, PipelineIO at MLconf ATL 2016
Chris Fregly, Research Scientist, PipelineIO at MLconf ATL 2016
 
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark - Demi Ben...
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark - Demi Ben...S3, Cassandra or Outer Space? Dumping Time Series Data using Spark - Demi Ben...
S3, Cassandra or Outer Space? Dumping Time Series Data using Spark - Demi Ben...
 
Benchmarking at Parse
Benchmarking at ParseBenchmarking at Parse
Benchmarking at Parse
 
Flink Forward SF 2017: Feng Wang & Zhijiang Wang - Runtime Improvements in Bl...
Flink Forward SF 2017: Feng Wang & Zhijiang Wang - Runtime Improvements in Bl...Flink Forward SF 2017: Feng Wang & Zhijiang Wang - Runtime Improvements in Bl...
Flink Forward SF 2017: Feng Wang & Zhijiang Wang - Runtime Improvements in Bl...
 
Patterns of-streaming-applications-qcon-2018-monal-daxini
Patterns of-streaming-applications-qcon-2018-monal-daxiniPatterns of-streaming-applications-qcon-2018-monal-daxini
Patterns of-streaming-applications-qcon-2018-monal-daxini
 
Journey into Reactive Streams and Akka Streams
Journey into Reactive Streams and Akka StreamsJourney into Reactive Streams and Akka Streams
Journey into Reactive Streams and Akka Streams
 
Reactive Streams / Akka Streams - GeeCON Prague 2014
Reactive Streams / Akka Streams - GeeCON Prague 2014Reactive Streams / Akka Streams - GeeCON Prague 2014
Reactive Streams / Akka Streams - GeeCON Prague 2014
 
Async – react, don't wait
Async – react, don't waitAsync – react, don't wait
Async – react, don't wait
 
Big Data Day LA 2016/ Big Data Track - Portable Stream and Batch Processing w...
Big Data Day LA 2016/ Big Data Track - Portable Stream and Batch Processing w...Big Data Day LA 2016/ Big Data Track - Portable Stream and Batch Processing w...
Big Data Day LA 2016/ Big Data Track - Portable Stream and Batch Processing w...
 
Atlanta Spark User Meetup 09 22 2016
Atlanta Spark User Meetup 09 22 2016Atlanta Spark User Meetup 09 22 2016
Atlanta Spark User Meetup 09 22 2016
 
2014 akka-streams-tokyo-japanese
2014 akka-streams-tokyo-japanese2014 akka-streams-tokyo-japanese
2014 akka-streams-tokyo-japanese
 
Flink Forward SF 2017: Dean Wampler - Streaming Deep Learning Scenarios with...
Flink Forward SF 2017: Dean Wampler -  Streaming Deep Learning Scenarios with...Flink Forward SF 2017: Dean Wampler -  Streaming Deep Learning Scenarios with...
Flink Forward SF 2017: Dean Wampler - Streaming Deep Learning Scenarios with...
 
Reactive Streams: Handling Data-Flow the Reactive Way
Reactive Streams: Handling Data-Flow the Reactive WayReactive Streams: Handling Data-Flow the Reactive Way
Reactive Streams: Handling Data-Flow the Reactive Way
 
Streaming all the things with akka streams
Streaming all the things with akka streams   Streaming all the things with akka streams
Streaming all the things with akka streams
 
DDDing Tools = Akka Persistence
DDDing Tools = Akka PersistenceDDDing Tools = Akka Persistence
DDDing Tools = Akka Persistence
 
GNW01: In-Memory Processing for Databases
GNW01: In-Memory Processing for DatabasesGNW01: In-Memory Processing for Databases
GNW01: In-Memory Processing for Databases
 
Effective testing for spark programs Strata NY 2015
Effective testing for spark programs   Strata NY 2015Effective testing for spark programs   Strata NY 2015
Effective testing for spark programs Strata NY 2015
 

Viewers also liked

Cloud Foundry Diego: Modular and Extensible Substructure for Microservices
Cloud Foundry Diego: Modular and Extensible Substructure for MicroservicesCloud Foundry Diego: Modular and Extensible Substructure for Microservices
Cloud Foundry Diego: Modular and Extensible Substructure for Microservices
Matt Stine
 

Viewers also liked (11)

Akka in 10 minutes
Akka in 10 minutesAkka in 10 minutes
Akka in 10 minutes
 
Cloud Foundry loves Docker
Cloud Foundry loves DockerCloud Foundry loves Docker
Cloud Foundry loves Docker
 
Containers in the Cloud
Containers in the CloudContainers in the Cloud
Containers in the Cloud
 
Diego container scheduler
Diego container schedulerDiego container scheduler
Diego container scheduler
 
BOSH deploys distributed systems, and Diego runs any containers
BOSH deploys distributed systems, and Diego runs any containersBOSH deploys distributed systems, and Diego runs any containers
BOSH deploys distributed systems, and Diego runs any containers
 
Cloud Foundry Diego: The New Cloud Runtime - CloudOpen Europe Talk 2015
Cloud Foundry Diego: The New Cloud Runtime - CloudOpen Europe Talk 2015Cloud Foundry Diego: The New Cloud Runtime - CloudOpen Europe Talk 2015
Cloud Foundry Diego: The New Cloud Runtime - CloudOpen Europe Talk 2015
 
Message queues
Message queuesMessage queues
Message queues
 
Cloud Foundry Diego: Modular and Extensible Substructure for Microservices
Cloud Foundry Diego: Modular and Extensible Substructure for MicroservicesCloud Foundry Diego: Modular and Extensible Substructure for Microservices
Cloud Foundry Diego: Modular and Extensible Substructure for Microservices
 
PCF1: Cloud Foundry Diego ( Predix Transform 2016)
PCF1: Cloud Foundry Diego ( Predix Transform 2016)PCF1: Cloud Foundry Diego ( Predix Transform 2016)
PCF1: Cloud Foundry Diego ( Predix Transform 2016)
 
Who Lives in Our Garden?
Who Lives in Our Garden?Who Lives in Our Garden?
Who Lives in Our Garden?
 
Apache kafka
Apache kafkaApache kafka
Apache kafka
 

Similar to How does the Cloud Foundry Diego Project Run at Scale, and Updates on .NET Support

Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RACPerformance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Kristofferson A
 
Rackspace: Email's Solution for Indexing 50K Documents per Second: Presented ...
Rackspace: Email's Solution for Indexing 50K Documents per Second: Presented ...Rackspace: Email's Solution for Indexing 50K Documents per Second: Presented ...
Rackspace: Email's Solution for Indexing 50K Documents per Second: Presented ...
Lucidworks
 

Similar to How does the Cloud Foundry Diego Project Run at Scale, and Updates on .NET Support (20)

Using Riak for Events storage and analysis at Booking.com
Using Riak for Events storage and analysis at Booking.comUsing Riak for Events storage and analysis at Booking.com
Using Riak for Events storage and analysis at Booking.com
 
How to Make Norikra Perfect
How to Make Norikra PerfectHow to Make Norikra Perfect
How to Make Norikra Perfect
 
Scaling tappsi
Scaling tappsiScaling tappsi
Scaling tappsi
 
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RACPerformance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
 
Advanced Benchmarking at Parse
Advanced Benchmarking at ParseAdvanced Benchmarking at Parse
Advanced Benchmarking at Parse
 
Rackspace: Email's Solution for Indexing 50K Documents per Second: Presented ...
Rackspace: Email's Solution for Indexing 50K Documents per Second: Presented ...Rackspace: Email's Solution for Indexing 50K Documents per Second: Presented ...
Rackspace: Email's Solution for Indexing 50K Documents per Second: Presented ...
 
Scaling habits of ASP.NET
Scaling habits of ASP.NETScaling habits of ASP.NET
Scaling habits of ASP.NET
 
3.2 Streaming and Messaging
3.2 Streaming and Messaging3.2 Streaming and Messaging
3.2 Streaming and Messaging
 
Next Gen Big Data Analytics with Apache Apex
Next Gen Big Data Analytics with Apache Apex Next Gen Big Data Analytics with Apache Apex
Next Gen Big Data Analytics with Apache Apex
 
Hadoop Summit SJ 2016: Next Gen Big Data Analytics with Apache Apex
Hadoop Summit SJ 2016: Next Gen Big Data Analytics with Apache ApexHadoop Summit SJ 2016: Next Gen Big Data Analytics with Apache Apex
Hadoop Summit SJ 2016: Next Gen Big Data Analytics with Apache Apex
 
Capacity Planning for fun & profit
Capacity Planning for fun & profitCapacity Planning for fun & profit
Capacity Planning for fun & profit
 
Internals of Presto Service
Internals of Presto ServiceInternals of Presto Service
Internals of Presto Service
 
Intro to Apache Apex - Next Gen Platform for Ingest and Transform
Intro to Apache Apex - Next Gen Platform for Ingest and TransformIntro to Apache Apex - Next Gen Platform for Ingest and Transform
Intro to Apache Apex - Next Gen Platform for Ingest and Transform
 
Tale of two streaming frameworks- Apace Storm & Apache Flink
Tale of two streaming frameworks- Apace Storm & Apache FlinkTale of two streaming frameworks- Apace Storm & Apache Flink
Tale of two streaming frameworks- Apace Storm & Apache Flink
 
Tale of two streaming frameworks (Karthik D - Walmart)
Tale of two streaming frameworks (Karthik D - Walmart)Tale of two streaming frameworks (Karthik D - Walmart)
Tale of two streaming frameworks (Karthik D - Walmart)
 
John adams talk cloudy
John adams   talk cloudyJohn adams   talk cloudy
John adams talk cloudy
 
Apache Big Data 2016: Next Gen Big Data Analytics with Apache Apex
Apache Big Data 2016: Next Gen Big Data Analytics with Apache ApexApache Big Data 2016: Next Gen Big Data Analytics with Apache Apex
Apache Big Data 2016: Next Gen Big Data Analytics with Apache Apex
 
Advanced Operations
Advanced OperationsAdvanced Operations
Advanced Operations
 
Diagnosing Problems in Production (Nov 2015)
Diagnosing Problems in Production (Nov 2015)Diagnosing Problems in Production (Nov 2015)
Diagnosing Problems in Production (Nov 2015)
 
Kubernetes at NU.nl (Kubernetes meetup 2019-09-05)
Kubernetes at NU.nl   (Kubernetes meetup 2019-09-05)Kubernetes at NU.nl   (Kubernetes meetup 2019-09-05)
Kubernetes at NU.nl (Kubernetes meetup 2019-09-05)
 

Recently uploaded

+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
Health
 
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
VictoriaMetrics
 
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
masabamasaba
 
%+27788225528 love spells in Huntington Beach Psychic Readings, Attraction sp...
%+27788225528 love spells in Huntington Beach Psychic Readings, Attraction sp...%+27788225528 love spells in Huntington Beach Psychic Readings, Attraction sp...
%+27788225528 love spells in Huntington Beach Psychic Readings, Attraction sp...
masabamasaba
 
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
masabamasaba
 
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
masabamasaba
 
Abortion Pill Prices Tembisa [(+27832195400*)] 🏥 Women's Abortion Clinic in T...
Abortion Pill Prices Tembisa [(+27832195400*)] 🏥 Women's Abortion Clinic in T...Abortion Pill Prices Tembisa [(+27832195400*)] 🏥 Women's Abortion Clinic in T...
Abortion Pill Prices Tembisa [(+27832195400*)] 🏥 Women's Abortion Clinic in T...
Medical / Health Care (+971588192166) Mifepristone and Misoprostol tablets 200mg
 

Recently uploaded (20)

+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
 
%in Stilfontein+277-882-255-28 abortion pills for sale in Stilfontein
%in Stilfontein+277-882-255-28 abortion pills for sale in Stilfontein%in Stilfontein+277-882-255-28 abortion pills for sale in Stilfontein
%in Stilfontein+277-882-255-28 abortion pills for sale in Stilfontein
 
%in Harare+277-882-255-28 abortion pills for sale in Harare
%in Harare+277-882-255-28 abortion pills for sale in Harare%in Harare+277-882-255-28 abortion pills for sale in Harare
%in Harare+277-882-255-28 abortion pills for sale in Harare
 
8257 interfacing 2 in microprocessor for btech students
8257 interfacing 2 in microprocessor for btech students8257 interfacing 2 in microprocessor for btech students
8257 interfacing 2 in microprocessor for btech students
 
WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...
WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...
WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...
 
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
 
Architecture decision records - How not to get lost in the past
Architecture decision records - How not to get lost in the pastArchitecture decision records - How not to get lost in the past
Architecture decision records - How not to get lost in the past
 
Microsoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdfMicrosoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdf
 
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
 
%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisa%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisa
 
%+27788225528 love spells in Huntington Beach Psychic Readings, Attraction sp...
%+27788225528 love spells in Huntington Beach Psychic Readings, Attraction sp...%+27788225528 love spells in Huntington Beach Psychic Readings, Attraction sp...
%+27788225528 love spells in Huntington Beach Psychic Readings, Attraction sp...
 
%in Midrand+277-882-255-28 abortion pills for sale in midrand
%in Midrand+277-882-255-28 abortion pills for sale in midrand%in Midrand+277-882-255-28 abortion pills for sale in midrand
%in Midrand+277-882-255-28 abortion pills for sale in midrand
 
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park %in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
 
Announcing Codolex 2.0 from GDK Software
Announcing Codolex 2.0 from GDK SoftwareAnnouncing Codolex 2.0 from GDK Software
Announcing Codolex 2.0 from GDK Software
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
 
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
 
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
 
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
 
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
 
Abortion Pill Prices Tembisa [(+27832195400*)] 🏥 Women's Abortion Clinic in T...
Abortion Pill Prices Tembisa [(+27832195400*)] 🏥 Women's Abortion Clinic in T...Abortion Pill Prices Tembisa [(+27832195400*)] 🏥 Women's Abortion Clinic in T...
Abortion Pill Prices Tembisa [(+27832195400*)] 🏥 Women's Abortion Clinic in T...
 

How does the Cloud Foundry Diego Project Run at Scale, and Updates on .NET Support

  • 1. How does the Cloud Foundry Diego Project Run at Scale? and updates on .NET Support
  • 2. Who’s this guy? • Amit Gupta • https://akgupta.ca • @amitkgupta84
  • 3. Who’s this guy? • Berkeley math grad school… dropout • Rails consulting… deserter • now I do BOSH, Cloud Foundry, Diego, etc.
  • 4. Testing Diego Performance at Scale • current Diego architecture • performance testing approach • test specifications • test implementation and tools • results • bottom line • next steps
  • 6. Current Diego Architecture What’s new-ish? • consul for service discovery • receptor (API) to decouple from CC • SSH proxy for container access • NATS-less auction • garden-windows for .NET applications
  • 7. Current Diego Architecture Main components: • etcd ephemeral data store • consul service discovery • receptor Diego API • nsync sync CC desired state w/Diego • route-emitter sync with gorouter • converger health mgmt & consistency • garden containerization • rep sync garden actual state w/Diego • auctioneer workload scheduling
  • 8. Performance Testing Approach • full end-to-end tests • do a lot of stuff: – is it correct, is it performant? • kill a lot of stuff: – is it correct, is it performant? • emit logs and metrics (business as usual) • plot & visualize • fix stuff, repeat at higher scale*
  • 10. Test Specifications #1: #2: #3: #4: x 1 #1: #2: #3: #4: x 2 #1: #2: #3: #4: x 5 #1: #2: #3: #4: x 10 n
  • 11. Test Specifications • Diego does tasks and long-running processes • launch 10n, …, 400n tasks: – workload distribution? – scheduling time distribution? – running time distribution? – success rate? – growth rate? • launch 10n, …, 400n-instance LRP: – same questions…
  • 12. Test Specifications • Diego+CF stages and runs apps • > cf push • upload source bits • fetch buildpack and stage droplet (task) • fetch droplet and run app (LRP) • dynamic routing • streaming logs
  • 13. Test Specifications • bring up n nodes in parallel – from each node, push a apps in parallel – from each node, repeat this for r rounds • a is always ≈ 20 • r is always = 40 • n starts out = 1
  • 14. Test Specifications • the pushed apps have varying characteristics: – 1-4 instances – 128M-1024M memory – 1M-200M source code payload – 1-20 log lines/second – crash never vs. every 30 s
  • 15. Test Specifications • starting with n=1: – app instances ≈ 1k – instances/cell ≈ 100 – memory utilization across cells ≈ 90% – app instances crashing (by-design) ≈ 10%
  • 16. Test Specifications • evaluate: – workload distribution – success rate of pushes – success rate of app routability – times for all the things in the push lifecycles – crash recovery behaviour – all the metrics!
  • 17. Test Specifications • kill 10% of cells – watch metrics for recovery behaviour • kill moar cells… and etcd – does system handle excess load gracefully? • revive everything with > bosh cck – does system recover gracefully… – with no further manual intervention?
  • 18. Test Specifications – Figure Out What’s Broke – – Fix Stuff – – Move On Scale Up & Repeat –
  • 19. Test Implementation and Tools • S3 log, graph, plot backups • ginkgo & gomega testing DSL • BOSH parallel test-lab deploys • tmux & ssh run test suites remotely • papertrail log archives • datadog metrics visualizations • cicerone (custom) log visualizations
  • 20. Results 400 tasks’ lifecycle timelines, dominated by container creation
  • 21. Results Maybe some cells’ gardens were running slower?
  • 22. Results Grouping by cell shows uniform container creation slowdown
  • 23. Results So that’s not it… Also, what’s with the blue steps? Let’s visualize logs a couple more ways Then take stock of the questions raised
  • 24. Results Let’s just look at scheduling (ignore container creation, etc.)
  • 25. Results Scheduling again, grouped by which API node handled the request
  • 26. Results And how about some histograms of all the things?
  • 27. Results From the 400-task request from “Fezzik”: • only 3-4 (out of 10) API nodes handle reqs? • recording task reqs take increasing time? • submitting auction reqs sometimes slow? • later auctions take so long? • outliers wtf? • container creation takes increasing time?
  • 28. Results • only 3-4 (out of 10) API nodes handle reqs? – when multiple concurrent requests need the same DNS lookup, Go coalesces them and returns one DNS response to all of them; this resulted in only 3-4 API endpoints being resolved for the whole set of tasks • recording task reqs take increasing time? – API servers use an etcd client that throttles the number of concurrent requests • submitting auction reqs sometimes slow? – auction requests require the API node to look up the auctioneer address in etcd, using the same throttled etcd client
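That DNS coalescing behaviour can be illustrated with a tiny "singleflight"-style re-implementation (this is a sketch of the mechanism, not Go's actual resolver code; the hostname and addresses are invented):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// coalescer mimics lookup coalescing: while one DNS query for a name is in
// flight, concurrent callers wait on it and all receive the same answer —
// so a burst of task requests can all resolve to the same few endpoints.
type coalescer struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	done    chan struct{}
	result  string
	waiters int32
}

func (c *coalescer) Do(key string, fn func() string) string {
	c.mu.Lock()
	if inflight, ok := c.calls[key]; ok {
		atomic.AddInt32(&inflight.waiters, 1)
		c.mu.Unlock()
		<-inflight.done // piggyback on the in-flight lookup
		return inflight.result
	}
	cl := &call{done: make(chan struct{})}
	c.calls[key] = cl
	c.mu.Unlock()

	cl.result = fn() // only the first caller actually resolves
	close(cl.done)

	c.mu.Lock()
	delete(c.calls, key)
	c.mu.Unlock()
	return cl.result
}

func main() {
	c := &coalescer{calls: map[string]*call{}}
	var lookups int32
	release := make(chan struct{})
	resolve := func() string {
		atomic.AddInt32(&lookups, 1)
		<-release // hold the lookup open so other callers pile up behind it
		return "10.0.0.3"
	}

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if c.Do("receptor.example.internal", resolve) != "10.0.0.3" {
				panic("unexpected answer")
			}
		}()
	}
	for { // wait until 99 callers are parked behind the one real lookup
		c.mu.Lock()
		cl := c.calls["receptor.example.internal"]
		var n int32
		if cl != nil {
			n = atomic.LoadInt32(&cl.waiters)
		}
		c.mu.Unlock()
		if n == 99 {
			break
		}
		time.Sleep(time.Millisecond)
	}
	close(release)
	wg.Wait()
	fmt.Println(lookups) // 1: one DNS query answered all 100 callers
}
```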
  • 29. Results • later auctions take so long? – reps were taking longer to report their state to auctioneer, because they were making expensive calls to garden, sequentially, to determine current resource usage • outliers wtf? – combination of missing logs due to papertrail lossiness, + cicerone handling missing data poorly • container creation takes increasing time? – garden team tasked with investigation
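The shape of the fix for the slow rep state reports — fan the expensive per-container calls out instead of making them one at a time — looks roughly like this (a sketch; `fetchAll` and the injected `getUsage` are invented names, and the real code would hit the garden API):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// usage stands in for the result of an expensive per-container garden call.
type usage struct {
	container string
	diskMB    int
}

// fetchAll queries usage for every container concurrently, so total time is
// roughly one call's latency instead of latency * number of containers.
func fetchAll(containers []string, getUsage func(string) usage) []usage {
	results := make([]usage, len(containers))
	var wg sync.WaitGroup
	for i, c := range containers {
		wg.Add(1)
		go func(i int, c string) {
			defer wg.Done()
			results[i] = getUsage(c)
		}(i, c)
	}
	wg.Wait()
	return results
}

func main() {
	containers := []string{"c1", "c2", "c3", "c4"}
	slow := func(c string) usage {
		time.Sleep(50 * time.Millisecond) // simulate an expensive garden call
		return usage{container: c, diskMB: 100}
	}
	start := time.Now()
	all := fetchAll(containers, slow)
	// all four calls overlap, so this finishes in ~50ms, not ~200ms
	fmt.Println(len(all), time.Since(start) < 200*time.Millisecond)
}
```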
  • 30. Results Problems can come from: • our software – throttled etcd client – sequential calls to garden • software we consume – garden container creation • “experiment apparatus” (tools and services): – papertrail lossiness – cicerone sloppiness • language runtime – Golang’s DNS behaviour
  • 31. Results Fixed what we could control, and now it’s all garden
  • 32. Results Okay, so far, that’s just been test #1 at the ×1 scale [slide shows the ×1/×2/×5/×10 progress matrix for tests #1–#4]
  • 33. Results Next, the timelines of pushing 1k app instances
  • 34. Results • for the fastest pushes – dominated by red, blue, gold – i.e. upload source & CC emit “start”, staging process, upload droplet • pushes get slower – growth in green, light blue, fuchsia, teal – i.e. schedule staging, create staging container, schedule running, create running container • main concern: why is scheduling slowing down?
  • 35. Results • we had a theory (blame app log chattiness) • reproduced experiment in BOSH-Lite – with chattiness turned on – with chattiness turned off • appeared to work better • tried it on AWS • no improvement 
  • 36. Results • spelunked through more logs • SSH’d onto nodes and tried hitting services • eventually pinpointed it: – auctioneer asks cells for state – cell reps ask garden for usage – garden getting container disk usage was the bottleneck
  • 37. Results Garden stops sending disk usage stats, scheduling time disappears
  • 38. Results Let’s let things stew between [two test icons shown on the slide]
  • 39. Results Right after all app pushes, decent workload distribution
  • 40. Results … an hour later, something pretty bad happened
  • 41. Results • cells heartbeat their presence to etcd • if ttl expires, converger reschedules LRPs • cells may reappear after their workloads have been reassigned • they remain underutilized • but why do cells disappear in the first place? • added more logging, hope to catch in n=2 round
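The heartbeat mechanism in question can be sketched as a presence loop: re-register the cell's record on every tick so its TTL never expires, and count any misses (a sketch with injected ticker and store write — `maintainPresence` is an invented name; in Diego the store was etcd and the interval would be well under the TTL, e.g. TTL/2):

```go
package main

import "fmt"

// maintainPresence re-registers a cell's presence record on every tick.
// If enough heartbeats fail and the record's TTL expires, the converger
// treats the cell as gone and reschedules its LRPs elsewhere — even if
// the cell later reappears, now underutilized.
func maintainPresence(ticks <-chan struct{}, stop <-chan struct{}, register func() error) (beats, failures int) {
	for {
		select {
		case <-stop:
			return beats, failures
		case <-ticks:
			if err := register(); err != nil {
				failures++ // a missed heartbeat brings the TTL closer to expiry
			} else {
				beats++
			}
		}
	}
}

func main() {
	ticks := make(chan struct{})
	stop := make(chan struct{})
	done := make(chan int)
	go func() {
		beats, _ := maintainPresence(ticks, stop, func() error { return nil })
		done <- beats
	}()
	for i := 0; i < 3; i++ {
		ticks <- struct{}{}
	}
	close(stop)
	fmt.Println(<-done) // 3 successful heartbeats
}
```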
  • 42. Results With the one lingering question about cell disappearance, on to n=2 [progress matrix: ×1 column complete, one open question]
  • 43. Results With 800 concurrent task reqs, found container cleanup garden bug
  • 44. Results With an 800-instance LRP, found API nodes scheduling requests serially
  • 45. Results • we added a story to the garden backlog • the serial request issue was an easy fix • then, with n=2 parallel test-lab nodes, we pushed 2x the apps – things worked correctly – system was performant as a whole – but individual components showed signs of scale issues
  • 47. Results • nsync fetches state from CC and etcd to make sure CC desired state is reflected in diego • converger fetches desired and actual state from etcd to make sure things are consistent • route-emitter fetches state from etcd to keep gorouter in sync • bulk loop times doubled from n=1
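The heart of each of these bulk loops is a desired-vs-actual diff, and walking ever-larger state is what made loop times grow. A minimal sketch of that diff (guids and counts invented for illustration):

```go
package main

import "fmt"

// diff compares desired instance counts against actual ones — the core of
// what a convergence bulk loop computes on every pass. As app count grows,
// fetching and walking these maps is what makes the loop take longer.
func diff(desired, actual map[string]int) (toStart, toStop map[string]int) {
	toStart = map[string]int{}
	toStop = map[string]int{}
	for guid, want := range desired {
		if have := actual[guid]; have < want {
			toStart[guid] = want - have // missing instances to schedule
		}
	}
	for guid, have := range actual {
		if want := desired[guid]; have > want {
			toStop[guid] = have - want // extra instances to stop
		}
	}
	return toStart, toStop
}

func main() {
	desired := map[string]int{"app-a": 4, "app-b": 2}
	actual := map[string]int{"app-a": 1, "app-c": 3}
	start, stop := diff(desired, actual)
	fmt.Println(start, stop) // map[app-a:3 app-b:2] map[app-c:3]
}
```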
  • 48. Results … and this happened again
  • 49. Results – the etcd and consul story –
  • 50. Results Fast-forward to today [progress matrix: ×1, ×2, and ×5 columns complete, each with one open question; ×10 partially complete]
  • 51. Bottom Line At the highest scale: • 4000 concurrent tasks ✓ • 4000-instance LRP ✓ • 10k “real app” instances @ 100 instances/cell: – etcd (ephemeral data store) ✓ – consul (service discovery) ? (… it’s a long story) – receptor (Diego API) ? (bulk JSON) – nsync (CC desired state sync) ? (because of receptor) – route-emitter (gorouter sync) ? (because of receptor) – garden (containerizer) ✓ – rep (garden actual state sync) ✓ – auctioneer (scheduler) ✓
  • 52. Next Steps • Security – mutual SSL between all components – encrypting data-at-rest • Versioning – handle breaking API changes gracefully – production hardening • Optimize data models – hand-in-hand with versioning – shrink payload for bulk reqs – investigate faster encodings; protobufs > JSON – initial experiments show 100x speedup
  • 53. Updates on .NET Support
  • 54. Updates on .NET Support • what’s currently supported? – ASP.NET MVC – nothing too exotic – most CF/Diego features, e.g. security groups – VisualStudio plugin, similar to the Eclipse CF plugin for Java • what are the limitations? – some newer Diego features, e.g. SSH – in α/β stage, dev-only
  • 55. Updates on .NET Support • what’s coming up? – make it easier to deploy Windows cell – more VisualStudio plugin features – hardening testing/CI • further down the line? – remote debugging – the “Spring experience”
  • 56. Updates on .NET Support • shout outs – CenturyLink – HP • feedback & questions? – Mark Kropf (PM): mkropf@pivotal.io – David Morhovich (Lead): dmorhovich@pivotal.io