The talk "Divide and stress: the journey to component load test" was given at ExpoQA 2017 in the Quality Assurance and Performance track.
It describes the most common pains that big companies suffer in their load testing processes, with the expensive cost of maintaining 1:1 replicas of the production environment for performance testing.
To reduce those costs, The Workshop designed a new methodology that aims to reduce operational costs and human errors, and that enables performance testing in Continuous Delivery pipelines; it can also be adopted in Continuous Deployment scenarios.
Component Based Load Testing (CBT) is a methodology designed at The Workshop (http://theworkshop.com) that rethinks what the future of performance testing should look like.
CBT introduces load test execution as part of CD pipelines, ensuring the quality of our products through defined exit criteria for the main metrics, and determining whether the changes in a new release are ready to progress to the next environments (stage, prod, etc.).
CBT aims to use a pool of resources efficiently, making them available for any load test execution requested by any of our products. CBT's main mission is to reduce the operating costs of maintaining 1:1 replicas of production environments by running the performance tests on a reduced pool of resources, using Docker containers for a short time in highly volatile environments.
1. Divide and stress: the journey to component load test
Juan Pedro Escalona
jescalona@theworkshop.com
@otioti
2. Agenda
• Introduction
• What is Component Based Testing?
• Requirements
• CBT versions:
• 1.0: Manual builds, fixed scenarios
• 2.0: Builds done as part of the pipeline,
multiple scenarios
• 3.0: Pool of resources to host multiple
scenarios
8. Introduction
• The load test environment is really expensive, and that cost is wasted whenever a wrong deploy leaves it unavailable
• It is really difficult to determine which component is impacting the overall performance of the environment
• Teams…
10. Introduction
• Load Test process
– Automated execution and report.
– Results are also evaluated automatically against exit criteria
• FAIL: results out of exit criteria + tolerance margin
• PASS: results
– within exit criteria or
– out of exit criteria but within tolerance margin
[Diagram: results below the exit criterion PASS; results beyond it but within the tolerance margin still PASS; results beyond the tolerance margin FAIL]
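The PASS/FAIL rule described above can be sketched in a few lines of Python. This is an illustrative model, not The Workshop's actual code; the names `ExitCriterion` and `evaluate` are assumptions.

```python
# Illustrative sketch of the automated exit-criteria evaluation:
# FAIL only when the result is beyond the criterion *plus* its tolerance margin.
from dataclasses import dataclass

@dataclass
class ExitCriterion:
    """Upper bound for a metric (e.g. p95 response time in ms)."""
    limit: float       # the exit criterion itself
    tolerance: float   # extra margin still accepted as a PASS

def evaluate(measured: float, criterion: ExitCriterion) -> str:
    """PASS if within the criterion or its tolerance margin, else FAIL."""
    if measured <= criterion.limit + criterion.tolerance:
        return "PASS"
    return "FAIL"

# Example: a 200 ms criterion with a 20 ms tolerance margin.
crit = ExitCriterion(limit=200.0, tolerance=20.0)
print(evaluate(190.0, crit))  # within the criterion        -> PASS
print(evaluate(210.0, crit))  # within the tolerance margin -> PASS
print(evaluate(230.0, crit))  # beyond both                 -> FAIL
```

The tolerance margin absorbs run-to-run noise, so a build is only failed when the regression is clearly larger than normal variance.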
13. What is Component Load Testing?
• It IS a test against an isolated component
• It IS NOT a functional test
• It IS NOT representative of production
load.
15. Requirements
• Why do you think it is necessary to adapt
the exit criteria on each new load test
execution?
16. Requirements
• Automated exit criteria population
– Discover new transactions
• Automated exit criteria update
– Adjust exit criteria even if it’s a PASS.
• Component testing is a development stage
– Scripts and configuration must live in the code
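The two requirements above (populate criteria for newly discovered transactions, and keep adjusting criteria even on a PASS) could look roughly like this. All names and the 20% headroom factor are assumptions for illustration, not the talk's implementation.

```python
# Hypothetical sketch of automated exit-criteria population and update.
def update_exit_criteria(criteria: dict, results: dict,
                         default_limit: float = 500.0,
                         headroom: float = 1.2) -> dict:
    """criteria / results map a transaction name -> limit / measured value (ms)."""
    updated = dict(criteria)
    for transaction, measured in results.items():
        if transaction not in updated:
            # Population: a newly discovered transaction gets a default limit.
            updated[transaction] = default_limit
        elif measured * headroom < updated[transaction]:
            # Update: the run PASSed well under the limit, so tighten the
            # criterion (keeping 20% headroom) even though it was a PASS.
            updated[transaction] = measured * headroom
    return updated

criteria = {"login": 300.0}
results = {"login": 100.0, "checkout": 250.0}   # "checkout" is newly discovered
print(update_exit_criteria(criteria, results))
# e.g. {'login': 120.0, 'checkout': 500.0}
```

Because the update runs on every execution, the criteria track the component's real performance instead of drifting out of date.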
17. Requirements
• Resources
– Hardware resources must be released if not in
use!
• Fast feedback
– Results ready in < 5 minutes.
• CD Ready
– Component testing is designed as a stage of a
CD pipeline
18. Requirements
• Repeatable test
– Only the component code changes in
between iterations
• Test environment
– Scalable, allowing several tests to run in parallel
20. CBT Refinement
• Based on our own experience of
performance testing
• Continuously improving in every new
iteration
• 1.0: First version, fixed scenarios, manual
builds
• 2.0: Multiple scenarios not in parallel, fixed
infrastructure
• 3.0: Multiple scenarios in parallel, pool of
resources (reusable by everybody)
21. CBT 1.0
• Process:
• Manual build of Docker images using the version under test
• Docker Compose creates the scenario and their dependencies.
It starts and initializes the scenario.
• Mocks: some scenarios would require mocks
• Infrastructure:
• Docker image generation (Bamboo)
• Docker image storage (Nexus)
• Docker host to run the scenario
• ULTL runs and reports the load test execution
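The CBT 1.0 process above can be summarized as the ordered commands the pipeline would issue. The registry host, image names, and the ULTL command line below are hypothetical placeholders, not The Workshop's actual tooling.

```python
# Hypothetical sketch of a CBT 1.0 run; every name here is illustrative.
def cbt_v1_commands(component: str, version: str) -> list:
    """Return the ordered steps of a CBT 1.0 run for one component."""
    image = f"nexus.example.com/{component}:{version}"
    return [
        f"docker build -t {image} .",        # manual build of the version under test
        f"docker push {image}",              # store the image in the registry (Nexus)
        "docker-compose up -d",              # create, start and initialize the scenario (and mocks)
        f"ultl run --scenario {component}",  # ULTL runs and reports the load test (assumed CLI)
    ]

for cmd in cbt_v1_commands("wallet-service", "1.4.2"):
    print(cmd)
```

In 1.0 the first step is manual, which is precisely the friction the later versions remove.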
24. CBT 1.0 Limitations
• The software under test has fixed versions
• Maintenance of versions is manual
• The scenario runs every time on the same dedicated Docker host, usually pre-assigned by teams.
• As a consequence, the load generators need:
• DNS records: must resolve the assigned Docker host
• Access to the exposed services (HTTP, HTTPS, JMS queues, SMTP, etc.)
26. CBT 1.0 Facts
• Requires a lot of manual interventions:
managing DNS records, access, software under
test versions, defining scenarios, etc.
• Waste of infrastructure resources: machines that run Docker scenarios sit idle whenever no load test is ongoing.
27. CBT 2.0
• Process:
• Introduction of fabric8-maven-plugin
• Every software release generates a new Docker
image
• Scenarios are defined using Kubernetes; Docker Compose is still accepted.
• Multi-scenario support. fabric8-maven-plugin lets us
define multiple scenarios in product source.
• Infrastructure:
• Same infrastructure requirements as CBT 1.0
28. CBT 2.0 Flow Management
• Very similar to 1.0
• The scenario is deployed once we have a new release, as simple as running a Maven command:
• mvn fabric8:deploy
• Each scenario defines its own
configuration: services, load balancers,
ports, etc.
33. CBT 2.0 Limitations
• We still inherited limitations from CBT 1.0:
• Fixed Docker host to run the scenarios
• Services are strictly attached to the Docker
host which runs the scenario: DNS, ACLs, etc
• Need to grant access and expose services
under test
34. CBT 2.0 Facts
• Better usage of resources: We are able to run
many scenarios using same resources
• Better products versions management:
• For the product under test, the version is already
known
• For every dependency in each scenario, we look up the versions already deployed in production.
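The version management described above reduces to a simple rule: the product under test gets the release being tested, and every other component in the scenario gets its current production version. A minimal sketch, with hypothetical component names:

```python
# Illustrative version resolution for a CBT 2.0 scenario (names are examples).
def resolve_versions(scenario: list, product: str, release: str,
                     production_versions: dict) -> dict:
    """Map each component in the scenario to the version it should run."""
    return {
        component: release if component == product
        else production_versions[component]   # dependencies mirror production
        for component in scenario
    }

prod = {"wallet": "1.4.1", "auth": "2.0.3", "db-migrator": "0.9.0"}
scenario = ["wallet", "auth", "db-migrator"]
print(resolve_versions(scenario, "wallet", "1.4.2", prod))
# {'wallet': '1.4.2', 'auth': '2.0.3', 'db-migrator': '0.9.0'}
```

Pinning dependencies to production versions keeps the component test representative of what the release will actually run against.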
35. CBT 3.0
• Infrastructure
• Scenarios are represented as a pair of Docker hosts, known as a scenario unit:
• One host runs the scenario
• One host generates the load
• Cluster of scenario units
• Anyone can use them if they are not in use
• A new service is in charge of tracking and managing
scenario requests: CBT Handler
37. CBT 3.0
• Why is the application's performance not impacted by the resource consumption of the load generation?
42. CBT 3.0 Actors
• Bamboo
– Generates product releases
– Creates and publishes Docker images as part of the pipeline.
– Orchestrates with other components to
provision/destroy scenarios and request the
load test.
43. CBT 3.0 Actors
• CBT Handler
– Bamboo requests a new scenario for one specific product from the CBT Handler
– Once the scenario is ready, the CBT Handler performs a callback on Bamboo.
– Once the load test is completed, the CBT Handler receives a request from Bamboo to release the resources of the exercised scenario, freeing the pool resources for other scenarios waiting in the queue.
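The request/release cycle above can be modeled as a small pool with a waiting queue. This is an illustrative model of the behavior, not the real CBT Handler; class and unit names are assumptions.

```python
# Hypothetical model of the CBT Handler's scenario-unit pool.
from collections import deque

class ScenarioPool:
    def __init__(self, units):
        self.free = deque(units)   # idle scenario units
        self.waiting = deque()     # products queued for a unit
        self.assigned = {}         # product -> unit currently exercising it

    def request(self, product: str):
        """Assign a free scenario unit, or queue the request if none is free."""
        if self.free:
            unit = self.free.popleft()
            self.assigned[product] = unit
            return unit            # here the handler would call back Bamboo
        self.waiting.append(product)
        return None

    def release(self, product: str):
        """Free the finished test's unit and serve the next queued request."""
        unit = self.assigned.pop(product)
        self.free.append(unit)
        if self.waiting:
            self.request(self.waiting.popleft())

pool = ScenarioPool(["unit-1"])
print(pool.request("wallet"))  # -> unit-1
print(pool.request("lobby"))   # pool busy -> None (queued)
pool.release("wallet")         # unit-1 is handed to "lobby"
print(pool.assigned)           # {'lobby': 'unit-1'}
```

The callbacks to and from Bamboo would hang off `request` and `release`; the queue is what lets one shared pool serve every product's pipeline.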
47. CBT 3.0 Actors
• ULTL
– Automates the load test execution.
– Manages multiple load tests of different
scenarios in parallel.
– Callback to Bamboo, posting the results of the
CBT executions.
49. CBT 3.0 Facts
• Efficient use of pool resources
• Reduced complexity of infrastructure management:
• Configuration items (DNS, resources) are automatically generated by Kubernetes or Docker Compose
• Reduced complexity of load test script configuration:
• No need to manage per-environment URL overrides.
• Open for any load performance testing software
– If it can be put inside Docker, you can use it.