Performance testing of a transit application using cloud-based load testing tools
1. Mohit Verma
Performance Engineering Evangelist
Performance Engineering and Testing Using Cloud-Based Tools: A Case Study
2. Agenda
Testing from the Cloud, Why and When?
Cloud vs On Premise: Benefits & Challenges
Cloud Tool Vendors: A Quick Survey
Project Description
Tool Selection Criteria
Performance Characterization of the Application
Test Scenarios/Strategy/APM
Test Results/Findings
Tool Caveats and Learnings
3. Testing from the Cloud: Why and When?
The application under test is deployed in the cloud
You do not own a commercial tool or have no load-test infrastructure on premise
SaaS software companies and startups
Global or national applications, or corporations with users across the nation or globe, that need to mimic user traffic from multiple locations
Only a single complete test cycle is needed; multiple cycles could be expensive
Users across the spectrum: mobile, desktop, laptop, IoT devices
4. Cloud: Benefits & Challenges
Benefits | Challenges
Most vendors let you scale to millions of users at a leaner cost | Scripts are usually simpler and may not be able to mimic all functionality
Ability to emulate multiple devices | Increased security risk
Ability to emulate different network conditions (network virtualization) | Testing scope may be limited depending on the tool and available resources
Ability to emulate multiple locations | Typically limited to API and web-application (HTTP) testing
Ability to ramp test infrastructure up and down | Data masking may be needed, which comes with its own risks and performance considerations
Less red tape | Compliance may still need to be met
Cost-effective for single-cycle testing | Tools are limited in functionality and reporting
Environments can be built on demand and scaled as needed | Real-time monitoring can vary depending on network location and may not be accurate
| Dedicated IPs are often needed
5. On Premise: Benefits & Challenges
Benefits | Challenges
Most vendors give you the ability to scale to millions of users, but the cost can be exorbitant | Typically hard to do unless you have a well-established performance testing practice
Cost-effective for multiple-cycle testing | Expensive for one-time testing
Scripts can be complex | Increased red tape to execute tests due to shared environments
Security risk is limited | Tools are expensive, and so is infrastructure
Load infrastructure can be built and maintained to support continuous testing | Expensive to mimic multiple locations and network conditions
Testing is typically protocol-agnostic, and commercial and open-source tools are mature | Expensive and difficult to test multiple devices and configurations
Controlled testing is possible (test lab): benchmarking |
Perfect for LAN applications |
6. Cloud Tool Vendors
Soasta – acquired by Akamai
OctoPerf
Blazemeter – acquired by CA Technologies
StormRunner Load
Flood.io
Loader.io
Neoload
Apica
7. Project Description
New deployment of transit system cards. Travel cards can be scanned and recharged at kiosks, booths, online, etc.
Client wants to test up to 12 use cases
West Coast location
Wants to use open-source tools (JMeter) to keep costs down
Plan for a one-time deployment, including a pilot, over several months
Find the breaking point of the current infrastructure using up to 2,000 concurrent users
Expected usage is only about 200 concurrent users, but available data is limited; the transit authority serves several hundred thousand riders every day today
Limited number of tests and hours available
Third-party domains included in the test
Mobile apps are out of scope
8. Tool Selection Criteria
Cost driven
Limited vendors
Client wanted to use JMeter and any available cloud vendor supporting it
Blazemeter: selected due to familiarity with it, support for JMeter 3.0, and easy New Relic integration for application performance monitoring
OctoPerf: also considered
StormRunner Load: accepts JMeter scripts but more expensive
9. Performance Characterization of the Application
Use Case | Start-Up Weighting | Steady-State Weighting
1 Create an account | 25.0% | 2.5%
2 Add a card to the account (& review card history) | 25.0% | 5.0%
3 Add stored value w/o stored funding source | 10.0% | 30.0%
4 Add stored value using stored funding source | 15.0% | 30.0%
5 Browse website for informational purposes | 10.0% | 5.0%
6 Anonymous balance check | 2.5% | 10.0%
7 Obtain card info for N cards (N=20) and download history in CSV and PDF (UC 11 and 12) | 2.5% | 10.0%
8 Change customer name | 2.5% | 2.5%
9 Block & then unblock transit account | 2.5% | 2.5%
10 Send support ticket (or help request) | 5.0% | 2.5%
TOTAL | 100.0% | 100.0%
14. JMeter scripting challenges/recommendations
JMeter Web Test Plan Template used to record scripts
Multiple cookies had to be correlated
Regular Expression Extractor used for correlation (see the fragment after this list)
Header variables needed to be correlated; the same variable was correlated multiple times
Images and JavaScript were not included in the HTTP requests
Response Assertions were added to validate page content
Timers included (random and constant)
Generate parent sample was set up to reduce the number of labels in the report
To limit the number of thread groups, some of the scripts were combined
Some third-party calls could not be scripted with JMeter: use case 3 was skipped, and use case 4's weight was increased in the scenario to compensate for the scripting issue; this was signed off by the client
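For illustration, a minimal sketch of such a Regular Expression Extractor as it appears in the JMX, assuming the session token travels in a Set-Cookie response header; the variable name sessionToken, the cookie name, and the regex are hypothetical placeholders, not the project's actual values. Attach it as a child of the sampler whose response carries the token.

```xml
<!-- Sketch only: attach as a child of the sampler that returns the token. -->
<!-- refname "sessionToken" and the SESSION cookie pattern are hypothetical. -->
<RegexExtractor guiclass="RegexExtractorGui" testclass="RegexExtractor"
                testname="Extract session token" enabled="true">
  <stringProp name="RegexExtractor.useHeaders">true</stringProp>         <!-- search response headers -->
  <stringProp name="RegexExtractor.refname">sessionToken</stringProp>    <!-- reuse as ${sessionToken} -->
  <stringProp name="RegexExtractor.regex">Set-Cookie: SESSION=(.+?);</stringProp>
  <stringProp name="RegexExtractor.template">$1$</stringProp>
  <stringProp name="RegexExtractor.default">TOKEN_NOT_FOUND</stringProp> <!-- makes failed matches obvious -->
  <stringProp name="RegexExtractor.match_number">1</stringProp>
</RegexExtractor>
```

Later samplers can then send the value back, e.g. in a Cookie or custom header via ${sessionToken}, which is how the same variable ends up being correlated multiple times.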
57. Blazemeter Tips
Convert HAR, XML, Selenium, and JSON to JMX: http://converter.blazemeter.com/
Blazemeter APM integrations: https://guide.blazemeter.com/hc/en-us/sections/202472329-APM-integrations
The Blazemeter Recorder plugin can be used to create tests: https://guide.blazemeter.com/hc/en-us/articles/207420545-BlazeMeter-Proxy-Recorder-Mobile-and-web-
Reduce the number of labels so that they remain viewable in graphs: Blazemeter only shows 100 labels. Use a parent sample (Transaction Controller) in JMeter to consolidate them (see the fragment after this list)
Firewall rules may need to be modified; sandbox tests are not available for dedicated-IP tests
CI integration is available with Blazemeter/Taurus
Use JMeter scheduling or Blazemeter's (you cannot mix and match)
If using JMeter thread groups, keep in mind that each group's thread count must be the total number of users divided by the number of load servers, so you do not exceed the intended total
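As a sketch of the parent-sample tip above, here is what a Transaction Controller with "Generate parent sample" enabled looks like in the JMX; the transaction name is a hypothetical placeholder. All child samplers then report under one label, keeping the report within Blazemeter's 100-label limit.

```xml
<!-- Sketch: wrap one use case's samplers so they report as a single label. -->
<TransactionController guiclass="TransactionControllerGui" testclass="TransactionController"
                       testname="UC1 Create Account" enabled="true">
  <boolProp name="TransactionController.parent">true</boolProp>         <!-- "Generate parent sample" -->
  <boolProp name="TransactionController.includeTimers">false</boolProp> <!-- keep timer delays out of the label's timing -->
</TransactionController>
<hashTree>
  <!-- the use case's HTTP samplers go here -->
</hashTree>
```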
Editor's Notes
This presentation will include a case study of a transit system automation application that was tested using cloud performance testing and APM tools. We will walk through the benefits and challenges of testing from the cloud, along with the caveats and learnings from the project, which can be applied to future cloud-based testing scenarios. We will walk through the complete performance engineering lifecycle.
SaaS companies tend to prefer cloud-based tools since they have a national or international presence across multiple devices. Corporations typically have many internal applications that are difficult to test with cloud-based tools due to security constraints.
Discussing the various pros and cons of testing from the Cloud
We also provide benefits and challenges of testing on-premise
List of cloud performance testing vendors.
Scope for the application under test (AUT): 12 use cases were tested and needed to be scripted. The application was written in Node and Angular. No mobile users were tested for this project at this time. The application would be deployed in July 2017, with beta testing from January to June 2017.
A number of tools were looked into, but Blazemeter was chosen for the testing due to the existing relationship with it. The scripts were to be developed in JMeter.
12 use cases with the indicated weightings were scripted. Two sets of weightings were provided: one for the expected initial volumes and one for steady state. For scripting purposes, the use cases were combined into 7 thread groups for ease of scripting and management.
10 use cases were combined into 7 thread groups in JMeter. Each thread group had to be configured to run for 1 hour (3,600 seconds). The total number of users for this thread group was 25% of 2,000 = 500; because 4 load generators were needed, we had to limit the thread group to 125 users so as not to exceed 500 across the 4 load generators.
10 use cases were combined into 7 thread groups in JMeter. Each thread group had to be configured to run for 1 hour (3,600 seconds). The total number of users for this thread group was 60% of 2,000 = 1,200; because 4 load generators were needed, we had to limit the thread group to 300 users so as not to exceed 1,200 across the 4 load generators (a sketch follows below). Since we could not script UC3, we used UC4, which also made the third-party call we were interested in: Cybersource.com for card payment processing.
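For illustration, a minimal sketch of how such a thread group looks in the JMX, assuming a 10-minute ramp-up (the ramp-up value and the thread group name are illustrative, not the project's actual values):

```xml
<!-- Sketch: 300 threads on each of 4 BlazeMeter engines = 1,200 total users. -->
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup"
             testname="UC3/UC4 Add Stored Value" enabled="true">
  <stringProp name="ThreadGroup.num_threads">300</stringProp> <!-- 1,200 total / 4 load generators -->
  <stringProp name="ThreadGroup.ramp_time">600</stringProp>   <!-- 10-minute ramp-up (illustrative) -->
  <boolProp name="ThreadGroup.scheduler">true</boolProp>
  <stringProp name="ThreadGroup.duration">3600</stringProp>   <!-- hold for 1 hour -->
  <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController"
               guiclass="LoopControlPanel" testclass="LoopController">
    <boolProp name="LoopController.continue_forever">false</boolProp>
    <intProp name="LoopController.loops">-1</intProp>         <!-- loop until the scheduler stops the test -->
  </elementProp>
</ThreadGroup>
```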
Third-party integration was not possible due to insufficient information in Cybersource's documentation and the inability of JMeter to capture the authentication token. Interestingly enough, we were able to script this using LoadRunner TruClient scripts.
The client wanted to test Google Maps localization calls, which are limited to 25,000 calls/day for free. You can create your own account and get an API key to use for testing, as sketched below.
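For illustration, a hedged sketch of such a call in the JMX: an HTTP sampler against the Google Maps geocoding endpoint, with the key read from a JMeter property so each tester can plug in their own. The property name gmaps.apikey, the sampler name, and the address value are hypothetical.

```xml
<!-- Sketch: Google Maps geocoding call using your own API key.       -->
<!-- Pass the key on the command line: jmeter -Jgmaps.apikey=YOUR_KEY -->
<HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                  testname="Google Maps geocode" enabled="true">
  <stringProp name="HTTPSampler.domain">maps.googleapis.com</stringProp>
  <stringProp name="HTTPSampler.protocol">https</stringProp>
  <stringProp name="HTTPSampler.path">/maps/api/geocode/json</stringProp>
  <stringProp name="HTTPSampler.method">GET</stringProp>
  <elementProp name="HTTPsampler.Arguments" elementType="Arguments"
               guiclass="HTTPArgumentsPanel" testclass="Arguments">
    <collectionProp name="Arguments.arguments">
      <elementProp name="address" elementType="HTTPArgument">
        <boolProp name="HTTPArgument.always_encode">true</boolProp>
        <stringProp name="Argument.name">address</stringProp>
        <stringProp name="Argument.value">1 Transit Plaza</stringProp> <!-- hypothetical address -->
        <stringProp name="Argument.metadata">=</stringProp>
      </elementProp>
      <elementProp name="key" elementType="HTTPArgument">
        <boolProp name="HTTPArgument.always_encode">true</boolProp>
        <stringProp name="Argument.name">key</stringProp>
        <stringProp name="Argument.value">${__P(gmaps.apikey)}</stringProp> <!-- your own key -->
        <stringProp name="Argument.metadata">=</stringProp>
      </elementProp>
    </collectionProp>
  </elementProp>
</HTTPSamplerProxy>
```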
UC3: we were able to script it in LoadRunner, but it was not possible in JMeter; ample diligent time was spent trying to resolve the correlation, since we did not have access to the code.
Blazemeter requires a (free) login. A landing page is shown where you can have multiple tests and results stored; results are typically stored for 1 year.
Upload your JMeter JMX file along with any data files used by the test plan. Data files are assumed to exist in your JMeter directory, and the thread groups should be set up with that assumption.
Many different kinds of tests are available. Blazemeter also has its own recorder and HAR-to-JMX converter, as well as a Chrome plugin for recording scripts. Taurus is the tool of choice for DevOps integration.
Typical Test Setup screen
All JMeter-dependent files and input files are uploaded and viewable from the main test scenario screen.
Tests can run from AWS, Google Compute Engine, and Microsoft Azure locations across the world to provide realistic testing.
You can add JMeter properties, e.g., to save cookies.
Test startup typically takes some time, depending on how many load generators need to be provisioned.
Main test execution screen with the available graphs and various key metrics (TPS, number of users, response time, error rate). In this test the error rate was high, as the application broke down at 1,100 users.
Reports can be seen in real time. Graphs are best viewed after the test is complete.
You can drill down by time, and the report will refresh to give the results. Request stats can also be downloaded as a CSV for any manual massaging.
The report columns and labels can be customized. The report refreshes as the drilldown is changed or narrowed.
Sample reports are available and can be customized.
Load generator monitoring is available in a separate graph. The typical criteria are CPU and memory below 80%, so that the performance test is accurate.
Error reports can be seen by error group and can be drilled into.
All the log files can be downloaded for review, including the logs on the load generators. A .jtl file is created that can be viewed on your local machine, massaged to your needs, etc.
Test configurations and JMX files are saved for future needs, comparisons, etc.
You can drill down to the results from before the error rate went up, to get results up to the breaking point.
You can drill down to the error codes
Usage metrics for a Center of Excellence (process optimization) are available, which can help management see how the tool is being used and support ROI/management buy-in.
Setting this incorrectly causes Blazemeter to quadruple the expected load, since each of the 4 load generators runs the full thread count.
Results can be compared and presented to stakeholders, showing the differences from baseline after changes.
See the webinar below for more details: http://info.blazemeter.com/loadtesting_2016_webinar?utm_source=BM&utm_medium=kb&utm_campaign=new-relic-apm – Load testing like a pro
Integrates with most tools.
To integrate New Relic APM with your test, you need to install New Relic on the machines you want to instrument and provide the API key to Blazemeter, so that Blazemeter can capture the metrics and show them in the timeline report.
The login has to be set up by the entity you are testing: the company where the application is hosted.
Error rate graph can be correlated in Blazemeter
Errors can be viewed in New Relic. The integration with Blazemeter (seeing the metrics in Blazemeter) was not working properly at the time of the test.
URIs can be viewed by various KPIs: response times, Apdex scores, highest throughput, and most time-consuming.
Oracle SQL calls can be viewed by various KPIs as well
Multiple graphs in New Relic can be viewed at the same time
All the times are in milliseconds. The last column is more of a stress test and hence has higher response times.