4. The Challenges
• Shorter development cycles require more tests in
less time
• ‘Working’ code does not always perform well
• Developers need feedback
8. The Importance of Continuous Performance Assessments
• Avoid discovering performance problems late
• Make changes earlier, when they are cheaper
[Chart: cost to fix a bug, rising from ×1 to ×1000 across the Conception, Design, Development, Testing, and Release stages]
10. The Big Picture
SPRINT 0 (INITIAL PHASE)
• Understand performance requirements
• Set up environment & framework
• Create performance strategy
• Knowledge transfer between all the stakeholders
SPRINT N (DEVELOPMENT PHASE)
• Bottleneck identification
• Architecture assessment
• Add performance engineering items to product backlog
• Major release assessment
• Trend analysis and benchmarking
SPRINT N+1
• Prioritization
• Implementation
• Re-assessment
FINAL SPRINT (DEPLOYMENT PHASE)
• Set up performance monitoring system
• Create backlog items for identified issues
• Respond to performance alerts
12. Demo
• Record a JMX script through BlazeMeter
• Configure JMeter and Jenkins for the CPA
• Configure the BlazeMeter plugin for Jenkins
• Compare the performance of builds
Editor's Notes
It is very important to do performance assessment in any project. A healthcare website had to shut down soon after its launch, and one of the reasons for the failure of that project was not having a proper performance assessment in place.
They planned performance testing, but it was pushed from one sprint to the next for various reasons.
The ultimate result was a damaged image and the loss of millions of dollars. That project team faced the challenge of doing performance assessment, and there are other challenges an Agile team might face.
- In Agile development, release cycles are very short. QA engineers have to do many different types of testing in a sprint: feature testing, acceptance testing, and sometimes system testing too. So how can we do performance testing in this kind of challenging environment?
The focus of Agile teams is on delivering working code, but is code really ‘working’ if it fails when the application is under load? Should user stories and tasks really be marked as “done” if the code associated with them causes the application to crash with 100 users?
- Developers need to know more than the fact that their code is causing performance issues: they need to know when their code started causing problems and which story they were working on when the issue started.
The solution is to do performance testing in a continuous way.
In a CI environment, code gets checked into source control, runs through a Continuous Integration build, passes all of the automated tests and unit tests (if they are in place), and gets deployed to the production server in a matter of minutes.
What if we could do performance testing the same way we run automated tests: run the performance script with every build and get results?
This can be configured according to the performance requirements. For example, you may have a function that needs a response time somewhere between 20 ms and 25 ms for ‘X’ users. We can create a performance script and place it on the CI server. Whenever a developer commits a change and a new build is produced, the build runs through the performance script and produces a result. Depending on that result, the build ends up in either a ‘Pass’ or a ‘Fail’ state.
If the build is in the ‘Fail’ state, the developer has to fix it.
The performance results can also be plotted as a graph, so the team can analyze the trend.
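A minimal sketch of such a pass/fail gate in Python, assuming JMeter writes its results to a CSV-format JTL file (the file name, the 25 ms limit, and the CI invocation are illustrative, not from the demo):

```python
import csv
import sys


def average_elapsed(jtl_path):
    """Average the 'elapsed' column (response time in ms) of a JMeter CSV JTL file."""
    with open(jtl_path, newline="") as f:
        rows = list(csv.DictReader(f))
    return sum(int(r["elapsed"]) for r in rows) / len(rows)


def gate(jtl_path, max_avg_ms=25.0):
    """Return 'Pass' or 'Fail' so the CI job can set the build state."""
    return "Pass" if average_elapsed(jtl_path) <= max_avg_ms else "Fail"


if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. a Jenkins post-build step: python gate.py results.jtl
    verdict = gate(sys.argv[1])
    print(verdict)
    sys.exit(0 if verdict == "Pass" else 1)  # non-zero exit marks the build failed
```

The non-zero exit code is what turns a slow run into a red build in the CI server.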
Keep two points in mind:
1) Always use a static user load. We are not doing load testing; we run the builds continuously over a period of time to identify and analyze performance regression.
2) The performance script should be good: it should reflect the service level agreement.
How strictly these points are applied will depend on the importance of the business functionality.
This is how we do continuous performance assessment if you have Continuous Integration in place.
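One way to keep the user load static is to pass a fixed thread count into the same JMeter test plan on every build, and record each build's average for the trend graph. A sketch in Python; the plan file name, the `users` property, and the history file are assumptions, not part of the demo:

```python
import json
from pathlib import Path


def jmeter_cmd(plan="perf-plan.jmx", users=50, out="results.jtl"):
    """Build the non-GUI JMeter command line with a fixed user load.

    The test plan is expected to read ${__P(users)} for its thread count,
    so every build runs under the same static load.
    """
    return ["jmeter", "-n", "-t", plan, "-l", out, f"-Jusers={users}"]


def record(history_path, build_number, avg_ms):
    """Append one build's average response time, so the team can graph the trend."""
    path = Path(history_path)
    history = json.loads(path.read_text()) if path.exists() else []
    history.append({"build": build_number, "avg_ms": avg_ms})
    path.write_text(json.dumps(history, indent=2))
    return history
```

Because the load never changes, any movement in the recorded averages reflects the code, not the test.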
- For today’s demo, we are going to use a server with Jenkins installed, and we are going to run our performance scripts on top of it.
Here you can see that build #5 has a response time of 1400 ms, while build #9 is down to 800 ms. What we can learn from this downward trend is that the developers are doing something that helps reduce the response time. But in build #10 the response time increases again, to 1000 ms: something performance-related has happened again.
We are not going to report an issue at this point, though. The sudden increase could be caused by something like a network issue. But if the next build (which will run the next day) shows the same response time of around 1000 ms, then you will see an upward trend in response time.
That would be a good time to inform the developers, so they know that something happened after build #9. They can easily check the changesets and builds and isolate the performance issue.
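The "wait for a second slow build" rule above can be sketched as a small check over the recorded build averages; the 20% threshold and the comparison against the fastest recent build are illustrative choices, not from the demo:

```python
def sustained_regression(averages, baseline_window=3, threshold=1.2):
    """Flag a regression only if the last TWO builds are both slow.

    'Slow' means more than `threshold` times the fastest of the
    `baseline_window` builds before them, which filters out one-off
    spikes such as a transient network issue.
    """
    if len(averages) < baseline_window + 2:
        return False  # not enough history to judge
    *history, prev, last = averages
    baseline = min(history[-baseline_window:])
    return prev > baseline * threshold and last > baseline * threshold


# Builds #5..#10 from the demo, then a hypothetical build #11:
runs = [1400, 1300, 1100, 900, 800, 1000]
print(sustained_regression(runs))           # False: a single spike in build #10
print(sustained_regression(runs + [1000]))  # True: the spike persisted in build #11
```

This is the point at which the team would notify the developers rather than on the first spike.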
- One of the benefits of continuous performance assessments is that it avoids discovering performance issues late. If you identify your performance issues as early as possible, they are easier to fix, and this can save your company thousands or even millions.
This is how everything connects in the continuous delivery model.
Everything comes with a cost, so you need to educate your clients on the importance of this: knowledge transfer, then understanding the performance requirements and creating the performance strategy.
Then you can move on to architecture assessments, analyze the trends, do release assessments, and start adding performance issues to the backlog (for example, if you use Jira, you can add a tag).
Then prioritize the items and fix them: continuously check items and continually fix issues.
Finally, you can set up an application performance monitoring system such as New Relic.
This is not the only way to implement continuous performance assessments; tools like Telerik and Dynatrace also offer ways to implement them.