Glimpse of Contents
› Why do we need to assess performance?
› Performance Testing in Waterfall Model
› Performance Testing in Agile Model
› Evolution to Performance Engineering
› Tools and Techniques
› A peek into Big Data World
› Best Practices
› Technical Agility perspective
Why do we need to assess performance?
• Primarily, to determine the speed and responsiveness of software under a defined workload, using approaches such as load, stress, spike and soak testing (see the sketch after this list).
• Assessment is needed on both behavioural and system-level metrics, and at both the atomic (single-component) and cluster level.
• Feature readiness is judged by simulating the customer environment and validating against SLAs.
• System stability is judged by simulating data seasonality and cluster characteristics.
• One size doesn’t fit all, so the assessment needs to be planned and customised.
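To make the approaches concrete, here is a minimal load-test sketch using Locust; the tool choice, host and endpoints are assumptions for illustration, not something the deck prescribes. The same script can be re-profiled for stress (more users), spike (sudden ramp-up) or soak (long-duration) runs.

```python
# A minimal Locust load test; endpoints and pacing are illustrative.
# Run with: locust -f loadtest.py --host http://localhost:8080
from locust import HttpUser, task, between

class CatalogUser(HttpUser):
    # Think time between requests; shrink it for stress/spike profiles,
    # extend the overall run time for a soak profile.
    wait_time = between(1, 3)

    @task
    def browse_catalog(self):
        self.client.get("/catalog")         # hypothetical endpoint

    @task
    def view_item(self):
        self.client.get("/catalog/item/1")  # hypothetical endpoint
```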
Performance Testing in the Waterfall Model
What’s not an issue
• It is easier to plan resources, as the timelines for the release and the test phase are clearly defined; so, if all goes well, resource estimation gets a big thumbs-up.
• Performance testing is a stage close to acceptance testing, and if the criteria are met, the system is ready to go into production.
What are the issues
• It might demand architectural changes towards the end of development, since performance testing is carried out alongside all the other testing activities.
• Effort estimation takes a hit, as design changes might require a complete regression cycle accompanied by new test scenarios.
• Testing scope is limited to the scenarios documented during test planning; defects get filed based on the destination, not the journey.
Performance Testing in the Agile Model
• It’s part of the journey right from the outset: stakeholders engage early and provide constant feedback.
• It is an iterative process across sprints, where components need to be tested both individually and in an integrated manner.
• Easier said than done!
Evolution to Performance Engineering
• A proactive, shift-left approach that embeds systematic techniques, practices and activities in every sprint to meet performance needs.
• Focus on the design principles and architecture
• Detecting bottlenecks early
• The person or team involved needs to be adept at application and infrastructure diagnosis and optimisation:
• A decent understanding of threading and concurrency in code
• A decent understanding of partitioning and indexing in databases, along with query optimisation (see the sketch after this list).
• A decent understanding of network protocols.
• In other words, it requires a persona combining the skills of one or more of: a performance analyst, a performance tester, a developer, a database administrator, a domain expert and a network engineer.
• A culture that enables teams to deliver fast, efficient, and responsive systems architected for large-
scale deployments
• The responsibility for performance starts with software designers and system architects, extends to the
developers who do the coding, and ends with QA.
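As a taste of the database skill set, here is a minimal sketch of checking whether a query is backed by an index, using sqlite3 from the Python standard library; the table, data and query are illustrative assumptions.

```python
# Observe the effect of an index on a query plan with EXPLAIN QUERY PLAN.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(100_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Without an index, the planner falls back to a full table scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

# With an index, the same query becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())
```

The same habit of reading the query plan before and after a schema change carries over to production datastores.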
Dashboard enablement
• Data Collection & Persistence: cluster nodes run Metricbeat, Prometheusbeat and other multi-purpose Beat utilities (chosen per requirement) alongside custom scripts as agents to continuously extract stats, with automated index creation per project and feature (see the sketch after this list).
• Custom filters and aggregations over the collected stats.
• Visualization & Dashboarding on top of the filtered data.
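A hedged sketch of the automated index creation step, assuming the Elastic stack (Beats shipping into Elasticsearch, with dashboards in Kibana); the index name, mapping and host URL are illustrative assumptions.

```python
# Create a per-project metrics index if it does not exist yet
# (Elasticsearch 8.x Python client; all names are hypothetical).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

index_name = "perf-myproject-system-kpis"  # hypothetical per-project index
if not es.indices.exists(index=index_name):
    es.indices.create(
        index=index_name,
        mappings={
            "properties": {
                "@timestamp": {"type": "date"},
                "host": {"type": "keyword"},
                "cpu_pct": {"type": "float"},
                "mem_pct": {"type": "float"},
            }
        },
    )
```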
A peek into big data dashboards
[Dashboard: system-level KPIs]
[Dashboard: container-level KPIs and frequent crashes, which resulted in a re-design]
[Dashboard: deviation in behavioural KPIs of the underlying datastore, e.g. Kafka, which resulted in a re-design]
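One concrete behavioural KPI of this kind is consumer lag; here is a hedged sketch of sampling it with kafka-python, where the broker address, topic and consumer group are illustrative assumptions.

```python
# Sample per-partition consumer lag for one consumer group.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id="perf-monitor",   # hypothetical consumer group
    enable_auto_commit=False,
)

topic = "events"               # hypothetical topic under test
partitions = [TopicPartition(topic, p) for p in consumer.partitions_for_topic(topic)]
end_offsets = consumer.end_offsets(partitions)

for tp in partitions:
    committed = consumer.committed(tp) or 0   # None if nothing committed yet
    print(f"partition {tp.partition}: lag={end_offsets[tp] - committed}")
```

A sustained rise in lag across partitions is exactly the kind of deviation such a dashboard would surface.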
TestOps Focus
• It essentially means having the necessary ecosystem, in terms of both practices and frameworks, to support a quality deliverable.
• A strong focus on including the necessary third-party tools in the arsenal, e.g. traffic generators, monitoring tools, profilers and alerting tools, among others.
• High availability of Perf Test environments.
• Deployment of containerized test agents and on-demand execution of selective test suites (see the sketch after this list).
• Strong integration with DevOps to leverage and build upon an efficient CI/CD workflow.
• Inclusion of unit testing, code-quality tests and the necessary commit practices in the automation framework’s development stages as well.
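A hedged sketch of launching a containerized test agent on demand, using the Docker SDK for Python; the image name, command and labels are illustrative assumptions, as the deck does not prescribe a tool.

```python
# Spin up a containerized test agent, run one selected suite, and clean up.
import docker

client = docker.from_env()

container = client.containers.run(
    image="myorg/perf-agent:latest",                # hypothetical agent image
    command=["run-suite", "--suite", "smoke-latency"],
    detach=True,
    labels={"purpose": "testops", "suite": "smoke-latency"},
)
result = container.wait()                  # block until the suite finishes
print(container.logs().decode())
container.remove()
print("exit code:", result["StatusCode"])  # non-zero fails the pipeline stage
```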
A skilled craftsman also needs the right practices
Agile Best Practices
• Plan user stories around performance tasks; they need time and effort.
• Prioritize fixing performance defects, and any related functional defects that block performance tests.
• Bottlenecks cannot be guessed; back your performance tests with the correct statistics (see the sketch after this list).
• Carefully assess the production environment and SLAs.
• Know when performance is good enough to gracefully close the iterations.
• Invest in automating test setup, data population, monitoring and results analysis, along with tool integration.
• Design smart, keeping modularization in mind.
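To illustrate why the statistics matter, here is a hedged sketch on synthetic data: with a long-tailed latency distribution, the mean looks healthy while the percentiles expose the bottleneck users actually feel.

```python
# Mean vs. tail percentiles on a synthetic long-tailed latency sample.
import random
import statistics

random.seed(7)
# Mostly fast responses, plus an occasional slow outlier (the long tail).
samples_ms = (
    [random.gauss(80, 10) for _ in range(950)]
    + [random.gauss(900, 150) for _ in range(50)]
)

mean = statistics.fmean(samples_ms)
cuts = statistics.quantiles(samples_ms, n=100)   # 99 percentile cut points
p50, p95, p99 = cuts[49], cuts[94], cuts[98]

print(f"mean={mean:.0f} ms  p50={p50:.0f} ms  p95={p95:.0f} ms  p99={p99:.0f} ms")
# The mean stays modest while p95/p99 reveal the real user impact.
```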
Technical Agility perspective
› Technical agility is all about a mature software delivery culture
› Agility requires writing living, breathing software, driven by three rules: continuous refactoring, continuous testing and evolutionary design.
› Continuous testing involves continuously executing tests as part of the software delivery pipeline, in order to measure the business risks associated with a software release as rapidly as possible.
› Performance issues beget software redesign, so the business risks are immense when performance work gets pushed late in the SDLC.
› So continuous performance engineering is the need of the hour (a minimal pipeline gate is sketched below).
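As one way to make that continuous, here is a hedged sketch of a performance gate in the delivery pipeline: it reads raw latency samples produced by a test run and fails the build when the p95 breaches the SLA. The file name, JSON shape and threshold are illustrative assumptions.

```python
# Fail the CI/CD stage when the p95 latency breaches the SLA.
import json
import statistics
import sys

SLA_P95_MS = 250.0                                 # hypothetical SLA threshold

with open("latency_samples.json") as f:            # raw per-request latencies (ms)
    samples_ms = json.load(f)

p95 = statistics.quantiles(samples_ms, n=100)[94]  # 95th percentile
print(f"p95 latency: {p95:.1f} ms (SLA: {SLA_P95_MS} ms)")

# A non-zero exit code fails the pipeline stage, surfacing the risk early.
sys.exit(0 if p95 <= SLA_P95_MS else 1)
```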