Mistakes we make_and_howto_avoid_them_v0.12

Trevor Warren
Principal, Tech Arch - Performance Engineering, Technology Consulting at Accenture - http://www.accenture.com
Mistakes We Make and How to Avoid Them 
CMGA ACT Meetup – Nov 18th, 2014
Let’s Look At The Agile Development Process 
Source – www.cprime.com 
Source – www.agileatlas.org
Let’s Look At The Waterfall Development Process 
Source – www.seguetech.com
Try To See The Bigger Picture 
Source – www.eeight.com 
•It’s all about the customer and the end user 
•Performance isn’t a point-in-time process; it’s ongoing and includes activities performed across the entire life of a system 
•The technology part of performance is generally the easy, well-defined part; we mostly fail because we ignore the human element, i.e. interaction, experience, expectation, etc. 
•Always tie performance outcomes to business outcomes. It helps set the right expectations and drives outcomes the business is willing to fund. 
•Focus on buy-in and selling the value proposition. The technology, capability, people, tools, etc. will all come once you’ve got your stakeholders on board. 
•Performance Test is not Performance Engineering
Poorly Defined Non Functional Requirements 
Source – cartoonstock.com 
•Lack of good NFRs is one of the most common reasons systems fail 
•Don’t leave NFRs to the Business Analysts. Take ownership; work with Business and IT to define relevant NFRs. 
•Refer to industry standards, leverage past experience and speak to peers. It’s rare that a program needs to reinvent the wheel. 
•Follow the SMART approach: Specific, Measurable, Assignable/Achievable, Realistic, Time-related. 
•Define NFRs that can be measured and validated at Performance Test. Defining NFRs that can’t be measured or validated is useless. It’s not funny how often this happens. 
•Get teams across the program to buy into the NFRs and own them across their relevant domains.
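A measurable NFR is one a test harness can check automatically at Performance Test. A minimal sketch of that idea; the 800 ms p95 target and the sample latencies are hypothetical illustration values, not real requirements:

```python
# Validate a measurable NFR such as "95% of responses complete within 800 ms".
# The threshold and sample latencies below are illustrative, not real targets.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_nfr(latencies_ms, pct=95, limit_ms=800):
    """True if the pct-th percentile latency is within the NFR limit."""
    return percentile(latencies_ms, pct) <= limit_ms

latencies = [120, 340, 280, 760, 450, 1900, 300, 610, 520, 410]
print(percentile(latencies, 95), meets_nfr(latencies))
```

An NFR phrased this way ("p95 of transaction X under load Y must be under Z ms") is directly checkable, unlike "the system must be fast".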
Investing In An Architecture That Doesn’t Scale 
Source – www.barrylupton.com 
•Vendor benchmarks are delivered on systems with specs and hardware you’ll rarely have access to. Be realistic with your assumptions of scalability. 
•Build a PoC and pilot your solution in a sandpit environment. Don’t wait until Performance Test to work out architectural constraints. 
•Leverage existing capability, peer reviews and industry experience to validate your solution design. 
•Avoid fancy 3rd-party COTS components if possible and stick to simple designs and implementations. 
•Make it a point to stay updated on Anti-Patterns. Every development platform has its own set of Anti-Patterns.
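One anti-pattern worth piloting out of a design early is the chatty interface (N+1 remote calls where one batched call would do). A sketch of why it matters at scale; the 50 ms round-trip cost and batch size are assumed illustration values:

```python
# Chatty-interface anti-pattern: N round trips vs one call per batch.
# LATENCY_MS is a hypothetical fixed round-trip cost, not a measured number.

LATENCY_MS = 50

def cost_chatty(n_items):
    """One remote call per item: cost grows linearly with N."""
    return n_items * LATENCY_MS

def cost_batched(n_items, batch_size=100):
    """One remote call per batch of items: cost grows with ceil(N/batch)."""
    batches = -(-n_items // batch_size)  # ceiling division
    return batches * LATENCY_MS

for n in (10, 1000):
    print(f"{n} items: chatty={cost_chatty(n)} ms, batched={cost_batched(n)} ms")
```

At 1,000 items the chatty design pays for 1,000 round trips while the batched design pays for 10; this is exactly the kind of constraint a PoC in a sandpit exposes before Performance Test does.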
Tools Galore, Give Me Something That Works!!! 
Source – www.toondoo.com 
•Most clients have a gazillion tools sitting around. Find something that works and implement it. 
•Don’t expect an integrated monitoring, diagnostics, reporting solution. Grab the tools you can and get them to work. 
•Open Source tools work well when it comes to basic systems monitoring. Be well aware of their limitations. 
•When it comes to Performance Testing, identify your requirements and call out the need to invest early on. Performance Testing tools can be really expensive and budget generally ends up being the biggest impediment to their procurement. 
•Performance Tools are required across the life cycle of the system – Modelling, Profiling/Diagnostics, Performance Testing, Monitoring & Capacity Management
Execute SVT In A Scaled Down Environment 
Source – www.glasbergen.com 
•Executing performance testing in a scaled down environment does come with an element of risk 
•Unless your SVT environments are 70–80% like production (in terms of configuration & capacity), extrapolation of the numbers carries high risk 
•Most environments are virtual these days. Watch out for Dev/Test environments, which might have a higher over-provisioning ratio than production and will again skew the numbers. 
•Assuming linear scalability for your application when modelling performance or extrapolating performance using data from scaled down environments can get you into a lot of trouble. 
•Keep an eye out for background workload especially on virtual infrastructure which can skew your numbers in a test environment.
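One way to see why assuming linear scalability is dangerous is to compare it against a sub-linear scaling model such as Gunther's Universal Scalability Law. The contention and coherency coefficients below are made-up illustration values, not measurements from any system:

```python
# Universal Scalability Law: speedup(N) = N / (1 + a*(N-1) + b*N*(N-1))
# a = contention (serialisation), b = coherency (crosstalk) penalty.
# The coefficient values are illustrative only.

def usl_speedup(n, a=0.05, b=0.001):
    return n / (1 + a * (n - 1) + b * n * (n - 1))

for nodes in (2, 8, 32):
    linear = nodes  # naive "double the boxes, double the throughput"
    print(f"{nodes:>3} nodes: linear={linear:5.1f}x  USL={usl_speedup(nodes):5.2f}x")
```

Even with these mild coefficients the model predicts roughly 9x at 32 nodes where a linear extrapolation from a small test rig would promise 32x, which is why numbers from a scaled-down environment need a scaling model, not a ruler.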
Read Between The Lines – Cloud Service SLAs 
Source – www.truthliesdeceptioncoverups.info 
•When building applications that leverage cloud components, be careful and read between the lines. Most vendor SLAs don’t go beyond their data centers. 
•If you own performance, you own SLAs end to end while your vendor only owns SLAs within their data centers. Build resilience into your solution. 
•Most cloud vendors charge large sums to performance test using their systems; it’s preferable to bake those requirements into the procurement process. 
•If you choose to use stubs to test third-party components, be aware of the performance penalty you are overlooking. 
•Most services are moving to the cloud and there can be a significant performance penalty when interacting with multiple cloud services hosted across different locations. Design sensibly.
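Owning the SLA end to end is harder than it looks because the availability of serially dependent services multiplies. A quick sketch of the arithmetic; the individual 99.9%/99.95% figures are illustrative, not any vendor's actual SLA:

```python
# Composite availability of a chain of serially dependent services:
# multiply the individual SLAs. The figures below are illustrative examples.

def composite_availability(slas):
    result = 1.0
    for sla in slas:
        result *= sla
    return result

chain = [0.999, 0.9995, 0.999]  # e.g. app tier, cloud API, storage service
overall = composite_availability(chain)
hours_down = (1 - overall) * 365 * 24  # expected unavailability per year
print(f"end-to-end: {overall:.4%}, ~{hours_down:.1f} h/year down")
```

Three "three-nines-ish" components already drop the end-to-end figure below any single vendor's headline number, which is the gap between their SLA and yours.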
Trend Infra Utilization Metrics & Extrapolate 
Source – www.andertoons.com 
•Managing performance of a system goes well beyond performance testing and continues past go-live. 
•Trending CPU, memory and disk utilization metrics and extrapolating them using statistical approaches is high risk. 
•Focus on key business workload drivers. Map the business workload drivers to the relevant infrastructure workload drivers and build your statistical models using appropriate modelling techniques. 
•Capture data at relevant time intervals (30s or less) for modelling and forecasting, and roll them up using your own roll-up routines. Don’t depend on the system monitoring tools to give you one data point per hour; those numbers are quite useless. 
•Capacity Management doesn’t require expensive tooling. It requires a well-defined approach and some basic modelling techniques any maths grad can pick up.
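Mapping a business workload driver to an infrastructure driver can be as simple as a least-squares fit over captured samples. A minimal sketch; the order counts, CPU figures and the 800 orders/min forecast point are invented for illustration:

```python
# Model CPU% as a linear function of a business driver (orders/min) using
# ordinary least squares, then forecast. All data points are illustrative.

def linfit(xs, ys):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

orders_per_min = [100, 200, 300, 400, 500]
cpu_pct = [12, 22, 31, 42, 52]
m, c = linfit(orders_per_min, cpu_pct)
forecast = m * 800 + c  # projected CPU% at 800 orders/min
print(f"cpu ~= {m:.3f} * orders + {c:.2f}; at 800 orders/min -> {forecast:.0f}%")
```

The point is the approach, not the tooling: once business drivers are mapped to infrastructure drivers, forecasting is a question the business can ask ("what happens at 800 orders a minute?") rather than a raw CPU trend line.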
Thank You 
Source – www.dilbert.com