
Door to performance testing



A guide to starting a performance testing project: the areas to consider and review before a performance test begins, with a focus on the key inputs a test model needs for proper workload modeling.


  1. ReThink: Road to an Effective Performance Test (Dharshana Warusavitharana)
  2. Why performance test? Can traditional testing capture everything?
  3. 1. Downtime is expensive 2. Functional testing does not reveal concurrency issues 3. Simulation of the real-world user experience 4. Tests that the software meets expectations under expected workloads: a. speed, b. scalability, and c. stability
  4. What is a good-performing system?
  5. 1. Availability 2. Response time 3. Throughput 4. Utilization. Relative response-time bands: 1. Greater than 15 seconds 2. Greater than 4 seconds 3. 2 to 4 seconds 4. Less than 2 seconds 5. Sub-second response time 6. Deci-second response time
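The response-time bands above can be turned into a simple classifier for tagging measured samples; this is a minimal sketch using the slide's boundaries, and which band counts as "acceptable" depends on the system under test.

```python
def classify_response_time(seconds: float) -> str:
    """Map a measured response time to the relative bands from the slide."""
    if seconds >= 15:
        return "greater than 15 seconds"
    if seconds >= 4:
        return "greater than 4 seconds"
    if seconds >= 2:
        return "2 to 4 seconds"
    if seconds >= 1:
        return "less than 2 seconds"
    if seconds >= 0.1:
        return "sub-second"
    return "deci-second"
```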
  6. How to Structure a Perf Test
  7. The performance-testing approach consists of the following activities: Activity 1. Identify the Test Environment. Activity 2. Identify Performance Acceptance Criteria. Activity 3. Plan and Design Tests. Activity 4. Configure the Test Environment. Activity 5. Implement the Test Design. Activity 6. Execute the Test. Activity 7. Analyze Results, Report, and Retest.
  8. How customers will reach us
  9. 1. Need to know the limits of the system 2. Need to understand the bottlenecks of some specific scenarios 3. Want to check system usability 4. Already faced an issue and need our help in isolating it 5. Want to see whether the system meets the defined KPIs 6. Want to test a product that is ready to ship to end users
  10. What solutions are in our bucket, and when?
  11. ● Load testing ● Stress testing ● Endurance testing ● Spike testing ● Volume testing ● Scalability testing
  12. Performance testing. 1. This type of testing determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. 2. Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.
  13. Load testing. Focused on determining or validating performance characteristics of the system or application under test when subjected to workloads and load volumes anticipated during production operations. Endurance / long-running tests ○ Run for a long time ○ Determine whether issues appear over a long run
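A load test at its core is a harness like the following: a closed-loop load generator in which each worker issues its next request only after the previous one returns, modeling the anticipated workload. This is a minimal standard-library sketch, with a stub standing in for the real HTTP call; an endurance test would keep the same loop running for hours and watch the numbers for drift.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def run_load_test(request_fn, concurrency: int, requests_per_worker: int) -> dict:
    """Drive request_fn from `concurrency` parallel workers and
    collect per-request latencies (closed-loop load model)."""
    latencies = []

    def worker(_):
        local = []
        for _ in range(requests_per_worker):
            start = time.perf_counter()
            request_fn()
            local.append(time.perf_counter() - start)
        return local

    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for chunk in pool.map(worker, range(concurrency)):
            latencies.extend(chunk)
    elapsed = time.perf_counter() - started

    return {
        "requests": len(latencies),
        "throughput_rps": len(latencies) / elapsed,
        "mean_latency_s": statistics.mean(latencies),
    }

# Stub standing in for a real request while sketching the harness:
stats = run_load_test(lambda: time.sleep(0.001), concurrency=4, requests_per_worker=25)
```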
  14. Stress testing. ○ This subcategory of performance testing is focused on determining or validating performance characteristics of the system or application under test when subjected to conditions beyond those anticipated during production operations. ○ Stress tests may also include tests focused on determining or validating performance characteristics of the system or application under test when subjected to other stressful conditions, such as limited memory, insufficient disk space, or server failure. ○ These tests are designed to determine under what conditions an application will fail, how it will fail, and what indicators to watch for as it approaches failure.
  15. What should you ask the customer?
  16. 1. What makes you think there is a performance problem? 2. Can the problem be expressed in terms of latency/TPS? 3. What is the environment (AWS, OpenStack)? 4. What are the user patterns, and how do users use the system? 5. What are the expected peak and off-peak loads? 6. What products and versions are used? 7. What are the server and network monitoring key performance indicators (KPIs)? 8. What is the configuration/deployment (if we do not have this information)?
  17. Capacity Planning 1. How many end users will actually use the application? 2. How many of these users will use it concurrently? 3. How will the end users connect to the application? 4. How many additional end users will require access to the application over time? 5. What will the final application landscape look like in terms of the number and location of the servers? 6. What effect will the application have on network capacity?
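Question 2 above (how many users are active concurrently) can be estimated from the arrival rate with Little's Law, N = X × (R + Z): average concurrency equals throughput times the sum of response time and user think time. A one-line sketch, with illustrative numbers:

```python
def concurrent_users(arrival_rate_per_s: float,
                     response_time_s: float,
                     think_time_s: float) -> float:
    """Little's Law: average concurrency N = X * (R + Z), where X is
    throughput (arrival rate), R is response time, and Z is think time."""
    return arrival_rate_per_s * (response_time_s + think_time_s)

# e.g. 50 requests/s, 0.5 s responses, 9.5 s think time
# implies about 500 concurrently active users.
estimate = concurrent_users(50, 0.5, 9.5)
```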
  18. Selecting your tools
  19. Test execution. Monitoring. Debugging.
  20. 1. Tools are not universal. 2. Need to be creative with the environment. 3. Need to know what to monitor in order to select the right tools. 4. Depends on the customer deployment. 5. Should adjust to the level of access to the system.
  21. Designing an Appropriate Performance Test Environment 1. Scaling of the test tool, clustering with load 2. Data collection and deployment of databases 3. Configure proxy services and load balancers to avoid throttling the traffic 4. Scale up EC2 instances or other cloud infrastructure to support the load 5. Deploy monitoring tools
  22. Checklist before test execution
  23. 1. Choosing an appropriate performance testing tool 2. Designing an appropriate performance test environment 3. Setting realistic and appropriate performance targets 4. Making sure your application is stable 5. Obtaining a code freeze 6. Identifying and scripting the business-critical transactions 7. Providing sufficient test data of high quality 8. Ensuring accurate performance test design 9. Identifying the server and network monitoring key performance indicators (KPIs) 10. Allocating enough time to performance test effectively
  24. Monitoring and what we collect
  25. 1. Processor usage, load average 2. Memory usage 3. Disk time, disk queue length 4. Network bandwidth, network output queue length 5. Memory pages/second, page faults/second 6. Response time, throughput 7. Amount of connection pooling 8. Hit ratios, hits per second 9. Database locks 10. Thread counts 11. Garbage collection
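A few of the host and process metrics above can be sampled with nothing but the standard library, which is handy when access to the system is limited. A Unix-only sketch (it assumes `os.getloadavg` and the `resource` module are available); real runs would use sar, top, or an APM agent instead:

```python
import os
import resource

def collect_basic_metrics() -> dict:
    """Sample system load average plus this process's peak memory
    and user-mode CPU time."""
    load1, load5, load15 = os.getloadavg()
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "load_avg_1m": load1,
        "load_avg_5m": load5,
        "load_avg_15m": load15,
        "max_rss_kb": usage.ru_maxrss,   # peak resident set size (KiB on Linux)
        "user_cpu_s": usage.ru_utime,    # CPU time spent in user mode
    }

metrics = collect_basic_metrics()
```

In a real test these samples would be taken on the servers under load at a fixed interval, not once in the load-generator process.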
  26. Tools used in monitoring 1. The testing tool itself a. Throughput b. Response time 2. Linux system stats (sar recordings) a. Load average b. CPU load c. Memory 3. Linux top command a. Memory allocation over time 4. Dynatrace/Nagios/Amazon CloudWatch 5. Splunk/Logstash a. Monitor the distributed error logs
  27. Debugging: what we need
  28. Collect resources for debugging. Some resources can be collected at test execution time: 1. Heap dumps 2. Thread dumps 3. Garbage collector logs. Others need to be collected separately: 1. JFR recordings 2. JProfiler 3. Other profiling tools 4. JDBC spy
  29. High CPU. Collect: 1. JFR 2. Thread IDs and CPU usage of Java threads 3. Multiple thread dumps. Typical cause: high CPU consumption of one or more application threads
  30. High CPU and High GC. Collect: 1. JFR 2. Thread IDs and CPU usage of Java threads 3. Multiple thread dumps 4. GC logs 5. Heap dumps (if necessary). Typical causes: ○ High CPU consumption of one or more application threads ○ High CPU consumption of GC threads ○ Both of the above
  31. Application Slows Down, Application Hangs, Request Latency Increases. Collect: 1. JFR 2. Thread IDs and CPU usage of Java threads 3. Multiple thread dumps 4. Heap dumps (if necessary) 5. GC logs. Typical causes: ○ Application threads blocking while unable to obtain a lock ○ Threads continuously waiting on I/O ○ High GC activity during peak load ○ High CPU consumption of one or more threads during peak load ○ High CPU consumption of a periodic event ○ Deadlocks
  32. High Memory Usage, OOM, Memory Leak. Collect: 1. JFR 2. GC logs 3. Heap dumps (if necessary). Typical causes: ○ Not enough memory allocated to the JVM ○ High memory usage (with no leak) ○ Too many allocations due to auto-boxing ○ Memory leak
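The thread dumps mentioned in the scenarios above are usually triaged by counting thread states across successive dumps: a growing number of BLOCKED threads points at lock contention, while many RUNNABLE threads are the CPU suspects. A small sketch that extracts state counts from jstack-style output, shown against a tiny hand-made dump fragment:

```python
import re
from collections import Counter

def thread_state_counts(thread_dump: str) -> Counter:
    """Count JVM thread states in a jstack-style thread dump."""
    return Counter(re.findall(r"java\.lang\.Thread\.State: (\w+)", thread_dump))

# Hand-made fragment for illustration (real dumps come from jstack/jcmd):
SAMPLE_DUMP = """
"http-worker-1" #12 prio=5
   java.lang.Thread.State: RUNNABLE
"http-worker-2" #13 prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"http-worker-3" #14 prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"""
counts = thread_state_counts(SAMPLE_DUMP)
```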
  33. Data collection and Reporting
  34. Data collection
  35. Reporting 1. Should present all your analysis of the current system. 2. Clearly state your assumptions. 3. Tools used and test deployment. 4. Tuning and resource allocations. 5. Test breakdown and concurrency details. 6. Observations and evidence for each observation. 7. Your conclusions. 8. Suggestions with references and a summary.
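The raw samples behind points 5 and 6 are typically condensed into a few headline numbers before they go into the report: throughput, the latency distribution, and the error rate. A minimal aggregation sketch (the simple index-based percentile is an assumption; real tools interpolate):

```python
import statistics

def summarize_run(latencies_s, errors: int, duration_s: float) -> dict:
    """Condense raw per-request latencies into report headline numbers."""
    ordered = sorted(latencies_s)
    total = len(ordered) + errors
    p95_index = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return {
        "throughput_rps": total / duration_s,
        "mean_latency_s": round(statistics.mean(ordered), 4),
        "p95_latency_s": ordered[p95_index],
        "error_rate": errors / total,
    }

# 95 fast requests, 5 slow ones, no errors, over a 10-second window:
report = summarize_run([0.1] * 95 + [0.5] * 5, errors=0, duration_s=10.0)
```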
  36. Engineer skill set: let's discuss
  37. 1. Communication, communication, and communication 2. The more I do, the more I learn 3. I learn a lot every time 4. A good understanding of the basics 5. Flexibility; some coding skills come in handy 6. Good knowledge of operating systems 7. Adaptability and communication 8. Skills with performance tooling
  38. Thank you. Time for questions.
