Load testing is an important part of the performance engineering process. However the industry is changing and load testing should adjust to these changes - a stereotypical, last-moment performance check is not enough anymore. There are multiple aspects of load testing - such as environment, load generation, testing approach, life-cycle integration, feedback and analysis - and none remains static. This presentation discusses how performance testing is adapting to industry trends to remain relevant and bring value to the table.
Multiple Dimensions of Load Testing, CMG 2015 paperAlexander Podelko
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. It is important to see a bigger picture beyond stereotypical, last-moment load testing. There are multiple dimensions of load testing: environment, load generation, testing approach, life-cycle integration, feedback and analysis. This paper discusses these dimensions and how load testing tools support them.
Continuous Performance Testing: Myths and RealitiesAlexander Podelko
The document discusses continuous performance testing in the context of agile development. It notes that while continuous performance testing is becoming more common, it remains a challenge to fully integrate into continuous integration processes. Different approaches like record and playback automation, API scripting, and exploratory testing each have advantages and disadvantages depending on the system and development context. Fully automating all performance tests may not be realistic, so a tiered approach with some simple automated tests alongside more extensive manual testing is often needed. The key is finding the right balance and mix of approaches for each unique situation.
Continuous Performance Testing: Challenges and ApproachesAlexander Podelko
Integrating performance testing into agile development processes presents several challenges. Continuous performance testing aims to automate performance tests run during each build or iteration. This helps detect performance regressions early. However, it requires optimizing test coverage given time and resource constraints. Other challenges include reducing noise in test results, detecting changes between builds, advanced analysis of results, and defining organizational roles and responsibilities for maintaining tests. The document discusses these challenges and how companies like MongoDB have implemented continuous performance testing in their development pipelines to address them.
Load testing is an important part of the performance engineering process. However the industry is changing and load testing should adjust to these changes - a stereotypical, last-moment performance check is not enough anymore. There are multiple aspects of load testing - such as environment, load generation, testing approach, life-cycle integration, feedback and analysis - and none remains static. This presentation discusses how performance testing is adapting to industry trends to remain relevant and bring value to the table.
Performance Testing in the Agile LifecycleLee Barnes
Traditional large scale end-of-cycle performance tests served enterprises well in the waterfall era. However, as organizations transition to agile development models, many find their tried and true approach to performance testing—and their performance testing resources—becoming somewhat irrelevant. The strict requirements and lengthy durations just don’t fit in the context of an agile cycle. Additionally, investigating system performance at the end of the development effort misses out on the early stage feedback offered by an agile approach. And it’s more important than ever that today’s agile-built systems perform. So how can agile organizations ensure optimum performance of their business critical systems? Lee Barnes discusses why agile teams need to change their thinking about performance from a narrow focus on testing to a broader focus on analysis—from a people, process and technology perspective. Take back techniques for shifting your performance testing/analysis earlier in the development cycle and extracting performance data that is immediately actionable.
This document discusses performance testing challenges for an agile development team working on a performance critical Java application. It estimates that manually executing performance tests against 9 configurations would take 1+ man-months. To address this, it evaluates options like adding more performance engineers, limiting tests and configurations, or automating performance testing. It recommends automating testing for benefits like running tests continuously and allowing small teams to efficiently test performance. The case study details how the team automated testing using JMeter, built a process integrated with TeamCity, and upgraded infrastructure to support concurrent testing. Automation reduced the testing cycle from over 1 man-month to 4 days, allowing more time for analysis and new testing while finding 17 issues.
Top Ten Secret Weapons For Agile Performance TestingAndriy Melnyk
This document outlines top secret weapons for agile performance testing. It discusses making performance explicit, having performance testers work as part of development teams, driving performance tests with customer requirements, taking a disciplined scientific approach to analyzing test results, starting performance testing early in projects, automating performance test workflows, and getting frequent feedback to iteratively improve.
Multiple Dimensions of Load Testing, CMG 2015 paperAlexander Podelko
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. It is important to see a bigger picture beyond stereotypical, last-moment load testing. There are multiple dimensions of load testing: environment, load generation, testing approach, life-cycle integration, feedback and analysis. This paper discusses these dimensions and how load testing tools support them.
Continuous Performance Testing: Myths and RealitiesAlexander Podelko
The document discusses continuous performance testing in the context of agile development. It notes that while continuous performance testing is becoming more common, it remains a challenge to fully integrate into continuous integration processes. Different approaches like record and playback automation, API scripting, and exploratory testing each have advantages and disadvantages depending on the system and development context. Fully automating all performance tests may not be realistic, so a tiered approach with some simple automated tests alongside more extensive manual testing is often needed. The key is finding the right balance and mix of approaches for each unique situation.
Continuous Performance Testing: Challenges and ApproachesAlexander Podelko
Integrating performance testing into agile development processes presents several challenges. Continuous performance testing aims to automate performance tests run during each build or iteration. This helps detect performance regressions early. However, it requires optimizing test coverage given time and resource constraints. Other challenges include reducing noise in test results, detecting changes between builds, advanced analysis of results, and defining organizational roles and responsibilities for maintaining tests. The document discusses these challenges and how companies like MongoDB have implemented continuous performance testing in their development pipelines to address them.
Load testing is an important part of the performance engineering process. However the industry is changing and load testing should adjust to these changes - a stereotypical, last-moment performance check is not enough anymore. There are multiple aspects of load testing - such as environment, load generation, testing approach, life-cycle integration, feedback and analysis - and none remains static. This presentation discusses how performance testing is adapting to industry trends to remain relevant and bring value to the table.
Performance Testing in the Agile LifecycleLee Barnes
Traditional large scale end-of-cycle performance tests served enterprises well in the waterfall era. However, as organizations transition to agile development models, many find their tried and true approach to performance testing—and their performance testing resources—becoming somewhat irrelevant. The strict requirements and lengthy durations just don’t fit in the context of an agile cycle. Additionally, investigating system performance at the end of the development effort misses out on the early stage feedback offered by an agile approach. And it’s more important than ever that today’s agile-built systems perform. So how can agile organizations ensure optimum performance of their business critical systems? Lee Barnes discusses why agile teams need to change their thinking about performance from a narrow focus on testing to a broader focus on analysis—from a people, process and technology perspective. Take back techniques for shifting your performance testing/analysis earlier in the development cycle and extracting performance data that is immediately actionable.
This document discusses performance testing challenges for an agile development team working on a performance critical Java application. It estimates that manually executing performance tests against 9 configurations would take 1+ man-months. To address this, it evaluates options like adding more performance engineers, limiting tests and configurations, or automating performance testing. It recommends automating testing for benefits like running tests continuously and allowing small teams to efficiently test performance. The case study details how the team automated testing using JMeter, built a process integrated with TeamCity, and upgraded infrastructure to support concurrent testing. Automation reduced the testing cycle from over 1 man-month to 4 days, allowing more time for analysis and new testing while finding 17 issues.
Top Ten Secret Weapons For Agile Performance TestingAndriy Melnyk
This document outlines top secret weapons for agile performance testing. It discusses making performance explicit, having performance testers work as part of development teams, driving performance tests with customer requirements, taking a disciplined scientific approach to analyzing test results, starting performance testing early in projects, automating performance test workflows, and getting frequent feedback to iteratively improve.
You want to integrate skilled testing and development work. But how do you accomplish this without developers accidentally subverting the testing process or testers becoming an obstruction? Efficient, deep testing requires “critical distance” from the development process, commitment and planning to build a testable product, dedication to uncovering the truth, responsiveness among team members, and often a skill set that developers alone—or testers alone—do not ordinarily possess. James Bach presents a model—a redesign of the famous Agile Testing Quadrants that distinguished between business vs. technical facing tests and supporting vs. critiquing―that frames these dynamics and helps teams think through the nature of development and testing roles and how they might blend, conflict, or support each other on an Agile project. James includes a brief discussion of the original Agile Testing Quadrants model, which the presenters believe has created much confusion about the role of testing in Agile.
Wim Demey - Regression Testing in a Migration Project TEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Regression Testing in a Migration Project by Wim Demey. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Overview
1. The main goals of performance testing
2. The Advantages of Performance Tests
3. The Disadvantages of Performance Tests
4. Types of performance tests
5. Determining a successful performance testing project
Enjoy!
Enterprise software needs to be faster than the competition.
In this presentation we will explore what is performance testing, why it is important and when should you implement these tests.
T19 performance testing effort - estimation or guesstimation revisedTEST Huddle
This document discusses performance testing estimation and provides tips to improve the estimation process. It recommends dividing estimation into stages like requirements analysis, design, development, testing and delivery. Key factors to consider include non-functional requirements, test cases, test runs, server monitoring needs, and data/environment setup. Tasks that typically consume more time include scripting, test execution and data setup. The document emphasizes estimating early, listing assumptions, and using a technique rather than guessing to improve accuracy.
Darius Silingas - From Model Driven Testing to Test Driven ModellingTEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on From Model Driven Testing to Test Driven Modelling by Darius Silingas. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Performance testing validates an application's responsiveness, stability, and other quality attributes under various workloads. It involves load testing, stress testing, endurance testing, spike testing, volume testing, availability testing, and scalability testing. The key parameters analyzed are response time, throughput, and memory utilization. Performance testing helps determine an application's speed, scalability, stability, and ability to handle changes in load and traffic over time.
Extreme programming (xp) | David TzemachDavid Tzemach
It’s simply the best presentation that explains the agile methodology of Extreme Programming!
Overview
1. What is Extreme programming?
2. Extreme programming as an agile methodology.
3. The values of Extreme programming
4. The Activities of Extreme programming
5. The 12 core practices of Extreme programming
6. The roles of Extreme programming
Enjoy :)
James Brodie - Outsourcing Partnership - Shared Perspectives TEST Huddle
This presentation discusses NFU Mutual's outsourcing project and partnership with a vendor for additional testing. It covers their selection process, including developing criteria and running proofs of concept. It also discusses how they have lived the relationship, including governance, service level agreements, integrating teams, and moving work offshore. Metrics and cultural integration are important factors for a successful partnership. Overall, the key to success is open communication, agreed metrics, and addressing potential issues upfront.
Mieke Gevers - Performance Testing in 5 Steps - A Guideline to a Successful L...TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Performance Testing in 5 Steps - A Guideline to a Successful Load Test by Mieke Gevers. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Ben Walters - Creating Customer Value With Agile Testing - EuroSTAR 2011TEST Huddle
EuroSTAR Software Testing Conference 2011 presentation on Creating Customer Value With Agile Testing by Ben Walters. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
This document summarizes a full-day tutorial on fundamentals of risk-based testing presented by Dale Perry of Software Quality Engineering on April 29, 2013. The tutorial is intended to provide an overview of risk-based testing and how it can be used to prioritize testing efforts. It discusses determining product risks, analyzing risks, developing test plans based on risks, and evaluating results. The document also provides background on the presenter Dale Perry and the training organization Software Quality Engineering.
Lean Enterprise, A Definitive Approach in Software Development ProductionBerk Dülger
This document summarizes a presentation on lean enterprise and software development methodologies. It discusses:
1) The chaotic nature of software development and how 66% of projects are unsuccessful or 30% cancelled due to unpredictable nature, leading many to adopt agile/lean methods.
2) An overview of agile methods like Scrum and Kanban, as well as DevOps and Continuous Delivery approaches.
3) How the target methodology selection is DevOps combined with Scrumban for management, to increase efficiency and push increments to production just in time, unlike sprints in Scrum.
4) The final section discusses tactical DevOps adoption through continuous testing and visible business value at each step to encourage practitioners
Doron Reuveni - The Mobile App Quality Challenge - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on The Mobile App Quality Challenge by Doron Reuveni. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
The document discusses the "test pyramid" concept for balancing test suites from unit to end-to-end tests. It provides examples of different types of tests including unit tests, integration tests, UI/end-to-end tests. It also discusses challenges with different types of tests and strategies for addressing those challenges including dependency injection, mocks, and tools like Cucumber, Robolectric, and Pacto. The document seeks feedback on testing approaches and provides additional resources on testing best practices.
EuroSTAR Software Testing Conference 2010 presentation on Testing "slow flows" Fast, Automated End-2-End Testing using interrupts by Dominic Maes. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
DevOps Tactical Adoption Theory - DevOpsDays istanbul 2016Berk Dülger
This document contains information about DevOps tactics and adoption theories presented by Berk Dülger. It discusses implementing DevOps practices incrementally to demonstrate business value at each step and gain management support. It also emphasizes the importance of continuous testing, including both unit and UI testing, and provides an example of a tactical adoption approach for continuous testing.
Kanoah Tests is a test management tool that integrates seamlessly with JIRA. It allows coordinating all test management activities like planning, authoring, execution, tracking and reporting from within JIRA. Key features include native JIRA integration, reusable test cases across projects, powerful REST API, and out-of-the-box reports for real-time insights. Reviews praise its simple and elegant solution for linking tests between projects without needing to learn new tools or switch contexts.
This document discusses the principles and steps for designing a workload to test the performance and scalability of a web application. It outlines defining metrics, designing operations and their mix, scaling rules, and generating representative data. The goal is to create a workload that is predictable, repeatable, and scalable to emulate real usage and identify bottlenecks. A 3-page sample web application is used to demonstrate the design process.
This document discusses various performance testing methodologies. It begins by introducing performance testing as a subset of performance engineering aimed at building performance into a system's design. The document then describes different types of performance testing including load testing, stress testing, endurance/soak testing, spike testing, and configuration testing. It emphasizes that performance testing validates attributes like scalability and resource usage under various loads.
You want to integrate skilled testing and development work. But how do you accomplish this without developers accidentally subverting the testing process or testers becoming an obstruction? Efficient, deep testing requires “critical distance” from the development process, commitment and planning to build a testable product, dedication to uncovering the truth, responsiveness among team members, and often a skill set that developers alone—or testers alone—do not ordinarily possess. James Bach presents a model—a redesign of the famous Agile Testing Quadrants that distinguished between business vs. technical facing tests and supporting vs. critiquing―that frames these dynamics and helps teams think through the nature of development and testing roles and how they might blend, conflict, or support each other on an Agile project. James includes a brief discussion of the original Agile Testing Quadrants model, which the presenters believe has created much confusion about the role of testing in Agile.
Wim Demey - Regression Testing in a Migration Project TEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Regression Testing in a Migration Project by Wim Demey. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Overview
1. The main goals of performance testing
2. The Advantages of Performance Tests
3. The Disadvantages of Performance Tests
4. Types of performance tests
5. Determining a successful performance testing project
Enjoy!
Enterprise software needs to be faster than the competition.
In this presentation we will explore what is performance testing, why it is important and when should you implement these tests.
T19 performance testing effort - estimation or guesstimation revisedTEST Huddle
This document discusses performance testing estimation and provides tips to improve the estimation process. It recommends dividing estimation into stages like requirements analysis, design, development, testing and delivery. Key factors to consider include non-functional requirements, test cases, test runs, server monitoring needs, and data/environment setup. Tasks that typically consume more time include scripting, test execution and data setup. The document emphasizes estimating early, listing assumptions, and using a technique rather than guessing to improve accuracy.
Darius Silingas - From Model Driven Testing to Test Driven ModellingTEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on From Model Driven Testing to Test Driven Modelling by Darius Silingas. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Performance testing validates an application's responsiveness, stability, and other quality attributes under various workloads. It involves load testing, stress testing, endurance testing, spike testing, volume testing, availability testing, and scalability testing. The key parameters analyzed are response time, throughput, and memory utilization. Performance testing helps determine an application's speed, scalability, stability, and ability to handle changes in load and traffic over time.
Extreme programming (xp) | David TzemachDavid Tzemach
It’s simply the best presentation that explains the agile methodology of Extreme Programming!
Overview
1. What is Extreme programming?
2. Extreme programming as an agile methodology.
3. The values of Extreme programming
4. The Activities of Extreme programming
5. The 12 core practices of Extreme programming
6. The roles of Extreme programming
Enjoy :)
James Brodie - Outsourcing Partnership - Shared Perspectives TEST Huddle
This presentation discusses NFU Mutual's outsourcing project and partnership with a vendor for additional testing. It covers their selection process, including developing criteria and running proofs of concept. It also discusses how they have lived the relationship, including governance, service level agreements, integrating teams, and moving work offshore. Metrics and cultural integration are important factors for a successful partnership. Overall, the key to success is open communication, agreed metrics, and addressing potential issues upfront.
Mieke Gevers - Performance Testing in 5 Steps - A Guideline to a Successful L...TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Performance Testing in 5 Steps - A Guideline to a Successful Load Test by Mieke Gevers. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Ben Walters - Creating Customer Value With Agile Testing - EuroSTAR 2011TEST Huddle
EuroSTAR Software Testing Conference 2011 presentation on Creating Customer Value With Agile Testing by Ben Walters. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
This document summarizes a full-day tutorial on fundamentals of risk-based testing presented by Dale Perry of Software Quality Engineering on April 29, 2013. The tutorial is intended to provide an overview of risk-based testing and how it can be used to prioritize testing efforts. It discusses determining product risks, analyzing risks, developing test plans based on risks, and evaluating results. The document also provides background on the presenter Dale Perry and the training organization Software Quality Engineering.
Lean Enterprise, A Definitive Approach in Software Development ProductionBerk Dülger
This document summarizes a presentation on lean enterprise and software development methodologies. It discusses:
1) The chaotic nature of software development and how 66% of projects are unsuccessful or 30% cancelled due to unpredictable nature, leading many to adopt agile/lean methods.
2) An overview of agile methods like Scrum and Kanban, as well as DevOps and Continuous Delivery approaches.
3) How the target methodology selection is DevOps combined with Scrumban for management, to increase efficiency and push increments to production just in time, unlike sprints in Scrum.
4) The final section discusses tactical DevOps adoption through continuous testing and visible business value at each step to encourage practitioners
Doron Reuveni - The Mobile App Quality Challenge - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on The Mobile App Quality Challenge by Doron Reuveni. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
The document discusses the "test pyramid" concept for balancing test suites from unit to end-to-end tests. It provides examples of different types of tests including unit tests, integration tests, UI/end-to-end tests. It also discusses challenges with different types of tests and strategies for addressing those challenges including dependency injection, mocks, and tools like Cucumber, Robolectric, and Pacto. The document seeks feedback on testing approaches and provides additional resources on testing best practices.
EuroSTAR Software Testing Conference 2010 presentation on Testing "slow flows" Fast, Automated End-2-End Testing using interrupts by Dominic Maes. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
DevOps Tactical Adoption Theory - DevOpsDays istanbul 2016Berk Dülger
This document contains information about DevOps tactics and adoption theories presented by Berk Dülger. It discusses implementing DevOps practices incrementally to demonstrate business value at each step and gain management support. It also emphasizes the importance of continuous testing, including both unit and UI testing, and provides an example of a tactical adoption approach for continuous testing.
Kanoah Tests is a test management tool that integrates seamlessly with JIRA. It allows coordinating all test management activities like planning, authoring, execution, tracking and reporting from within JIRA. Key features include native JIRA integration, reusable test cases across projects, powerful REST API, and out-of-the-box reports for real-time insights. Reviews praise its simple and elegant solution for linking tests between projects without needing to learn new tools or switch contexts.
This document discusses the principles and steps for designing a workload to test the performance and scalability of a web application. It outlines defining metrics, designing operations and their mix, scaling rules, and generating representative data. The goal is to create a workload that is predictable, repeatable, and scalable to emulate real usage and identify bottlenecks. A 3-page sample web application is used to demonstrate the design process.
This document discusses various performance testing methodologies. It begins by introducing performance testing as a subset of performance engineering aimed at building performance into a system's design. The document then describes different types of performance testing including load testing, stress testing, endurance/soak testing, spike testing, and configuration testing. It emphasizes that performance testing validates attributes like scalability and resource usage under various loads.
The document discusses how organizations can leverage cloud capabilities for product testing. It outlines the benefits of using cloud testing such as reduced costs, on-demand access to resources, and the ability to test at scale. It also addresses challenges like sharing resources and test environment deployment. The document proposes a step-by-step approach defined by Impetus Technologies to effectively adopt cloud testing based on their test engineering maturity model.
Performance testing interview questions and answersGaruda Trainings
In software engineering, performance testing is in general testing performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
The document outlines best practices and tips for application performance testing. It discusses defining test plans that include load testing, stress testing, and other types of performance testing. Key best practices include testing early and often using an iterative approach, taking a DevOps approach where development and operations work as a team, considering the user experience, understanding different types of performance tests, building a complete performance model, and including performance testing in unit tests. The document also provides tips to avoid such as not allowing enough time and using a QA system that differs from production.
Automated performance testing simulates real users to determine an application's speed, scalability, and stability under load before deployment. It helps detect bottlenecks, ensures the system can handle peak load, and provides confidence that the application will work as expected on launch day. The process involves evaluating user expectations and system limits, creating test scripts, executing load, stress, and duration tests while monitoring servers, and analyzing results to identify areas for improvement.
Automated performance testing simulates real users to determine an application's speed, scalability, and stability under load before deployment. It helps detect bottlenecks, ensures the system can handle peak usage, and provides confidence that the application will work as expected on launch day. The process involves evaluating user needs, drafting test scripts, executing different types of load tests, and monitoring servers and applications to identify performance issues or degradation over time.
Testing throughout the software life cyclemuhamad iqbal
This document discusses testing throughout the software life cycle. It covers common software development models like the waterfall model and iterative models. It describes different levels of testing like unit testing, integration testing, system testing, and acceptance testing. It explains the objectives, typical activities, and outputs for each test level. Finally, it compares different types of testing like functional, non-functional, structural, and regression testing.
Performance Evaluation of a Network Using Simulation Tools or Packet TracerIOSRjournaljce
Today, the importance of information and accessing information is increasing rapidly. With the advancement of technology, one of the greatest means of achieving knowledge are, computers have entered in many areas of our lives. But the most important of them are the communication fields. This study will be a practical guide for understanding how to assemble and analyze various parameters in network performance evaluation and when designing a network what is necessary to looking for to remove the consequences of degrading performance. Therefore, what can you do in a network performance evaluation using simulation tools such as Network Simulation or Packet tracer and how various parameters can be brought together successfully? CCNA, CCNP, HCNA and HCNP educational level has been used and important setting has been simulated one by one. At the result this is a good guide for a local or wide area network. Finally, the performance issues precautions described. Considering the necessary parameters, imaginary networks were designed and evaluated both in CISCO Packet Tracer and Huawei's eNSP simulation program. But it should not be left unsaid that the networks have been designed and evaluated in free virtual environments, not in a real laboratory. Therefore, it is impossible to make actual performance appraisal and output as there is no actual data available.
This document introduces the Yet Another Performance Profiling Method (YAPP Method) for holistically tuning Oracle database performance. It begins by defining key terms like response time, scalability, and the 80/20 rule. It then explains that most performance issues originate from application design/implementation rather than the database itself. The document advocates focusing tuning efforts on the application level using a simplified approach focused on the areas of highest impact.
Workshop BI/DWH AGILE TESTING SNS Bank EnglishMarcus Drost
The workshop focused on improving testing processes for data intensive environments like business intelligence and data warehousing systems. Participants discussed challenges with the traditional waterfall model and benefits of agile/Scrum approaches. Common problems identified included unstable test data, long test runtimes, lack of automation, and pressure on testers. Potential solutions proposed applying agile practices like continuous integration and regression testing, automating test data generation, deployments, and output validation to make testing more efficient and independent of production systems. The workshop aimed to provide insights into both problems with current testing approaches and actions that could be taken to address them.
The most important aspect to release any product or application in the market is to deliver a satisfying user experience. And this can only be achieved when the application performs impeccably. To help understand the ways and means to ensure the same, this PPT will shed light on the essential elements under performance testing. To know more on software performance testing, performance testing, app performance testing, web performance testing, website load testing and performance tuning, go through this presentation and gear up for the upcoming ones.
The document discusses Effektives Consulting's performance engineering portfolio, which includes user experience and web performance management, cloud-based commerce recommendations, zero-touch deployments, and emerging augmented reality applications. It focuses on web performance management, covering infrastructure capacity planning, a two-stage performance testing approach using both on-premise and cloud-based resources, application profiling, and reporting.
The document discusses cost based performance modeling as a way to deal with uncertainties in system performance. It involves creating a model based on measuring the costs of individual transactions to map system behavior to resource requirements. Transactions represent units of work for the system. The costs of transactions are measured by testing them at different rates. If transactions are linear with rate, the total resource usage can be calculated by summing costs. Non-linearities require a different approach. The model allows estimating performance for various scenarios without exhaustive testing.
Harnessing the Cloud for Performance Testing- Impetus White PaperImpetus Technologies
For Impetus’ White Papers archive, visit- http://www.impetus.com/whitepaper
The paper provides insights on the various benefits of using the Cloud for Performance Testing as well as how to address the various challenges associated with this approach.
Performance testing ensures that a software system performs well under expected workloads over time by testing aspects like response time, speed, scalability, and stability. It is done to identify bottlenecks and limitations so they can be addressed before public release. There are different types of performance testing like load testing, stress testing, endurance testing and volume testing that evaluate how an application handles different types and levels of usage. The performance testing process involves planning tests, setting up environments, executing tests, verifying results, and retesting as needed.
Performance testing ensures that a software system performs well under expected workloads over time by testing aspects like response time, speed, scalability, and stability. It is done to identify bottlenecks and limitations so they can be addressed before public release. There are different types of performance testing like load testing, stress testing, endurance testing and volume testing that check how an application handles different types and levels of usage. The performance testing process involves planning tests, setting up environments, executing tests, verifying results, and retesting as needed.
Cloud-enabled Performance Testing vis-à-vis On-premise- Impetus White PaperImpetus Technologies
For Impetus’ White Papers archive, visit- http://www.impetus.com/whitepaper
In this white paper we talk about how you can integrate on-premise and cloud-based options to build an effective performance testing strategy.
Performance testing Web Application - A complete GuideTestingXperts
Application Performance testing validates various factors and checks applications to ensure and maintain their reliability and scalability. Leverage TestingXperts Performance testing services to enhance your application performance and such high performing apps are bound to drive more traffic and help spread your brand
The document introduces in-container testing tools to improve testing of gCube services. It notes that current testing practices are manual, not repeatable, slow, and not integrated into build processes. Tests are executed in containers separate from the code, causing reproducibility issues. The proposed in-container testing tools would run tests inside the same container as the code, improving speed, repeatability, and integration with development workflows. This would help discover bugs earlier and improve system testing approaches.
Similar to Reinventing Performance Testing. CMG imPACt 2016 paper (20)
The document discusses context-driven performance testing. It advocates for early performance testing using exploratory and continuous testing approaches in agile development. Testing should be done at the component level using various environments like cloud, lab, and production. Load can be generated through record and playback, programming, or using production workloads. Defining a performance testing strategy involves determining risks, appropriate tests, timing, and processes based on the project context. The strategy is part of an overall performance engineering approach.
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. It is important to see a bigger picture beyond stereotypical, last-moment load testing. There are multiple dimensions of load testing: environment, load generation, testing approach, life-cycle integration, feedback and analysis. This paper discusses these dimensions and how load testing tools support them.
Tools of the Trade: Load Testing - Ignite session at WebPerfDays NY 14Alexander Podelko
Tools of the Trade: Load Testing - an Ignite session at WebPerfDays NY 2014. Some consideration about load testing and selecting load testing tools - as much as could be squeezed into 5 min / 20 slides.
Load Testing: See a Bigger Picture, ALM Forum, 2014Alexander Podelko
The document discusses different approaches to load testing, including load generation techniques like record and playback, real users, programming, and mixed approaches. It also covers load testing environments such as lab vs cloud vs production environments. Finally, it provides an overview of various load testing tools and how their suitability depends on factors like the technologies being tested and testing needs. The key message is that load testing is an important part of performance risk mitigation but requires choosing the right approach and tools based on the specific testing situation.
This document summarizes a presentation on web performance given by Alexander Podelko at WebPerfDays New York 2013. The presentation covered performance basics, the importance of considering both front-end and back-end performance, and different approaches to performance risk mitigation. However, the presenter argued that load testing is still needed to complement these approaches, as it is the only way to verify that a system can handle expected load levels and identify potential multi-user issues. Load testing was discussed in more detail, with examples of how it can be used for performance optimization and debugging. The presentation concluded by emphasizing the importance of taking a holistic, end-to-end view of performance.
The document provides a short history of performance engineering, beginning in the 1960s with the introduction of instrumentation tools for mainframe systems and the first studies of human response times. Key developments include the establishment of the performance engineering community in the 1970s, the first commercial performance analysis tools and distributed computing in the late 1970s, and the publication of early books on software performance engineering and applying existing expertise to web performance in the 1990s. The history shows that performance has been an ongoing concern across different computing paradigms, with new challenges arising with each new technology.
Performance Requirements: CMG'11 slides with notes (pdf)Alexander Podelko
Performance requirements should be tracked throughout a system's entire lifecycle, from inception through design, development, testing, operations, and maintenance. However, different groups involved at each stage use their own terminology and metrics, making performance requirements confusing. The document aims to provide a holistic view of performance requirements by discussing key metrics like throughput, response time, and concurrency used across the lifecycle. It also addresses issues like ensuring requirements are defined consistently regardless of changing workloads or system optimizations.
This document discusses performance assurance for packaged applications like Oracle Enterprise Performance Management. It outlines key steps for performance assurance including defining requirements, designing for best practices, verifying performance during development, testing, and monitoring production. Performance testing is recommended to mitigate risks, though it requires realistic loads and careful scripting. A top-down approach is advocated for performance troubleshooting, examining hardware, configuration, design and logs before suspecting product issues. Examples of common performance problems and their solutions are also provided.
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. Still it is important to see a bigger picture beyond stereotypical last-moment load testing. There are different ways to create load; a single approach may not work in all situations. Many tools allow you to use different ways of recording/playback and programming. This session discusses pros and cons of each approach, when it can be used and what tool's features we need to support it.
Performance Requirements: the Backbone of the Performance Engineering ProcessAlexander Podelko
Performance requirements should to be tracked from system's inception through its whole lifecycle including design, development, testing, operations, and maintenance. They are the backbone of the performance engineering process. However different groups of people are involved in each stage and they use their own vision, terminology, metrics, and tools that makes the subject confusing when you go into details. The presentation discusses existing issues and approaches in their relationship with the performance engineering process.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency – ScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Reinventing Performance Testing. CMG imPACt 2016 paper
Reinventing Performance Testing
Alexander Podelko
Oracle
Load testing is an important part of the performance engineering process. However the
industry is changing and load testing should adjust to these changes - a stereotypical,
last-moment performance check is not enough anymore. There are multiple aspects of
load testing - such as environment, load generation, testing approach, life-cycle
integration, feedback and analysis - and none remains static. This presentation discusses
how performance testing is adapting to industry trends to remain relevant and bring value
to the table.
Performance Engineering Puzzle
There are many discussions about performance, but they often concentrate on only one specific facet of it. The main problem with that is that performance is the result of every design and implementation detail, so you can't ensure performance by approaching it from a single angle only.
There are different approaches and techniques to alleviate performance risks, such as:
Software Performance Engineering (SPE). Everything that helps in selecting appropriate architecture and
design and proving that it will scale according to our needs. Including performance patterns and anti-patterns,
scalable architectures, and modeling.
Single-User Performance Engineering. Everything that helps to ensure that single-user response times, the
critical performance path, match our expectations. Including profiling, tracking and optimization of single-user
performance, and Web Performance Optimization (WPO).
Instrumentation / Application Performance Management (APM) / Monitoring. Everything that provides insights into what is going on inside the working system and tracks down performance issues and trends.
Capacity Planning / Management. Everything that ensures that we will have enough resources for the system.
Including both people-driven approaches and automatic self-management such as auto-scaling.
Load Testing. Everything used for testing the system under any multi-user load (including all other variations of
multi-user testing, such as performance, concurrency, stress, endurance, longevity, scalability, reliability, and
similar).
Continuous Integration / Delivery / Deployment. Everything allowing quick deployment and removal of
changes, decreasing the impact of performance issues.
And, of course, all the above do not exist in a vacuum, but on top of high-priority functional requirements and resource constraints (including time, money, skills, etc.).
Every approach or technique mentioned above somewhat mitigates performance risks and improves the chances that the system will perform up to expectations. However, none of them guarantees that. And, moreover, none completely replaces the others, as each one addresses different facets of performance.
A Closer Look at Load Testing
To illustrate the importance of each approach, let's look at load testing. With the recent trends towards agile development, DevOps, lean startups, and web operations, the importance of load testing sometimes gets questioned. Some (not many) openly say that they don't need load testing, while others still pay lip service to it – but just never get there. In the more traditional corporate world we still see performance testing groups, and most important systems get load tested before deployment. So what does load testing deliver that other performance engineering approaches don't?
There are always risks of crashing a system or experiencing performance issues under heavy load – and the only
way to mitigate them is to actually test the system. Even stellar performance in production and a highly scalable
architecture don't guarantee that it won't crash under a slightly higher load.
Fig.1. Typical response time curve.
A typical response time curve is shown in Fig.1, adapted from Andy Hawkes' post discussing the topic [HAWK13]. As can be seen, a relatively small increase in load near the knee of the curve may kill the system – the system becomes unresponsive (or crashes) under the peak load.
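To make the knee behavior concrete, here is a toy illustration (not from the paper) using a simple M/M/1 queueing model, where response time R = S / (1 - U) for service time S and utilization U; the numbers are illustrative only.

# Toy M/M/1 model: response time R = S / (1 - U).
# Illustrates why a small load increase near the knee "kills" the system.
service_time = 0.1  # seconds per request (assumed)

for utilization in (0.50, 0.70, 0.80, 0.90, 0.95, 0.98):
    response_time = service_time / (1.0 - utilization)
    print(f"U={utilization:.0%}  R={response_time:.2f}s")

# Going from 90% to 98% utilization multiplies response time
# by a factor of five - the cliff shown in Fig.1.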
However, load testing doesn't completely guarantee that the system won't crash: for example, the real-life workload may differ from what was tested (so you need to monitor the production system to verify that your synthetic load is close enough). But load testing significantly decreases the risk if done properly (and, of course, may be completely useless if not done properly – so it usually requires at least some experience and qualifications).
Another important value of load testing is checking how changes impact multi-user performance. The impact on multi-user performance is not usually proportional to what you see with single-user performance and often may be counterintuitive; sometimes a single-user performance improvement may lead to multi-user performance degradation. And the more complex the system is, the more likely exotic multi-user performance issues are to pop up.
This can be seen in Fig.2, where the black lines represent better single-user performance (lower on the left side of the graph) but a worse multi-user load: the knee happens under a lower load and the system won't be able to reach the load it supported before.
Fig.2. Single-user performance vs. multi-user performance.
Another major value of load testing is providing a reliable and reproducible way to apply multi-user load needed
for performance optimization and performance troubleshooting. You apply exactly the same synthetic load and
see if the change makes a difference. In most cases you can't do that in production, where the load is changing – so you never know if the result comes from your code change or from a change in the workload (except, maybe, the rather rare case of very homogeneous and very manageable workloads, when you may apply a very precisely measured portion of the real workload). And, of course, a reproducible synthetic workload significantly simplifies debugging
and verification of multi-user issues.
Moreover, with the existing trends toward system self-regulation (such as auto-scaling or changing the level of service depending on load), load testing is needed to verify that functionality. You need to apply heavy load to see how auto-scaling will work. So load testing becomes a way to test the functionality of the system, blurring the traditional division between functional and nonfunctional testing.
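As a hedged illustration of testing auto-scaling functionality, the following sketch uses the open-source Locust tool to ramp load in steps and hold each plateau long enough for the auto-scaler to react; the endpoint, step sizes, and durations are assumptions for illustration, not a prescription.

from locust import HttpUser, LoadTestShape, between, task

class SiteUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def front_page(self):
        self.client.get("/")  # hypothetical endpoint

class StepRamp(LoadTestShape):
    """Step the load up and hold each plateau so the auto-scaler
    has time to react; correlate instance counts and response
    times from monitoring with each plateau."""
    step_users = 200   # virtual users added per step (assumed)
    step_time = 600    # seconds per plateau (assumed)
    max_steps = 5

    def tick(self):
        step = int(self.get_run_time() // self.step_time) + 1
        if step > self.max_steps:
            return None  # stop the test
        return (step * self.step_users, 50)  # (target users, spawn rate)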
Changing Dynamic
It may be possible to survive without load testing, using other ways to mitigate performance risks, if the cost of performance issues and downtime is low. However, that actually means that you use your customers to test your system, addressing only those issues that pop up; this approach becomes risky once performance and downtime start to matter.
The question is discussed in detail in Load Testing at Netflix: Virtual Interview with Coburn Watson [PODE14a]. As explained there, Netflix was very successful in using canary testing in some cases instead of load testing. Actually, canary testing is performance testing that uses real users to create load instead of creating synthetic load with a load testing tool. It makes sense when 1) you have very homogeneous workloads and can control them precisely; 2) potential issues have minimal impact on user satisfaction and company image, and you can easily roll back the changes; 3) you have a fully parallel and scalable architecture. That was the case with Netflix – they traded the need to generate (and validate) workload for the possibility of minor issues and minor load variability. But the further away you are from these conditions, the more questionable such a practice becomes.
Yes, the other ways to mitigate performance risks mentioned above definitely decrease performance risks compared to situations where nothing is done about performance at all. And, perhaps, they may be more efficient than the old stereotypical way of doing load testing – running a few tests at the last moment before rolling out the system into production without any instrumentation. But they still leave risks of crashing and performance degradation under multi-user load. So you actually need a combination of different performance engineering approaches to mitigate performance risks – but the exact mix depends on your system and your goals. Blindly copying approaches used, for example, by social networking companies onto financial or e-commerce systems may be disastrous.
As the industry changes with all the modern trends, the components of performance engineering and their interactions are changing too. Still, it doesn't look like any particular one is going away. Some approaches and techniques need to be adjusted to new realities – but there is nothing new in that; we have seen such changing dynamics throughout the whole history of performance engineering.
Historical View
It is interesting to look at how handling performance has changed with time. Performance engineering probably went beyond single-user profiling when mainframes started to support multitasking, forming as a separate discipline in the 1960s. Workloads were mainly batch, with sophisticated ways to schedule and ration consumed resources, as well as pretty powerful OS-level instrumentation that allowed tracking down performance issues. The cost of mainframe resources was high, so there were capacity planners and performance analysts to optimize mainframe usage.
Then the paradigm changed to client-server and distributed systems. Available operating systems had almost no instrumentation or workload management capabilities, so load testing became almost the only remedy, in addition to system-level monitoring, for handling multi-user performance. Deploying across multiple machines was more difficult and the cost of rollback was significant, especially for Commercial Off-The-Shelf (COTS) software that might be deployed by thousands of customers. Load testing became probably the main way to ensure performance of distributed systems, and performance testing groups became the centers of performance-related activities in many organizations.
While the cloud looks quite different from mainframes, there are many similarities between them [EIJK11], especially from the performance point of view: availability of computer resources to be allocated, an easy way to evaluate the cost associated with these resources and implement chargeback, isolation of systems inside a larger pool of resources, and easier ways to deploy a system and pull it back if needed without impacting other systems.
However, there are notable differences, and they make managing performance in the cloud more challenging. First of all, there is no instrumentation on the OS level, and even resource monitoring becomes less reliable, so all instrumentation should be on the application level. Second, systems are not completely isolated from the performance point of view and may impact each other (even more so when we talk about containers). And, of course, we mostly have multi-user interactive workloads, which are difficult to predict and manage. That means that such performance risk mitigation approaches as APM, load testing, and capacity management are very important in the cloud.
So it doesn't look like the need for particular performance risk mitigation approaches, such as load testing or capacity planning, is going away. Even in the case of web operations, we will probably see load testing coming back as soon as systems become more complex and performance issues start to hurt business. Still, the dynamic of using different approaches is changing (as it has throughout the whole history of performance engineering). There will probably be less need for "performance testers" limited only to running tests – due to better instrumentation, APM tools, continuous integration, resource availability, etc. – but I'd expect more need for performance experts who are able to see the whole picture using all available tools and techniques.
Let’s further consider how today’s industry trends impact load testing.
Cloud
Cloud and cloud services significantly increased the number of options to configure for both the system under test and the load generators. Cloud practically eliminated the lack of appropriate hardware as a reason for not doing load testing and significantly decreased the cost of large-scale load tests, as it may provide a large amount of resources for a relatively short period of time.
We still have the challenge of making the system under test as close to the production environment as possible (in all aspects – hardware, software, data, configuration). One interesting new trend is testing in production. Not that it is something new by itself; what is new is that it is advocated as a preferable and safe (if done properly) way of doing performance testing. As systems become so huge and complex that it is extremely difficult to reproduce them in a test setup, people are more ready to accept the issues and risks related to using the production site for testing.
If we create a test system, the main challenge is to make it as close to the production system as possible. If we can't replicate it, the issue becomes projecting results to the production environment. While that is still an option – there are mathematical models that allow making such a projection in most cases – the further the test system is from the production system, the riskier and less reliable such projections become.
There have been many discussions about different deployment models. Options include traditional internal (and external) labs; cloud as 'Infrastructure as a Service' (IaaS), when some parts of the system or everything are deployed there; and cloud as 'Software as a Service' (SaaS), when vendors provide a load testing service. There are advantages and disadvantages to each model. Depending on the specific goals and the systems to test, one deployment model may be preferred over another.
For example, to see the effect of a performance improvement (performance optimization), an isolated lab environment may be a better option, allowing you to see even small variations introduced by a change. To load test the whole production environment end-to-end, just to make sure that the system will handle the load without any major issues, testing from the cloud or a service may be more appropriate. To create a production-like test environment without going bankrupt, moving everything to the cloud for periodic performance testing may be a solution.
For comprehensive performance testing, you probably need to use several approaches - for example, lab testing
(for performance optimization to get reproducible results) and distributed, realistic outside testing (to check real-
life issues you can't simulate in the lab). Limiting yourself to one approach limits the risks you will mitigate.
Scale may also be a serious consideration. When you have only a few users to simulate, it is usually not a problem. The more users you need to simulate, the more important the right tool becomes. Tools differ drastically in how many resources they need per simulated user and how well they handle large volumes of information. This may differ significantly even for the same tool, depending on the protocol used and the specifics of your script. As soon as you need to simulate thousands of users, it may become a major problem. For a very large number of users, some automation – like automatic creation of a specified number of load generators across several clouds – may be very handy. Cloud services may be another option here.
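As a sketch of such automation (under the assumption of AWS EC2 and the boto3 library; the AMI and instance type are placeholders, not from the paper), creating a batch of load generators could look like this:

import boto3

def launch_load_generators(count,
                           ami_id="ami-0123456789abcdef0",  # placeholder image
                           instance_type="c5.xlarge"):       # assumed sizing
    """Start `count` EC2 instances tagged as load generators."""
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=ami_id,
        MinCount=count,
        MaxCount=count,
        InstanceType=instance_type,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "load-generator"}],
        }],
    )
    return [i["InstanceId"] for i in response["Instances"]]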
Agile Development
Agile development eliminates the main problem of traditional development: that you need a working system before you may test it, so performance testing happened at the last moment. While it was always recommended to start performance testing earlier, there were usually rather few activities you could do before the system was ready. Now, with agile development, we get a major "shift left", which indeed allows starting testing early.
In practice it is not too straightforward. Practical agile development is struggling with performance in general. Agile
methods are oriented toward breaking projects into small tasks, which is quite difficult to do with performance (and
many other non-functional requirements) – performance-related activities usually span the whole project.
There is no standard approach to specifying performance requirements in agile methods. Mostly it is suggested to present them as user stories or as constraints. The difference between the user-story and constraint approaches is not in the performance requirements per se, but in how to address them during the development process. The point of the constraint approach is that user stories should represent finite, manageable tasks, while performance-related activities can't be handled as such because they usually span multiple components and iterations. Those who suggest using user stories address that concern in another way – for example, by separating the cost of initial compliance and the cost of ongoing compliance [HAZR11].
And practical agile development struggles with performance testing in particular. Theoretically it should be rather straightforward: every iteration you have a working system and know exactly where you stand with the system's performance. You shouldn't wait until the end of the waterfall process to figure out where you are – on every iteration you can track your performance against requirements and see the progress (making adjustments for what is already implemented and what is not yet). Clearly this is supposed to make the whole performance engineering process easier and to solve the main problem of the traditional approach – that the system should be ready for performance testing (so it happened very late in the process, when the cost of fixing the issues found is very high).
From the agile development side, the problem is that, unfortunately, it doesn't always work this way in practice. So such notions as "hardening iterations" and "technical debt" get introduced. Although it is probably the same old problem: functionality gets priority over performance (which is somewhat understandable: you first need some functionality before you can talk about its performance). So performance-related activities slip toward the end of the project, and the chance to implement a proper performance engineering process built around performance requirements is missed.
From the performance testing side the problem is that performance engineering teams don’t scale well, even
assuming that they are competent and effective. At least not in their traditional form. They work well in traditional
corporate environments where they check products for performance before release, but they face challenges as
soon as we start to expand the scope of performance engineering (early involvement, more
products/configurations/scenarios, etc.). And agile projects, where we need to test the product each iteration or
build, expose the problem through an increased volume of work to do.
Just to avoid misunderstanding: I am a strong supporter of having performance teams, and I believe it is the best approach to building a performance culture. Performance is a special area, and performance specialists should have an opportunity to work together to grow professionally. The details of organizational structure may vary, but a center of performance expertise (formal or informal) should exist. The only point made here is that, while the approach works fine in traditional environments, it needs major changes in organization, tools, and skills when the scope of performance engineering is extended (as in the case of agile projects).
The remedies usually recommended are automation and making performance everyone's job (full immersion) [CRIS09, BARB11]. However, they haven't yet developed into mature practices and will probably vary much more depending on context than the traditional approach.
Automation here means not only using tools (in performance testing we almost always use tools), but automating the whole process, including setting up the environment, running tests, and reporting / analyzing results. Historically, performance testing automation was almost non-existent (at least in traditional environments). Performance testing automation is much more difficult than, for example, functional testing automation. Setups are much more complicated. The list of possible issues is long. Results are complex (not just pass/fail). It is not easy to compare two result sets. So it is definitely much more difficult and may require more human intervention. And, of course, changing interfaces is a major challenge – especially when recording is used to create scripts, as it is difficult to predict whether product changes will break the scripts.
The cost of performance testing automation is high. You need to know the system well enough to make the automation meaningful. Automation for a brand-new system doesn't make much sense – the overheads are too high. So there was almost no automation in traditional environments (with testing at the end using a record/playback tool). When you test the system once in a while before the next major release, the chances to re-use your artifacts are low.
It is the opposite when the same system is tested again and again (as it should be in agile projects): it makes sense to invest in setting up automation. That rarely happened in traditional environments – even if you test each release, releases are far apart and the differences between them prevent re-using the artifacts (especially with recorded scripts – APIs, for example, are usually more stable). So demand for automation was rather low and tool vendors didn't pay much attention to it. The situation is changing – we may see more automation-related features in load testing tools soon.
While automation will take a significant role in the future, it addresses only one side of the challenge. The other side of the agile challenge is usually left unmentioned. The blessing of agile development – allowing us to test the system early – highlights that early testing needs another mindset and another set of skills and tools. Performance testing of new systems is agile and exploratory in itself and can't be replaced by automation (well, at least not in the foreseeable future). Automation will complement it – together with additional input from development, offloading performance engineers from routine tasks not requiring sophisticated research and analysis. But testing early – which brings the most benefit by identifying problems early, when the cost of fixing them is low – does require research and analysis; it is not a routine activity and can't be easily formalized.
It is similar to functional testing, where both automated regression testing and exploratory testing are needed [BACH16] – with the difference that tools are used in performance testing in any case, and setting up continuous performance testing is much newer and more challenging.
The problem is that early performance testing requires a mentality change from simplistic "record/playback" performance testing occurring late in the product life-cycle to a performance engineering approach starting early in the product life-cycle. You need to translate "business functions" performed by the end user into component/unit-level usage, and end-user requirements into component/unit-level requirements. You need to go from the record/playback approach to utilizing programming skills to generate the workload and create stubs to isolate the component from other parts of the system. You need to go from "black box" performance testing to "grey box" testing, understanding the architecture of the system and how your load impacts it.
The concept of exploratory performance testing is still rather alien. But the notion of exploring is much more important for performance testing than for functional testing. The functionality of a system is usually more or less defined (whether it is well documented is a separate question), and testing boils down to validating that it works properly. In performance testing, you won't have a clue how the system will behave until you try it. Having requirements – which in most cases are goals you want your system to meet – doesn't help you much here, because actual system behavior may not even be close to them. It is rather a performance engineering process (with tuning, optimization, troubleshooting, and fixing multi-user issues) eventually bringing the system to the proper state, rather than just testing.
On the testing-approach dimension, the opposite of exploratory testing is regression testing. We want to make sure that we have no regressions as we modify the product – and we want to make that check quick and, if possible, automatic. As soon as we get to an iterative development process, where the product is changing all the time, we need to verify that there is no regression all the time. It is a very important part of the continuum without which your testing doesn't quite work: you will miss regressions again and again, going through the agony of tracing them down in real time. Automated regression testing becomes a must as soon as we get to iterative development, where we need to test each iteration.
So we have a continuum from regression testing to exploratory testing, with traditional load testing being just a dot somewhere in the middle of that dimension. Which approach to use (or, more exactly, which combination of approaches) depends on the system. When the system is completely new, it will be mainly exploratory testing. If the system is well known and you need to test it again and again for each minor change, it will be regression testing – and this is where you can benefit from automation (complemented by exploratory testing of new functional areas, later added to the regression suite as their behavior becomes well understood).
If we see the continuum this way, the question of which kind of testing is better looks meaningless. You need to use the right combination of approaches for your system in order to achieve better results. Seeing the whole testing continuum between regression and exploratory testing should help in understanding what should be done.
Continuous Integration
In more and more cases, performance testing should not be just an independent step of the software development life-cycle, where you get the system shortly before release. In agile development / DevOps environments it should be interwoven with the whole development process. There are no easy answers here that fit all situations. While agile development and DevOps have become mainstream nowadays, their integration with performance testing is just making its first steps.
Integration support becomes increasingly important as we start to talk about continuous integration (CI) and agile methodologies. Until recently, while some vendors claimed their load testing tools fit agile processes better, it usually just meant that the tool was a little easier to handle (and, unfortunately, often simply because there was not much functionality offered).
What makes agile projects really different is the need to run a large number of tests repeatedly, resulting in the need for tools to support performance testing automation. The situation started to change recently as agile support became the main theme in load testing tools [LOAD14]. Several tools have announced integration with continuous integration servers (such as Jenkins or Hudson). While the initial integration may be minimal, it is definitely an important step toward real automation support.
It doesn't look like we can have standard solutions here, as agile and DevOps approaches differ significantly, and proper integration of performance testing can't be done without considering such factors as the development and deployment process, the system, the workload, and the ability to automate and automatically analyze results.
The continuum here runs from old traditional load testing (which basically means no real integration: it is a step in the project schedule to be started as soon as the system is ready, but otherwise it is executed separately as a sub-project) to full integration into CI, where tests are run and analyzed automatically for every change in the system.
As noted above, automation means not only using tools, but automating the whole process, including setting up the environment, running tests, and reporting / analyzing results. However, "full performance testing automation" doesn't look like a probable option in most cases. Automation in performance testing helps with finding regressions and checking against requirements only – and it should fit the CI process (being reasonable in length and in the amount of resources required). So large-scale, large-scope, and long-running tests would probably not fit, nor would all kinds of exploratory tests. What is probably needed is a combination of shorter automated tests inside CI with periodic larger / longer tests outside of (or, maybe, in parallel to) the critical CI path, as well as exploratory tests.
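A minimal sketch of such an automated regression gate, assuming per-transaction 90th-percentile response times are exported to JSON files (the file names and threshold are illustrative, not from the paper):

import json
import sys

REGRESSION_TOLERANCE = 1.15  # fail the build if >15% slower (assumed)

def check_regressions(baseline_file, results_file):
    """Compare current p90 response times against a stored baseline."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    with open(results_file) as f:
        results = json.load(f)
    failures = []
    for transaction, current_p90 in results.items():
        base_p90 = baseline.get(transaction)
        if base_p90 and current_p90 > base_p90 * REGRESSION_TOLERANCE:
            failures.append(f"{transaction}: {base_p90:.0f}ms -> {current_p90:.0f}ms")
    return failures

if __name__ == "__main__":
    failures = check_regressions("baseline_p90.json", "latest_p90.json")
    if failures:
        print("Performance regressions detected:")
        print("\n".join(failures))
        sys.exit(1)  # non-zero exit fails the CI step (e.g., in Jenkins)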
As already mentioned, cloud integration and support for new technologies are important for integration. Cloud integration – including automated deployment to public clouds, private cloud automation, and cloud services – simplifies deployment automation. Support for new technologies minimizes the amount of manual work needed.
New Architectures
Cloud seriously impacts system architectures, and that has a lot of performance-related consequences.
First, we have a shift to centrally managed systems. 'Software as a Service' (SaaS) systems are basically centrally managed systems with multiple tenants/instances. Mitigating performance risks moves to the SaaS vendor. On one side, this makes it easier to monitor and update / roll back systems, which lowers performance-related risks. On the other side, you get much more sophisticated systems, where every issue potentially impacts a large number of customers, thus increasing performance-related risks.
To take full advantage of the cloud, cloud-specific features such as auto-scaling should be implemented. Auto-scaling is often presented as a panacea for performance problems, but, even if it is properly implemented (which, of course, is better to be tested), it just assigns a price tag to performance. It will allocate resources automatically – but you need to pay for them. The question then becomes how efficient the system is – any performance improvement results in immediate savings.
Another major trend is using multiple third-party components and services, which may not be easy to properly incorporate into testing. The answer to this challenge is service virtualization, which allows simulating real services during testing without actually accessing them.
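A minimal sketch of the idea, using only the Python standard library: a local stub that stands in for a third-party HTTP service and adds realistic latency (the endpoint, payload, and timing are illustrative assumptions).

import json
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class ThirdPartyStub(BaseHTTPRequestHandler):
    """Stands in for an external service during load tests."""
    def do_GET(self):
        time.sleep(max(0.0, random.gauss(0.200, 0.050)))  # ~200ms simulated latency
        body = json.dumps({"status": "ok", "quote": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8081), ThirdPartyStub).serve_forever()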
Cloud and virtualization have triggered the appearance of dynamic, auto-scaling architectures, which significantly impact getting and analyzing feedback. The system's configuration is not a given anymore and often can't be easily mapped to hardware. As already mentioned, performance testing is rather a performance engineering process (with tuning, optimization, troubleshooting, and fixing multi-user issues) eventually bringing the system to the proper state, rather than just testing. And the main feedback you get during your testing is the result of monitoring your system (such as response times, errors, and the resources your system consumes).
Dynamic architectures represent a major challenge for both monitoring and analysis. They make it difficult to analyze results, as the underlying system is changing all the time. Even before, analysis often went beyond comparing results against goals – for example, when the system under test didn't match the production system exactly, or when tests didn't represent the full projected load. It becomes an even more serious challenge when the configuration is dynamic – for both monitoring and analysis. Another challenge arises when tests are part of CI, where all monitoring and analysis should be done automatically. The more complex the system, the more important feedback and analysis become. The ability to analyze monitoring results and test results together helps a lot.
Traditionally, monitoring was done on the system level. Due to virtualization, system-level monitoring doesn't help much anymore and may be misleading – so getting information from the application (via, for example, JMX) and database servers becomes very important. Many load testing tools have announced integration with Application Performance Management / Monitoring (APM) tools, such as AppDynamics, New Relic, or Dynatrace. If using such tools is an option, it definitely opens new opportunities to see what is going on inside the system under load and what needs to be optimized. One thing to keep in mind is that older APM tools and profilers may not be appropriate to use under load due to the high overhead they introduce.
With really dynamic architectures, we have a great challenge here: to discover the configuration automatically, collect all the needed information, and then properly map the collected information and results onto the changing configuration and system components in a way that highlights existing and potential issues – and, potentially, makes automatic adjustments to avoid them. That would require very sophisticated algorithms (including machine learning) and would potentially create real Application Performance Management (today the word "Management" is rather a promise than the reality).
In addition to new challenges in monitoring and analysis, virtualized and dynamic architectures open a new application for performance testing: testing whether the system dynamically changes under load in the way it is supposed to change.
New Technologies
Quite often the whole area of load testing is reduced to pre-production testing using protocol-level recording/playback. Sometimes this even leads to conclusions like "performance testing hitting the wall" [BUKSH12] just because load generation may be a challenge. While protocol-level recording/playback was (and still is) the mainstream approach to testing applications, it is just one type of load testing using only one type of load generation; such an equivalency is a serious conceptual mistake, dwarfing load testing and undermining performance engineering in general [SMITH02].
Well, the time when all communication between client and server used simple HTTP is in the past, and the trend is toward more and more sophisticated interfaces and protocols. While load generation is rather a technical issue, it is the basis of load testing – you can't proceed until you figure out a way to generate load. As a technical issue, it depends heavily on the tools and the functionality they support.
There are three main approaches to workload generation [PODE12] and every tool may be evaluated on which of
them it supports and how.
Protocol-level recording/playback
This is the mainstream approach to load testing: recording communication between two tiers of the system and playing back the automatically created script (usually, of course, after proper correlation and parameterization). As no client-side activities are involved, it allows the simulation of a large number of users. Such a tool can only be used if it supports the specific protocol used for communication between the two tiers of the system.
Fig.3. Record and playback approach, protocol level.
With quick internet growth and the popularity of browser-based clients, most products support only HTTP or a few select web-related protocols. To the author's knowledge, only HP LoadRunner and Borland SilkPerformer try to keep up with support for all popular protocols (other products claiming support for different protocols usually use only UI-level recording/playback, described below). Therefore, if you need to record a special protocol, you will probably end up looking at these two tools (unless you find a niche tool supporting your specific protocol). This somewhat explains the popularity of LoadRunner at large corporations, because they usually use many different protocols. The level of support for specific protocols also differs significantly. Some HTTP-based protocols are extremely difficult to correlate if there is no built-in support, so it is recommended to look for that kind of specific support if such technologies are used. For example, Oracle Application Testing Suite may have better support for Oracle technologies (especially new ones such as Oracle Application Development Framework, ADF).
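To make the protocol-level approach concrete, here is a minimal hand-written equivalent of a recorded-and-edited HTTP script, showing correlation (capturing a dynamic token) and parameterization (varying user data); the URLs and field names are hypothetical, not tied to any specific tool.

import requests

def run_virtual_user(base_url, username, password):
    session = requests.Session()

    # Correlation: capture a dynamic value from the login response
    # and reuse it in subsequent requests.
    login = session.post(f"{base_url}/login",
                         data={"user": username, "password": password})
    token = login.json()["csrf_token"]  # hypothetical response field

    # Parameterization: the search term would vary per virtual user.
    session.get(f"{base_url}/search",
                params={"q": "laptop"},
                headers={"X-CSRF-Token": token})

A load testing tool (or a simple harness of threads or processes) would then run many such virtual users concurrently and collect their timings.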
UI-level recording/playback
This option has been available for a long time, but it is much more viable now. For example, it was possible to use Mercury/HP WinRunner or QuickTest Professional (QTP) scripts in load tests, but a separate machine was needed for each virtual user (or at least a separate terminal session). This drastically limited the load level that could be achieved. Other known options were, for example, the Citrix and Remote Desktop Protocol (RDP) protocols in LoadRunner – which were always the last resort when nothing else was working, but were notoriously tricky to play back [PERF].
Fig.4. Record and playback approach, GUI users.
New UI-level tools for browsers, such as Selenium, have extended the possibilities of the UI-level approach, allowing multiple browsers to run per machine (limiting scalability only by the resources available to run browsers). Moreover, UI-less browsers, such as HtmlUnit or PhantomJS, require significantly fewer resources than real browsers.
Fig.5. Record and playback approach, browser users.
Today there are multiple tools supporting this approach, such as Appvance, which directly harnesses Selenium and HtmlUnit for load testing; or the LoadRunner TruClient protocol and SOASTA CloudTest, which use proprietary solutions to achieve low-overhead playback. Nevertheless, questions of supported technologies, scalability, and timing accuracy remain largely undocumented, so the approach requires evaluation in every specific case.
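A minimal sketch of the UI-level approach, with Selenium driving a headless browser (the URL and element are illustrative); each load generator machine runs as many such users as its resources allow.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # no visible browser window
driver = webdriver.Chrome(options=options)

start = time.perf_counter()
driver.get("https://example.com/")      # illustrative target
driver.find_element(By.TAG_NAME, "h1")  # confirm the page rendered
elapsed = time.perf_counter() - start
print(f"Page loaded in {elapsed:.2f}s")
driver.quit()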
Programming
There are cases when recording can't be used at all, or only with great difficulty. In such cases, making API calls from a script may be an option. Often it is the only option for component performance testing. Other variations of this approach are web services scripting and the use of unit testing scripts for load testing. And, of course, there is a need to sequence and parameterize your API calls to represent a meaningful workload. The script is created in whatever way is appropriate, and then either a test harness is created or a load testing tool is used to execute the scripts, coordinate their execution, and report and analyze the results.
Fig.6. Programming API using a load testing tool.
To do this, the tool should have the ability to add code to (or invoke code from) your script. And, of course, if the tool's language is different from the language of your API, you will need to figure out a way to plumb them together. Tools using standard languages such as C (e.g., LoadRunner) or Java (e.g., Oracle Application Testing Suite) may have an advantage here. However, you need to understand all the details of the communication between client and server to use the right sequences of API calls; this is often the challenge.
The importance of API programming increases in agile / DevOps environments, as tests are run often during the development process. In many cases APIs are more stable than GUI or protocol communication – and even if something changes, the changes can usually be localized and fixed, while GUI- or protocol-based scripts often need to be re-created.
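A minimal sketch of the programming approach: the workload is expressed directly as sequenced API calls with per-call timings, and a simple harness runs many such scripted users concurrently (the base URL and endpoints are hypothetical).

import concurrent.futures
import time
import requests

BASE_URL = "https://app.example.com/api"  # hypothetical API

def user_scenario(user_id):
    """One scripted user: a meaningful sequence of API calls with timings."""
    timings = {}
    with requests.Session() as session:
        for name, path in (("login", "/session"),
                           ("list_orders", "/orders"),
                           ("logout", "/session/end")):
            start = time.perf_counter()
            session.get(f"{BASE_URL}{path}")
            timings[name] = time.perf_counter() - start
    return timings

# Simple harness: 50 concurrent scripted users.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(user_scenario, range(50)))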
Special cases
There are special cases that should be evaluated separately, even if they use the same generic approaches listed above. The most prominent special case is mobile technologies. While the existing approaches remain basically the same, there are many details of how these approaches get implemented that need special attention. The level of support for mobile technologies differs drastically: from the very basic ability to record HTTP traffic from a mobile device and play it back against the server, up to end-to-end testing of native mobile applications and providing a "device cloud".
Summary
There are many more ways of doing performance testing than just the old, stereotypical approach of last-minute pre-production performance validation. While none of these ways is completely new, the new industry trends push them into the mainstream.
The industry is rapidly changing (cloud, agile, continuous integration, DevOps, new architectures and technologies) – and to keep up, performance testing should reinvent itself to become a flexible, context- and business-driven discipline. It is not that we just need to find a new recipe – it looks like we will never get to that point again; we need to adjust on the fly to every specific situation to remain relevant.
Performance testing should fully embrace [at last!] early testing (with exploratory approaches and a "shift left" toward performance engineering) as well as agile / iterative development (with regression performance testing and getting to the next level of automation).
Good tools can help you here – and not-so-good tools may limit what you can do. With so many options in performance testing, we can't just rank tools on a simple better/worse scale. A simple tool may work quite well in a particular situation, while a tool that is very good in one situation may be completely useless in another. The value of a tool is not absolute; rather, it is relative to your situation.
References
[BACH16] Bach, J., Bolton, M. A Context-Driven Approach to Automation in Testing. 2016.
http://www.satisfice.com/articles/cdt-automation.pdf
[BARB11] Barber, S. Performance Testing in the Agile Enterprise. STP, 2011.
http://www.slideshare.net/rsbarber/agile-enterprise
[BUKSH12] Buksh, J. Performance Testing is hitting the wall. 2012.
http://www.perftesting.co.uk/performance-testing-is-hitting-the-wall/2012/04/11/
[CRIS09] Crispin, L., Gregory, J. Agile Testing: A Practical Guide for Testers and Agile Teams. Pearson Education,
2009.
[EIJK11] Eijk, P. Cloud Is the New Mainframe. 2011.
http://www.circleid.com/posts/cloud_is_the_new_mainframe
[HAZR11] Hazrati, V. Nailing Down Non-Functional Requirements. InfoQ, 2011.
http://www.infoq.com/news/2011/06/nailing-quality-requirements
[HAWK13] Hawkes, A. When 80/20 Becomes 20/80. 2013.
http://www.speedawarenessmonth.com/when-8020-becomes-2080/
[LOAD14] Load Testing at the Speed of Agile. Neotys White Paper, 2014.
http://www.neotys.com/documents/whitepapers/whitepaper_agile_load_testing_en.pdf
[MOLU14] Molyneaux, I. The Art of Application Performance Testing. O'Reilly, 2014.
[PERF] Performance Testing Citrix Applications Using LoadRunner: Citrix Virtual User Best Practices, Northway
white paper. http://northwaysolutions.com/our-work/downloads
[PERF07] Performance Testing Guidance for Web Applications. 2007.
http://perftestingguide.codeplex.com/
[PODE12] Podelko, A. Load Testing: See a Bigger Picture. CMG, 2012.
[PODE14] Podelko, A. Adjusting Performance Testing to the World of Agile. CMG, 2014.
[PODE14a] Podelko, A. Load Testing at Netflix: Virtual Interview with Coburn Watson. 2014.
http://alexanderpodelko.com/blog/2014/02/11/load-testing-at-netflix-virtual-interview-with-coburn-watson/
[SEGUE05] Choosing a Load Testing Strategy, Segue white paper, 2005.
http://bento.cdn.pbs.org/hostedbento-qa/filer_public/smoke/load_testing.pdf
[SMITH02] Smith, C.U., Williams, L.G. Performance Solutions. Addison-Wesley, 2002.
[STIR02] Stirling, S. Load Testing Terminology. Quality Techniques Newsletter, 2002.
http://blogs.rediff.com/sumitjaitly/2007/06/04/load-test-article-by-scott-stirling/
*All mentioned brands and trademarks are the property of their owners.