Speed: e.g., how long should it take for a response to display after submitting a request?
Scalability: e.g., how many database connections are configured? What happens when you exceed that threshold?
Stability: e.g., how does the application behave while a batch process is running in the background?
Confidence: once the results are in, is the sponsor of this project confident in the behavior of this application under load?
Focus areas:
User Expectations: what is the acceptable response time for a response on the web page?
System Constraints: what concurrency of users causes the application to degrade in performance? What happens if we make a change to the application, architecture, or infrastructure?
Costs: what is the cost to the business if the application is unusable due to slowness? Or, in the worst case, if it crashes?
Myth: Performance testing is done to break the system. Normal load testing is done to understand the behavior of systems under expected load; only stress testing specifically studies the breaking points of systems.
Myth: Performance testing only involves creating and running scripts. The scripting process is important, but it is only a small component of performance testing. The real challenge is determining which tests need to be executed and knowing how to interpret the results to determine where problems or bottlenecks might occur and what the behavior will be.
Myth: Application changes cause only simple refactoring of performance testing scripts. Any change to the UI of a product (especially a web product) will force re-development of scripts; if the changes are drastic, complete re-development may be in order. This is why products need to be in a stable state before undergoing performance testing.
Why can’t you just synchronize some real users to test your application at the same time while recording the response time with a stopwatch? Drawbacks: (1) Measuring response time would depend on the users’ accuracy. (2) It is very hard to synchronize the testing AND repeat the tests. (3) The coordination problem is compounded if you include international users. The time spent attempting to coordinate would outweigh the upfront time it takes to automate the performance testing. Basically, this type of manual testing is not feasible for a system that must support a load and requires repeatable tests.
So what happens with Automated Performance Testing?
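The idea behind automated performance testing is that scripted virtual users replace real ones, generating a consistent, repeatable load while response times are recorded automatically. As a minimal sketch of that concept (not the LoadRunner implementation; the function names and the stand-in transaction are invented for illustration):

```python
import statistics
import threading
import time

def run_virtual_users(transaction, num_users, iterations):
    """Drive `transaction` concurrently from `num_users` simulated users,
    recording each call's response time in seconds."""
    timings = []
    lock = threading.Lock()

    def virtual_user():
        for _ in range(iterations):
            start = time.perf_counter()
            transaction()  # the business transaction under test
            elapsed = time.perf_counter() - start
            with lock:
                timings.append(elapsed)

    users = [threading.Thread(target=virtual_user) for _ in range(num_users)]
    for u in users:
        u.start()
    for u in users:
        u.join()
    return timings

# Stand-in transaction: a real harness would issue an HTTP request instead.
def sample_transaction():
    time.sleep(0.01)

timings = run_virtual_users(sample_transaction, num_users=5, iterations=4)
print(len(timings), round(statistics.mean(timings), 3))
```

Because the load comes from code rather than coordinated people, the same scenario can be re-run after every tuning change and the numbers compared directly.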
To determine the system’s purpose and the activities it is used for, we need to measure or estimate: all possible user activity; how often each activity is performed; all “types” of users; which activities are the most performance intensive; and other user community modeling information. Basically, model “real” users. There are tools in place that can be used to gather user statistics and data, like the site metrics tool NetInsight, which shows how the application is being used by tracking the pages visited. There are also the monitoring tools in BAC, which capture server statistics. Finally, we must solicit requirements from all stakeholders (developers, business, infrastructure, architecture) to determine what performance is acceptable.
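The user-community modeling described above (activity mix, frequency, user types) can be sketched as a weighted workload model. The activity names and weights below are made-up examples standing in for real site-metrics data:

```python
import random
from collections import Counter

# Hypothetical activity mix gathered from site metrics (e.g., page-visit
# counts): each activity and the fraction of real traffic it represents.
activity_mix = {
    "search": 0.50,
    "view_item": 0.30,
    "checkout": 0.15,
    "admin_report": 0.05,
}

def next_activity(rng=random):
    """Pick the next activity for a virtual user so that, over many picks,
    the simulated traffic matches the observed frequencies."""
    activities = list(activity_mix)
    weights = list(activity_mix.values())
    return rng.choices(activities, weights=weights, k=1)[0]

# Simulate 10,000 virtual-user steps and look at the resulting mix.
counts = Counter(next_activity() for _ in range(10_000))
print(counts.most_common(1)[0][0])
```

Driving virtual users from a model like this is what makes the load “realistic” rather than an arbitrary hammering of one transaction.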
Basically, model “real” users: we will take your business transactions and script the navigation. The only way to predict actual user experience (end-to-end response time) is to execute tests using realistic user models. Extrapolating expected performance from incomplete models does not work.
A baseline is the execution, validation, and debugging of the scripts collectively in a test scenario, with a LIMITED NUMBER OF VIRTUAL USERS. These results will be used for comparison with future testing results. Load tests are end-to-end performance tests under anticipated production load. The primary objective is to determine the response times for various time-critical transactions and business processes, and to confirm that they are within documented expectations (or Service Level Agreements, SLAs). The test also measures the capability of the application to function correctly under load by measuring transaction pass/fail/error rates. This is one of the most fundamental load and performance tests and needs to be well understood. Stress tests determine the load under which a system fails, and how it fails. This is in contrast to load testing, which attempts to simulate anticipated load. It is important to know in advance whether a “stress” situation will result in a catastrophic system failure, or whether everything just “goes really slow.” Duration tests (and/or stability tests) run a constant load over a period of 8 to 24 hours. Duration tests are conducted to determine whether an application’s performance degrades over an extended period of time; they may reveal issues such as memory leaks that could affect the application’s performance.
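Judging a load test against documented expectations comes down to simple statistics over the collected response times: a pass rate against the SLA and a high-percentile response time. A minimal sketch, with made-up timings and a hypothetical 2-second SLA:

```python
def evaluate_against_sla(response_times, sla_seconds):
    """Summarize one transaction from a load test: the fraction of samples
    within the SLA and the 90th-percentile (nearest-rank) response time."""
    passed = sum(1 for t in response_times if t <= sla_seconds)
    ordered = sorted(response_times)
    p90 = ordered[max(0, int(len(ordered) * 0.9) - 1)]  # nearest-rank p90
    return {
        "pass_rate": passed / len(response_times),
        "p90": p90,
        "meets_sla": p90 <= sla_seconds,
    }

# Hypothetical timings (seconds) for one business transaction under load,
# checked against an invented 2-second SLA.
timings = [0.8, 1.1, 0.9, 1.4, 2.5, 1.0, 1.2, 0.7, 1.6, 3.1]
summary = evaluate_against_sla(timings, sla_seconds=2.0)
print(summary["pass_rate"], summary["meets_sla"])
```

Here 80% of samples meet the SLA, but the 90th percentile (2.5 s) does not, which is exactly the kind of result that triggers a tuning cycle rather than a sign-off.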
The results from each test iteration will be reviewed with the team to determine whether the requirements have been met. If they are not, the application and servers are tuned and the tests are repeated. After each test execution you will receive performance test results that explain the outcome of the test. At the completion of performance testing the team will receive a complete results document that summarizes the overall test objective, the results found, and the tuning recommendations and modifications implemented.
The tools that we use have the widest application support of any automated performance testing solution on the market. Not only do we support web solutions and ERP/CRM (PeopleSoft applications), but we also support all the common databases, middleware, and legacy solutions. LoadRunner can monitor the entire system, from the OS to the network, the custom or packaged application itself, and even the database.
Interactive slide: see if the audience can name all the reasons we would test. Question: what events should trigger a performance testing cycle?
New System: How visible is this application? What are the critical performance metrics? What is the usage pattern? Will this application or new feature be used often? What capacity needs to be supported? Is it required that this application be available 24/7? How many users need to be supported? Will this application be used by a significant number of users? This one is pretty obvious: if it is customer facing and is expected to handle any substantial load, it must be tested.
Increasing Number of Users: Is this application going to be released to an additional set of users? What is the impact? How is the system currently being used? What features are being utilized? At what point does system performance become unacceptable? Hopefully this has been accounted for by previous testing; if not, new load tests must be performed to verify that the increased number of users will be handled by the system.
New Functionality: What is the change to the usage pattern? What features drive the system performance?
Software Upgrades: This includes code changes due to major fixes that change behavior, as well as full releases.
Operating System Upgrades: This one should be obvious; OS changes can alter how the operating system handles threading, memory, etc. Does it need to work with Vista or a Mac?
Hardware Upgrades: Upgrades to servers will require performance testing to study how the upgrades add to the capacity of applications.
Infrastructure Upgrades: Changes to load balancing, network pipes, etc.
- Performance Center provides "global" access to testing resources across the enterprise via a web-based interface. This allows the IT organization to centrally manage those testing resources and perform load testing on an enterprise-wide scale, increasing resource productivity and testing capacity.
- The Virtual User Generator (VuGen) allows a user to record and/or script the test to be performed against the application under test, and enables the performance tester to play back and modify the script as needed. Such modifications may include parameterization (selecting data for keyword-driven testing), correlation, and error handling. LoadRunner supports several protocols, such as Web HTTP/HTTPS, Remote Terminal Emulator, Oracle, and Web Services.
- SiteScope provides the ability to set up monitors during a load test to observe the performance of individual components under load. Monitors include Oracle monitors, WebSphere monitors, etc.
- Analysis takes the completed scenario result and prepares the necessary graphs for the tester to view; graphs can also be merged to get a good picture of the performance. The tester can then make any needed adjustments to the graphs and prepare a LoadRunner report. The report, including all the necessary graphs, can be saved in several formats, including HTML and Microsoft Word.
- HP Diagnostics for .NET and J2EE (Java) provides comprehensive visibility into applications deployed in heterogeneous composite environments. Out-of-the-box dashboards provide quick time to value.
- Visual Studio Team System includes a load test tool with many features for performance and stress testing your web sites, web services, and other server components. We use this against applications with Janus security, since it allows us to reference the .dlls we need to use the bindings that work with Janus.
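The parameterization idea mentioned for VuGen — replacing recorded literal values with rows from a data table so each virtual-user iteration submits different data — can be illustrated in a few lines of Python. This is a concept sketch only, not the VuGen API; the field names, template, and data rows are invented:

```python
import itertools

# A recorded request with the hard-coded values swapped for {placeholders},
# the way VuGen replaces literals with parameters (names here are invented).
request_template = "POST /login user={username}&pass={password}"

# Parameter data table; `cycle` mimics a sequential, wrap-around selection
# policy so the data never runs out mid-test.
data_table = itertools.cycle([
    {"username": "vuser01", "password": "secret1"},
    {"username": "vuser02", "password": "secret2"},
])

def build_request():
    """Fill the template with the next row of parameter data."""
    return request_template.format(**next(data_table))

first = build_request()
second = build_request()
print(first)
```

Without parameterization every virtual user would replay the identical recorded values, which can skew results (e.g., cache hits, unique-constraint errors) and make the load unrealistic.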
For the performance test effort to be successful, it must be a collaborative effort among these team members. It is imperative that during performance test cycles all of these team members are involved and aware of any changes that come out of performance testing.
We have our Remedy queue: just enter the information and the request will be assigned to one of us. We will then contact the project team to start the planning process. On the eawiki we have a wealth of information, including demonstrations of how to use our LoadRunner and MS Visual Studio tools.
Performance Test Slideshow
<ul><li>Determine the usability and effectiveness of an application under load. </li></ul><ul><li>Detect bottlenecks before a new system or upgrade is deployed. </li></ul><ul><li>Tune for better performance. </li></ul><ul><li>The peace of mind that it will work on go-live day alone justifies the cost of performance testing. </li></ul>Why do we Performance Test?
What is Performance Testing? <ul><li>Performance Testing Determines </li></ul><ul><li>Speed </li></ul><ul><li>Scalability </li></ul><ul><li>Stability </li></ul><ul><li>Confidence </li></ul><ul><li>… while focusing on </li></ul><ul><li>User Expectations </li></ul><ul><li>System Constraints </li></ul><ul><li>Costs </li></ul><ul><li>Specifically, it answers: </li></ul><ul><li>How many…? </li></ul><ul><li>How much…? </li></ul><ul><li>What happens if…? </li></ul>
Myths of Performance Testing <ul><li>Performance Testing is done to break the system </li></ul><ul><li>Performance Testing only involves creation of scripts and running the scripts </li></ul><ul><li>Application Changes cause simple refactoring of performance testing scripts </li></ul>
WHY NOT MANUALLY PERFORMANCE TEST? Manual performance testing can be done by gathering numerous folks together and synchronizing their execution of the transactions. The drawbacks: measuring response time depends on the users’ accuracy; it is very hard to synchronize the testing AND repeat the tests; and the coordination problem is compounded if you include international users.
AUTOMATED PERFORMANCE TESTING Replaces real users with virtual users. Generates a consistent, measurable, and repeatable load, managed from a single point of control. Efficiently isolates performance bottlenecks. [Diagram: User Simulation / Controller driving the Web Server, Application Server, and Database over the Internet / WAN]
Performance Test Process “Evaluate System” <ul><li>This is the most important step because it involves … </li></ul><ul><li>predicting actual user experience </li></ul><ul><li>assessing any system limitations </li></ul><ul><li>defining stakeholder expectations </li></ul>
Performance Test Process “Draft test scripts/scenario” The scripts will contain transactions representing the most intensive activities performed on the application.
Performance Test Process “Execute Performance Tests” LOAD TESTS are end-to-end performance tests under anticipated production load. STRESS TESTS determine the load under which a system fails and how it fails or recovers from failure. DURATION TESTS are tests with a constant load over a period of 8 to 24 hours, to determine whether an application’s performance degrades over an extended period of time. BASELINE TESTS are the execution, validation, and debugging of scripts collectively in a test scenario; the results are used for comparison with future testing results.
Performance Test Process “Entire Process” Response Time Degradation Curve
Performance Test Services We have the ability to mimic many protocols:
Protocols <ul><li>Web: Web Services, HTTP(S), XML, Citrix ICA </li></ul><ul><li>ERP/CRM: PeopleSoft, Oracle </li></ul><ul><li>Legacy: 3270, 5250, VT100 </li></ul><ul><li>Databases: Oracle, MS SQL Server, DB2, ODBC </li></ul><ul><li>Middleware: EJBs, CORBA, COM, RMI, MQSeries </li></ul>And while a test is executing we will monitor your servers!
Monitors <ul><li>Operating Systems: Windows, Unix, Linux, SNMP, WAN Emulation </li></ul><ul><li>Web Servers: MS IIS, iPlanet, Apache </li></ul><ul><li>App Servers: BEA WebLogic, IBM WebSphere, ATG Dynamo, iPlanet App Server </li></ul><ul><li>Databases: Oracle, MS SQL Server, DB2 </li></ul><ul><li>Network </li></ul>The Diagnostics tool provides a set of modules that trace, time, and troubleshoot end-user transactions across ALL tiers:
Diagnostics <ul><li>Java: EJB, JDBC, JSP, Sitraka JMonitor </li></ul><ul><li>Platforms: J2EE, .NET </li></ul>[Diagram: Controller and Load Generator driving traffic over the Internet/WAN through a Load Balancer to the Web Server and Application Server]
Performance Test Triggers? <ul><li>New Systems </li></ul><ul><li>Increasing Number of Users </li></ul><ul><li>New Functionality </li></ul><ul><li>Software Upgrades </li></ul><ul><li>Operating System Upgrades </li></ul><ul><li>Hardware Upgrades </li></ul><ul><li>Infrastructure Upgrades/Changes </li></ul><ul><li>Slow application response </li></ul>Here are some examples of what should trigger a performance test execution!
Performance Test Toolbox Our team has the latest and greatest tools…!!!
Project Team Collaboration <ul><li>Quality Assurance </li></ul><ul><ul><li>Develop manual test conditions, test data, and expected results </li></ul></ul><ul><li>Business Analysts </li></ul><ul><ul><li>Identify and document business performance requirements </li></ul></ul><ul><li>Project Managers </li></ul><ul><ul><li>Manage the application implementation </li></ul></ul><ul><li>DBAs </li></ul><ul><ul><li>Analyze and tune the database </li></ul></ul><ul><li>System Administrators </li></ul><ul><ul><li>Administer the servers and configure the downstream and production environments </li></ul></ul><ul><li>Architects </li></ul><ul><ul><li>Closely involved in identifying bottlenecks and tuning </li></ul></ul><ul><li>Infrastructure </li></ul><ul><ul><li>Manage and secure the platform on which applications reside </li></ul></ul>A successful performance test execution relies on involvement from the entire team..!!
How do you engage our team? <ul><li>Answer: Just open up a Performance Test SOS ticket: </li></ul><ul><li>http://eaptsos </li></ul><ul><li>More information about performance testing is at: </li></ul><ul><li>http://tim.turner.com/tso/ea/groups/sharedsvc/perftest/default.aspx </li></ul>