Jmeter Performance Testing
  • Performance testing is commonly conducted to accomplish the following:
    Evaluate against performance criteria
    Compare performance characteristics of multiple systems or system configurations
    Find the source of performance problems
    Support system tuning
    Find throughput levels
  • - To assess the system capacity for growth
    The load and response data gained from the tests can be used to validate the capacity planning model and assist decision making.
    - To identify weak points in the architecture
    The controlled load can be increased to extreme levels to stress the architecture and break it; bottlenecks and weak components can then be fixed or replaced.
    - To detect obscure bugs in software
    Tests executed for extended periods can surface failures caused by memory leaks and reveal obscure contention problems or conflicts.
    - To tune the system
    Repeat runs of tests can be performed to verify that tuning activities are having the desired effect – improving performance.
    - To verify resilience & reliability
    Executing tests at production loads for extended periods is the only way to assess the system's resilience and reliability and ensure required service levels are likely to be met.
  • Performance Test: To determine or validate speed, scalability, and/or stability.
    Load Test: To verify application behavior under normal and peak load conditions.
    Stress Test: To determine or validate an application’s behavior when it is pushed beyond normal or peak load conditions.
    Endurance Test: To determine the performance characteristics of the product under test over an extended period of time.
    Volume/Capacity Test: To determine how many users and/or transactions a given system will support and still meet performance goals.
    Scalability Test: To measure system performance while steadily increasing user load.
  • Load Runner, commercial load testing tool from HP
    JMeter, an open source tool from Apache
    RPT, commercial load test tool from IBM
    NeoLoad, commercial, for Windows, Linux, Solaris
    Microsoft Visual Studio Team System 2010, commercial, for Windows, which includes Load Test Analyzer and Load Test Monitor tools.
    OpenLoad, commercial load testing tool and hosted service
    OpenSTA, an open source tool
    PureLoad, commercial, multiplatform load testing tool
    PushToTest TestMaker, an open source testing framework (load testing and more)
    QEngine, free and commercial, from AdventNet (free edition supports 5 virtual users)
    SQLQueryStress Performance Testing Tool, free, for testing SQL Server
    StressIT, commercial and free
    The Grinder, an open source tool
    Flood, open source from and for Apache
    WAPT, Web Application Testing tool, a commercial product, for Windows
    WatchMouse, commercial hosted load testing service
    WebKing, commercial, multiplatform
    WebServer Stress Tool, commercial and free, from Paessler
  • Performance Testing Life Cycle
  • Performance Testing Life Cycle
  • Transcript

    • 1. JmeterTool
    • 2. Chapter 1 Introduction
    • 3. What’s Inside  Overview of Performance Testing  Purpose of Performance Testing  Key Types of Performance Testing  Goal of Performance Testing  Pre-Requisites for Performance Testing  Performance Management  Performance Testing Life Cycle  Why use a Performance Testing tool?  Performance Symptoms and Issues  Challenges with Performance Testing  Gather Non-Functional Requirements
    • 4. Overview of Performance Testing Performance testing is non-functional testing to determine a system's responsiveness, i.e. speed, stability, reliability and scalability. Speed Throughput Reliability Scalability
    • 5. Purpose of Performance Testing Performance Testing ensures that your application works under real-world loads before your customers find out that it doesn't! GOAL/OBJECTIVE IS TO FIND THE BOTTLENECK IN THE SYSTEM
    • 6. Key Types of Performance Testing Load Testing Focus: “Response Time” Stress Testing Focus: “Response Time” and “Throughput” Endurance Testing Focus: “Memory” Volume/Capacity Testing Focus: “Response Time” Scalability Testing Focus: “Response Time” and “Throughput”
    • 7. Goal of Performance Testing  Assess production readiness.  Compare two platforms with the same software to see which performs better.  Compare performance characteristics of system configurations.  Evaluate the system against performance criteria.  Discover what parts of the application perform poorly and under what conditions.  Find the source of performance problems.  Support system tuning.
    • 8. Pre-Requisites for Performance Testing  Stable and defect-free dedicated environment.  Performance testing environment similar to the production environment.  No other testing should be performed during Performance Testing.  Performance testing before going live.  Performance testing plan development.  Test Data Preparation.  Performance testing requirements gathering.  Application architecture.  Servers information.  Application usage information.  Performance Acceptance criteria.
    • 9. Performance Management There are two approaches to managing performance testing activities: Reactive Approach Performance testing is only done after system testing. Proactive Approach Performance parameters are analyzed and addressed in the testing environment before they impact the production system, so issues are fixed before launching the application.  Non-Functional Requirements Gathering phase  Design phase  Development phase  Test Execution phase  Maintenance
    • 10. Performance Testing Life Cycle A typical Performance testing cycle consists of the following activities: 1. Establish Performance Test Objectives 2. Prepare test environment 3. Create and modify scripts 4. Execute Performance test 5. Monitor impact of load on servers/databases 6. Analyze results 7. Tune system 8. Repeat from Step 4 9. Deploy mission-critical application with confidence.
    • 11. Why use a Performance Testing tool?  Almost impossible without one!  Without a tool you rely on massive hardware and personnel to generate load. The cost of repeating a 'manual' test never diminishes. Without a tool, no end reports are created.  Response Time  CPU Utilization  Disk I/O  Network I/O  Other
    • 12. Performance Symptoms and Issues Application Server  Poor database tuning.  Poor cache management.  Poor session management. Web Server  Poor server design.  Poor configuration & usage. Database Server  Insufficient indexing.  Fragmented databases.  Faulty application design. Network  Firewall throughput.  Load balancers, gateways, routers. Performance Symptoms  Long user response time  Long server response time  Memory leaks  High CPU usage  Too many open connections  Lengthy queues for requests  Too many table scans of the database  Database deadlocks  Erroneous data returned  HTTP errors  Pages not available  Page check errors
    • 13. Typical order of Fixes  Improve current application design: algorithms, caching, DB calls, memory use  Upgrade hardware: RAM, CPU, network bandwidth  Upgrade software infrastructure: OS, web server, database (database connection pooling)  Upgrade system architecture: client-server to basic n-tier, basic n-tier to enterprise n-tier, software and hardware changes; use Apache HTTPD in front of Tomcat to serve static resources; use hardware load balancing / SSL.
    • 14. Challenges with Performance Testing Following is a list of challenges with Performance Testing:  Test environment setup  Collection and analysis of huge data  Bottleneck root cause identification  Obtaining accurate results  Client involvement  Testing inside the firewall  Performance testing of new technologies  Testing on a live environment  Expensive  Cooperative effort required (Product vendors, Architects, Developers, Testers, Database administrators, System administrators, Network administrators)
    • 15. Performance Testing Best Practices  Use a user Ramp-up and Ramp-down approach.  Ignore the results collected during Ramp-up and Ramp-down periods.  Run individual tests of performance scenarios before combining them in a single test.  Run a baseline test with a single user to validate the script.  Run a benchmark test with 15 to 20 percent of the desired load to validate the system metrics at lower load and check the system readiness for high load.  Run the test for at least 10-15 minutes at stable peak load.  Repeat tests at least 3 times to confirm the results.  Run tests at different times.
    • 16. Gather Non-Functional Requirements  Clear and complete requirements are mandatory for successful performance testing.  What we need to start the performance testing?  What is the type of application and its architecture?  What are the known current as well as previous performance bottlenecks?  Which application scenarios to be tested?  What will be the workload model?  What are the performance goals?  Gathering of requirements for performance comes in three categories.  General Information.  Workload Model.  Performance Goals.
    • 17. Chapter 2 Get Started
    • 18. What’s Inside  What is Jmeter?  What can you test in JMeter  Installing Jmeter  Setting up Environment  Running Jmeter
    • 19. What is Jmeter?  Open source tool  Friendly GUI design  Platform independent  Full multithreading framework  Visualize test results  Easy installation  Supports multiple protocols How Jmeter works: Send requests to the target server Get statistics information from the target server Generate test reports in different formats
    • 20. What can you test in Jmeter?  Can load and performance test many different server types:  Web - HTTP, HTTPS  SOAP  Database via JDBC  LDAP  JMS  Mail - POP3(S) and IMAP(S)
    • 21. Installing Jmeter  Install Java (2.2 or higher)  Download JMeter  Add the path of the Java installation to the PATH environment variable.
    • 22. Setting up Environment JMeter has a simple environment:
    • 23. Running JMeter  Open a command prompt (use administrator mode to avoid unnecessary hassle).  Traverse to [Jmeter installation path]\bin  Run Jmeter.bat
    • 24. Chapter 3 Introduction to Elements of Jmeter Test Plan
    • 25. What’s Inside  Test Plan  Thread Group  Controllers  Samplers  Logic Controllers  Listeners  Timers  Assertions  Configuration Elements  Pre-Processor Elements  Post-Processor Elements  Execution Order  Scoping Rules
    • 26. Test Plan  The Test Plan node is where the REAL stuff is kept.  Used for containing the test.  A layout of how and what to test.  A Test Plan describes a series of steps JMeter will execute once it runs.  A Test Plan must have at least one Thread Group.
    • 27. Thread Group  Thread Group is used for representing users.  There could be one or more Thread Groups in a Test Plan.  Each thread group will execute completely independently from each other.  Thread Group can control:  Number of users simulated (No. of threads).  Ramp Up time (how long it takes to start all the threads).  Number of times to perform the test.
    • 28. Controllers  Controllers are used for grouping and applying logic to test items.  There are two types: Controllers Sampler (sends requests to the server) Logical Controller (customizes the logic used to send requests)
    • 29. Samplers  Used for performing the actual task.  They allow JMeter to send specific types of requests to a server.  The following is a list of all Samplers JMeter provides: Sampler HTTP Request FTP Request JDBC Request Web Service (SOAP) Request Access Log Sampler Bean Shell Sampler BSF Sampler TCP Sampler
    • 30. Logic Controllers  These allow you to customize the logic that JMeter uses to decide when to send requests.  The following list consists of all the Logic Controllers JMeter provides: Logic Controller Simple Controller Loop Controller Once Only Controller Random Controller Throughput Controller If Controller While Controller Switch Controller Transaction Controller Recording Controller
    • 31. Listeners  Listeners are used for displaying of data.  Listeners provide means to view, save, and read saved test results.  The following list consists of all the Listeners JMeter provides: Listeners Graph Full Results Spline Visualizer Assertion Results View Results Tree Aggregate Report View Results in Table Aggregate Graph
    • 32. Timers  Allow JMeter to delay between each request that a thread makes.  Think Time is used for emulating real-life visitors/users.  The following list consists of all the Timers JMeter provides: Timer Constant Timer Uniform Random Timer Gaussian Random Timer Synchronizing Timer Bean Shell Timer
    • 33. Assertions  Used for Validating Test.  Allow you to "test" that your application is returning the results you expect it to.  The following list consists of all the Assertions JMeter provides: Assertion Response Assertion Duration Assertion Size Assertion Bean Shell Assertion HTML Assertion XPath Assertion
    • 34. Configuration Elements  They are used to add to or modify requests made by Samplers.  Working closely with a Sampler, they can add to or modify its requests.  The following list consists of all the Configuration Elements JMeter provides: Configuration Element CSV Data Set Config FTP Request Defaults HTTP Authorization Manager HTTP Cookie Manager HTTP Request Defaults JDBC Connection Configuration Login Config Element
    • 35. Pre-Processor Elements  They execute prior to a sampler request.  Used to modify the settings of a Sample Request just before it runs.  The following list consists of all the Pre-Processor Elements JMeter provides: Pre-Processor Element HTML Link Parser HTTP URL Re-writing Modifier User Parameters Bean Shell PreProcessor
    • 36. Post-Processor Elements  They execute some action after a sampler request.  They execute after a request has been made from a Sampler.  The following list consists of all the Post-Processor Elements JMeter provides: Post-Processor Element Regular Expression Extractor XPath Extractor Save Responses to a file Bean Shell PostProcessor
    • 37. Execution Order  JMeter execute test in below order: o Configuration elements o Pre-Processors o Timers o Sampler o Post-Processors (unless SampleResult is null) o Assertions (unless SampleResult is null) o Listeners (unless SampleResult is null)  Timers, Assertions, Pre- and Post-Processors are only processed if there is a sampler to which they apply.  Logic Controllers and Samplers are processed in the order in which they appear in the tree.  Other test elements are processed according to the scope in which they are found, and the type of test element
    • 38. Scoping Rules  The JMeter test tree contains elements that are both hierarchical and ordered.  Strictly hierarchical (Listeners, Config Elements, Post-Processors, Pre-Processors, Assertions, Timers)  Primarily ordered (Controllers, Samplers). The order of requests will be One, Two, Three, Four. Assertion #1 is applied only to Request One, while Assertion #2 is applied to Requests Two and Three. Timer #1 will apply to Requests Two, Three, and Four (notice how order is irrelevant for hierarchical elements). Assertion #1 will apply only to Request Three. Timer #2 will affect all the requests.
    • 39. Chapter 4 Building a Web Test Plan
    • 40. What’s Inside  Recording & Playback  Adding Users  Adding Default HTTP Request Properties  Adding Cookie Support  Adding HTTP Requests  Adding a Listener to View/Store the Test Results
    • 41. Recording & Playback  Jmeter can act as a proxy server between your browser and the web, recording your actions.  This can help in writing your web tests.  Let's do another small demo: ◦ Proxy Server ◦ Script Recording Tool (Badboy, Wireshark)
    • 42. Adding Users  5 users each send 2 requests and repeat the test twice. (5 users x 2 requests x 2 repeats = 20 requests)  Right click on the Test Plan node >> Add >> Threads (Users) >> Thread Group.
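The thread-group arithmetic above can be sketched in Python (a hypothetical helper for illustration, not part of JMeter):

```python
# Total requests generated by a Thread Group is the product of the
# number of threads (users), the requests per iteration, and the loop count.
def total_requests(threads, requests_per_loop, loop_count):
    return threads * requests_per_loop * loop_count

# The slide's example: 5 users x 2 requests x 2 repeats
print(total_requests(5, 2, 2))  # -> 20
```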
    • 43. Adding Default HTTP Request Properties  This lets you define default HTTP parameters for every request.  You can add default HTTP request settings from Add > Config Element > HTTP Request Defaults.
    • 44. Adding HTTP Requests  You can add it from ADD > Sampler > HTTP Request
    • 45. Adding Cookie Support  A Cookie Manager is normally included to maintain a certain state for each user via cookies.  You can add it from Add > Config Element > HTTP Cookie Manager.
    • 46. Adding a Listener to View/Store the Test Results  The Listener is responsible for storing all of the results of our HTTP requests and presenting them visually.  You can add it from Add > Listener > Summary Report.
    • 47. Chapter 5 Load/Performance Testing of Websites
    • 48. What’s Inside  Preparing for Load Testing  Need to Know  Some Helpful Tips to Get Better Results  Using Jmeter Components  Recording HTTP Requests  Creating the Test Plan  Adding Listeners  Running the Test Plan  Interpreting the Results  Monitoring the Server's Performance  Performance Test Reporting
    • 49. Preparing for Load Testing  You need to address a number of concerns with regard to the target server under test.  Load testing helps to benchmark the performance behavior of a server; it is important to be able to identify the general expectations and other matters that would normally be taken into account in order to carry out a successful load test.
    • 50. Need to Know  A suitable time to load-test the application, for instance when no development work is taking place on the server and/or no other users are accessing the server.  The performance metrics, accepted levels, or SLAs and goals.  Objectives of the test.  The Internet protocol(s) the application is(are) using (HTTPS, HTTP, FTP, etc.)  If your application has a state, the method used to manage it (URL rewriting, cookies, etc.)  The workload at normal time and at peak time.
    • 51. Some HelpfulTips to Get Better Results  Use meaningful test scenarios to construct 'real-life' test cases.  Run JMeter on a machine other than that running the application.  The machine running JMeter should have sufficient network bandwidth, memory, CPU to generate load.  Let JMeter Test Plan run for long time periods, hours or days, or for a large number of iterations.  Ensure that the application is stable and optimized for one user before testing it for concurrent users.  Incorporate 'thinking time' or delays using Timers in your JMeter Test Plan.  Keep a close watch on the four main things: processor, memory, disk, and network.
    • 52. Using Jmeter Components  We will test five key scenarios:-  Homepage  Keyword Search  Create Account  Select A Title  Add To Cart These scenarios will be included in a single JMeter Test Plan, for simplicity reasons.
    • 53. Recording HTTP Requests  The JMeter Proxy can be used to record all requests sent to the server.  Create a test plan with defaults for HTTP testing.  Add an HTTP Proxy Server under the Workbench node.  Define the port number of the proxy server.
    • 54. Creating the Test Plan  Right-click on the Test Plan element and select Add | Thread Group.  Configure the Number of Threads to 10, Ramp-Up Period to 1 second, and Loop Count to 50.  Add to the Test Plan Config Element | HTTP Request Defaults.  Add Pre-Processor | User Parameter Element or Config Element | CSV Data Set Config.
    • 55. Adding Listeners  Right click Thread Group node > Add > Listener.
    • 56. Running the Test Plan  When the Jmeter script is ready,  click “Start”.
    • 57. Interpreting the Results  Once the test is completed, we can now retrieve the results we have saved for each Controller.  With the exception of the Assertion Result Listener, the saved data can be viewed in numerous forms.
    • 58. Monitoring the Server's Performance  There is a special Listener that allows you to monitor the target server's performance as Samplers make requests.  This Monitor Result Listener is designed to work with Apache Tomcat version 5 and above.  Some server monitoring tools:  Perfmon  Blazemeter
    • 59. Performance Test Reporting  Introduction  Test Environment (Software and Hardware)  Goals of this Report  Assumptions  Performance requirements  Workload Scenario  Performance Testing Approach  Results  Samples Response Monitoring (Jmeter results) o Run Date: 6/01/2014 o Duration: 1 hour, 38 minutes, 30 seconds o High Load Count: 2 users every 300 seconds (varies) o Ramp-up Increment: 35 users o Think Time: varies o Connection Speed: Broadband
    • 60. Chapter 6 Handling the dynamic server values
    • 61. What’s Inside  What is Correlation  Why Correlation  Regular Expression  Using Regular Expression Extractor in Jmeter Tests
    • 62. What is Correlation  A Correlation is a Connection or Association.  Capturing the dynamic data that is being generated by the server is called correlation.  Correlation will be done using the Regular Expression Extractor in Jmeter.
    • 63. Why Correlation  To overcome script failures we need a way to capture these dynamically generated session values and pass them subsequently to any part of the script, wherever required. This method of identifying and setting the dynamically generated value is known as correlation. Sample of Regular Expression and Usage: Session ID = (.+?)  to correlate the url/dynamic id. EX: Session ID = jkjoujn434897h3jh35y9h&OrderID=ikikikke99874kmnjhh2
    • 64. Regular Expression  Extract single string o Suppose you want to match the following portion of a web-page: o name="file" value="readme.txt"> o And you want to extract readme.txt . o A suitable regular expression would be: o name="file" value="(.+?)"> o The special characters above are: o ( and ) - enclose the portion of the match string to be returned o . - match any character o + - one or more times o ? - don't be greedy, i.e. stop when first match succeeds
    • 65. Regular Expression  Extract multiple strings o Suppose you want to match the following portion of a web-page: o name="file" value="readme.txt" o And you want to extract both file and readme.txt. o A suitable regular expression would be: o name="([^"]+)" value="([^"]+)" o This would create 2 groups, which could be used in the JMeter Regular Expression Extractor template as $1$ and $2$. o For example, assume: o Reference Name: MYREF o Regex: name="(.+?)" value="(.+?)" o Template: $1$$2$
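Both extraction patterns from the two slides above can be checked with Python's `re` module; the lazy `(.+?)` and the `([^"]+)` character-class forms behave on this input the way JMeter's Regular Expression Extractor does (the `html` string is the slides' example):

```python
import re

html = 'name="file" value="readme.txt">'

# Single-group extraction: (.+?) matches lazily up to the closing quote
single = re.search(r'name="file" value="(.+?)">', html)
print(single.group(1))  # readme.txt

# Two-group extraction, analogous to JMeter's $1$ and $2$ template references
multi = re.search(r'name="([^"]+)" value="([^"]+)"', html)
print(multi.group(1), multi.group(2))  # file readme.txt
```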
    • 66. Using Regular Expression Extractor in Jmeter Tests  Name of Regular Expression Extractor: jsessionid.  Reference Name: jsessionid  Enter Regular Expression: jsessionid = "(.+?)" /> o (.+?) tells JMeter to capture any value between the left and right boundaries  Template: $1$ (this is for grouping)  Match No: 1  Default Value: NONE (To pass a default value, enter the string here)
    • 67. Chapter 7 Parameterize with test data
    • 68. What’s Inside  What is Parameterization  Why Parameterize  Identifying the test data on AUT  Using the CSV Data Config in Jmeter Tests
    • 69. What is Parameterization  It is the way of replacing a hard-coded value in the script with a parameter which represents a list of values.  During the script enhancement phase the test engineer replaces the actual values captured during recording with parameters; this is known as parameterizing the script.
    • 70. Why Parameterize  It allows you to test your script with different values.  Replace the constant values in the script with parameters.  Simulate real user behavior while running the test.
    • 71. Identifying the test data on AUT  During login, “Username” & “Password” are parameters for which test data is needed.  During registration, “First Name”, “City” etc. are parameters for which test data is needed.
    • 72. Using the CSV Data Config in Jmeter Tests  Create a CSV file with a list of usernames and passwords.  Store it in the same folder where your test plan is stored.  Add a CSV Data Set Config to your test tree from Config Elements.  Add ${username},${password} in the request sampler as parameters.
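A minimal Python sketch of what CSV Data Set Config does per iteration, assuming a hypothetical users.csv with username,password rows (inlined here so the sketch is self-contained):

```python
import csv
import io

# Stand-in for a users.csv file stored next to the test plan
csv_data = "jdoe,secret1\nasmith,secret2\n"

rows = list(csv.reader(io.StringIO(csv_data)))
for username, password in rows:
    # In JMeter the sampler would reference ${username} and ${password};
    # here we substitute them into a request body template directly.
    body = f"user={username}&pass={password}"
    print(body)
```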
    • 73. Chapter 8 Adding Assertions to the test script
    • 74. What’s Inside  What is Assertion  Why Assertion  Types of Assertion in Jmeter  Running the tests and analyzing the Assertion results
    • 75. What is Assertion  Assertions allow you to include validation tests on the response.  They ensure the responses you receive are what you expect.
    • 76. Why Assertion  Assertions are used to perform additional checks on samplers.  They are processed after every sampler in the same scope.
    • 77. Types of Assertion in Jmeter  Response Assertion: Compares against various fields of the response.  Duration Assertion: Tests that each response was received within a given amount of time.  Size Assertion: Tests that each response contains the right number of bytes.  XML Assertion: Tests that the response data consists of a formally correct XML document.  BeanShell Assertion: Checking using a BeanShell script.  MD5Hex Assertion: Checks the MD5 hash of the response data.  HTML Assertion: Checks the HTML syntax of the response.  XPath Assertion: Tests a document for well-formedness.  XML Schema Assertion: Allows the user to validate a response against an XML Schema.
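As a rough sketch (plain Python for illustration, not JMeter code), the Duration and Size Assertions boil down to checks like these:

```python
# Hypothetical checks mirroring JMeter's Duration and Size Assertions.
def duration_assertion(elapsed_ms, max_ms):
    # Duration Assertion: the response must arrive within max_ms
    return elapsed_ms <= max_ms

def size_assertion(body, expected_bytes):
    # Size Assertion: the response body must be exactly expected_bytes long
    return len(body.encode("utf-8")) == expected_bytes

print(duration_assertion(elapsed_ms=120, max_ms=500))  # True
print(size_assertion("OK", expected_bytes=2))          # True
```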
    • 78. Running the tests and analyzing the Assertion results  Assertions allow you to assert facts about responses received from the server being tested. Using an assertion, you can essentially "test" that your application is returning the results you expect it to.
    • 79. Chapter 9 Timers
    • 80. What’s Inside  What are Timers  Types of Timers
    • 81. What are Timers  Timers allow JMeter to delay between each request that a thread makes.  Timers help mimic real-time behavior.  Timers are processed before each sampler in the scope in which they are found.
    • 82. Types of Timer  Constant Timer o The constant timer delays each user request for the same amount of time.  Gaussian Random Timer o The Gaussian random timer delays each user request for a random amount of time. o Deviation + Constant Delay Offset = Total Delay Time  Uniform Random Timer o The uniform random timer delays each user request for a random amount of time. o Random Delay Maximum + Constant Delay Offset = Total Delay Time  Synchronizing Timer o The Synchronizing Timer will synchronize requests of multiple threads. o Used to create heavy load bursts on the application.
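The two delay formulas above can be sketched in Python; this is an approximation of the idea, and JMeter's exact rounding and random-number handling may differ:

```python
import random

def gaussian_delay(deviation_ms, offset_ms):
    # Gaussian Random Timer: a normally distributed deviation around the
    # constant offset (Deviation + Constant Delay Offset = Total Delay)
    return max(0, round(random.gauss(0, deviation_ms) + offset_ms))

def uniform_delay(max_random_ms, offset_ms):
    # Uniform Random Timer:
    # Random Delay Maximum + Constant Delay Offset = Total Delay
    return round(random.uniform(0, max_random_ms) + offset_ms)

d = uniform_delay(100, 300)
print(300 <= d <= 400)  # True: always within offset..offset+maximum
```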
    • 83. Chapter 10 Advanced Features
    • 84. What’s Inside  Testing Web Services  Testing a Database Server  Testing an FTP Server
    • 85. Testing Web Services  Building a WebService Test Plan o Create a Test Plan to test a WebService. o The total number of requests is (2 users) x (1 request) x (repeat 2 times) = 4 HTTP requests. o WebserviceTest (our Test Plan) > Add > Threads (Users) > Thread Group.
    • 86. Testing Web Services  Adding WebService Requests o WebService(SOAP) Request o Add > Sampler > SOAP/XML-RPC Request.
    • 87. Testing Web Services  Adding a Listener to View/Store the Test Results o The final element you need to add to your Test Plan is a Listener. This element is responsible for storing all of the results of your HTTP requests in a file and presenting a visual model of the data. o Add > Listener > View Results Tree
    • 88. Testing a Database Server  A few things you will need before proceeding to build a Database Test Plan are:  A working database driver: copy the .jar file of the database driver and paste it into the lib folder of your JMeter installation.  A valid database schema.  Valid non-empty database table(s).  Valid user-level access to the database.
    • 89. Testing an FTP Server  What you will need before proceeding to build an FTP Test Plan includes:  A running FTP server on the target machine.  A valid path to the shared files on your FTP server.  Valid non-empty files in the FTP installation path.  Valid user-level access to the files.
    • 90. Chapter 11 Best Practices
    • 91. What’s Inside  Limit the Number of Threads  Where to Put the Cookie Manager  Where to Put the Authorization Manager  Reducing resource requirements  Bean Shell Server  Distributed Testing  Server Monitoring Tools (PerfMon)  Database Monitoring Tool (Jet Profiler)  Request Sniffer Tools  Jmeter Functions
    • 92. Reducing resource requirements  Do not use GUI mode; use non-GUI mode instead.  Use remote and distributed testing for larger load tests.  Do not load more than 300 threads per Jmeter instance.  Use naming conventions for all the elements.  Use as few Listeners as possible.  Don't use "View Results Tree" or "View Results in Table" listeners during the load test; use them only during the scripting phase to debug your scripts.  Rather than using lots of similar samplers, use the same sampler in a loop, and use variables (CSV Data Set) to vary the sample.  Use CSV output rather than XML.  Only save the data that you need.  Use as few Assertions as possible.
    • 93. Where to Put the Cookie Manager  The cookie manager stores and sends cookies just like a web browser.  Received Cookies can be stored as JMeter thread variables.  Manually add a cookie to the Cookie Manager.
    • 94. Where to Put the Authorization Manager  It specifies one or more user logins for web pages that are restricted using server authentication.
    • 95. Bean Shell Server  The Bean Shell interpreter has a very useful feature: it can act as a server, which is accessible by telnet or http.
    • 96. Distributed Testing  Master: the system running the Jmeter GUI, which controls the test.  Slave: the system running jmeter-server, which takes commands from the GUI and sends requests to the target system.  Target: the web server planned for the load test.  A single JMeter client machine may not be able to simulate enough users to stress the server.  Control multiple machines to run JMeter without copying test samples to each machine.
    • 97. Server Monitoring Tools (PerfMon)  During a load test, it is important to know the health of the servers under load.  To address this, the plugin package now supports server monitoring!  Detail Link:
    • 98. Database Monitoring Tool (Jet Profiler)  Jet Profiler for MySQL is a query profiling tool for the MySQL database server.  Query, table and user performance  Graphical Visualization  Low overhead  User friendly  Detail Link:
    • 99. Request Sniffer Tools  These tools monitor and analyze all incoming and outgoing HTTP traffic between the browser and the web servers.  HttpFox  httpwatch  Gtmetrix  Yslow  Web Page Test  google speed test
    • 100. Jmeter Functions  The Function Helper Dialog is available from JMeter's Options tab.  It can generate different functions. Once you have made changes, click the “Generate” button and copy-paste the result into your test plan wherever you like.
    • 101. Jmeter Functions  There are two kinds of functions: user-defined static values (or variables), and built-in functions.  User-defined static values allow the user to define variables to be replaced with their static value when a test tree is compiled and submitted to be run.  Note that variables cannot currently be nested; i.e. ${Var${N}} does not work.  The __V (variable) function (versions after 2.2) can be used to do this: ${__V(Var${N})}.  This type of replacement is possible without functions, but was less convenient and less intuitive.
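The nesting limitation can be illustrated with a small Python sketch of variable lookup (hypothetical, not JMeter's actual implementation): a plain ${...} reference is resolved in a single pass, while __V re-evaluates the already-expanded name as a variable reference:

```python
# Variable table as JMeter might hold it for one thread
variables = {"N": "2", "Var1": "alpha", "Var2": "beta"}

def resolve(name):
    # Unknown names are left as the literal ${name}, as JMeter does
    return variables.get(name, "${" + name + "}")

# ${Var${N}} fails: "Var${N}" is never looked up as a nested reference
print(resolve("Var${N}"))             # ${Var${N}}

# ${__V(Var${N})}: the inner ${N} expands to "2" first, then __V
# looks up the composed name "Var2"
print(resolve("Var" + resolve("N")))  # beta
```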
    • 102. The Function Helper Dialog
    • 103. Thank you