Ramesh Padmanabhan's slides on Mercury Interactive LoadRunner


Speaker note: numerous legacy systems and support groups made data collection complex.

    1. Performance Load Testing Case Study – Agilent Technologies
    2. Agenda
       • Introductions
       • Background
       • Testing Objectives
       • Preparation Phase
       • Execution Phase
       • Analysis
       • Lessons Learnt
       • Contact Information
    3. Introduction
       • Ramesh Padmanabhan
         ◦ Entegration Software
         ◦ Consulting & product company based in San Jose
         ◦ Proud to be service partners of:
           ▪ Oracle Corporation
           ▪ Mercury Interactive
           ▪ Yash Technologies
    4. Introduction
       • Agilent Technologies
         ◦ $6 billion global manufacturing company
         ◦ Over 30,000 employees in more than 50 countries
         ◦ One of the largest global single-instance installs of the Oracle E-Business Suite
         ◦ Consolidated over 150 legacy systems
         ◦ Expected a maximum of 5,000 concurrent users
    5. Background
       • Largest single-instance install
       • 3 HP Superdomes: Production, Reporting, Planning
       • Single US-based data center
       • Over 50 operating units
       • Significant business volume in Asia & Europe
       • Consolidating over 125 different legacy systems
       • Implemented all Financials & Manufacturing modules
    6. Testing Objectives
    7. Testing Objectives
       • Validate the single-instance strategy
       • Validate the network and hardware infrastructure
       • Verify scalability to 5,000 concurrent users
       • Stress test for the “high water mark”
       • Set user response-time expectations
       • Identify and fix significant performance tuning issues within Oracle Applications
       • Identify and drive solutions for hardware issues
    8. Preparation Phase
    9. Data Gathering
       • Identified major transactions within each application module
       • Sent questionnaires for legacy data volumes by geography (US, Asia, Europe)
       • Short-listed transactions with high volume or data-intensive processing
       • Identified user distribution by region and by application area
       • Determined an estimation methodology for inquiry transactions (see the worked example below)
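       To illustrate the kind of estimation an inquiry-transaction methodology involves (the figures below are invented for illustration, not taken from the study): if a questionnaire reported roughly 40,000 order-status inquiries per month for a region, a first-cut hourly rate is

           40,000 inquiries / (22 business days × 8 hours) ≈ 227 inquiries per hour,

       which would then be scaled by a peak factor (say 2×, giving about 450 per hour) before being folded into the load model.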
    10. Hardware Preparation
       • Ensure that the production configuration of the back-end server and middle-tier machines was set up and configured
       • Procure the load-generation agent boxes and have them installed and set up at the right locations
       • Ensure that the Cisco load-balancing router was correctly set up
       • Set up network “sniffing” devices to get detailed metrics on network traffic
    11. Software Preparation
       • Procure and install LoadRunner on the agent and controller boxes
       • Install LoadRunner and the Oracle Applications client on the scripters’ machines
       • Install and set up other database monitoring software
       • Prepare scripts for detailed transaction analysis
    12. Data Preparation
       • Validated various application setups
       • Initial cycles required all key master data to be fabricated
       • Developed numerous scripts to extract key data elements (items, customers, vendors, etc.) for use in transactions; a sample parameter file follows this list
       • Ensured adequate breadth of data
       • Identified key data and parameters for background load
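       Extracted data of this kind typically feeds LoadRunner parameter files: plain comma-delimited .dat files whose header row names the parameters referenced by the scripts. A minimal sketch (the column names and values here are hypothetical, not from the actual extracts):

           item_number,customer_id
           A1234-001,10045
           B7730-220,10082
           C0091-115,10117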
    13. Develop LoadRunner Scripts
       • Recorded scripts for all the critical and high-volume transactions
       • Used an adequate mix of inquiry and update transactions
       • Parameterized all the critical pieces of data, such as items, customers, and orders
       • Identified activities for which server response times were key and set up transaction timers around them, e.g. commits and quick-picks (see the sketch after this list)
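       A minimal sketch of what such a fragment might look like in VuGen’s C scripting language, using the Oracle NCA protocol LoadRunner used for Oracle Applications Forms traffic (the form, field, transaction, and parameter names are hypothetical, not from the actual Agilent scripts):

           Action()
           {
               /* Parameterized item: each virtual user pulls a different
                  value from the item_number parameter file. */
               nca_edit_set("ORDER_LINES.ITEM", lr_eval_string("{item_number}"));

               lr_think_time(5);   /* simulated user pause before saving */

               /* Transaction timer around the server round trip that
                  matters: the commit (save). */
               lr_start_transaction("order_commit");
               nca_button_press("TOOLBAR.SAVE");
               lr_end_transaction("order_commit", LR_AUTO);

               return 0;
           }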
    14. Execution Phase
    15. Build Test Scenarios
       • Develop a matrix of users by geography by transaction (illustrated below)
       • Manual scenarios
       • Goal-oriented scenarios
       • Transactions split into three groups based on data-dependency conditions
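       As an illustration, such a matrix might look like the following (the transactions and user counts are invented, not the actual Agilent figures; the full matrix would be sized so the regional totals sum to the 5,000-user target):

           Transaction             US   Europe   Asia   Total
           Enter Sales Order      400      150    150     700
           Order Status Inquiry   600      250    250   1,100
           Create Invoice         300      100    100     500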
    16. Run Tests…
       • 5 cycles of testing:
         ◦ 1 – validation cycle
         ◦ 2 – complete cycle with converted data
         ◦ 3 – stress test cycle
         ◦ 4 – complete integrated test with key interfaces and customizations
         ◦ 5 – production simulation run
       • Each cycle consisted of two major runs per day for two weeks; each test run was about 4–7 hours long
    17. Run Tests…
       • 5,000-concurrent-user load generated from 8 LoadRunner agents: 4 in the US, 2 each in Europe and Asia
       • LoadRunner monitors set up for the network, back-end server, and middle-tier boxes
       • A dedicated DBA and performance tuning experts monitored the HP Superdome server
    18. Analysis
       • Used the LoadRunner Analysis tool
       • Real-time graphical interface to monitor test progress
       • Post-run analysis includes numerous graphs and transaction timers
       • More detailed analysis was done on the result data stored by LoadRunner in an Access database
    19. Analysis
       • Data from the analysis was used to:
         ◦ Set realistic response-time expectations for the end users
         ◦ Modify various database parameters in the init.ora to improve performance (an illustrative snippet follows this list)
         ◦ Tweak settings of the Cisco load balancer for the middle-tier machines
         ◦ Identify and tune some of the application code that performed badly
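       For illustration, init.ora changes in this kind of exercise commonly touch memory and concurrency parameters. The parameter names below are standard Oracle initialization parameters of that era, but the values are invented, not those used in the study:

           # init.ora excerpt -- values illustrative only
           shared_pool_size = 800000000    # enlarge the shared pool
           db_block_buffers = 400000       # grow the buffer cache
           processes        = 6000         # headroom for 5,000 concurrent users
           log_buffer       = 10485760     # larger redo log buffer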
    20. Limitations
       • Some performance-intensive processes could not be tested due to data-dependency issues, e.g. lock-box receipts
       • Some dynamic and interactive processes could not be tested very well, e.g. configured orders
       • Some custom code was not stable until the last cycle
       • Some of the newer application modules were not stable enough for a reasonable test
       • Application version and patch-set lags
    21. Lessons Learnt
       • A performance test will only be as good as the data collected in the analysis phase
       • While a performance test can significantly reduce the risk of poor performance, it is not a guarantee
       • Initial performance testing cycles should focus more on non-code-related performance variables
    22. Lessons Learnt
       • Intensive code-related performance testing and tuning should take place after custom solutions have been put into testing and application patch sets are frozen
       • Performance testing should be on the critical path of the project plan, and performance testing instances should be patched just like the BST instances
       • Plan on at least one marathon testing run that extends over 3 or 4 days
    23. Contact Information
       Ramesh Padmanabhan
       Entegration Software
       [email_address]
       408-674-3701
       www.entegration.com