
Tips and Hints for Developing Automated Testing with QTP



This article presents tips and hints for developing automated test scripts with QTP, covering the following topics:

1) Debug Scripts Incrementally.
2) Test Script Synchronization.
3) Signed-off, Peer Reviewed.
4) Recording, Playing Back Against Hidden Objects.
5) Documentation.
6) Perform Version Control on Test Scripts.
7) Adhere to Test Script Naming Standards and Storage.



H2KInfosys@Gmail.com || USA: +1-770-777-1269 || UK: +44-0203-371-7165
Tips and Hints for Developing Automated Test Scripts

1) Debug Scripts Incrementally

Recorded test scripts, like other software development efforts, can become quite large. Hundreds of lines of code will need debugging for successful playback, and a script might contain several sets of data for parameterized, data-driven testing. The common approach to debugging a test script is to first record all the business processes and requirements, then have the tester play back the test script to identify and correct any problems. The tester continues to debug the test script until it successfully plays back with a single set of data and/or multiple sets of data.

Debugging and troubleshooting test scripts becomes extremely tedious and intractable when the test script has hundreds of lines of code, verification points, branching logic, error handling, parameters, and data correlation among various recorded business processes. A much more manageable approach to debugging complex and lengthy test scripts is to record portions of the script and debug these portions individually before recording other parts of the test script. After testing individual portions, you can determine how one portion of the test script works with another portion and how data flows from one recorded process to the other. After all sections of a test script have been recorded, you can play back the entire test script and ensure that it properly plays back from beginning to end with one or more sets of data.

2) Test Script Synchronization

Test tools can play back a recorded test script at rates much faster than an end user's manual keystrokes. Consequently, this can overwhelm the application under test, since the application might not display data or retrieve values from the database fast enough to allow proper test script playback.
When the test application cannot respond to the test script, the script execution can terminate abruptly, requiring user intervention. In order to synchronize the application under test with the test script during playback, testing teams often introduce artificial wait times within the recorded test scripts. Wait times embedded in the test script to slow down its execution are at best arbitrary, estimated through trial and error. The main problem with wait times is that they either wait too long or not long enough.

For instance, the tester might notice the test script is playing back too fast for the application under test, and decide to slow it down repeatedly until the script's execution is synchronized with the application. This technique can backfire, or even fail outright, if the application runs slower than the newly introduced wait times during test execution due to external factors such as LAN delays or system maintenance. In this scenario, the tester would have to keep guessing a new reasonable wait time, every time. Slowing down a script with wait times is not very scientific and does not contribute to a robust automated test script that plays back successfully without user intervention.

If possible, testers should avoid introducing artificial wait times or arbitrary sleep variables to synchronize test scripts with the application. "While" statements or nested loops are appropriate techniques for test scripts that require synchronization points, and they will play back successfully regardless of the response times of the application under test. Inserting nested loops or "while" statements within a test script also reduces user intervention during playback. For example, I insert "while" statements in recorded test scripts that continually press the Enter button until a scheduling agreement is created, no matter how long the application under test takes to generate the agreement.
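QTP test scripts themselves are written in VBScript, but the loop-based synchronization idea is language independent. The sketch below shows the pattern in Python; the function and its names are illustrative, not part of any QTP API. Instead of a fixed sleep, it polls a condition until it holds (or a timeout expires), optionally re-triggering an action each time, like pressing Enter again until the agreement appears.

```python
import time

def wait_until(condition, action=None, timeout=60.0, poll_interval=0.5):
    """Poll until condition() is true instead of sleeping a fixed time.

    Optionally re-trigger action() on each poll (e.g. pressing Enter again
    until the scheduling agreement is created). Returns True if the
    condition was met within the timeout, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        if action is not None:
            action()  # e.g. press Enter again
        time.sleep(poll_interval)
    return condition()  # one final check at the deadline
```

Because the loop exits as soon as the condition holds, the script neither wastes time when the application is fast nor fails when it is slow, which is exactly what fixed wait times cannot guarantee.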
The test script works independently of the response times of the application under test.

3) Signed-off, Peer Reviewed

As part of the test readiness review criteria, test scripts should be formally accepted and approved prior to starting the test cycle. SMEs (subject matter experts), business analysts, and developers should be involved in approving recorded test scripts. The tester writing the automated test script should demonstrate that the test script successfully plays back in the QA environment and, if possible, with various sets of data.

4) Recording, Playing Back Against Hidden Objects
Scripts might be recorded to populate or double-click values in a field within a table grid or an array where the location of this field is not fixed. If the field's location within the table grid or array changes from the time it was recorded, the script might fail during playback. Test scripts often fail during playback because the locations of objects that are not displayed or visible on the screen have changed.

In order to play back scripts that are location sensitive, or where the location is subject to change, it might be necessary to enhance the script with functionality such as "scroll down," "next page," or "find." Including such utilities ensures that hidden objects requiring playback will be identified, populated, and/or double-clicked regardless of their location within an array, table grid, or the displayed screen.

5) Documentation

To make test scripts reusable and easier to maintain, document all relevant information for executing the test script, a test script header, and any special conditions for execution of the test script. For example:

1. Adjust dates within the application under test for closing of the books,
2. Update any fields that require unique data,
3. Display settings for context-sensitive/analog/bitmap recording,
4. List other test scripts that are dependencies,
5. Specify necessary authorization levels or user roles for executing the script,
6. Conditions under which the script can fail, and workarounds for re-launching the script,
7. Applications that need to be either opened or closed during the script execution, and
8. Specific data formats, for instance, European date format versus US date format.

Furthermore, scripts should contain a header with a description (i.e., what it is used for) and its particular purpose (i.e., regression testing).
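A minimal header covering the description and purpose might look like the following sketch. In QTP itself this would be a VBScript comment block at the top of the script; it is shown here as language-neutral comment lines, and every name and value is a hypothetical example, not taken from any real project.

```python
# =============================================================
# Script:      SC_CreateSchedulingAgreement   (hypothetical name)
# Description: Creates a scheduling agreement in the application
#              under test and verifies the confirmation message.
# Purpose:     Regression testing
# =============================================================
```

A header like this is only a starting point; the fields listed in the next paragraph (author, owner, dates, requirement identifiers, and so on) belong in the same block.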
The script header should also include the script author and owner, creation and modification dates, the requirement identifiers that the script traces back to, the business area that the script supports, and the number of variables and parameters for the script. Providing this information in the test script header facilitates the execution, modification, and maintenance of the script for future testing efforts.

6) Perform Version Control on Test Scripts

Many corporations spend tens of thousands of dollars on test tools but ignore the by-product of the test tool, namely the recorded test script. For companies building libraries and repositories of automated test scripts, it is highly advisable to perform version control on automated test scripts. Version control helps to track changes made to the test scripts and to maintain multiple versions of the same test script.

7) Adhere to Test Script Naming Standards and Storage

Test scripts should follow the project's accepted naming standards and should be stored in the designated repository, such as a shared drive or test management tool.

The test manager should designate naming standards for the test scripts that include information for these areas:

1. Project name (i.e., GSI, which stands for Global SAP Implementation),
2. Release number (i.e., the version or release number that will be released/deployed),
3. Subject area or test category (i.e., SC for Security Testing, LT for Load Testing),
4. Sequential test case number,
5. Title or function that will be tested (i.e., procurement from external vendors).

Following these tips enables testers to build more robust test scripts for their organizations. Developing maintainable test scripts also maximizes the benefits of automated test tools. Companies can realize ROI from automated test tools when automated test scripts are reused for future testing efforts, reducing the time needed to complete a testing cycle. The techniques above will help companies build test scripts that meet these objectives.
Thank you,
H2KInfosys