TestComplete in SRS

WHAT WE DID, WHAT WE LEARNED AND
      WHAT WE SHOULD DO
TestComplete

 Tool for creating automated tests, used in SRS for
  more than 5 years
 Has been on the market for several years
 Used by companies such as Siemens, Corel,
  Adobe and UPS
What did we do with TC in SRS?

 Created 50+ test scripts
 Created tool for automated scheduled execution
  (codename TSET)
 Lots of know-how captured in wiki pages
TC script screenshot
TSET screenshot
What do we DO today in SRS with TC?

 Still leveraging TC and TSET to run some tests
   Edge smoke tests

   Edge WebAccess morning check

   SMTP blacklist daily check (knowyourvehicle.com, etc.)

   etc.

 This is separate from the Re En changes
What have we learned using TC in SRS?

 Script maintenance is the key issue
 Script execution time is measured in minutes,
  not seconds
 One failed test doesn't mean one bug 100% of the
  time
 Infrastructure setup is a KSF (key success factor)
 TC is not compatible with
    Mobile UI test automation
    Silverlight UI test automation (believe me)
    “Spoonized” apps
KSF for using TC

 Develop apps with test automation in mind
   e.g. create a hotkey combination for exiting the app
   e.g. avoid changing the names of form items like buttons, labels, etc.
   ….

 Leverage best practices to keep script maintenance
  effort low
 Test one single item per script
 Have dedicated infrastructure for running tests
 Avoid, as much as possible, using UI steps to prepare
  the data needed to run the test
     Go directly to the DB or use WSM instead
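The last point can be sketched as follows — a minimal example of seeding test data with one SQL call instead of clicking through the UI. This is plain Python with `sqlite3` standing in for the real SRS database; the `accounts` table and its columns are hypothetical, not the actual schema:

```python
import sqlite3

def seed_test_account(conn, login, password_hash):
    """Insert the account a UI test needs, skipping the sign-up screens.

    Hypothetical table/columns: the point is preparing data in one
    SQL statement instead of driving the UI through every form.
    """
    conn.execute(
        "INSERT INTO accounts (login, password_hash) VALUES (?, ?)",
        (login, password_hash),
    )
    conn.commit()

# Demo against an in-memory DB; a real run would connect to the test DB (or call WSM).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (login TEXT PRIMARY KEY, password_hash TEXT)")
seed_test_account(conn, "smoke_user", "x1")
row = conn.execute("SELECT login FROM accounts").fetchone()
```

The same idea applies whether the shortcut is direct SQL or a WSM call: the UI under test only runs for the steps actually being verified.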
How to continue?

 Figure out how to deal with false alarms
   Manual verification is not desirable

   Multiple executions of failed tests?

 Create a test script naming standard
   We relied on the TFS Test Case # for this

   More information will be needed in the script header,
    like steps, etc.
 Create a guideline: “how to develop with
  automation in mind”
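The "multiple executions of failed tests" idea can be sketched in a few lines — a plain-Python re-run wrapper, not TSET's actual API (the function names here are hypothetical):

```python
def run_with_retries(test, attempts=3):
    """Re-run a failing test up to `attempts` times.

    Returns True if any attempt passes. Only a test that fails on
    every attempt gets reported as a suspected bug; a test that
    eventually passes is classified as flaky (timing/environment noise),
    which cuts down on false alarms without manual verification.
    """
    for _ in range(attempts):
        if test():
            return True
    return False

# Simulated flaky test: fails on the first attempt, passes on the second.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return calls["n"] >= 2

passed = run_with_retries(flaky)
```

The trade-off is longer total execution time for failing runs, which matters given that script execution is already measured in minutes.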
How to continue?

 Use the “3 steps” rule for creating a test:
   1 – Use the recorder tool, following the execution steps

   2 – Polish the generated code to match best practices and
    standards
   3 – Create the “asserts” manually
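Step 3 might look like the sketch below — plain Python in the spirit of a TestComplete checkpoint rather than its real API, with hypothetical names and values. The asserts are written by hand because the recorder can only replay actions; it cannot know which values prove the test passed:

```python
def check_equal(name, expected, actual, log):
    """Tiny checkpoint-style assert: log the outcome and return
    pass/fail instead of raising, so one script run can report
    several checks."""
    ok = expected == actual
    log.append(("PASS" if ok else "FAIL", name, expected, actual))
    return ok

# Hand-written asserts added after recording and polishing (values are illustrative).
log = []
title_ok = check_equal("window title", "SRS Edge", "SRS Edge", log)
count_ok = check_equal("row count", 10, 9, log)
```

Keeping the asserts separate from the recorded navigation also helps with the maintenance point above: when the UI flow changes, the checks themselves usually survive.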
Questions?
