Author Notes: This is the PowerPoint template for the Innovate 2011 Track Sessions.
Let’s start with a little information on your presenters today.
And a little information on the company we work for.
Automated software testing has long been considered critical for software development organizations, but it is often thought to be too expensive or difficult for companies to implement. Today we will discuss the phases we followed to overcome these test automation challenges and the lessons we learned along the way. Here is a high-level overview of today's presentation. In making the transition to building a test automation infrastructure, we went through three core phases. (describe phases)
As with any new process, it is best to have a roadmap. This graph displays the five milestones achieved along the path to test automation. The vertical axis represents the progression from manual to automated testing, while the horizontal axis represents how frequently tests run. The end goal was to have automated tests run and report on a daily basis without the need for user interaction. In phase 1, we focused on setting up the infrastructure.
Our organization's motivation to switch to automated testing was spurred by the following:
- The large amount of time and labor it took to test manually
- It was impractical to test each build
- It was difficult to test all combinations
- It was impossible to report on the quality of daily builds
Making the switch to automated testing takes setup. At a high level, here are the steps we used to achieve this:
Automation of manual tests was deemed a high priority by our organization's testing leadership. Securing management's decision to support automation was the initial and most important step in making the switch to automated testing. Once the decision was made, we conducted a review of automation software and ultimately chose Rational Functional Tester after evaluating many options.
The next step was to create staff position(s) to promote automation in the organization. To give ourselves the best chance of success, we appointed a central person as the lead for driving the automation process across the organization. This position has changed over time, but it was initially critical given that our group did not have test automation processes. You want someone in this position who will train and teach automation skills, as well as oversee software, hardware, and process changes. Don't kid yourself that this can be done part time just to get it going. Since test automation really is another software development effort, it is important that those performing the work have the correct skill sets. Good QA staff are still necessary to identify and write test cases for what needs to be tested, but a software quality engineer takes these test cases and writes code to automate the process of executing them.
Next, create a central site on your department's internal website where you can place documentation and reporting links. To avoid sending the same information time and again to staff, create a page with instructions and simply link to that page whenever the topic is relevant. We made an FAQ page, which eventually grew into a wiki page. We also created a central page where we could review the daily test logs from CQTM.
Create a training process for testing staff to use automation tools. Initially, our training methodology was to train the trainer. The automation lead was given training by our software vendor. The lead then wrote training material based on the coursework supplied by the vendor, giving classes on how to set up the software and how to use it to record and play back tests under a project. This allowed the automation lead to be the conduit as issues and questions arose in using the software. The lead also helped staff set up their projects, which kept projects consistent across the organization.

Our organization did not have prior automation experience, so we chose a GUI testing framework as our initial testing methodology. Our test automation tool, Rational Functional Tester, provided record and playback features that allowed us to interactively record user actions and replay them any number of times, comparing actual results to those expected. The advantage of this approach is that it required little or no software development skill. However, there were some inherent problems with this methodology. First, test automation was applied only at the final stage of testing, when it was most expensive to go back and correct a problem; QA did not get a chance to create scripts until the product was nearly finished and turned over. Second, capture/playback was temporarily effective, but using it to create an entire suite made the scripts hard to maintain as the application was modified.

Our initial strategy for test automation was to achieve small successes and grow. This gave us the opportunity to try things, make mistakes, and design even better approaches like code-driven testing.

Implement a curriculum of object-oriented methodology for QA staff. Code-driven testing is a more advanced approach in which the interfaces to classes, modules, and common libraries are exercised with a variety of input arguments to validate that the results returned are correct.
But to write code-driven tests, we needed to increase our team's collective knowledge of Java and how to use it in developing code-driven tests. We did this by offering a three-month training class for staff in which the basic concepts of Java were reviewed and applied. As a benefit of taking this class, we started to realize what code-driven testing really meant. We found that automation test scripts should be designed to be modular, with options to run several test scripts. The scripts should be relatively small and modular so they can be shared and easily maintained. Data for input should not be hard-coded into the script, but rather read from a file or spreadsheet, looping through the module as many times as you wish with variations of data. This code-driven methodology considerably shortened the test scripts, making them easier to maintain and reuse from other test scripts. We have since moved on to code-driven testing and will discuss it more later.

Set aside the time to train in-house staff. On another note, we found that due to project constraints, some full-time staff did not initially have time to write automation for their projects, so we started to rely more on a few contractors for automation work. This caused an issue: when a contractor's time was up, the experience and skills 'walked away' with them. We ended up spending several months backfilling the test automation with staff. This could have been alleviated if we had given staff the automation assignments to begin with, as the loss of a resource affected both maintenance and new development of test scripts. Since then we have corrected this by integrating our full-time employees who work on automation into our automation scrum process. We will talk more on this later.
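The data-driven pattern described here can be sketched in plain Java (RFT scripts are Java). This is a minimal, self-contained illustration rather than our actual script: the class and method names are hypothetical, and an in-memory list stands in for the file or spreadsheet that would hold the test data.

```java
import java.util.List;

// Hypothetical sketch of a data-driven test module: the input data lives
// outside the script (here an in-memory list standing in for a CSV or
// spreadsheet), and one small, reusable test step is looped over each row.
public class DataDrivenSearchTest {

    // Stand-in for a single modular test step; a real RFT script would
    // drive the application under test instead of building a string.
    public static String runSearchStep(String term) {
        return "results-for-" + term.toLowerCase();
    }

    // Loop the module once per data row and report how many iterations ran.
    public static int runAll(List<String> dataRows) {
        int executed = 0;
        for (String term : dataRows) {
            runSearchStep(term);
            executed++;
        }
        return executed;
    }

    public static void main(String[] args) {
        List<String> data = List.of("Smith", "Johnson", "Williams");
        System.out.println("Executed " + runAll(data) + " data-driven iterations");
    }
}
```

Because the data is separate from the logic, adding a new test variation means adding a row of data, not another copy of the script.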
You need more than just a testing tool. We found early on that if you have more than one person working on the same project, you will run into source control issues. You need a source control repository and a test management system where you can run and report on the tests. We use…
In addition to the test software setup, you will need additional hardware. We found that if a user runs test scripts on their primary PC, it doesn't save any time. We also found that setting up a PC lab from scratch would be cost-prohibitive and difficult to maintain, so we adopted the following hardware setup.
In summary, Phase 1 was a combination of making the switch to automated testing and setting up infrastructure. Here is what you can expect for testing methodology in phase 1. We also had shortcomings.
In Phase two we built upon the growing automation infrastructure and developed an automation reporting architecture as well as a common code methodology.
In Phase 2 we…

Use methods for scripting:
- Write once, use often – we should never copy and paste!
- Use constants to store object properties
- Sharing objects becomes intuitive – at the enterprise level, sharing object maps among multiple team members was no longer feasible
- Methods become property-based – methods are not associated with any object maps, which allows them to be shared
- Evolve methods into common methods – share methods across projects

Develop a common architecture for the project file structure:
- Standardization – define what goes where
- Easy maintenance – pinpoint bugs in code
- Easy to understand – code is easy to read

Develop automation reporting
Improve maintainability. Examples:
- Super Helper: enter text, click
- Project Common Lib: login, navigation, logout
- Feature Common Lib: test case subroutines, search, validate search result, validate search details
- Test Case: calls subroutines
(animation) Making changes: for example, change the login method at the Project Common Lib level and the change trickles down to every test case.
So what are these methods, and where did they come from? Here is an example of a basic web app. To test this page we want to enter some text, click the submit button, and then validate that the desired result shows on the page. We found through experience that each person recorded or coded these actions in their own way. We also found that we were copying and pasting sections of code very often. In order to reduce this repetition of work, we used our collective experience to develop methods that would perform these common actions.
These are the most popular methods. Each is a collection of code that incorporates the best accepted techniques for interacting with objects, as well as standard timeouts and error handling. Each method has been vetted and is ensured to work.
- setTextByProp – enter text into forms
- clickObjectByProp – click on buttons and links
- validateObjectWithProp – test for certain pieces of text contained within objects; logs pass/fail and captures screen shots if the test fails
Every test script uses these, so we don't have to reinvent the wheel each time. In order for these methods to be modular and portable, we had to store the object properties in a modular way as well.
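As a rough sketch of what property-based helper methods look like, here is a simplified, self-contained Java illustration. The method names match those above, but the bodies are stubs: a Map stands in for the application state, whereas the real methods wrap RFT object lookups, timeouts, logging, and screen-shot capture.

```java
import java.util.Map;

// Simplified, self-contained sketch of property-based helper methods.
// In the real framework these wrap Rational Functional Tester object
// lookups; here a Map stands in for the page so the flow can be shown.
public class HelperMethods {

    // "Enter text" into the field identified by an object property.
    public static void setTextByProp(Map<String, String> page,
                                     String prop, String text) {
        page.put(prop, text);  // real version: find object by prop, then set its text
    }

    // Validate that an object's content matches the expected value;
    // returns pass/fail instead of writing to a test log.
    public static boolean validateObjectWithProp(Map<String, String> page,
                                                 String prop, String expected) {
        String actual = page.get(prop);  // real version: find object, read its text
        return expected.equals(actual);  // real version also captures a screen shot on fail
    }
}
```

Because each helper is addressed by a property rather than an object map entry, any script in any project can call it without sharing a map.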
This led to the development of constants. Each object in the web app under test has properties. Instead of using the Object Map or coding these properties into the test script, we stored them as constants, making them easy to organize and maintain. When objects are renamed or page content is updated in the app, fixes can be made to the constants without editing the test scripts: spend less time fixing scripts and more time writing them. This made test development easier and more streamlined, which resulted in more complex test cases and increased test coverage.
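A constants class of this kind might look like the following Java sketch; the class name and property values are hypothetical examples, not our production constants.

```java
// Hypothetical constants class: object properties are stored in one place,
// so a renamed field in the application means one edit here rather than
// an edit in every test script that touches the field.
public final class SearchPageConstants {
    public static final String LAST_NAME_FIELD = "providerLastName";
    public static final String SUBMIT_BUTTON   = "searchSubmit";
    public static final String RESULT_LINK_URL = "/provider/detail";

    private SearchPageConstants() {}  // no instances; constants only
}
```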
So how does this all fit together to improve test coverage? Example: this is the myregence.com provider search. Suppose the business requirement is that a minimum of 3 results must be returned regardless of the search parameters. How do we test this? Record and playback: test a single search. Data-driven: a datapool of search parameters and results; however, the database is ever evolving and the results may change from build to build. How do we make a test that is both thorough and robust? We drive the test with code.
We drive the test with code. (Animation) 1. Create an array of search parameters; here I use the top 100 last names from the 2000 US census. 2. Write a method that completes a search using setTextByProp and clickObjectByProp. 3. Now, how do we count the number of results if they will be different each time? Look at the URL: the first part of the URL is the same for all results. (Animation) 4. Write a method that counts links, getObjectCount. Now choose a random name each time the test is run. We now have a test that covers a much wider range of data, and we can confidently say that no matter what search parameter we threw at the application, it always returned a minimum of 3 results.
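The steps above can be sketched in plain Java. The names, URLs, and the getObjectCount stand-in below are illustrative assumptions; the real test drives the browser through RFT.

```java
import java.util.List;
import java.util.Random;

// Sketch of the code-driven check: pick a random last name, run the
// search, then count result links that share the common URL prefix.
// Names and result URLs here are illustrative stubs.
public class ProviderSearchTest {
    static final String RESULT_PREFIX = "/provider/detail";

    // Stand-in for getObjectCount: count links whose URL starts with the prefix.
    public static int countResultLinks(List<String> linkUrls) {
        int count = 0;
        for (String url : linkUrls) {
            if (url.startsWith(RESULT_PREFIX)) count++;
        }
        return count;
    }

    // Choose a different search parameter on every run.
    public static String pickRandomName(List<String> names, Random rng) {
        return names.get(rng.nextInt(names.size()));
    }

    public static void main(String[] args) {
        List<String> names = List.of("Smith", "Johnson", "Williams", "Brown");
        String name = pickRandomName(names, new Random());
        // ...run the search with setTextByProp/clickObjectByProp, then
        // collect the links on the results page (stubbed here):
        List<String> links = List.of("/provider/detail?id=1",
                                     "/provider/detail?id=2",
                                     "/provider/detail?id=3",
                                     "/help");
        int n = countResultLinks(links);
        System.out.println("Search for " + name + " returned " + n + " result links");
    }
}
```

Each run exercises a different data point, so repeated runs accumulate coverage a fixed recorded script never could.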
To recap: if we generate test data at runtime and run the test several times, our test coverage is wider – effectively as wide as we like, depending on how many times we run the test, and much wider than manual testing or record and playback scripting.
When testing manually and with record and playback, we can achieve moderate test coverage in a minimal amount of time. To achieve increased test coverage, we had to take the time to write the methods necessary for code-driven testing. This was our greatest hurdle in switching to automation. Once we built up a strong arsenal of methods, we were able to write better tests in less time. Now, how do we communicate our testing success?
We created a website that queried the CQTM database and reported the test results via the intranet. We also created a service that emailed a snapshot of the daily testing results.
In summary, Phase 2 was about making the switch from record and playback to code-driven testing. Here is what you can expect for testing methodology in phase 2.
In Phase 3, we further refined the common code methodology and reporting, and implemented the Agile process.
In Phase 3 we created a common library for ALL automation:
- Continue to use constants to store object properties
- Sharing objects becomes intuitive – at the enterprise level, sharing object maps among multiple team members was no longer feasible
- Methods become property-based – methods are not associated with any object maps, which allows them to be shared
- Evolve methods into common methods – share methods across projects

This led to:
- Shorter development time
- Shared code across the enterprise
- Reduced repetition of work

Implement the Agile process for ALL automation efforts
In order to share our achievements and to reduce test development time for other projects, we created a project containing what we call Super Helper methods, which can be included in other projects. These methods are generic enough that they can be applied to web apps, Java apps, or even WinForms. With this type of standardization, any QA engineer can automate and maintain any application in the enterprise. We used the Javadoc feature to document the methods, which is useful when examining a method in Eclipse and on the wiki. Every project in the enterprise is expected to use this library and adhere to a common file structure, as well as a common execution and reporting process. Teams are also expected to join the Agile scrum, which will be discussed later.
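As an illustration of the Javadoc-documented style, here is a hypothetical Super Helper entry; the method name follows the presentation, but the body is a stub, since the real version performs an RFT object lookup and click.

```java
/**
 * Sketch of a Super Helper library class, documented with Javadoc so the
 * descriptions appear in Eclipse hovers and can be published to the wiki.
 */
public final class SuperHelper {

    /**
     * Clicks the object whose identifying property matches {@code property}.
     * (Illustrative stub; the real method locates the object via RFT,
     * applies the standard timeout, and logs the action.)
     *
     * @param property value of the identifying property of the target object
     * @return true if the object was found and clicked
     */
    public static boolean clickObjectByProp(String property) {
        return property != null && !property.isEmpty();
    }

    private SuperHelper() {}  // static utility class; no instances
}
```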
This is the common execution and reporting process. It is designed to be loosely coupled and modular. There are three components – running, scripting, and reporting – and each can be upgraded independently of the others. In fact, we are currently in the process of replacing the reporting component with RQM; this will not affect how we write the scripts or execute them. At this time we also further refined the reporting website to contain links to the HTML version of the RFT test logs, so that failures can be quickly triaged straight from the email. Managers and QA staff can view the failures without having to open CQTM or RFT.
Agile methodology has extended the success of our automation. Given that testers are at different levels of experience across diverse projects, having a scrum team has allowed us to come together as a group to overcome automation challenges, creating a cohesive team in which automation concepts can incubate. Without a scrum process, automation challenges can hinder or stop your effort cold. Some of the challenges we came across were:
- 15-minute daily scrums
  - A manageable time commitment for all participants
  - Maintain focus on automation goals for the sprint
  - Improve team communication and commitment
  - Run by the scrum master
- Beginning-of-sprint planning session
  - All automation project participants are present in the planning
  - Each member puts forth the stories they will automate in the upcoming sprint
  - The team votes on the complexity of each story
- End-of-sprint demo
  - All automation project participants are present in the demo
  - Showcase of automation efforts completed during the sprint
  - Diverse audience including key stakeholders and management
We have been using these elements of agile over the past year. Because of this practice, we have overcome some difficult issues. Here’s a snapshot of our success to date.
In summary Phase 3 was successful in all aspects…
Automated software testing has long been considered critical for software development organizations, but it is often thought to be too expensive or difficult for companies to implement. At Regence, we have broken through these barriers. Integrated test automation efforts have now been in place for the past year and a half. Though there has been a learning curve, we are realizing the fruits of our labors through a reduction in manual testing effort and improved test coverage, thanks to a common automation testing process that can be harnessed by many different testing teams.

One example of an actual reduction in manual effort was highlighted earlier this year when one of our QA teams was awarded the IT Innovation Award. Pre-2010, it took three FTEs two weeks to run 1,250 tests. Now, the same FTEs run the tests in less than one week with more accuracy, thanks to test automation. Automation also allowed us to test features that would otherwise be impossible to test manually due to the breadth of the test data. One feature had 1,000 test data sets, each taking 5 minutes to run and log manually, which would take about 80 hours to complete by hand. As an automated test running on two computers simultaneously, it took 8 hours to complete – a 100% savings in manual effort, because it required absolutely zero user interaction from start to finish.

Next steps: we have proved this process is successful. Our short-term goal is to fine-tune the process so that it can be recreated with little change across test systems, and we would like to push it out to a broader audience. Now we would like to open this up for questions.
Daily iPod touch giveaway sponsored by Alliance Tech. Each time you complete a session survey, your name will be entered to win the daily iPod touch! Complete your session surveys online each day at a conference kiosk or on your Innovate 2010 Portal! On Wednesday, be sure to complete your full conference evaluation to receive your free conference t-shirt!
Author Note: Mandatory Rational Closing Slide (includes standard legal disclaimer). Available in English only.
How We Built Test Automation within a Manual Testing Organization
How We Built Test Automation within a Manual Testing Organization
Randy Pellegrini, Andy Doan
The Regence Group
randy.email@example.com
Session Track Number: 1608
June 5–9, Orlando, Florida