Putting the "User" back into UAT


  • This presentation is aimed at organisations that are, or will be, implementing custom-developed or packaged software systems and are unsure how to tackle the critical User Acceptance Testing (UAT) activities. UAT is a key contributor to the successful implementation of any software system, ensuring that the system is fit for purpose before it is deployed for production use. However, UAT can be seen as an extreme and extra demand placed on already busy users, and often suffers from poor planning or rushed or flawed execution; occasionally it is ignored altogether. The intended users of a software system have critical knowledge of the business domain in which the system will operate. To improve the success of software system implementation, it is paramount that priority be given to UAT with user involvement. This presentation illustrates a pragmatic, common-sense method for UAT that users with limited or no previous testing experience can follow to conduct UAT activities effectively. In addition, it reinforces the benefits that organisations receive from performing UAT and addresses the common reasons organisations consider UAT difficult. This pragmatic method was used for the UAT of the National Library’s Web Curator Tool (WCT), and the presentation includes a brief overview of the WCT Project and a discussion of the Library’s experiences while applying this method.
  • Andrew McDowell is a senior consultant for Equinox Limited, a leading New Zealand-owned software development, IT consulting and IT training company, established in 1995. Andrew has nine years’ IT consulting experience and has performed a variety of roles including Business Analyst, Solution Architect, Developer, Test Analyst, Performance Tester and UAT Liaison. Andrew introduced this UAT method to the National Library for the WCT Project. Dr. Gordon W. Paynter is a Technical Analyst for the Innovation Centre at the National Library of New Zealand. He earned a Ph.D. in Computer Science from the University of Waikato, and then worked at the New Zealand Digital Library Project and the University of California. His current projects centre on web archiving and newspaper digitisation.
  • What is User Acceptance Testing? Testing performed by the people the system has been designed for, to verify that the system satisfies the business requirements and is capable of achieving its intended purpose. This differs from other testing activities, such as unit testing and system testing, which focus on ensuring the system operates according to its design specification. Successful execution of UAT is usually the primary criterion for acceptance of the system by the organisation. UAT objectives: confirm the key business activities the system was built to support can be performed successfully – is it fit for purpose? Reveal any inefficiencies in the way the system operates that may negatively impact day-to-day business activities. Confirm the system is usable by its intended user community. UAT does not replace other test activities. It’s important to note that UAT is just one activity of the software testing process. It does not replace the other test activities such as Unit Testing, Integration Testing, System Testing and Speciality Testing (Performance, Security).
  • Benefits of UAT. The organisation gains confidence that the system is capable of doing what it needs to do (or not) before production deployment. UAT provides evidence that the system does work as intended, supporting the decision to sign off on (accept) the completed system. Intended users have detailed knowledge of the business domain in which the software will operate and are likely to identify test cases that may have been omitted during other test activities. Users gain experience using the application; UAT is an effective form of training. Users get the opportunity to get involved in the development process and provide feedback on the system. Their involvement means they are more likely to buy into the system and champion it to their colleagues.
  • Barriers to UAT. Organisations aren’t sure how to go about it. This is understandable given most organisations aren’t in the business of software testing. Organisations don’t want to bear the costs of freeing up people and resources for UAT. Make no mistake, UAT does require an investment of time and money, but this investment is far less than the cost of deploying an application that is not used because it fails to satisfy user needs. Organisations do not believe they have suitably skilled staff to perform the UAT testing activity. On the contrary, the intended users have the best knowledge of the business domain to perform UAT effectively. Besides, if you don’t have any staff who can perform UAT, how do you expect your staff to use the system? A common trap is for organisations to agree to a signoff timeframe that is insufficient for anything other than a cursory review of the system (e.g. brief exploratory testing). To overcome this, start planning for UAT early: there is nothing to stop you from developing your test cases as soon as the requirements are confirmed. Secondly, allow sufficient time to execute at least one full iteration of your test cases during the signoff period; this timeframe will vary depending on the size of the project and its complexity. Organisations do not believe UAT is necessary given the system has been rigorously tested by the vendor or development team. Tui Ad. Most vendors/dev teams will make a serious effort to test their software, but the goal of their testing activity is to confirm the system operates according to its design specification. To achieve this they will make use of unit testing, system testing and specialty testing techniques (e.g. Performance and Security testing). The goal of UAT is to confirm the system satisfies the business requirements.
  • Tui ad modified with permission of DB Breweries Ltd.
  • Agile approach. How many people use Use Cases for their requirements documentation? The approach we recommend is an agile approach: leverage outputs from the requirements analysis process to form your core set of test cases, and minimise documentation as much as possible while still ensuring there is sufficient information to make the test cases repeatable. It is important to note that although we’ve termed this an agile approach, it does not require the development or other testing activities to follow an agile methodology; the name is intended to reflect that this approach is flexible and adaptable. Traditional approaches. There have been two common approaches to UAT. The first is an ad-hoc approach loosely based on exploratory testing, where testers essentially give the system a test drive with no specific testing goals. Although this approach tends to be straightforward and requires relatively little effort, the main problem is that it significantly undermines the value an organisation can receive from UAT, particularly in regard to confidence and evidence. Those organisations that have pursued UAT with vigour have generally ended up with a highly structured approach, involving significant amounts of test planning as well as documentation. Although this approach is more likely to achieve the benefits of UAT, the major disadvantage is that it requires significant effort. It has also tended to be based on a system testing model, and hence is less easily understood by users with no previous testing experience. Our agile approach cherry-picks the best aspects of each of the traditional approaches to maximise the benefits of UAT.
  • Testers. The first step is to determine the people to perform the testing activities. Select people who are going to be part of the user group for the system. The benefits: these people will have expertise in the business domain in which the system will operate, they’ll get the opportunity to gain experience with the system, and you’re much more likely to get user buy-in for the system. Often BAs are selected to perform UAT. Given their knowledge of the system, BAs are ideal to act as UAT team leaders/mentors. Their knowledge can be particularly helpful for the test planning activities, but our suggestion is to ensure members from the user community are involved as much as possible, especially with the execution activities. Getting the BA(s) to do all the planning and execution will detract from some of the people-oriented benefits UAT offers (i.e. training, buy-in). Defect management. Keeping track of defects during the UAT testing process is important. You need to know what defects have been raised, fixed and verified, as well as the defects that are outstanding. There are dozens of open source and proprietary software applications that provide defect tracking support, e.g. Bugzilla, Trac, Borland StarTeam, Microsoft Team Foundation Server (TFS). You’ll probably find that for a custom development, your vendor/dev team will be using one of these systems. If not, you can always use the software tester’s hammer – Microsoft Excel. As a guide, the basic information a defect needs is a reference number, a title, a description, sufficiently detailed steps to allow the developer to reproduce the defect, a state (e.g. open, fixed, verified), the priority/severity of the defect, and the version of the software the defect was found in as well as the version it was fixed in.
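The basic defect fields listed above can be sketched as a simple record. This is a minimal, hypothetical sketch (the field and state names are illustrative, not drawn from any particular tracker), showing the lifecycle from open to fixed to verified:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class State(Enum):
    OPEN = "open"
    FIXED = "fixed"
    VERIFIED = "verified"

@dataclass
class Defect:
    ref: str                         # unique reference number, e.g. "DEF-001"
    title: str
    description: str
    steps_to_reproduce: List[str]    # detailed enough for the developer to reproduce it
    severity: str                    # e.g. "high", "medium", "low"
    found_in_version: str
    state: State = State.OPEN
    fixed_in_version: Optional[str] = None

# Raise a defect, then record the fix and the tester's verification
d = Defect(
    ref="DEF-001",
    title="Search returns no results for exact title match",
    description="Searching for an exact title yields an empty result list.",
    steps_to_reproduce=["Open search page", "Enter an exact title", "Press Search"],
    severity="high",
    found_in_version="0.9.1",
)
d.state, d.fixed_in_version = State.FIXED, "0.9.2"   # developer fixes it
d.state = State.VERIFIED                             # tester confirms the fix
```

Even a spreadsheet with one column per field above serves the same purpose; the point is that every defect carries the same minimum set of information.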
  • Test cases. The hardest part of the UAT process is to identify the scenarios to test (i.e. test cases). A good rule of thumb is to focus on tasks a user can undertake with the system to achieve a specific goal or outcome (e.g. perform a search, update a customer’s details, etc.). If you have requirements in Use Case form, this saves a lot of effort, as Use Cases describe the interactions the system has with users and other systems to achieve business goals; Use Case flows and scenarios make ideal test cases. If your requirements are in the standard statement format, identifying test cases can be a little more difficult, as requirements in this form are often at varying levels of detail. As noted, base test cases on the key tasks users will undertake with the system, and cross-reference the test cases back to the requirements to ensure adequate coverage of system requirements. Data. Identifying test data can also be difficult. Focus on the key data elements that will contribute to the outcome of the test case (i.e. data elements the system is likely to process, manipulate or validate in some way). These key data elements should be recorded with the test cases to ensure test cases can be repeated time and again with the same values. Use business knowledge to determine test data for the key data elements that is representative of real-world values. This is where using people from the intended user group bears fruit: if the new system is a replacement for an existing system or paper-based process, the testers will be aware of the data they usually deal with and, more importantly, the data that has caused problems in the past. Documentation. During the planning stage you want to ensure you have documented sufficient test cases to cover your system requirements. If necessary you can do this by cross-referencing your test cases back to the system requirements they cover. Document the cases in enough detail to understand what they are trying to achieve. It’s unlikely you’ll know at this stage exactly how the system will work, so don’t get too caught up in defining the steps for each test case and/or the expected results and methods to use to verify the expected results.
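The planning advice above — base test cases on user goals, record the key data elements alongside each case, and cross-reference back to requirements — can be sketched as a minimal structure. All identifiers (TC-01, REQ-05, and so on) are illustrative placeholders, not from the presentation:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TestCase:
    case_id: str
    goal: str                       # the user task/outcome being tested
    requirement_refs: List[str]     # requirements this case covers
    key_data: Dict[str, str]        # data elements that drive the outcome, with fixed values
    steps: List[str] = field(default_factory=list)  # elaborated later, during early iterations

def uncovered_requirements(requirements: List[str], cases: List[TestCase]) -> List[str]:
    """Requirements not covered by any test case — gaps to fill during planning."""
    covered = {r for c in cases for r in c.requirement_refs}
    return sorted(set(requirements) - covered)

cases = [
    TestCase("TC-01", "Perform a search by title",
             requirement_refs=["REQ-05"],
             key_data={"title": "Annual Report 2006"}),
    TestCase("TC-02", "Update a customer's details",
             requirement_refs=["REQ-08", "REQ-09"],
             key_data={"customer_id": "C-1001", "new_email": "user@example.com"}),
]

# Cross-reference check: REQ-12 has no test case yet
print(uncovered_requirements(["REQ-05", "REQ-08", "REQ-09", "REQ-12"], cases))
```

Recording the key data values with the case (rather than leaving testers to invent data each run) is what makes the test repeatable with the same values, iteration after iteration.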
  • Execution. Take advantage of early releases of the software to work iteratively through the test cases. This will give you the opportunity to see the system in action early, as well as become comfortable with the testing process. Be systematic and record the results of each test case execution; these results provide the evidence that the system is (or is not) fit for purpose. Focusing on the test cases that failed during previous iterations can help to speed up testing: you can usually assume that a test that passed in a previous release will pass now. Of course, you sometimes have to start over and test everything again (e.g. for a major release and/or for the final signoff). During the first and second test iterations you can elaborate the documentation you have, to clarify the steps involved and even the expected results. This will make subsequent test iterations easier. Be sure to add any ad-hoc tests as new test cases; however, one word of caution: make sure any new tests relate directly to the documented system requirements. If it’s not a system requirement, it doesn’t need to be tested (i.e. it’s a change request). Wrapping up. Listen to what the testers have to say about the system. The system may provide the functionality to meet the system requirements, but if the testers are telling you it operates inefficiently and/or does not meet business requirements, it will be worth your time to reassess the value of the system before deploying it.
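The execution loop described above — record results per iteration, then focus the next iteration on the cases that last failed, falling back to a full retest for a major release or final signoff — can be sketched like this (case IDs and the data layout are illustrative assumptions):

```python
from typing import Dict, List

# results[iteration][case_id] = "pass" or "fail"; cases not re-run in a later
# iteration keep their last known result (a pass is assumed to still hold).
results: Dict[int, Dict[str, str]] = {
    1: {"TC-01": "pass", "TC-02": "fail", "TC-03": "fail"},
    2: {"TC-02": "pass", "TC-03": "fail"},
}

def next_run(results: Dict[int, Dict[str, str]],
             all_cases: List[str],
             full_retest: bool = False) -> List[str]:
    """Cases to execute next: everything for a major release or final signoff,
    otherwise just the cases whose most recent result was a failure."""
    if full_retest or not results:
        return sorted(all_cases)
    last_known: Dict[str, str] = {}          # most recent result per case
    for iteration in sorted(results):
        last_known.update(results[iteration])
    return sorted(c for c, r in last_known.items() if r == "fail")

print(next_run(results, ["TC-01", "TC-02", "TC-03"]))  # only TC-03 is still failing
```

The per-iteration results dictionary doubles as the evidence trail: it shows exactly which cases passed in which release, which is what supports (or withholds) signoff.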
  • Tui ad modified with permission of DB Breweries Ltd.

    1. Putting the “User” back into UAT – An agile approach to User Acceptance Testing. GOVIS 2007, 1.30 p.m. Thursday 10 May 2007
    2. Introduction
       • Andrew McDowell, Senior Consultant, Equinox Limited
         • What is UAT?
         • The benefits of UAT
         • The barriers to UAT
         • UAT the agile way
       • Dr. Gordon Paynter, Technical Analyst, Innovation Centre, National Library of New Zealand
         • Case study – UAT at the National Library of New Zealand
         • Conclusion
       • Questions
    3. What is User Acceptance Testing (UAT)?
       • A software testing activity performed by end users to
         • Test the functionality of the system before it is deployed
         • Determine if the system meets user needs
    4. The benefits of UAT
       • To gain confidence before deployment
       • To obtain evidence the system is fit for purpose
       • Users have domain knowledge
       • Users gain application experience (training)
       • User buy-in
    5. The barriers to UAT
       • Not sure how to go about it
       • The cost of freeing up people and resources
       • Belief that there are no suitably skilled staff to do it
       • Already agreed to a signoff timeframe that is too short
       • Belief that the system has been rigorously tested by the vendor or development team
    7. UAT the agile way
       • We recommend an agile approach to UAT
         • Leverage outputs of the requirements analysis process
         • Minimise documentation
       • Traditional approaches
         • Exploratory testing (ad hoc)
         • Structured approach
    8. UAT the agile way
       • Getting started
         • Select testers from the intended user group
         • Establish defect management
       • Planning
         • Base your test cases on your Use Case documentation
         • Focus on key data elements
         • Document enough to confirm coverage of requirements
    10. UAT the agile way
       • Execution
         • Work iteratively
         • Be systematic
         • Focus on failures
         • Elaborate your tests if necessary to ensure they are repeatable
       • Wrapping up
         • Listen to the testers
    11. Case study – UAT for the WCT
       • The Web Curator Tool is a tool for managing the selective web harvesting process.
       • It is designed for use in libraries by non-technical users.
       • Development was a joint project of the National Library of New Zealand and the British Library.
    12. Case study – UAT for the WCT
       • What Equinox did
         • Requirements analysis and Use Case documentation
         • Test strategy and test plan templates
       • What the software developer did
         • Wrote the software
       • What the National Library did
         • Project management
         • Wrote test scenarios based on use cases
         • Tested the software
    13. Test case documents
    17. Results document
    19. Lessons learned
       • Swapping between the scenario spreadsheet and the results spreadsheet is time-consuming (and annoying)
       • The extent of re-testing depends on the number and size of releases
       • Detail-oriented completists make good testers
       • Add new test cases (or the same cases with new data)
       • Good evidence to support signoff
    20. Lessons applied
       • For the next project
         • Combine test cases and test results onto a single sheet
         • Use an even more iterative development process
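The "single sheet" improvement — each test case and its per-iteration results on one row, so testers stop swapping between two spreadsheets — might look like the following export. The column layout and values are purely illustrative, not the Library's actual template:

```python
import csv
import io

# One row per test case; one result column per test iteration.
sheet = io.StringIO()
writer = csv.writer(sheet)
writer.writerow(["Case ID", "Goal", "Key data", "Iter 1", "Iter 2"])
writer.writerow(["TC-01", "Perform a search", "title=Annual Report", "pass", ""])
writer.writerow(["TC-02", "Update customer details", "customer_id=C-1001", "fail", "pass"])
print(sheet.getvalue())
```

A blank result cell means the case was not re-run that iteration (its last result stands), so a single glance down the rightmost columns shows what still needs attention.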
    22. Conclusion – Overcoming the barriers
       • Not sure how to go about it
         • It is simple and straightforward
       • The cost of freeing up people and resources
         • Focus on minimising people’s time and maximising results
       • Belief that there are no suitably skilled staff to do it
         • Actually, testers don’t need previous testing experience
       • Already agreed to a signoff timeframe that is too short
         • Test cases and data are prepared before the test period begins
       • Belief that the system has been rigorously tested by the vendor or development team
         • Less reliance on the vendor/dev team, as you’ll have your own evidence
    23. More information – www.equinox.co.nz
       • www.equinox.co.nz/events
         • Presentation and Excel templates available to download