Getting Started with Risk-based Testing

Whether you are new to testing or looking for a better way to organize your test practices and processes, the Systematic Test and Evaluation Process (STEP™) offers a flexible approach to help you and your team succeed. Dale Perry describes this risk-based framework—applicable to any development lifecycle model—to help you make critical testing decisions earlier and with more confidence. The STEP™ approach helps you decide how to focus your testing effort, what elements and areas to test, and how to organize test designs and documentation. Learn the fundamentals of test analysis and how to develop an inventory of test objectives to help prioritize your testing efforts. Discover how to translate these objectives into a concrete strategy for designing and developing tests. With a prioritized inventory and focused test architecture, you will be able to create test cases, execute the resulting tests, and accurately report on the quality of your application and the effectiveness of your testing. Take back a proven approach to organize your testing efforts and new ways to add more value to your project and organization.

    Document Transcript

    • MC Full-day Tutorials 5/5/2014 8:30:00 AM
      Getting Started with Risk-based Testing
      Presented by: Dale Perry, Software Quality Engineering
      Brought to you by:
      340 Corporate Way, Suite 300, Orange Park, FL 32073
      888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
    • Dale Perry Software Quality Engineering Dale Perry has more than thirty-six years of experience in information technology as a programmer/analyst, database administrator, project manager, development manager, tester, and test manager. Dale’s project experience includes large-system development and conversions, distributed systems, and both web-based and client/server applications. A professional instructor for more than twenty years, he has presented at numerous industry conferences on development and testing. With Software Quality Engineering for fifteen years, Dale has specialized in training and consulting on testing, inspections and reviews, and other testing and quality-related topics.
    • Notice of Rights
      Entire contents © 1986-2014 by SQE Training, unless otherwise noted on specific items. All rights reserved. No material in this publication may be reproduced in any form without the express written permission of SQE Training.
      Home Office: SQE Training, 340 Corporate Way, Suite 300, Orange Park, FL 32073 U.S.A. (904) 278-0524, (904) 278-4380 fax, www.sqetraining.com
      Notice of Liability
      The information provided in this book is distributed on an “as is” basis, without warranty. Neither the author nor SQE Training shall have any liability to any person or entity with respect to any loss or damage caused or alleged to have been caused directly or indirectly by the content provided in this course.
    • Many of the development models in use today originated with the concepts developed by Dr. Shewhart in 1938 at Bell Laboratories and extended to manufacturing by Dr. Deming in the 1950s. This concept was called the life cycle of life cycles.
    • Formal definitions of testing:
      IEEE Standard 829-2008. Testing: the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.
      IEEE Standard 610.12-1990. Testing: the process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
    • Static testing: testing of a component or system at the specification or implementation level without executing the software, e.g., reviews or static code analysis.
      Dynamic testing: testing that involves executing the software of a component or system.
    • Testing every possible data value, every possible navigation path through the code, and every possible combination of input values is almost always an infinite task that can never be completed. Even if it were possible, it would not necessarily be a good idea: many of the test cases would be redundant, consume resources to create, delay time to market, and add nothing of value. For example, consider a single screen (GUI) or data stream with thirteen variables, each with three values:
      • Testing every possible combination means 3^13 = 1,594,323 tests
      • Plus testing of interfaces, etc.
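To make the arithmetic concrete, here is a minimal sketch in Python (illustrative only, not part of the tutorial materials) that reproduces the exhaustive count above and, for contrast, counts the distinct value pairs that a pairwise-coverage approach would need to cover:

```python
from math import comb

# 13 GUI variables, each with 3 possible values.
variables = 13
values_per_variable = 3

# Exhaustive testing: every combination of every value.
exhaustive = values_per_variable ** variables
print(exhaustive)  # 1594323, the 1,594,323 tests cited above

# For contrast: the number of distinct (variable, variable) value pairs.
pairs = comb(variables, 2) * values_per_variable ** 2
print(pairs)  # 702
```

The gap between 702 pairs and 1,594,323 full combinations is the leverage that risk-based selection and combinatorial techniques exploit.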
    • The purpose of discussing software risk is to determine the primary focus of testing. Generally speaking, most organizations find that their resources are inadequate to test everything in a given release. Outlining software risks helps the testers prioritize what to test and allows them to concentrate on those areas that are likely to fail or those areas that will critically impact the customer if they do fail. Risks are used to decide where to start testing and where to test more. Testing is used to reduce the risk of an adverse effect occurring or to reduce the impact of an adverse effect.
    • Organizations that work on safety-critical software usually can use the information from their safety and hazard analysis to identify areas of risk. However, many companies make no attempt to verbalize software risks in any fashion. If your company does not currently do any type of risk analysis, try a brainstorming session among a small group of users, developers, and testers to identify concerns.
    • Risk Factors
      1. Ambiguous Improvement Targets
      2. Artificial Maturity Levels
      3. Canceled Projects
      4. Corporate Politics
      5. Cost Overruns
      6. Creeping User Requirements
      7. Crowded Office Conditions
      8. Error-Prone Modules
      9. Excessive Paperwork
      10. Excessive Schedule Pressure
      11. Excessive Time to Market
      12. False Productivity Claims
      13. Friction Between Clients and Software Contractors
      14. Friction Between Software Management and Senior Executives
      15. High Maintenance Costs
      16. Inaccurate Cost Estimating
      17. Inaccurate Metrics
      18. Inaccurate Quality Estimating
      19. Inaccurate Sizing of Deliverables
      20. Inadequate Assessments
      21. Inadequate Compensation Plans
      22. Inadequate Configuration Control and Project Repositories
      23. Inadequate Curricula (Software Engineering)
      24. Inadequate Curricula (Software Management)
      25. Inadequate Measurement
      26. Inadequate Package Acquisition Methods
      27. Inadequate Research and Reference Facilities
      28. Inadequate Software Policies and Standards
      29. Inadequate Project Risk Analysis
      30. Inadequate Project Value Analysis
      31. Inadequate Tools and Methods (Project Management)
      32. Inadequate Tools and Methods (Quality Assurance)
      33. Inadequate Tools and Methods (Software Engineering)
      34. Inadequate Tools and Methods (Technical Documentation)
      35. Lack of Reusable Architecture
      36. Lack of Reusable Code
      37. Lack of Reusable Data
      38. Lack of Reusable Designs (Blueprints)
      39. Lack of Reusable Documentation
      40. Lack of Reusable Estimates (Templates)
      41. Lack of Reusable Human Interfaces
      42. Lack of Reusable Project Plans
      43. Lack of Reusable Requirements
      44. Lack of Reusable Test Plans, Test Cases, and Test Data
      45. Lack of Specialization
      46. Long Service Life of Obsolete Systems
      47. Low Productivity
      48. Low Quality
      49. Low Status of Software Personnel and Management
      50. Low User Satisfaction
      51. Malpractice (Project Management)
      52. Malpractice (Technical Staff)
      53. Missed Schedules
      54. Partial Life-Cycle Definitions
      55. Poor Organization Structures
      56. Poor Technology Investments
      57. Severe Layoffs and Cutbacks of Staff
      58. Short-Range Improvement Planning
      59. Silver Bullet Syndrome
      60. Slow Technology Transfer
    • Managers tend to focus on two key elements:
      • Controlling and managing costs
      • Return on investment (ROI)
      Marketing and sales people tend to be driven by a single goal: competitive advantage.
      • The concerns of marketing and sales are not necessarily related to functionality; they need an edge
      Engineers (developers, analysts, etc.) tend to be focused on the technology itself:
      • Driven by the use of “new” technology and techniques
      • Not interested in functionality except as it relates to the use of technology
      • Some technically valid decisions may cause functional problems
    • Customers are seeking the answer to a very simple question: “Can I do my job?” They view a product as a tool to do their job. If they can’t use your product to that end, it’s a bad product.
    • The concept of risk-driven testing applies to all software development models and processes. It is critical to developing quality software that meets user/customer expectations, and it is the focus of both the STEP™ methodology and many of the newer agile development processes. If you analyze the newer “agile” development methods, this is one of their key concepts. Interestingly, this is not really a new concept at all; it has been around for a couple of decades.
    • There are many different software lifecycle approaches: waterfall, spiral, incremental delivery, prototyping (evolutionary and throwaway), RAD, Extreme Programming (XP), Scrum, DSDM, etc. The key is to know which process the project is following and to integrate into that process as soon as is reasonable. The later you get involved, the less chance you have to prevent problems.
    • More information on the planning process can be obtained from several courses offered by SQE, as well as the STAR tutorial on test planning and management.
    • This model is based on the risk analysis part of the STEP™ testing methodology developed by Software Quality Engineering.
    • Reference materials include any information available that can assist in determining the testing objects/conditions. Some lifecycles do not have formal sources of documentation. No formal requirements are written. However, there is usually some information about what type of system is being created, the platform on which it will run, the goals of the client, etc. Any information you can gather will help you better understand the test requirements for this project.
    • Risk identification and assessment needs to include the viewpoints noted earlier. The key is to include testers, to focus the team on the testing issues, and to help determine the priority of the features to be developed. Testers need to know the risks and issues in order to properly analyze and design reasonable tests. Different groups have different ideas about software: the more of these disparate groups you can combine, the more accurate a picture you will have of the risks, priorities, and goals for development, and the more accurate the testing goals and objects/conditions for this project become.
    • The inventory process is iterative. You begin it at requirements and continue it at each stage of development; how far you take it is determined by the scope and risks associated with the software being tested. The process is also cumulative. Information from requirements is used to improve the requirements (static testing/reviews), to focus the design, and possibly to improve the design. At the design stage of a project, the information from the requirements inventory process is used to evaluate the design, to ensure problems are corrected, and to gather additional items from the design. This process can be continued as far as the risks to the project warrant.
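As an illustration of the iterative, cumulative idea, the sketch below (hypothetical Python; category and item names are borrowed from the sample inventory later in these materials) shows a requirements-stage inventory being extended, not replaced, at the design stage:

```python
# Requirements-stage inventory: categories mapped to test objects.
requirements_inventory = {
    "REQUIREMENTS": ["XML", "Sales calculations"],
    "INTERFACES": ["Order Entry"],
}

# Items discovered during design; the process is cumulative, so these
# are appended to what requirements analysis already produced.
design_additions = {
    "INTERFACES": ["Account data files"],
    "DATA": ["Conversion"],
}

inventory = {cat: list(items) for cat, items in requirements_inventory.items()}
for cat, items in design_additions.items():
    inventory.setdefault(cat, []).extend(items)

for cat, items in sorted(inventory.items()):
    print(cat, items)
```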
    • There are some common aspects of applications that can be drawn from the design specifications.
    • Impact is sometimes referred to as severity. Once the inventory has been built, the next step is to determine the impact and likelihood of something going wrong with each of the elements identified in the inventory:
      • Determine the impact (loss or damage) and likelihood (frequency or probability) of the feature or attribute failing. While some organizations like to use percentages, number of days/years between occurrences, or even probability “half lives,” a set of simple categories such as the ones listed in the slide above typically provides sufficient accuracy.
      If the likelihood or impact of something going wrong is none or zero, the item could be removed from the analysis, though the removal should be documented.
      • Removal is not recommended. Just leave the item in the inventory; it will naturally drop to the bottom.
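A minimal sketch of such an inventory, assuming the simple category scheme described above (most item names come from the sample project later in these materials; the zero-risk item is hypothetical). Note that the zero-risk item is kept in the list rather than removed, as the notes advise:

```python
# Each inventory element carries a likelihood and an impact category.
inventory = [
    # (element, likelihood, impact)
    ("Sales calculations",   "High", "High"),
    ("New/modified reports", "Low",  "Medium"),
    ("Archive",              "Low",  "Low"),
    ("Legacy help text",     "None", "None"),  # stays in; sorts to the bottom
]
```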
    • Most organizations use a combination of both methods:
      • Qualitative analysis helps express risk in terms a person can understand
      • Quantitative values are then used in conjunction with the qualitative categories to create a matrix in which each risk is given a weighted priority
      Each qualitative category needs to be defined in the organization’s overall process directives. Ideally, there should be several examples of each risk assessment category to aid people in determining which category is appropriate to the object they are assessing. As noted earlier, risk is in the eye of the beholder:
      • Any two people may look at the same event and see an entirely different set of issues. What is critical to one may be trivial to the other.
    • Likelihood: the probability or chance of an event occurring (e.g., the likelihood that a user will make a mistake and, if a mistake is made, the likelihood that it will go undetected by the software)
      Impact: the damage that results from a failure (e.g., the system crashing or corrupting data might be considered high impact)
    • Under likelihood and impact, there may be differences of opinion as to the risk: something can be a high business risk but a low technical risk, etc., so you may have to compromise on an acceptable level of risk. The numbers are calculated by taking the values from our original matrix (page 70) and multiplying them:
      H = High, which has a value of 3
      M = Medium, which has a value of 2
      L = Low, which has a value of 1
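The calculation itself is a single multiplication. A direct sketch in Python of the weighting just described:

```python
# High = 3, Medium = 2, Low = 1; risk score = likelihood x impact.
WEIGHTS = {"High": 3, "Medium": 2, "Low": 1}

def risk_score(likelihood: str, impact: str) -> int:
    return WEIGHTS[likelihood] * WEIGHTS[impact]

print(risk_score("High", "High"))   # 9, the highest possible score
print(risk_score("Medium", "Low"))  # 2
print(risk_score("Low", "Low"))     # 1, the lowest
```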
    • Make adjustments and sort by the agreed priority. We now have a risk-based assessment of what needs to be tested.
    • Of course, successful prioritization of risks does not help unless test cases are defined for each risk, with the highest-priority risks being assigned the most comprehensive tests and priority scheduling. The object of each test case is to mitigate at least one of these risks. If time or resources are an issue, the priority associated with each feature or attribute can be used to determine which test cases should be created and/or run. If testing must be cut, the risk priority can be used to determine how and what to drop:
      • Cut low risk completely (indicated by the horizontal line). If you plan to ship the low-risk features, you may want to consider an across-the-board approach instead. At least that way, the features do not ship untested (risk unknown). This will entail some additional risk, as higher-risk features get less testing.
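A hypothetical sketch of the two cut strategies just described: dropping everything below the horizontal line, versus shrinking every feature's test budget across the board so nothing ships completely untested. The item names, scores, and budget are illustrative:

```python
items = [("Sales calculations", 9), ("Manual interface", 4), ("Archive", 1)]
items.sort(key=lambda item: item[1], reverse=True)

# Strategy 1: cut low risk completely (everything below the line).
threshold = 4
kept = [name for name, score in items if score >= threshold]

# Strategy 2: across the board, allocate a fixed test-case budget in
# proportion to risk, so even low-risk features get some coverage.
budget = 60
total = sum(score for _, score in items)
allocation = {name: round(budget * score / total) for name, score in items}

print(kept)        # ['Sales calculations', 'Manual interface']
print(allocation)  # {'Sales calculations': 39, 'Manual interface': 17, 'Archive': 4}
```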
    • Risk mitigation/risk control: the process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.
      Risk type: a specific category of risk related to the type of testing that can mitigate (control) that category. For example, the risk of user interactions being misunderstood can be mitigated by usability testing.
    • Murphy’s Law: “If anything can go wrong, it will, and will do so at the worst possible time.”
    • Although exploratory testing primarily relies on the skills and knowledge of the tester and tends to be more dynamic than traditional technique-driven design, it too can be more formalized. Using the inventory process as part of an exploratory test process can add structure to the definition of the areas to be investigated rather than relying only on the skills of the individual tester.
    • The level and complexity of documentation represent a serious risk to the testing process.
      • Too much overly detailed, complex documentation takes significant time to design and create. When things change, the maintenance costs can be extreme. Excessive detail is not necessarily a good characteristic of documentation.
      • Too little documentation, with insufficient information to allow for the analysis, understanding, and maintenance of the tests, is equally bad. The time spent reacquiring lost knowledge can be very expensive.
      The key is to strike a balance between the level of detail in test documentation and the time and cost to define, create, and maintain that same documentation.
    • The goal is to avoid gaps in the testing as well as to avoid overlapping the testing too much. Depending on how you define your inventories (based on generic groupings or application-specific groupings), the idea is to decide who will test which objects at what stage/level. Some objects cannot be tested until later stages of the process (i.e., scenarios and usage-based objects). Conversely, some elements, such as field edits, valid ranges, error messages, etc., are best tested in the earlier stages. These code-logic elements, created by the programmers, are best tested at that stage of the process. Finding such errors late in the process can be very costly.
    • The sequence is:
      • Planned
      • Specified
      • Implemented
      • Executed
      • Passed/failed
      Again, a trace matrix can help. If you don’t know how many tests were planned, how do you assess progress? Passed/failed only has meaning if you know what you were intending to do.
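A minimal sketch of the kind of trace matrix this implies (the status names come from the slide; the test identifiers and data structure are hypothetical):

```python
from collections import Counter

STATUSES = ["planned", "specified", "implemented", "executed", "passed", "failed"]

# Every test enters the matrix when planned; its status advances from there.
trace = {
    "RS-001 XML interface":   "passed",
    "RS-002 Posting process": "executed",
    "RS-003 Archive":         "planned",
}

planned_total = len(trace)  # progress is only meaningful against this baseline
counts = Counter(trace.values())
for status in STATUSES:
    print(f"{status:>11}: {counts.get(status, 0)} of {planned_total}")
```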
    • The first document in this series is the overall project overview and the project requirements specification.
    • Reassigned Sales – Project Overview
      This project is expected to take approximately 8 months to implement. The staffing for this project will come from the following internal Widgits departments:
      • Marketing and Sales
      • Internal MIS
      • Applications development
      • Quality Assurance
      • Infrastructure
      • Database administration
      • Network administration
      Each department will be responsible to the overall project manager for providing identified resources. The specific roles and responsibilities will be determined by the project manager working within corporate guidelines.
      Additionally, our vendor Zippy Corp. will be providing assistance in the following areas:
      Applications development
      • XML programming and support
      • AS/400 programming assistance
      Testing and verification
      • Overall testing strategy
      • Test design and specification
      • Verification of the conversion of XML data into Widgits database formats
      • Working with the Marketing and Sales department to create and verify an acceptance test plan
    • This application involves a manufacturer (Widgits Inc.) that sells a product through a sales staff but also allows its products to be sold by third-party distributors and retail outlets (resellers). Because the company allows this invasion into its sales people’s territories, it has a method for calculating the degree of encroachment and the compensation for the affected sales representative.
      1. The reassigned sales system will receive reassigned sales data through an XML interface using standard data definitions.
      1A. A separate transaction type will be used to distinguish sales data from client address data, and both types of data will be passed to the new reassigned sales process.
      • It is recognized that each third party may have separate internal account numbers for its customers. The Widgits database will have to provide a mechanism whereby a Widgits account number can be associated with multiple third-party account numbers, all sharing one address record in the Widgits database.
      • Account information will use a separate XML data format transaction type and will be separated into its own database files.
      • The account information will be validated against the existing customer master file to ensure no duplicates are stored.
      • A separate cross-reference table will be created to identify individual accounts that purchase both directly from us and from a distributor, and those that purchase from multiple distributors.
      • There is a minimum amount of account identification that must be received to create an account. The process must identify any incomplete accounts, and the sales administration must have a method for correcting these records.
      2. An on-line process will be created for the sales administration staff providing the following functions.
      2A. Provide data roll-up displays allowing sales staff to review sales data by individual week or by the month.
      2B. Allow sales data received in a week to be posted back to the previous sales week, eliminating the need for manual adjustments.
      • A cutoff period will be defined as to how long the system will wait for a reseller to report sales data. Once that cutoff has been reached, all reported sales will be moved to the next sales period for calculating a sales person’s compensation.
      2C. The application will allow the sales staff to eliminate groups and sets of transactions (received through the XML interface) that are in error or appear to be duplicated.
    • 2D. Provide on-line review capabilities to allow the sales staff to review whether a reseller has or has not sent in its sales data.
      • A form of notification process will be provided to do the following:
      • Notify the sales commission staff of a late distributor.
      • Provide a mechanism (XML based) to contact the distributor.
      • Provide a process for the vendor to request a delay. If delayed, the data would automatically move to the next reporting period.
      • Notify the affected sales person(s) of this delay in commission credit.
      2E. Allow the sales staff to activate the commissions posting process after all data has been reviewed.
      • The posting process “must be” initiated by a staff person.
      2F. Provide a reports menu with reports by sales region, sales person, month, week, and product line.
      3. The system must provide an archive system to remove posted transactions from the active files on a monthly basis.
      • This process must execute after the monthly close has been processed.
      • Transactions that are not in a valid state and that have not been posted will be removed from the active system during archiving but will be retained in the archive for later analysis.
      • Transactions in a valid state that have not been posted will be retained in the active files and will be rolled into the next period’s (first week) sales data, regardless of sales date.
      • These transactions will not be archived until they have been posted.
      4. Prototype screen layouts and report layouts will be provided for user review during the early stages of system development.
      • Once the prototypes have been approved, all additional changes will be addressed on an as-needed, priority basis within the constraints of the project time line.
      5. An initial accounts address generation run can be made by a distributor to send in all available account information prior to starting the XML process. This will enable us to have an initial set of account records in place.
      • Vendors must be provided the opportunity to add, change, and delete customer address records related to their sales activities.
      • However, once a record is added to the Widgits master address file, it will not be deleted unless all of the following are true:
      • There are no other vendors related to the same end client.
      • There are no direct sales to the client by Widgits sales persons.
      • The record has not had a recorded purchase in the last two years.
    • The following are the initial test objects for the Reassigned Sales project. An initial high level risk assessment will be done on these items in conjunction with the systems design process.
    • TEST OBJECT INVENTORY - REQUIREMENTS BASED
      1. REQUIREMENTS
      A. XML
      B. Order Entry shared interface
      C. New/modified screens
      D. New/modified reports
      E. Sales account information
      F. Sales calculations
      G. Legacy systems interfaces
      H. Archive
      2. FEATURES AND FUNCTIONS
      A. Order Entry interface
      B. Manual interface
      C. Reports interface
      D. Sales account information
      3. TRANSACTIONS
      A. XML from distributors
      B. XML to distributors
      C. Mailbox management
      D. External applications (legacy, back office)
      4. DATA
      A. New-format AS/400 database files
      B. Messages
      C. Conversion
      D. Archive
      E. Recovery and backups
      5. INTERFACES
      A. Order Entry
      B. Manual user interface
      C. Reports interface
      D. Sales account information (existing)
      E. XML
      F. Account data files
      G. External applications
      6. PERFORMANCE
      A. Data downloads from mailbox
      B. Manual user screens (roll-ups, etc.) response time
      C. Archive (delays in processing could result for OE)
      D. Sales commissions reports (timeliness due to delayed postings)
      E. Database responsiveness (volume)
      7. CONSTRAINTS
      A. Security access to screens
      B. Access to mailbox service
      8. BACKUP AND RECOVERY
      A. Archive
    • The following is the risk assessment based on the object inventory developed from the requirements specification.
    • Some potential risk factors to consider include:
      Impact risk factors
      • Endangerment of human life or highly valued resources increases the impact risk
      • For non-critical features, the more immediate the failure detection, the lower the impact risk
      • The increased availability of a practical work-around lowers the impact risk
      • Updating critical data structures is riskier than just accessing them
      • Interfaces with critical functions are riskier
      Likelihood risk factors
      • New components are riskier than reliable, existing ones
      • Components with a history of unreliability are riskier
      • Frequently changed components tend to get disorganized over time
      • Components developed by personnel with a record of poor product reliability are riskier
      • Components developed by personnel with a poor understanding of either the requirements or the design are riskier
      • This can be compounded by projects that use outsourced services
      • Components with frequently changing requirements are riskier
      • Poorly designed components are riskier
      • Programs solving complex problems are riskier than those solving simpler ones
      • Programs doing multiple functions are riskier than those doing the corresponding single functions
      • The more dynamic and complex the data structure, the riskier
    • The following inventory and risk assessment will be used in the test planning process. Close coordination with the developers will be required to ensure critical features are developed in a priority sequence where possible. The development of any non-critical features early in the schedule must be approved by all management groups (project, development, and testing). Elements indicated as High risk must be considered for development first, as other features are, to a great degree, dependent on the completion of those features. All risk assessments included user, technical, and testing considerations in assigning the risks and priorities.
      The priorities are in descending order from 10 (highest) to 1 (lowest). Categories break down as follows:
      High: 10, 9, 8
      Medium: 7, 6, 5
      Low: 4, 3
      Very low: 2
      No risk: 1
      OBJECT INVENTORY - REQUIREMENTS BASED, INITIAL RISK ASSESSMENT (COMBINED)
      Each element is listed with its combined risk and priority.
      1. REQUIREMENTS
      A. XML: High, 10
      B. Order Entry shared interface: High, 10
      C. New/modified screens: Medium, 7
      D. New/modified reports: Low, 4
      E. Sales account information: High, 9
      F. Sales calculations: High, 10
      G. Legacy systems interfaces: Medium, 7
      H. Archive: Low, 3
      2. FEATURES AND FUNCTIONS
      A. Order Entry interface: High, 10
      B. Manual interface: Medium, 7
      C. Reports interface: Low, 4
      D. Sales account information: High, 9
      3. TRANSACTIONS
      A. XML from distributors: High, 10
      B. XML to distributors: High, 10
      C. Mailbox management: Medium, 7
      D. External applications (legacy, back office): Medium, 7
      4. DATA
      A. New-format AS/400 database files: Low, 4
      B. Messages: Low, 4
      C. Conversion: High, 10
      D. Archive: Low, 4
      E. Recovery and backups: Medium, 5
      5. INTERFACES
      A. Order Entry: High, 10
      B. Manual user interface: Medium, 7
      C. Reports interface: Low, 4
      D. Sales account information (existing): High, 9
      E. XML: High, 10
      F. Account data files: High, 9
      G. External applications: Medium, 7
      6. PERFORMANCE
      A. Data downloads from mailbox: High, 9
      B. Manual user screens (roll-ups, etc.) response time: Low, 4
      C. Archive (delays in processing could result for OE): Low, 3
      D. Sales commissions reports (timeliness due to delayed postings): Medium, 5
      E. Database responsiveness (volume): Low, 3
      7. CONSTRAINTS
      A. Security access to screens: Medium, 6
      B. Access to mailbox service: Low, 4
      8. BACKUP AND RECOVERY
      A. Archive: Low, 3
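The category bands stated above reduce to a small lookup. A direct sketch of that mapping:

```python
# Priorities run from 10 (highest) to 1 (lowest):
# High 10-8, Medium 7-5, Low 4-3, Very low 2, No risk 1.
def category(priority: int) -> str:
    if priority >= 8:
        return "High"
    if priority >= 5:
        return "Medium"
    if priority >= 3:
        return "Low"
    if priority == 2:
        return "Very low"
    return "No risk"

for p in (10, 7, 4, 2, 1):
    print(p, category(p))  # High, Medium, Low, Very low, No risk
```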
    • The next document is the system level test plan for the project.
    • 1. TEST PLAN IDENTIFIER
      RS-STP01.3
      2. REFERENCES
      1. Reassigned Sales System Rewrite Requirements - SST_RQMT04.1
      2. Reassigned Sales Master Test Plan - RS-MTP01.3
      3. Reassigned Sales General Design Specification - RS-SDS01.3
      3. INTRODUCTION
      This is the System/Integration Test Plan for the Reassigned Sales project. This plan will address only those items and elements that are related to the Reassigned Sales process; both directly and indirectly affected elements will be addressed. The primary focus of this plan is to ensure that the new Reassigned Sales application functions within the prescribed limits and meets the minimum functions specified in both the requirements specification and the general design specification.
      The system/integration testing will begin as soon as the first complete increment of the application is available. It is anticipated that the application will be available in several increments, as identified in the test items section. System/integration testing will be conducted by the development team with the assistance of one full-time test person. The sales administration team will be involved in the screen/report verification process only. Final user approval and acceptance will be during acceptance testing.
      4. TEST ITEMS
      The following is a list of the functional areas to focus on during system/integration testing. Each area will be a separate test cycle, with the complete process being tested as the final phase.
      Test 1. XML Interface
      Test 2. Translator Interface (both sales data and account information)
      Test 3. Manual Intervention interface (including all review/update screens)
      Test 4. Reassigned sales posting process
      Test 5. Reassigned sales reports
      Test 6. Archiving
      Test 7. Backup and recovery
    • 5. SOFTWARE RISK ISSUES
      There are several interface issues that require additional focus during systems test, in addition to those issues identified in the Master Test Plan RS-MTP01.3.
      A. The XML interface’s capability to support the added reassigned sales transaction volume in addition to the current Order Entry transaction volume.
      B. The different timing of the two interfaces pulling from the shared mailbox on the Advantis network. The reassigned sales transactions must append to the existing files until the process is executed to process the data.
      C. The reformatting of the XML data transaction formats into the appropriate reassigned sales control files by the translation process is critical to application success.
      D. The maintenance of the accounts cross-reference file to prevent multiple accounts from being created must be closely monitored. The manual user intervention required to correct accounts in error will require close scrutiny to prevent overload of the user process.
      E. Proper identification of existing accounts to prevent duplicate accounts is critical to the accounts process. Single accounts shared by multiple distributors must be properly controlled through the cross-reference file.
      F. Availability of the XML interface at the initial distributor. It is critical to beginning system/integration testing and will also impact some unit testing.
      G. Access to, and updating of, the existing customer master file shared with Order Entry. As Order Entry is an on-line, interactive process, file contention and record locking will be a major concern, as our process may generate a large volume of account updates to the file. The distributors have agreed to send initial client files identifying all their current accounts with their internal account numbers as well as the information required to identify each account locally. These will be verified and put through the system in bulk after hours to avoid problems with Order Entry. However, it is critical that the process be complete and verified prior to the next day’s business.
      H. Posting the reassigned sales data to the existing summary and history files must be closely monitored to ensure that the correct to/from accounts are identified and that all transaction totals balance by distributor. Errors in postings can cause errors in the Order Entry system’s credits and balances processes.
    • 6. FEATURES TO BE TESTED
      The following is a list of the areas to be focused on during testing of the application. Key areas by test cycle are noted.
      Test Cycle 1. XML Interface
      A. Receipt of transactions
      1. Transaction reformatting
      2. Single distributor
      3. Multiple distributors
      4. Single daily pull
      5. Multiple pulls in a single day
      6. With Order Entry data
      7. Sales data alone
      8. Overlapping requests
      B. Error recovery
      C. Backups
      D. Access to XML process and menus
      Test Cycle 2. Translator Interface (both reassigned sales and account information)
      A. Sales transactions
      1. Valid transactions
      2. Error transactions
      3. Error report
      B. Account transactions
      1. New accounts
      2. Account updates
      3. Duplicate accounts
      4. Account errors
      5. Cross-reference file maintenance
      6. Error and status reports
      7. Weekly control file updates
      8. Control table processing
      Test Cycle 3. Manual Intervention interface (including all review/update screens)
      A. Access controls (security)
      B. Account review screen(s)
      1. Account errors
      2. Valid account review
      3. Customer master file updates/adds
      4. Cross-reference file updates/adds
      5. Account change process
      C. Sales transaction review
      1. Monthly screen(s)
      2. Weekly screen(s)
    • D. Accounts generation
      1. Holding file processing
      2. Cross-reference file
      3. Customer master file
      4. Submission of update job
      E. Reports
      1. Monthly transmission report
      2. Weekly transmission report
      3. New accounts report
      4. Territory report
      5. Account match report
      Test Cycle 4. Reassigned sales posting process
      A. Sales transaction postings
      B. Weekly control file updates
      C. Sales history file updates
      D. Decision support system file updates
      Test Cycle 5. Archiving
      A. Manual archive request
      B. Automated monthly archiving process
      C. File cleanup and compression
      Test Cycle 6. Independent reports (converted from prior system)
      A. Distributor reports
      B. Retail reports
      C. High volume purchase reports
      D. Variance reports
      E. Decision support system reports
      Test Cycle 7. Backup and recovery
      A. Recovery of interrupted XML transmission
      B. Restart of translation process at each step in the process
      1. Verification of control areas for restart
      C. Restart of account update process after interrupt
      D. Restart of posting process after interrupt
      7. FEATURES NOT TO BE TESTED
      Other than those areas identified in the master test plan, no additional areas have been identified for exclusion.
    • 8. APPROACH
      System/integration testing (combined) will commence as soon as the functional parts of the application are available, as identified in the individual test cycles in the Features to Be Tested section. A requirement has been noted for an additional full-time independent test person for system/integration testing. However, with the budget constraints and time line established, most testing will be done by the test manager with the development team’s participation.
      Entry into system testing will be controlled by the test manager. Proof of unit testing must be provided by the programmer to the team leader before unit testing will be accepted and passed on to the test person. All unit test information will also be provided to the test person. Program versions submitted for system/integration testing will be copied from the development libraries into the test team libraries and will be deleted from the development library unless the module is segmented and will be used in several overlapping test cycles. If a module is to be delivered in functional segments, then additional regression testing of previous test cycles will be performed to ensure all functions still work properly.
      9. ITEM PASS/FAIL CRITERIA
      Each test cycle will be evaluated on an individual basis. If a cycle has no critical defects and only one (1) major defect, provided it has a functional, reasonable work-around, the cycle will be considered complete in terms of starting the next, independent cycle. The major defect will have to be corrected prior to going to acceptance testing. Minor defects will be addressed on an as-needed basis depending on resource availability and project schedule. However, if there are more than fifteen minor defects in a single aspect of the application, the systems test cycle will be considered incomplete.
      Acceptance testing can begin even if there are two major defects in the entire application, provided there are reasonable work-arounds. All major defects must be repaired prior to pilot testing and final acceptance testing. In some instances (low-impact majors and minors), the application can be corrected and bypass system/integration testing. This decision will be made by the Test Manager and Project Manager on an ongoing basis.
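The pass/fail rules in section 9 are concrete enough to express as a check. A hypothetical sketch (the function and its parameters are illustrative, not part of the plan itself):

```python
def cycle_complete(critical: int, majors_with_workaround: int,
                   majors_without_workaround: int,
                   max_minors_in_one_aspect: int) -> bool:
    """True if a test cycle meets the section 9 completion criteria."""
    if critical > 0 or majors_without_workaround > 0:
        return False   # no critical defects, no majors without a work-around
    if majors_with_workaround > 1:
        return False   # at most one major defect, and only with a work-around
    if max_minors_in_one_aspect > 15:
        return False   # more than fifteen minors in one aspect: incomplete
    return True

print(cycle_complete(0, 1, 0, 12))  # True: one major with a work-around
print(cycle_complete(0, 0, 0, 16))  # False: too many minors in a single aspect
```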
    • 10. SUSPENSION CRITERIA AND RESUMPTION REQUIREMENTS
      1. No distributors are ready for testing when system testing is scheduled to begin. Some testing can be done in areas such as general application flow and module integration, but actual data validation and verification cannot be done until data is received from the distributors. Systems testing will be delayed for a time period to be determined based on the delay in receiving data from the distributor(s).
      11. TEST DELIVERABLES
      A. High-level system/integration test design
      B. Defect reports
      C. Weekly testing status reports
      D. Sample reports from process execution
      12. REMAINING TEST TASKS
      Task: Assigned To (Status)
      • Create requirements-based inventories: Client, PM, TM, Dev, Test
      • Create/update design inventories: Dev, TM, PM, Test
      • Create system/integration test design: TM, PM, Test
      • Define system/integration test rules and procedures: TM, PM, Test
      • Set up controlled test environment: TM, Test
    • 13. ENVIRONMENTAL NEEDS
      The following elements are required to support the system/integration testing:
      A. Access to both the development and production AS/400 systems for development, data acquisition, and testing.
      B. Creation and control of test team libraries for system/integration testing. A separate set of source control libraries, data files, and control tables will be required to ensure the quality of systems testing.
      C. A time segment on the data transmission interface to receive test XML transmissions from the distributors. This segment should also have time where overlap exists with the Order Entry process.
      14. STAFFING AND TRAINING NEEDS
      Time will have to be allocated to the test team to allow for source file movement and data acquisition and control. Time will also have to be allocated to allow for weekly defect and status reports to be prepared for the team, and a meeting will have to be scheduled to report on system/integration test results. Time must be allocated to the development team members to attend system/integration status meetings when required.
      15. RESPONSIBILITIES
      (Columns, in order: TM, PM, Dev Team, Test Team, Client)
      • Create requirements-based inventories: X X X X X
      • Create/update design inventories: X X X X
      • Create system/integration test design: X X
      • Define system/integration test rules and procedures: X X X
      • Set up controlled test environment: X X
      The entire project team will participate in the review of the system and detail designs, as well as review of any change requests that are generated by the user or as a result of defects discovered during development and testing. The full team will participate in the initial development of the high level
    • 16. SCHEDULE
      All scheduling is documented in the project plan time line, and it is the responsibility of the project manager to maintain it. The Test Manager will provide task estimates as required.
      17. PLANNING RISKS AND CONTINGENCIES
      1. Limited testing staff. The test team is currently comprised of the developers and the Test Manager only. Additional resources are identified to assist in testing, but if those resources are not available, then the development team will have to provide additional assistance to the Test Manager during system/integration testing. If development must assist in system/integration testing, then there is the possibility that both development and testing will be delayed due to the overlap of resource requirements.
      18. APPROVALS
      Project Sponsor - Steve Putnam
      Development Management - Ron Meade
      EDI Project Manager - Peggy Bloodworth
      RS Test Manager - Dale Perry
      RS Development Team Manager - Dale Perry
      Reassigned Sales - Cathy Capelli
      Order Entry EDI Team Manager - Julie Cross