# Boundary and equivalence systematic test design


Introduction to Black Box Function Testing - Boundary Analysis and Equivalence Partitioning.



1. Black Box Test Techniques - Equivalence Partitioning and Boundary Value Analysis. Author: Ian R McDonald. © 2010, 2013 Ian McDonald
2. Overview
   In this presentation you will learn about Dynamic Testing Techniques, where data is injected and the outputs are monitored for the expected results. The method is dynamic, since we have to supply data to the system under test. We can design our test data with no understanding of the internal workings of the system - a "Black Box" technique. Alternatively, we may structure our data based on full knowledge of the internal workings - a "White Box" technique, sometimes also known as Clear Box, Glass Box, Structural or Crystal techniques. Another approach is to use our experience and judgement to guess the best data values to use - "Error Guessing". In this unit we will learn about two specific Black Box techniques for designing tests:
   1. Equivalence Partitioning
   2. Boundary Value Analysis
3. Objectives
   By the end of this presentation you should be able to:
   1. Understand the terms: Dynamic Testing, Black Box, Equivalence Partitioning, Boundary Value Analysis.
   2. Understand when it is appropriate to use Black Box techniques.
   3. Identify equivalence partitions within data.
   4. Identify boundary values for a system under test.
   5. Use material from this unit and reference it to BS7925-1 and BS7925-2.
4. Dynamic Analysis
   "Dynamic Analysis is the process of evaluating a system or component based upon its behaviour during execution" - BS7925-1, adopted from the IEEE (Institute of Electrical and Electronics Engineers).
   We use Dynamic Analysis techniques to define test inputs and matching expected outputs to exercise a software component or a full system. There are several Black Box techniques for designing test data:
   - Equivalence Partitioning
   - Boundary Value Analysis
   - Cause-effect Graphing (not covered here)
   - Other techniques, e.g. Classification Tree Analysis (not covered here)
   Note: The opposite of a dynamic technique is a static technique. Static Testing is the term applied to reviewing code and documents, where the programme is not executed (i.e. the programme is in a static state).
5. Test Techniques
   We apply test techniques:
   1. To ensure a systematic approach to testing - so that we use the minimum number of tests to achieve as much testing as possible in a cost-effective way. In life-critical systems we want to be sure that we have not missed any specific values.
   2. To ensure that tests are repeatable. We may have to justify to our managers, customers and even a court that we tested a specific set of values.
   3. To provide confidence that we are applying a well-structured testing approach.
   Note: We can use tools to help us define tests, e.g. the Classification Tree Approach uses the Classification Tree Editor (CTE), which is covered in a separate presentation.
   Black box approaches to testing require no knowledge of the internal workings of an application. In contrast, white box approaches do require such knowledge; they are sometimes called clear box, transparent or crystal approaches.
6. Functional Test Case Design
   - A Test Case is a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular programme path or to verify compliance with a specific requirement. (BS7925-1)
   - Functional Test Case Design selects test cases based on an analysis of the specification of the component, without reference to its internal workings. (BS7925-1)
   - Black Box testing is the use of functional test case design, using the requirements, component and system specifications and other documentation describing what is required of the product under development.
7. Black Box Testing Exercise
   For this exercise, you are a salesman in a television shop. A customer comes into your shop and asks you to demonstrate that the TV they intend to purchase works.
   - To see if the TV works, do you need to know what goes on inside the set?
   - What tests do you think you will want to include?
   - How could we be systematic in our test approach?
8. Black Box Testing Exercise - Suggested Answers
   There are a range of possible answers, including:
   - Check the standby light operates from off to standby, standby to on and on to standby.
   - Check the control panels to verify volume, brightness, contrast and colour.
   - Check the main terrestrial TV stations.
   - Check digital TV channels.
   - Check the headphone socket functions.
   - Check other sockets for format - e.g. can you plug a video recorder into the TV?
   Notes:
   - If checking volume level, you may want a decibel meter, and you should specify the signal strength at the aerial input.
   - Would you want to check any ranges for each identified input?
9. Smoke Test
   You may come across the term smoke test. This was originally applied in civil engineering, when smoke was put down a pipe so that checks could quickly be made for obvious leaks. In hardware engineering, a quick way to check that all the power lines were correctly set up and the electrolytic capacitors connected the right way round would be to switch on the circuit and see if any smoke appears.
   Smoke tests provide initial confidence; they are, however, no substitute for detailed testing. Sometimes a test manager will run a smoke test before accepting a build into detailed testing. This provides confidence that the build is in an appropriate state to enter system test.
   The TV could be turned on and checked for a picture of some sort and for sound. This does not mean that the sound matches the picture, or that you can change channel.
   Would you be happy with a medical system or an aircraft that had only been smoke tested? Probably, and most likely, no. In the case of the TV, it will have been tested using automation before leaving the factory.
10. Black Box Testing
    Black Box testing:
    - Focuses on WHAT a system does, not HOW it does it.
    - Focuses on the functional capabilities of the system.
    - Is also known as functional testing.
    (Diagram: Data input -> Black Box -> Actual Results. If the actual results are as expected, then the test has passed.)
11. Where Do We Use Black Box Testing
    Black Box testing is used in the following stages of the software development life cycle:
    - Component Testing
    - Integration of Components (called integration in the small)
    - Systems Testing
    - Systems Integration (called integration in the large)
    - User Acceptance Testing (UAT)
    - Assisting in the business verification and validation of the application - Operational Acceptance Testing (OAT)
    In short, you can use Black Box analysis at every phase. White box testing, however, helps supplement the test analysis for code development and integration (in the small).
12. Equivalence Partitioning #1
    - It is very difficult, expensive and time-consuming, if not at times impossible, to test every single input value combination for a system.
    - We can break our set of test cases into subsets. We then choose representative values for each subset and ensure that we test these.
    - Each subset of tests is thought of as a partition between neighbouring subsets or domains.
13. Equivalence Partitioning #2
    Equivalence Partitioning:
    - Makes use of the principle that software acts in a general way (generalises) in how it deals with subsets of data.
    - Selects groups of inputs that are expected to be handled in the same way.
    - Within a group of data, a representative input can be selected for testing.
    For many professional testers this is fairly intuitive. The approach formalises the technique, allowing an intuitive approach to become repeatable.
14. Equivalence Partitioning #3
    EP Example - consider a requirement for a software system:
    "The customer is eligible for a life assurance discount if they are at least 18 and no older than 56 years of age." For the exercise, only consider integer years.
15. Equivalence Partitioning #4
    "The customer is eligible for a life assurance discount if they are at least 18 and no older than 56 years of age."
    - Invalid partition: less than 18
    - Valid partition: range 18 to 56
    - Invalid partition: greater than 56
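The partitions above can be sketched in code. This is a minimal illustration, assuming a hypothetical `eligible_for_discount` function implementing the stated rule; the representative values chosen for each partition are also illustrative assumptions.

```python
def eligible_for_discount(age: int) -> bool:
    """Hypothetical implementation of the rule: valid ages are 18 to 56 inclusive."""
    return 18 <= age <= 56

# One representative value per equivalence partition exercises the whole class.
representatives = {
    "invalid_low": 10,    # partition: age < 18
    "valid": 37,          # partition: 18 <= age <= 56
    "invalid_high": 70,   # partition: age > 56
}

results = {name: eligible_for_discount(age) for name, age in representatives.items()}
```

Any other value from the same partition (say 12 instead of 10) should, by the equivalence assumption, produce the same outcome.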
16. Equivalence Partitioning #5
    What if our developer incorrectly interpreted the requirement as:
    "The customer is eligible for a life assurance discount if they are over 18 and less than 56 years of age."
    People aged exactly 18 or exactly 56 would now not get a discount.
    - Invalid partition: less than or equal to 18
    - Valid partition: 18 < age < 56
    - Invalid partition: greater than or equal to 56
    Errors are more common at boundary values: just below, just above or specifically on the boundary value.
17. Boundary Analysis #1
    "The customer is eligible for a life assurance discount if they are at least 18 and no older than 56 years of age."
    - Invalid partition: less than 18
    - Valid partition: range 18 to 56
    - Invalid partition: greater than 56
    Test values would be: 17, 18, 19, 55, 56 and 57. This assumes that we are dealing with integers, so the least significant digit is 1 either side of each boundary.
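The six boundary test values above can be written as a table of inputs and expected outcomes. A minimal sketch, again assuming a hypothetical `eligible_for_discount` function for the stated rule:

```python
def eligible_for_discount(age: int) -> bool:
    """Hypothetical rule under test: valid ages are 18 to 56 inclusive."""
    return 18 <= age <= 56

# Each boundary value plus and minus 1 in the least significant digit (integers).
boundary_cases = {
    17: False, 18: True, 19: True,   # around the lower boundary (18)
    55: True, 56: True, 57: False,   # around the upper boundary (56)
}

for age, expected in boundary_cases.items():
    assert eligible_for_discount(age) == expected, f"age {age} failed"
```

Note how the misinterpreted rule from the previous slide (`18 < age < 56`) would fail this table at exactly ages 18 and 56, which is the point of testing on the boundary itself.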
18. Boundary Analysis #2
    For each boundary we test the boundary limit itself, plus and minus 1 in the least significant digit on either side:
    - Boundary Limit - 1
    - Boundary Limit
    - Boundary Limit + 1
    If the least significant digit were the second decimal place, then the limits above would be +/- 0.01.
19. Boundary Analysis #3
    While the textbooks may limit testing to the boundaries, we are interested in how software normally behaves and how it reacts when handling error conditions. Therefore it is normal to test NOT ONLY the boundaries but also:
    - A typical mid-range value, e.g. 37.
    - Zero (since divide-by-0 errors can occur).
    - Negative values.
    - Numbers out of range by a long way, e.g. +/-1000.
    - Illegal data entries such as "nineteen" spelt out in letters, Fred, banana.
    - Illegal characters such as # $ & ' @ : ;
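These robustness values probe how the software rejects bad input rather than how it handles good input. A small sketch, where the `parse_age` validation helper and its 0-150 sanity range are illustrative assumptions, not part of the original requirement:

```python
def parse_age(raw) -> int:
    """Hypothetical input validator: return an integer age or raise ValueError."""
    age = int(raw)              # rejects non-numeric entries like "nineteen", "Fred", "#"
    if not 0 < age < 150:       # rejects zero, negatives and far-out-of-range values
        raise ValueError(f"age out of range: {age}")
    return age

# Values drawn from the robustness list above; all should be rejected.
rejected = []
for raw in [0, -5, 1000, "nineteen", "Fred", "#"]:
    try:
        parse_age(raw)
    except ValueError:
        rejected.append(raw)
```

Each illegal input should raise a controlled error rather than crash or be silently accepted; that behaviour is itself an expected outcome worth asserting in a test.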
20. Taking EP and BVA Further
    Consider the following requirement:
    "The customers must be at least 18. Customers are eligible for a life assurance discount of 40% if they are at least 18 and no older than 25 years of age. Customers are entitled to a 30% discount if they are older than 25 years of age, but under 40. Customers are entitled to a 10% discount if they are 40 or over, but no older than 56. Over 56, customers are not entitled to a discount."
    - What are the equivalence partitions?
    - What are the boundary values to be tested?
    - What other values might you test?
21. Taking EP and BVA Further - Answer
    "The customers must be at least 18. Customers are eligible for a life assurance discount of 40% if they are at least 18 and no older than 25 years of age. Age is only recorded in integer years. Customers are entitled to a 30% discount if they are older than 25 years of age, but under 40. Customers are entitled to a 10% discount if they are 40 or over, but no older than 56. Over 56, customers are not entitled to a discount."
    The partitions are:
    - Invalid: under 18
    - 40% discount: 18 to 25
    - 30% discount: 26 to 39
    - 10% discount: 40 to 56
    - 0% discount: over 56
    Boundary values: 17, 18, 19; 24, 25, 26; 38, 39, 40; 55, 56, 57 (tested +/- the least significant recorded digit).
    Might also test: 0, -5, 200, Fred, 0.00000001, and some typical mid-range values: 21, 32, 47.
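The tiered rule in the answer above can be captured as a function and checked at every boundary triple. This is a sketch only; the `discount_percent` name and the choice to raise an error for under-18s are assumptions made for illustration.

```python
def discount_percent(age: int) -> int:
    """Hypothetical implementation of the tiered discount rule (integer years)."""
    if age < 18:
        raise ValueError("customer must be at least 18")
    if age <= 25:     # at least 18 and no older than 25
        return 40
    if age < 40:      # older than 25 but under 40
        return 30
    if age <= 56:     # 40 or over but no older than 56
        return 10
    return 0          # over 56: no discount

# Boundary values from the answer, +/- 1 in the least significant recorded digit.
assert [discount_percent(a) for a in (18, 19)] == [40, 40]
assert [discount_percent(a) for a in (24, 25, 26)] == [40, 40, 30]
assert [discount_percent(a) for a in (38, 39, 40)] == [30, 30, 10]
assert [discount_percent(a) for a in (55, 56, 57)] == [10, 10, 0]
```

Writing the expected discount beside each boundary value makes off-by-one defects (such as `< 25` typed instead of `<= 25`) fail immediately at the boundary they affect.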
22. Invalid Partitions and the ISEB / ISTQB Examination
    - Note that when taking the ISEB / ISTQB examinations, they may specifically require that you identify the valid partitions and not the invalid partitions.
    - Be aware of this when studying for the examination, and consult the syllabus for specific instructions.
23. Question 1
    One of the fields on a form contains a text box which accepts numeric values in the range 18 to 25. Identify the invalid equivalence class.
    a) 17
    b) 19
    c) 25
    d) 21
24. Solution 1
    The text box accepts numeric values in the range 18 to 25 (18 and 25 are also part of the class), so this becomes our valid class. But the question asks us to identify an invalid equivalence class. The classes are as follows:
    - Class I: values < 18 => invalid class
    - Class II: 18 to 25 => valid class
    - Class III: values > 25 => invalid class
    17 falls in an invalid class; 19, 25 and 21 fall in the valid class. So the answer is 'a' (17).
25. Question 2
    In an examination a candidate has to score a minimum of 25 marks in order to pass. The maximum that can be scored is 50 marks. Identify the valid equivalence values if the student passes the exam.
    a) 22, 24, 27
    b) 21, 39, 40
    c) 29, 30, 31
    d) 0, 15, 22
26. Solution 2
    The classes are as follows:
    - Class I: values < 25 => invalid class
    - Class II: 25 to 50 => valid class
    - Class III: values > 50 => invalid class
    We have to identify valid equivalence values, which must all lie in the valid equivalence class (Class II). So the answer is 'c' (29, 30, 31).
27. Ariane 5 - Lessons Learned
    Take a look on YouTube at the Ariane 5 take-off, e.g. http://www.youtube.com/watch?v=c9Hf4qTxdxs
    The video shows the take-off of Ariane 5. The rocket reused the navigation code of Ariane 4. However, as the code came from a working rocket system, it was not fully tested in the context of the new Ariane 5 - it was assumed there would be no surprises. Normally we would want to test the interface between new and existing code, and we would want to use boundary value analysis as part of this. As the rocket took off, one partition of values was specifically reached...
28. Testing Is About Reducing Risk
    - Remember, testing is about reducing risk to an acceptable level, not removing all risk.
    - Targeting testing using systematic design techniques, such as equivalence partitions and boundaries, traps significantly more error conditions for fewer tests.
    - Boundaries between software interfaces present a significant risk. Do not assume that new code will always work with well-established older interfaces.
    - Typically, test analyst specialists are brought onto projects too late. They need to be present from day one, when requirements are being produced.
    - Testing can also be underestimated by project managers. Ariane 5 was a project delivered on time to the launch site - but was it really successful? A wise project manager mitigates risk early and invests in testing.