2. Topics to Cover
❑ Dynamic Testing
❑ Black Box Testing
✓Equivalence Class Partitioning Testing
✓Boundary Value Testing
✓Negative Testing
✓Null Case Testing
✓Localization Testing
✓Globalization Testing
✓Integration Testing
3. Black Box Testing
✓In science and engineering, a black box is a device, system or object which can be
viewed solely in terms of its input, output and transfer characteristics without any knowledge
of its internal workings, that is, its implementation is "opaque" (black).
✓Also known as functional testing. A software testing technique whereby the internal
workings of the item being tested are not known by the tester.
✓For example, in a black box test on a software design the tester only knows the inputs
and what the expected outcomes should be and not how the program arrives at those
outputs.
5. Black-Box Testing
❑ Advantages
✓ The tester can be non-technical.
✓ This testing is most likely to find the bugs a user would find.
✓ Testing helps to identify contradictions in the functional specifications.
✓ Test cases can be designed as soon as the functional specifications are complete.
❑ Disadvantages
✓ May leave many program paths untested.
✓ It is difficult to identify all possible inputs in limited testing time.
6. Practically, due to time and budget considerations, it is not possible to perform exhaustive testing for each set of test data, especially when there is a large pool of input combinations.
We need special techniques that can select test cases intelligently from the pool of test cases, such that all test scenarios are covered.
We use two techniques - Equivalence Partitioning & Boundary Value Analysis - to achieve this.
7. Equivalence Class Partitioning Testing
❑ Equivalence Partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
❑ An ideal test case single-handedly uncovers a class of errors, especially in the case of numeric fields.
❑ Equivalence Partitioning strives to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed.
❑ An equivalence class represents a set of valid or invalid states for input conditions.
8. Equivalence Class Partitioning Testing
Equivalence classes can be defined according to the following guidelines:
▪ If an input condition specifies a range, at least one valid and two invalid
equivalence classes are defined.
▪ If an input condition specifies a specific value, at least one valid and two invalid
equivalence classes are defined.
▪ If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
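As a minimal sketch, the range guideline can be applied to a hypothetical `is_eligible` function that accepts ages from 18 to 60 (the function and its range are invented for illustration, not taken from the slides):

```python
def is_eligible(age):
    """Hypothetical unit under test: accepts ages in the range 18-60."""
    return 18 <= age <= 60

# The range condition yields one valid and two invalid equivalence classes;
# one representative value per class is enough to cover that class:
assert is_eligible(35) is True    # valid class: 18 <= age <= 60
assert is_eligible(10) is False   # invalid class: age < 18
assert is_eligible(75) is False   # invalid class: age > 60
```

Three test cases cover the whole input domain, instead of one test per possible age.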
9. Boundary Value Testing
❑ (Specific case of Equivalence Class Partitioning Testing)
❑ Boundary value analysis leads to a selection of test cases that exercise bounding values. This technique was developed because a great number of errors tend to occur at the boundaries of the input domain rather than at the center.
❑ Tests program response to extreme input or
output values in each equivalence class.
❑ Guidelines for BVA are as follows:
▪ If an input condition specifies a range bounded by
values a and b, test cases should be designed with
values a and b and just above and below a and b.
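The guideline can be sketched as a small helper that enumerates the boundary test inputs for a range bounded by a and b (the helper name is an assumption):

```python
def boundary_values(a, b):
    """Return BVA test inputs for a range [a, b]: each bound plus the
    values just below and just above it."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

# For the hypothetical age range 18-60:
print(boundary_values(18, 60))   # [17, 18, 19, 59, 60, 61]
```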
11. Negative Testing
❑ Negative testing or fuzzing is a software testing technique that involves providing invalid or unexpected data to the inputs of a computer program. The program is then monitored for exceptions such as crashes or failing built-in code assertions.
❑ Web login testing: accessing a page without logging in.
❑ Field type testing: entering data of the wrong type, as performed in the previously mentioned techniques.
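A minimal sketch of negative testing against a hypothetical `parse_quantity` input handler (invented for illustration): invalid or unexpected data should be rejected with a controlled error, not crash the program.

```python
def parse_quantity(text):
    """Hypothetical input handler: converts user text to a positive quantity."""
    value = int(text)                 # raises ValueError on non-numeric text
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Negative tests: each invalid input should be rejected cleanly.
for bad_input in ["abc", "", "-5", "3.5"]:
    try:
        parse_quantity(bad_input)
        print(f"DEFECT: {bad_input!r} was accepted")
    except ValueError:
        print(f"OK: {bad_input!r} rejected as expected")
```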
12. Null Case Testing
❑ Null Testing:
❑ Exposes defects triggered by no data or missing data.
❑ Often triggers defects because developers create programs to act upon data; they don't think of the case where the program may not receive specific data.
✓ Example: X, Y coordinate missing for drawing various shapes in Graphics
editor.
✓ Example: Blank file names
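A sketch of the coordinate example with a hypothetical `shape_area` helper (the name and behavior are assumptions): missing or empty input should be handled explicitly rather than failing deep inside the drawing code.

```python
def shape_area(coords):
    """Hypothetical helper: area of a rectangle from two (x, y) corners."""
    if coords is None or len(coords) == 0:
        raise ValueError("no coordinates supplied")
    (x1, y1), (x2, y2) = coords
    return abs(x2 - x1) * abs(y2 - y1)

# Null-case tests: None and empty input must be handled gracefully.
for missing in (None, []):
    try:
        shape_area(missing)
        print(f"DEFECT: {missing!r} was accepted")
    except ValueError as e:
        print(f"{missing!r}: handled -> {e}")

print(shape_area([(0, 0), (3, 4)]))   # 12
```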
13. Globalization Testing
❑ Globalization Testing is the software testing process for checking if the
software can work properly in various culture/locale settings using every
type of international input.
14. Localization Testing
❑ Localization is the process of customizing a software application that was originally designed
for a domestic market so that it can be released in foreign markets.
❑ This process involves translating all native language strings to the target language and
customizing the GUI so that it is appropriate for the target market
❑ Localization testing checks how well the build has been translated into a particular
target language (e.g., Japanese product for Japanese user).
❑ We should invite local staff to help with localization testing by checking the quality of the translation as well.
❑ Common bugs found in this testing:
✓ The correct format cannot be displayed
✓ Functionality is broken
15. Integration Testing
❑ Integration testing (sometimes called Integration and Testing, abbreviated "I&T")
is the phase in software testing in which individual software modules are
combined and tested as a group.
❑ It occurs after unit testing and before system testing.
❑ Integration testing takes as its input modules that have been unit tested, groups them
in larger aggregates, applies tests defined in an integration test plan to
those aggregates, and delivers as its output the integrated system ready for
system testing.
16. Integration Testing Types
❑ Big Bang Integration
❑ Incremental Integration
❑ Top Down Integration
❑ Bottom Up Integration
❑ Sandwich Testing
17. Big Bang Integration
❑ In this approach, all or most of the developed modules are coupled together to form a
complete software system or major part of the system and then used for
integration testing.
❑ The Big Bang method is very effective for saving time in the integration testing process. However, its major drawback is that it is hard to find the actual location of an error.
18. Big Bang Integration
❑ Not advised
✓ Hard to isolate faults
✓ Must wait for all modules to be developed
❑ Advantage
✓ No need for any middle components like stubs or drivers on which testing depends.
19. Incremental Integration
TOP DOWN INTEGRATION
❑Top Down Testing is an approach to integration testing where the top integrated modules are tested first, and the branches of the module are tested step by step until the end of the related module.
20. Incremental Integration
❑In the depth-first approach, all modules on a control path are integrated first. For the module hierarchy in the accompanying figure, the sequence of integration would be M1, M2, M3, M4, M5, M6, M7, and M8.
❑In the breadth-first approach, all modules directly subordinate at each level are integrated together. Using breadth-first for the same figure, the sequence of integration would be M1, M2, M8, M3, M6, M4, M7, and M5.
21. In top-down integration testing, stubs are used to simulate the behaviour of the lower-level modules that are not yet integrated. Stubs are modules that act as temporary replacements for a called module and give the same output as that of the actual product.
22. Order of the integration:
1, 2
1, 3
2, Stub 1
2, Stub 2
3, Stub 3
3, Stub 4
Testing Approach:
+ First, test the integration between modules 1, 2 and 3
+ Test the integration between module 2 and stub 1, stub 2
+ Test the integration between module 3 and stub 3, stub 4
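The integration order above can be sketched with hypothetical Python functions: module 1 calls modules 2 and 3, which in turn call four not-yet-written units replaced by stubs. All bodies and return values below are invented for illustration.

```python
def stub_1():
    print("stub_1 called")   # evidence the stub was reached
    return 10                # pre-computed value

def stub_2():
    print("stub_2 called")
    return 20

def module_2():
    return stub_1() + stub_2()

def stub_3():
    print("stub_3 called")
    return 30

def stub_4():
    print("stub_4 called")
    return 40

def module_3():
    return stub_3() + stub_4()

def module_1():
    return module_2() + module_3()

# Integration order from the slide: 1-2 and 1-3 at the top,
# then 2 with stubs 1 and 2, then 3 with stubs 3 and 4.
assert module_2() == 30
assert module_3() == 70
assert module_1() == 100
```

As each real lower-level unit is completed, its stub is replaced and the same tests are rerun.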
23. Incremental Integration
BOTTOM UP INTEGRATION
❑Bottom Up Testing is an approach to integration testing where the lowest-level components are tested first, and then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.
❑All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of lower-level integrated modules, the next level of modules is formed and can be used for integration testing.
❑Drivers are simple programs designed specifically for testing that make calls to these lower layers.
24. Incremental Integration
BOTTOM UP INTEGRATION
Bottom-up integration is performed in a series of steps:
1. Low-level components are combined into clusters.
2. A driver is written (if the main module is not built) to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
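A toy sketch of steps 1-4, with invented low-level functions and a throwaway driver standing in for the not-yet-built main module:

```python
def net_price(base, discount):
    """Low-level unit: price after a fractional discount."""
    return base * (1 - discount)

def tax(amount, rate=0.25):
    """Low-level unit: tax on an amount."""
    return amount * rate

# Step 1: the two units form a 'pricing' cluster.
# Step 2: the main module is not built yet, so a throwaway driver
#         coordinates test-case input and output for the cluster.
def driver():
    price = net_price(100.0, 0.5)   # expect 50.0
    total = price + tax(price)      # expect 62.5
    assert price == 50.0 and total == 62.5
    print("pricing cluster test passed:", total)

# Step 3: test the cluster via the driver.
driver()
# Step 4 (not shown): remove the driver and combine clusters upward.
```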
25. Sandwich Testing
❑ Sandwich Testing is an approach that combines top-down testing with bottom-up testing.
❑ The system is viewed as having three layers
✓ A target layer in the middle
✓ A layer above the target
✓ A layer below the target
✓ Testing converges at the target layer
❑ How do you select the target layer if there are more than 3 layers?
26. Execution of the testing strategy in Sandwich Testing:
The bottom up aspect of the testing initiates from the middle or target
layer and goes upward towards the upper levels in the application.
The top down aspect of the testing starts from the middle layer and
proceeds downwards towards the lower levels in the software product
under test.
The user interface is tested in isolation with the use of stubs, and the functions related to the lower levels are verified using drivers. Note, however, that test stubs and drivers are not necessary for the topmost and the bottommost levels.
With successful test coverage of the entire software application from both directions, only the middle (target) layer is left for the final set of tests.
Sandwich testing is quite useful for large-scale projects comprising a number of sub-projects.
27. Topics to Cover
❑ Black Box Testing
✓Load Testing
✓Stress Testing
✓Volume Testing
✓Configuration Testing
✓Compatibility Testing
✓Security Testing
✓Recovery Testing
✓Documentation Testing
✓Exploratory Testing
✓Installation Testing
✓Progressive and Regression Testing
28. Load Testing
❑ Load testing is the process of putting demand on a system or device and measuring
its response. Load testing is performed to determine a system’s behavior under both
normal and anticipated peak load conditions.
❑ It helps to identify the maximum operating capacity of an application and determine
which element is causing degradation.
❑ This testing is most relevant for multi-user systems; using a client/server model, such
as web servers.
✓ Example: Using automation software to simulate 500 users logging into a web site and performing end-user activities at the same time.
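A scaled-down sketch of the idea using threads to simulate concurrent user sessions. A real load test would use a dedicated tool against the actual system; here the session body is just a simulated delay, and all names are invented.

```python
import threading
import time

def user_session(results, i):
    """Hypothetical end-user activity: here just a simulated request delay."""
    start = time.perf_counter()
    time.sleep(0.01)                 # stand-in for login + page requests
    results[i] = time.perf_counter() - start

N_USERS = 50                         # scaled down from the 500 in the example
results = [None] * N_USERS
threads = [threading.Thread(target=user_session, args=(results, i))
           for i in range(N_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"max response time under load: {max(results):.3f}s")
```

Measuring how response times grow as N_USERS increases is what reveals the maximum operating capacity and the degrading element.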
29. Volume Testing
❑ Volume testing refers to testing a software application with a certain amount of data. This
amount can, in generic terms, be the database size or it could also be the size of an interface
file that is the subject of volume testing.
❑ For example, if you want to volume test your application with a specific database size, you will
expand your database to that size and then test the application's performance on it. Another
example could be when there is a requirement for your application to interact with an interface
file; this interaction could be reading and/or writing on to/from the file.
30. Stress Testing
❑ Stress testing is a form of testing that is used to
determine the stability of a given system or entity.
❑ It involves testing beyond normal operational
capacity, often to a breaking point, in order to
observe the results.
❑ In stress testing you continually put excessive load
on the system until the system crashes
❑ The system is repaired and the stress test is
repeated until a level of stress is reached that is
higher than expected to be present at a customer
site.
31. Difference between Volume, Load and Stress Testing in Software
Volume Testing = large amounts of data
Load Testing = large numbers of users
Stress Testing = too many users, too much data, too little time and too little room
32. Compatibility Testing
❑ Exposes defects related to using files output from one version of the software in another version of the software.
❑ Most applications are designed to be "forwards" compatible, meaning files created in a previous release of the software can be used in the version currently under test.
❑ They are not designed to be "backwards" compatible, meaning a file output from the version under test will not work in a currently released version.
33. Configuration Testing
❑ During this testing, the tester validates how well the current project supports different types of hardware, such as different types of printers, Android and iOS phones, etc. This testing is also called hardware testing or portability testing.
34. Recovery Testing
❑ In software testing, recovery testing is the activity of testing how fast and how well an application is able to recover from crashes, hardware failures and other similar problems.
❑Recovery testing is the forced failure of the software in a variety of ways to
verify that recovery is properly performed.
❑Examples of recovery testing:
While an application is running, suddenly restart the computer, and afterwards check the
application's data integrity.
While an application is receiving data from a network, unplug the connecting cable. After some
time, plug the cable back in and analyze the application's ability to continue receiving data
from the point at which the network connection disappeared.
Restart the system while a browser has a definite number of sessions. Afterwards, check that the
browser is able to recover all of them.
35. Resource Testing
❑ In resource testing you check whether an AUT (Application Under Test) utilizes more resources (e.g. memory) than it should.
36. Documentation Testing
❑ Exposes defects in the content and access of on-line user manuals (Help files) and content
of training manuals.
❑ The Testing Group tests that all Help files appear on the screen when selected.
❑ On-line documentation is a very important requirement for product release.
❑ Documentation testing can be approached in two phases:
✓ The 1st phase is Review and Inspection, which examines the documents for editorial clarity.
✓ The 2nd phase is the Live Test, which uses the documentation in conjunction with the use of the actual program.
37. Exploratory Testing
❑ Exploratory testing is an approach to software testing that is concisely described as
simultaneous learning, test design and test execution. Exploratory software testing is
a powerful and fun approach to testing.
❑ The essence of exploratory testing is that you learn while you test, and you design
your tests based on what you are learning
❑ Exploratory testing is a method of manual testing.
❑ The testing is dependent on the tester's skill of inventing test cases and finding
defects. The more the tester knows about the product and different test methods, the
better the testing will be.
38. Installation Testing
❑ Installation testing is a kind of quality assurance work in the software industry that
focuses on what customers will need to do to install and set up the new software successfully. The testing process may involve full, partial or upgrade install/uninstall processes.
❑ The process of installing your software can differ across platforms. It could be a neat GUI for Windows or a plain command line for Unix.
39. Installation Testing
If installation is dependent on some other components like database, server etc. test
cases should be written specifically to address this.
Negative cases like insufficient memory, aborted installation should also be covered
as part of installation testing.
Software Distribution cases
If software is distributed using physical CD format, test activities should include following things -
◼ Test cases should be present to check the sequence of CDs used.
◼ Test cases should be present for the graceful handling of corrupted CD.
If the software is distributed over the Internet, test cases should be included for:
◼ Bad network speed and broken connections.
◼ Firewall and security-related issues.
◼ Concurrent installations/downloads.
40. Regression Testing
❑ Exposes defects in code that should not have changed.
❑ Re-executes some or all existing test cases to exercise code that was tested in a previous release or previous test cycle.
❑ Performed when previously tested code has been re-linked, such as when:
✓ Ported to a new operating system
✓ A fix has been made to a specific part of the code.
❑ Studies show that:
✓ The probability of changing the program correctly on the first try is only 50% if the change involves 10 or fewer lines of code.
✓ The probability of changing the program correctly on the first try is only 20% if the change involves around 50 lines of code.
41. Progressive VS Regressive Testing
▪ When testing new code, you are performing “progressive testing.”
▪ When testing a program to determine if a change has introduced errors in the unchanged code,
you are performing “regression testing.”
▪ All black box test design methods apply to both progressive and regressive testing. Eventually,
all your “progressive” tests should become “regression” tests.
▪ The Testing Group performs a lot of Regression Testing because most development projects are
adding enhancements (new functionality) to existing programs. Therefore, the existing code
(code that did not change) must be regression tested.
42. Regression Testing VS Retesting
❑ Retesting - Retesting means testing only a certain part of an application again, without considering how it will affect other parts or the whole application.
❑ Regression Testing - Testing the application after a change in a module or part of the application, to check whether the code change affects the rest of the application.
43. Security Testing
❑ Security testing is a process to determine that an information system protects data and
maintains functionality as intended.
❑ To check whether there is any information leakage.
❑ To test whether the application allows unauthorized access.
❑ To find out all the potential loopholes and weaknesses of the system.
❑ The primary purpose of security testing is to identify vulnerabilities and subsequently repair them.
44. Security Testing Concepts
❑ Confidentiality
A security measure which protects against the disclosure of information to parties
other than the intended recipient. Ensuring information is accessible only for those with
authorized access and to prevent information theft.
❑ Integrity
A measure intended to allow the receiver to determine that the information it receives is correct.
❑ Authentication
This might involve confirming the identity of a person etc.
45. Security Testing Concepts
❑ Authorization
The process of determining that a requester is allowed to receive a service or
perform an operation. (e.g Access Control)
❑ Non-repudiation
It means to ensure that a transferred message has been sent and received by the
parties claiming to have sent and received the message. Non-repudiation is a way to
guarantee that the sender of a message cannot later deny having sent the message
and that the recipient cannot deny having received the message.
47. Security Testing
Penetration Testing/ Ethical Hacking
✓An ethical hacker is a computer and network expert
who attacks a security system on behalf of its owners,
seeking vulnerabilities that a malicious hacker could
exploit.
✓To test a security system, ethical hackers use the same methods as malicious hackers, but report problems instead of taking advantage of them.
✓Ethical hacking is also known as penetration testing,
intrusion testing and red teaming.
✓An ethical hacker is sometimes called a white hat, a
term that comes from old Western movies, where the
"good guy" wore a white hat and the "bad guy" wore a
black hat.
✓This is a live test mimicking the actions of real-life attackers.
48. Security Testing
Password Cracking
✓Password cracking programs can be used to identify weak passwords.
✓Password cracking verifies that users are employing sufficiently strong passwords.
Vulnerability Scanning
✓It involves scanning of the application for all known vulnerabilities.
✓A computer program designed to assess computers, computer
systems, networks or applications for weaknesses.
✓Generally done through various vulnerability scanning software. Ex : Nessus, Sara, and ISS.
51. Smoke Testing
❑ Smoke testing refers to physical tests made to closed systems of pipes to test for leaks.
❑ Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a
program work, but not bothering with finer details.
❑ The term comes to software testing from a similarly basic type of hardware testing, in which the
device passed the test if it didn't catch fire the first time it was turned on. A daily build and
smoke test is among industry best practices promoted by the IEEE (Institute of Electrical and
Electronics Engineers).
52. Smoke Testing
❑ Software testing done to ensure whether the build can be accepted for thorough software testing or not. Basically, it is done to check the stability of the build received for software testing. (Does the program run? Does it open its windows? Such basic tests.)
❑ In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without getting into too much depth.
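A sketch of a shallow-and-wide smoke suite against a stand-in application object. All the names and checks here are hypothetical; a real suite would exercise the actual build's entry points.

```python
def smoke_tests(app):
    """Shallow checks across every major area: does each main entry
    point respond at all?"""
    checks = {
        "starts":       lambda: app["start"]() is True,
        "opens window": lambda: app["open_window"]() is True,
        "loads config": lambda: app["load_config"]() is not None,
    }
    for name, check in checks.items():
        assert check(), f"smoke test failed: {name}"
    return "build accepted for thorough testing"

# Stand-in for the build under test:
fake_app = {
    "start":       lambda: True,
    "open_window": lambda: True,
    "load_config": lambda: {"lang": "en"},
}
print(smoke_tests(fake_app))
```

If any single check fails, the build is rejected before any deeper testing effort is spent on it.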
53. Sanity Test
❑ In software development, the sanity test determines
whether it is reasonable to proceed with further testing.
❑ Software sanity tests are commonly conflated with smoke
tests. A smoke test determines whether it is possible to
continue testing, as opposed to whether it is reasonable.
❑ A software smoke test determines whether the program
launches and whether its interfaces are accessible and
responsive (for example, the responsiveness of a web
page or an input button).
❑ If the smoke test fails, it is impossible to conduct a sanity
test.
❑ If the sanity test fails, it is not reasonable to attempt more
rigorous testing.
❑ Both sanity tests and smoke tests are ways to avoid
wasting time and effort by quickly determining whether
an application is too flawed to continue detailed testing.
54. Sanity testing is a kind of Software Testing performed after
receiving a software build, with minor changes in code, or
functionality, to ascertain that the bugs have been fixed and no
further issues are introduced due to these changes. The goal is
to determine that the proposed functionality works roughly as
expected. If the sanity test fails, the build is rejected to save the time and costs involved in more rigorous testing.
The objective is "not" to verify the new functionality thoroughly, but to determine that the developer has applied some rationality (sanity) while producing the software. For instance, if your scientific calculator gives the result 2 + 2 = 5, then there is no point testing advanced functionalities like sin 30 + cos 50.
56. Smoke VS Sanity Test
❑ Smoke testing tests all areas of the application without getting into too much depth; sanity testing focuses on one or a small set of areas of functionality of the application/bug fixes.
❑ Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing; sanity testing of the software is done to ensure whether the requirements are met or not.
❑ Smoke testing is like a general health check-up; sanity testing is like a specialized health check-up.
57. State Transition Testing
❑ Exposes defects triggered by moving from one program state to another.
✓ Example: In the case of ATM software, consider the various operations of the ATM, like "Withdraw Cash", "Balance Inquiry" and "Transfer Cash", as different states; the defects that arise from moving from the Menu Selection state to the Withdraw Cash state fall under State Transition Testing.
58. State Transition Testing
❑ A state transition model has four basic parts:
✓ The states that the software may occupy (open/closed or funded/insufficient funds);
✓ The transitions from one state to another (not all transitions are allowed);
✓ The events that cause a transition (withdrawing money, closing a file);
✓ The actions that result from a transition (an error message, or being given your cash).
59. State Transition Testing
❑ Electronic clock example
✓A simple electronic clock has four modes: display time, adjust time, display date and adjust date
✓ The change mode button switches between display time and display date
✓ The reset button switches from display time to adjust time, or from display date to adjust date
✓ The set button returns from adjust time to display time, or from adjust date to display date
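The clock's states, events and transitions can be captured as a transition table; a sketch (state and event names are paraphrased from the slide):

```python
# Transition table: (state, event) -> next state.
# Any (state, event) pair absent from the table is an illegal transition.
TRANSITIONS = {
    ("display_time", "change_mode"): "display_date",
    ("display_date", "change_mode"): "display_time",
    ("display_time", "reset"):       "adjust_time",
    ("display_date", "reset"):       "adjust_date",
    ("adjust_time",  "set"):         "display_time",
    ("adjust_date",  "set"):         "display_date",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in {state!r}")

# Valid path: display time -> adjust time -> display time -> display date
s = "display_time"
for event in ["reset", "set", "change_mode"]:
    s = step(s, event)
assert s == "display_date"

# A disallowed transition must be rejected ('set' while displaying time):
try:
    step("display_time", "set")
except ValueError as e:
    print("caught:", e)
```

Test cases then cover every allowed transition and probe the disallowed ones.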
61. Usability Testing
❑ Usability testing is a technique used to evaluate a product by testing it on users. This
can be seen as an irreplaceable usability practice, since it gives direct input on how
real users use the system.
❑ Usability testing measures the usability, or ease of use, of a specific object or set of
objects.
❑ User interviews, surveys, video recording of user sessions, and other techniques can
be used
62. Usability Testing[18]
❑ The aim is to observe people using the product to discover errors and areas of
improvement. Usability testing generally involves measuring how well test subjects
respond in four areas:
Efficiency -- How much time, and how many steps, are required for people to complete basic tasks? (For
example, find something to buy, create a new account, and order the item.)
Accuracy -- How many mistakes did people make? (And were they fatal or recoverable?)
Recall -- How much does the person remember afterwards or after periods of non-use?
Emotional response -- How does the person feel about the tasks completed? Is the person confident,
stressed? Would the user recommend this system to a friend?
63. Acceptance Testing
❑ It is virtually impossible for a software developer to foresee how the customer will really use
a program
❑ When custom software is built for one customer, a series of acceptance tests are conducted
to enable the customer to validate all requirements
❑ Conducted by the end user rather than software engineers
❑ An acceptance test can range from an informal test drive to a planned and systematically
executed series of tests
❑ Acceptance testing performed by the customer is known as user acceptance testing (UAT),
end-user testing, site (acceptance) testing, or field (acceptance) testing
64. 46
❑In this type of testing, the users are invited at the development center where they use the application
and the developers note every particular input or action carried out by the user. Any type of abnormal
behavior of the system is noted.
❑Alpha tests are conducted in a controlled environment
Alpha Testing
65. 47
❑The beta test is conducted at end user sites. Unlike
alpha testing , the developer is generally not present.
❑Therefore the beta test is a live application of the software
in an environment that cannot be controlled by the developer
❑In this type of testing, the software is handed over to the
user in order to find out if the software meets the user
expectations and works as it is expected to.
❑The end user records all problems that are encountered
during beta testing and reports these to the developer at
regular intervals
❑As a result of problems reported during beta tests, software
engineers make modifications and then prepare for release
of the software product
Beta Testing
66. White-Box Testing
✓White-box testing (also known as clear box testing, glass box
testing, transparent box testing, and structural testing) is a
method of testing software that tests internal structures or
workings of an application, as opposed to its functionality (i.e.
black-box testing)
✓The connotations of “clear box” and “glass box” appropriately
indicate that you have full visibility of the internal workings of the
software product, specifically, the logic and the structure of the
code.
✓In white-box testing an internal perspective of the system, as
well as programming skills, are required and used to design test
cases. The tester chooses inputs to exercise paths through the code
and determine the appropriate outputs
68. White-Box Testing
❑ Advantages
✓ As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
✓ Reveals errors in code.
❑ Disadvantage
✓ As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing.
69. Unit Testing
❑First level of testing.
❑Refers to testing program units in isolation.
❑Since a program unit implements a function, it is natural to test the unit before it is integrated with other units.
70. Unit Testing
❑ Unit testing focuses on verification effort
on the smallest unit of software design
(the software component or module).
❑ Using the component level design
description as a guide, important control
paths are tested to uncover errors within
the boundary of the module.
❑ The unit test is white box oriented and
the step can be conducted in parallel
for multiple components.
71. Dynamic Unit Testing
❑ Dynamic unit testing is execution based testing
❑ As your programs become more complicated, and the number of functions
increases, then what to do?
❑ Losing strategy: Write each function and execute them all together.
✓ It is difficult to debug all the functions at once
✓ Multiple errors interact
❑ Winning strategy: Test each function separately.
✓ Make sure each function works before you test it with other functions.
✓ In the long run, this saves testing and debugging time.
❑ How can you test a function that depends on other functions?
72. Dynamic Unit Testing Environment
❑ An environment for dynamic unit testing is created by emulating the context
of the unit under test
❑ The caller unit is known as a test driver, and all the emulations of the units
called by the unit under test are called stubs
73. Test Driver and Stubs
❑ Test Driver is a software module used to invoke a module under test
and, often, provide test inputs, control and monitor execution, and
report test results (IEEE, 1990)
❑ A piece of code that passes test cases to another piece of code.
❑ The unit under test executes with input values received from the driver
and, upon termination, returns a value to the driver.
❑ The driver compares the actual outcome, that is, the actual value
returned by the unit under test with the expected outcome from the unit
and reports the ensuing test result.
❑ For example, if you wanted to move a Player instance, Player1, two spaces on the board, the driver code would be:
movePlayer(Player1, 2);
❑ This driver code would likely be called from the main method.
74. Test Driver and Stubs
❑ Stub is a “dummy subprogram” that replaces a unit that is called by the unit under
test.
❑ A piece of code that simulates the activity of missing components.
❑ If the function A you are testing calls another function B, then use a simplified version
of function B, called a stub.
❑ A stub returns a value that is sufficient for testing.
❑ The stub does not need to perform the real calculation.
❑ A stub performs two tasks.
✓ First, it shows an evidence that the stub was, in fact, called. Such evidence can be shown by
merely printing a message.
✓ Second, the stub returns a pre-computed value to the caller so that the unit under test can
continue its execution.
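The two tasks can be sketched in Python with a hypothetical price() stub (the function names and values are invented for illustration):

```python
def price(x):
    """Stub standing in for the real price() function (not yet written)."""
    print(f"stub price() called with {x}")   # task 1: evidence it was called
    return 10.00                             # task 2: pre-computed return value

def total_cost(quantity):
    """Unit under test: depends on price(), which the stub replaces."""
    return quantity * price(quantity)

assert total_cost(3) == 30.00
```

The unit under test runs to completion even though its dependency does no real calculation.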
75. Test Driver and Stubs[6]
void function_under_test(int& x, int& y)
{
    ...
    p = price(x);
    ...
}

// Stub
double price(int x) { return 10.00; }

❑The value returned by function price is good enough for testing.
❑The real price() function may not yet have been tested, or even written.
❑Stubs and drivers are often viewed as throwaway code (Kaner, Falk et al., 1999). However, they do not have to be thrown away: Stubs can be "filled in" to form the actual method. Drivers can become automated test cases.