Software Testing(MCA 513)
MCA V Sem
Unit-I: Introduction
Topic: Basics of Software Testing
What Is Software Testing?
Software testing is the process of finding errors in a developed product. It also checks whether
the actual outcomes match the expected results, and aids in the identification of defects,
missing requirements, or gaps.
Testing is the penultimate step before the launch of the product to the market. It includes
examination, analysis, observation, and evaluation of different aspects of a product.
Professional Software Testers use a combination of manual testing with automated tools. After
conducting tests, the testers report the results to the development team. The end goal is to deliver
a quality product to the customer, which is why software testing is so important.
Importance of Software Testing :
It’s common for many startups to skip testing, often citing budget constraints as the reason they
overlook such an important step. But to make a strong and positive first impression, testing
a product for bugs is a must.
Similarly, established organizations need to maintain their client base. So they have to ensure the
delivery of flawless products to the end-user. Let’s take a look at some points and see why
software testing is vital to good software development.
1.Test Automation Enhances Product Quality
An enterprise can bring value to the customer only when the product delivered is ideal. To
achieve that, an organization focuses on testing applications and fixing the bugs that testing
reveals before releasing the product. When the team resolves issues before the product reaches
the customer, the quality of the deliverable increases.
2.Test Automation Improves Security
When customers use the software, they are bound to reveal some sort of personal information.
To prevent hackers from getting hold of this data, security testing is a must before the software is
released. When an organization follows a proper testing process, it ensures a secure product that
in turn makes customers feel safe while using the product. For instance, banking applications or
e-commerce stores need payment information. If the developers don’t fix security-related bugs, it
can cause massive financial loss.
3.Test Automation Detects Compatibility With Different Devices and Platforms
The days are gone when customers worked exclusively on hefty desktop models. In the mobile-
first age, testing a product’s device compatibility is a must. For instance, let’s say your
organization developed a website. The tester must check whether the website runs on different
device resolutions, and it should also run on different browsers.
Another reason why testing is gaining more importance is ever-increasing browser options. What
works fine on Chrome may not run well on Safari or Internet Explorer. This gives rise to the
need for cross-browser testing, which includes checking the compatibility of the application on
different browsers.
References:
1. Zhen Ming (Jack) Jiang, EECS 4413: Software Testing, York University.
2. in4ce Education Solutions Pvt. Ltd.
3. Pham, H. (1999). Software Reliability. John Wiley & Sons, Inc. p. 567. ISBN 9813083840.
("Software validation: the process of ensuring that the software is performing the right process.
Software verification: the process of ensuring that the software is performing the process right."
Boehm expressed the difference as, Verification: "Are we building the product right?"
Validation: "Are we building the right product?")
4. GeeksforGeeks.
Compiled By:
Ms. Savita Mittal
DCA
Topic: Principles Of Testing
1) Testing shows presence of defects: Testing can show that defects are present, but it cannot
prove that there are no defects. Even after testing the application or product thoroughly, we
cannot say that the product is 100% defect-free. Testing always reduces the number of
undiscovered defects remaining in the software, but even if no defects are found, that is not a
proof of correctness.
2) Exhaustive testing is impossible: Testing everything, including all combinations of inputs
and preconditions, is not possible. So, instead of exhaustive testing, we can use risks
and priorities to focus testing efforts. For example: if one screen of an application has 15
input fields, each having 5 possible values, then to test all the valid combinations you would
need 30,517,578,125 (5^15) tests. It is very unlikely that the project timescales would allow for
this number of tests. So, assessing and managing risk is one of the most important activities,
and a reason for testing, in any project.
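The arithmetic behind this example can be checked directly; a minimal sketch in Python:

```python
# 15 independent input fields, 5 possible values each:
# exhaustive testing would require 5^15 combinations.
fields = 15
values_per_field = 5
combinations = values_per_field ** fields
print(combinations)  # 30517578125
```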
3) Early testing: In the software development life cycle testing activities should start as early
as possible and should be focused on defined objectives.
4) Defect clustering: A small number of modules usually contain most of the defects discovered
during pre-release testing, or show the most operational failures.
5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the
same set of test cases will no longer find any new bugs. To overcome this "pesticide paradox",
it is important to review the test cases regularly, and to write new and different tests that
exercise different parts of the software or system, to potentially find more defects.
6) Testing is context dependent: Testing is basically context dependent. Different kinds of
software are tested differently. For example, safety-critical software is tested differently from
an e-commerce site.
7) Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user’s
needs and expectations, then finding and fixing defects does not help.
Software Testing Requirement:
Software testing is very important because of the following reasons:
1. Software testing is required to point out the defects and errors that were made during the
development phases.
Example: Programmers may make a mistake during the implementation of the software. There could
be many reasons for this, such as lack of experience of the programmer, lack of knowledge of the
programming language, insufficient experience in the domain, incorrect implementation of the
algorithm due to complex logic, or simply human error.
• If the customer does not find the testing organization reliable or is not satisfied with the
quality of the deliverable, they may switch to a competitor organization.
• Sometimes contracts may also include monetary penalties with respect to the timeline and
quality of the product. In such cases, proper software testing may prevent monetary losses.
2. It is essential because it ensures that the customer finds the organization reliable and that
their satisfaction with the application is maintained. It is very important to ensure the quality
of the product: a quality product delivered to the customers helps in gaining their confidence,
and delivering a good-quality product on time builds the customers’ confidence in the team and
the organization.
3. Testing is necessary in order to deliver a high-quality product or software application that
requires lower maintenance cost and hence gives more accurate, consistent, and reliable results.
• A high-quality product typically has fewer defects and requires less maintenance effort, which
in turn means reduced costs.
4. Testing is required for the effective performance of a software application or product.
5. It is important to ensure that the application does not result in failures, because these can
be very expensive in the later stages of development.
• Proper testing ensures that bugs and issues are detected early in the life cycle of the product
or application.
• If defects related to requirements or design are detected late in the life cycle, they can be
very expensive to fix, since this might require redesign, re-implementation, and retesting of
the application.
6. It is required to stay in business.
• Users are not inclined to use software that has bugs, and may not adopt a software product if
they are not happy with its stability.
• In the case of a product organization or startup with only one product, poor quality may
result in lack of adoption of the product, and this may result in losses from which the business
may not recover.
Lecture : 4 (29/9/2020)
Behavior and Correctness :
Correctness from software engineering perspective can be defined as the adherence to the
specifications that determine how users can interact with the software and how the software
should behave when it is used correctly.
If the software behaves incorrectly, it might take a considerable amount of time to achieve the
task, or sometimes it is impossible to achieve it.
Important rules:
Below are some of the important rules for effective programming which are consequences of
the program correctness theory.
 Define the problem completely.
 Develop the algorithm first, and then the program logic.
 Reuse proven models as much as possible.
 Prove the correctness of algorithms during the design phase.
 Pay attention to the clarity and simplicity of the program.
 Verify each part of a program as soon as it is developed.
Testing and Debugging:
Testing:
Testing is the process of verifying and validating that a software application is bug-free, meets
the technical requirements set by its design and development, and meets the user requirements
effectively and efficiently, handling all exceptional and boundary cases.
Debugging:
Debugging is the process of fixing a bug in the software. It can be defined as identifying,
analyzing, and removing errors. This activity begins after the software fails to execute properly
and concludes by solving the problem and successfully testing the software. It is considered an
extremely complex and tedious task because errors need to be resolved at all stages of
debugging.
Software Testing Metrics & Measurements:
In software projects, it is extremely important to measure the quality, cost, and effectiveness of the
project and the processes. Without measuring these, a project can’t head towards successful
completion.
What are software testing metrics? A software testing metric is defined as a quantitative measure
that helps to estimate the progress and quality of a software testing process. A metric is the
degree to which a system or its component possesses a specific attribute.
Topic: Software Testing Metrics
Now that we have grasped the concept of the life cycle, let us move ahead to understanding the
method to calculate test metrics.
Method to calculate test metrics
Here is a method to calculate software test metrics. There are certain steps that you must follow:
1. Identify the key software testing processes to be measured.
2. Use the data gathered from these processes as the base to define the metrics.
3. Determine the information to be tracked, the frequency of tracking, and the person responsible
for the task.
4. Calculate, manage, and interpret the defined metrics effectively.
5. Identify areas of improvement depending on the interpretation of the defined metrics.
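The calculation step can be illustrated with a few commonly used derived metrics; the raw counts
and metric formulas below are illustrative assumptions, not figures from the source:

```python
# Illustrative raw data from one test cycle (assumed numbers).
executed, passed, failed = 180, 162, 18
defects_reported, defects_fixed = 45, 40

# Commonly used derived metrics.
pass_percentage = passed / executed * 100                 # 90.0
fail_percentage = failed / executed * 100                 # 10.0
defect_fix_rate = defects_fixed / defects_reported * 100  # ~88.9

print(f"Pass %: {pass_percentage:.1f}")
print(f"Fail %: {fail_percentage:.1f}")
print(f"Defect fix rate %: {defect_fix_rate:.1f}")
```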
Verification, Validation and Testing:
Verification and validation are not the same thing, although they are often confused. Boehm
succinctly expressed the difference as [1]:
• Verification: Are we building the product right?
• Validation: Are we building the right product?
"Building the product right" checks that the specifications are correctly implemented by the
system, while "building the right product" refers back to the user's needs. In some contexts, it
is required to have written requirements for both, as well as formal procedures or protocols for
determining compliance.
Building the product right implies the use of the Requirements Specification as input for the
next phase of the development process, the design process, the output of which is the Design
Specification. It also implies the use of the Design Specification to feed the construction
process. Every time the output of a process correctly implements its input specification, the
software product is one step closer to final verification. If the output of a process is
incorrect, the developers are not correctly building the product the stakeholders want. This
kind of verification is called "artifact or specification verification".
Building the right product implies creating a Requirements Specification that contains the needs
and goals of the stakeholders of the software product. If such an artifact is incomplete or
wrong, the developers will not be able to build the product the stakeholders want. This is a
form of "artifact or specification validation".
Note: Verification begins before validation, and then they run in parallel until the software
product is released.
Software verification
Software verification cannot be done simply by running the software (e.g., how could anyone know
whether the architecture or design is correctly implemented just by running it?). Only by
reviewing the associated artifacts can someone conclude whether the specifications are met.
Artifact or specification verification
The output of each software development process stage can also be subject to verification when
checked against its input specification.
Examples of artifact verification:
• Of the design specification against the requirement specification: Do the architectural
design, detailed design, and database logical model specifications correctly implement the
functional and non-functional requirement specifications?
• Of the construction artifacts against the design specification: Do the source code, user
interfaces, and database physical model correctly implement the design specification?
Software validation
Software validation checks that the software product satisfies or fits the intended use
(high-level checking), i.e., that the software meets the user requirements: not only as
specification artifacts or as the needs of those who will operate the software, but as the needs
of all the stakeholders (such as users, operators, administrators, managers, investors, etc.).
There are two ways to perform software validation: internal and external. During internal
software validation, it is assumed that the goals of the stakeholders were correctly understood
and that they were expressed in the requirement artifacts precisely and comprehensively. If the
software meets the requirement specification, it has been internally validated. External
validation happens when it is performed by asking the stakeholders whether the software meets
their needs. Different software development methodologies call for different levels of user and
stakeholder involvement and feedback, so external validation can be a discrete or a continuous
event. Successful final external validation occurs when all the stakeholders accept the software
product and express that it satisfies their needs. Such final external validation requires the
use of an acceptance test, which is a dynamic test.
However, it is also possible to perform internal static tests to find out whether the software
meets the requirements specification, but that falls within the scope of static verification,
because the software is not running.
Artifact or specification validation
Requirements should be validated before the software product as a whole is ready (the waterfall
development process requires them to be perfectly defined before design starts, but iterative
development processes do not require this and allow their continual improvement).
Examples of artifact validation:
• User Requirements Specification validation: User requirements, as stated in a document called
the User Requirements Specification, are validated by checking whether they indeed represent the
will and goals of the stakeholders. This can be done by interviewing them and asking them
directly (static testing), or even by releasing prototypes and having the users and stakeholders
assess them (dynamic testing).
• User input validation: User input (gathered by any peripheral such as a keyboard, biometric
sensor, etc.) is validated by checking whether the input provided by the software operators or
users meets the domain rules and constraints (such as data type, range, and format).
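A user-input check of the kind described (data type, range, and format rules) might be sketched
as follows; the field names, the age range, and the email pattern are hypothetical choices for
illustration:

```python
import re

def validate_age(raw: str) -> int:
    """Validate a hypothetical age field: numeric type, range 0-120."""
    if not raw.isdigit():                 # data type rule
        raise ValueError("age must be a whole number")
    age = int(raw)
    if not 0 <= age <= 120:               # range rule
        raise ValueError("age must be between 0 and 120")
    return age

def validate_email(raw: str) -> str:
    """Validate a hypothetical email field against a simple format rule."""
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", raw):  # format rule
        raise ValueError("invalid email format")
    return raw
```

Rejecting bad input with an exception (rather than silently correcting it) keeps the domain
rules in one place and makes failures easy to test.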
Lecture : 5 (30/9/2020)
Importance of software testing metrics
Software testing metrics are important for several reasons. I have listed a few of them
below:
● It helps you to take a decision for the next phase of activities
● It is evidence of the claim or prediction
● It helps you to understand the type of improvement required
● It eases the process of decision making or technology change
Types of software testing metrics:
Enlisting them below:
● Process Metrics
● Product Metrics
● Project Metrics
Process Metrics: It is used to improve the efficiency of the process in the SDLC (Software
Development Life Cycle).
Product Metrics: It is used to track the quality of the software product.
Project Metrics: It measures the efficiency of the team working on the project along with the
testing tools used.
Life-cycle of software testing metrics
Analysis: It is responsible for the identification of metrics as well as the definition.
Communicate: It helps in explaining the need and significance of metrics to stakeholders and
testing team. It educates the testing team about the data points that need to be captured for
processing the metric.
Evaluation: It helps in capturing the needed data. It also verifies the validity of the captured data
and calculates the metric value.
Reports: It develops the report with an effective conclusion. It distributes the reports to the
stakeholders, developers and the testing teams.
Now that we have grasped the concept of the life cycle, let us move ahead to understanding the
method to calculate test metrics.
Lecture : 7 (5/10/2020)
Types of Software Testing (Advanced)
According to the nature and scope of an application, there are different types of software testing.
This is because not all testing procedures suit all products. And every type has its pros and cons.
1. Unit Testing
It focuses on the smallest unit of software design. Here we test an individual unit or a group of
interrelated units. It is often done by the programmer, using sample inputs and observing the
corresponding outputs.
Example:
a) Checking that a loop, method, or function in a program works correctly
b) Misunderstood or incorrect arithmetic precedence
c) Incorrect initialization
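Example (a) can be made concrete with Python's built-in unittest module; the discount_price
function under test is a made-up example, not from the source:

```python
import unittest

def discount_price(price: float, percent: float) -> float:
    """The unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Guards against the incorrect-initialization class of defect.
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

# argv is trimmed and exit=False so the suite also runs outside a shell.
unittest.main(argv=["first-arg-is-ignored"], exit=False)
```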
2. Integration Testing
The objective is to take unit-tested components and build a program structure that has been
dictated by design. Integration testing is testing in which a group of components is combined to
produce output.
Integration testing is of four types: (i) top-down, (ii) bottom-up, (iii) sandwich, (iv) big-bang.
Example:
(a) Black-box testing: used for validation. Here we ignore the internal working mechanism and
focus on what the output is.
(b) White-box testing: used for verification. Here we focus on the internal mechanism, i.e., how
the output is achieved.
3. Regression Testing
Every time a new module is added, it leads to changes in the program. This type of testing makes
sure that the whole component works properly even after adding components to the complete
program.
Example:
In a school record system, suppose we have the modules staff, students, and finance. Combining
these modules and checking whether they work fine after integration is regression testing.
4. Smoke Testing
This test is done to make sure that the software under test is ready or stable enough for further
testing.
It is called a smoke test because the initial pass checks that the software did not "catch fire
or smoke" when first switched on.
Example:
If a project has two modules, then before going to module 2, make sure that module 1 works
properly.
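The module-1-before-module-2 idea is often automated as a tiny smoke suite that only answers
"is the build stable enough for deeper testing?"; the check functions here are hypothetical
stand-ins:

```python
# Each smoke check answers a coarse yes/no question; any failure
# means the build is rejected before detailed testing begins.
def module_1_loads() -> bool:
    # Hypothetical stand-in for launching module 1.
    return True

def module_2_loads() -> bool:
    # Hypothetical stand-in for launching module 2.
    return True

def run_smoke_suite() -> bool:
    checks = [module_1_loads, module_2_loads]
    return all(check() for check in checks)

if run_smoke_suite():
    print("Smoke test passed: proceed to full testing")
else:
    print("Smoke test failed: reject the build")
```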
5. Alpha Testing
This is a type of validation testing. It is a type of acceptance testing which is done before the
product is released to customers. It is typically done by QA people.
Example:
When software testing is performed internally within the organization.
6. Beta Testing
The beta test is conducted at one or more customer sites by the end-users of the software. This
version is released to a limited number of users for testing in a real-time environment.
Example:
When software testing is performed by a limited number of people.
7. System Testing
Here the software is tested to ensure that it works correctly on different operating systems. It
is covered under the black-box testing technique: we focus only on the required inputs and
outputs, without focusing on internal working.
System testing includes security testing, recovery testing, stress testing, and performance
testing.
Example:
This includes functional as well as non-functional testing.
8. Stress Testing
Here we give unfavorable conditions to the system and check how it performs under those
conditions.
Example:
(a) Test cases that require maximum memory or other resources are executed
(b) Test cases that may cause thrashing in a virtual operating system
(c) Test cases that may cause excessive disk requirements
9. Performance Testing
It is designed to test the run-time performance of software within the context of an integrated
system. It is used to test the speed and effectiveness of a program.
Example:
Checking the number of processor cycles.
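A minimal speed check of this kind can be written with Python's time.perf_counter; the workload
and the 0.5-second budget are illustrative assumptions:

```python
import time

def work():
    # Illustrative workload: sum the first million integers.
    return sum(range(1_000_000))

start = time.perf_counter()
result = work()
elapsed = time.perf_counter() - start

# Hypothetical performance budget for this operation.
assert elapsed < 0.5, f"too slow: {elapsed:.3f}s"
print(f"work() took {elapsed:.4f}s, result={result}")
```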
Quality and Reliability:
Quality is a much broader concept than reliability. Quality covers almost everything:
organization, management, service, procedures, people, product, product life, etc. In simple
words, reliability is only a subset of quality, the others being performance, consistency,
efficiency, user experience, etc., whereas quality is the measure of conformance to the laid-down
product specification. Let’s simplify it to understand the difference between quality and
reliability.
We will take an example. Suppose you went to a shop to buy oranges. Whether the oranges are good
or bad is a measure of quality. You took the oranges that day and returned home. The next day you
visited the same shop again to buy oranges and found that the oranges available there were good
again. You purchased them and returned home. You repeated the same process for one week, and
every time you got good oranges. Now you trust that shop to always give you good oranges, and
that’s reliability.
Now we can say that:
 Quality is the present (today), while reliability is the future.
 Quality can be controlled and measured accurately, while reliability is only a probability. We
can ensure reliability by controlling quality.
 Quality is everything until the product is put into operation (i.e., t = 0 hrs), while
reliability is everything that happens after t = 0 hrs. In other words, a product’s quality is
measured prior to the customer’s initial product use, whereas product reliability is measured
during and after the customer’s product use.
 Quality is the measure of conformance to the laid-down product specification.
 Quality is a static measure of a product meeting its specification, whereas reliability is a
dynamic measure of product performance.
 Quality is observed, whereas reliability is experienced.
 You buy based upon quality; you come back and buy again based upon reliability.
 A poor-quality system can have better reliability, and a good-quality system can have poor
reliability.
 Once the design is over, the maximum system reliability is fixed, and through quality assurance
what we can achieve is the design reliability.
 Quality is one among many parameters that ensure better reliability.
Lecture : 8 (5/10/2020)
Defect Tracking:
Defect tracking has long been used to measure software quality, with an emphasis on finding as
many bugs as possible early in the development cycle. While finding bugs early is still an
accepted goal in development circles, many agile teams have moved away from the idea of defect
tracking, claiming that the process creates unnecessary overhead and prevents testers and
developers from communicating effectively. There's some truth in that, but there are ways to get
the benefits of defect tracking without the drawbacks.
Advantages of defect tracking
There's no shortage of tools when it comes to defect tracking. You can find tools to track non-
technical issues, customer-facing tools for production-related defects, and internal tools the
development team can use to track defects. In fact, even if you're just using sticky notes, email,
spreadsheets, or a log on a wiki to track customer issues, you'll need defect tracking of some sort.
It's just a matter of figuring out the right tools and processes for the team.
Defect tracking helps you ensure that bugs found in the system actually get fixed. Sure, it's
great for testers and developers to have a conversation and to recreate the problem together. If
the problem gets fixed immediately, great! Maybe it doesn't need to be logged. But if it just
gets put down somewhere on a mental to-do list, there's a good chance it'll slip through the
cracks.
Defect tracking tools not only provide a way to ensure follow-through but also provide valuable
metrics. Depending on the tool being used, the team can tie defects to changed code, tests, or
other data that will allow for traceability or analysis on defect trends. If a certain module is
riddled with defects, it may be time to review and rewrite that module.
Defect tracking tools allow for a repository of documentation that will provide value for
troubleshooters or for support personnel later on if there's a workaround for an issue. Having a
tool in place also sends notifications to the right people when a bug needs to be fixed, tested,
or marked as resolved.
Disadvantages of defect tracking
Many of the disadvantages of defect tracking have more to do with the overhead of the processes
and tools than the idea of defect tracking itself. Some organizations use multiple tools to track
defects of different types, and those tools often don't integrate well with one another. The team
ends up with the same defect documented in multiple places with slightly different descriptions:
perhaps first from a user's perspective, and then from a technical perspective in the internal
bug-tracking system.
Complications can arise out of confusion over descriptions, lack of information, tools that are
overly cumbersome and require mandatory fields for which the user doesn't have the answers, and
difficulty in reporting. Sometimes the process overhead required to open a bug is more
time-consuming than simply fixing the bug.
If every low-priority defect is tracked, the defect report will include many bugs that will never be
fixed because fixing them doesn't provide a high enough ROI. However, keeping them in a
defect report as "open" will require constant reanalysis and may imply poor quality, when in fact
these defects aren't issues customers care about.
Though defect-tracking tools help in storing documentation, they may actually hinder
communication if they prevent team members from talking and collaborating. If code is thrown
"over the wall" from development to test and the bugs are thrown back to development, when the
only communication about defects is done through a tool, there are bound to be
misunderstandings. This results in the classic line from developers, "it works on my machine,"
along with closure as "user error." That leads to a reopened bug and an under-the-breath
expletive from the tester.
The best solution for the team
Though agile development promotes "individuals and interactions over processes and tools,"
processes and tools are still important. However, it's important for teams to avoid replacing
strong communication with poor processes and cumbersome tools.
Some agile teams document their defects by creating user stories. Itay Ben-Yehuda
recommends against this, claiming that these defect-related stories will get lost in the backlog
and take a backseat to new feature work.
Other agile teams may take the approach that bugs must be fixed immediately and thus don't
need to be tracked. "If the defect is important enough to fix, then fix it. Storing it only prolongs
the pain and possibility of repercussion later," says Amy Reichert.
Many agile teams take the approach that if a bug is found and fixed in code being developed
during the current sprint, they don't need to track it. Otherwise, it needs to be tracked.
Clearly, the usability of your defect-tracking tool is a big factor in how successful your team will
be in using the tool to its advantage. Often, the team doesn't have a choice in the tools they use or
the standards set by the governance team. How much flexibility they have depends on the
organization. However, if a tool has been selected, it would be advantageous for someone on the
team to become a super-user and take advantage of configuring the tool to make it as
user-friendly for the team as possible, perhaps by adding templates or canned reports.
Teams need to work together to decide when defects will be tracked, as well as discuss any
related processes or guidelines, such as how much information must be tracked for each ticket.
Optimize these processes for communication, flow, and quality. Continually evaluate during
retrospectives and adapt your processes over time.
Every software development team needs to have a process for how to handle defects. Whether
your tools are as simple as sticky notes or as complex as an integrated application lifecycle
management tool suite, decide as a team how to use the tools and create the processes that will
work best for you.
References:
1. Zhen Ming (Jack) Jiang - EECS 4413, Software Testing, York University.
2. in4ce Education Solutions Pvt. Ltd.
3. Pham, H. (1999). Software Reliability. John Wiley & Sons, Inc. p. 567. ISBN 9813083840:
"Software Validation. The process of ensuring that the software is performing the right process.
Software Verification. The process of ensuring that the software is performing the process right."
Likewise, Boehm expressed the difference between software verification and software validation
as follows: Verification: "Are we building the product right?" Validation: "Are we building the
right product?"
4. GeeksforGeeks
Compiled By:
Ms. Savita Mittal
Asst. Professor,
DCA
Software Testing(MCA 513)
MCA V Sem
Lecture 9
Topic: Test Cases, Test Suites & Test Oracles
Test case
In software engineering, a test case is a specification of the inputs, execution conditions, testing
procedure, and expected results that define a single test to be executed to achieve a particular
software testing objective, such as to exercise a particular program path or to verify compliance
with a specific requirement. Test cases underlie testing that is methodical rather than haphazard.
A battery of test cases can be built to produce the desired coverage of the software being tested.
Formally defined test cases allow the same tests to be run repeatedly against successive versions
of the software, allowing for effective and consistent regression testing.
Formal test cases
In order to fully test that all the requirements of an application are met, there must be at least two
test cases for each requirement: one positive test and one negative test. If a requirement has sub-
requirements, each sub-requirement must have at least two test cases. Keeping track of the link
between the requirement and the test is frequently done using a traceability matrix. Written test
cases should include a description of the functionality to be tested, and the preparation required
to ensure that the test can be conducted.
A formal written test case is characterized by a known input and by an expected output, which is
worked out before the test is executed. The known input should test a precondition and the
expected output should test a postcondition.
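A formal test case can be captured directly in code. The sketch below (in Python; the `apply_discount` function and its rules are invented purely for illustration) pairs a known input with an expected output worked out beforehand, and shows one positive and one negative test:

```python
# Hypothetical function under test (an assumption for illustration):
# applies a discount rate in [0, 1] to a price, rounded to cents.
def apply_discount(price, rate):
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

def run_test_case(func, args, expected, expect_error=False):
    """Execute one formal test case: known input, expected output,
    and a verdict decided by comparing actual against expected."""
    try:
        actual = func(*args)
    except Exception:
        return "PASS" if expect_error else "FAIL"
    if expect_error:
        return "FAIL"
    return "PASS" if actual == expected else "FAIL"

# Positive test: valid precondition, expected postcondition.
verdict_pos = run_test_case(apply_discount, (100.0, 0.2), 80.0)
# Negative test: an invalid precondition must be rejected.
verdict_neg = run_test_case(apply_discount, (100.0, 1.5), None, expect_error=True)
```

Because input and expected output are fixed before execution, the same pair can be re-run unchanged against every new version of the software, which is what makes regression testing repeatable.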
Informal test cases
For applications or systems without formal requirements, test cases can be written based on the
accepted normal operation of programs of a similar class. In some schools of testing, test cases
are not written at all but the activities and results are reported after the tests have been run.
In scenario testing, hypothetical stories are used to help the tester think through a complex
problem or system. These scenarios are usually not written down in any detail. They can be as
simple as a diagram for a testing environment or they could be a description written in prose. The
ideal scenario test is a story that is motivating, credible, complex, and easy to evaluate. Scenarios
are usually different from test cases in that test cases are single steps while scenarios cover a
number of steps.[5][6]
Typical written test case format
A test case is usually a single step, or occasionally a sequence of steps, to test the correct
behaviour/functionality, features of an application. An expected result or expected outcome is
usually given.
Additional information that may be included
 Test Case ID - This field uniquely identifies a test case.
 Test Case Description/Summary - This field describes the test case objective.
 Test Steps - This field lists the exact steps for performing the test case.
 Pre-requisites - This field specifies the conditions or steps that must be satisfied before the test
steps are executed.
 Depth
 Test category
 Author - Name of the tester.
 Automation - Whether this test case is automated or not.
 Pass/Fail
 Remarks
Larger test cases may also contain prerequisite states or steps, and descriptions.[8]
A written test case should also contain a place for the actual result.
These steps can be stored in a word processor document, spreadsheet, database or other common
repository.
In a database system, you may also be able to see past test results, who generated the results,
and the system configuration used to generate those results. These past results would usually be
stored in a separate table.
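A written test case of this kind is just a structured record, so it maps naturally onto a spreadsheet-style repository. The sketch below (Python; the field names follow the list above and the values are invented for illustration) stores one record as a CSV row:

```python
import csv
import io

# One test-case record using the fields listed above (values are examples).
test_case = {
    "Test Case ID": "TC-101",
    "Description": "Verify login with valid credentials",
    "Pre-requisites": "User account exists",
    "Test Steps": "1. Open login page; 2. Enter credentials; 3. Submit",
    "Expected Result": "User is redirected to the dashboard",
    "Actual Result": "",   # filled in after the test is executed
    "Automation": "No",
    "Pass/Fail": "",
    "Remarks": "",
}

# Serialize to CSV, the simplest spreadsheet-compatible repository format.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(test_case))
writer.writeheader()
writer.writerow(test_case)
csv_text = buf.getvalue()
```

The same dictionary could just as easily be written to a database table, with past results kept in a separate results table keyed by the test case ID.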
Test suites often also contain
 Test summary
 Configuration
Besides a description of the functionality to be tested and the preparation required to ensure that
the test can be conducted, the most time-consuming part of working with test cases is creating
them and modifying them when the system changes.
Under special circumstances, there may be a need to run the test, produce results, and then have a
team of experts evaluate whether the results can be considered a pass. This happens often when
determining performance numbers for new products. The first test is taken as the baseline for
subsequent test and product release cycles.
Acceptance tests, which use a variation of a written test case, are commonly performed by a
group of end-users or clients of the system to ensure the developed system meets the
requirements specified or the contract.[10][11] User acceptance tests are differentiated by the
inclusion of happy-path or positive test cases, to the almost complete exclusion of negative test
cases.
Testing suite
In software development, a test suite, less commonly known as a 'validation suite', is a collection
of test cases that are intended to be used to test a software program to show that it has some
specified set of behaviours. A test suite often contains detailed instructions or goals for each
collection of test cases and information on the system configuration to be used during testing. A
group of test cases may also contain prerequisite states or steps, and descriptions of the following
tests.
Collections of test cases are sometimes incorrectly termed a test plan, a test script, or even a test
scenario.
Types
Occasionally, test suites are used to group similar test cases together. A system might have a
smoke test suite that consists only of smoke tests or a test suite for some specific functionality in
the system. It may also contain all tests and signify if a test should be used as a smoke test or for
some specific functionality.
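With a framework such as Python's unittest, this grouping can be done in code: a smoke suite is assembled from a subset of the available test cases. A minimal sketch (the test class and its methods are placeholders, not real checks):

```python
import unittest

class LoginTests(unittest.TestCase):
    def test_login_page_loads(self):
        # Quick, critical check: suitable for the smoke suite.
        self.assertTrue(True)

    def test_password_reset_flow(self):
        # Deeper functional check: runs in the full suite only.
        self.assertTrue(True)

def smoke_suite():
    # Group only the quick, critical checks into a smoke test suite.
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_login_page_loads"))
    return suite

# Run just the smoke suite, not every test in the system.
result = unittest.TextTestRunner(verbosity=0).run(smoke_suite())
```

The full system suite would simply add the remaining test cases; the smoke suite stays a fast subset run on every build.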
In model-based testing, one distinguishes between abstract test suites, which are collections of
abstract test cases derived from a high-level model of the system under test, and executable test
suites, which are derived from abstract test suites by providing the concrete, lower-level details
needed to execute this suite by a program.[1]
An abstract test suite cannot be directly used on the
actual system under test (SUT) because abstract test cases remain at a high abstraction level and
lack concrete details about the SUT and its environment. An executable test suite works on a
sufficiently detailed level to correctly communicate with the SUT and a test harness is usually
present to interface the executable test suite with the SUT.
A test suite for a primality testing subroutine might consist of a list of numbers and their primality
(prime or composite), along with a testing subroutine. The testing subroutine would supply each
number in the list to the primality tester, and verify that the result of each test is correct.
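The primality example above can be written out directly. A minimal Python sketch, with trial division standing in for the primality tester under test:

```python
def is_prime(n):
    # Subroutine under test: trial division up to sqrt(n).
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# The test suite: numbers paired with their known primality.
suite = [(2, True), (3, True), (4, False), (17, True), (18, False), (1, False)]

def run_suite(suite):
    # Testing subroutine: supply each number to the primality tester
    # and collect any cases where the result is wrong.
    return [(n, expected) for n, expected in suite
            if is_prime(n) != expected]

failures = run_suite(suite)
```

An empty `failures` list means every test in the suite passed; any entries identify exactly which inputs the subroutine got wrong.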
References:
 IEEE: SWEBOK: Guide to the Software Engineering Body of Knowledge
 Carlo Ghezzi, Mehdi Jazayeri, Dino Mandrioli: Fundamentals of Software Engineering, Prentice
Hall, ISBN 0-13-099183-X
 Alan L. Breitler: A Verification Procedure for Software Derived from Artificial Neural Networks,
Journal of the International Test and Evaluation Association, Jan 2004, Vol 25, No 4.
 Vijay D'Silva, Daniel Kroening, Georg Weissenbacher: A Survey of Automated Techniques for
Formal Software Verification. IEEE Trans. on CAD of Integrated Circuits and Systems 27(7):
1165-1178 (2008)
Compiled By:
Ms. Savita Mittal
DCA
Software Testing- MCA 513
Unit I (Contd..)
Lecture 10 (9/10/2020)
Test Oracles
A test oracle is a mechanism, separate from the program itself, that can be used to check the
correctness of a program's output for test cases. Conceptually, we can view testing as a process in
which the test cases are given both to the oracle and to the program under test. The two outputs
are then compared to determine whether the program behaved correctly for those test cases. This
is shown in the figure.
Test oracles are required for testing. Ideally, we want an automated oracle, which always
gives the correct answer. However, oracles are often human beings, who mostly work out by
hand what the output of the program should be. As it is often very difficult to determine whether
observed behavior corresponds to the expected behavior, human oracles may make mistakes.
Consequently, when there is a discrepancy between the program's result and the oracle's result,
we must verify the result produced by the oracle before declaring that there is a defect in the
program.
Human oracles typically use the program's specifications to decide what the correct behavior
of the program should be. To help the oracle determine the correct behavior, it is important that
the behavior of the system or component be explicitly specified and that the specification itself be
error-free; in other words, it must actually specify the true and correct behavior.
There are some systems where oracles are generated automatically from the specifications of
programs or modules. With such oracles, we are assured that the output of the oracle conforms to
the specifications. However, even this approach does not solve all our problems, as there is still a
possibility of errors in the specifications. An oracle generated from the specifications will
produce correct results only if the specifications are correct, and will not be reliable when the
specifications contain errors. In addition, systems that generate oracles from specifications require
formal specifications, which are often not produced during design.
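As a small sketch of an automated oracle: below, the program under test is a hand-written insertion sort, and the oracle is an independent, trusted implementation (Python's built-in `sorted`). The choice of sorting is illustrative only; the point is that the oracle's output is produced separately from the program's and the two are compared.

```python
import random

def my_sort(xs):
    # Program under test: a hand-written insertion sort.
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def oracle(xs):
    # Automated oracle: an independent, trusted implementation
    # supplies the expected output for any test case.
    return sorted(xs)

# Run many test cases; flag any discrepancy between program and oracle.
random.seed(0)
for _ in range(100):
    case = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert my_sort(case) == oracle(case), f"mismatch on input {case}"
```

Note the caveat from the text still applies: if the oracle itself is wrong (here, if `sorted` did not match the intended specification), a discrepancy would have to be investigated on both sides before declaring a defect.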
Software verification
Dynamic verification (Test, experimentation)
Dynamic verification is performed during the execution of software, and dynamically checks its
behavior; it is commonly known as the test phase. Depending on the scope of the tests, we can
categorize them into three families:
 Test in the small: a test that checks a single function or class (Unit test)
 Test in the large: a test that checks a group of classes, such as
o Module test (a single module)
o Integration test (more than one module)
o System test (the entire system)
 Acceptance test: a formal test defined to check acceptance criteria for a software
o Functional test
o Non-functional test (performance, stress test)
The aim of software dynamic verification is to find the errors introduced by an activity (for
example, medical software that analyzes bio-chemical data), or by the repetitive performance of
one or more activities (such as a stress test for a web server, i.e., checking that the current output
of the activity is as correct as it was at the beginning of the activity).
Static verification (Analysis)
Static verification is the process of checking that software meets requirements by inspecting the
code before it runs. For example:
 Code conventions verification
 Bad practices (anti-pattern) detection
 Software metrics calculation
 Formal verification
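As a small illustration of static verification, the sketch below checks a code convention (lowercase function names, an assumed house rule, not a universal standard) by inspecting source text with Python's `ast` module, without ever executing the code:

```python
import ast

# Example source to be checked statically (never executed).
SOURCE = """
def computeTotal(x):
    return x * 2

def compute_tax(x):
    return x * 0.1
"""

def bad_function_names(source):
    # Parse the source into a syntax tree and flag function names
    # that contain uppercase letters (violating the assumed convention).
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef) and not node.name.islower()]

violations = bad_function_names(SOURCE)
```

Real static-analysis tools extend this idea from naming conventions to anti-pattern detection, metric calculation, and, at the formal end, mathematical proofs about program behavior.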
Verification by Analysis - The analysis verification method applies to verification by
investigation, mathematical calculations, logical evaluation, and calculations using classical
textbook methods or accepted general use computer methods. Analysis includes sampling and
correlating measured data and observed test results with calculated expected values to establish
conformance with requirements.
Narrow scope
When defined more strictly, verification is equivalent only to static testing and is intended
to be applied to artifacts, while validation (of the whole software product) is equivalent to
dynamic testing and is intended to be applied to the running software product (not its artifacts,
except requirements). Note that requirements validation can be performed both statically and
dynamically.
Comparison with validation
Software verification is often confused with software validation. The difference between
verification and validation:
 Software verification asks the question, "Are we building the product right?"; that is,
does the software conform to its specifications? (As a house conforms to its blueprints.)
 Software validation asks the question, "Are we building the right product?"; that is, does
the software do what the user really requires? (As a house conforms to what the owner
needs and wants.)
SOFTWARE REQUIREMENT SPECIFICATION (SRS) VERIFICATION
1) The SRS should have descriptive details on all the requirements, existing and new, for the
release.
1. Should include details on functional, non-functional, performance and design
requirements.
2. Details of interfaces and GUI specifications should be present.
3. For each functional requirement it should clearly specify:
 What has to happen on performing a particular action on the system?
 Where should it happen?
 When should it happen?
 What should not happen?
4. If any of these is not specified, identify it and note it for discussion.
2) Proper version history for the various features should be mentioned.
3) All the new features should be elaborately explained in the SRS.
4) It should mention the operating environment (software and hardware) recommended for
proper functioning of the application.
1. Should include the OS and any additional (3rd-party) software or packages (with
versions) required for the application.
2. Recommended values of system memory, RAM, processor speed, server/desktop
usability, etc. should be mentioned.
5) Any constraints on the design, H/W or S/W requirements, standards compliance, etc. should
be mentioned in the SRS.
6) Any testability limitations should be clearly specified.
7) All assumptions on which the application's development is based should be listed.
8) Interface diagrams, call flows, or flow charts should be cross-checked against the
understanding obtained.
9) The SRS should reference all records, such as CDR parameter details, configuration file
parameter details, and other operational steps.
10) Ensure that all the requirements are properly reviewed and signed off by the respective
stakeholders or stakeholder representatives.
Source Code review
Code review (sometimes referred to as peer review) is a software quality assurance activity in
which one or several people check a program mainly by viewing and reading parts of its source
code, and they do so after implementation or as an interruption of implementation. At least one
of the persons must not be the code's author. The persons performing the checking, excluding the
author, are called "reviewers".
Although direct discovery of quality problems is often the main goal, code reviews are usually
performed to reach a combination of goals:
 Better code quality – improve internal code quality and maintainability (readability,
uniformity, understandability, ...)
 Finding defects – improve quality regarding external aspects, especially correctness, but
also find performance problems, security vulnerabilities, injected malware, ...
 Learning/Knowledge transfer – help in transferring knowledge about the codebase,
solution approaches, expectations regarding quality, etc; both to the reviewers as well as
to the author
 Increase sense of mutual responsibility – increase a sense of collective code ownership
and solidarity
 Finding better solutions – generate ideas for new and better solutions and ideas that
transcend the specific code at hand.
 Complying to QA guidelines – Code reviews are mandatory in some contexts, e.g., air
traffic software
The above-mentioned definition of code review delimits it against neighboring but separate
software quality assurance techniques: In static code analysis the main checking is performed by
an automated program, in self checks only the author checks the code, in testing the execution of
the code is an integral part, and pair programming is performed continuously during
implementation and not as a separate step.
Types of review processes
There are many variations of code review processes, some of which will be detailed below.
Formal inspection
The historically first code review process that was studied and described in detail was called
"Inspection" by its inventor Michael Fagan. This Fagan inspection is a formal process which
involves a careful and detailed execution with multiple participants and multiple phases. Formal
code reviews are the traditional method of review, in which software developers attend a series
of meetings and review code line by line, usually using printed copies of the material. Formal
inspections are extremely thorough and have been proven effective at finding defects in the code
under review.
Regular change-based code review
In recent years, many teams in industry have introduced a more lightweight type of code review.
Its main characteristic is that the scope of each review is based on the changes to the codebase
performed in a ticket, user story, commit, or some other unit of work. Furthermore, there are
rules or conventions that embed the review task into the development process (e.g., "every ticket
has to be reviewed"), instead of explicitly planning each review. Such a review process is called
"regular, change-based code review". There are many variations of this basic process. A survey
among 240 development teams from 2017 found that 90% of the teams use a review process that
is based on changes (if they use reviews at all), and 60% use regular, change-based code review.
Also, most large software corporations, such as Microsoft, Google, and Facebook, follow a
change-based code review process.
Efficiency and effectiveness of reviews
Capers Jones' ongoing analysis of over 12,000 software development projects showed that the
latent defect discovery rate of formal inspection is in the 60-65% range. For informal inspection,
the figure is less than 50%. The latent defect discovery rate for most forms of testing is about
30%.[10][11] A code review case study published in the book Best Kept Secrets of Peer Code
Review found that lightweight reviews can uncover as many bugs as formal reviews while being
faster and more cost-effective, in contradiction to the study done by Capers Jones.
The types of defects detected in code reviews have also been studied. Empirical studies provided
evidence that up to 75% of code review defects affect software evolvability/maintainability
rather than functionality, making code reviews an excellent tool for software companies with
long product or system life cycles.
Guidelines
The effectiveness of code review was found to depend on the speed of reviewing. Code review
rates should be between 200 and 400 lines of code per hour. Inspecting and reviewing more than
a few hundred lines of code per hour for critical software (such as safety critical embedded
software) may be too fast to find errors.
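The guideline translates into simple planning arithmetic. The helper below is a sketch: the 200-400 LOC/hour band comes from the text, while the function name and default rate are assumptions for illustration.

```python
def review_hours(loc, rate_per_hour=300):
    """Estimate hours needed to review a change set at a sustainable
    rate, enforcing the recommended 200-400 LOC/hour band."""
    if not 200 <= rate_per_hour <= 400:
        raise ValueError("rate outside the recommended 200-400 LOC/hour")
    return loc / rate_per_hour

# A 1,500-line change set reviewed at 300 LOC/hour takes about 5 hours.
hours = review_hours(1500)
```

Planning at, say, 1,000 LOC/hour would fail the check above: that pace is the kind of rushing the guideline warns against for critical software.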
Supporting tools
Static code analysis software lightens the developer's task of reviewing large chunks of code
by systematically checking source code for known vulnerabilities and defect types. A 2012 study
by VDC Research reports that 17.6% of the embedded software engineers surveyed currently use
automated tools to support peer code review and 23.7% expect to use them within 2 years.
Compiled By:
Ms. Savita Mittal
Asstt. Professor, DCA
Software Testing(MCA 513) Unit 1 (Contd..)
Lecture 11 (9/10/2020)
Topic: User Documentation Verification
Documentation testing falls under the category of non-functional testing. Documentation testing
can start right from the beginning of the software development process. If taken up from the
beginning, defects in documentation can be fixed easily and with minimal expense. Poor
documentation is as likely to cause problems for our customers as poorly written code. Incorrect,
poorly written, or missing documentation irritates end users, who tend to quickly form an
opinion about the product. Hence the quality of the documentation is a clear reflection of the
quality of the product, as well as of the supplier who supplied it. The product can become a
success only if its documentation is adequately tested.
Software project audits
This article describes approaches to conducting internal or external audits of software development
projects. Audits may be conducted to help in recovery planning for a failing software project, or on a
regular basis as part of a failure prevention approach. Guidance for scheduling audits on a preventive
basis is presented. Recovery planning approaches are described.
An Ominous Sense of Disaster
You're developing a new software program and your project team or contract developer has slipped the
schedule. Should you be concerned? The answer is, probably not if this is the first slip. Software projects
are more difficult to manage than construction projects, and would you expect to completely go crazy if
your home remodeling project slipped a few weeks. If half of your software projects are coming in on
time and on budget or better, you're doing pretty well.
You should be more concerned after the second or third slip. This is a definite sign of a project
potentially in trouble. You might think that the trouble is the schedule delays, but this is merely the
surface manifestation of a deeper underlying problem. Schedule slips and cost overruns may increase
your costs and delay delivery, but in most cases they will not result in total project disaster.
Unfortunately, when you look beneath the surface of a project missing deadlines you will often find that
the underlying architecture and code itself is seriously, perhaps even fatally, flawed. There are two
possible reasons for this correlation:
1. The most frequent explanation is that the developers are in over their heads. They are attempting
to build a system whose complexity exceeds their experience or ability (or both), and the result is
a flawed architecture, incorrect object design, poor database design, inefficient code or data
access, and so on. Don't get me wrong: these individuals may have the best of intentions and be
competent in development in general, but for this particular complexity of application, they are
lost in the woods.
2. A much less frequent but not uncommon explanation is that the developers have the capability
to build the system, but the initial estimate of effort and time was so badly underscoped that
they do not have anywhere near enough time to do the job right. Managers may exert so much
pressure on the developers that they crumble and produce poor-quality architectures, designs,
and so on, simply trying to meet unrealistic milestones. This is one of the reasons development
shops should emphasize to their programmers that they must never sacrifice quality for
schedule, and that it is their job as professionals to stand up to any management pressure to do
otherwise. Schedule slips with high-quality code are always preferred over on-time performance
with poor quality (not that anyone likes schedule slips).
If you ignore the initial warning sign of multiple schedule slips, then you have laid a foundation for total
project failure and cancellation. This will show up when the system is finally delivered in one or more of
the following forms:
● The system has numerous defects and crashes or operates incorrectly to the extent where it is
not usable;
● The system is missing key functionality that is necessary for it to be deployed operationally;
● Seemingly minor enhancements to the system are very difficult and costly to implement and
often result in unexpected problems in other parts of the application; or
● The system performance is sufficiently slow that it is not feasible to deploy it operationally.
If you suspect that you may have a project going down this path, you're not alone. According to a
Standish Group study, 40% of software projects underway now are expected to fail, and 33% of all
projects are over budget and late.
Project Auditing Methodology
Damage Auditing
Suppose you suspected that one of your subsidiary corporations was in trouble financially, and that they
were intentionally or unintentionally hiding the magnitude of the problem within their accounting
department. Without hesitation you would call in outside experts (certified public accountants) to do an
audit and tell you where you really stood.
Similarly, if you suspect that one of your projects is in trouble, you need to immediately call in outside
experts to do an audit and tell you where you really stand. The audit team should be composed of very
senior managers and software engineers. Audits typically last between 3 days and 3 weeks, depending
on the size of the project.
The audit consists of a management audit and a technical audit. Typically, on smaller projects the
technical audit is the most critical and the focus of the audit, while on the larger projects (over $5M
USD) the management audit dominates.
The Technical Audit
The technical audit focuses on the design team, and to a lesser extent, the programmers doing the
actual implementation. It begins by looking at the overall system architecture and database design. The
question is not really whether these are right or wrong, but rather whether they are appropriate to the
nature of the application (usage, transaction volume, database size, planned evolution, and so on). Our
experience has been that if these two elements are correct, the project has a solid foundation and if
there are other problems, salvage is possible. On the other hand, if these two elements are incorrect
then the remainder of the system is likely to need a total rewrite.
Running a close third in importance is the design of the objects and business application servers. If these
are wrong, the system can often be made to work but maintenance will be difficult. A decision will need
to be made whether to fix and deploy the current system while immediately redesigning a follow-on
system, or to redesign at this point in time.
Once the design has been reviewed, the implementation must be examined. The process begins by using
automated tools to look at comment density (both in headers and embedded within functions) across
the application as a whole and by function or module. A similar analysis of McCabe's cyclomatic
complexity metric for each function is completed. Functions with high complexity are candidates for
simplification and are likely trouble points for defects. The code itself (including data access code such as SQL statements and
stored procedures) is then examined, either in its entirety or on a sampling basis. The audit team looks
for inefficient coding techniques, proper error and exception handling, duplicate code blocks (duplicate
code should be encapsulated in a function or object, not just cut and pasted), and other obvious
problems. Finally, the user interface is examined for usability and conformance with industry standards.
Of all items mentioned, the user interface is the easiest to fix if deficient.
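The comment-density measurement described above can be sketched crudely in a few lines of Python. This is a deliberately simplified version (real audit tools also handle block comments, strings containing '#', and multiple languages):

```python
def comment_density(source):
    """Fraction of non-blank lines that are comment lines
    (Python-style '#'); a crude sketch of the audit metric."""
    lines = [l.strip() for l in source.splitlines() if l.strip()]
    if not lines:
        return 0.0
    comment_lines = sum(1 for l in lines if l.startswith("#"))
    return comment_lines / len(lines)

# Sample code under audit (invented for illustration).
sample = """
# Compute shipping cost.
def cost(weight):
    # flat rate plus per-kg charge
    return 5 + 2 * weight
"""
density = comment_density(sample)  # 2 comment lines out of 4 non-blank
```

An auditor would compute this per function or module and flag outliers, just as unusually high McCabe complexity flags candidates for simplification.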
References
1. Systems and Software Engineering – Vocabulary. ISO/IEC/IEEE 24765:2010(E).
2010-12-01. pp. 1–418. doi:10.1109/IEEESTD.2010.5733835. ISBN 978-0-7381-6205-8.
2. Kaner, Cem (May 2003). "What Is a Good Test Case?" (PDF). STAR East: 2.
3. "Writing Test Rules to Verify Stakeholder Requirements". StickyMinds.
4. Beizer, Boris (May 22, 1995). Black Box Testing. New York: Wiley. p. 3. ISBN 9780471120940.
5. "An Introduction to Scenario Testing" (PDF). Cem Kaner. Retrieved 2009-05-07.
6. Crispin, Lisa; Gregory, Janet (2009). Agile Testing: A Practical Guide for Testers and Agile Teams.
Addison-Wesley. pp. 192–5. ISBN 978-81-317-3068-3.
7. ISO/IEC/IEEE 29119-4:2019, "Part 4: Test techniques".
https://www.softwaretestingstandard.org/part3.php
8. Liu, Juan (2014). "Studies of the Software Test Processes Based on GUI". 2014
International Conference on Computer, Network: 113–121. doi:10.1109/CSCI.2014.104.
ISBN 9781605951676. S2CID 15204091. Retrieved 2019-10-22.
9. Kaner, Cem; Falk, Jack; Nguyen, Hung Q. (1993). Testing Computer Software (2nd ed.). Boston:
Thomson Computer Press. pp. 123–4. ISBN 1-85032-847-1.
10. Hambling, Brian; van Goethem, Pauline (2013). User Acceptance Testing: A Step-by-Step Guide.
BCS Learning & Development Limited. ISBN 9781780171678.
11. Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for
Managing Hardware and Software Testing. Hoboken, NJ: Wiley. ISBN 978-0-470-40415-7.
12. Cimperman, Rob (2006). UAT Defined: A Guide to Practical User Acceptance Testing. Pearson
Education. Chapter 2. ISBN 9780132702621.
Software Testing(MCA 513) Unit 1 (Contd..)
Lecture 12(14/10/2020)
The Management Audit
The management audit has three steps: gather metric-oriented input data; prepare project baselines
using industry-standard approaches; and compare actual or projected values with the resulting baseline.
The project baseline will include things like total effort and schedule, deliverables (including page
counts), labor loading curves over time by skill set, development team skills and experience,
maintenance projections, defect projections by category, and so on. By comparing historic values for
staffing and other metric values with baseline values, deviations can be identified and analyzed.
Similarly, forward looking project plans can be compared to the baseline values and deviations
examined. Examples of the types of problems that pop out are shown in the following table:
Metric                                               | Industry Std. | Audit Results
Software Design Description page count               | 2,110         | 493
Software Test Description page count                 | 873           | 42
Software testers (person months)                     | 72            | 14
Software integration and test time (calendar months) | 4.2           | 0.75 (planned)
Sample management audit problem areas
Disaster Prevention Audits
Through periodic audits throughout the project lifecycle, problems can be identified early and corrective
action taken in a timely fashion. These preventative audits have significantly increased the project
success rates on large projects in the State of California, and are now a requirement for all large
projects. At each audit the team will look at work performed to date, and plans for future work, to
identify problems (if any) with each area. Preventative audits are normally accomplished as follows:
Milestone                    | Scope of Audit
Project Initiation           | Project baseline plans
Software Requirements Review | Requirements, architecture, plans
Software Design Review       | Scope creep, design, architecture, database, interfaces, test documentation and approach, coding guidelines, plans
Completion of coding         | Code implementation (complexity, adherence to guidelines, encapsulation, algorithm order of magnitude), plans, user interface
Delivery                     | Maintainability, conformance to requirements, usability
In addition, project audits are normally conducted at a minimum of every six months, so if one stage
extends longer than six months, a progress audit is conducted during the middle of that phase.
Three Actual Examples
Let me describe three actual project audit results by way of illustration. The names are hidden because
of nondisclosure agreements.
 Project Alpha was a large project (over $100M). We focused on the management audit and
identified inadequate staffing levels overall, incorrectly trained staff, and incorrect labor mixes.
Our technical audit indicated abnormally high defect levels and major units needing rewrite. The
project was subsequently cancelled.
 Project Beta was a large project (over $150M). Our technical and management audit indicated
that the schedule slips were reasonable, the quality was high, and the team was doing a good
job of recovering from an initial underestimate of the scope. We also identified that the planned
maintenance effort was significantly underestimated. The project continued with a new
schedule, was successful, and an updated maintenance budget was submitted.
 Project Charlie was a small to moderate project (approximately $1.5M). Our technical and
management audit indicated that the current project team and approach were failing, but we
developed a recovery plan, which was subsequently implemented. Recovery planning is
addressed in the next section.
Saving Your Job and Sanity – Recovery Planning
OK, suppose that the project audit determines that your project is indeed in a disaster situation. What
are your options? If you continue on your present course, our experience indicates that in 100% of the
cases the project will ultimately fail. Alternatively, you can cancel the project immediately and cut your
losses. For non-mission critical systems that do not have a large return (in terms of reducing costs or
increasing revenues) this is often the best option. In many cases, however, failure is not an option. In
those cases, recovery planning is the next phase.
Recovery planning begins with software triage. Triage is a military term used by medical personnel
following a major battle. The wounded are separated into three categories: Those that will get better on
their own; those that will die no matter what is done; and those where medical attention has a
significant probability of helping the person to live. The doctors then focus all of their attention on those
where their assistance will make a difference. The initial step in recovery planning is to conduct a triage
on the existing software project. What is usable as is? What can be economically fixed and used? What
should be discarded? During this step, the core system functionality is also identified and all extraneous
features that can be deleted or delayed are called out.
The recovery planning phase then involves development of a new, zero based project baseline plan.
Existing plans, schedules, and so on are thrown out and the project is planned from the current situation
to an achievable completion. This requires that formal techniques be used for estimating effort,
schedule, the time-cost trade off, and so on. Delivered functionality and other project estimating
parameters are adjusted until an acceptable completion date is achieved or it becomes obvious that no
acceptable completion date is possible. We use Cost Xpert to complete the project estimates. We've
never had a project that was formally estimated and planned with Cost Xpert subsequently fail.
Conclusions
Outside project audits are critical for any project that is suspected of being in trouble, and offer a high
return on investment (ROI) as a risk-reduction technique for all projects of any significant
size. Auditors should be independent of the project team and have no vested interest in the project
succeeding or failing. Auditors should not be outside development shops, who may have a vested
interest in disparaging the current project team to place their own staff on the project. Audits look at
both technical and management issues, identify potential problems, and make recommendations. If a
project is found to be in a failure state, then recovery planning is undertaken to try to salvage the
project, if possible.
References
 IEEE: SWEBOK: Guide to the Software Engineering Body of Knowledge
 Carlo Ghezzi, Mehdi Jazayeri, Dino Mandrioli: Fundamentals of Software Engineering, Prentice
Hall, ISBN 0-13-099183-X
 Alan L. Breitler: A Verification Procedure for Software Derived from Artificial Neural Networks,
Journal of the International Test and Evaluation Association, Jan 2004, Vol 25, No 4.
 Vijay D'Silva, Daniel Kroening, Georg Weissenbacher: A Survey of Automated Techniques for
Formal Software Verification. IEEE Trans. on CAD of Integrated Circuits and Systems 27(7): 1165-
1178 (2008)
Software Testing(MCA 513)
MCA V Sem
Unit-I: Introduction
Lecture 13
Topic:
Boundary Value Analysis & Equivalence Partitioning
Software Testing is imperative for a bug-free application; this can be done
manually or even automated. Although automation testing reduces the testing
time, manual testing continues to be the most popular method for validating the
functionality of software applications. Here, we are explaining the most important
manual software testing techniques.
● Black Box Testing Technique
● Boundary Value Analysis (BVA)
● Equivalence Class Partitioning
What are Software Testing Techniques?
Software Testing Techniques are basically certain procedures which help every
software development project improve its overall quality and effectiveness. It
helps in designing better test cases, which are a set of conditions or variables
under which a tester will determine whether a system under test satisfies
requirements or works correctly. Different testing techniques are implemented as
a part of the testing process to improve the effectiveness of the tests.
Black Box Test Design Technique
Black Box Test Design is defined as a testing technique in which functionality of
the Application Under Test (AUT) is tested without looking at the internal code
structure, implementation details and knowledge of internal paths of the software.
This type of testing is based entirely on software requirements and specifications.
In Black Box Testing, we just focus on the inputs and outputs of the software system
without bothering about the inner workings of the software. By using this
technique, we can save a lot of testing time and get good test case coverage.
Test techniques are generally categorized into five:
1. Boundary Value Analysis (BVA)
2. Equivalence Class Partitioning
3. Decision Table based testing.
4. State Transition
5. Error Guessing
Boundary Value Analysis (BVA):
BVA is another Black Box Test Design Technique, which is used to find the errors
at boundaries of input domain (tests the behavior of a program at the input
boundaries) rather than finding those errors in the centre of input. So, the basic
idea in boundary value testing is to select input variable values at their:
minimum, just above the minimum, just below the minimum, a nominal value,
just below the maximum, maximum, and just above the maximum. That is, each
range has two boundaries: the lower boundary (the start of the range) and the
upper boundary (the end of the range), which mark the beginning and end of
each valid partition. We should design test cases that exercise the program
functionality at the boundaries, and with values just inside and outside them.
Boundary value analysis is also a part of stress and negative testing.
Suppose the input is a range of values between A and B; then design test cases for
A-1, A, A+1 and B-1, B, B+1.
Example:
Why go with Boundary Value Analysis?
Consider an example where a developer writes code for an amount text field
which will accept and transfer values only from 100 to 5000. The test engineer
checks it by entering 99 into the amount text field and then clicks on the transfer
button. It will show an error message, because 99 is an invalid value: the
boundary values are set as 100 and 5000, and since 99 is less than 100, the text
field will not transfer the amount.
The valid and invalid test cases are listed below.
Valid Test Cases
1. Enter the value 100 which is min value.
2. Enter the value 101 which is min+1 value.
3. Enter the value 4999 which is max-1 value.
4. Enter the value 5000 which is max value.
Invalid Test Cases
1. Enter the value 99 which is min-1 value.
2. Enter the value 5001 which is max+1 value
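The boundary values above can be generated mechanically. The sketch below is a minimal illustration; the helper names `boundary_values` and `is_valid_amount` are hypothetical stand-ins for the amount field's validation logic, not a real API:

```python
def boundary_values(minimum, maximum):
    """Return the six BVA test inputs: min-1, min, min+1, max-1, max, max+1."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

def is_valid_amount(value, minimum=100, maximum=5000):
    """Hypothetical validation rule for the amount text field (100-5000)."""
    return minimum <= value <= maximum

# Exercise the field at and around both boundaries.
for v in boundary_values(100, 5000):
    print(v, "accepted" if is_valid_amount(v) else "rejected")
```

Running the loop shows 99 and 5001 rejected while 100, 101, 4999, and 5000 are accepted, matching the valid and invalid test cases listed above.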
Equivalence Partitioning:
Equivalence partitioning is also known as “Equivalence Class Partitioning”. In this
method, the input domain data is divided into different equivalence data classes –
which are generally termed as ‘Valid’ and ‘Invalid’. The inputs to the software or
system are divided into groups that are expected to exhibit similar behavior.
Thus, it reduces the number of test cases to a finite list of testable test cases
covering maximum possibilities.
Example: Suppose the application you are testing accepts input in the character
limit of 1 – 100 only. Here, there would be three partitions: one valid partition and
two invalid partitions.
The valid partition: Between 1 & 100 characters.
The expectation is that the text field would handle all inputs with 1-100 characters,
the same way.
The first invalid partition: 0 characters.
When no characters are entered, we’d expect the text field to reject the value.
The second invalid partition: ≥ 101
We’d expect the text field to reject all values greater than or equal to 101 characters
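The three partitions can be sketched as a small classifier. The function name `partition_of` and the returned labels below are illustrative assumptions, not part of the application under test:

```python
def partition_of(text):
    """Classify an input string into one of the three equivalence
    partitions for the 1-100 character limit described above."""
    n = len(text)
    if n == 0:
        return "invalid: 0 characters"
    if n <= 100:
        return "valid: 1-100 characters"
    return "invalid: 101 or more characters"

# One representative input per partition is enough to cover all three classes.
for sample in ["", "a" * 50, "a" * 101]:
    print(len(sample), "->", partition_of(sample))
```

Because every input inside a partition is expected to behave the same way, testing one representative per partition gives the coverage described above with only three test cases.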
EQUIVALENCE PARTITIONING has been categorized into two parts:
1. Pressman Rule.
2. Practice Method.
1.Pressman Rule:
Rule 1: If input is a range of values, then design test cases for one valid and two
invalid values.
Rule 2: If input is a set of values, then design test cases for all valid value sets and
two invalid values.
For example:
Consider any online shopping website, where every product should have a
specific product ID and name. Users can search either by using name of the
product or by the product ID. Here, you can consider a set of products with product
IDs and you want to check for Laptops (valid value).
Rule 3: If input is Boolean, then design test cases for both true and false values.
Consider a sample web page which consists of first name, last name, and email
text fields with radio buttons for gender which use Boolean inputs.
If the user clicks on any of the radio buttons, the corresponding value should be
set as the input. If the user clicks on a different option, the value of input needs to
be updated with the new one (and the previously selected option should be
deselected).
Here, the instance of a radio button option being clicked can be treated as TRUE
and the instance where none are clicked, as FALSE. Also, two radio buttons should
not get selected simultaneously; if so, then it is considered as a bug.
2.Practice Method:
If the input is a range of values, then divide the range into equivalent parts. Then
test all the valid values and ensure that two invalid values are also tested.
As a guideline:
If there is deviation within the range of values, then use the Practice Method.
If there is no deviation between the range of values, then use the Pressman Rule.
Summary
Boundary Value Analysis is better than Equivalence Partitioning as it considers
both positive and negative values along with the maximum and minimum values. So,
when compared with Equivalence Partitioning, Boundary Value Analysis proves
to be a better choice for assuring quality.
● Software testing Techniques allow you to design better test cases. There are
five primarily used techniques.
● Boundary value analysis is testing at the boundaries between partitions.
● Equivalence Class Partitioning allows you to divide the set of test conditions into
partitions whose members should be treated the same.
References:
Celestial Systems Inc.
Compiled by:
Ms.Savita Mittal
Lecture 14(17/10/2020)
Decision Table Based Testing
Software Engineering | Decision Table
A decision table is a brief, visual representation for specifying which actions to perform depending on
given conditions. The information represented in decision tables can also be represented as decision
trees or in a programming language using if-then-else and switch-case statements.
A decision table is a good way to deal with different combinations of inputs and their corresponding
outputs. It is also called a cause-effect table, because a related logical diagramming technique called
cause-effect graphing is often used to derive the decision table.
Importance of Decision Table:
1. Decision tables are very helpful as a test design technique.
2. They help testers explore the effects of combinations of different inputs and other software states
that must correctly implement business rules.
3. They provide a regular way of stating complex business rules, which is helpful for developers as
well as for testers.
4. They assist the development process and help the developer do a better job, since testing every
combination might be impractical.
5. A decision table is an outstanding technique used in both testing and requirements
management.
6. It is a structured exercise for preparing requirements when dealing with complex business rules.
7. It is also used to model complicated logic.
Decision Table in test designing:
Blank Decision Table
CONDITIONS  | STEP 1 | STEP 2 | STEP 3 | STEP 4
Condition 1 |        |        |        |
Condition 2 |        |        |        |
Condition 3 |        |        |        |
Condition 4 |        |        |        |
Decision Table: Combinations
CONDITIONS  | STEP 1 | STEP 2 | STEP 3 | STEP 4
Condition 1 | Y      | Y      | N      | N
Condition 2 | Y      | N      | Y      | N
Condition 3 | Y      | N      | N      | Y
Condition 4 | N      | Y      | Y      | N
Advantage of Decision Table:
1. Any complex business flow can easily be converted into test scenarios and test cases using this
technique.
2. Decision tables work iteratively: the table created in the first iteration is used as the input for the
next. Iteration is done only if the initial table is not satisfactory.
3. Simple to understand, and everyone can use this method to design test scenarios and test cases.
4. It provides complete coverage of test cases, which helps reduce the rework of writing test
scenarios and test cases.
5. These tables guarantee that we consider every possible combination of condition values. This is
known as the completeness property.
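The combinations table above can be represented directly as data: each column (rule) maps a tuple of condition outcomes to an action. The sketch below is a minimal illustration in which the conditions and action strings are hypothetical placeholders:

```python
# Each key is one column of the combinations table, read top to bottom:
# (Condition 1, Condition 2, Condition 3, Condition 4) -> action.
RULES = {
    (True,  True,  True,  False): "action for rule 1",   # Y Y Y N
    (True,  False, False, True):  "action for rule 2",   # Y N N Y
    (False, True,  False, True):  "action for rule 3",   # N Y N Y
    (False, False, True,  False): "action for rule 4",   # N N Y N
}

def decide(c1, c2, c3, c4):
    """Look up the action for one combination of condition outcomes."""
    return RULES.get((c1, c2, c3, c4), "no matching rule")

print(decide(True, True, True, False))   # prints "action for rule 1"
```

Keeping the table as data rather than nested if-statements makes the completeness property easy to check: a test can iterate over all 2^4 combinations and flag any that have no matching rule.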
Lecture 15
26/10/2020
Cause and Effect Graph in Black box Testing
Cause-effect graph comes under the black box testing technique which underlines the
relationship between a given result and all the factors affecting the result. It is used to
write dynamic test cases.
The dynamic test cases are used when code works dynamically based on user input. For
example, while using email account, on entering valid email, the system accepts it but,
when you enter invalid email, it throws an error message. In this technique, the input
conditions are assigned with causes and the result of these input conditions with effects.
Cause-Effect graph technique is based on a collection of requirements and used to
determine minimum possible test cases which can cover a maximum test area of the
software.
The main advantage of cause-effect graph testing is, it reduces the time of test execution
and cost.
This technique aims to reduce the number of test cases but still covers all necessary test
cases with maximum coverage to achieve the desired application quality.
Cause-Effect graph technique converts the requirements specification into a logical
relationship between the input and output conditions by using logical operators like AND,
OR and NOT.
Notations used in the Cause-Effect Graph
AND - E1 is an effect and C1 and C2 are the causes. If both C1 and C2 are true, then
effect E1 will be true.
OR - If any cause from C1 and C2 is true, then effect E1 will be true.
NOT - If cause C1 is false, then effect E1 will be true.
Mutually Exclusive - When only one cause is true.
Let's try to understand this technique with some examples:
Situation:
The character in column 1 should be either A or B and in the column 2 should be a digit.
If both columns contain appropriate values then update is made. If the input of column 1
is incorrect, i.e. neither A nor B, then message X will be displayed. If the input in column
2 is incorrect, i.e. input is not a digit, then message Y will be displayed.
o A file must be updated, if the character in the first column is either "A" or "B" and
in the second column it should be a digit.
o If the value in the first column is incorrect (the character is neither A nor B), then
message X will be displayed.
o If the value in the second column is incorrect (the character is not a digit), then
message Y will be displayed.
Now, we are going to make a Cause-Effect graph for the above situation:
Causes are:
o C1 - Character in column 1 is A
o C2 - Character in column 1 is B
o C3 - Character in column 2 is a digit
Effects:
o E1 - Update made (C1 OR C2) AND C3
o E2 - Displays message X (NOT C1 AND NOT C2)
o E3 - Displays message Y (NOT C3)
Where AND, OR, NOT are the logical gates.
Effect E1 - Update made - The logic for the existence of effect E1 is "(C1 OR C2) AND
C3". For C1 OR C2, at least one of C1 and C2 should be true. For the AND C3 part
(the character in column 2 should be a digit), C3 must be true. In other words, for
effect E1 (update made) to exist, at least one of C1 and C2 must be true, and C3 must
also be true. We can see in the graph that causes C1 and C2 are connected through OR
logic and effect E1 is connected with AND logic.
Effect E2 - Displays message X - The logic for the existence of effect E2 is "NOT C1
AND NOT C2", which means both C1 (the character in column 1 is A) and C2
(the character in column 1 is B) should be false. In other words, for effect E2 to exist,
the character in column 1 should be neither A nor B. We can see in the graph that
C1 OR C2 is connected through NOT logic with effect E2.
Effect E3 - Displays message Y - The logic for the existence of effect E3 is "NOT C3",
which means cause C3 (the character in column 2 is a digit) should be false. In other
words, for effect E3 to exist, the character in column 2 should not be a digit. We can
see in the graph that C3 is connected through NOT logic with effect E3.
So, this is the cause-effect graph for the given situation. A tester needs to convert causes
and effects into logical statements and then design the cause-effect graph. If the function
gives the correct output (effect) for each input (cause), it is considered defect-free; if
not, it is sent to the development team for correction.
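The worked example can be encoded directly as boolean expressions. The sketch below is illustrative; the function `effects` and its returned labels are assumptions for demonstration, not part of any standard testing library:

```python
def effects(char1, char2):
    """Evaluate the three effects for one (column 1, column 2) input pair."""
    c1 = char1 == "A"        # C1: character in column 1 is A
    c2 = char1 == "B"        # C2: character in column 1 is B
    c3 = char2.isdigit()     # C3: character in column 2 is a digit
    return {
        "E1 update made": (c1 or c2) and c3,   # (C1 OR C2) AND C3
        "E2 message X":   not c1 and not c2,   # NOT C1 AND NOT C2
        "E3 message Y":   not c3,              # NOT C3
    }

print(effects("A", "5"))   # only E1 is True
print(effects("C", "x"))   # E2 and E3 are True
```

A tester can feed each cause combination through such a function and compare the computed effects against the application's actual behaviour.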
Conclusion
Summary of the steps:
o Draw the circles for effects and Causes.
o Start from effect and then pick up what is the cause of this effect.
o Draw mutually exclusive causes (exclusive causes which are directly connected
via one effect and one cause) at last.
o Use logic gates to draw dynamic test cases.
Lecture 16
Date: 28/10/2020
Control Flow Software Testing & Cyclomatic Complexity
Control flow testing is a type of software testing that uses program’s control flow as a model. Control
flow testing is a structural testing strategy. This testing technique comes under white box testing. For the
type of control flow testing, all the structure, design, code and implementation of the software should be
known to the testing team.
This type of testing method is often used by developers to test their own code and implementation, as
the design, code, and implementation are best known to them. It is applied with the intention of testing
the logic of the code so that the user requirements are fulfilled. Its main application is to small
programs and to segments of larger programs.
Control Flow Testing Process:
Following are the steps involved into the process of control flow testing:
 Control Flow Graph Creation:
From the given source code a control flow graph is created either manually or by using the
software.
 Coverage Target:
A coverage target is defined over the control flow graph that includes nodes, edges, paths, branches
etc.
 Test Case Creation:
Test cases are created using control flow graphs to cover the defined coverage target.
 Test Case Execution:
After the creation of test cases over coverage target, further test cases are executed.
 Analysis:
Analyze the result and find out whether the program is error free or has some defects.
Control Flow Graph:
Control Flow Graph is a graphical representation of control flow or computation that is done during the
execution of the program. Control flow graphs are mostly used in static analysis as well as compiler
applications, as they can accurately represent the flow inside of a program unit. Control flow graph was
originally developed by Frances E. Allen.
Cyclomatic Complexity:
Cyclomatic complexity is a quantitative measure of the number of linearly independent paths through a
program. It is a software metric used to describe the complexity of a program. It is computed using the
control flow graph of the program:
M = E - N + 2P
where E is the number of edges, N is the number of nodes, and P is the number of connected
components (P = 1 for a single program or subroutine).
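As a minimal sketch, the formula can be computed directly from an edge list; the helper name `cyclomatic_complexity` is illustrative:

```python
def cyclomatic_complexity(edges, p=1):
    """M = E - N + 2P, with nodes inferred from the edge list."""
    nodes = {v for edge in edges for v in edge}
    return len(edges) - len(nodes) + 2 * p

# Control flow graph of a single if-else: 1->2, 1->3, 2->4, 3->4.
print(cyclomatic_complexity([(1, 2), (1, 3), (2, 4), (3, 4)]))  # 4 - 4 + 2 = 2
```

An if-else yields M = 2, matching the intuition that one decision point produces two linearly independent paths.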
Advantages of Control flow testing:
 It detects almost half of the defects that are determined during the unit testing.
 It also determines almost one-third of the defects of the whole program.
 It can be performed manually or automated as the control flow graph that is used can be made by
hand or by using software also.
Disadvantages of Control flow testing:
 It is difficult to find missing paths if the program and the model were created by the same person.
 It is unlikely to find spurious (unneeded) features.
Lecture 17
Path Testing
Path Testing is a structural testing method based on the source code or algorithm and
NOT based on the specifications. It can be applied at different levels of granularity.
Path Testing Assumptions:
 The Specifications are Accurate
 The Data is defined and accessed properly
 There are no defects that exist in the system other than those that affect control
flow
Path Testing Techniques:
 Control Flow Graph (CFG) - The Program is converted into Flow graphs by
representing the code into nodes, regions and edges.
 Decision to Decision path (D-D) - The CFG can be broken into various
Decision to Decision paths and then collapsed into individual nodes.
 Independent (basis) paths - Independent path is a path through a DD-path
graph which cannot be reproduced from other paths by other methods.
Steps to Calculate the independent paths
Step 1 : Draw the Flow Graph of the Function/Program under consideration as shown
below:
Step 2 : Determine the independent paths.
Path 1: 1 - 2 - 5 - 7
Path 2: 1 - 2 - 5 - 6 - 7
Path 3: 1 - 2 - 3 - 2 - 5 - 6 - 7
Path 4: 1 - 2 - 3 - 4 - 2 - 5 - 6 - 7
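The four independent paths above can be cross-checked against the cyclomatic complexity formula M = E - N + 2P: collecting the distinct edges the paths traverse and counting the nodes should yield M = 4, matching the number of basis paths. A minimal sketch:

```python
# The four independent paths listed in Step 2.
paths = [
    [1, 2, 5, 7],
    [1, 2, 5, 6, 7],
    [1, 2, 3, 2, 5, 6, 7],
    [1, 2, 3, 4, 2, 5, 6, 7],
]

# Collect every distinct edge traversed by any path.
edges = set()
for path in paths:
    edges.update(zip(path, path[1:]))

nodes = {v for e in edges for v in e}
m = len(edges) - len(nodes) + 2        # cyclomatic complexity, P = 1
print(f"edges={len(edges)}, nodes={len(nodes)}, M={m}")
```

With 9 distinct edges and 7 nodes, M = 9 - 7 + 2 = 4, confirming that the basis above is complete for this flow graph.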
Lecture 18
Unit 2
Topic: Generating Graphs From Program
Input description: Parameters describing the desired graph, such as the number of vertices n,
the number of edges m, or the edge probability p.
Problem description: Generate (1) all or (2) a random or (3) the next graph satisfying the
parameters.
Discussion: Graph generation typically arises in constructing test data for programs. Perhaps
you have two different programs that solve the same problem, and you want to see which one is
faster or make sure that they always give the same answer. Another application is experimental
graph theory, verifying whether a particular property is true for all graphs or how often it is true.
It is much easier to conjecture the four-color theorem once you have demonstrated 4-colorings
for all planar graphs on 15 vertices.
A different application of graph generation arises in network design. Suppose you need to
design a network linking ten machines using as few cables as possible, such that the network can
survive up to two vertex failures. One approach is to test all the networks with a given number of
edges until you find one that will work. For larger graphs, more heuristic approaches, like
simulated annealing, will likely be necessary.
Many factors complicate the problem of generating graphs. First, make sure you know what you
want to generate:
 Do I want labeled or unlabeled graphs? - The issue here is whether the names of the
vertices matter in deciding whether two graphs are the same. In generating labeled
graphs, we seek to construct all possible labelings of all possible graph topologies. In
generating unlabeled graphs, we seek only one representative for each topology and
ignore labelings. For example, there are only two connected unlabeled graphs on three
vertices - a triangle and a simple path. However, there are four connected labeled
graphs on three vertices - one triangle and three 3-vertex paths, each distinguished by
their central vertex. In general, labeled graphs are much easier to generate. However,
there are so many more of them that you quickly get swamped with isomorphic copies
of the same few graphs.
 What do I mean by random? - There are two primary models of random graphs, both
of which generate graphs according to different probability distributions. The first model
is parameterized by a given edge probability p. Typically, p=0.5, although smaller values
can be used to construct sparser random graphs. In this model a coin is flipped for each
pair of vertices x and y to decide whether to add an edge (x,y). All labeled graphs will be
generated with equal probability when p=1/2.
The second model is parameterized by the desired number of edges m. It selects m
distinct edges uniformly at random. One way to do this is by drawing random (x,y)-pairs
and creating an edge if that pair is not already in the graph. An alternative approach to
computing the same things constructs the set of possible edges and selects a random m-
subset of them, as discussed in Section .
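Both random graph models can be sketched in a few lines of code; the function names `gnp` and `gnm` are illustrative conventions for the two models, not a real library API:

```python
import random

def gnp(n, p, rng=random):
    """Edge probability model: flip a coin with probability p
    for each pair of vertices."""
    return [(x, y) for x in range(n) for y in range(x + 1, n)
            if rng.random() < p]

def gnm(n, m, rng=random):
    """Fixed edge count model: select m distinct edges uniformly
    at random from the set of all possible edges."""
    all_edges = [(x, y) for x in range(n) for y in range(x + 1, n)]
    return rng.sample(all_edges, m)

random.seed(42)                # reproducible, in the spirit of the
print(len(gnm(10, 15)))        # machine-independent generators below
```

Note that `gnp` only produces m edges on average (n·(n-1)/2 · p of them), while `gnm` produces exactly m every time.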
Which of these options best models your application? Probably none of them. Random graphs,
by definition, have very little structure. In most applications, graphs are used to model
relationships, which are often highly structured. Experiments conducted on random graphs,
although interesting and easy to perform, often fail to capture what you are looking for.
An alternative to random graphs is to use "organic" graphs, graphs that reflect the relationships
among real-world objects. The Stanford GraphBase, discussed below, is an outstanding source
of organic graphs. Further, there are many raw sources of relationships electronically available
via the Internet that can be turned into interesting organic graphs with a little programming and
imagination. Consider the graph defined by a set of WWW pages, with any hyperlink between
two pages defining an edge. Or what about the graph implicit in railroad, subway, or airline
networks, with vertices being stations and edges between two stations connected by direct
service? As a final example, every large computer program defines a call graph, where the
vertices represent subroutines, and there is an edge (x,y) if x calls y.
Two special classes of graphs have generation algorithms that have proven particularly useful in
practice:
 Trees - Prüfer codes provide a simple way to rank and unrank labeled trees and thus
solve all the standard generation problems discussed in Section . There are exactly
n^(n-2) labeled trees on n vertices, and exactly that many strings of length n-2 on the
alphabet {1, ..., n}.
The key to Prüfer's bijection is the observation that every tree has at least two vertices of
degree 1. Thus in any labeled tree, the vertex v incident on the leaf with the lowest label is
well-defined. We take v to be S1, the first character in the code. We then delete the
associated leaf and repeat the procedure until only two vertices are left. This defines a
unique code S for any given labeled tree that can be used to rank the tree. To go from
code to tree, observe that the degree of vertex v in the tree is one more than the number of
times v occurs in S. The lowest-labeled leaf will be the smallest integer missing from S,
which when paired with S1 determines the first edge of the tree. The entire tree follows by
induction.
Algorithms for efficiently generating unlabeled rooted trees are presented in the
implementation section below.
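The code-to-tree direction of Prüfer's bijection can be sketched as follows, using 0-based vertex labels for convenience (the function name is illustrative):

```python
def prufer_to_tree(code):
    """Decode a Prüfer code of length n-2 over {0, ..., n-1}
    into the n-1 edges of the corresponding labeled tree."""
    n = len(code) + 2
    degree = [1] * n
    for v in code:
        degree[v] += 1        # degree is one more than the count in the code
    edges = []
    for v in code:
        # the lowest-labeled leaf is the smallest label with degree 1
        leaf = min(u for u in range(n) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    # two vertices of degree 1 remain; they form the last edge
    last = [u for u in range(n) if degree[u] == 1]
    edges.append(tuple(last))
    return edges

print(prufer_to_tree([3, 3, 3]))   # a star centered at vertex 3
```

This O(n²) sketch follows the induction in the text directly; with a priority queue the decoding runs in O(n log n).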
• Fixed degree sequence graphs - The degree sequence of a graph G is an integer
partition p = (p_1, ..., p_n), where p_i is the degree of the ith highest-degree vertex of G.
Since each edge contributes to the degree of two vertices, p is a partition of 2m, where
m is the number of edges in G.
Not all partitions correspond to degree sequences of graphs. However, there is a recursive
construction that builds a graph with a given degree sequence whenever one exists. If a
partition is realizable, the highest-degree vertex can be connected to the next p_1 highest-
degree vertices in G, i.e. the vertices corresponding to parts p_2, ..., p_{p_1 + 1}. Deleting p_1 and
decrementing p_2, ..., p_{p_1 + 1} yields a smaller partition, which we recur on. If we terminate
without ever creating negative numbers, the partition was realizable. Since we always
connect the highest-degree vertex to other high-degree vertices, it is important to reorder
the parts of the partition by size after each iteration.
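This construction (essentially the Havel-Hakimi algorithm) can be sketched in Python. The function name and the 0..n-1 vertex labeling are our own choices:

```python
def realize_degree_sequence(degrees):
    """Return an edge list of a simple graph realizing the degree sequence,
    or None if the sequence is not realizable."""
    # pair each part with a vertex label so we can report edges
    parts = sorted(((d, v) for v, d in enumerate(degrees)), reverse=True)
    edges = []
    while parts and parts[0][0] > 0:
        d, v = parts.pop(0)        # highest-degree vertex
        if d > len(parts):
            return None            # not enough remaining vertices
        # connect v to the d next-highest-degree vertices
        for i in range(d):
            di, u = parts[i]
            if di == 0:
                return None        # would create a negative part
            edges.append((v, u))
            parts[i] = (di - 1, u)
        parts.sort(reverse=True)   # reorder the parts by size each iteration
    return edges
```

Because v is connected to d distinct vertices and then removed, the result is always a simple graph; the sequence (3, 1, 1), for instance, is correctly rejected.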
Although this construction is deterministic, a semirandom collection of graphs realizing
this degree sequence can be generated from G using edge-flipping operations. Suppose
edges (x,y) and (w,z) are in G, but (x,w) and (y,z) are not. Exchanging these pairs of edges
creates a different (not necessarily connected) graph without changing the degrees of any
vertex.
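A single edge-flip step can be sketched as follows. This is an illustrative helper, not from any particular library; edges are stored as a set of frozensets:

```python
import random

def edge_flip(edges, rng=random):
    """Attempt one random degree-preserving edge swap in place.
    `edges` is a set of frozensets {x, y}. Returns True if a swap was made."""
    (x, y), (w, z) = (tuple(e) for e in rng.sample(list(edges), 2))
    if len({x, y, w, z}) < 4:
        return False               # the two edges share an endpoint; skip
    new1, new2 = frozenset((x, w)), frozenset((y, z))
    if new1 in edges or new2 in edges:
        return False               # swap would duplicate an existing edge
    # exchange (x,y),(w,z) for (x,w),(y,z): every degree is unchanged
    edges.discard(frozenset((x, y)))
    edges.discard(frozenset((w, z)))
    edges.update((new1, new2))
    return True
```

Repeating this step many times walks through a semirandom collection of graphs with the same degree sequence as the starting graph, as described above.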
Implementations: The Stanford GraphBase [Knu94] is perhaps most useful as an instance
generator for constructing a wide variety of graphs to serve as test data for other programs. It
incorporates graphs derived from interactions of characters in famous novels, Roget's Thesaurus,
the Mona Lisa, expander graphs, and the economy of the United States. It also contains routines
for generating binary trees, graph products, line graphs, and other operations on basic graphs.
Finally, because of its machine-independent random number generators, it provides a way to
construct random graphs such that they can be reconstructed elsewhere, thus making them
perfect for experimental comparisons of algorithms. See Section for additional information.
Combinatorica [Ski90] provides Mathematica generators for basic graphs such as stars, wheels,
complete graphs, random graphs and trees, and graphs with a given degree sequence. Further,
it includes operations to construct more interesting graphs from these, including join, product,
and line graph. Graffiti [Faj87], a collection of almost 200 graphs of graph-theoretic interest,
is available in Combinatorica format. See Section .
The graph isomorphism testing program nauty (see Section ), by Brendan D. McKay of the
Australian National University, has been used to generate catalogs of all nonisomorphic graphs
with up to 11 vertices. This extension to nauty, named makeg, can be obtained by anonymous
ftp from bellatrix.anu.edu.au (150.203.23.14) in the directory pub/nauty19.

Software testing lecture notes

  • 2. When customers use the software, they are bound to reveal some personal information. To prevent hackers from getting hold of this data, security testing is a must before the software is released. When an organization follows a proper testing process, it ensures a secure product, which in turn makes customers feel safe while using it. For instance, banking applications and e-commerce stores handle payment information; if the developers don't fix security-related bugs, the result can be massive financial loss.

3. Test Automation Detects Compatibility With Different Devices and Platforms
The days when customers worked exclusively on hefty desktop machines are gone. In the mobile-first age, testing a product's device compatibility is a must. For instance, suppose your organization developed a website. The tester must check whether the website runs at different device resolutions, and it should also run on different browsers. Another reason testing is gaining importance is the ever-increasing range of browsers: what works fine on Chrome may not run well on Safari or Internet Explorer. This gives rise to the need for cross-browser testing, which checks the compatibility of the application on different browsers.

References:
1. Zhen Ming (Jack) Jiang - EECS 4413, Software Testing, York University.
2. in4ce Education Solutions Pvt. Ltd.
3. Pham, H. (1999). Software Reliability. John Wiley & Sons, Inc. p. 567. ISBN 9813083840. "Software validation: the process of ensuring that the software is performing the right process. Software verification: the process of ensuring that the software is performing the process right." In short, Boehm expressed the difference between software verification and software validation as follows: Verification: "Are we building the product right?" Validation: "Are we building the right product?"
4. GeeksforGeeks

Compiled By: Ms. Savita Mittal, DCA
  • 3. Software Testing(MCA 513) MCA V Sem Unit-I: Introduction Topic: Principles Of Testing 1) Testing shows presence of defects: Testing can show the defects are present, but cannot prove that there are no defects. Even after testing the application or product thoroughly we cannot say that the product is 100% defect free. Testing always reduces the number of undiscovered defects remaining in the software but even if no defects are found, it is not a proof of correctness. 2) Exhaustive testing is impossible: Testing everything including all combinations of inputs and preconditions is not possible. So, instead of doing the exhaustive testing we can use risks and priorities to focus testing efforts. For example: In an application in one screen there are 15 input fields, each having 5 possible values, then to test all the valid combinations you would need 30 517 578 125 (515) tests. This is very unlikely that the project timescales would allow for this number of tests. So, accessing and managing risk is one of the most important activities and reason for testing in any project. 3) Early testing: In the software development life cycle testing activities should start as early as possible and should be focused on defined objectives. 4) Defect clustering: A small number of modules contains most of the defects discovered during pre-release testing or shows the most operational failures. 5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer be able to find any new bugs. To overcome this “Pesticide Paradox”, it is really very important to review the test cases regularly and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects. 6) Testing is context dependent: Testing is basically context dependent. Different kinds of sites are tested differently. 
For example, safety-critical software is tested differently from an e-commerce site.
7) Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user's needs and expectations, then finding and fixing defects does not help.
Software Testing Requirement: Software testing is very important because of the following reasons:
  • 4. 1. Software testing is required to point out the defects and errors that were made during the development phases.
• Example: Programmers may make a mistake during the implementation of the software. There could be many reasons for this, like lack of experience of the programmer, lack of knowledge of the programming language, insufficient experience in the domain, incorrect implementation of the algorithm due to complex logic, or simply human error.
• If the customer does not find the testing organization reliable or is not satisfied with the quality of the deliverable, they may switch to a competitor organization.
• Sometimes contracts may also include monetary penalties with respect to the timeline and quality of the product. In such cases, proper software testing may also prevent monetary losses.
2. It is essential since it makes sure that the customer finds the organization reliable and their satisfaction with the application is maintained. It is very important to ensure the quality of the product. A quality product delivered to the customers helps in gaining their confidence. As explained in the previous point, delivering a good quality product on time builds the customer's confidence in the team and the organization.
• 3. Testing is necessary in order to deliver a high-quality product or software application that requires lower maintenance cost and hence gives more accurate, consistent and reliable results.
a. A high-quality product typically has fewer defects and requires less maintenance effort, which in turn means reduced costs.
• 4. Testing is required for the effective performance of a software application or product.
• 5. It is important to ensure that the application does not result in any failures, because failures can be very expensive in the later stages of development.
a. 
Proper testing ensures that bugs and issues are detected early in the life cycle of the product or application.
• b. If defects related to requirements or design are detected late in the life cycle, it can be very expensive to fix them, since this might require redesign, re-implementation and retesting of the application.
• 6. It is required to stay in business.
a. Users are not inclined to use software that has bugs. They may not adopt software if they are not happy with the stability of the application.
• b. In the case of a product organization or startup which has only one product, poor software quality may result in lack of adoption of the product, and this may result in losses from which the business may not recover.
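The arithmetic behind Principle 2 above (exhaustive testing is impossible) is easy to verify. A minimal Python sketch using the 15-field example from the principles; the 1,000-tests-per-second rate is an assumption for illustration:

```python
# Principle 2: exhaustive testing is impossible.
# A screen with 15 input fields, each taking 5 possible values,
# yields 5**15 valid input combinations.
fields = 15
values_per_field = 5

total_combinations = values_per_field ** fields
print(total_combinations)  # 30517578125

# Even at an assumed 1,000 tests per second, running them all
# would take roughly a year of continuous execution.
seconds = total_combinations / 1000
print(round(seconds / (60 * 60 * 24)))  # about 353 days
```

This is why risk-based prioritization, rather than brute force, drives test selection.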
  • 6. Software Testing (MCA 513) MCA V Sem Lecture : 4 (29/9/2020) Unit-I: Introduction
Behavior and Correctness:
Correctness, from a software engineering perspective, can be defined as adherence to the specifications that determine how users can interact with the software and how the software should behave when it is used correctly. If the software behaves incorrectly, it might take a considerable amount of time to achieve the task, or sometimes it is impossible to achieve it.
Important rules: Below are some important rules for effective programming which are consequences of program correctness theory.
 Define the problem completely.
 Develop the algorithm and then the program logic.
 Reuse proved models as much as possible.
 Prove the correctness of algorithms during the design phase.
 Pay attention to the clarity and simplicity of the program.
 Verify each part of a program as soon as it is developed.
Testing and Debugging:
Testing: Testing is the process of verifying and validating that a software application is bug free, meets the technical requirements as guided by its design and development, and meets the user requirements effectively and efficiently, handling all the exceptional and boundary cases.
Debugging: Debugging is the process of fixing a bug in the software. It can be defined as identifying, analyzing and removing errors. This activity begins after the software fails to execute properly and concludes by solving the problem and successfully testing the software. It is considered an extremely complex and tedious task because errors need to be resolved at all stages of debugging.
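The testing/debugging distinction above can be illustrated with a small, hypothetical Python example: testing reveals the failure against a known expected result, and debugging locates and removes the fault.

```python
# Hypothetical example: testing finds the defect, debugging removes it.

def average_buggy(numbers):
    return sum(numbers) / (len(numbers) - 1)  # defect: wrong divisor

def average_fixed(numbers):
    return sum(numbers) / len(numbers)  # after debugging

# Testing: run the code against a known input / expected output pair.
data = [2, 4, 6]
expected = 4.0
print(average_buggy(data) == expected)  # False -> test fails, defect revealed
# Debugging: locate the faulty divisor, fix it, then re-test.
print(average_fixed(data) == expected)  # True -> defect resolved
```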
  • 7. Software Testing Metrics & Measurements:
In software projects, it is extremely important to measure the quality, cost, and effectiveness of the project and its processes. Without measuring these, a project cannot head towards successful completion.
What is a software testing metric? A software testing metric is defined as a quantitative measure that helps to estimate the progress and quality of a software testing process. A metric is the degree to which a system or its component possesses a specific attribute.
Software Testing (MCA 513) MCA V Sem Unit-I: Introduction Topic: Software Testing Metrics
Now that we have grasped the concept of the life cycle, let us move ahead to the method to calculate test metrics.
Method to calculate test metrics
Here is a method to calculate software test metrics; there are certain steps that you must follow.
  • 8. 1. First, identify the key software testing processes to be measured.
2. Use the data from these processes as the base to define the metrics.
3. Determine the information to be tracked, the frequency of tracking, and the person responsible for the task.
4. Effectively calculate, manage, and interpret the defined metrics.
5. Identify areas of improvement based on the interpretation of the defined metrics.
Verification, Validation and Testing:
Verification and validation are not the same things, although they are often confused. Boehm succinctly expressed the difference as [1]:
• Verification: Are we building the product right?
• Validation: Are we building the right product?
"Building the product right" checks that the specifications are correctly implemented by the system, while "building the right product" refers back to the user's needs. In some contexts, it is required to have written requirements for both, as well as formal procedures or protocols for determining compliance.
Building the product right implies the use of the Requirements Specification as input for the next phase of the development process, the design process, the output of which is the Design Specification. It also implies the use of the Design Specification to feed the construction process. Every time the output of a process correctly implements its input specification, the software product is one step closer to final verification. If the output of a process is incorrect, the developers are not correctly building the product the stakeholders want. This kind of verification is called "artifact or specification verification".
Building the right product implies creating a Requirements Specification that contains the needs and goals of the stakeholders of the software product. If such an artifact is incomplete or wrong, the developers will not be able to build the product the stakeholders want. 
This is a form of "artifact or specification validation".
Note: Verification begins before validation, and then they run in parallel until the software product is released.
Software verification
Verifying that the specifications are met simply by running the software is not possible (e.g., how can anyone know whether the architecture or design is correctly implemented just by running the software?). Only by reviewing the associated artifacts can someone conclude whether the specifications are met.
Artifact or specification verification
  • 9. The output of each software development process stage can also be subject to verification when checked against its input specification. Examples of artifact verification:
• Of the design specification against the requirement specification: do the architectural design, detailed design and database logical model specifications correctly implement the functional and non-functional requirement specifications?
• Of the construction artifacts against the design specification: do the source code, user interfaces and database physical model correctly implement the design specification?
Software validation
Software validation checks that the software product satisfies or fits the intended use (high-level checking), i.e., the software meets the user requirements, not only as specification artifacts or as the needs of those who will operate the software, but as the needs of all the stakeholders (such as users, operators, administrators, managers, investors, etc.).
There are two ways to perform software validation: internal and external. During internal software validation, it is assumed that the goals of the stakeholders were correctly understood and that they were expressed precisely and comprehensively in the requirement artifacts. If the software meets the requirement specification, it has been internally validated. External validation happens when it is performed by asking the stakeholders whether the software meets their needs. Different software development methodologies call for different levels of user and stakeholder involvement and feedback, so external validation can be a discrete or a continuous event. Successful final external validation occurs when all the stakeholders accept the software product and express that it satisfies their needs. Such final external validation requires the use of an acceptance test, which is a dynamic test. 
However, it is also possible to perform internal static tests to find out whether the software meets the requirements specification, but that falls into the scope of static verification because the software is not running.
Artifact or specification validation
Requirements should be validated before the software product as a whole is ready (the waterfall development process requires them to be perfectly defined before design starts, but iterative development processes do not require this and allow their continual improvement). Examples of artifact validation:
• User Requirements Specification validation: user requirements as stated in a document
  • 10. called the User Requirements Specification are validated by checking whether they indeed represent the will and goals of the stakeholders. This can be done by interviewing them and asking them directly (static testing), or by releasing prototypes and having the users and stakeholders assess them (dynamic testing).
• User input validation: user input (gathered by any peripheral such as a keyboard or biometric sensor) is validated by checking whether the input provided by the software operators or users meets the domain rules and constraints (such as data type, range, and format).
Software Testing (MCA 513) MCA V Sem Lecture : 5 (30/9/2020) Unit-I: Introduction
Importance of software testing metrics
Software testing metrics are important for several reasons, a few of which are listed below:
● They help you take decisions about the next phase of activities
● They are evidence for a claim or prediction
● They help you understand the type of improvement required
● They ease the process of decision making or technology change
Types of software testing metrics: Enlisting them below:
  • 11. ● Process Metrics ● Product Metrics ● Project Metrics
Process Metrics: used to improve the efficiency of the processes in the SDLC (Software Development Life Cycle).
Product Metrics: used to address the quality of the software product.
Project Metrics: measure the efficiency of the team working on the project, along with the testing tools used.
Life-cycle of software testing metrics
  • 12. Analysis: responsible for the identification of the metrics as well as their definition.
Communicate: explains the need and significance of the metrics to the stakeholders and the testing team, and educates the testing team about the data points that need to be captured for processing each metric.
Evaluation: captures the needed data, verifies the validity of the captured data, and calculates the metric values.
Reports: develops reports with effective conclusions and distributes them to the stakeholders, developers and the testing teams.
Now that we have grasped the concept of the life cycle, let us move ahead to understanding the method to calculate test metrics.
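As a concrete illustration of the evaluation phase above (capturing data and calculating metric values), here is a Python sketch of two commonly used test metrics; all figures are invented for the example:

```python
# Hypothetical figures to illustrate two common test metrics.
tests_executed = 200
tests_passed = 180
defects_found = 15
size_kloc = 10  # product size in thousands of lines of code

# Test pass percentage: share of executed tests that passed.
pass_percentage = tests_passed / tests_executed * 100
# Defect density: defects found per KLOC.
defect_density = defects_found / size_kloc

print(pass_percentage)  # 90.0
print(defect_density)   # 1.5
```

Values like these feed the reporting phase, where trends across builds are compared.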
  • 13. Software Testing (MCA 513) MCA V Sem Lecture : 7 (5/10/2020) Unit-I: Introduction
Types of Software Testing (Advanced)
According to the nature and scope of an application, there are different types of software testing. This is because not all testing procedures suit all products, and every type has its pros and cons.
1. Unit Testing
It focuses on the smallest unit of software design. Here we test an individual unit or a group of interrelated units. It is often done by the programmer, using sample inputs and observing the corresponding outputs.
Example:
a) Checking that a loop, method or function in a program works correctly
b) Misunderstood or incorrect arithmetic precedence
c) Incorrect initialization
2. Integration Testing
The objective is to take unit-tested components and build a program structure that has been dictated by the design. Integration testing is testing in which a group of components is combined to produce output. Integration testing is of four types: (i) Top-down (ii) Bottom-up (iii) Sandwich (iv) Big-Bang
Example:
(a) Black-box testing: it is used for validation.
In this we ignore the internal working mechanism and focus on what the output is.
(b) White-box testing: it is used for verification. In this we focus on the internal mechanism, i.e. how the output is achieved.
3. Regression Testing
Every time a new module is added, it leads to changes in the program. This type of testing makes sure that the whole component works properly even after adding components to the complete program.
Example: In a school record system, suppose we have modules for staff, students and finance. Combining these modules and checking whether, on integration, the modules still work fine is regression testing.
4. Smoke Testing
This test is done to make sure that the software under test is ready or stable for further testing. It is called a smoke test because the initial pass is done to check that it did not catch fire or smoke when first switched on.
Example: If a project has two modules, before going to module 2 make sure that module 1 works properly.
5. Alpha Testing
This is a type of validation testing. It is a type of acceptance testing which is done before the product is released to customers, and is typically done by QA people.
Example: When software testing is performed internally within the organization
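A unit test, as in type 1 above, exercises a single function in isolation with sample inputs and expected outputs. A minimal Python sketch; the discount function is a made-up example, not from any real codebase:

```python
# Minimal unit test sketch for a single function (the smallest testable unit).
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)

# Sample inputs paired with expected outputs, checked with assertions.
assert apply_discount(100, 10) == 90.0
assert apply_discount(50, 0) == 50.0
assert apply_discount(80, 25) == 60.0
print("all unit tests passed")
```

In practice such checks are usually written with a framework like unittest or pytest, which collects and reports the results automatically.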
  • 15. 6. Beta Testing
The beta test is conducted at one or more customer sites by the end-users of the software. This version is released to a limited number of users for testing in a real-time environment.
Example: When software testing is performed by a limited number of people
7. System Testing
Here the software is tested to ensure that it works correctly on different operating systems. It is covered under the black-box testing technique: we focus only on the required input and output without looking at the internal working. It includes security testing, recovery testing, stress testing and performance testing.
Example: This includes functional as well as non-functional testing
8. Stress Testing
Here we give unfavorable conditions to the system and check how it performs under those conditions.
Example:
(a) Test cases that require maximum memory or other resources are executed
(b) Test cases that may cause thrashing in a virtual operating system
(c) Test cases that may cause excessive disk requirements
9. Performance Testing
It is designed to test the run-time performance of software within the context of an integrated system. It is used to test the speed and effectiveness of a program.
Example: Checking the number of processor cycles.
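A crude performance-testing sketch using Python's standard-library timeit module: time a function under a repeated workload and compare it against a budget. The workload and the 5-second budget are assumptions for illustration, not a real performance suite:

```python
import timeit

# Performance test sketch: measure run time and compare against a budget.
def build_report(n):
    # Stand-in workload for the operation being measured.
    return sum(i * i for i in range(n))

# Time 100 repetitions of the workload.
elapsed = timeit.timeit(lambda: build_report(10_000), number=100)
print(elapsed < 5.0)  # True if the assumed 5-second budget is met
```

Real performance testing would also vary load levels and measure throughput and resource usage, not just wall-clock time.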
  • 16. Quality and Reliability:
Quality is a much broader aspect than reliability. Quality covers almost everything: organisation, management, service, procedures, people, product, product life, etc. In simple words, reliability is only a subset of quality, the others being performance, consistency, efficiency, user experience, etc., whereas quality is the measure of conformance to a laid-down product specification.
Let's simplify this to understand the difference between quality and reliability with an example. Suppose you went to a shop to buy oranges. Whether the oranges are good or bad is a measure of quality. You took the oranges home that day. The next day you visited the same shop again and found that the oranges available there were good again; you purchased them and returned home. You repeated the same process for one week, and every time you got good oranges. Now you trust that shop always to have good oranges, and that's reliability.
Now we can say that:
 Quality is the present (today), while reliability is the future.
 Quality can be controlled and measured accurately, while reliability is a probability. We can ensure reliability by controlling quality.
 Quality is everything until the product is put into operation (i.e., t = 0 hrs), while reliability is everything that happens after t = 0 hrs. In other words, a product's quality is measured prior to the customer's initial product use, whereas product reliability is measured during and after the customer's use of the product.
  • 17.  Quality is the measure of conformance to a laid-down product specification.
 Quality is a static measure of a product meeting its specification, whereas reliability is a dynamic measure of product performance.
 Quality is observed, whereas reliability is experienced.
 You buy based upon quality; you come back and buy again based upon reliability.
 A poor-quality system can have better reliability, and a good-quality system can have poor reliability.
 Once design is over, the maximum system reliability is fixed; through quality assurance, what we can achieve is the design reliability.
 Quality is one among many parameters that ensure better reliability.
Software Testing (MCA 513) MCA V Sem Lecture : 8 (5/10/2020) Unit-I: Introduction
Defect Tracking:
Defect tracking has long been used to measure software quality, with an emphasis on finding as many bugs as possible early in the development cycle. While finding bugs early is still an accepted goal in development circles, many agile teams have moved away from the idea of defect
tracking, claiming that the process creates unnecessary overhead and prevents testers and developers from communicating effectively. There's some truth in that, but there are ways to get the benefits of defect tracking without the drawbacks.
Advantages of defect tracking
There's no shortage of tools when it comes to defect tracking. You can find tools to track non-technical issues, customer-facing tools for production-related defects, and internal tools the development team can use to track defects. In fact, even if you're just using sticky notes, email, spreadsheets, or a log on a wiki to track customer issues, you'll need defect tracking of some sort. It's just a matter of figuring out the right tools and processes for the team.
Defect tracking helps you ensure that bugs found in the system actually get fixed. Sure, it's great for testers and developers to have a conversation and to recreate the problem together. If the problem gets fixed immediately, great! Maybe it doesn't need to be logged. But if it just gets put down somewhere on a mental to-do list, there's a good chance it'll slip through the cracks.
Defect tracking tools not only provide a way to ensure follow-through but also provide valuable metrics. Depending on the tool being used, the team can tie defects to changed code, tests, or other data that will allow for traceability or analysis of defect trends. If a certain module is riddled with defects, it may be time to review and rewrite that module. Defect tracking tools allow for a repository of documentation that will provide value for troubleshooters, or for support personnel later on if there's a workaround for an issue. Having a tool in place also sends notifications to the right people when a bug needs to be fixed, tested, or marked as resolved.
Disadvantages of defect tracking
Many of the disadvantages of defect tracking have more to do with the overhead of the processes and tools than with the idea of defect tracking itself. 
Some organizations use multiple tools to track defects of different types, and those tools often don't integrate well with one another. The team ends up with the same defect documented in multiple places with slightly different descriptions, perhaps first from a user's perspective and then from a technical perspective in the internal bug-tracking system.
Complications can arise out of confusion over descriptions, lack of information, tools that are overly cumbersome and require mandatory fields for which the user doesn't have the answers, and difficulty in reporting. Sometimes the process overhead required to open a bug is more time-consuming than simply fixing the bug.
If every low-priority defect is tracked, the defect report will include many bugs that will never be fixed because fixing them doesn't provide a high enough ROI. However, keeping them in a defect report as "open" will require constant reanalysis and may imply poor quality, when in fact these defects aren't issues customers care about.
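To make the discussion concrete, here is a minimal sketch of the kind of record a defect-tracking tool maintains, with a status flow and audit history. The field names and states are assumptions for illustration, not taken from any specific tool:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical defect record; real tools add many more fields
# (assignee, priority, linked commits, attachments, etc.).
@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: str          # e.g. "low", "medium", "high"
    status: str = "open"   # assumed flow: open -> in progress -> resolved
    history: List[str] = field(default_factory=list)

    def transition(self, new_status: str) -> None:
        # Record each state change so follow-through can be audited.
        self.history.append(f"{self.status} -> {new_status}")
        self.status = new_status

bug = Defect("BUG-101", "Login fails on Safari", "high")
bug.transition("in progress")
bug.transition("resolved")
print(bug.status)   # resolved
print(bug.history)  # ['open -> in progress', 'in progress -> resolved']
```

The history list is what enables the traceability and trend metrics described above.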
  • 19. Though defect-tracking tools help in storing documentation, they may actually hinder communication if they prevent team members from talking and collaborating. If code is thrown "over the wall" from development to test and the bugs are thrown back to development, when the only communication about defects is done through a tool, there are bound to be misunderstandings. This results in the classic line from developers, "it works on my machine," along with closure as a "user error." That leads to a reopened bug and an under-the-breath expletive from the tester.
The best solution for the team
Though agile development promotes "individuals and interactions over processes and tools," processes and tools are still important. However, it's important for teams to avoid replacing strong communication with poor processes and cumbersome tools.
Some agile teams document their defects by creating user stories. Itay Ben-Yehuda recommends against this, claiming that these defect-related stories will get lost in the backlog and take a backseat to new feature work. Other agile teams may take the approach that bugs must be fixed immediately and thus don't need to be tracked. "If the defect is important enough to fix, then fix it. Storing it only prolongs the pain and possibility of repercussion later," says Amy Reichert. Many agile teams take the approach that if a bug is found and fixed in code being developed during the current sprint, they don't need to track it. Otherwise, it needs to be tracked.
Clearly, the usability of your defect-tracking tool is a big factor in how successful your team will be in using the tool to its advantage. Often, the team doesn't have a choice in the tools they use or the standards set by the governance team. How much flexibility they have depends on the organization. 
However, if a tool has been selected, it would be advantageous for someone on the team to become a super-user and take advantage of configuring the tool to make it as user-friendly for the team as possible, perhaps by adding templates or canned reports.
Teams need to work together to decide when defects will be tracked, as well as discuss any related processes or guidelines, such as how much information must be tracked for each ticket. Optimize these processes for communication, flow, and quality. Continually evaluate during retrospectives and adapt your processes over time.
Every software development team needs to have a process for how to handle defects. Whether your tools are as simple as sticky notes or as complex as an integrated application lifecycle management tool suite, decide as a team how to use the tools and create the processes that will work best for you.
  • 20. Software Testing (MCA 513) MCA V Sem Lecture 9 Topic: Test Case, Test Suites & Test Oracle
Test case
In software engineering, a test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Test cases underlie testing that is methodical rather than haphazard. A battery of test cases can be built to produce the desired coverage of the software being tested. Formally defined test cases allow the same tests to be run repeatedly against successive versions of the software, allowing for effective and consistent regression testing.
  • 21. Formal test cases
In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement: one positive test and one negative test. If a requirement has sub-requirements, each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using a traceability matrix. Written test cases should include a description of the functionality to be tested and the preparation required to ensure that the test can be conducted.
A formal written test case is characterized by a known input and by an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.
Informal test cases
For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all; the activities and results are reported after the tests have been run.
In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing environment, or they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to evaluate. Scenarios usually differ from test cases in that test cases are single steps while scenarios cover a number of steps.[5][6]
Typical written test case format
A test case is usually a single step, or occasionally a sequence of steps, to test the correct behaviour/functionality or features of an application. An expected result or expected outcome is usually given. Additional information that may be included:
 Test Case ID - This field uniquely identifies a test case. 
 Test case Description/Summary - This field describes the test case objective.
 Test steps - In this field, the exact steps for performing the test case are mentioned.
 Pre-requisites - This field specifies the conditions or steps that must be followed before the test steps are executed.
 Depth
 Test category
 Author - Name of the tester.
 Automation - Whether this test case is automated or not.
 Pass/Fail
 Remarks
Larger test cases may also contain prerequisite states or steps, and descriptions.[8]
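The written-test-case fields listed above can be sketched as a simple record; the concrete values below are invented for illustration, not from any real test repository:

```python
# A written test case as a plain record, mirroring the fields above.
test_case = {
    "id": "TC-042",
    "description": "Verify login with valid credentials",
    "prerequisites": ["User account exists", "Login page reachable"],
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "author": "QA team",
    "automated": False,
    "actual_result": None,  # filled in when the test is executed
    "status": None,         # 'pass' or 'fail' after execution
}

# Executing the test case fills in the last two fields.
test_case["actual_result"] = "User is redirected to the dashboard"
test_case["status"] = ("pass" if test_case["actual_result"]
                       == test_case["expected_result"] else "fail")
print(test_case["status"])  # pass
```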
  • 22. A written test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate those results. These past results would usually be stored in a separate table. Test suites often also contain  Test summary  Configuration Besides the description of the functionality to be tested and the preparation required to ensure that the test can be conducted, the most time-consuming part of working with test cases is creating them and modifying them when the system changes. Under special circumstances, there could be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass. This often happens when determining performance numbers for a new product. The first test is taken as the baseline for subsequent test and product release cycles. Acceptance tests, which use a variation of a written test case, are commonly performed by a group of end-users or clients of the system to ensure the developed system meets the requirements specified or the contract.[10][11] User acceptance tests are differentiated by the inclusion of happy path or positive test cases to the almost complete exclusion of negative test cases. Testing suite In software development, a test suite, less commonly known as a 'validation suite', is a collection of test cases that are intended to be used to test a software program to show that it has some specified set of behaviours. A test suite often contains detailed instructions or goals for each collection of test cases and information on the system configuration to be used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests. 
Collections of test cases are sometimes incorrectly termed a test plan, a test script, or even a test scenario. Types Occasionally, test suites are used to group similar test cases together. A system might have a smoke test suite that consists only of smoke tests or a test suite for some specific functionality in
  • 23. the system. It may also contain all tests and signify if a test should be used as a smoke test or for some specific functionality. In model-based testing, one distinguishes between abstract test suites, which are collections of abstract test cases derived from a high-level model of the system under test, and executable test suites, which are derived from abstract test suites by providing the concrete, lower-level details needed to execute this suite by a program.[1] An abstract test suite cannot be directly used on the actual system under test (SUT) because abstract test cases remain at a high abstraction level and lack concrete details about the SUT and its environment. An executable test suite works on a sufficiently detailed level to correctly communicate with the SUT and a test harness is usually present to interface the executable test suite with the SUT. A test suite for a primality testing subroutine might consist of a list of numbers and their primality (prime or composite), along with a testing subroutine. The testing subroutine would supply each number in the list to the primality tester, and verify that the result of each test is correct. References:  IEEE: SWEBOK: Guide to the Software Engineering Body of Knowledge  Carlo Ghezzi, Mehdi Jazayeri, Dino Mandrioli: Fundamentals of Software Engineering, Prentice Hall, ISBN 0-13-099183-X  Alan L. Breitler: A Verification Procedure for Software Derived from Artificial Neural Networks, Journal of the International Test and Evaluation Association, Jan 2004, Vol 25, No 4.  Vijay D'Silva, Daniel Kroening, Georg Weissenbacher: A Survey of Automated Techniques for Formal Software Verification. IEEE Trans. on CAD of Integrated Circuits and Systems 27(7): 1165-1178 (2008) Compiled By: Ms. Savita Mittal DCA
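The primality test suite just described can be sketched in code. The `is_prime` function below is only a stand-in for the subroutine under test; the suite itself is simply the list of numbers paired with their known primality:

```python
def is_prime(n):
    """Stand-in for the primality subroutine under test (trial division)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# The test suite: numbers paired with their known primality (the expected output).
suite = [(1, False), (2, True), (3, True), (4, False), (17, True), (21, False)]

def run_suite(suite, subroutine):
    """Supply each number to the primality tester and collect any wrong answers."""
    return [(n, expected) for n, expected in suite if subroutine(n) != expected]

print(run_suite(suite, is_prime))  # prints [] — an empty list means every case passed
```

Note how the pairs in `suite` play the role of the expected outputs of a formal test case: each entry is a known input together with the output worked out before the test is executed.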
  • 24. Software Testing- MCA 513 Unit I (Contd..) Lecture 10 (9/10/2020) Test Oracles A testing oracle is a mechanism, different from the program itself, that can be used to check the correctness of a program's output for test cases. Conceptually, we can consider testing as a process in which the test cases are given both to the oracle and to the program under test. The outputs of the two are then compared to determine whether the program behaves correctly for the test cases. This is shown in the figure. Testing oracles are required for testing. Ideally, we want an automated oracle, which always gives the correct answer. However, often oracles are human beings, who mostly calculate by hand what the output of the program should be. As it is often very difficult to determine whether the behavior corresponds to the expected behavior, human oracles may make mistakes. Consequently, when there is a discrepancy between the program's result and the oracle's result, we must verify the result produced by the oracle before declaring that there is a defect in the program. The human oracles typically use the program's specifications to decide what the correct behavior of the program should be. To help the oracle determine the correct behavior, it is important that the behavior of the system or component is explicitly specified and that the specification itself is error-free; in other words, it must actually specify the true and correct behavior. There are some systems where oracles are automatically generated from the specifications of programs or modules. With such oracles, we are assured that the output of the oracle conforms to the specifications. However, even this approach does not solve all our problems, as there is a possibility of errors in the specifications. As a result, an oracle generated from the specifications
  • 25. will be correct only if the specifications themselves are correct, and will not be reliable if the specifications contain errors. In addition, systems that generate oracles from specifications require formal specifications, which are often not produced during design. Software verification Dynamic verification (Test, experimentation) Dynamic verification is performed during the execution of software, and dynamically checks its behavior; it is commonly known as the Test phase. Depending on the scope of tests, we can categorize them in three families:  Test in the small: a test that checks a single function or class (Unit test)  Test in the large: a test that checks a group of classes, such as o Module test (a single module) o Integration test (more than one module) o System test (the entire system)  Acceptance test: a formal test defined to check acceptance criteria for a software product o Functional test o Non-functional test (performance, stress test) The aim of software dynamic verification is to find the errors introduced by an activity (for example, a piece of medical software analyzing bio-chemical data), or by the repetitive performance of one or more activities (such as a stress test for a web server, i.e. checking whether the current product of the activity is as correct as it was at the beginning of the activity). Static verification (Analysis) Static verification is the process of checking that software meets requirements by inspecting the code before it runs; it is a review process. For example:  Code conventions verification  Bad practices (anti-pattern) detection  Software metrics calculation  Formal verification Verification by Analysis - The analysis verification method applies to verification by investigation, mathematical calculations, logical evaluation, and calculations using classical textbook methods or accepted general-use computer methods. 
Analysis includes sampling and correlating measured data and observed test results with calculated expected values to establish conformance with requirements. Narrow scope
  • 26. When it is defined more strictly, verification is equivalent only to static testing and it is intended to be applied to artifacts, while validation (of the whole software product) would be equivalent to dynamic testing and intended to be applied to the running software product (not its artifacts, except requirements). Notice that requirements validation can be performed statically and dynamically. Comparison with validation Software verification is often confused with software validation. The difference between verification and validation:  Software verification asks the question, "Are we building the product right?"; that is, does the software conform to its specifications? (As a house conforms to its blueprints.)  Software validation asks the question, "Are we building the right product?"; that is, does the software do what the user really requires? (As a house conforms to what the owner needs and wants.) SOFTWARE REQUIREMENT SPECIFICATION (SRS) VERIFICATION 1) SRS should have descriptive details on all the requirements – existing and new for the release. 1. Should include details on functional, non-functional, performance and design requirements. 2. Details of interfaces and GUI specifications should be present. 3. For each functional requirement it should clearly specify:  What all has to happen on doing a particular action on the system?  Where all should it happen?  When does it have to happen?  What all should not happen? 1. If any of these is not specified, identify it and note it for discussion. 2) Proper version history for the various features should be mentioned. 3) All the new features should be elaborately explained in the SRS. 4) It should mention the operating environment (software and hardware) recommended for proper functioning of the application. 1. Should include OS, additional (3rd party) software or packages (with version) required for the application. 2. 
Recommended values of system memory, RAM, processor speed, server/Desktop usability etc should be mentioned.
  • 27. 5) Any constraints on the design, H/W or S/W requirements, standards compliance, etc. should be mentioned in the SRS 6) Any testability limitations should be clearly specified 7) All assumptions based on which the application is developed should be listed 8) Interface diagrams, call flows or flow charts depicted should be cross-checked with the understanding obtained. 9) SRS should have references for all records like CDR parameter details, configuration file parameter details and other operation steps. 10) Ensure that all the requirements are properly reviewed and signed off by respective stakeholders/stakeholder representatives. Source Code review Code review (sometimes referred to as peer review) is a software quality assurance activity in which one or several people check a program mainly by viewing and reading parts of its source code, and they do so after implementation or as an interruption of implementation. At least one of the persons must not be the code's author. The persons performing the checking, excluding the author, are called "reviewers". Although direct discovery of quality problems is often the main goal, code reviews are usually performed to reach a combination of goals:  Better code quality – improve internal code quality and maintainability (readability, uniformity, understandability, ...)  Finding defects – improve quality regarding external aspects, especially correctness, but also find performance problems, security vulnerabilities, injected malware, ...  Learning/Knowledge transfer – help in transferring knowledge about the codebase, solution approaches, expectations regarding quality, etc., both to the reviewers as well as to the author  Increase sense of mutual responsibility – increase a sense of collective code ownership and solidarity  Finding better solutions – generate ideas for new and better solutions and ideas that transcend the specific code at hand. 
 Complying with QA guidelines – Code reviews are mandatory in some contexts, e.g., air traffic software. The above-mentioned definition of code review delimits it against neighboring but separate software quality assurance techniques: in static code analysis the main checking is performed by an automated program, in self checks only the author checks the code, in testing the execution of
  • 28. the code is an integral part, and pair programming is performed continuously during implementation and not as a separate step. Types of review processes There are many variations of code review processes, some of which will be detailed below. Formal inspection The historically first code review process that was studied and described in detail was called "Inspection" by its inventor Michael Fagan. This Fagan inspection is a formal process which involves a careful and detailed execution with multiple participants and multiple phases. Formal code reviews are the traditional method of review, in which software developers attend a series of meetings and review code line by line, usually using printed copies of the material. Formal inspections are extremely thorough and have been proven effective at finding defects in the code under review. Regular change-based code review In recent years, many teams in industry have introduced a more lightweight type of code review. Its main characteristic is that the scope of each review is based on the changes to the codebase performed in a ticket, user story, commit, or some other unit of work. Furthermore, there are rules or conventions that embed the review task into the development process (e.g., "every ticket has to be reviewed"), instead of explicitly planning each review. Such a review process is called "regular, change-based code review". There are many variations of this basic process. A survey among 240 development teams from 2017 found that 90% of the teams use a review process that is based on changes (if they use reviews at all), and 60% use regular, change-based code review. Also, most large software corporations such as Microsoft, Google, and Facebook follow a change-based code review process. Efficiency and effectiveness of reviews Capers Jones' ongoing analysis of over 12,000 software development projects showed that the latent defect discovery rate of formal inspection is in the 60-65% range. 
For informal inspection, the figure is less than 50%. The latent defect discovery rate for most forms of testing is about 30%.[10][11] A code review case study published in the book Best Kept Secrets of Peer Code Review found that lightweight reviews can uncover as many bugs as formal reviews while being faster and more cost-effective, in contradiction to the study done by Capers Jones. The types of defects detected in code reviews have also been studied. Empirical studies provided evidence that up to 75% of code review defects affect software evolvability/maintainability rather than functionality, making code reviews an excellent tool for software companies with long product or system life cycles.
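Assuming, purely for illustration, that the discovery rates quoted above act independently and in sequence (formal inspection at its 60% low end, followed by testing at 30%), their combined defect-removal efficiency can be computed as:

```python
# Idealized model: each activity independently removes its quoted fraction
# of the defects still latent. The 0.60 and 0.30 figures are the formal
# inspection (low end of 60-65%) and testing rates cited above.
rates = {"formal inspection": 0.60, "testing": 0.30}

remaining = 1.0
for activity, rate in rates.items():
    remaining *= (1.0 - rate)  # fraction of defects surviving this activity

combined = 1.0 - remaining
print(f"combined removal efficiency: {combined:.0%}")  # prints 72%
```

The independence assumption is an idealization; in practice the two activities tend to catch overlapping classes of defects, so the real combined rate is usually somewhat lower.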
  • 29. Guidelines The effectiveness of code review was found to depend on the speed of reviewing. Code review rates should be between 200 and 400 lines of code per hour. Inspecting and reviewing more than a few hundred lines of code per hour for critical software (such as safety critical embedded software) may be too fast to find errors. Supporting tools Static code analysis software lessens the task of reviewing large chunks of code on the developer by systematically checking source code for known vulnerabilities and defect types. A 2012 study by VDC Research reports that 17.6% of the embedded software engineers surveyed currently use automated tools to support peer code review and 23.7% expect to use them within 2 years. Software Testing(MCA 513) Unit 1 (Contd..) Lecture 11(9/10/2020) User Documentation Verification Documentation testing falls under the category of non-functional testing. Documentation testing activity can take off right from the beginning of the software development process. If taken up from the beginning, defects in documentation can be effortlessly fixed with minimal expenses. Poor documentation is as likely to cause problems for our customers as is poorly written code. Incorrect or
  • 30. poorly written or even missing documentation irritates the end users, who tend to quickly form an opinion about the product. Hence the quality of documentation bears a clear reflection on the quality of the product as well as that of the supplier who supplied it. The product can become a success only if its documentation is adequately tested. Software project audits This article describes approaches to conducting internal or external audits of software development projects. Audits may be conducted to help in recovery planning for a failing software project, or on a regular basis as part of a failure prevention approach. Guidance for scheduling audits on a preventive basis is presented. Recovery planning approaches are described. An Ominous Sense of Disaster You're developing a new software program and your project team or contract developer has slipped the schedule. Should you be concerned? The answer is, probably not if this is the first slip. Software projects are more difficult to manage than construction projects, and you wouldn't go completely crazy if your home remodeling project slipped a few weeks. If half of your software projects are coming in on time and on budget or better, you're doing pretty well. You should be more concerned after the second or third slip. This is a definite sign of a project potentially in trouble. You might think that the trouble is the schedule delays, but this is merely the surface manifestation of a deeper underlying problem. Schedule slips and cost overruns may increase your costs and delay delivery, but in most cases they will not result in total project disaster. Unfortunately, when you look beneath the surface of a project missing deadlines you will often find that the underlying architecture and code itself is seriously, perhaps even fatally, flawed. There are two possible reasons for this correlation: 1. The most frequent explanation is that the developers are over their heads. 
They are attempting to build a system whose complexity exceeds their experience or ability (or both) and the result is a flawed architecture, incorrect object design, poor database design, inefficient code or data access, and so on. Don't get me wrong, these individuals may have the best of intentions and be competent in development in general, but for this particular complexity of application, they are lost in the woods. 2. A much less frequent but not uncommon explanation is that the developers have the capability to build the system, but the initial estimate of effort and time was so badly underscoped that they do not have anywhere near enough time to do the job right. Managers may exert so much pressure on the developers that they crumble and produce poor quality architectures, designs, and so on simply trying to meet unrealistic milestones. This is one of the reasons development shops should emphasize to their programmers that they must never sacrifice quality for schedule, and that it is their job as professionals to stand up to any management pressure otherwise. Schedule slips with high quality code are always preferred over on-time performance with poor quality (not that anyone likes schedule slips).
  • 31. If you ignore the initial warning sign of multiple schedule slips, then you have laid a foundation for total project failure and cancellation. This will show up when the system is finally delivered in one or more of the following forms: ● The system has numerous defects and crashes or operates incorrectly to the extent where it is not usable; ● The system is missing key functionality that is necessary for it to be deployed operationally; ● Seemingly minor enhancements to the system are very difficult and costly to implement and often result in unexpected problems in other parts of the application; or ● The system performance is sufficiently slow that it is not feasible to deploy it operationally. If you suspect that you may have a project going down this path, you're not alone. According to a Standish Group study, 40% of software projects underway now are expected to fail, and 33% of all projects are over budget and late. Project Auditing Methodology Damage Auditing Suppose you suspected that one of your subsidiary corporations was in trouble financially, and that they were intentionally or unintentionally hiding the magnitude of the problem within their accounting department. Without hesitation you would call in outside experts (certified public accountants) to do an audit and tell you where you really stood. Similarly, if you suspect that one of your projects is in trouble, you need to immediately call in outside experts to do an audit and tell you where you really stand. The audit team should be composed of very senior managers and software engineers. Audits typically last between 3 days and 3 weeks, based on the size of the project. The audit consists of a management audit and a technical audit. Typically, on smaller projects the technical audit is the most critical and the focus of the audit, while on the larger projects (over $5M USD) the management audit dominates. 
The Technical Audit The technical audit focuses on the design team, and to a lesser extent, the programmers doing the actual implementation. It begins by looking at the overall system architecture and database design. The question is not really whether these are right or wrong, but rather whether they are appropriate to the nature of the application (usage, transaction volume, database size, planned evolution, and so on). Our experience has been that if these two elements are correct, the project has a solid foundation and if there are other problems, salvage is possible. On the other hand, if these two elements are incorrect then the remainder of the system is likely to need a total rewrite. Running a close third in importance is the design of the objects and business application servers. If these are wrong, the system can often be made to work but maintenance will be difficult. A decision will need
to be made whether to fix and deploy the current system while immediately redesigning a follow-on system, or to redesign at this point in time. Once the design has been reviewed, the implementation must be examined. The process begins by using automated tools to look at comment density (both in headers and embedded within functions) across the application as a whole and by function or module. A similar analysis of McCabe's complexity metric for each function is completed. Functions with high complexity are candidates for simplification and are likely trouble points for defects. The code itself (including data access code such as SQL statements and stored procedures) is then examined, either in its entirety or on a sampling basis. The audit team looks for inefficient coding techniques, proper error and exception handling, duplicate code blocks (duplicate code should be encapsulated in a function or object, not just cut and pasted), and other obvious problems. Finally, the user interface is examined for usability and conformance with industry standards. Of all items mentioned, the user interface is the easiest to fix if deficient. References 1. Systems and software engineering – Vocabulary. ISO/IEC/IEEE 24765:2010(E). 2010-12-01. pp. 1–418. doi:10.1109/IEEESTD.2010.5733835. ISBN 978-0-7381-6205-8. 2. Kaner, Cem (May 2003). "What Is a Good Test Case?" (PDF). STAR East: 2. 3. 
"Writing Test Rules to Verify Stakeholder Requirements". StickyMinds. 4. Beizer, Boris (May 22, 1995). Black Box Testing. New York: Wiley. p. 3. ISBN 9780471120940. 5. "An Introduction to Scenario Testing" (PDF). Cem Kaner. Retrieved 2009-05-07. 6. Crispin, Lisa; Gregory, Janet (2009). Agile Testing: A Practical Guide for Testers and Agile Teams. Addison-Wesley. pp. 192–5. ISBN 978-81-317-3068-3.
  • 33. 7. https://www.softwaretestingstandard.org/part3.php ISO/IEC/IEEE 29119-4:2019, "Part 4: Test techniques" 8. Liu, Juan (2014). "Studies of the Software Test Processes Based on GUI". 2014 International Conference on Computer, Network: 113–121. doi:10.1109/CSCI.2014.104. ISBN 9781605951676. S2CID 15204091. Retrieved 2019-10-22. 9. Kaner, Cem; Falk, Jack; Nguyen, Hung Q. (1993). Testing Computer Software (2nd ed.). Boston: Thomson Computer Press. pp. 123–4. ISBN 1-85032-847-1. 10. Goethem, Brian Hambling, Pauline van (2013). User Acceptance Testing: A Step-by-Step Guide. BCS Learning & Development Limited. ISBN 9781780171678. 11. Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. Hoboken, NJ: Wiley. ISBN 978-0-470-40415-7. 12. Cimperman, Rob (2006). UAT Defined: A Guide to Practical User Acceptance Testing. Pearson Education. Chapter 2. ISBN 9780132702621. External links ● Writing Software Security Test Cases - Putting security test cases into your test plan by Robert Auger ● Software Test Case Engineering by Ajay Bhagwat Software Testing(MCA 513) Unit 1 (Contd..) Lecture 12(14/10/2020) The Management Audit The management audit has three steps: gather metric-oriented input data; prepare project baselines using industry standard approaches; and compare actual or projected values with the resultant baseline. The project baseline will include things like total effort and schedule, deliverables (including page counts), labor loading curves over time by skill set, development team skills and experience, maintenance projections, defect projections by category, and so on. By comparing historic values for
  • 34. staffing and other metric values with baseline values, deviations can be identified and analyzed. Similarly, forward-looking project plans can be compared to the baseline values and deviations examined. Examples of the types of problems that pop out are shown in the following table:

Metric | Industry Std. | Audit results
Software Design Description page count | 2,110 | 493
Software Test Description page count | 873 | 42
Software testers (person months) | 72 | 14
Software integration and test time (calendar months) | 4.2 | 0.75 (planned)

Sample management audit problem areas

Disaster Prevention Audits Through periodic audits throughout the project lifecycle, problems can be identified early and corrective action taken in a timely fashion. These preventative audits have significantly increased the project success rates on large projects in the State of California, and are now a requirement for all large projects. At each audit the team will look at work performed to date, and plans for future work, to identify problems (if any) with each area. Preventative audits are normally accomplished as follows:

Milestone | Scope of Audit
Project Initiation | Project baseline plans
Software Requirements Review | Requirements, architecture, plans
Software Design Review | Scope creep, design, architecture, database, interfaces, test documentation and approach, coding guidelines, plans
Completion of coding | Code implementation (complexity, adherence to guidelines, encapsulation, algorithm order of magnitude), plans, user interface
Delivery | Maintainability, conformance to requirements, usability
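The baseline-comparison step of the management audit can be sketched as a simple ratio check. The figures below are taken from the sample problem table above; the 50% threshold is an arbitrary choice for illustration, not an industry rule:

```python
# Baseline (industry standard) vs. audited values, from the sample table above.
metrics = {
    "Design description pages": (2110, 493),
    "Test description pages":   (873, 42),
    "Tester person-months":     (72, 14),
    "Integration/test months":  (4.2, 0.75),
}

def deviations(metrics, threshold=0.5):
    """Flag metrics whose audited value falls below threshold * baseline."""
    flagged = {}
    for name, (baseline, actual) in metrics.items():
        ratio = actual / baseline
        if ratio < threshold:
            flagged[name] = round(ratio, 2)
    return flagged

for name, ratio in deviations(metrics).items():
    print(f"{name}: only {ratio:.0%} of baseline")
```

Run on the sample figures, every metric is flagged, which is exactly the pattern of a project whose documentation and test effort are far below what comparable projects require.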
  • 35. In addition, project audits are normally conducted at a minimum every 6 months, so if one stage extends longer than 6 months a progress audit would be conducted during the middle of the phase. Three Actual Examples Let me describe three actual project audit results by way of illustration. The names are hidden because of nondisclosure agreements.  Project Alpha was a large project (over $100M). We focused on the management audit and identified inadequate staffing levels overall, incorrectly trained staff, and incorrect labor mixes. Our technical audit indicated abnormally high defect levels and major units needing rewrite. The project was subsequently cancelled.  Project Beta was a large project (over $150M). Our technical and management audit indicated that the schedule slips were reasonable, the quality was high, and the team was doing a good job of recovering from an initial underestimate of the scope. We also identified that the planned maintenance effort was significantly underestimated. The project continued with a new schedule, was successful, and an updated maintenance budget was submitted.  Project Charlie was a small to moderate project (approximately $1.5M). Our technical and management audit indicated that the current project team and approach was failing, but we developed a recovery plan, which was subsequently implemented. Recovery planning is addressed in the next section. Saving Your Job and Sanity – Recovery Planning OK, suppose that the project audit determines that your project is indeed in a disaster situation. What are your options? If you continue on your present course, our experience indicates that in 100% of the cases the project will ultimately fail. Alternatively, you can cancel the project immediately and cut your losses. For non-mission-critical systems that do not have a large return (in terms of reducing costs or increasing revenues) this is often the best option. In many cases, however, failure is not an option. 
In those cases, recovery planning is the next phase. Recovery planning begins with software triage. Triage is a military term used by medical personnel following a major battle. The wounded are separated into three categories: Those that will get better on their own; those that will die no matter what is done; and those where medical attention has a significant probability of helping the person to live. The doctors then focus all of their attention on those where their assistance will make a difference. The initial step in recovery planning is to conduct a triage on the existing software project. What is usable as is? What can be economically fixed and used? What should be discarded? During this step, the core system functionality is also identified and all extraneous features that can be deleted or delayed are called out. The recovery planning phase then involves development of a new, zero based project baseline plan. Existing plans, schedules, and so on are thrown out and the project is planned from the current situation to an achievable completion. This requires that formal techniques be used for estimating effort, schedule, the time-cost trade off, and so on. Delivered functionality and other project estimating
  • 36. parameters are adjusted until an acceptable completion date is achieved or it becomes obvious that no acceptable completion date is possible. We use Cost Xpert to complete the project estimates. We've never had a project that was formally estimated and planned with Cost Xpert subsequently fail. Conclusions Outside project audits are critical for any project that is suspected of being in trouble, and are a high return on investment (ROI) investment as a risk reduction technique for all projects of any significant size. Auditors should be independent of the project team and have no vested interest in the project succeeding or failing. Auditors should not be outside development shops, who may have a vested interest in disparaging the current project team to place their own staff on the project. Audits look at both technical and management issues, identify potential problems, and make recommendations. If a project is found to be in a failure state, then recovery planning is undertaken to try to salvage the project, if possible. References  IEEE: SWEBOK: Guide to the Software Engineering Body of Knowledge  Carlo Ghezzi, Mehdi Jazayeri, Dino Mandrioli: Fundamentals of Software Engineering, Prentice Hall, ISBN 0-13-099183-X  Alan L. Breitler: A Verification Procedure for Software Derived from Artificial Neural Networks, Journal of the International Test and Evaluation Association, Jan 2004, Vol 25, No 4.  Vijay D'Silva, Daniel Kroening, Georg Weissenbacher: A Survey of Automated Techniques for Formal Software Verification. IEEE Trans. on CAD of Integrated Circuits and Systems 27(7): 1165- 1178 (2008) 1. ^ Systems and software engineering – Vocabulary. Iso/Iec/IEEE 24765:2010(E). 2010-12-01. pp. 1–418. doi:10.1109/IEEESTD.2010.5733835. ISBN 978-0-7381-6205-8. 2. ^ Kaner, Cem (May 2003). "What Is a Good Test Case?" (PDF). STAR East: 2. 3. ^ "Writing Test Rules to Verify Stakeholder Requirements". StickyMinds. 4. ^ Beizer, Boris (May 22, 1995). Black Box Testing. 
New York: Wiley. p. 3. ISBN 9780471120940. 5. "An Introduction to Scenario Testing" (PDF). Cem Kaner. Retrieved 2009-05-07. 6. Crispin, Lisa; Gregory, Janet (2009). Agile Testing: A Practical Guide for Testers and Agile Teams. Addison-Wesley. pp. 192–5. ISBN 978-81-317-3068-3. 7. ISO/IEC/IEEE 29119-4:2019, "Part 4: Test techniques". https://www.softwaretestingstandard.org/part3.php 8. Liu, Juan (2014). "Studies of the Software Test Processes Based on GUI". 2014 International Conference on Computer, Network: 113–121. doi:10.1109/CSCI.2014.104. ISBN 9781605951676. S2CID 15204091. Retrieved 2019-10-22. 9. Kaner, Cem; Falk, Jack; Nguyen, Hung Q. (1993). Testing Computer Software (2nd ed.). Boston: Thomson Computer Press. pp. 123–4. ISBN 1-85032-847-1. 10. Hambling, Brian; van Goethem, Pauline (2013). User Acceptance Testing: A Step-by-Step Guide. BCS Learning & Development Limited. ISBN 9781780171678.
  • 37. 11. Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. Hoboken, NJ: Wiley. ISBN 978-0-470-40415-7. 12. Cimperman, Rob (2006). UAT Defined: A Guide to Practical User Acceptance Testing. Pearson Education. Chapter 2. ISBN 9780132702621. External links  Writing Software Security Test Cases - Putting security test cases into your test plan by Robert Auger  Software Test Case Engineering by Ajay Bhagwat Software Testing(MCA 513) MCA V Sem Unit-I: Introduction Lecture 13 Topic: Boundary Value Analysis & Equivalence Partitioning Software Testing is imperative for a bug-free application; it can be done manually or through automation. Although automation testing reduces testing time, manual testing remains the most popular method for validating the functionality of software applications. Here, we explain the most important manual software testing techniques. ● Black Box Testing Technique ● Boundary Value Analysis (BVA) ● Equivalence Class Partitioning What are Software Testing Techniques? Software Testing Techniques are procedures that help every software development project improve its overall quality and effectiveness. They help in designing better test cases: sets of conditions or variables under which a tester determines whether a system under test satisfies its requirements or works correctly. Different testing techniques are applied as part of the testing process to improve the effectiveness of the tests. Black Box Test Design Technique Black Box Test Design is a testing technique in which the functionality of the Application Under Test (AUT) is tested without looking at the internal code structure, implementation details, or knowledge of internal paths of the software. This type of testing is based entirely on software requirements and specifications.
  • 38. In Black Box Testing, we focus only on the inputs and outputs of the software system, without concern for the inner workings of the software. By using this technique, we can save a lot of testing time and get good test case coverage. Test techniques are generally categorized into five: 1. Boundary Value Analysis (BVA) 2. Equivalence Class Partitioning 3. Decision Table based testing 4. State Transition 5. Error Guessing Boundary Value Analysis (BVA): BVA is a Black Box Test Design Technique used to find errors at the boundaries of the input domain (it tests the behavior of a program at the input boundaries) rather than in the middle of the input range. The basic idea in boundary value testing is to select input variable values at the minimum, just above the minimum, just below the minimum, a nominal value, just below the maximum, the maximum, and just above the maximum. That is, for each range there are two boundaries, the lower boundary (start of the range) and the upper boundary (end of the range), and the boundaries are the beginning and end of each valid partition. We should design test cases that exercise the program functionality at the boundaries, and with values just inside and outside them. Boundary value analysis is also a part of stress and negative testing. If the input is a range of values between A and B, then design test cases for A, A+1, A-1 and B, B+1, B-1. Example: Why go with Boundary Value Analysis? Consider an example where a developer writes code for an amount text field that accepts and transfers values only from 100 to 5000. The test engineer checks it by entering 99 into the amount text field and then clicking the transfer button. It shows an error message, because the boundary values are set at 100 and 5000, and 99 is an invalid value: since 99 is less than 100, the text field will not transfer the amount. The valid and invalid test cases are listed below.
Valid Test Cases 1. Enter the value 100 which is min value. 2. Enter the value 101 which is min+1 value. 3. Enter the value 4999 which is max-1 value. 4. Enter the value 5000 which is max value. Invalid Test Cases 1. Enter the value 99 which is min-1 value. 2. Enter the value 5001 which is max+1 value Equivalence Partitioning: Equivalence partitioning is also known as “Equivalence Class Partitioning”. In this method, the input domain data is divided into different equivalence data classes – which are generally termed as ‘Valid’ and ‘Invalid’. The inputs to the software or system are divided into groups that are expected to exhibit similar behavior.
  • 39. Thus, it reduces the number of test cases to a finite list of testable cases covering maximum possibilities. Example: Suppose the application you are testing accepts values of 1 – 100 characters only. Here, there would be three partitions: one valid partition and two invalid partitions. The valid partition: between 1 and 100 characters. The expectation is that the text field handles all inputs of 1-100 characters the same way. The first invalid partition: 0 characters. When no characters are entered, we'd expect the text field to reject the value. The second invalid partition: ≥ 101 characters. We'd expect the text field to reject all values of 101 or more characters. EQUIVALENCE PARTITIONING has been categorized into two parts: 1. Pressman Rule 2. Practice Method 1.Pressman Rule: Rule 1: If the input is a range of values, then design test cases for one valid and two invalid values. Rule 2: If the input is a set of values, then design test cases for all valid value sets and two invalid values. For example: Consider any online shopping website, where every product has a specific product ID and name. Users can search either by the name of the product or by the product ID. Here, you can consider a set of products with product IDs and check for Laptops (a valid value). Rule 3: If the input is Boolean, then design test cases for both true and false values. Consider a sample web page consisting of first name, last name, and email text fields, with radio buttons for gender which use Boolean inputs. If the user clicks on any of the radio buttons, the corresponding value should be set as the input. If the user clicks on a different option, the value of the input needs to be updated with the new one (and the previously selected option should be deselected). Here, the instance of a radio button option being clicked can be treated as TRUE, and the instance where none is clicked as FALSE.
Also, two radio buttons should not be selected simultaneously; if they are, it is considered a bug. 2.Practice Method: If the input is a range of values, then divide the range into equivalent parts, test all the valid values, and ensure that two invalid values are tested as well. For example: if there is deviation within the range of values, use the Practice Method; if there is no deviation within the range of values, use the Pressman Rule. Summary Boundary Value Analysis is considered stronger than Equivalence Partitioning alone, as it exercises both valid and invalid values at and around the maximum and minimum. So, when compared with Equivalence Partitioning, Boundary Value Analysis proves to be a better choice for assuring quality.
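Both techniques from this lecture can be sketched in a few lines of Python. Note that `is_valid_amount` and `partition` below are hypothetical stand-ins for the amount-field (100-5000) and character-field (1-100) examples above, not real APIs:

```python
def boundary_values(lo, hi):
    """BVA probes for a range: min-1, min, min+1, nominal, max-1, max, max+1."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

def is_valid_amount(value, lo=100, hi=5000):
    """Hypothetical validator mirroring the amount text field example."""
    return lo <= value <= hi

def partition(text, min_len=1, max_len=100):
    """Equivalence class of an input for the 1-100 character field example."""
    n = len(text)
    if n < min_len:
        return "invalid: empty"
    if n > max_len:
        return "invalid: too long"
    return "valid"

# BVA: only the min-1 and max+1 probes (99 and 5001) should be rejected.
bva_results = {v: is_valid_amount(v) for v in boundary_values(100, 5000)}
# EP: one representative value per partition is enough.
ep_results = [partition(""), partition("a" * 50), partition("a" * 101)]
```

The two functions make the trade-off in the summary concrete: equivalence partitioning picks one value per class, while BVA adds the values at and immediately around each boundary.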
  • 40. ● Software testing techniques allow you to design better test cases. There are five primarily used techniques. ● Boundary value analysis is testing at the boundaries between partitions. ● Equivalence Class Partitioning allows you to divide a set of test conditions into partitions whose members should be treated the same. References: Celestial Systems Inc. Compiled by: Ms.Savita Mittal Lecture 14(17/10/2020) Decision Table Based Testing Software Engineering | Decision Table A decision table is a brief visual representation for specifying which actions to perform depending on given conditions. The information represented in decision tables can also be represented as decision trees or in a programming language using if-then-else and switch-case statements. A decision table is a good way to deal with different combinations of inputs and their corresponding outputs, and is also called a cause-effect table. It gets this name from a related logical diagramming technique called cause-effect graphing, which is often used to derive the decision table. Importance of Decision Table: 1. Decision tables are very helpful as a test design technique. 2. They help testers explore the effects of combinations of different inputs and other software states that must correctly implement business rules. 3. They provide a regular way of stating complex business rules, which is helpful for developers as well as for testers. 4. They assist the development process and help the developer do a better job, since testing with all combinations might be impractical. 5. A decision table is an outstanding technique used in both testing and requirements management. 6. It is a structured exercise to prepare requirements when dealing with complex business rules. 7. It is also used to model complicated logic. Decision Table in test designing: Blank Decision Table
  • 41.

CONDITIONS    STEP 1   STEP 2   STEP 3   STEP 4
Condition 1
Condition 2
Condition 3
Condition 4

Decision Table: Combinations

CONDITIONS    STEP 1   STEP 2   STEP 3   STEP 4
Condition 1     Y        Y        N        N
Condition 2     Y        N        Y        N
Condition 3     Y        N        N        Y
Condition 4     N        Y        Y        N

Advantage of Decision Table: 1. Any complex business flow can easily be converted into test scenarios & test cases using this technique. 2. Decision tables work iteratively: the table created in the first iteration is used as the input table for the next tables, and an iteration is done only if the initial table is not satisfactory. 3. They are simple to understand, and anyone can use this method to design test scenarios & test cases. 4. They provide complete coverage of test cases, which helps reduce the rework of writing test scenarios & test cases. 5. These tables guarantee that we consider every possible combination of condition values. This is known as the completeness property. Lecture 15 26/10/2020 Cause and Effect Graph in Black box Testing The cause-effect graph comes under the black box testing technique; it captures the relationship between a given result and all the factors affecting that result. It is used to write dynamic test cases, which are used when code behaves dynamically based on user input. For example, while using an email account, on entering a valid email the system accepts it but,
  • 42. when you enter an invalid email, it throws an error message. In this technique, the input conditions are treated as causes and the results of these input conditions as effects. The Cause-Effect graph technique is based on a collection of requirements and is used to determine the minimum possible number of test cases that can cover the maximum test area of the software. The main advantage of cause-effect graph testing is that it reduces test execution time and cost. This technique aims to reduce the number of test cases while still covering all necessary test cases with maximum coverage, to achieve the desired application quality. The Cause-Effect graph technique converts the requirements specification into a logical relationship between the input and output conditions by using logical operators like AND, OR and NOT. Notations used in the Cause-Effect Graph AND - E1 is an effect and C1 and C2 are the causes. If both C1 and C2 are true, then effect E1 will be true. OR - If any one cause from C1 and C2 is true, then effect E1 will be true. NOT - If cause C1 is false, then effect E1 will be true. Mutually Exclusive - When only one cause can be true. Let's try to understand this technique with some examples:
  • 43. Situation: The character in column 1 should be either A or B, and the character in column 2 should be a digit. If both columns contain appropriate values, then an update is made. If the input in column 1 is incorrect, i.e. neither A nor B, then message X will be displayed. If the input in column 2 is incorrect, i.e. not a digit, then message Y will be displayed. o A file must be updated if the character in the first column is either "A" or "B" and the character in the second column is a digit. o If the value in the first column is incorrect (the character is neither A nor B), then message X will be displayed. o If the value in the second column is incorrect (the character is not a digit), then message Y will be displayed. Now, we are going to make a Cause-Effect graph for the above situation: Causes are: o C1 - Character in column 1 is A o C2 - Character in column 1 is B o C3 - Character in column 2 is a digit Effects: o E1 - Update made (C1 OR C2) AND C3 o E2 - Display message X (NOT C1 AND NOT C2) o E3 - Display message Y (NOT C3) Where AND, OR, NOT are the logical gates. Effect E1 - Update made - The logic for the existence of effect E1 is "(C1 OR C2) AND C3". For C1 OR C2, at least one of C1 and C2 should be true. For AND C3 (the character in column 2 should be a digit), C3 must be true. In other words, for the existence of effect E1 (update made), at least one of C1 and C2, and also C3, must be true. We can see in the graph that causes C1 and C2 are connected through OR logic and effect E1 is connected with AND logic. Effect E2 - Display message X - The logic for the existence of effect E2 is "NOT C1 AND NOT C2", which means both C1 (character in column 1 should be A) and C2
  • 44. (character in column 1 should be B) should be false. In other words, for the existence of effect E2, the character in column 1 should be neither A nor B. We can see in the graph that C1 OR C2 is connected through NOT logic with effect E2. Effect E3 - Display message Y - The logic for the existence of effect E3 is "NOT C3", which means cause C3 (character in column 2 is a digit) should be false. In other words, for the existence of effect E3, the character in column 2 should not be a digit. We can see in the graph that C3 is connected through NOT logic with effect E3. So, this is the cause-effect graph for the given situation. A tester needs to convert the causes and effects into logical statements and then design the cause-effect graph. If the function gives the correct output (effect) according to the input (cause), it is considered defect free; if not, it is sent to the development team for correction. Conclusion Summary of the steps: o Draw circles for the effects and causes. o Start from an effect and then pick up what the causes of this effect are. o Draw mutually exclusive causes (exclusive causes which are directly connected via one effect and one cause) at last. o Use logic gates to draw the dynamic test cases. Lecture 16 Date: 28/10/2020 Control Flow Software Testing & Cyclomatic Complexity Last Updated: 05-08-2019 Control flow testing is a type of software testing that uses the program's control flow as a model. Control flow testing is a structural testing strategy and comes under white box testing. For control flow testing, the structure, design, code, and implementation of the software should be known to the testing team.
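Looking back at the cause-effect example of Lecture 15, its three effect expressions translate directly into Boolean logic, and test cases can then be enumerated mechanically. A minimal sketch, assuming the causes are represented as plain booleans:

```python
from itertools import product

def effects(c1, c2, c3):
    """C1: column 1 is A, C2: column 1 is B, C3: column 2 is a digit."""
    return {
        "E1_update":    (c1 or c2) and c3,
        "E2_message_X": not c1 and not c2,
        "E3_message_Y": not c3,
    }

# Enumerating all cause combinations yields the full decision table.
# (C1 and C2 are mutually exclusive, so rows with both True are unreachable.)
table = {causes: effects(*causes) for causes in product([False, True], repeat=3)}
```

Each row of `table` is a candidate test case: the causes are the inputs to enter, and the effects are the expected observable results.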
  • 45. This type of testing method is often used by developers to test their own code and implementation, as the design, code, and implementation are best known to the developers themselves. This testing method is applied with the intention of testing the logic of the code so that the user requirements are fulfilled. Its main application is testing small programs and segments of larger programs. Control Flow Testing Process: Following are the steps involved in the process of control flow testing:  Control Flow Graph Creation: From the given source code, a control flow graph is created either manually or by using software.  Coverage Target: A coverage target is defined over the control flow graph; it includes nodes, edges, paths, branches, etc.  Test Case Creation: Test cases are created using control flow graphs to cover the defined coverage target.  Test Case Execution: After the creation of test cases over the coverage target, the test cases are executed.
  • 46.  Analysis: Analyze the results to find out whether the program is error free or has defects. Control Flow Graph: A Control Flow Graph is a graphical representation of the control flow or computation performed during the execution of a program. Control flow graphs are mostly used in static analysis as well as compiler applications, as they can accurately represent the flow inside a program unit. The control flow graph was originally developed by Frances E. Allen. Cyclomatic Complexity: Cyclomatic Complexity is a quantitative measure of the number of linearly independent paths in a program. It is a software metric used to describe the complexity of a program, computed from the Control Flow Graph as M = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components (P = 1 for a single program's graph). Advantages of Control flow testing:  It detects almost half of the defects that are determined during unit testing.  It also determines almost one-third of the defects of the whole program.  It can be performed manually or automated, as the control flow graph that is used can be drawn by hand or generated by software. Disadvantages of Control flow testing:  It is difficult to find missing paths if the program and the model were produced by the same person.  It is unlikely to find spurious features.
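The formula M = E - N + 2P can be computed directly from an edge list of the control flow graph. A small sketch (the node names and the one-branch CFG are illustrative):

```python
def cyclomatic_complexity(edges, components=1):
    """M = E - N + 2P for a control flow graph given as a list of edges."""
    nodes = {u for u, v in edges} | {v for u, v in edges}
    return len(edges) - len(nodes) + 2 * components

# CFG of a single if/else: entry -> (then | else) -> exit
cfg = [("entry", "then"), ("entry", "else"), ("then", "exit"), ("else", "exit")]
# E = 4, N = 4, P = 1, so M = 4 - 4 + 2 = 2 independent paths,
# matching the intuition that one if/else needs two test paths.
```

Each additional decision point adds one to M, which is why the metric is often read as "number of branches plus one" for a single structured routine.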
  • 47.  Control Flow Graph (CFG) - The Program is converted into Flow graphs by representing the code into nodes, regions and edges.  Decision to Decision path (D-D) - The CFG can be broken into various Decision to Decision paths and then collapsed into individual nodes.  Independent (basis) paths - Independent path is a path through a DD-path graph which cannot be reproduced from other paths by other methods. Steps to Calculate the independent paths Step 1 : Draw the Flow Graph of the Function/Program under consideration as shown below: Step 2 : Determine the independent paths. Path 1: 1 - 2 - 5 - 7 Path 2: 1 - 2 - 5 - 6 - 7 Path 3: 1 - 2 - 3 - 2 - 5 - 6 - 7 Path 4: 1 - 2 - 3 - 4 - 2 - 5 - 6 - 7 Lecture 18 Unit 2 Topic: Generating Graphs From Program
  • 48. Input description: Parameters describing the desired graph, such as the number of vertices n, the number of edges m, or the edge probability p. Problem description: Generate (1) all or (2) a random or (3) the next graph satisfying the parameters. Discussion: Graph generation typically arises in constructing test data for programs. Perhaps you have two different programs that solve the same problem, and you want to see which one is faster or make sure that they always give the same answer. Another application is experimental graph theory, verifying whether a particular property is true for all graphs or how often it is true. It is much easier to conjecture the four-color theorem once you have demonstrated 4-colorings for all planar graphs on 15 vertices. A different application of graph generation arises in network design. Suppose you need to design a network linking ten machines using as few cables as possible, such that the network can survive up to two vertex failures. One approach is to test all the networks with a given number of edges until you find one that will work. For larger graphs, more heuristic approaches, like simulated annealing, will likely be necessary. Many factors complicate the problem of generating graphs. First, make sure you know what you want to generate:  Do I want labeled or unlabeled graphs? - The issue here is whether the names of the vertices matter in deciding whether two graphs are the same. In generating labeled graphs, we seek to construct all possible labelings of all possible graph topologies. In generating unlabeled graphs, we seek only one representative for each topology and ignore labelings. For example, there are only two connected unlabeled graphs on three vertices - a triangle and a simple path. However, there are four connected labeled graphs on three vertices - one triangle and three 3-vertex paths, each distinguished by their central vertex. In general, labeled graphs are much easier to generate. 
However, there are so many more of them that you quickly get swamped with isomorphic copies of the same few graphs.
  • 49.  What do I mean by random? - There are two primary models of random graphs, both of which generate graphs according to different probability distributions. The first model is parameterized by a given edge probability p. Typically, p=0.5, although smaller values can be used to construct sparser random graphs. In this model a coin is flipped for each pair of vertices x and y to decide whether to add an edge (x,y). All labeled graphs will be generated with equal probability when p=1/2. The second model is parameterized by the desired number of edges m. It selects m distinct edges uniformly at random. One way to do this is by drawing random (x,y)-pairs and creating an edge if that pair is not already in the graph. An alternative approach constructs the set of possible edges and selects a random m-subset of them, as discussed in Section . Which of these options best models your application? Probably none of them. Random graphs, by definition, have very little structure. In most applications, graphs are used to model relationships, which are often highly structured. Experiments conducted on random graphs, although interesting and easy to perform, often fail to capture what you are looking for. An alternative to random graphs is to use "organic" graphs, graphs that reflect the relationships among real-world objects. The Stanford GraphBase, discussed below, is an outstanding source of organic graphs. Further, there are many raw sources of relationships electronically available via the Internet that can be turned into interesting organic graphs with a little programming and imagination. Consider the graph defined by a set of WWW pages, with any hyperlink between two pages defining an edge. Or what about the graph implicit in railroad, subway, or airline networks, with vertices being stations and edges between two stations connected by direct service?
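Returning to the two random-graph models above: each is only a few lines of Python using the standard library (the vertex labeling 0..n-1 is an arbitrary choice of this sketch):

```python
import random
from itertools import combinations

def gnp(n, p, rng=random):
    """Edge-probability model: flip a biased coin for every vertex pair."""
    return [(x, y) for x, y in combinations(range(n), 2) if rng.random() < p]

def gnm(n, m, rng=random):
    """Fixed-edge-count model: choose m distinct edges uniformly at random."""
    return rng.sample(list(combinations(range(n), 2)), m)
```

`gnm` here uses the m-subset approach, selecting directly from the set of all possible edges, which avoids the retries needed when drawing random pairs one at a time.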
As a final example, every large computer program defines a call graph, where the vertices represent subroutines, and there is an edge (x,y) if x calls y. Two special classes of graphs have generation algorithms that have proven particularly useful in practice:  Trees - Prüfer codes provide a simple way to rank and unrank labeled trees and thus solve all the standard generation problems discussed in Section . There are exactly n^(n-2) labeled trees on n vertices, and exactly that many strings of length n-2 on the alphabet {1, ..., n}. The key to Prüfer's bijection is the observation that every tree has at least two vertices of degree 1. Thus in any labeled tree, the vertex v incident on the leaf with lowest label is well-defined. We take v to be S_1, the first character in the code. We then delete the associated leaf and repeat the procedure until only two vertices are left. This defines a unique code S for any given labeled tree that can be used to rank the tree. To go from code to tree, observe that the degree of vertex v in the tree is one more than the number of times v occurs in S. The lowest-labeled leaf will be the smallest integer missing from S, which when paired with S_1 determines the first edge of the tree. The entire tree follows by induction.
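The code-to-tree direction just described is easy to make concrete. A sketch, assuming vertices are labeled 1..n and tracking residual degrees (one plus the number of remaining occurrences in the code):

```python
def prufer_decode(code):
    """Rebuild the labeled tree on n = len(code) + 2 vertices from its Prüfer code."""
    n = len(code) + 2
    # Each vertex's degree is one more than its number of occurrences in the code.
    degree = [1] * (n + 1)            # index 0 unused; vertices are 1..n
    for v in code:
        degree[v] += 1
    edges = []
    for v in code:
        # The lowest-labeled remaining leaf pairs with the next code symbol.
        leaf = min(u for u in range(1, n + 1) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    # Exactly two degree-1 vertices remain; they form the final edge.
    u, w = (u for u in range(1, n + 1) if degree[u] == 1)
    edges.append((u, w))
    return edges

# Example: the code (4, 4) yields the star on 4 vertices centered at vertex 4.
```

Since every string over {1, ..., n} of length n-2 decodes to a distinct tree, iterating over all such strings generates all n^(n-2) labeled trees.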
  • 50. Algorithms for efficiently generating unlabeled rooted trees are presented in the implementation section below.  Fixed degree sequence graphs - The degree sequence of a graph G is an integer partition p = (d_1, ..., d_n), where d_i is the degree of the ith highest-degree vertex of G. Since each edge contributes to the degree of two vertices, p is a partition of 2m, where m is the number of edges in G. Not all partitions correspond to degree sequences of graphs. However, there is a recursive construction that constructs a graph with a given degree sequence if one exists. If a partition is realizable, the highest-degree vertex can be connected to the next d_1 highest-degree vertices in G, the vertices corresponding to parts d_2, ..., d_(d_1+1). Deleting d_1 and decrementing d_2, ..., d_(d_1+1) yields a smaller partition, which we recur on. If we terminate without ever creating negative numbers, the partition was realizable. Since we always connect the highest-degree vertex to other high-degree vertices, it is important to reorder the parts of the partition by size after each iteration. Although this construction is deterministic, a semirandom collection of graphs realizing this degree sequence can be generated from G using edge-flipping operations. Suppose edges (x,y) and (w,z) are in G, but (x,w) and (y,z) are not. Exchanging these pairs of edges creates a different (not necessarily connected) graph without changing the degrees of any vertex. Implementations: The Stanford GraphBase [Knu94] is perhaps most useful as an instance generator for constructing a wide variety of graphs to serve as test data for other programs. It incorporates graphs derived from interactions of characters in famous novels, Roget's Thesaurus, the Mona Lisa, expander graphs, and the economy of the United States. It also contains routines for generating binary trees, graph products, line graphs, and other operations on basic graphs.
Finally, because of its machine-independent random number generators, it provides a way to construct random graphs that can be reconstructed elsewhere, making them perfect for experimental comparisons of algorithms. See Section for additional information. Combinatorica [Ski90] provides Mathematica generators for basic graphs such as stars, wheels, complete graphs, random graphs and trees, and graphs with a given degree sequence. Further, it includes operations to construct more interesting graphs from these, including join, product, and line graph. Graffiti [Faj87], a collection of almost 200 graphs of graph-theoretic interest, is available in Combinatorica format. See Section . The graph isomorphism testing program nauty (see Section ), by Brendan D. McKay of the Australian National University, has been used to generate catalogs of all nonisomorphic graphs with up to 11 vertices. This extension to nauty, named makeg, can be obtained by anonymous ftp from bellatrix.anu.edu.au (150.203.23.14) in the directory pub/nauty19.
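The recursive degree-sequence construction described earlier (connect the highest-degree vertex, decrement the touched parts, re-sort, and recur) is essentially the Havel-Hakimi algorithm. A minimal sketch:

```python
def realize_degree_sequence(degrees):
    """Build an edge list realizing the given degree sequence, or return
    None if the sequence is not realizable (Havel-Hakimi construction)."""
    # Track (residual degree, vertex label) pairs, highest degree first.
    rem = sorted(((d, v) for v, d in enumerate(degrees)), reverse=True)
    edges = []
    while rem and rem[0][0] > 0:
        d, v = rem.pop(0)
        if d > len(rem):
            return None                      # not enough vertices left to connect
        for i in range(d):                   # connect v to the next d highest-degree vertices
            if rem[i][0] == 0:
                return None                  # would drive a part negative
            rem[i] = (rem[i][0] - 1, rem[i][1])
            edges.append((v, rem[i][1]))
        rem.sort(reverse=True)               # reorder parts by size after each round
    return edges

# (2, 2, 2) is realized by a triangle; (3, 1, 1) is not realizable.
```

From any graph this produces, the edge-flipping operation mentioned above can then be applied repeatedly to obtain a semirandom collection of graphs with the same degree sequence.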