Test Case Prioritization Techniques
www.kanoah.com
ABOUT US
Kanoah is an innovative company
providing ground-breaking solutions to
software testing professionals on the
Atlassian JIRA platform
About Kanoah Tests
Kanoah Tests is a full-featured test management solution, integrated seamlessly into
JIRA with the same look and feel. No need to learn or switch between different
applications
Coordinate all test management activities including test planning, authoring,
execution, tracking and reporting from a central location
Kanoah Tests enables you to track testing progress and quality to foster
collaboration and visibility across traditional and agile teams
Get real-time insights into your testing progress with out of the box
reports
Easily integrate your automated tests and submit test results with Kanoah
Tests’ powerful REST API or use the API to automate many areas of the
application
After looking for several years at plugins for test management we finally found
Kanoah Tests. The other solutions were either too complex, didn't integrate well
with Jira, or were focused on a single project. Kanoah Tests proved to be an
elegant solution that allowed linking between any project. Kanoah has been
very responsive to feedback, requests, and suggestions, as well as bugs. The customer
service is awesome. I'd highly recommend Kanoah Tests to teams of any size
looking to simplify test management and consolidate tools.
Don
Pierce
Robert
Murhamer
Liked Kanoah from the moment I discovered it. Integrates nicely with JIRA and
especially with Agile. Test cases can be authored right from the story level, and it has
all the other functionality a Test Case Management solution would need to have,
ranging from creating test plans, executing test cases, and importing test cases to an API
for automation, and so on. Additionally, the team at Kanoah is amazing and responds to
any question very quickly. It wasn't hard to convince my management to purchase
Kanoah. I will highly recommend Kanoah to anybody.
Zour
Brosh
I just started working with Kanoah and I am impressed by how simple it is to manage tests
without the endless unused features found in most test management tools, while still
getting the needed functionality and results. The integration with Jira is a great
working solution that enables testing and development to be shared in a simple way in
one system. I recommend Kanoah for test management. It would help to make
Kanoah as customizable as possible, like Jira, to match each group's
methodology.
For more reviews, visit: https://marketplace.atlassian.com/plugins/com.kanoah.test-manager/server/reviews
Reviews
Key Features
Native seamless integration with JIRA
No need to learn or switch between
different applications
Perfect for agile & traditional testing
approaches
Manage, organize and track all your testing
efforts in a central place
Reuse test cases across your projects
Powerful REST API
Establish clear traceability between
requirements, test cases, and defects
Execute test cases and track results that
matter
Get real-time insights into your testing
progress with out of the box reports
Live statistics accessible to your entire
team
Benefits for the testers
No need to learn or switch between
different applications
Reuse test cases across projects for
regression
Link test cases to requirements and
defects
API support for automated efforts
Benefits for the teams
Informed decisions based on real-time
insights
End-to-end traceability and impact
analysis
Centralized Test Management
Save time and increase productivity
Why choose Kanoah Tests
1. Testing right inside JIRA
Coordinate all test management activities right inside JIRA
2. Make informed decisions
Take advantage of the built-in reports to track the results and measure progress
3. Stellar support
Kanoah Tests users receive priority support, even during trials
Test Case Prioritization Techniques
What is Test Case Prioritization?
Even with a reduced number of executable test cases, it must be assured that as many
critical faults as possible are found. This means test cases must be prioritized.
Prioritization rule: Test cases should be prioritized so that if testing ends prematurely,
the best possible test result at that point in time is achieved.
The most important test cases first: Prioritization also ensures that the most important
test cases are executed first. This way, important problems can be found early.
Test case prioritization techniques schedule test cases for execution in an order that
attempts to increase their effectiveness in meeting some performance goal.
An improved rate of fault detection during testing can provide faster feedback on the
system under test, and let software engineers begin correcting faults earlier than might
otherwise be possible. [1]
Some aspects to consider
The usage frequency of a function or the probability of failure in software use. If certain
functions of the system are used often and they contain a fault, then the probability of
this fault leading to a failure is high. Thus, test cases for this function should have a higher
priority than test cases for a less-often-used function.
Failure risk. Risk is the combination (mathematical product) of severity and failure
probability. The severity is the expected damage. Such risks may be, for example, that
the business of the customer using the software is impaired, thus leading to financial
losses for the customer. Tests that may find failures with a high risk get higher priority
than tests that may find failures with low risks.
The visibility of a failure for the end user is a further criterion for prioritization of test
cases. This is especially important in interactive systems. For example, a user of a city
information service will feel unsafe if there are problems in the user interface and will lose
confidence in the remaining information output.
Test cases can be chosen depending on the priority of the requirements. The different
functions delivered by a system have different importance for the customer. The
customer may be able to accept the loss of some of the functionality if it behaves
wrongly. For other parts, this may not be possible. [1]
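The failure-risk criterion above can be made concrete with a small calculation. The sketch below is a minimal illustration, not something prescribed by [1]: it treats risk as the product of severity and failure probability and orders test cases by descending risk. The test case names, severity scale, and probability values are assumptions for the example.

```python
# Minimal sketch: risk-based ordering of test cases (risk = severity x failure probability).
# The test case names, severity scale (1-10), and probabilities are illustrative assumptions.
test_cases = {
    "checkout_payment": {"severity": 9, "failure_probability": 0.30},
    "profile_avatar_upload": {"severity": 3, "failure_probability": 0.50},
    "login": {"severity": 8, "failure_probability": 0.10},
}

def risk(attrs):
    """Risk as the mathematical product of expected damage and failure probability."""
    return attrs["severity"] * attrs["failure_probability"]

prioritized = sorted(test_cases, key=lambda name: risk(test_cases[name]), reverse=True)
print(prioritized)  # ['checkout_payment', 'profile_avatar_upload', 'login']
```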
Some aspects to consider
Besides the functional requirements, the quality characteristics may have differing
importance for the customer. Correct implementation of the important quality
characteristics must be tested. Test cases for verifying conformance to required quality
characteristics get a high priority.
Prioritization can also be done from the perspective of development or system
architecture. Components that lead to severe consequences when they fail (for example,
a crash of the system) should be tested especially intensively.
Complexity of the individual components and system parts can be used to prioritize test
cases. Complex program parts should be tested more intensively because developers
probably introduced more faults. However, it may happen that program parts seen as
easy contain many faults because development was not done with the necessary care.
Therefore, prioritization in this area should be based on experience data from earlier
projects run within the organization.
Failures having a high project risk should be found early. These are failures that require
considerable correction work that in turn requires special resources and leads to
considerable delays of the project. [1]
More factors to consider
Mission-critical components (take advice from Requirements/Customer).
Complex features (take advice from development team/Requirements).
Where failures would be most visible (take advice from development team).
Features that undergo frequent changes (take advice from development team).
Areas with past histories of problems (take advice from development team).
Areas with complex coding (where were the developers most challenged).
Areas of most frequent use.
Major functionalities rather than going into detail.
New functionality. [3]
Why is it important?
This subjective and difficult part of testing is about risk management, test planning, cost,
value, and being analytical about which tests to run in the context of your specific
project.
Testing is one of the most critically important phases of the software development life
cycle and consumes significant resources in terms of effort, time and cost.
For complex applications it is impractical or impossible to exhaustively test every
scenario and all data variables.
Prioritizing test cases based on perceived risks and customer-expressed needs can
efficiently reduce the number of test cases necessary for comprehensive testing of a
software application, to meet project milestones while ensuring customer’s requirements
and expectations have been met.
Running all of the test cases in a test suite, however, can require a large amount of effort.
For this reason, researchers have considered various techniques for reducing the cost of
regression testing, including regression test selection and test-suite minimization
techniques. [2]
Why is it important?
Software engineers often save the test suites they develop for their software so that they
can reuse those test suites later as the software evolves.
Regression test selection techniques reduce the cost of regression testing by selecting
an appropriate subset of the existing test suite, based on information about the program,
modified version, and test suite.
Test suite minimization techniques lower costs by reducing a test suite to a minimal
subset that maintains equivalent coverage of the original test suite with respect to a
particular test adequacy criterion.
Test case prioritization techniques provide another method for assisting with regression
testing. These techniques allow testers to order their test cases so that those test cases
with the highest priority, according to some criterion, are executed earlier in the testing
process than lower priority test cases. [3]
With this technique, the test cases are prioritized and scheduled in an order that attempts
to maximize some objective function. To decide the priority of the test cases, the relevant
factors are chosen depending on the need, and then priority is assigned to the test
cases. Test case prioritization thus provides a way to schedule and run the test cases with
the highest priority first, so that faults are detected earlier. [5]
Test Case Prioritization Techniques
Prioritization for Rate of Fault Detection
This goal is described as one of improving our test suite's rate of fault detection.
An improved rate of fault detection during regression testing can let software engineers
begin their debugging activities earlier than might otherwise be possible, speeding the
release of the software.
An improved rate of fault detection can also provide faster feedback on the system under
test, and provide earlier evidence when quality goals have not been met, allowing
strategic decisions about release schedules to be made earlier than might otherwise be
possible. [3]
Comparator Techniques
Random ordering. As an experimental control, one prioritization “technique” that we
consider is the random ordering of the test cases in the test suite.
Optimal ordering. For further comparison, we also consider an optimal ordering of the
test cases in the test suite. We can obtain such an ordering in our experiments because
we use programs with known faults and can determine which faults each test case
exposes: this lets us determine the ordering of test cases that maximizes a test suite’s
rate of fault detection. In practice, this is not a viable technique, but it provides an upper
bound on the effectiveness of the other heuristics that we consider. [4]
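To make the two baselines concrete, here is a rough sketch, not the authors' code: the test-to-fault mapping is invented, the random ordering is a plain shuffle, and the optimal ordering is only approximated here with a greedy pick of the test exposing the most not-yet-exposed faults, which is possible only in an experimental setting where the faults each test exposes are known.

```python
import random

# Hypothetical mapping of test case -> set of known faults it exposes (experimental setting only).
exposes = {
    "t1": {"f1"},
    "t2": {"f1", "f3"},
    "t3": {"f2"},
    "t4": set(),
}

# Random ordering: the experimental control.
random_order = list(exposes)
random.shuffle(random_order)

# Greedy approximation of the optimal ordering: repeatedly pick the test case
# exposing the most faults not yet exposed.
remaining, unexposed, optimal_order = dict(exposes), set().union(*exposes.values()), []
while remaining:
    best = max(remaining, key=lambda t: len(remaining[t] & unexposed))
    optimal_order.append(best)
    unexposed -= remaining.pop(best)

print(random_order, optimal_order)
```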
Statement Level Techniques
Total statement coverage prioritization. Using program instrumentation, we can measure
the coverage of statements in a program by its test cases. We can then prioritize test
cases in terms of the total number of statements they cover by sorting them in order of
coverage achieved. If multiple test cases cover the same number of statements, we can
order them pseudorandomly.
Additional statement coverage prioritization. Similar to the previous technique, but it relies
on feedback about coverage attained so far in testing to focus on statements not yet
covered. To do this, the technique greedily selects the test case that yields the greatest
additional statement coverage, then adjusts the coverage data about subsequent test cases
to indicate their coverage of statements not yet covered, and then iterates until all
statements covered by at least one test case have been covered. When all statements have
been covered, the remaining test cases are prioritized (recursively) by resetting all
statements to “not covered” and reapplying additional statement coverage prioritization to
the remaining test cases. [4]
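A compact sketch of both statement-level coverage techniques follows. It is an illustrative implementation, not the authors' tooling, and it assumes the per-test statement coverage sets have already been gathered through instrumentation.

```python
# coverage: test case -> set of statement ids it covers (normally gathered via instrumentation).
coverage = {
    "tA": {1, 2, 3},
    "tB": {2, 3, 4, 5},
    "tC": {5},
    "tD": {1, 2},
}

# Total statement coverage prioritization: sort by total statements covered.
total_order = sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

# Additional statement coverage prioritization: greedily pick the test adding the
# most not-yet-covered statements; when nothing new can be added, reset the
# covered set and continue with the remaining tests.
def additional_order(cov):
    remaining, covered, order = dict(cov), set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            if not covered:                    # remaining tests cover nothing at all
                order.extend(remaining)
                break
            covered = set()                    # reset to "not covered", then recurse
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(total_order)                  # ['tB', 'tA', 'tD', 'tC']
print(additional_order(coverage))   # ['tB', 'tA', 'tD', 'tC'] for this data
```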
Statement Level Techniques
Total Fault Exposing Potential (FEP) prioritization. The ability of a fault to be exposed by a
test case depends not only on whether the test case executes a faulty statement, but
also on the probability that a fault in that statement will cause a failure for that test case.
Any practical determination of this probability must be an approximation, but we wish to
know whether such an approximation might yield a prioritization technique superior, in
terms of rate of fault detection, to techniques based solely on code coverage.
Additional Fault Exposing Potential (FEP) prioritization. Similar to the extensions made to
total statement coverage prioritization to produce additional statement coverage
prioritization, we incorporate feedback into total FEP prioritization to create additional
fault-exposing-potential (FEP) prioritization. In additional FEP prioritization, after
selecting a test case t, we lower the award values for all other test cases that exercise
statements exercised by t to reflect our increased confidence in the correctness of those
statements; we then select a next test case, repeating this process until all test cases
have been ordered. This approach lets us account for the fact that additional executions
of a statement may be less valuable than initial executions. [4]
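As a rough sketch of total FEP prioritization (the FEP values below are invented placeholders; in practice they would come from an approximation such as mutation analysis), each test case receives an award value equal to the sum of the FEP estimates of the statements it executes, and test cases are sorted by descending award value.

```python
# fep[(statement, test)] -> estimated probability that a fault in that statement
# would cause that test case to fail. Values here are illustrative placeholders.
fep = {
    (1, "tA"): 0.9, (2, "tA"): 0.2,
    (1, "tB"): 0.1, (3, "tB"): 0.4, (4, "tB"): 0.7,
    (2, "tC"): 0.5,
}

tests = {t for (_, t) in fep}
# Total FEP prioritization: award value = sum of FEP over the statements a test executes.
award = {t: sum(v for (_, tt), v in fep.items() if tt == t) for t in tests}
order = sorted(tests, key=award.get, reverse=True)
print(order)  # ['tB', 'tA', 'tC']
```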
Function Level Techniques
Total function coverage prioritization. Analogous to total statement coverage
prioritization, but operating at the level of functions, total function coverage prioritization
prioritizes test cases by sorting them in order of the total number of functions they
execute. The technique has a worst-case cost analogous to that of statement coverage;
however, the number of functions in a program is typically much smaller than the
number of statements. Moreover, the process of collecting function-level
traces is less expensive and less intrusive than the process of collecting statement-level
traces. Thus, total function coverage prioritization promises to be cheaper than total
statement coverage prioritization.
Additional function coverage prioritization. Analogous to additional statement coverage
prioritization, but operating at the level of functions, this technique incorporates
feedback into total function coverage prioritization, prioritizing test cases (greedily)
according to the total number of additional functions they cover. When all functions
have been covered, we reset coverage vectors and reapply additional function coverage
prioritization to the remaining test cases.
Total FEP (function level) prioritization. This technique is analogous to total FEP
prioritization at the statement level. To translate that technique to the function level, we
required a function-level approximation of fault-exposing potential. After obtaining award
values for test cases, we then apply the same prioritization algorithm as for total FEP
(statement level) prioritization, substituting functions for statements. [4]
Function Level Techniques
Total fault index (FI) prioritization. In the context of regression testing, we are also
interested in the potential influence of our modifications on fault proneness; that is, in
the potential of modifications to lead to regression faults. This requires that our fault
proneness measure account for attributes of software change. We can account for the
association of changes with fault proneness by prioritizing test cases based on this
measure. For this technique, as a metric of fault proneness, we use a fault index which, in
previous studies, has proven effective at providing fault proneness estimates. The fault
index generation process involves the following steps:
First, a set of measurable attributes is obtained from each function in the program.
Second, the metrics are standardized using the corresponding metrics of a baseline
version (which later facilitates the comparison across versions).
Third, principal components analysis reduces the set of standardized metrics to a smaller
set of domain values, simplifying the dimensionality of the problem and removing the
collinearity among the metrics.
Finally, the domain values weighted by their variance are combined into a linear function
to generate one fault index per function in the program. [4]
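The four-step fault index pipeline can be sketched with common numerical tooling. The example below only illustrates the flow (standardize against a baseline version, reduce with principal components, combine domain values weighted by variance); the metric names and values are invented, and [4] does not tie the process to any particular library.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: functions of the current version; columns: measurable attributes
# (e.g. size, cyclomatic complexity, nesting depth). Values are illustrative.
current  = np.array([[120, 14, 3], [40, 3, 1], [200, 25, 5], [60, 6, 2]], dtype=float)
baseline = np.array([[100, 10, 3], [45, 4, 1], [150, 20, 4], [60, 6, 2]], dtype=float)

# Step 2: standardize current metrics using the baseline version's mean and std deviation.
std = baseline.std(axis=0)
std[std == 0] = 1.0                      # guard against constant metrics
z = (current - baseline.mean(axis=0)) / std

# Step 3: principal components analysis reduces the metrics to a few domain values.
pca = PCA(n_components=2)
domain_values = pca.fit_transform(z)

# Step 4: combine the domain values, weighted by the variance each component explains,
# into one fault index per function.
fault_index = domain_values @ pca.explained_variance_ratio_
print(fault_index)   # one FI value per function; higher suggests more fault prone
```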
Function Level Techniques
Additional fault-index (FI) prioritization. Additional fault index coverage prioritization is
accomplished in a manner similar to additional function coverage prioritization, by
incorporating feedback into total fault index coverage prioritization. The set of functions
that have been covered by previously executed test cases is maintained. If this set
contains all functions (more precisely, if no test case adds anything to this coverage), the
set is reinitialized to the empty set. To find the next best test case, we compute, for each test case,
the sum of the fault indexes for each function that test case executes, except for
functions in the set of covered functions. The test case for which this sum is the greatest
wins. This process is repeated until all test cases have been prioritized.
Total FI with FEP coverage prioritization. We hypothesized that, by utilizing both an
estimate of fault exposing potential and an estimate of fault proneness, we might be able
to achieve a superior rate of fault detection. There are many ways in which one could
combine these estimates; in this work, for each function, we calculate the product of the
FI and FEP estimates for that function. We then calculate, for each test case, the sum of
these products across the functions executed by that test case. We order test cases in
descending order of that sum, resolving ties pseudorandomly.
Additional FEP (function level) prioritization. This technique incorporates feedback into
the total FEP (function level) technique in the same manner used for the total FEP
(statement level) technique. [4]
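A minimal sketch of the total FI with FEP combination, using invented per-function estimates and per-test traces: for each test case, sum the product of the FI and FEP estimates over the functions it executes, then order test cases by descending sum, resolving ties pseudorandomly.

```python
import random

# Illustrative per-function estimates and per-test function traces.
fi  = {"parse": 2.1, "render": 0.4, "save": 1.3}
fep = {"parse": 0.6, "render": 0.2, "save": 0.5}
executes = {"t1": ["parse", "render"], "t2": ["save"], "t3": ["parse", "save"]}

def score(test):
    """Sum of FI(f) * FEP(f) over the functions executed by this test case."""
    return sum(fi[f] * fep[f] for f in executes[test])

# Shuffle first so that ties are resolved pseudorandomly by the stable sort.
tests = list(executes)
random.shuffle(tests)
order = sorted(tests, key=score, reverse=True)
print(order)   # t3 (1.91) before t1 (1.34) before t2 (0.65)
```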
Function Level Techniques
Additional FI with FEP coverage prioritization. We incorporate feedback into the previous
technique to yield an “additional” variant. We again calculate, for each function, the
product of its FI and FEP estimates. Next, we repeatedly calculate, for each test case not
yet prioritized, the sum of these products across the functions executed by that test case,
select the test case with the highest such sum, and reset the values for functions covered
by that test case to zero, until all values are zero. If test cases remain, we reset the values
for functions and repeat the process on the remaining test cases.
Total DIFF prioritization. With DIFF-based techniques, for each function present in both P
and P′, we measure the degree of change by adding the number of lines listed as inserted,
deleted, or changed in the output of the Unix diff command applied to P and P′. The
wide availability of “diff” tools makes this approach easily accessible to practitioners.
Additional DIFF prioritization. Additional DIFF prioritization is analogous to additional FI
prioritization, except that it relies on modification data derived from diff.
Total DIFF with FEP prioritization. Total DIFF with FEP prioritization is analogous to total FI
with FEP prioritization, except that it relies on modification data derived from diff.
Additional DIFF with FEP prioritization. Additional DIFF with FEP prioritization is analogous
to additional FI with FEP prioritization, except that it relies on modification data derived
from diff. [4]
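The degree-of-change measure can be approximated without the Unix diff binary; the sketch below uses Python's difflib on the old and new body of a single function and counts inserted and deleted lines, which should be comparable to the DIFF measure described above. The function bodies are invented.

```python
import difflib

def diff_size(old_lines, new_lines):
    """Approximate the DIFF measure: number of inserted or deleted lines between P and P'."""
    diff = difflib.unified_diff(old_lines, new_lines, lineterm="")
    return sum(1 for line in diff
               if (line.startswith("+") or line.startswith("-"))
               and not line.startswith(("+++", "---")))

# Old (P) and modified (P') bodies of one function, as lists of lines (illustrative).
old = ["def total(xs):", "    s = 0", "    for x in xs:", "        s += x", "    return s"]
new = ["def total(xs):", "    return sum(xs)"]

print(diff_size(old, new))   # degree of change for this function
```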
Prioritization Using Fault Severity
Priorities are assigned to test cases depending upon the priority of the requirement to be
tested.
The requirement to be tested first is assigned the higher weight, and the test cases
covering that requirement are given higher priority of execution.
In this approach, the requirements are weighted based on fault severity: the more often a
fault can occur in a piece of code, the sooner that code needs to be tested, and so it is
given the higher weight.
This is done to calculate the Total Severity of Faults Detected (TSFD), which consists of
the summation of the severity measures of all the faults identified.
This is how the test cases are prioritized: first the requirements are prioritized, then the
test cases covering each requirement are mapped to it, then the test cases are executed
according to the assigned priority, and the results are analyzed based on fault severity. [5]
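A small sketch of the TSFD calculation described above, with invented fault data: TSFD is the sum of the severity measures of all identified faults, and each requirement's share of that total is used here, as an assumption for illustration, to rank requirements and, through them, their test cases.

```python
# Faults identified so far, each mapped to a requirement and a severity measure (SM).
# All names and values are illustrative.
faults = [
    {"requirement": "R1", "sm": 9},
    {"requirement": "R1", "sm": 6},
    {"requirement": "R2", "sm": 4},
    {"requirement": "R3", "sm": 2},
]

# Total Severity of Faults Detected: sum of the severity measures of all faults.
tsfd = sum(f["sm"] for f in faults)

# Per-requirement share of TSFD (used here, as an assumption, to rank requirements;
# test cases mapped to a higher-ranked requirement get executed first).
share = {}
for f in faults:
    share[f["requirement"]] = share.get(f["requirement"], 0) + f["sm"] / tsfd

ranked_requirements = sorted(share, key=share.get, reverse=True)
print(tsfd, ranked_requirements)   # 21 ['R1', 'R2', 'R3']
```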
Prioritization Using Fault Severity
Business Value Measure. This is the factor by which the requirements are ranked
according to their importance. The most critical requirement is assigned the highest
number (10) and the least important requirement is assigned the lowest number.
Project Change Volatility. This factor depends upon how often the customer modifies the
project requirements during the software development cycle. The biggest causes of
project failure happen to be a lack of user input and changing or incomplete
requirements.
Development Complexity. Development effort, the technology used for development,
environmental constraints and the time consumed or required decide the complexity of
the development phase.
Fault Proneness of Requirements. This is the most direct factor for assigning weight to a
requirement. It considers those requirements which are error prone according
to historical data, such as requirement failures reported by the customer. [5]
Prioritization Using Fault Severity
On the basis of these factors, a weight is assigned to each requirement; from this, the
weighted prioritization factor that measures the importance of testing a requirement
earlier is calculated, and the test cases are assigned priority. This is the first
step of the proposed technique.
The second step is to assign a severity measure (SM) to each fault. The severity
measure varies as below:
Complex (Severity 1): SM value of 9-10.
Moderate (Severity 2): SM value of 6.
Low (Severity 3): SM value of 4.
Very Low (Severity 4): SM value of 2. [5]
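Pulling the two steps together, the sketch below combines the four factors from the previous slide into a per-requirement weight and orders test cases by that weight multiplied by an assumed severity measure. The equal weighting of the factors, the SM values, and the test-to-requirement mapping are illustrative assumptions; [5] describes the factors, but this exact combination is not prescribed there.

```python
# Step 1: per-requirement factor scores on a 1-10 scale (illustrative values).
# BV = business value, CV = project change volatility, DC = development complexity,
# FP = fault proneness of the requirement.
requirements = {
    "R1": {"BV": 10, "CV": 7, "DC": 8, "FP": 9},
    "R2": {"BV": 5,  "CV": 3, "DC": 4, "FP": 2},
    "R3": {"BV": 8,  "CV": 6, "DC": 3, "FP": 5},
}

# Assumed weighted prioritization factor: a plain average of the four factors.
def wpf(scores):
    return sum(scores.values()) / len(scores)

# Step 2 inputs: test cases mapped to the requirement they cover, plus the severity
# measure (SM) of the worst fault they are expected to reveal (9-10, 6, 4 or 2).
tests = {
    "tc_checkout": {"requirement": "R1", "sm": 10},
    "tc_report":   {"requirement": "R3", "sm": 6},
    "tc_profile":  {"requirement": "R2", "sm": 4},
}

def priority(tc):
    attrs = tests[tc]
    return wpf(requirements[attrs["requirement"]]) * attrs["sm"]

for tc in sorted(tests, key=priority, reverse=True):
    print(tc, round(priority(tc), 1))   # tc_checkout 85.0, tc_report 33.0, tc_profile 14.0
```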
References
[1] Software Testing Foundations by Andreas Spillner, Tilo Linz and Hans Schaefer.
[2] http://www.seguetech.com/blog/2012/08/10/importance-test-case-prioritization
[3] Test Case Prioritization Technical Report GIT-99-28, College of Computing, Georgia
Institute of Technology
[4] Test Case Prioritization: A Family of Empirical Studies by Sebastian Elbaum, Alexey G.
Malishevsky and Gregg Rothermel.
[5] Various Techniques Used For Prioritization of Test Cases by Ekta Khandelwal and
Madhulika Bhadauria.
Next Steps
Getting Started Guide
GETTING STARTED RESOURCES FREE TRIAL
Documentation
Support
Tutorials
What we're reading this week
Blog
Start your 30-day free trial now
