International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Comparison between Test-Driven Development and Conventional Development: A Ca... (IJERA Editor)
In software engineering, a variety of techniques and approaches are used to produce reliable software. Software quality relies heavily on software testing; however, not all developers give the testing stage the attention it needs, which has degraded software quality and increased cost. To avoid these issues, researchers have devoted considerable effort to finding the technique that best guarantees software quality. In this paper we explore the effectiveness of building test cases with the Test-Driven Development (TDD) technique compared with the conventional technique (Test-Last). The comparison measures the effectiveness of test cases with regard to number of defects, code coverage, and test case development duration under TDD and Test-Last. The results have been analyzed and presented to identify the better technique. On average, the effectiveness of test cases with respect to the selected quality factors was better under TDD than under the conventional Test-Last technique. TDD and conventional testing achieved nearly the same code coverage. Moreover, the number of defects found and the time spent developing test cases were higher in TDD than in Test-Last. The results suggest contributions that could be gained from applying TDD in the software industry: in particular, using TDD as the development technique in young companies can produce high-quality software in less time.
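The TDD cycle the abstract evaluates can be illustrated with a minimal sketch using Python's unittest; the function and values are invented for illustration. The test is written first (and initially fails), then just enough code is written to make it pass, then the code is cleaned up while the tests stay green.

```python
import unittest

# Step 1 (red): the test is written before any implementation exists.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_price(self):
        self.assertAlmostEqual(apply_discount(50.0, 0), 50.0)

# Step 2 (green): the minimal code that makes the tests pass.
def apply_discount(price, percent):
    return price * (1 - percent / 100)

# Step 3 (refactor): restructure freely; the tests guard external behavior.
if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Under Test-Last, by contrast, `apply_discount` would be written first and the two tests added afterwards.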
AN APPROACH FOR TEST CASE PRIORITIZATION BASED UPON VARYING REQUIREMENTS (IJCSEA Journal)
Software testing is performed continuously by the development team throughout the software life cycle, with the aim of detecting faults as early as possible. Regression testing is the most suitable technique for this, and it can involve running a large number of test cases; it is therefore preferable to prioritize test cases based on certain criteria. In this paper a prioritization strategy is proposed that orders test cases based on requirements analysis, so that if requirements vary in the future, the software can be modified without affecting its remaining parts. The proposed system improves the testing process and its efficiency with respect to quality, cost, effort, and user satisfaction, and the proposed method is evaluated with the help of a performance evaluation metric.
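One common way to realize requirement-based prioritization of this kind is to score each test case's requirement on weighted factors and sort by the score. The factor names and weights below are illustrative assumptions, not the paper's actual criteria:

```python
# Illustrative requirement-based prioritization: each test case maps to a
# requirement scored on customer priority, volatility, and implementation
# complexity (each rated 1-10). The weights are arbitrary examples.
WEIGHTS = {"customer_priority": 0.5, "volatility": 0.3, "complexity": 0.2}

def requirement_score(factors):
    # Weighted sum of the requirement's factor ratings.
    return sum(WEIGHTS[k] * v for k, v in factors.items())

def prioritize(test_cases):
    # Higher weighted score -> earlier execution.
    return sorted(test_cases, key=lambda tc: requirement_score(tc["factors"]),
                  reverse=True)

tests = [
    {"id": "TC1", "factors": {"customer_priority": 3, "volatility": 9, "complexity": 4}},
    {"id": "TC2", "factors": {"customer_priority": 9, "volatility": 5, "complexity": 7}},
    {"id": "TC3", "factors": {"customer_priority": 6, "volatility": 2, "complexity": 2}},
]
print([tc["id"] for tc in prioritize(tests)])  # -> ['TC2', 'TC1', 'TC3']
```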
TEST CASE PRIORITIZATION FOR OPTIMIZING A REGRESSION TEST (ijfcstjournal)
Regression testing ensures that upgrading software, whether to add new features or to fix bugs, does not break previously working functionality. Whenever software is upgraded or modified, a set of test cases is run against each of its functions to confirm that the change does not affect other parts of the software that previously ran flawlessly. Achieving this requires re-running all existing test cases, and possibly creating new ones. Re-executing every test case for all functions of a given software system is not feasible: when the number of test cases is large, doing so demands a great deal of time and effort. This problem can be addressed by prioritizing test cases. Test case prioritization reorders the sequence in which test cases are executed so that high-priority test cases run first and uncover the maximum number of faults early. In this paper we propose an optimized test case prioritization technique using Ant Colony Optimization (ACO) to reduce the cost, effort, and time of regression testing while uncovering the maximum number of faults. A comparison of techniques such as Retest All, Test Case Minimization, Test Case Prioritization, Random Test Case Selection, and Test Case Prioritization using ACO is also presented.
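A greatly simplified sketch of the ACO idea on a toy test-fault matrix: each test case carries a pheromone value, ants build orderings by pheromone-weighted random choice, and orderings are scored by APFD (average percentage of faults detected). The data, parameters, and single-pheromone-per-test design are illustrative; the paper's actual algorithm may differ.

```python
import random

# Which faults each test case detects (toy data; in practice this comes from
# fault history or mutation analysis).
detects = {
    "T1": {1},
    "T2": {2, 3},
    "T3": {1, 2, 3, 4},
    "T4": {4},
}
tests = list(detects)

def apfd(order, n_faults=4):
    # Average Percentage of Faults Detected for a test ordering.
    first = {}
    for pos, t in enumerate(order, start=1):
        for f in detects[t]:
            first.setdefault(f, pos)  # position of first test revealing fault f
    n = len(order)
    return 1 - sum(first.values()) / (n * n_faults) + 1 / (2 * n)

def aco_prioritize(iterations=50, ants=10, evaporation=0.1, seed=1):
    random.seed(seed)
    pheromone = {t: 1.0 for t in tests}
    best, best_score = None, -1.0
    for _ in range(iterations):
        for _ in range(ants):
            # Each ant builds an ordering, choosing tests with probability
            # proportional to their pheromone.
            remaining, order = list(tests), []
            while remaining:
                chosen = random.choices(remaining,
                                        weights=[pheromone[t] for t in remaining])[0]
                order.append(chosen)
                remaining.remove(chosen)
            score = apfd(order)
            if score > best_score:
                best, best_score = order, score
        # Evaporate, then reinforce tests that appear early in the best ordering.
        for t in tests:
            pheromone[t] *= 1 - evaporation
        for pos, t in enumerate(best):
            pheromone[t] += (len(best) - pos) * best_score / len(best)
    return best, best_score

order, score = aco_prioritize()
print(order, score)  # T3, which alone reveals every fault, ends up first (APFD 0.875)
```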
This is chapter 2 of the ISTQB Advanced Test Manager certification. This presentation helps aspirants understand and prepare the content of the certification.
ANALYSIS OF SOFTWARE QUALITY USING SOFTWARE METRICS (ijcsa)
Software metrics are directly linked to measurement in software engineering. Correct measurement is a precondition in any engineering field, and software engineering is no exception: as the size and complexity of software increase, manual inspection of software becomes harder. Most software engineers worry about the quality of software and about how to measure and enhance it. The overall objective of this study was to assess and analyze the software metrics used to measure the software product and process.
In this study, the researcher used a collection of literature from various electronic databases, available since 2008, to understand software metrics. The study identifies software quality as a measure of how software is designed and how well the software conforms to that design. Among the variables considered for software quality are correctness, product quality, scalability, completeness, and absence of bugs. However, because quality standards differ from one organization to another, it is better to apply software metrics, together with the most common current metrics tools, to measure software quality and reduce subjectivity in fault assessment. The central contribution of this study is an overview of software metrics that illustrates development in this area, and a critical analysis of the main metrics found in the literature.
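As a toy illustration of the kind of product metrics surveyed here, a few lines of Python can compute simple size and documentation measures from source text. Real metric tools parse the code properly; this sketch only counts lines:

```python
# Toy static metrics over a source string: physical LOC (non-blank lines),
# comment density, and function count. The sample source is invented.
SAMPLE = '''\
# parse input
def parse(line):
    # split on commas
    return line.split(",")

def count(items):
    return len(items)
'''

def metrics(source):
    lines = [l for l in source.splitlines() if l.strip()]
    comments = [l for l in lines if l.strip().startswith("#")]
    functions = [l for l in lines if l.strip().startswith("def ")]
    return {
        "loc": len(lines),
        "comment_density": round(len(comments) / len(lines), 2),
        "functions": len(functions),
    }

print(metrics(SAMPLE))  # -> {'loc': 6, 'comment_density': 0.33, 'functions': 2}
```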
SRGM Analyzers Tool of SDLC for Improving Software Quality (IJERA Editor)
Software Reliability Growth Models (SRGM) have been developed to estimate software reliability measures such as the software failure rate, the number of remaining faults, and overall reliability. In this paper, a software analyzer tool is proposed for deriving several software reliability growth models based on the Enhanced Non-Homogeneous Poisson Process (ENHPP) in the presence of imperfect debugging and error generation. The proposed models are initially formulated for the case in which there is no differentiation between the failure observation and fault removal testing processes, and are then extended to the case in which there is a clear differentiation between the two. Many SRGMs describe software failures as a random process and can be used to measure development status during testing. With SRGMs, software consultants can readily measure (or evaluate) software reliability (or quality) and plot software reliability growth charts.
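The NHPP family that ENHPP extends includes the classic Goel-Okumoto model, whose mean value function m(t) = a(1 - e^(-bt)) gives the expected cumulative number of faults detected by time t. A minimal sketch, with the parameters a and b assumed rather than estimated from failure data:

```python
import math

def goel_okumoto_mean(t, a, b):
    # Expected cumulative number of faults detected by time t: m(t) = a(1 - e^{-bt}).
    return a * (1 - math.exp(-b * t))

def remaining_faults(t, a, b):
    # Expected faults still latent at time t.
    return a - goel_okumoto_mean(t, a, b)

# Assumed parameters: a = total fault content, b = per-fault detection rate.
a, b = 100.0, 0.1
print(goel_okumoto_mean(10.0, a, b))  # ~63.2 faults expected found after 10 time units
print(remaining_faults(10.0, a, b))   # ~36.8 expected remaining
```

In practice a and b are fitted to observed failure times (e.g. by maximum likelihood), and imperfect debugging or error generation modifies this mean value function.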
Testability Measurement Model for Object Oriented Design (TMMOOD) (ijcsit)
Measuring testability early in the development life cycle, especially at the design phase, is of crucial importance to software designers, developers, quality controllers, and practitioners. However, most of the mechanisms available for testability measurement can only be used in the later phases of the development life cycle. Early estimation of testability, specifically at the design phase, helps designers improve their designs before coding starts, and practitioners regularly advocate that testability be planned for early in the design phase. This study strongly emphasizes testability measurement early in the design phase, considering it significant for the delivery of quality software: it extensively reduces rework during and after implementation, and it facilitates the design of effective test plans and better project and resource planning. An effort is made in this paper to recognize the key factors contributing to testability measurement at the design phase. Additionally, a testability measurement model is developed to quantify software testability at the design phase. Furthermore, the relationship of testability with these factors has been tested and justified with the help of statistical measures. The developed model has been validated through an experimental tryout. Finally, the paper incorporates the empirical validation of the testability measurement model as the authors' most important contribution.
QUALITY METRICS OF TEST SUITES IN TEST-DRIVEN DESIGNED APPLICATIONS (ijseajournal)
New techniques for writing and developing software have evolved in recent years. One is Test-Driven Development (TDD), in which tests are written before code: no code should be written without first having a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be high.
In this work, we analyze applications written using TDD and traditional techniques. Specifically, we assess the quality of the associated test suites based on two quality metrics: 1) a structure-based criterion, and 2) a fault-based criterion. We find that test suites with high branch coverage also have high mutation scores, and this holds especially for TDD applications. We conclude that Test-Driven Development is an effective approach that improves the quality of the test suite, covering more of the source code and revealing more faults.
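The two criteria compared here reduce to simple ratios: branch coverage (exercised branches over total branches, structure-based) and mutation score (killed mutants over killable mutants, fault-based). A sketch on made-up numbers:

```python
def branch_coverage(covered, total):
    # Fraction of decision branches exercised by the test suite.
    return covered / total

def mutation_score(killed, total_mutants, equivalent=0):
    # Equivalent mutants cannot be killed and are excluded from the denominator.
    return killed / (total_mutants - equivalent)

cov = branch_coverage(45, 50)       # 90% of branches exercised
score = mutation_score(80, 100, 5)  # 80 of 95 killable mutants killed
print(cov, round(score, 3))         # -> 0.9 0.842
```

The paper's observation is that these two numbers tend to rise together, particularly for TDD-built suites.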
A Complexity Based Regression Test Selection Strategy (CSEIJ Journal)
Software is unequivocally a foremost and indispensable entity in this technologically driven world. Quality assurance, and in particular software testing, is therefore a crucial step in the software development cycle. This paper presents an effective test selection strategy that uses a Spectrum of Complexity Metrics (SCM). Our aim is to increase the efficiency of the testing process by significantly reducing the number of test cases without a significant drop in test effectiveness. The strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class, method, statement) and its characteristics. We use a series of experiments based on three applications with a significant number of mutants to demonstrate the effectiveness of our selection strategy. For further evaluation, we compare our approach to boundary value analysis. The results show the capability of our approach to detect mutants as well as seeded errors.
Research Activities: Past, Present, and Future (Marco Torchiano)
Public seminar for Professor Position at Politecnico di Torino
- past research products
- current research activities
- future outlook
October 19, 2018
Previous research concludes that testing plays a vital role in the development of a software product. Since software testing is the principal approach for assuring software quality, most development effort is put into it. But software testing is an expensive process and consumes a lot of time, so testing should start as early as possible in development to control cost and schedule problems. Indeed, testing should be performed at every step of the software development life cycle (SDLC), the structured approach used in developing a software product. Software testing is a trade-off between budget, time, and quality. Nowadays, testing has become a very important activity in terms of exposure, security, performance, and usability; hence, software testing faces a collection of challenges.
LusRegTes: A Regression Testing Tool for Lustre Programs (IJECE IAES)
Lustre is a synchronous data-flow declarative language widely used for safety-critical applications (avionics, energy, transport...). In such applications, the testing activity for detecting system errors plays a crucial role. During development and maintenance, Lustre programs often evolve, so regression testing should be performed to detect bugs. In this paper, we present a tool for automatic regression testing of Lustre programs. We have defined an approach to generate test cases for regression testing of Lustre programs: a Lustre program is represented by an operator network, the set of paths is identified, and the path activation conditions are symbolically computed for each version. Regression test cases are generated by comparing paths between versions. The approach has been implemented in a tool, called LusRegTes, to automate the test process for Lustre programs.
Introduction to Investigation And Utilizing Lean Test Metrics In Agile Softwa... (IJERA Editor)
The growth of the software development industry has brought new development methodologies intended to deliver error-free software to end users while fulfilling the business value of the product. The growth of tools and technology has brought automation to the development and software testing process, and it has also increased the demand for fast testing and fast delivery of software to end customers. The shift from traditional software development methodologies to agile software development has introduced new philosophies, dimensions, and processes, along with new tools that make the process easier. Agile development processes (Scrum, XP, FDD, BDD, ATDD, ASD, DSDM, Kanban, Crystal, and Lean) also face a software testing crisis: fast development life cycles and fast delivery to end users without appropriate test metrics make the software testing process slow and increase risk. Analyzing the metrics used in the software testing process and setting the right lean test metrics help to improve software quality effectively in an agile process.
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTING (ijseajournal)
Researchers consider that the first edition of the book "The Art of Software Testing" by Myers (1979)
initiated research in Software Testing. Since then, software testing has gone through evolutions that have
driven standards and tools. This evolution has accompanied the complexity and variety of software
deployment platforms. The migration to the cloud has brought benefits such as scalability, agility, and a better
return on investment. Cloud computing requires greater involvement of software testing to ensure
that services work as expected. In addition to testing cloud applications, cloud computing has paved the
way for testing in the Test-as-a-Service model. This review aims to understand software testing in the
context of cloud computing. Based on the knowledge explained here, we sought to linearize the evolution of
software testing, characterizing fundamental points and allowing us to compose a synthesis of the body of
knowledge in software testing, expanded by the cloud computing paradigm.
Regression testing concentrates on finding defects after a major code change has occurred. Specifically, it exposes software regressions, i.e., old bugs that have reappeared. It is an expensive testing process, estimated to account for almost half the cost of software maintenance. To improve the regression testing process, test case prioritization techniques organize the execution order of test cases. This yields an improved rate of fault identification when test suites cannot run to completion.
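The rate of fault identification mentioned here is commonly quantified by the APFD metric (Average Percentage of Faults Detected). A minimal sketch with made-up fault positions:

```python
def apfd(fault_first_positions, n_tests):
    # APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n),
    # where TFi is the position of the first test that reveals fault i,
    # n is the number of tests, and m the number of faults.
    m = len(fault_first_positions)
    return 1 - sum(fault_first_positions) / (n_tests * m) + 1 / (2 * n_tests)

# 5 tests, 4 faults first revealed at positions 1, 2, 3, 4.
print(apfd([1, 2, 3, 4], n_tests=5))  # -> 0.6
```

Orderings that reveal faults earlier push the TFi values down and APFD up, which is exactly what prioritization aims for when a suite may be cut short.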
One of the core quality assurance features, combining fault prevention and fault detection, is also often known as the testability approach. Many assessment techniques and quantification methods have evolved for software testability prediction; they identify testability weaknesses and factors to help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at various phases of the object-oriented software development life cycle. The aim is to find the metrics suite best suited to improving software quality through testability support. The ultimate objective is to lay the groundwork for reducing testing effort by improving software testability and its assessment, using well-planned guidelines for object-oriented software development with the help of suitable metrics.
Unit Test using Test Driven Development Approach to Support Reusability (ijtsrd)
Unit testing is one of the approaches that can be used for practical purposes in improving the quality and reliability of software. Test Driven Development (TDD) adopts an evolutionary approach which requires unit test cases to be written before implementation of the code. TDD is a radically different way of creating software. Writing the test first can assure the correctness of the code, helping the developer gain a better understanding of the software requirements, which leads to fewer defects and less debugging time. The number of defects is reduced when automated unit tests are written iteratively, as in test-driven development. If necessary, TDD refactors the code. Refactoring improves the internal structure by editing the existing working code, without changing its external behavior. TDD is intended to make the code clearer, simpler and bug free. This paper focuses on a methodology and framework for automation of unit testing. Myint Myint Moe, "Unit Test using Test-Driven Development Approach to Support Reusability", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-3, April 2019, URL: https://www.ijtsrd.com/papers/ijtsrd21731.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/21731/unit-test-using-test-driven-development-approach-to-support-reusability/myint-myint-moe
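The test-first cycle described in the abstract can be sketched in miniature; the `leap_year` function and its test are invented for illustration, with the unit test written before the code it drives:

```python
# Test-Driven Development in miniature: the test is written first and
# initially fails; the implementation below is then written to make it pass.

def test_leap_year():
    assert leap_year(2000) is True   # divisible by 400
    assert leap_year(1900) is False  # divisible by 100 but not by 400
    assert leap_year(2024) is True   # divisible by 4
    assert leap_year(2023) is False

# Implementation written after (and driven by) the test above.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_leap_year()  # passes once the implementation satisfies the test
```

The test doubles as executable documentation of the requirement, which is part of what makes TDD-built units easier to reuse.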
EXTRACTING THE MINIMIZED TEST SUITE FOR REVISED SIMULINK/STATEFLOW MODEL (ijaia)
Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that as the model evolves, test suites tend to grow in size, making it too costly to execute entire test suites. This paper aims to propose a practical approach to reduce the size of test suites for modified Simulink/Stateflow (SL/SF) model, which is popularly used for modeling software behavior in many industries like automobile manufacturers. The model for describing a system is frequently modified until it is fixed. The proposed technique is capable of extracting the minimized sized test suite in terms of test coverage, by taking into account both the modified and the affected portion of revised SL/SF model. Two real models for the ECUs deployed in a commercial car are used for an empirical study.
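A generic greedy, coverage-based reduction conveys the flavor of test suite minimization (this is not the SL/SF-specific technique of the paper, which also weighs the modified and affected model portions; the coverage data below is invented):

```python
# Coverage-based test suite reduction (greedy set cover): keep a small
# set of tests that still achieves the full coverage of the original suite.

def minimize(coverage):
    """coverage: dict test name -> set of covered items (branches, states, ...)."""
    goal = set().union(*coverage.values())
    chosen, covered = [], set()
    while covered != goal:
        # pick the test adding the most still-uncovered items
        best = max(coverage, key=lambda t: len(coverage[t] - covered))
        chosen.append(best)
        covered |= coverage[best]
    return chosen

# Hypothetical coverage data for four tests
cov = {"T1": {"a", "b"}, "T2": {"b", "c"}, "T3": {"a", "b", "c"}, "T4": {"d"}}
print(minimize(cov))  # ['T3', 'T4'] achieves the same coverage as all four
```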
Software testing is an activity aimed at evaluating the quality of a program and at improving it by identifying defects and problems. Software testing strives to achieve its goals (both implicit and explicit) but it has certain limitations; still, testing can be done more effectively if certain established principles are followed. In spite of these limitations, software testing continues to dominate other verification techniques like static analysis, model checking and proofs. So it is indispensable to understand the goals, principles and limitations of software testing so that its effectiveness can be maximized.
Software testing is the process of evaluating and verifying that a software product or application does what it is supposed to do. The benefits of testing include preventing bugs, reducing development costs and improving performance.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
"Impact of front-end architecture on development cost", Viktor Turskyi (Fwdays)
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
S Raju et al., Int. Journal of Engineering Research and Applications (IJERA)
ISSN: 2248-9622, Vol. 4, Issue 1 (Version 4), January 2014, pp. 11-20
RESEARCH ARTICLE                                        OPEN ACCESS
www.ijera.com
Measurement and Analysis of Test Suite Volume Metrics for Regression Testing

S Raju (1) and G V Uma (2)
(1) Associate Professor, Department of Computer Science & Engineering, Sri Venkateswara College of Engineering, Sriperumbudur, Tamilnadu, India – 602 117
(2) Professor, Department of Information Science & Technology, College of Engineering Guindy, Anna University Chennai, Tamilnadu, India – 600 025
Abstract
Regression testing intends to ensure that a software application works as specified after changes are made to it during maintenance. It is an important phase in the software development lifecycle. Regression testing is the re-execution of some subset of test cases that have already been executed. It is an expensive process used to detect defects due to regressions. Regression testing has been used to support software-testing activities and to assure appropriate quality through several versions of a software product during its development and maintenance. Regression testing assures the quality of modified applications. In this proposed work, a study and analysis of metrics related to test suite volume was undertaken. It was shown that the software under test needs more test cases after changes are made to it. A comparative analysis was performed to find the change in test suite size before and after the regression test.
Keywords – Regression Testing, Test Suite Volume, Defect Density, Defect Analysis, Defect Removal Efficiency
I. INTRODUCTION
Regression testing is a process of executing the program to detect defects by retesting the modified portion or the entire program. This can be performed by running the existing test suites, or a new extended test suite, against the modified code to determine whether the changes affect the parts of the program that worked properly prior to the changes. Adequate coverage is a primary concern when conducting regression tests. The process of regression testing can be stated as follows. Let P be a program and P' a modified version of P; let T be a set of test cases for P. Then a subset T' of T is selected for executing P', establishing the correctness of P' with respect to T'. The regression testing process consists of steps that include the regression test selection problem, the coverage identification problem, the test suite execution problem and the test suite maintenance problem.
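The selection of T' from T described above can be sketched as a simple modification-aware filter (a minimal sketch; the test-to-function traceability mapping is invented, and practical selection techniques rely on dependency analysis):

```python
# Regression test selection: from the full suite T for program P, select the
# subset T' of tests that exercise functions changed in the modified P'.

def select_regression_tests(suite, changed_functions):
    """suite: dict test name -> set of functions the test exercises."""
    return [t for t, funcs in suite.items() if funcs & changed_functions]

# Hypothetical traceability from tests to the functions they execute
T = {
    "test_billing": {"compute_bill", "apply_tariff"},
    "test_login":   {"authenticate"},
    "test_report":  {"compute_bill", "format_report"},
}

# Suppose compute_bill was modified in P'; only tests touching it form T'
print(select_regression_tests(T, {"compute_bill"}))
```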
Sometimes, the existing test suite may not be sufficient to test the modified code. In such a case, an extended test suite is required to cover the defects created due to modifications. Modifications to the current version of the software can be the addition or deletion of features in terms of modules, or the alteration of existing features.
Constructing an extended test suite to test the new version of the software needs more careful effort from the testers. Test suite volume obviously grows proportionately with the number of modifications introduced. Addition of randomly generated test cases has been shown to be effective; a combinatorial approach for adding test cases was also effective. In this work, two different kinds of applications are considered for measuring the test metrics. The first category of software applications is small in size, up to 1 KLOC, and the second category is larger in size, varying from 5 to 30 KLOC.
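The combinatorial approach to adding test cases can be sketched as generating candidate inputs from the cross product of parameter values (the parameters are invented for illustration; pairwise methods scale better than the exhaustive product shown here):

```python
# Combinatorial test-case addition: generate candidate test inputs from the
# cross product of input-parameter values.
from itertools import product

# Hypothetical input parameters for an electricity-bill style program
tariffs = ["domestic", "commercial"]
units = [0, 100, 500]
seasons = ["summer", "winter"]

# Every combination becomes a candidate test case
test_cases = list(product(tariffs, units, seasons))
print(len(test_cases))  # 2 * 3 * 2 = 12 candidate test cases
```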
II. RELATED WORKS
The literature survey revealed that many researchers have attempted to study the metrics related to software regression testing and test suite size. A brief review of some recent research in this area is presented here. The objective of regression testing is to have the highest likelihood of finding the yet-to-be-detected defects with a minimum amount of time and effort. These measurements help us to manage and control the software testing process.
Kan and Konda classified test metrics into three categories: product metrics, project metrics and process metrics. The test metrics can be used to measure and improve the quality of the test process and/or the software product. Test metrics are a subset of software metrics - product metrics and process metrics [1][2].
Gregg Rothermel presented various methodologies for improving regression testing processes. The cost-effectiveness of these methodologies has been shown to vary with characteristics of regression test suites. One such characteristic involves the way in which test inputs are composed into test cases within a test suite. This article reports the results of controlled experiments examining the effects of two factors in test suite composition - test suite granularity and test input grouping - on the costs and benefits of several regression-testing-related methodologies: retest-all, regression test selection, test suite reduction, and test case prioritization. The results exposed essential tradeoffs affecting the relationship between test suite design and regression testing cost-effectiveness, with several implications for practice [3].
Pakinam N. Boghdady explains that software testing immensely depends on three main phases: test case generation, test execution, and test evaluation. Test case generation is the core of any testing process; however, the generated test cases still require test data to be executed, which makes test data generation no less important than test case generation. This kept researchers during the past decade occupied with automating those processes, which played a tremendous role in reducing the time and effort spent during the testing process. This paper explores different approaches that emerged during the past decade regarding the generation of test cases and test data from different models, as an emerging type of model-based testing. Unified Modeling Language (UML) models took the greatest share among those models [8].
Mrinal Kanti Debbarma presented that software metrics are applied to evaluate and assure software code quality. This requires a model to convert internal quality attributes to code reliability. A high degree of complexity in a component (function, subroutine, object, class etc.) is bad in comparison with a low degree of complexity. Various internal code attributes can be used to indirectly assess code quality. In this paper, they analyzed the software complexity measures for regression testing, which enables the tester/developer to reduce software development cost and improve testing efficacy and software code quality. This analysis was based on static analysis and different approaches presented in the software engineering literature [4].
Jayant et al. have proposed a study on test case prioritization based on cost, time and process aspects. The prioritization concept increases the rate of fault detection of code under time and cost constraints. They concluded that prioritization of a test case or a test suite has different aspects of fault detection [9].
Ruchika has proposed both a regression test selection and a prioritization technique. They implemented their regression test selection technique and demonstrated that it was effective in selecting and prioritizing test cases. The proposed technique increases confidence in the correctness of the modified program [5].
R Kavitha has proposed a prioritization technique to improve the rate of detection of severe faults for regression testing. Here, two factors, rate of fault detection and fault impact, are proposed for prioritizing test cases. The results prove that the proposed prioritization technique was effective [6].
Roya Alavi and Shahriar Lotfi presented a software system testing methodology that includes a large set of test cases. Test selection helps to reduce this cost by selecting a small subset of tests that are likely to reveal faults. The aim is to find the maximum faults of the program using a minimum number of test instances [7].

III. PROJECTS AND RELATED DATA
The proposed research work consisted of many modules. The first module consisted of modifications to existing features and the manual addition of new modules. Each application is considered separately for identifying the segments where the proposed changes are to be made. In effect, the size of the application projects will increase; only in the case of small programs may the code size not increase. The second module consisted of running the existing test suites against these new versions of the application projects. For re-testing, the JUnit test tool is used under the NetBeans IDE with the Java JDK. The test results are then examined for completeness of test execution. If any test result indicates that the test process is not complete, we need to add new test cases to the existing test suite. For adding new test cases, we follow either a random approach or a combinatorial approach. Details of the small projects used in the proposed research work, such as project size, test suite size and defect counts, are shown in Table 1, and those of the large projects are shown in Table 2.
The metrics defect density, test case efficiency and test suite volume increase are calculated before regression testing, and again after the modifications and after regression testing.
Table 1 Small Programs & other details

SL.NO  Problem / Project            Size (LOC) (S)  No. of Modules/Functions (M)  No. of Defects Found (D)  Test Suite Size (N)
1      Triangle Classification      25              5                             12                        35
2      Square Root Problem          19              4                             9                         24
3      Electricity Bill Generation  155             13                            20                        96
4      Simple Calculator Program    250             18                            38                        126
5      Simple Editor Program        452             29                            69                        204
Table 2 Large Size Projects & other details

SL.NO  Problem / Project           Size (KLOC) (S)  No. of Modules/Functions (M)  No. of Defects Found (D)  Test Suite Size (N)
1      Payroll System              15               60                            1012                      1435
2      Infrastructure Mgt. System  21               64                            1290                      1524
3      Library System              8                45                            629                       1096
4      Project Mgt. System         25               75                            2638                      2926
5      Banking System              31               94                            3869                      4204
IV. RESEARCH OBJECTIVES
The proposed research work addresses the following issues in detail. For answering these questions, regression testing is performed and the results are presented in table format as well as in graphical format in the following sections.
RQ1. What is the effect of adding new features and modifying existing features of the current release over the previous releases of the software?
RQ2. Is the existing test suite good enough to test the modified version of the program?
RQ3. What is the effect of modification of software projects on the test suite volume?
Regression testing involves reusing test suites which have been created for earlier versions or releases of the software. By reusing these test cases, the costs of designing and creating test cases can be amortized across the lifetime of a system. When an existing software project is modified to incorporate changes in user requirements, the code size increases proportionately. Also, when we add new modules for new functionality, the code size and the number of modules increase. These applications are taken to experiment on the effectiveness of testing after modifications and new additions. With industry data, we have calculated the test metrics - Defect Density per LOC or KLOC and Test Case Efficiency - before performing regression testing. These metrics are shown in Table 3 and Table 4 respectively for the small and larger size projects.
Defect density is obtained by dividing the number of defects covered by the program/project size, and test case efficiency is calculated as the percentage of defects covered divided by the number of test cases. Since the proposed research work addresses the issue of effective regression testing, these projects are modified in two ways: either a set of new features/modules is added, and/or the existing modules are modified.
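The two formulas can be checked against the tabulated values; the numbers below are taken from Tables 1-4 (Triangle Classification and Payroll System):

```python
# Defect Density = D / S (defects covered over size: LOC for small programs,
# KLOC for large ones); Test Case Efficiency = (D / N) * 100.

def defect_density(defects, size):
    return defects / size

def tc_efficiency(defects, suite_size):
    return defects / suite_size * 100

# Triangle Classification (small): S = 25 LOC, D = 12
print(round(defect_density(12, 25), 3))     # 0.48, as in Table 3
# Payroll System (large): S = 15 KLOC, D = 1012, N = 1435
print(round(defect_density(1012, 15), 3))   # 67.467, as in Table 4
print(round(tc_efficiency(1012, 1435), 3))  # 70.523, as in Table 4
```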
Table 3 Defect Density and Test Case Efficiency for small programs

SL.NO  Problem / Project            Test Suite Size (N)  No. of Defects Covered (D)  Defect Density per LOC (D/S)  TC Efficiency (D/N)*100
1      Triangle Classification      35                   12                          0.480                         25.714
2      Square Root Problem          24                   9                           0.474                         41.667
3      Electricity Bill Generation  96                   20                          0.129                         20.833
4      Simple Calculator Program    126                  38                          0.152                         30.159
5      Simple Editor Program        204                  69                          0.153                         33.823
Figure 1 shows the defect density before the regression test for the smaller size programs. The defect density for each project/program is calculated using the formula Defect Density = D / S, i.e. the number of defects covered divided by the program size (in LOC for the small programs and KLOC for the larger ones). It is obvious that when we add a new set of functionalities, the code size and the number of modules always increase. These applications are considered for conducting the re-test so as to measure the effectiveness of testing after modifications and new additions. Figure 2 shows the test case efficiency for the smaller size projects, calculated using the formula Test Case Efficiency = (D / N) * 100.

Fig. 1 Defect Density before Regression Testing for smaller size programs
Fig. 2 Test Case Efficiency before Regression Testing for smaller size programs
Table 4 Defect Density and Test Case Efficiency for Larger programs

SL.NO  Problem / Project           Test Suite Size (N)  No. of Defects Covered (D)  Defect Density per KLOC (D/S)  TC Efficiency (D/N)*100
1      Payroll System              1435                 1012                        67.467                         70.523
2      Infrastructure Mgt. System  1524                 1290                        61.429                         84.645
3      Library System              1096                 629                         78.625                         57.391
4      Project Mgt. System         2926                 2638                        105.520                        90.157
5      Banking System              4204                 3869                        124.806                        92.031
Figure 3 shows the defect density before regression testing for the larger size programs, and Figure 4 shows the test case efficiency for the larger size projects.

Fig. 3 Defect Density before Regression Testing for Larger Projects
Fig. 4 Test Case Efficiency before Regression Testing for Larger Projects
V. REGRESSION TESTING OF APPLICATIONS
Before performing the regression testing, we added new features and also modified existing features. Due to this, the code size of the projects/programs increased. Consequently, the test suite is augmented with more test cases so as to run the programs/projects against this extended test suite. Table 5 shows the details of the small programs after the regression testing. It is observed that there is an increase in test suite volume and defect counts. Figure 5 below shows that the number of defects increases when the programs are modified due to changes in user requirements.
Table 5 Effect of Modifications for Small Programs

                                                              No. of Defects Found (D)   Test Suite Size (N)
SL.NO  Problem / Project        Size (S) (LOC)  No. of Modules  Old    New    Total        Old    New    Total
1      Triangle Classification  20              5               12     9      17           35     6      41
2      Square Root Problem      22              4               9      10     19           24     5      29
3      Electricity Bill         195             15              20     9      29           96     10     106
4      Simple Calculator        290             20              38     7      45           126    9      135
5      Simple Editor Program    402             30              69     11     80           204    9      213
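The Old/New/Total bookkeeping of Table 5 can be verified, and the relative growth in test suite volume computed, with a short script (values copied from the table; the percentage-growth formula is a simple derivation, not one stated in the paper):

```python
# Test suite size before and after regression testing (Old, New, Total),
# values from Table 5; the relative growth is New / Old * 100.

suites = {
    "Triangle Classification": (35, 6, 41),
    "Square Root Problem":     (24, 5, 29),
    "Electricity Bill":        (96, 10, 106),
    "Simple Calculator":       (126, 9, 135),
    "Simple Editor Program":   (204, 9, 213),
}

for name, (old, new, total) in suites.items():
    assert old + new == total          # the Total column is consistent
    growth = new / old * 100
    print(f"{name}: suite grew by {growth:.1f}% after modification")
```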
Fig 5 Number of defects before and after Regression Testing for Small Programs

The test suite volume increases to cover these additional defects due to modifications, as shown in Figure 6 below. Similarly, when we modify the larger projects to incorporate changes in user requirements, the number of defects increases. Hence, more test cases should be added to the original test suite to find these defects. This is shown in Table 6 below.
Fig 6 Increase of Test Suite Volume after Regression Testing for Small Programs
Table 6 Effect of Modifications for Larger Projects

                                                                No. of Defects Found (D)   Test Suite Size (N)
SL.NO  Problem / Project         Size (S) (KLOC)  No. of Modules  Old     New    Total       Old     New    Total
1      Payroll System            15.4             65              1012    57     1069        1435    46     1481
2      Infrastructure Mgt. Sys.  21.3             67              1290    62     1352        1524    55     1579
3      Library System            8.5              51              629     24     653         1096    40     1136
4      Project Mgt. System       25.4             73              2638    48     2686        2926    47     2973
5      Banking System            30.6             90              3869    81     3950        4204    52     4256
Figure 7 below shows that the number of defects increases when the programs are modified due to changes in user requirements in the case of the larger applications.

Fig 7 No. of defects before and after Regression Testing for Larger Programs
The test suite volume increases to cover these additional defects due to modifications, as shown in Figure 8 below. When we compare the metrics defect density and test case efficiency of the test suites before and after the regression testing, both increased to some extent for most of the programs/projects.

Fig 8 Increase of Test Suite Volume after Regression Testing for Larger Projects

This can be seen in Table 7 and graphically in Figures 9(a) and 9(b) below. Figures 10(a) and 10(b) show the Test Case Efficiency before and after the Regression Testing.
Table 7 Defect Density and Test Case Efficiency before and after the Regression Testing

                                    Defect Density       Test Case Efficiency
SL.NO  Problem / Project            BR        AR         BR        AR
1      Triangle Classification      0.480     0.850      25.714    41.463
2      Square Root Problem          0.474     0.863      41.667    65.517
3      Electricity Bill Generation  0.129     0.149      20.833    27.358
4      Simple Calculator Program    0.152     0.155      30.159    33.333
5      Simple Editor Program        0.153     0.199      33.823    37.558
6      Payroll System               67.467    69.416     70.523    72.181
7      Infrastructure Mgt. System   61.429    63.474     84.645    85.624
8      Library System               78.625    76.824     57.391    57.482
9      Project Mgt. System          105.520   105.748    90.157    90.346
10     Banking System               124.806   129.085    92.031    92.810

(BR = before regression testing, AR = after regression testing; defect density per LOC for the small programs and per KLOC for the larger ones.)
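The claim that both metrics increased for most programs can be checked directly against Table 7:

```python
# Check that defect density (DD) and test case efficiency (TCE) increased
# after regression testing for most projects. Each entry holds
# (DD before, DD after, TCE before, TCE after), values from Table 7.
table7 = {
    "Triangle Classification":     (0.480, 0.850, 25.714, 41.463),
    "Square Root Problem":         (0.474, 0.863, 41.667, 65.517),
    "Electricity Bill Generation": (0.129, 0.149, 20.833, 27.358),
    "Simple Calculator Program":   (0.152, 0.155, 30.159, 33.333),
    "Simple Editor Program":       (0.153, 0.199, 33.823, 37.558),
    "Payroll System":              (67.467, 69.416, 70.523, 72.181),
    "Infrastructure Mgt. System":  (61.429, 63.474, 84.645, 85.624),
    "Library System":              (78.625, 76.824, 57.391, 57.482),
    "Project Mgt. System":         (105.520, 105.748, 90.157, 90.346),
    "Banking System":              (124.806, 129.085, 92.031, 92.810),
}

dd_up = sum(1 for br, ar, _, _ in table7.values() if ar > br)
tce_up = sum(1 for _, _, br, ar in table7.values() if ar > br)
print(dd_up, tce_up)  # 9 10: only the Library System's defect density fell
```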
Fig 9(a) Defect Density for small Programs
Fig 9(b) Defect Density for Large Programs
Fig 10(a) Test Case efficiency for small Programs