Software quality is an important issue in the development of successful software applications.
Many methods have been applied to improve software quality, and refactoring is one of them.
However, the effect of refactoring in general on all the software quality attributes remains
ambiguous.
The goal of this paper is to determine the effect of various refactoring methods on quality
attributes and to classify them based on their measurable effect on a particular software quality
attribute. The paper focuses on studying the Reusability, Complexity, Maintainability,
Testability, Adaptability, Understandability, Fault Proneness, Stability, and Completeness
attributes of software. This, in turn, will assist developers in determining whether to
apply a certain refactoring method to improve a desired quality attribute.
An empirical evaluation of impact of refactoring on internal and external mea... (ijseajournal)
Refactoring is the process of improving the design of existing code by changing its internal structure
without affecting its external behaviour, with the main aim of improving the quality of the software product.
Therefore, there is a belief that refactoring improves quality factors such as understandability, flexibility,
and reusability. However, there is limited empirical evidence to support such assumptions.
The objective of this study is to validate/invalidate the claims that refactoring improves software quality.
The impact of selected refactoring techniques was assessed using both external and internal measures. Ten
refactoring techniques were evaluated through experiments to assess four external measures: Resource
Utilization, Time Behaviour, Changeability, and Analysability, which are ISO external quality factors, and
five internal measures: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class
Coupling, and Lines of Code.
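Of the internal measures listed, the Maintainability Index is the only composite one; a minimal sketch of the normalized 0-100 variant (the formula popularized by Visual Studio) makes the comparison concrete. The sample inputs below are illustrative assumptions, not values from the study:

```python
import math

def maintainability_index(halstead_volume: float, cyclomatic: int, loc: int) -> float:
    """Normalized Maintainability Index (0-100 variant): combines
    Halstead volume, cyclomatic complexity, and lines of code."""
    raw = (171
           - 5.2 * math.log(halstead_volume)
           - 0.23 * cyclomatic
           - 16.2 * math.log(loc))
    return max(0.0, 100.0 * raw / 171.0)

# Illustrative inputs: lowering complexity or LOC raises the index.
print(round(maintainability_index(1000.0, 10, 200), 1))
```

A refactoring that reduces cyclomatic complexity or lines of code therefore shows up directly as a higher index value.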
The results of the external measures did not show any improvement in code quality after the refactoring
treatment. Among the internal measures, however, the Maintainability Index indicated an improvement in the
code quality of refactored code over non-refactored code, while the other internal measures did not indicate
any positive effect on refactored code.
QUALITY METRICS OF TEST SUITES IN TEST-DRIVEN DESIGNED APPLICATIONS (ijseajournal)
New techniques for writing and developing software have evolved in recent years. One is Test-Driven
Development (TDD) in which tests are written before code. No code should be written without first having
a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be
high.
In this work, we analyze applications written using TDD and traditional techniques. Specifically, we
demonstrate the quality of the associated test suites based on two quality metrics: 1) structure-based
criterion, 2) fault-based criterion. We find that test suites with high branch coverage also have
high mutation scores, and we reveal this especially in the case of TDD applications. We found that
Test-Driven Development is an effective approach that improves the quality of the test suite, covering
more of the source code and also revealing more faults.
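The two metrics paired here, branch coverage and mutation score, can be illustrated with a toy mutation analysis; the function under test and its mutants are invented for illustration:

```python
# Mutation score = killed mutants / total mutants. The "program" is a
# max function; a mutant is killed when some test case distinguishes
# it from the original.
def original(a, b):
    return a if a > b else b

mutants = [
    lambda a, b: a if a >= b else b,   # relational operator mutation
    lambda a, b: a if a < b else b,    # negated condition
    lambda a, b: a,                    # statement replacement
]

tests = [(1, 2), (2, 1), (3, 3)]

killed = sum(
    any(m(a, b) != original(a, b) for a, b in tests)
    for m in mutants
)
print(f"mutation score: {killed}/{len(mutants)}")
```

The first mutant survives because no test distinguishes `>=` from `>` on these inputs, which is exactly the kind of gap a high mutation score rules out.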
A Model To Compare The Degree Of Refactoring Opportunities Of Three Projects ... (acijjournal)
Refactoring is applied to software artifacts to improve their internal structure while preserving their
external behavior. Refactoring is an uncertain process, and it is difficult to assign units of
measurement to it. The amount of refactoring that can be applied to the source code depends upon the skills of
the developer. In this research, we have treated refactoring as a quantity measured on an ordinal scale.
We have proposed a model for determining the degree of refactoring opportunities in the
given source code. The model is applied to three projects collected from a company. UML diagrams
are drawn for each project. The values of the source-code metrics that are useful in determining the quality of
code are calculated for each UML diagram of the projects. Based on the nominal values of the metrics, each relevant
UML diagram is represented on an ordinal scale. A machine learning tool, Weka, is used to analyze the dataset
produced by the three projects, imported in the form of an ARFF file.
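The ARFF input that Weka consumes is plain text; a minimal sketch of generating such a file follows, where the metric names (wmc, dit) and the ordinal refactoring-degree scale are assumptions, not the paper's actual attributes:

```python
# A minimal ARFF file of the kind Weka imports; attribute names and
# the ordinal scale {low,medium,high} are illustrative assumptions.
rows = [(12, 3, "low"), (45, 6, "high"), (23, 4, "medium")]

arff = "\n".join(
    ["@relation refactoring_opportunities",
     "@attribute wmc numeric",
     "@attribute dit numeric",
     "@attribute refactoring_degree {low,medium,high}",
     "@data"]
    + [f"{w},{d},{deg}" for w, d, deg in rows]
)
print(arff)
```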
Testability is a core quality assurance feature that combines fault prevention and fault detection. Many assessment techniques and quantification methods have evolved for software testability prediction, which identify testability weaknesses or factors in order to help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at the various phases of the object-oriented software development life cycle. The aim is to find the best metrics suite for software quality improvement through software testability support. The ultimate objective is to establish the groundwork for finding ways to reduce testing effort by improving software testability and its assessment, using well-planned guidelines for object-oriented software development with the help of suitable metrics.
AN APPROACH FOR TEST CASE PRIORITIZATION BASED UPON VARYING REQUIREMENTS (IJCSEA Journal)
Software testing is a process performed continuously by the development team during the life cycle of the software, with the aim of detecting faults as early as possible. Regression testing is the most suitable technique for this, in which a number of test cases are re-executed. As the number of test cases can be very large, it is preferable to prioritize them based on certain criteria. In this paper a prioritization strategy is proposed that prioritizes test cases based on requirements analysis. With regression testing, if the requirements vary in the future, the software can be modified in such a manner that the remaining parts of the software are not affected. The proposed system improves the testing process and its efficiency in achieving goals regarding quality, cost, and effort, as well as user satisfaction, and the results of the proposed method are evaluated with the help of a performance evaluation metric.
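One simple way to realize requirement-based prioritization is to weight each test case by the priorities of the requirements it covers; the priority values and coverage map below are illustrative assumptions, not the paper's scheme:

```python
# Weight each test case by the priorities of the requirements it
# covers; higher total weight runs first.
req_priority = {"R1": 3, "R2": 2, "R3": 1}   # higher = more critical
covers = {
    "TC1": ["R1"],
    "TC2": ["R2", "R3"],
    "TC3": ["R1", "R3"],
}

def weight(tc):
    return sum(req_priority[r] for r in covers[tc])

# Stable sort keeps the original order among equally weighted cases.
ordered = sorted(covers, key=weight, reverse=True)
print(ordered)
```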
Software Refactoring Under Uncertainty: A Robust Multi-Objective Approach (Wiem Mkaouer)
Refactoring large systems involves several sources of uncertainty related to the severity levels of code smells to be corrected and the importance of the classes in which the smells are located. Due to the dynamic nature of software development, these values cannot be accurately determined in practice, leading to refactoring sequences that lack robustness. To address this problem, we introduced a multi-objective robust model, based on NSGA-II, for the software refactoring problem that tries to find the best trade-off between quality and robustness. We evaluated our approach using six open source systems and demonstrated that it is significantly better than state-of-the-art refactoring approaches in terms of robustness in 100% of experiments based on a variety of real-world scenarios. Our suggested refactoring solutions were found to be comparable in terms of quality to those suggested by existing approaches and to carry an acceptable robustness price. Our results also revealed an interesting feature about the trade-off between quality and robustness that demonstrates the practical value of taking robustness into account in software refactoring tasks.
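At the core of a multi-objective formulation like this NSGA-II model is Pareto dominance between (quality, robustness) pairs; a minimal sketch of the non-dominated filter, with invented candidate scores:

```python
# Each candidate refactoring sequence scores (quality, robustness),
# both to be maximized. A solution is Pareto-optimal when no other
# solution is at least as good on both objectives and different.
solutions = {"A": (0.9, 0.4), "B": (0.7, 0.8), "C": (0.6, 0.5)}

def dominates(p, q):
    return all(x >= y for x, y in zip(p, q)) and p != q

pareto = [
    name for name, score in solutions.items()
    if not any(dominates(other, score)
               for o, other in solutions.items() if o != name)
]
print(pareto)
```

C is dominated by B, so only the genuine trade-off points A and B survive; NSGA-II searches for exactly such a front.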
The modern business environment requires organizations to be flexible and open to change if they are to gain and retain their competitive edge. A competitive business environment requires modernizing existing legacy systems into self-adaptive ones. Reengineering presents an approach to transform a legacy system into an evolvable system. Software reengineering is a leading system evolution technique that helps in effective cost control, quality improvement, and time and risk reduction. However, successful improvement of a legacy system through reengineering requires portfolio analysis of the legacy application around various quality and functional parameters, some of which include reliability and modularity of the functions, level of usability and maintainability, as well as the policies and standards of the software architecture and the availability of required documents. Portfolio analysis around these parameters helps to examine the legacy application for quality and functional gaps within the application [1].
Research Activities: past, present, and future. (Marco Torchiano)
Public seminar for Professor Position at Politecnico di Torino
- past research products
- current research activities
- future outlook
October 19, 2018
ANALYSIS OF SOFTWARE QUALITY USING SOFTWARE METRICS (ijcsa)
Software metrics have a direct link with measurement in software engineering. Correct measurement is a precondition in any engineering field, and software engineering is no exception; as the size and complexity of software increase, manual inspection of software becomes a harder task. Most software engineers worry about the quality of software and how to measure and enhance it. The overall objective of this study was to assess and analyze the software metrics used to measure the software product and process.
In this study, the researcher used a collection of literature from various electronic databases, available since 2008, to understand and characterize software metrics. The study identifies software quality as a means of measuring how software is designed and how well the software conforms to that design. Some of the variables considered for software quality are correctness, product quality, scalability, completeness, and absence of bugs. However, the quality standards used differ from one organization to another; for this reason it is better to apply software metrics, together with the current most common software metrics tools, to measure the quality of software and reduce the subjectivity of fault assessment. The central contribution of this study is an overview of software metrics that illustrates the development of this area, and a critical analysis of the main metrics found in the literature.
From previous research, it is concluded that testing plays a vital role in the development of a software product. Since software testing is the single main approach to assuring the quality of software, most development effort is put into testing. But software testing is an expensive process and consumes a lot of time, so testing should start as early as possible in development to control cost and time. Indeed, testing should be performed at every step of the software development life cycle (SDLC), the structured approach used in developing a software product. Software testing is a trade-off between budget, time, and quality. Nowadays, testing has become a very important activity in terms of exposure, security, performance, and usability. Hence, software testing faces a collection of challenges.
Determination of Software Release Instant of Three-Tier Client Server Softwar... (Waqas Tariq)
The quality of any software system depends mainly on how much time is spent on testing, what kind of testing methodologies are used, how complex the software is, the amount of effort put in by the developers, and the type of testing environment, subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but the testing cost also increases. On the contrary, if the testing time is too short, the software cost can be reduced, provided the customers accept the risk of buying unreliable software. However, this increases the cost during the operational phase, since it is more expensive to fix an error during the operational phase than during the testing phase. It is therefore essential to decide when to stop testing and release the software to customers, based on cost and reliability assessment. In this paper we present a mechanism for deciding when to stop the testing process and release the software to the end user, by developing a software cost model with a risk factor. Based on the proposed method we specifically address how to decide when to stop testing and release software built on a three-tier client-server architecture, which facilitates on-time delivery of a software product that meets the criteria of achieving a predefined level of reliability while minimizing cost. A numerical example is cited to illustrate the experimental results, showing significant improvements over conventional statistical models based on NHPP.
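The stop-testing decision described here is typically framed as minimizing an expected cost over the release time T. The sketch below uses the classic Goel-Okumoto NHPP mean value function with invented cost parameters; it is a generic illustration, not the paper's three-tier model:

```python
import math

# Goel-Okumoto mean value function: expected faults detected by time T.
a, b = 100.0, 0.05           # total faults, detection rate (assumed)
c1, c2, c3 = 1.0, 10.0, 0.5  # cost per fault fixed in test, per fault
                             # escaping to the field, per unit test time

def m(T):
    return a * (1 - math.exp(-b * T))

def cost(T):
    return c1 * m(T) + c2 * (a - m(T)) + c3 * T

# Grid-search the release time that minimizes total expected cost.
T_star = min((t / 10 for t in range(1, 3001)), key=cost)
print(round(T_star, 1))
```

Fixing a fault in the field costs ten times as much as in test here, so the optimum balances the shrinking benefit of further testing against its per-unit cost.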
Ranking The Refactoring Techniques Based on The External Quality Attributes (IJRES Journal)
The selection of appropriate decisions is a significant issue that can lead to more satisfactory results. The difficulty comes when there are several alternatives and all of them have the same chance of being selected. It is important, therefore, to establish priorities among all of these alternatives in order to choose the most appropriate one. The analytic hierarchy process (AHP) is capable of structuring decision problems and finding mathematically determined judgments built on knowledge and experience. This suggests that AHP should prove useful in agile software development, where complex decisions occur routinely. This paper presents an example of using the AHP to rank the refactoring techniques based on external code quality attributes. XP encourages applying refactoring where the code smells bad. However, refactoring may consume considerable time and effort. Therefore, to maximize the benefits of refactoring in less time and with less effort, AHP has been applied. It was found that ranking the refactoring techniques helped the XP team to focus on the techniques that improve the code and the XP development process in general.
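The AHP step applied here reduces to deriving a priority vector from a pairwise comparison matrix. The geometric-mean method below is a common approximation of the principal eigenvector; the matrix values (Saaty's 1-9 scale) are invented for illustration:

```python
import math

# Pairwise comparison of three refactoring techniques; A[i][j] says
# how strongly technique i is preferred over technique j.
A = [
    [1,   3,   5],
    [1/3, 1,   3],
    [1/5, 1/3, 1],
]

# Geometric mean of each row, normalized, approximates the
# principal-eigenvector priority weights.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(gm) for g in gm]
print([round(w, 3) for w in weights])
```

The resulting weights give the ranking of the three techniques directly; a full AHP application would also check the consistency ratio of the matrix.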
Prioritizing Test Cases for Regression Testing: A Model Based Approach (IJTET Journal)
Abstract — Testing is an important phase of quality control in the Software Development Life Cycle (SDLC). Various testing methodologies are involved in testing an application. Regression testing is a type of testing done to ensure that a modified feature or bug fix has not impacted existing functionality. Defects are identified by executing a set of test cases. With regression test case selection alone, it is not possible to conclude how much retesting is required to identify deviations when the test suites are large. Prioritization of test cases is done to change the order of test case execution based on severity. In the proposed model-based approach, prioritized test cases are generated based on UML diagrams (sequence and state chart). The modified features are reflected in the model generation and in the number of states and transitions covered. Prioritized test cases are then clustered by severity using a dendrogram approach. This leads to a decrease in the time and cost of regression testing.
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between the existing techniques related to software quality engineering and thereby designing a framework for rating software quality.
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and controlling. In early
development phases, design complexity metrics are considered as useful indicators of software testing
effort and some quality attributes. Although many studies investigate the relationship between design
complexity and cost and quality, it is unclear what we have learned beyond the scope of individual studies.
This paper presents a systematic review of the influence of software complexity metrics on quality
attributes. We aggregated Spearman correlation coefficients from 59 different data sets in 57 primary
studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most
frequently investigated attributes. The Chidamber & Kemerer metric suite is most frequently used, but not all of
its metrics are good quality attribute indicators. Moreover, the impact of these metrics does not differ between
proprietary and open source projects. The results provide some implications for building quality models
across project types.
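Aggregating Spearman coefficients across data sets, as this review does, is commonly performed with Fisher's z transform weighted by sample size; the (rho, n) pairs below are illustrative, not the review's data:

```python
import math

# Pool per-dataset Spearman coefficients via Fisher's z transform,
# weighting each by (n - 3), then back-transform the weighted mean.
datasets = [(0.45, 50), (0.60, 30), (0.30, 120)]  # (rho, sample size)

def fisher_z(r):
    return 0.5 * math.log((1 + r) / (1 - r))

num = sum((n - 3) * fisher_z(r) for r, n in datasets)
den = sum(n - 3 for _, n in datasets)
pooled = math.tanh(num / den)   # back-transform to a correlation
print(round(pooled, 3))
```

The transform stabilizes the variance of the coefficients, so large studies pull the pooled estimate more strongly than small ones.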
PRODUCT QUALITY EVALUATION METHOD (PQEM): TO UNDERSTAND THE EVOLUTION OF QUAL... (ijseajournal)
Promoting quality within the context of agile software development is extremely important and
useful, both to improve the knowledge and decision-making of project managers, product owners, and
quality assurance leaders and to support communication between teams. In this context, quality
needs to be visible in a synthetic and intuitive way in order to facilitate the decision to accept or
reject each iteration within the software life cycle. This article introduces a novel solution called
Product Quality Evaluation Method (PQEM) which can be used to evaluate a set of quality characteristics
for each iteration within a software product life cycle. PQEM is based on the Goal-Question-Metric
approach, the standard ISO/IEC 25010, and the extension made of testing coverage in order to obtain the
quality coverage of each quality characteristic. The outcome of PQEM is a unique multidimensional value
that represents the quality level reached by each iteration of a product, as an aggregated measure. Even
though a single value is not the usual way of measuring quality, we believe it can be useful for easily
understanding the quality level of each iteration. An illustrative example of the PQEM method
was carried out on two iterations of a web and mobile application in the healthcare domain.
A single measure makes it possible to observe the evolution of the quality level reached across the
iterations of the product.
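The aggregated single value can be pictured as a weighted combination of per-characteristic quality coverage; the characteristics echo ISO/IEC 25010, but the weights, coverage figures, and the weighted-mean rule itself are assumptions for illustration, not PQEM's exact aggregation:

```python
# Combine per-characteristic quality coverage into one iteration-level
# value; weights and coverages are illustrative assumptions.
coverage = {"functional suitability": 0.92, "reliability": 0.85,
            "usability": 0.78, "security": 0.95}
weights  = {"functional suitability": 0.4, "reliability": 0.3,
            "usability": 0.1, "security": 0.2}

pqem = sum(coverage[c] * weights[c] for c in coverage)
print(round(pqem, 3))   # single value behind an accept/reject decision
```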
A FRAMEWORK FOR ASPECTUAL REQUIREMENTS VALIDATION: AN EXPERIMENTAL STUDY (ijseajournal)
Requirements engineering is a discipline of software engineering that is concerned with the
identification and handling of user and system requirements. Aspect-Oriented Requirements
Engineering (AORE) extends the existing requirements engineering approaches to cope with the
issue of tangling and scattering resulting from crosscutting concerns. Crosscutting concerns are
considered potential aspects and can lead to the phenomenon of the "tyranny of the dominant
decomposition". Requirements-level aspects are responsible for producing scattered and tangled
descriptions of requirements in the requirements document. Validation of requirements artefacts
is an essential task in software development. This task ensures that requirements are correct and
valid in terms of completeness and consistency, hence reducing development and maintenance cost
and establishing an approximately correct estimate of the effort and completion time of the
project. In this paper, we present a validation framework to validate the aspectual requirements
and the crosscutting relationships of concerns resulting from the requirements engineering
phase. The proposed framework comprises high-level and low-level validation applied to the
software requirements specification (SRS). The high-level validation validates the concerns with
stakeholders, whereas the low-level validation validates the aspectual requirements by
requirements engineers and analysts using a checklist. The approach has been evaluated in an
experimental study on two AORE approaches: the viewpoint-based AORE with ArCaDe, and a
lexical-analysis approach based on Theme/Doc. The results obtained from the study demonstrate
that the proposed framework is an effective validation model for AORE artefacts.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM... (cscpconf)
Computing software complexity in the requirement analysis phase of the software development
life cycle (SDLC) would be of enormous benefit for estimating the development and testing
effort required for yet-to-be-developed software. A relationship between source code and the
difficulty of developing it is also explored, in order to estimate the complexity of the proposed
software for cost estimation, manpower build-up, and code and developer evaluation. This paper
therefore presents a systematic and integrated approach to estimating software development and
testing effort on the basis of the improved requirement based complexity (IRBC) of the proposed
software, obtained from its software requirement specification (SRS). The IRBC measure serves
as the basis for estimating these software development activities, enabling developers and
practitioners to predict critical information about software development intricacies. For
validation purposes, the proposed measures are compared with various established and prevalent
practices proposed in the past. Finally, the results obtained validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, early-alarming, and compare well with other measures proposed in the past.
Agile software processes, such as extreme programming (XP), Scrum, Lean, etc., rely on best
practices that are considered to improve software development quality. It can be said that best
practices aim to induce software quality assurance (SQA) into the project at hand. Some
researchers of agile methods claim that because of the very nature of such methods,
quality in agile software projects should be a natural outcome of the applied method.
As a consequence, agile quality is expected to be more or less embedded in agile software
processes. Many reports support and evangelize the advantages of agile methods with respect to
quality assurance. Is this really so?
An ambitious goal of this paper is to present work done to understand how quality is, or should
be, handled. Like all survey papers, this paper attempts to summarize and organize research
results in the field of software engineering, specifically on the topic of agile methods in
relation to software quality.
Software testing aims to cut errors, reduce maintenance, and lower the cost of software
development. Many software development and testing methods have been used over the years to
improve software quality and reliability. A major problem in the field of software testing is
finding the best test cases for testing the software, and many kinds of testing methods are used
to construct good test cases. Testing is an important part of the software development cycle.
The testing process is not limited to the detection of errors in software; it also builds
assurance of proper functioning and helps to establish the functional and non-functional
characteristics of the software. Testing activities focus on the overall progress of the software.
Contributors to Reduce Maintainability Cost at the Software Implementation Phase (Waqas Tariq)
Software maintenance is important and difficult to measure. Maintenance is the most costly phase of software development. One of the most critical concerns in software development is the reduction of software maintainability cost based on the quality of the source code produced during the design step; however, there is a lack of quality models and measures that can help assess the quality attributes of the software maintainability process. Software maintainability suffers from a number of challenges, such as lack of source code understanding, poor quality of software code, and weak adherence to programming standards during maintenance. This work describes model-based factors for assessing software maintenance and explains the steps followed to obtain and validate them. Such a method can be used to reduce software maintenance cost. The research results will enhance the quality of the source code, increase software understandability, reduce maintenance time and cost, and give confidence in software reusability.
A study of various viewpoints and aspects software quality perspectiveeSAT Journals
Abstract: Software quality has been a very important research area of software engineering over the last two decades. The software engineering paradigm has been adopted by many organizations to develop high-quality software at an affordable cost. High-quality software is considered one of the key factors in the rapid growth of Global Software Development. Software metrics compute and evaluate quality characteristics and are used to make quantitative and qualitative decisions for risk assessment and reduction. Multiple stakeholders can view software quality from multiple angles and with respect to various aspects. In this paper we present multiple views of software quality with respect to various quality aspects. Key Words: Stakeholders, Functional aspect, Structural aspect, Process aspect, Metrics.
Software architecture for developers by Simon BrownCodemotion
The agile and software craftsmanship movements are pushing up the quality of the software systems we build, but there’s more we can do, because even a small amount of software architecture can prevent many of the problems that projects still face, particularly if the team seems to be more chaotic than self-organising. Successful software projects aren’t just about good code, and sometimes you need to step away from the IDE for a few moments to see the bigger picture. This session is about that bigger picture: software architecture, technical leadership and the balance with agility.
This document provides a non-exhaustive list of commonly available tools - along with their categories, supported languages, license, and web-site link - that can help in the process of refactoring to repay technical debt.
Research Activities: past, present, and future.Marco Torchiano
Public seminar for Professor Position at Politecnico di Torino
- past research products
- current research activities
- future outlook
October 19, 2018
ANALYSIS OF SOFTWARE QUALITY USING SOFTWARE METRICSijcsa
Software metrics have a direct link with measurement in software engineering. Correct measurement is a precondition in any engineering field, and software engineering is not an exception: as the size and complexity of software increase, manual inspection of software becomes a harder task. Most software engineers worry about the quality of software and about how to measure and enhance it. The overall objective of this study was to assess and analyse the software metrics used to measure the software product and process.
In this study, the researcher used a collection of literature from various electronic databases, available since 2008, to understand software metrics. The study finds that software quality is a means of measuring how software is designed and how well the software conforms to that design. Some of the variables we look for in software quality are correctness, product quality, scalability, completeness and absence of bugs. However, the quality standard used by one organization differs from that of others; for this reason it is better to apply software metrics, together with the most common current software metrics tools, to measure the quality of software and reduce subjectivity in the assessment of faults. The central contribution of this study is an overview of software metrics that illustrates the development of this area, along with a critical analysis of the main metrics found in the literature.
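One of the most common metrics such studies discuss is cyclomatic complexity. A minimal sketch of the idea, counting decision points in Python source with the standard `ast` module, is shown below; real metric tools count more constructs, so treat this as a simplified illustration rather than a reference implementation.

```python
# Rough cyclomatic-complexity estimate: 1 + number of decision points.
# Simplified sketch; production tools count additional constructs.
import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

code = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x > 10 and x < 100:
            return "medium"
    return "other"
"""
# Two ifs, one for-loop and one boolean operator -> 1 + 4 = 5.
print(cyclomatic_complexity(code))
```

Counting branch nodes this way makes the metric cheap to compute over a whole codebase, which is why it is a popular proxy for testing effort.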
From previous years' research it is concluded that testing plays a vital role in the development of a software product. As software testing is the principal approach to assuring the quality of software, most development effort is put into testing. But software testing is an expensive process and consumes a lot of time. Testing should therefore start as early as possible in development to control cost and schedule problems. Indeed, testing should be performed at every step of the software development life cycle (SDLC), the structured approach used in developing a software product. Software testing is a trade-off between budget, time and quality. Nowadays, testing has become a very important activity in terms of exposure, security, performance and usability. Hence, software testing faces a collection of challenges.
Determination of Software Release Instant of Three-Tier Client Server Softwar...Waqas Tariq
The quality of any software system depends mainly on how much time is spent on testing, what kinds of testing methodologies are used, how complex the software is, the amount of effort put in by software developers, and the type of testing environment, subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but testing cost will also increase. Conversely, if the testing time is too short, software cost can be reduced, provided the customers accept the risk of buying unreliable software. However, this will increase cost during the operational phase, since it is more expensive to fix an error during operation than during testing. It is therefore essential to decide when to stop testing and release the software to customers based on cost and reliability assessment. In this paper we present a mechanism for deciding when to stop the testing process and release the software to the end user, by developing a software cost model with a risk factor. Based on the proposed method, we specifically address how to decide when to stop testing and release software based on a three-tier client-server architecture, which helps software developers ensure on-time delivery of a software product that achieves a predefined level of reliability while minimizing cost. A numerical example illustrates the experimental results, showing significant improvements over conventional statistical models based on NHPP.
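The release-time trade-off described above can be sketched with a classic NHPP reliability-growth model. The snippet below uses a Goel-Okumoto mean value function and a simple cost structure (cheaper to fix faults in test than in the field, plus a per-hour testing cost); all parameter values are hypothetical, and the paper's three-tier cost model is more elaborate than this.

```python
# Sketch of a release-time decision under a Goel-Okumoto NHPP model.
# Parameters below (a, b, costs) are hypothetical, not from the paper.
import math

a, b = 100.0, 0.05                        # expected total faults, detection rate
c_test, c_field, c_hour = 1.0, 10.0, 0.5  # fix-in-test, fix-in-field, test cost/hour

def expected_cost(T):
    m = a * (1 - math.exp(-b * T))        # faults found (and fixed) by time T
    return c_test * m + c_field * (a - m) + c_hour * T

# Grid search for the cheapest integer release time in [0, 300] hours.
best_T = min(range(0, 301), key=expected_cost)
print(best_T)
```

Because fixing a fault in the field costs ten times more than in test, the cost curve first falls as testing removes faults and then rises as the hourly testing cost dominates; the minimum marks the release instant.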
Ranking The Refactoring Techniques Based on The External Quality AttributesIJRES Journal
The selection of appropriate decisions is a significant issue that can lead to more satisfactory results. The difficulty arises when there are several alternatives, all with the same chance of being selected. It is important, therefore, to establish priorities among these alternatives in order to choose the most appropriate one. The analytic hierarchy process (AHP) is capable of structuring decision problems and producing mathematically determined judgments built on knowledge and experience. This suggests that AHP should prove useful in agile software development, where complex decisions occur routinely. This paper presents an example of using the AHP to rank the refactoring techniques based on external code quality attributes. XP encourages applying refactoring where the code smells bad. However, refactoring may consume considerable time and effort. Therefore, to maximize the benefits of refactoring in less time and with less effort, AHP has been applied to this purpose. It was found that ranking the refactoring techniques helped the XP team to focus on the techniques that improve the code and the XP development process in general.
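The core AHP step — turning pairwise judgments into a priority vector — can be sketched briefly. The judgments below on three refactoring techniques are invented for illustration (Saaty's 1–9 scale), and the geometric-mean method used here is a common approximation of the principal-eigenvector computation, not necessarily the variant used in the paper.

```python
# AHP priority sketch: hypothetical pairwise judgments for three
# refactoring techniques, reduced to a priority vector by the
# geometric-mean (approximate eigenvector) method.
import math

techniques = ["Extract Method", "Rename", "Move Method"]
# comparisons[i][j]: how strongly technique i is preferred over j.
comparisons = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_priorities(matrix):
    geo = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

weights = ahp_priorities(comparisons)
ranking = sorted(zip(techniques, weights), key=lambda tw: -tw[1])
print(ranking[0][0])
```

In a real AHP study one would also compute the consistency ratio of the judgment matrix before trusting the resulting ranking.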
Prioritizing Test Cases for Regression Testing A Model Based ApproachIJTET Journal
Abstract— Testing is an important phase of quality control in the Software Development Life Cycle (SDLC). Various types of testing methodologies are involved in testing an application. Regression testing is a type of testing done to ensure that a modified feature or bug fix has not had an impact on the existing functionality. Defects are identified by executing the set of test cases. With regression test case selection alone, it is not possible to conclude how much retesting is required to identify deviations when the test suites are large. Prioritization of test cases changes the order of test case execution based on severity. In the proposed model-based approach, prioritized test cases are generated from UML diagrams (sequence and state chart). The modified features are reflected in the generated model and in the number of states and transitions covered. The prioritized test cases are then clustered according to severity using a dendrogram approach. This leads to a decrease in the time and cost of regression testing.
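The ordering-then-clustering idea can be shown in miniature. The sketch below simply sorts hypothetical test cases by a severity rank and buckets them by severity, a crude stand-in for the paper's dendrogram-based clustering; the test names and severities are invented.

```python
# Sketch: order test cases by the severity they exercise, then bucket
# them into severity clusters (a stand-in for dendrogram clustering).
# Test names and severities are hypothetical.
from itertools import groupby

tests = [
    ("TC1", "low"), ("TC2", "critical"), ("TC3", "medium"),
    ("TC4", "critical"), ("TC5", "low"),
]
rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}

# Sort by severity rank (then name), so groupby sees consecutive groups.
ordered = sorted(tests, key=lambda tc: (rank[tc[1]], tc[0]))
clusters = {sev: [name for name, _ in grp]
            for sev, grp in groupby(ordered, key=lambda tc: tc[1])}
print([name for name, _ in ordered])
```

Executing clusters in this order means the highest-severity regressions are exercised first, which is the practical payoff the abstract describes.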
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between the existing techniques in software quality engineering and thereby designing a framework for rating software quality.
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi...ijseajournal
Early prediction of software quality is important for better software planning and controlling. In early
development phases, design complexity metrics are considered as useful indicators of software testing
effort and some quality attributes. Although many studies investigate the relationship between design
complexity and cost and quality, it is unclear what we have learned beyond the scope of individual studies.
This paper presents a systematic review of the influence of software complexity metrics on quality
attributes. We aggregated Spearman correlation coefficients from 59 different data sets from 57 primary
studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the
most frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used,
but not all of its metrics are good indicators of quality attributes. Moreover, the impact of these metrics
does not differ between proprietary and open-source projects. The results provide some implications for
building quality models across project types.
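A common way to pool correlation coefficients across studies is the Fisher z-transform, weighting each study by its sample size. The sketch below illustrates that step with invented study data; the paper's tailored meta-analysis approach may differ in detail.

```python
# Sketch of the meta-analytic pooling step: combine per-study
# correlation coefficients via the Fisher z-transform, weighting
# each study by n - 3. Study values are hypothetical.
import math

studies = [(0.60, 40), (0.45, 120), (0.70, 25)]   # (correlation, sample size)

def pooled_correlation(studies):
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)       # back-transform the weighted mean z

r_pooled = pooled_correlation(studies)
print(round(r_pooled, 3))
```

The n - 3 weight comes from the approximate variance of the z-transformed correlation, so larger studies pull the pooled estimate toward their value.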
PRODUCT QUALITY EVALUATION METHOD (PQEM): TO UNDERSTAND THE EVOLUTION OF QUAL...ijseajournal
Promoting quality in the context of agile software development is extremely important and useful, not
only to improve the knowledge and decision-making of project managers, product owners, and quality
assurance leaders, but also to support the communication between teams. In this context, quality
needs to be visible in a synthetic and intuitive way in order to facilitate the decision of accepting or
rejecting each iteration within the software life cycle. This article introduces a novel solution called
Product Quality Evaluation Method (PQEM) which can be used to evaluate a set of quality characteristics
for each iteration within a software product life cycle. PQEM is based on the Goal-Question-Metric
approach, the standard ISO/IEC 25010, and the extension made of testing coverage in order to obtain the
quality coverage of each quality characteristic. The outcome of PQEM is a unique multidimensional value,
that represents, as an aggregated measure, the quality level reached by each iteration of a product. Even
though a single value is not the usual way of expressing quality, we believe this value can be used to
easily understand the quality level of each iteration. An illustrative example of the PQEM method
was carried out with two iterations from a web and mobile application, within the healthcare environment.
A single measure makes it possible to observe the evolution of the level of quality reached in the evolution
of the product through the iterations.
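The collapse of per-characteristic quality coverage into one iteration-level number can be sketched as follows. The characteristics, coverage figures, weights, and the plain weighted average are all assumptions for illustration; PQEM's actual aggregation may be defined differently.

```python
# Minimal sketch: collapse per-characteristic quality coverage into a
# single iteration-level value, in the spirit of PQEM. Characteristics,
# coverage values and weights are hypothetical.
coverage = {            # quality coverage per ISO/IEC 25010 characteristic
    "functional suitability": 0.90,
    "performance efficiency": 0.70,
    "usability": 0.80,
}
weights = {             # relative importance chosen by the team (sums to 1)
    "functional suitability": 0.5,
    "performance efficiency": 0.2,
    "usability": 0.3,
}

iteration_quality = sum(coverage[c] * weights[c] for c in coverage)
print(round(iteration_quality, 2))
```

Tracking this single value across iterations gives the synthetic, at-a-glance view of quality evolution that the abstract argues for.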
A FRAMEWORK FOR ASPECTUAL REQUIREMENTS VALIDATION: AN EXPERIMENTAL STUDYijseajournal
Requirements engineering is a discipline of software engineering that is concerned with the
identification and handling of user and system requirements. Aspect-Oriented Requirements
Engineering (AORE) extends the existing requirements engineering approaches to cope with the
issue of tangling and scattering that results from crosscutting concerns. Crosscutting concerns are
considered potential aspects and can lead to the phenomenon known as the “tyranny of the dominant
decomposition”. Requirements-level aspects are responsible for producing scattered and tangled
descriptions of requirements in the requirements document. Validation of requirements artefacts
is an essential task in software development. This task ensures that requirements are correct and
valid in terms of completeness and consistency, hence reducing development and maintenance cost and
establishing an approximately correct estimate of the effort and completion time of the project.
In this paper, we present a validation framework to validate the aspectual requirements
and the crosscutting relationships of concerns that result from the requirements engineering
phase. The proposed framework comprises a high-level and low-level validation to implement on
software requirements specification (SRS). The high-level validation validates the concerns with
stakeholders, whereas the low-level validation validates the aspectual requirement by
requirements engineers and analysts using a checklist. The approach has been evaluated using
an experimental study on two AORE approaches: the viewpoint-based AORE with ArCaDe and the
lexical-analysis-based Theme/Doc approach. The results obtained
from the study demonstrate that the proposed framework is an effective validation model for
AORE artefacts.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM...cscpconf
The ability to compute software complexity in the requirement analysis phase of the software
development life cycle (SDLC) would be of enormous benefit for estimating the required
development and testing effort for yet-to-be-developed software. A relationship between the
source code and the difficulty of developing it is also examined, in order to estimate the
complexity of the proposed software for cost estimation, manpower build-up, and code and
developer evaluation. This paper therefore presents a systematic and integrated approach
for the estimation of software development and testing effort on the basis of the improved
requirement-based complexity (IRBC) of the proposed software. The IRBC measure serves as the
basis for estimating these software development activities, enabling developers and
practitioners to predict critical information about the intricacies of software development
obtained from the software requirement specification (SRS) of the proposed software. Hence, this paper
presents an integrated approach for the prediction of software development and testing effort
using IRBC. For validation purposes, the proposed measures are systematically compared with
various established and prevalent practices proposed in the past. Finally, the results obtained validate the claim that the approaches discussed in this paper for the estimation of software development and testing effort in the early phases of the SDLC are robust, comprehensive and early-alarming, and compare well with other measures proposed in the past.
Agile software processes, such as extreme programming (XP), Scrum, Lean, etc., rely on best
practices that are considered to improve software development quality. It can be said that best
practices aim to induce software quality assurance (SQA) into the project at hand. Some
researchers of agile methods claim that because of the very nature of such methods,
quality in agile software projects should be a natural outcome of the applied method.
As a consequence, agile quality is expected to be more or less embedded in the agile software
processes. Many reports support and evangelize the advantages of agile methods with respect to
quality assurance. Is it really so?
An ambitious goal of this paper is to present work done to understand how quality is, or should
be, handled. Like all survey papers, this paper attempts to summarize and organize research
results in the field of software engineering, specifically on the topic of agile methods in
relation to software quality.
Refactoring: Improve the design of existing codeValerio Maggio
Software Engineering class on the main refactoring techniques and bad smells described in Fowler's famous book on this topic!
A MODEL TO COMPARE THE DEGREE OF REFACTORING OPPORTUNITIES OF THREE PROJECTS ...acijjournal
Refactoring is applied to software artifacts so as to improve their internal structure while preserving
their external behavior. Refactoring is an uncertain process, and it is difficult to assign units of
measurement to it. The amount of refactoring that can be applied to source code depends on the skills of
the developer. In this research, we have treated refactoring as a quantified object on an ordinal scale of
measurement. We have proposed a model for determining the degree of refactoring opportunity in
given source code. The model is applied to three projects collected from a company. UML diagrams
are drawn for each project. The values of the source-code metrics that are useful in determining the
quality of code are calculated for each UML diagram of the projects. Based on the nominal values of
the metrics, each relevant UML diagram is placed on an ordinal scale. A machine learning tool, Weka,
is used to analyze the dataset, imported in the form of an ARFF file, produced from the three projects.
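The metric-to-ordinal-scale mapping the abstract describes amounts to thresholding. A minimal sketch, with invented thresholds for a WMC-style (weighted methods per class) metric, might look like this; the paper's actual cut points are not given, so these are assumptions.

```python
# Sketch: place a metric value on an ordinal scale using thresholds,
# a stand-in for the paper's nominal-to-ordinal mapping. The
# thresholds for a WMC-style metric are hypothetical.
def to_ordinal(value, thresholds=(5, 10, 20)):
    """Return 'low', 'medium', 'high' or 'very high' for a metric value."""
    labels = ["low", "medium", "high", "very high"]
    for label, limit in zip(labels, thresholds):
        if value <= limit:
            return label
    return labels[-1]

print([to_ordinal(v) for v in (3, 8, 25)])
```

Once every metric is expressed on such a scale, the per-diagram rows can be exported (e.g. as an ARFF file) for analysis in a tool like Weka.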
Software quality model based on development team characteristicsIJECEIAES
Many factors have a significant impact on producing high-quality software products, and development team members are among the most important. Paying attention to quality from this perspective would be a good innovation in the software development industry. Given that team members play a very important role in software products, this study focuses specifically on team characteristics in software product quality and provides a qualitative model based on them. The required data were collected through observations and interviews with project managers and development team members in several companies under study. The data were then analyzed through hierarchical analysis. According to the results, the use of this model led to an improvement in the software development process with which the team members were satisfied. Time management also improved, and the customer expressed satisfaction with the use of this model. Finally, data analysis showed that this model may lead to faster product delivery.
Changeability has a direct relation to software maintainability and has a major role in providing high quality maintainable and trustworthy software. The concept of Changeability is a major factor when we design and develop software and its constituents. Developing programs and its constituent components with good changeability continually improves and simplifies test operations and maintenance during and after implementation. It encourages and supports improvement in software quality at design stage in the development of software. The research here highlights the importance of changeability broadly and also as an important aspect of software quality.
In this paper a correlation between the major attributes of object-oriented design and changeability has been ascertained. A changeability evaluation model using multiple linear regression has been proposed for object-oriented design. The validity of the proposed changeability evaluation model is demonstrated by means of experimental tests, and the results show that the model is highly significant.
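A multiple-linear-regression model of the kind described can be sketched on toy data. The data points, the two predictor metrics (coupling and depth of inheritance), and the pure-Python normal-equations solver below are illustrative only; the paper's metrics and fitted coefficients are not reproduced here.

```python
# Toy changeability model fitted by multiple linear regression on two
# object-oriented design metrics (coupling, DIT). Data and solver are
# illustrative only.

def fit_linear(X, y):
    """Solve (A^T A) beta = A^T y for beta, with an intercept column."""
    A = [[1.0] + list(row) for row in X]
    n = len(A[0])
    # Build the normal equations.
    M = [[sum(a[i] * a[j] for a in A) for j in range(n)] for i in range(n)]
    v = [sum(a[i] * yi for a, yi in zip(A, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [m - f * c for m, c in zip(M[r], M[col])]
            v[r] -= f * v[col]
    beta = [0.0] * n
    for i in reversed(range(n)):
        beta[i] = (v[i] - sum(M[i][j] * beta[j]
                              for j in range(i + 1, n))) / M[i][i]
    return beta

# Data generated from changeability = 10 - 0.5*coupling - 1.0*DIT,
# so the fit should recover those coefficients exactly.
X = [(2, 1), (4, 2), (6, 1), (8, 3)]
y = [10 - 0.5 * c - 1.0 * d for c, d in X]
beta = fit_linear(X, y)
print([round(b, 3) for b in beta])
```

Because the toy data are noise-free, the recovered intercept and slopes match the generating formula; on real metric data one would also report significance statistics, as the paper does.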
EMPIRICALLY VALIDATED SIMPLICITY EVALUATION MODEL FOR OBJECT ORIENTED SOFTWAREijseajournal
Software program developers need to go through a program from beginning to end and understand its source code and other software attributes. The complexity and length of a program strongly affect many design-level quality attributes, specifically simplicity, testability and software
maintainability. An incomplete design of any software generally leads to misunderstandings and ambiguities
and therefore to faulty design and development results. This mainly arises from the absence of
appropriate observation, design and development control. High-level design and program simplicity are,
however, essential, and are among the vital attributes of the system development cycle. This research paper highlights the impact and significance of design-level software simplicity in general, and as one of the most useful key factors, or indices, of software quality assurance and testing. This research work makes three major contributions. First, a valuable
relationship between the software design quality factor simplicity and related object-oriented design
properties is established. Second, using corresponding design-level metrics, a
simplicity evaluation model for object-oriented software is developed. Subsequently, the developed
simplicity model is empirically validated by means of experimental data.
Although there has been extensive study of delivering, increasing and maintaining software quality, there has not been enough of an aide-mémoire on rating a software product's quality. This study presents the literature review so far and also sketches the scope of, and need for, the evolution of a rating system for software quality in the future.
AN IMPROVED REPOSITORY STRUCTURE TO IDENTIFY, SELECT AND INTEGRATE COMPONENTS...ijseajournal
An ultimate goal of software development is to build high-quality products. The customers of the software
industry always demand high-quality products, delivered quickly and cost-effectively. Component-based
development (CBD) is the most suitable methodology for software companies to meet the demands of the
target market. To adopt CBD, software development teams have to customize generic components available
in the market, and it is very difficult for development teams to choose suitable components from among
the millions of third-party and commercial off-the-shelf (COTS) components. On the other hand, the
development of an in-house repository is tedious and time-consuming. In this paper, we propose an easy
and understandable repository structure that provides helpful information about stored components: how
to identify, select, retrieve and integrate them. The proposed repository also provides previous
assessments by developers and end users of the selected component. The proposed repository will help
software companies by reducing customization effort, improving the quality of the developed software
and preventing the integration of unfamiliar components.
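The kind of repository record the abstract proposes — component metadata plus prior assessments used during selection — can be sketched minimally. The field names, components and ratings below are assumptions for illustration, not the paper's actual schema.

```python
# Minimal sketch of a component-repository record: metadata plus prior
# assessments, queried during component selection. Field names and data
# are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    domain: str
    interface: str
    assessments: list = field(default_factory=list)   # (rating, comment) pairs

    def average_rating(self):
        if not self.assessments:
            return None
        return sum(r for r, _ in self.assessments) / len(self.assessments)

repo = [
    Component("PdfRenderer", "documents", "render(path) -> bytes",
              [(4, "easy to integrate"), (5, "stable")]),
    Component("CsvParser", "data", "parse(text) -> rows", [(3, "ok")]),
]

# Select candidates by domain, preferring better-assessed components.
candidates = sorted((c for c in repo if c.domain == "documents"),
                    key=lambda c: -(c.average_rating() or 0))
print(candidates[0].name)
```

Storing prior assessments alongside the metadata is what lets a later team avoid unfamiliar or poorly rated components, which is the repository's stated benefit.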
7.significance of software layered technology on size of projects (2)EditorJST
The objective of software engineering is to build software projects within budget, on time and with the required quality. Software engineering is a layered paradigm comprising process, methods, tools and a quality focus as the bedrock for developing the product. Software firms build software projects of varying sizes, constrained by resources, time and functional requirements. The impact of the software engineering layered technology may vary with the size of the project during development. Quantitative evaluation of a layer's significance relative to project size is a complex task because it involves a collective decision over multiple criteria. The Analytic Hierarchy Process (AHP) provides an effective quantitative approach for finding the significance of the software layered technology with respect to project size. This paper presents estimations using a quantitative approach on real-time data collected from several software firms. These findings help achieve better project management with respect to cost, time and resources when building a software project.
The analytic hierarchy process (AHP) has been applied in many fields and especially to complex
engineering problems and applications. The AHP is capable of structuring decision problems and finding
mathematically determined judgments built on knowledge and experience. This suggests that AHP should
prove useful in agile software development where complex decisions occur routinely. In this paper, the
AHP is used to rank the refactoring techniques based on the internal code quality attributes. XP
encourages applying refactoring where the code smells bad. However, refactoring may consume considerable
time and effort. So, to maximize the benefits of refactoring in less time and with less effort, AHP has
been applied to achieve this purpose. It was found that ranking the refactoring techniques helped the XP
team to focus on the techniques that improve the code and the XP development process in general.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Computer Science & Information Technology (CS & IT)
Refactoring changes the values of software metrics and hence the software quality attributes. Not
all refactoring methods improve software quality, so there is a need to identify the refactoring
methods that do improve particular quality attributes [6].
The aim of this paper is to find the effect of refactoring methods on software metrics. From the
relation between software metrics and external quality attributes, a direct relation between
refactoring methods and software quality attributes is derived.
The paper analyzes the effect of refactoring on software quality attributes and classifies the
refactoring methods according to particular desired quality attributes and metric sets.
The study also shows that refactoring does not always improve software quality: improving one
attribute may come at the cost of another.
This paper is organized as follows. Section 2 reviews the related literature. Sections 3 and 4
describe the research data and the refactoring methods, respectively. Section 5 presents the
analysis and results. Sections 6 and 7 discuss the threats to validity and the conclusion,
respectively.
2. RELATED WORK
The goal of this paper is to find the effect of refactoring on external software quality attributes
using software metrics. In this section we review prior studies on the effect of refactoring on
software quality attributes.
Cinneide, Boyle and Moghadam [1] studied the effect of automated refactoring on the testability
of software. Their aim was to find refactoring methods that improve cohesion metrics and hence
the testability of the software. The Code-Imp platform was used for the refactoring, and the
metrics available in the tool were applied. A survey was conducted with volunteers; further
testing is required to validate that automated refactoring improves the testability of the
software.
Sokal, Aniche and Gerosa [2] took data from Apache software projects and applied refactoring to
it. The authors randomly selected fifty refactorings and classified them into two groups
according to their effect on cyclomatic complexity, analyzing the change in the code after
refactoring. Their study shows that refactoring does not necessarily decrease cyclomatic
complexity, but it does increase the maintainability and readability of the program.
Alshayeb [6] assessed the effect of refactoring on external software quality attributes. The
quality attributes considered were adaptability, maintainability, understandability, reusability
and testability. The code for refactoring was taken from the open-source projects UMLTool,
RabtPad and TerpPaint. The author applied different types of refactoring to the code and studied
their effect on the software metrics; from the relation between the software metrics and the
external quality attributes, the effect of refactoring was derived. The author found an
inconsistent trend in the relationship between refactoring methods and external quality
attributes.
Elish and Alshayeb [3] studied the effect of refactoring on the testability of software. They
used five refactoring methods: Extract Method, Extract Class, Consolidate Conditional Expression,
Encapsulate Field and Hide Method. The Chidamber and Kemerer metrics suite [17] was used to
obtain the software metric values. The authors concluded that all the refactoring methods they
used increase testability, except the Extract Class method.
Kataoka [5] used coupling metrics to find the effect of refactoring on the maintainability of
software. He proposed a quantitative evaluation method to measure the maintainability enhancement
achieved by program refactoring, which helps in choosing the appropriate refactoring.
Stroggylos [28] analyzed the version control logs of some popular open-source software systems
and measured the effect of refactoring on software metrics to evaluate its impact on quality. The
results showed an increase in the values of the LCOM, Ca and RFC metrics, which degrades software
quality. They concluded that refactoring does not always improve software quality.
Shrivastava [29] presented a case study on improving software quality through refactoring. They
took open-source code and, using the Eclipse refactoring tool, produced three versions of
refactored code. The results showed that the size and complexity of the software decrease with
refactoring and hence maintainability increases.
The study of the effect of refactoring on software quality attributes has a wide scope. Fowler
[7] has catalogued 70 refactoring methods, and each of them can be linked to various software
quality attributes. Our focus is therefore on finding the effect of fourteen randomly chosen
refactoring methods on various object-oriented metrics and hence on the external software quality
attributes.
The following quality attributes will be used in the study:
Maintainability: It is defined as the ease with which modifications can be made to a set of
attributes of the software. The modifications may range from requirements to design and may
concern correction, prevention or adaptation [6].
Reusability: It is defined as the degree to which the software can be reused in other components
or in other software systems with little adaptation [6].
Testability: It is defined as the degree to which the software supports the testing process. High
testability requires less testing effort.
Understandability: It is defined as the ease with which a user can understand the meaning of
software components [6].
Fault proneness: It is defined as the degree to which a program module is prone to bugs and
malfunction.
Completeness: Completeness of a program refers to the presence of all the necessary components,
resources and possible execution pathways of the program [9].
Stability: It is defined as the ability of the program to bear the risk of unexpected
modifications [23].
Complexity: In an interactive system, it is defined as the difficulty of performing tasks such as
coding, debugging, implementing and testing the software.
Adaptability: It is defined as the ability of the software to tolerate changes in the system
without intervention from any external resource [26].
3. RESEARCH DATA
The classes used as research data in this paper are from the open-source project JHotDraw 7.0.6
[10], whose authors are Erich Gamma and Thomas Eggenschwiler. It was developed as a powerful
design exercise whose design is based on several well-known design patterns. We took 120 classes
of JHotDraw 7.0.6 and applied the refactoring methods to them.
The aims of making JHotDraw an open-source project are:
• To refactor and hence enhance the existing code.
• To identify new refactorings and design patterns.
• To serve as an example of a well-designed and flexible framework.
4. REFACTORING METHODS
The refactoring methods applied in this paper are taken from the catalog defined by Fowler [7].
The following refactoring methods are applied [12, 18]:
1. Extract Delegate: This refactoring extracts some of the methods and fields from a given
class and adds them to a newly created class, whose name is supplied by the user. It
resolves the problem of a class that is too large and performs too much functionality.
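As a hypothetical illustration (the Order and OrderPrinter names below are invented, not taken from JHotDraw), Extract Delegate might move printing behaviour out of a class into a new delegate class named by the user:

```java
// Hypothetical sketch of Extract Delegate: the printing logic that used to
// live inside Order is moved into a newly created class, OrderPrinter,
// and Order forwards the call to an instance of it.
class OrderPrinter {                                      // the newly created class
    String print(double total) {
        return "total: " + total;
    }
}

class Order {
    private final double total;
    private final OrderPrinter printer = new OrderPrinter(); // delegate field

    Order(double total) { this.total = total; }

    String print() {
        return printer.print(total);                      // delegated call
    }
}
```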
2. Encapsulate Field: This refactoring changes the access of a field from public to private
and generates getter and setter methods for that field.
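A minimal hypothetical example (the Person class and its field are invented here): a public field becomes private and is reached only through generated accessors:

```java
// Hypothetical sketch of Encapsulate Field: before the refactoring the field
// was declared "public String name;"; afterwards it is private and reachable
// only through the generated getter and setter.
class Person {
    private String name;                                   // access narrowed from public

    public String getName() { return name; }               // generated getter
    public void setName(String name) { this.name = name; } // generated setter
}
```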
3. Replace Inheritance with Delegation: This refactoring removes a class from an inheritance
hierarchy while maintaining the functionality of the parent class. A private inner class
that inherits from the former superclass is created, and selected methods of the parent
class are invoked through this new inner class.
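For illustration only (the classes are invented, and a plain delegate field is used instead of an inner class), a Stack that previously extended a list class can instead hold a private delegate and forward just the operations it needs:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of Replace Inheritance with Delegation: Stack no longer
// inherits list behaviour; it delegates to a private list and exposes only
// the operations it actually needs.
class Stack {
    private final List<Object> items = new ArrayList<>(); // former superclass, now a delegate

    public void push(Object o) { items.add(o); }          // forwarded operation
    public Object pop() { return items.remove(items.size() - 1); }
    public int size() { return items.size(); }
}
```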
4. Replace Constructor with Builder: This refactoring hides a constructor, replacing its
usages with references to a newly generated builder class or to an existing builder class.
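As a hedged sketch (the Pizza class and its fields are invented for illustration), the constructor becomes private and callers go through a builder instead:

```java
// Hypothetical sketch of Replace Constructor with Builder: the constructor is
// private, and a builder class assembles the arguments before creating the
// instance.
class Pizza {
    private final int size;
    private final boolean cheese;

    private Pizza(int size, boolean cheese) {             // hidden constructor
        this.size = size;
        this.cheese = cheese;
    }

    static class Builder {
        private int size = 12;                            // defaults chosen for the sketch
        private boolean cheese = false;

        Builder size(int size) { this.size = size; return this; }
        Builder cheese(boolean cheese) { this.cheese = cheese; return this; }
        Pizza build() { return new Pizza(size, cheese); } // the only way to create a Pizza
    }

    int getSize() { return size; }
    boolean hasCheese() { return cheese; }
}
```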
5. Extract Interface: This refactoring creates a new interface from selected members of an
existing class, struct or interface.
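A small hypothetical example (Billable and Contractor are invented names): a member of an existing class is lifted into a new interface so that clients can depend on the abstraction:

```java
// Hypothetical sketch of Extract Interface: the getRate() member of an
// existing class is pulled up into a new interface that the class implements.
interface Billable {
    double getRate();
}

class Contractor implements Billable {
    @Override
    public double getRate() { return 50.0; }              // existing member, now behind the interface
}
```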
6. Extract Method: This refactoring creates a new method from a selected code fragment of an
existing method of the class.
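For example (a hypothetical Invoice class, loosely following Fowler's classic printOwing example), a cohesive fragment is moved into a new method with an intention-revealing name:

```java
// Hypothetical sketch of Extract Method: the banner-building fragment of
// printOwing() becomes its own method, printBanner().
class Invoice {
    private final double amount;

    Invoice(double amount) { this.amount = amount; }

    String printOwing() {
        return printBanner() + "amount: " + amount;       // call replaces the inline fragment
    }

    private String printBanner() {                        // method created by the refactoring
        return "*** Customer Owes ***\n";
    }
}
```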
7. Push Members Down: This refactoring relocates class members into a subclass or
sub-interface in order to clean up the class hierarchy.
8. Move Method: This refactoring moves a method from one class to another. The need to move
a method arises when it is used more by another class than by the class in which it is
defined.
9. Extract Parameter: This refactoring moves a selected set of parameters of a method into a
wrapper class. The need for it arises when the number of parameters of a method is too
large. The refactoring can also be performed by delegating via an overloaded method.
10. Safe Delete: This refactoring safely removes a class, method, field, interface or
parameter from the code, making the necessary corrections while deleting.
11. Inline: The Inline Method refactoring puts the method's body into the body of its
callers.
12. Static: This refactoring converts a non-static method into a static one, making the
method's functionality available to other classes without creating a new class instance.
13. Wrap Method Return Value: This refactoring selects a method and creates a wrapper class
for its return values.
14. Replace Constructor with Factory Method: This refactoring hides the constructor and
replaces it with a static method that returns a new instance of the class.
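A minimal hypothetical sketch (Employee is an invented class): the constructor becomes private and a static factory method creates instances instead:

```java
// Hypothetical sketch of Replace Constructor with Factory Method: callers use
// the static create(...) method instead of the now-hidden constructor.
class Employee {
    private final String name;

    private Employee(String name) { this.name = name; }   // hidden constructor

    static Employee create(String name) {                 // replacement factory method
        return new Employee(name);
    }

    String getName() { return name; }
}
```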
The tool used for performing the refactorings and measuring the software metric values is
IntelliJ IDEA, an IDE for Java and a reliable refactoring tool. It is aware of the code and also
gives suggestions as tips. The refactoring methods referenced from Fowler [7] are available in
this tool [12], and all the object-oriented metrics can be computed with it at the module,
package, class, method and project level. The tool is readily available and easy to use. Table 1
shows the "Wrap Method Return Value" refactoring performed using the tool.
Table 1. Example of "Wrap Method Return Value" refactoring using the IntelliJ IDEA tool.

Before Refactoring:

    public newadded getScrollPane() {
        if (desktop.getParent() instanceof JViewport) {
            JViewport viewPort = (JViewport) desktop.getParent();
            if (viewPort.getParent() instanceof JScrollPane)
                return new newadded((JScrollPane) viewPort.getParent());
        }
        return new newadded(null);
    }

After Refactoring:

    public noble getScrollPane() {
        if (desktop.getParent() instanceof JViewport) {
            JViewport viewPort = (JViewport) desktop.getParent();
            if (viewPort.getParent() instanceof JScrollPane)
                return new noble(new newadded((JScrollPane) viewPort.getParent()));
        }
        return new noble(new newadded(null));
    }

An inner class named "noble" is created, and then the refactoring is performed in the tool.
5. ANALYSIS AND RESULTS
The focus of this paper is to find the effect of refactoring methods on software quality
attributes and hence to categorize the refactoring methods according to particular quality
attributes and software metric domains. The values of the object-oriented software metrics are
measured before and after refactoring, and the result is analyzed according to the changes in
these values.
To focus our study on the categories of refactoring methods, we set up the following hypotheses.
For each hypothesis, H0 represents the null hypothesis and H1 represents the alternative
hypothesis of H0.
Hypothesis 1
H0: Refactoring does not improve software adaptability.
H1: Refactoring improves software adaptability.
Hypothesis 2
H0: Refactoring does not improve software maintainability.
H1: Refactoring improves software maintainability.
Hypothesis 3
H0: Refactoring does not improve software understandability.
H1: Refactoring improves software understandability.
Hypothesis 4
H0: Refactoring does not improve software reusability.
H1: Refactoring improves software reusability.
Hypothesis 5
H0: Refactoring does not improve software testability.
H1: Refactoring improves software testability.
Hypothesis 6
H0: Refactoring does not decrease software complexity.
H1: Refactoring decreases software complexity.
Hypothesis 7
H0: Refactoring does not make software less fault prone.
H1: Refactoring makes software less fault prone.
Hypothesis 8
H0: Refactoring does not improve software stability.
H1: Refactoring improves software stability.
Hypothesis 9
H0: Refactoring does not improve software completeness.
H1: Refactoring improves software completeness.
To validate the hypotheses of this paper, the relation between the values of the software metrics
and the refactoring methods is given below in Table 2, where '↓' denotes a decrease in the value
of a metric, '↑' an increase, and '−' no change.
Table 2. Relation between refactoring methods and software quality metrics.

Refactoring Method                        WMC v(G) LOC NOM CBO LCOM DIT MPC Cavg AHF AIF CF MHF MIF RFC
Extract Delegate                           ↑   ↓   ↓   ↑   ↑   ↓   ↓   ↑   ↑    ↓   ↓  ↓   ↑   ↓   ↑
Encapsulate Field                          ↑   ↑   ↑   ↑   ↑   ↑   ↑   ↑   ↓    ↑   ↓  −   ↓   ↑   ↑
Inheritance to Delegation                  ↑   ↓   ↑   ↑   ↑   ↓   ↓   ↓   ↑    ↑   ↓  ↑   ↑   ↓   ↑
Extract Interface                          ↓   ↓   ↑   ↑   ↑   ↑   ↑   ↑   ↑    −   ↑  ↓   ↓   ↓   ↑
Extract Method                             ↑   ↓   ↑   ↑   ↑   ↑   ↑   −   ↓    −   −  −   ↑   ↓   ↑
Push Method Down                           ↓   ↓   ↑   ↑   ↓   ↑   ↑   ↑   ↓    ↑   ↓  ↓   ↑   ↓   ↑
Move Method                                ↑   ↓   ↑   −   ↑   ↑   ↑   ↑   ↑    −   ↑  −   ↑   −   ↑
Extract Parameter                          ↑   ↓   ↑   ↑   ↑   −   ↑   −   ↓    −   −  −   ↑   ↓   ↑
Safe Delete                                ↓   ↓   ↓   ↓   ↓   ↓   ↓   ↓   ↓    ↓   ↓  ↑   ↓   ↓   ↓
Inline                                     ↑   ↑   ↑   ↑   ↑   ↑   −   ↑   ↑    ↑   ↓  ↑   ↓   ↓   ↓
Static                                     −   ↑   ↑   ↑   −   ↑   ↑   −   ↑    −   ↓  −   −   ↑   ↓
Wrap Method Return Value                   ↓   ↓   ↑   ↑   ↓   ↓   ↓   ↓   ↓    ↑   −  ↓   ↑   ↓   ↓
Replace Constructor with Factory Method    ↑   ↓   ↑   ↑   ↓   ↑   ↑   −   ↓    −   ↓  ↓   ↑   ↑   ↓
Replace Constructor with Builder           ↓   ↓   ↑   ↑   ↓   ↓   ↓   ↓   ↓    ↑   ↓  ↓   ↑   ↓   ↓
After analyzing Table 2, it can be concluded that the following methods give desirable results
for every metric [20] and hence improve the quality attributes of the software:
1. Wrap Method Return Value
2. Static method
As indicated in the hypotheses, we attempt to find the refactoring methods that improve a
particular category of software metrics. The metrics are divided according to the type of impact
they have on the software. Table 3 summarizes the relation between the metrics and their
categories.
Table 3. Relation between metrics and their categories.

Category            Attributes   Method           Coupling/Cohesion   Inheritance
MOOD [27]           AHF, AIF     MHF, MIF, PF                         MIF, AIF
C & K [17]          LCOM         LCOM, WMC, RFC   CBO                 DIT
Li and Henry [19]                MPC, NOM         MPC
The various refactoring methods have varied effects on the metric values. We therefore classify
the refactoring methods according to the desirable effects they have on the categories of Table
3: attribute-, method-, coupling/cohesion- and inheritance-based metrics. The analysis result
derived from Table 2 and Table 3 is shown in Table 4.
Table 4. Desirable refactorings for each category of metrics.

Category            Refactoring Method
Attributes          Inheritance to Delegation, Wrap Method Return Value and Replace
                    Constructor with Builder
Methods             Wrap Method Return Value
Coupling/Cohesion   Safe Delete, Replace Constructor with Builder, Replace Constructor with
                    Factory Method and Wrap Method Return Value
Inheritance         Extract Delegate, Inline, Safe Delete and Inheritance to Delegation
We used previously published research to correlate the software metrics with the external quality
attributes. The work of Dandashi [9] was used to assess the adaptability, maintainability,
understandability and reusability quality attributes. Table 5 summarizes the relationship between
software metrics and external quality attributes, which helps to determine the direct effect of
refactoring on the external software quality attributes.
In this relationship, (+) denotes a positive correlation (the attribute improves as the metric
value increases), (-) denotes a negative correlation (the attribute degrades as the metric value
increases), and (0) denotes no effect.
Table 5. Relation between metrics and external quality attributes.

External Quality Attribute        DIT  CBO  RFC  WMC  NOM  LOC  LCOM
Adaptability [9,6]                 -    -    -    +    0    +    0
Maintainability [22,16,9,25,6]     -    -    -    +    -    +    -
Understandability [9,16,6]         -    -    -    +    0    +    -
Reusability [22,9,16,6]            +    -    -    +    0    +    -
Testability [21,22,6]              -    -    -    -    -    -    -
Complexity [21]                    +    +    +    +    +
Fault Proneness [22,24]            +    +    +    +    +    +
Stability [23]                     -    -    -    -    -
Completeness [9]                   -    -    -    +    -    +    0
To validate the hypotheses, we combined the results of Table 2 with the correlations of Table 5
and came to the following conclusions:

Table 6. Particular refactoring methods for certain quality attributes.

Refactoring Method                  Quality Attribute
Wrap Method Return Value            Testability
Safe Delete                         Adaptability, understandability, lower fault
                                    proneness and stability
Replace Constructor with Builder    Stability

1. "Wrap Method Return Value" refactoring improves the testability of the program.
2. "Safe Delete" makes the program more adaptable, more understandable, less fault prone and
more stable.
3. "Replace Constructor with Builder" makes the program more stable.
From Table 2 and Table 5 we also found that the results for the other quality attributes are
inconsistent: some metric values have to be ignored for the quality to improve to a certain
extent.
1. "Wrap Method Return Value" makes the program less fault prone if the increased LOC is
ignored.
2. "Wrap Method Return Value" makes the system more adaptable when WMC is ignored.
Summing up the analysis, we conclude from Table 6 that there are a few refactoring methods that
improve certain quality attributes; hence the null hypotheses of Hypothesis 1, Hypothesis 3,
Hypothesis 5, Hypothesis 7 and Hypothesis 8 are rejected.
From the analysis of Table 2, the "Wrap Method Return Value" refactoring changes most of the
metric values to a desirable state and hence, to a certain extent, improves every quality
attribute. Therefore the null hypotheses of Hypothesis 2, Hypothesis 4, Hypothesis 6 and
Hypothesis 9 are also rejected.
6. THREATS TO VALIDITY
There are some limitations to generalizing these results. Several threats to validity arise
because only a few selected classes were taken from the project; the results may vary when the
refactorings are applied to the whole system or when the scenario changes. We applied the
refactorings at the class level, not at the system level.
Another possible threat is the correlation between the internal metrics and the external software
quality attributes: we did not validate it ourselves but directly took the results of previous
research.
7. CONCLUSION
Refactoring methods are applied to improve software quality attributes, but the effect of
refactoring on a particular quality attribute is still ambiguous. In this paper, we applied
fourteen refactoring methods and observed that their effects on the different software quality
attributes vary. We classified the refactoring methods that improve the sets of metrics belonging
to the attribute, method, coupling/cohesion and inheritance categories of software. We focused on
the external quality attributes reusability, complexity, maintainability, testability,
adaptability, understandability, fault proneness, stability and completeness, and determined the
effect of the refactoring methods on them. The results show that a few refactoring methods
improve particular quality attributes, which can help developers choose among them. Our work
concludes that refactoring improves software quality, but developers need to select the
particular refactoring method for the desired quality attribute.
Future research can test and verify these results on bigger projects and may arrive at a general
relation between refactoring and quality attributes.
ACKNOWLEDGMENT
I would like to acknowledge the support and excellent guidance of my teachers throughout this
work. I am thankful to the university resource center for providing the resources for the work.
I would also like to thank my parents and friends for all their support during my studies.
REFERENCES
[1] Cinnéide, Mel Ó., Dermot Boyle, and Iman Hemati Moghadam, (2011), "Automated refactoring for
testability" , In Software Testing Verification and Validation Workshops (ICSTW), IEEE Fourth
International Conference, pp. 437-443, IEEE.
[2] Francisco Zigmund Sokal, Mauricio Finavaro Aniche and Marco Aurelio Gerosa, (2013), “Does The
Act Of Refactoring Really Make Code Simpler?, A Preliminary Study”.
[3] Elish, Karim O., and Mohammad Alshayeb. (2009), "Investigating the Effect of Refactoring on
Software Testing Effort" In Software Engineering Conference, APSEC'09, Asia-Pacific, pp. 29-34,
IEEE.
[4] Bruntink, Magiel, and Arie van Deursen, (2006), "An empirical study into class testability", Journal
of systems and software 79, no. 9, pp. 1219-1232.
[5] Kataoka, Y., Imai, T., Andou, H. and Fukaya, T., (2002), "A quantitative evaluation of
maintainability enhancement by refactoring", Software Maintenance, Proceedings International
Conference, pp.576-585.
[6] Mohammad Alshayeb, (2009), “Empirical Investigation Of Refactoring Effect On Software Quality”,
Volume 51, Issue 9, Pages 1319-1326, Elsevier.
[7] M.Fowler, K. Beck, J. Brant, W.Opdyke and D. Roberts, (1999), “Refactoring: Improving the Design
of Existing Code”, Addison Wesley.
[8] W.C Wake, (2003), “Refactoring Workbook”, Addison Wesley.
[9] Dandashi Fatma, (2002) "A method for assessing the reusability of object-oriented code using a
validated set of automated measurements", In Proceedings of the 2002 ACM symposium on applied
computing, pp. 997-1003, ACM.
[10] www.jhotdraw.org.
[11] www.sourceforge.net.
[12] www.jetbrains.com
[13] Opdyke, William F., (1990) "Refactoring: An aid in designing application frameworks and evolving
object-oriented systems", In Proc. 1990 Symposium on Object-Oriented Programming Emphasizing
Practical Applications (SOOPPA).
[14] IEEE, (1991), Std. 610.12 – IEEE Standard Glossary of Software Engineering Terminology, The
Institute of Electrical and Electronics Engineers.
[15] ISO/IEC, (1991), 9126 Standard, Information Technology – Software Product Evaluation – Quality
Characteristics and Guidelines for their Use, Switzerland, International Organization for
Standardization.
[16] Kayarvizhy, N. and Kanmani, S., (2011) "Analysis of quality of object oriented systems using object
oriented metrics", Electronics Computer Technology (ICECT), 3rd International Conference on,
vol. 5, pp. 203-206.
[17] Chidamber, S.R. and Kemerer, C.F., (1994) "A metrics suite for object oriented design", Software
Engineering, IEEE Transactions on, vol. 20, no. 6, pp. 476-493.
[18] www.refactoring.com.
[19] Li, W. and Henry, S., (1993) "Maintenance metrics for the object oriented paradigm", Software
Metrics Symposium, Proceedings, First International, pp. 52-60.
[20] Daniel Rodriguez and Rachel Harrison, (2001), "An Overview of Object-Oriented Design Metrics".
[21] Khalid, Sadaf, Saima Zehra and Fahim Arif, (2010) "Analysis of object oriented complexity and
testability using object oriented design metrics", In Proceedings of the 2010 National Software
Engineering Conference, ACM.
[22] Srivastava, Sandeep, and Ram Kumar, (2013) "Indirect method to measure software quality using
CK-OO suite." In Intelligent Systems and Signal Processing (ISSP), 2013 International Conference
on, pp. 47-51, IEEE.
[23] Elish, Mahmoud O. and David Rine, (2003) "Investigation of metrics for object-oriented design
logical stability", In Software Maintenance and Reengineering Proceedings, Seventh European
Conference on, pp. 193-200, IEEE.
[24] Basili, Victor R., Lionel C. Briand and Walcélio L. Melo, (1996) "A validation of object-oriented
design metrics as quality indicators", Software Engineering, IEEE Transactions on 22, no. 10, pp.
751-761.
[25] Jehad Al Dallal, (2013) "Object-oriented class maintainability prediction using internal quality
attributes", Information and Software Technology 55, no. 11.
[26] Subramanian, Nary, and Lawrence Chung, (2001) "Metrics for software adaptability", Proc. Software
Quality Management (SQM 2001).
[27] Abreu, Fernando B, (1995) "The MOOD Metrics Set," Proc. ECOOP'95 Workshop on Metrics.
[28] Stroggylos, Konstantinos, and Diomidis Spinellis, (2007) "Refactoring--Does It Improve Software
Quality?", Proceedings of the 5th International Workshop on Software Quality, IEEE Computer
Society.
[29] Vasudeva Shrivastava, S., and V. Shrivastava, (2008) "Impact of metrics based refactoring on the
software quality: a case study", TENCON 2008 IEEE Region 10 Conference, IEEE.
[30] Sharma, Tushar, (2012), "Quantifying Quality of Software Design to Measure the Impact of
Refactoring", Computer Software and Applications Conference Workshops, IEEE 36th Annual.