Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity, cost, and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 data sets drawn from 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good indicators of quality attributes. Moreover, the impact of these metrics does not differ between proprietary and open source projects. These results have implications for building quality models that generalize across project types.
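A common way to pool correlation coefficients across data sets, and a plausible reading of "a tailored meta-analysis approach", is Fisher's z transform with sample-size weights. The sketch below is illustrative only: the coefficients and sample sizes are invented, not taken from the 59 data sets in the review.

```python
import math

def aggregate_spearman(correlations, sample_sizes):
    """Pool correlation coefficients across data sets via Fisher's z.

    Each r is converted to z = atanh(r), the z values are averaged with
    weights (n - 3), and the pooled z is back-transformed with tanh.
    """
    num = 0.0
    den = 0.0
    for r, n in zip(correlations, sample_sizes):
        z = math.atanh(r)
        w = n - 3  # approximate inverse variance of z
        num += w * z
        den += w
    return math.tanh(num / den)

# Hypothetical per-data-set correlations between a complexity metric
# and fault proneness, with the sizes of the data sets they came from.
rs = [0.42, 0.55, 0.31]
ns = [120, 80, 200]
pooled = aggregate_spearman(rs, ns)
print(round(pooled, 3))
```

Larger data sets pull the pooled estimate toward their coefficient; a single-study "pool" simply returns that study's r.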
Assessing the validity of software metrics in software engineering is a difficult task, owing to the lack of both theoretical and empirical methodology [41, 44, 45]. In recent years, a number of researchers have addressed the issue of validating software metrics. At present, software metrics are validated theoretically using properties of measures. Software measurement plays an important role in understanding and controlling software development practices and products. The central requirement in software measurement is that measures accurately represent the attributes they purport to quantify; validation is therefore critical to the success of software measurement. Validation is normally a collection of analysis and testing activities across the full life cycle that complements the efforts of other quality engineering functions, and it is a critical task in any engineering project. Its objective is to discover defects in a system and to assess whether or not the system is useful and usable in an operational situation. In software engineering, validation is one of the disciplines that help build quality into software. The major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper discusses the validation methodology, techniques, and properties of measures used for software metrics validation. In most cases, both theoretical and empirical validations are conducted for software metrics in software engineering [1-50].
Testability is a core quality assurance concept that combines fault prevention and fault detection. Many assessment techniques and quantification methods have evolved for software testability prediction; they identify testability weaknesses, or the factors behind them, to help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at the various phases of the object-oriented software development life cycle. The aim is to find the metrics suite best suited to improving software quality through testability support. The ultimate objective is to establish the groundwork for reducing testing effort by improving software testability and its assessment, using well-planned guidelines for object-oriented software development with the help of suitable metrics.
Changeability is directly related to software maintainability and plays a major role in delivering high-quality, maintainable, and trustworthy software. It is a major factor when we design and develop software and its constituents. Developing programs and their constituent components with good changeability continually improves and simplifies testing and maintenance during and after implementation, and it encourages improvement in software quality at the design stage. The research here highlights the importance of changeability broadly and as an important aspect of software quality.
Although delivering, increasing, and maintaining software quality have been studied extensively, there has been little guidance on rating a software product's quality. This study surveys the literature to date and outlines the scope of, and need for, a rating system for software quality.
ANALYSIS OF SOFTWARE QUALITY USING SOFTWARE METRICS (ijcsa)
Software metrics are directly linked to measurement in software engineering. Correct measurement is a precondition in any engineering field, and software engineering is no exception: as the size and complexity of software increase, manual inspection becomes harder. Most software engineers worry about the quality of software and about how to measure and enhance it. The overall objective of this study was to assess and analyze the software metrics used to measure software products and processes.
In this study, the researcher drew on literature from various electronic databases, published since 2008, to understand software metrics. The study identifies software quality as a measure of how software is designed and how well it conforms to that design. Some of the variables examined for software quality are correctness, product quality, scalability, completeness, and absence of bugs. However, because quality standards differ from one organization to another, it is better to apply software metrics, and the most common software metrics tools, to measure software quality and reduce subjectivity in fault assessment. The central contribution of this study is an overview of software metrics that illustrates developments in the area, together with a critical analysis of the main metrics found in the literature.
Evaluating effectiveness factor of object oriented design: a testability pers... (ijseajournal)
Effectiveness is an important quality factor for the testability measurement of object-oriented software at an early stage of the software development process, particularly at the design phase, for a high-quality product. It reflects a developer's design capability to achieve the specified functionality, characteristics, design quality, and behavior using appropriate object-oriented design (OOD) concepts and procedures. A metric-based 'Effectiveness Quantification Model of Object Oriented Design' has been proposed by establishing the correlation between effectiveness and OOD constructs. The model is then empirically validated, and the statistical significance of the study, with high correlation, supports its acceptance. The aim of this research is to encourage researchers and developers to use the effectiveness quantification model to assess and quantify the effectiveness quality factor of software at design time.
Determination of Software Release Instant of Three-Tier Client Server Softwar... (Waqas Tariq)
The quality of any software system depends mainly on how much time is spent on testing, which testing methodologies are used, how complex the software is, the effort put in by the developers, and the type of testing environment, all subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but testing cost also increases. Conversely, if testing time is too short, software cost can be reduced, provided customers accept the risk of buying unreliable software; this, however, increases cost during the operational phase, since it is more expensive to fix an error in operation than during testing. It is therefore essential to decide when to stop testing and release the software to customers, based on cost and reliability assessment. In this paper we present a mechanism for deciding when to stop the testing process and release the software to end users, by developing a software cost model with a risk factor. Based on the proposed method, we specifically address how to decide when to stop testing and release software built on a three-tier client-server architecture, which helps developers ensure on-time delivery of a product that achieves a predefined level of reliability at minimal cost. A numerical example illustrates the experimental results, showing significant improvements over conventional statistical models based on NHPP.
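The testing-versus-operation cost trade-off described above can be made concrete with a standard software cost model. The sketch below assumes a Goel-Okumoto NHPP mean value function and invented cost parameters; it is not the paper's three-tier model, only the classic release-time calculation that such models build on.

```python
import math

def optimal_release_time(a, b, c1, c2, c3):
    """Release time T minimizing C(T) = c1*m(T) + c2*(a - m(T)) + c3*T,
    where m(T) = a * (1 - exp(-b*T)) is the Goel-Okumoto mean value
    function. c1 = cost of fixing a fault during testing, c2 = cost of
    fixing one in operation (c2 > c1), c3 = testing cost per unit time.
    Setting dC/dT = 0 gives T* = ln((c2 - c1)*a*b / c3) / b."""
    x = (c2 - c1) * a * b / c3
    if x <= 1.0:
        return 0.0  # testing is never cost-effective under these parameters
    return math.log(x) / b

# Illustrative parameters, not taken from the paper: 100 expected faults,
# detection rate 0.05, operational fixes 10x as costly as testing fixes.
T = optimal_release_time(a=100.0, b=0.05, c1=1.0, c2=10.0, c3=2.0)
print(round(T, 2))
```

Past T, the testing cost per unit time exceeds the expected saving from catching another fault before release, so testing should stop.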
EARLY STAGE SOFTWARE DEVELOPMENT EFFORT ESTIMATIONS – MAMDANI FIS VS NEURAL N... (cscpconf)
Accurately estimating software size, cost, effort, and schedule is probably the biggest challenge facing software developers today. It has major implications for the management of software development, because both overestimates and underestimates can directly damage software companies. Many models have been proposed over the years by various researchers for carrying out effort estimation, and several studies on early-stage effort estimation stress the importance of early estimates. New paradigms offer alternatives for estimating software development effort, in particular Computational Intelligence (CI), which exploits mechanisms of human interaction and processes domain knowledge with the intention of building intelligent systems (IS). Among IS approaches, artificial neural networks and fuzzy logic are the two most popular soft computing techniques for software development effort estimation. In this paper, neural network models and a Mamdani FIS model are used to predict early-stage effort on a student dataset. The Mamdani FIS was found to predict early-stage effort more efficiently than the neural network based models.
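To make the Mamdani inference idea concrete, here is a toy one-input fuzzy system mapping a project "size" to an effort score. The rule base, membership functions, and scales are all invented for illustration; the paper's actual FIS operates on a student dataset with its own inputs and rules.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_effort(size):
    """Toy one-input Mamdani FIS: project 'size' (0-10) -> effort (0-100).

    Rules (illustrative only):
      size small  -> effort low
      size medium -> effort medium
      size large  -> effort high
    Inference uses min for implication, max for aggregation, and
    centroid defuzzification over a sampled output universe.
    """
    rules = [
        (tri(size, -4, 0, 5), (0, 15, 40)),     # low effort
        (tri(size, 2, 5, 8), (30, 50, 70)),     # medium effort
        (tri(size, 5, 10, 14), (60, 85, 100)),  # high effort
    ]
    num = den = 0.0
    for y in range(0, 101):  # sample the output universe
        mu = max(min(strength, tri(y, *params)) for strength, params in rules)
        num += mu * y
        den += mu
    return num / den if den else 0.0

print(round(mamdani_effort(7.0), 1))
```

A larger input fires the "high effort" rule more strongly, so the defuzzified output rises smoothly rather than jumping between categories.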
Software Metrics for Identifying Software Size in Software Development Projects (Vishvi Vidanapathirana)
This paper identifies the software metrics best suited to measuring software size in the current information technology (IT) industry.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM... (cscpconf)
Computing software complexity in the requirement analysis phase of the software development life cycle (SDLC) would be an enormous benefit for estimating the development and testing effort of yet-to-be-developed software. A relationship between source code and the difficulty of developing it is also examined, in order to estimate the complexity of the proposed software for cost estimation, manpower build-up, and code and developer evaluation. This paper therefore presents a systematic, integrated approach to estimating software development and testing effort on the basis of the improved requirement based complexity (IRBC) of the proposed software, obtained from its software requirement specification (SRS). The IRBC measure serves as the basis for estimating these development activities, enabling developers and practitioners to predict critical information about development intricacies. For validation, the proposed measures are compared with various established and prevalent approaches from the literature. The results support the claim that the approaches discussed here for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, and early-alarming, and compare well with previously proposed measures.
Machine learning approaches are good at solving problems for which little information is available. In most cases, software domain problems can be characterized as a learning process that depends on, and changes with, various circumstances. A predictive model is constructed using machine learning approaches to classify modules as defective or non-defective. Machine learning techniques help developers retrieve useful information after classification and enable them to analyze data from different perspectives, and they have proven useful for software bug prediction. This study uses publicly available data sets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. The results show that most of the machine learning methods performed well on the software bug datasets.
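The defect-classification setup described above can be sketched in a few lines. The example below uses a synthetic two-metric dataset and a hand-rolled k-nearest-neighbours classifier compared against a majority-class baseline; the data, metric names (WMC, CBO), and classifier choice are illustrative assumptions, not the study's actual datasets or methods.

```python
import random

def knn_predict(train, x, k=3):
    """Classify x by majority vote among the k nearest training points
    (squared Euclidean distance over the metric vector)."""
    neigh = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    votes = sum(label for _, label in neigh)
    return 1 if votes * 2 > k else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# Hypothetical module data: (WMC, CBO) metric pairs with a defect label.
# Defective modules are drawn around higher metric values.
random.seed(0)
data = [((random.gauss(5, 2), random.gauss(3, 1)), 0) for _ in range(60)] + \
       [((random.gauss(12, 2), random.gauss(8, 1)), 1) for _ in range(60)]
random.shuffle(data)
train, test = data[:80], data[80:]

acc_knn = accuracy(lambda x: knn_predict(train, x), test)
acc_majority = accuracy(lambda x: 0, test)  # always predict non-defective
print(acc_knn, acc_majority)
```

Comparing each learner against a trivial baseline, as here, is the essence of the comparative analysis the study performs across its real datasets.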
IMPLEMENTATION OF DYNAMIC COUPLING MEASUREMENT OF DISTRIBUTED OBJECT ORIENTED... (IJCSEA Journal)
Software metrics play an increasingly central role in the planning and control of software development projects, and coupling measures have important applications in software development and maintenance. The existing literature on software metrics focuses mainly on centralized systems, while work on distributed systems, particularly service-oriented ones, is scarce. Distributed systems with service-oriented components run in even more heterogeneous networking and execution environments. Traditional coupling measures take into account only "static" coupling. They do not account for "dynamic" coupling due to polymorphism, and may therefore significantly underestimate software complexity and misjudge the need for code inspection, testing, and debugging. This can be expected to hurt the predictive accuracy of quality models for distributed object-oriented systems that rely on static coupling measurements. To overcome these issues, we propose a hybrid model for measuring coupling dynamically in distributed object-oriented software. The proposed method has three steps: instrumentation, post-processing, and coupling measurement. First, in the instrumentation step, an instrumented JVM that has been modified to trace method calls produces three trace files: .prf, .clp, and .svp. Second, the information in these files is merged; at the end of this step, the merged detailed trace of each JVM contains pointers into the merged trace files of the other JVMs, so that the path of every remote call from client to server can be uniquely identified. Finally, the coupling metrics are measured dynamically. The implementation results show that the proposed system effectively measures coupling metrics dynamically.
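The final measurement step can be illustrated with a minimal sketch: given a merged method-call trace, count each class's dynamic export coupling. The trace format, class names, and the specific counting rule below are invented for illustration; they stand in for the merged .prf/.clp/.svp traces, whose actual record layout the abstract does not specify.

```python
from collections import defaultdict

def dynamic_coupling(trace):
    """Compute dynamic coupling per class from a merged method-call trace.

    Each trace record is (caller_class, callee_class, method); the measure
    counts distinct (callee, method) pairs a class invokes on other classes
    at run time, so polymorphic targets are counted as actually dispatched
    rather than as statically declared.
    """
    coupled = defaultdict(set)
    for caller, callee, method in trace:
        if caller != callee:  # self-calls are not coupling
            coupled[caller].add((callee, method))
    return {cls: len(targets) for cls, targets in coupled.items()}

# Hypothetical trace merged from the client and server JVM logs.
trace = [
    ("Client", "OrderService", "place"),
    ("Client", "OrderService", "cancel"),
    ("OrderService", "Billing", "charge"),
    ("OrderService", "Billing", "charge"),    # repeated call, counted once
    ("OrderService", "OrderService", "log"),  # self-call, ignored
]
print(dynamic_coupling(trace))  # {'Client': 2, 'OrderService': 1}
```

Because the count comes from observed dispatches, a polymorphic call contributes only the receiver types it actually hits at run time.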
Framework for a Software Quality Rating System (Karthik Murali)
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between existing software quality engineering techniques and thereby designing a framework for rating software quality.
In this paper, a correlation between the major attributes of object-oriented design and changeability is ascertained, and a changeability evaluation model for object-oriented design using multiple linear regression is proposed. The proposed model is validated through experimental tests, and the results show that it is highly significant.
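A multiple linear regression model of the kind proposed can be fitted with ordinary least squares. The sketch below solves the normal equations directly; the two predictor columns and the changeability scores are synthetic (generated from a known linear rule so the fit can be checked), not the paper's design-metric data.

```python
def fit_linear_regression(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination; a column of ones adds the intercept."""
    rows = [[1.0] + list(x) for x in X]
    p = len(rows[0])
    # Build the augmented normal-equation system.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] +
         [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(p)]
    for i in range(p):  # forward elimination with partial pivoting
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p + 1):
                A[r][c] -= f * A[i][c]
    b = [0.0] * p
    for i in range(p - 1, -1, -1):  # back substitution
        b[i] = (A[i][p] - sum(A[i][c] * b[c] for c in range(i + 1, p))) / A[i][i]
    return b

# Hypothetical design metrics (e.g. coupling, cohesion) versus a
# changeability score generated from a known linear rule.
X = [(2, 7), (4, 6), (6, 5), (8, 3), (10, 2), (3, 9)]
y = [5.0 + 2.0 * a - 1.0 * b for a, b in X]
coef = fit_linear_regression(X, y)  # recovers intercept 5, slopes 2 and -1
print([round(c, 6) for c in coef])
```

In the paper's setting the fitted coefficients would indicate how strongly each design attribute drives changeability, with significance tested on the real data.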
The most important aim of software engineering is to improve software productivity and product quality, and to reduce software cost and time, using engineering and management techniques. Broadly speaking, the software engineering initiative was introduced during the software crisis period to describe the collection of techniques that apply engineering and management skills to the construction and support of software processes and products. There is no universally agreed theory of software measurement, but software metrics are useful for obtaining information for evaluating processes and products in software engineering. They help to plan and carry out improvement in software organizations and to provide objective information about project performance, process capability, and product quality. Process capability is extremely important for the software industry, because product quality is largely determined by the quality of the processes. The use of existing metrics and the development of new software metrics will be important factors in future software engineering process and product development. Future research will apply software metrics in software development so that time schedules, cost estimates, and software quality can be improved through them. The permanent application of measurement-based methodologies to the software process and its products provides important and timely management information, together with the use of those techniques to improve the process and its products. This paper concentrates on an overview of the basics of software measurement and the fundamentals of software metrics in software engineering.
Identification, Analysis & Empirical Validation (IAV) of Object Oriented Desi... (rahulmonikasharma)
Metrics and measures are closely inter-related. A measure is a way of quantifying the amount, dimension, capacity, or size of some attribute of a product, while a metric is the unit used to measure that attribute. Software quality is one of the major concerns that need to be addressed and measured, and object-oriented (OO) systems require effective metrics to assess software quality. This paper identifies the attributes and measures that can help determine, and that affect, quality attributes. An empirical study is conducted on the public KC1 dataset from the NASA project database and validated with statistical techniques such as correlation analysis and regression analysis. The analysis shows that the metrics SLOC, RFC, WMC, and CBO are significant and can be treated as quality indicators, while DIT and NOC are not significant. These results have a significant impact on improving software quality.
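The correlation-analysis step of such a study often uses Spearman's rank correlation, since metric distributions are skewed. The sketch below implements it from scratch; the WMC values and defect counts are invented stand-ins for KC1 module data, not the actual dataset.

```python
def rank(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        r = (i + j) / 2 + 1  # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = r
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical KC1-style pairs: WMC value and defect count per module.
wmc     = [3, 11, 6, 25, 9, 17, 2, 14]
defects = [0,  2, 1,  5, 1,  3, 0,  2]
print(round(spearman(wmc, defects), 3))
```

A rho near 1 on data like this is what would mark a metric such as WMC as a significant quality indicator, whereas DIT and NOC showed no such association in the study.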
Evaluation of the software architecture styles from maintainability viewpoint (csandit)
In the process of software architecture design, decisions are made that have system-wide impact, and an important design-stage decision is the selection of a suitable software architecture style. The main obstacle to using such styles is the lack of investigation into their quantitative impact on software quality attributes; as a result, the use of architecture styles in design rests on the intuition of software developers. The aim of this research is to quantify the impact of architecture styles on software maintainability. In this study, architecture styles are evaluated using coupling, complexity, and cohesion metrics, and ranked from a maintainability viewpoint with the analytic hierarchy process. The main contribution of this paper is the quantification and ranking of software architecture styles with respect to the maintainability quality attribute at the architectural style selection stage.
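The analytic hierarchy process step can be sketched briefly: pairwise comparison judgments are collected in a matrix, and the priority weights are its principal eigenvector. The style names and the 1-9 scale judgments below are invented for illustration, not the paper's evaluation data.

```python
def ahp_weights(M, iters=100):
    """Priority weights from a pairwise comparison matrix, via power
    iteration toward the principal eigenvector, normalized to sum to 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical maintainability judgments for three architecture styles
# (say layered, pipe-and-filter, blackboard) on Saaty's 1-9 scale:
# style 0 moderately preferred over style 1, strongly over style 2.
M = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
w = ahp_weights(M)
ranking = sorted(range(3), key=lambda i: -w[i])
print([round(x, 3) for x in w], ranking)
```

Ranking styles by these weights is the final step the paper performs, after deriving the comparison judgments from the coupling, complexity, and cohesion metrics.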
Determination of Software Release Instant of Three-Tier Client Server Softwar...Waqas Tariq
Quality of any software system mainly depends on how much time testing take place, what kind of testing methodologies are used, how complex the software is, the amount of efforts put by software developers and the type of testing environment subject to the cost and time constraint. More time developers spend on testing more errors can be removed leading to better reliable software but then testing cost will also increase. On the contrary, if testing time is too short, software cost could be reduced provided the customers take risk of buying unreliable software. However, this will increase the cost during operational phase since it is more expensive to fix an error during operational phase than during testing phase. Therefore it is essentially important to decide when to stop testing and release the software to customers based on cost and reliability assessment. In this paper we present a mechanism of when to stop testing process and release the software to end-user by developing a software cost model with risk factor. Based on the proposed method we specifically address the issues of how to decide that we should stop testing and release the software based on three-tier client server architecture which would facilitates software developers to ensure on-time delivery of a software product meeting the criteria of achieving predefined level of reliability and minimizing the cost. A numerical example has been cited to illustrate the experimental results showing significant improvements over the conventional statistical models based on NHPP.
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology
EARLY STAGE SOFTWARE DEVELOPMENT EFFORT ESTIMATIONS – MAMDANI FIS VS NEURAL N...cscpconf
Accurately estimating software size, cost, effort and schedule is probably the biggest challenge facing software developers today. It has major implications for the management of software development, because both overestimates and underestimates can directly damage software companies. Many models have been proposed over the years by various researchers for carrying out effort estimation, and several studies on early stage effort estimation underline its importance. New paradigms offer alternatives for estimating software development effort, in particular Computational Intelligence (CI), which exploits mechanisms of interaction between humans and processes domain knowledge with the intention of building intelligent systems (IS). Among IS, artificial neural networks and fuzzy logic are the two most popular soft computing techniques for software development effort estimation. In this paper, neural network models and a Mamdani FIS model are used to predict early stage effort using a student dataset. It was found that the Mamdani FIS predicted early stage effort more efficiently than the neural network based models.
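To make the Mamdani FIS side concrete, here is a minimal single-input sketch with a hypothetical three-rule base (the membership functions, universes and rules are illustrative assumptions, not the paper's calibrated model): each rule fires by min-implication, rule outputs aggregate by max, and centroid defuzzification yields a crisp effort value.

```python
def tri(x, a, b, c):
    # triangular membership function with feet at a and c and peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical rule base over problem size (0-10) -> effort (0-100 person-hours):
# IF size IS small THEN effort IS low, etc. (the fuzzy sets are invented)
size_sets = [(0, 0, 5), (0, 5, 10), (5, 10, 10)]          # small, medium, large
effort_sets = [(0, 10, 40), (20, 50, 80), (60, 90, 100)]  # low, medium, high

def mamdani_effort(size, resolution=200):
    # min-implication per rule, max-aggregation, centroid defuzzification
    firing = [tri(size, *s) for s in size_sets]
    num = den = 0.0
    for i in range(resolution + 1):
        y = 100.0 * i / resolution
        mu = max(min(f, tri(y, *e)) for f, e in zip(firing, effort_sets))
        num += mu * y
        den += mu
    return num / den if den else 0.0
```

A small problem size should then defuzzify to a lower effort than a large one, which is the qualitative behaviour a calibrated FIS refines.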
Software Metrics for Identifying Software Size in Software Development Projects – Vishvi Vidanapathirana
This paper identifies the software metrics best suited to measuring software size in the current information technology (IT) industry.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM...cscpconf
Computing software complexity in the requirement analysis phase of the software development life cycle (SDLC) would be of enormous benefit for estimating the development and testing effort required for yet-to-be-developed software. A relationship between source code and the difficulty of developing it is also examined, in order to estimate the complexity of the proposed software for cost estimation, manpower build-up, and code and developer evaluation. This paper therefore presents a systematic and integrated approach for estimating software development and testing effort on the basis of the improved requirement based complexity (IRBC) of the proposed software, obtained from its software requirement specification (SRS). The IRBC measure serves as the basis for estimating these development activities, enabling developers and practitioners to predict critical information about software development intricacies. For validation, the proposed measures are compared with various established and prevalent practices proposed in the past. The results validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, provide early warning, and compare well with previously proposed measures.
Machine learning approaches are good at solving problems for which little information is available. In most cases, software domain problems can be characterized as a learning process that depends on various circumstances and changes accordingly. A predictive model is constructed using machine learning approaches, and modules are classified as defective or non-defective. Machine learning techniques help developers retrieve useful information after classification and enable them to analyse data from different perspectives; they have proven useful for software bug prediction. This study used publicly available data sets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. The results show that most of the machine learning methods performed well on the software bug datasets.
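As a minimal sketch of the defective/non-defective classification idea (with invented toy metric values, not one of the study's datasets), a nearest-centroid classifier over module metrics can stand in for the heavier learners such studies compare:

```python
import math

# toy module records: (lines of code, cyclomatic complexity) -> 1 = defective
# the numbers are invented for illustration, not drawn from a real dataset
train = [
    ((120, 4), 0), ((80, 2), 0), ((150, 5), 0),
    ((900, 25), 1), ((700, 18), 1), ((1100, 30), 1),
]

def train_centroids(data):
    # one centroid per class: a minimal stand-in for a real learner
    by_label = {}
    for x, label in data:
        by_label.setdefault(label, []).append(x)
    return {label: tuple(sum(col) / len(xs) for col in zip(*xs))
            for label, xs in by_label.items()}

def predict(model, x):
    # assign the label of the nearest class centroid (Euclidean distance)
    return min(model, key=lambda label: math.dist(x, model[label]))

model = train_centroids(train)
```

A real comparison would swap in decision trees, naive Bayes, SVMs and the like, and evaluate with cross-validation rather than eyeballing two points.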
IMPLEMENTATION OF DYNAMIC COUPLING MEASUREMENT OF DISTRIBUTED OBJECT ORIENTED...IJCSEA Journal
Software metrics are increasingly playing a central role in the planning and control of software development projects. Coupling measures have important applications in software development and maintenance. The existing literature on software metrics is mainly focused on centralized systems, while work on distributed systems, particularly service-oriented systems, is scarce. Distributed systems with service-oriented components have even more heterogeneous networking and execution environments. Traditional coupling measures take into account only "static" couplings. They do not account for "dynamic" couplings due to polymorphism, and may significantly underestimate the complexity of software and misjudge the need for code inspection, testing and debugging. This is expected to result in poor predictive accuracy of quality models for distributed object-oriented systems that rely on static coupling measurements. To overcome these issues, we propose a hybrid model for measuring coupling dynamically in distributed object-oriented software. The proposed method has three steps: instrumentation, post-processing and coupling measurement. First, the instrumentation process is carried out using a JVM that has been modified to trace method calls; during this process, three trace files are created, namely .prf, .clp and .svp. In the second step, the information in these files is merged. At the end of this step, the merged detailed trace of each JVM contains pointers to the merged trace files of the other JVMs, so that the path of every remote call from the client to the server can be uniquely identified. Finally, the coupling metrics are measured dynamically. The implementation results show that the proposed system effectively measures the coupling metrics dynamically.
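A simplified illustration of what the final coupling-measurement step computes (the trace tuples below are hypothetical and stand in for the merged .prf/.clp/.svp data, whose actual format is not described here): from a run-time trace of caller/callee class pairs, a dynamic import coupling count is the number of distinct classes each class actually calls.

```python
from collections import defaultdict

# hypothetical merged run-time trace: one (caller class, callee class) pair
# per observed method call; repeated calls to the same class add no coupling
trace = [
    ("Client", "OrderService"),
    ("Client", "OrderService"),
    ("OrderService", "Inventory"),
    ("OrderService", "Billing"),
    ("Billing", "Inventory"),
]

def dynamic_import_coupling(trace):
    # number of distinct server classes each client class calls at run time
    called = defaultdict(set)
    for caller, callee in trace:
        if caller != callee:            # ignore self-calls
            called[caller].add(callee)
    return {cls: len(callees) for cls, callees in called.items()}
```

Unlike a static count, couplings that arise only through polymorphic dispatch at run time show up in such a trace.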
Framework for a Software Quality Rating System – Karthik Murali
Software quality engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between existing software quality engineering techniques and thereby designing a framework for rating software quality.
Changeability has a direct relation to software maintainability and plays a major role in providing high-quality, maintainable and trustworthy software. Changeability is a major factor when we design and develop software and its constituents. Developing programs and their constituent components with good changeability continually improves and simplifies test operations and maintenance during and after implementation, and it encourages improvement in software quality at the design stage. The research here highlights the importance of changeability both broadly and as an important aspect of software quality.
In this paper a correlation between the major attributes of object-oriented design and changeability is ascertained. A changeability evaluation model using multiple linear regression is proposed for object-oriented design. The validity of the proposed changeability evaluation model is demonstrated through experimental tests, and the results show that the model is highly significant.
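A multiple linear regression fit of the kind underlying such a changeability model can be sketched with ordinary least squares via the normal equations. The predictor values below are invented placeholders for design attributes such as coupling and cohesion, not the paper's data:

```python
def fit_ols(X, y):
    # ordinary least squares via the normal equations (X'X) b = (X'y),
    # solved by Gaussian elimination; rows of X are [1, x1, x2, ...]
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                          # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for k in range(col + 1, n):
            f = A[k][col] / A[col][col]
            A[k] = [a - f * c for a, c in zip(A[k], A[col])]
            b[k] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):                # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

# hypothetical design-attribute rows: [intercept, coupling, cohesion]
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [2.0, 5.0, 1.0, 4.0, 7.0]    # generated by changeability = 2 + 3*coupling - 1*cohesion
coef = fit_ols(X, y)
```

On noise-free data like this, OLS recovers the generating coefficients exactly; a real study would also report significance tests on each coefficient.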
ANALYSIS OF SOFTWARE QUALITY USING SOFTWARE METRICS – ijcsa
Software metrics are directly linked with measurement in software engineering. Correct measurement is a precondition in any engineering field, and software engineering is no exception: as the size and complexity of software increase, manual inspection of software becomes harder. Most software engineers are concerned with the quality of software and with how to measure and enhance it. The overall objective of this study was to assess and analyse the software metrics used to measure software products and processes.
In this study, the researcher used a collection of literature from various electronic databases, available since 2008, to understand software metrics. The study identifies software quality as a means of measuring how software is designed and how well the software conforms to that design. Some of the variables considered for software quality are correctness, product quality, scalability, completeness and absence of bugs. However, the quality standard used by one organization differs from others, so it is better to apply software metrics, and the current common software metrics tools, to measure software quality and reduce the subjectivity of fault assessment. The central contribution of this study is an overview of software metrics that illustrates developments in this area, together with a critical analysis of the main metrics found in the literature.
The most important aim of software engineering is to improve software productivity and product quality, and to reduce software cost and time, using engineering and management techniques. Broadly speaking, the software engineering initiative was introduced during the software crisis period to describe the collection of techniques that apply engineering and management skills to the construction and support of software processes and products. There is no universally agreed theory of software measurement, but software metrics are useful for obtaining information for the evaluation of processes and products in software engineering. They help plan and carry out improvement in software organizations and provide objective information about project performance, process capability and product quality. Process capability is extremely important for the software industry because the quality of products is largely determined by the quality of the processes. The use of existing metrics and the development of innovative software metrics will be important factors in future software engineering process and product development. Future research will focus on using software metrics in software development, through which time schedules, cost estimates and software quality can be improved. The permanent application of measurement-based methodologies to the software process and its products provides important and timely management information, together with the use of those techniques to improve the process and its products. This paper concentrates on an overview of the basics of software measurement and the fundamentals of software metrics in software engineering.
Identification, Analysis & Empirical Validation (IAV) of Object Oriented Desi...rahulmonikasharma
Metrics and measures are closely inter-related. A measure is a way of defining the amount, dimension, capacity or size of some attribute of a product in a quantitative manner, while a metric is the unit used for measuring that attribute. Software quality is one of the major concerns that needs to be addressed and measured. Object-oriented (OO) systems require effective metrics to assess software quality. This paper is designed to identify attributes and measures that can help in determining and affecting quality attributes. It conducts an empirical study on the public dataset KC1 from the NASA project database, validated by applying statistical techniques such as correlation analysis and regression analysis. After analysis of the data, it is found that the metrics SLOC, RFC, WMC and CBO are significant and can be treated as quality indicators, while the metrics DIT and NOC are not significant. These results have a significant impact on improving software quality.
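Correlation analysis in such metric-validation studies (and the meta-analysis described at the top of this page) often uses Spearman's rank correlation, since metric distributions are skewed. A self-contained sketch, computing Spearman's rho as the Pearson correlation of average-rank vectors (with tie handling):

```python
def _ranks(values):
    # 1-based average ranks, handling ties by averaging the tied positions
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Applied to, say, a column of WMC values and a column of defect counts per class, a rho near 1 would mark the metric as a candidate quality indicator.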
Evaluation of the software architecture styles from maintainability viewpoint – csandit
In the process of software architecture design, different decisions are made that have system-wide impact. An important design-stage decision is the selection of a suitable software architecture style. The lack of investigation into the quantitative impact of architecture styles on software quality attributes is the main problem in using such styles, so the use of architecture styles in design is based on the intuition of software developers. The aim of this research is to quantify the impact of architecture styles on software maintainability. In this study, architecture styles are evaluated based on coupling, complexity and cohesion metrics, and ranked by the analytic hierarchy process from a maintainability viewpoint. The main contribution of this paper is the quantification and ranking of software architecture styles from the perspective of the maintainability quality attribute at the architectural style selection stage.
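The analytic hierarchy process step can be sketched as follows. This is a generic AHP priority computation under assumed inputs: the style names and the pairwise judgement matrix are illustrative, not the paper's evaluation data, and the geometric-mean method is a standard approximation of the principal eigenvector:

```python
import math

def ahp_priorities(M):
    # geometric-mean approximation of AHP's principal eigenvector:
    # weight of alternative i = normalised geometric mean of row i
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# hypothetical pairwise judgements on Saaty's 1-9 scale: how strongly style i
# is preferred over style j for maintainability (values are invented)
styles = ["layered", "pipe-and-filter", "blackboard"]
M = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_priorities(M)
ranking = sorted(zip(styles, weights), key=lambda p: -p[1])
```

A full AHP application would also compute the consistency ratio of M and aggregate separate judgement matrices for the coupling, complexity and cohesion criteria.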
Contributors to Reduce Maintainability Cost at the Software Implementation Phase – Waqas Tariq
Software maintenance is important and difficult to measure. Maintenance cost is the highest among the phases of software development. One of the most critical processes in software development is the reduction of software maintainability cost based on the quality of the source code at the design step; however, there is a lack of quality models and measures that can help assess the quality attributes of the software maintainability process. Software maintainability suffers from a number of challenges, such as a lack of source code understanding, poor quality of software code, and poor adherence to programming standards during maintenance. This work describes model-based factors to assess software maintenance and explains the steps followed to obtain and validate them. Such a method can be used to reduce software maintenance cost. The research results will enhance the quality of the source code, increase software understandability, reduce maintenance time and cost, and give confidence in software reusability.
A survey of predicting software reliability using machine learning methods – IAESIJAI
In light of technical and technological progress, software has become an urgent need in every aspect of human life, including the medical sector and industrial control. It is therefore imperative that software always works flawlessly. The information technology sector has witnessed rapid expansion in recent years, and software companies can no longer rely only on cost advantages to stay competitive in the market; programmers must provide reliable, high-quality software. To support estimating and predicting software reliability using machine learning and deep learning, this survey presents a brief overview of the important scientific contributions on software reliability and of the highly efficient methods and techniques researchers have found for predicting it.
Analysis of Software Complexity Measures for Regression Testing – IDES Editor
Software metrics are applied to evaluate and assure software code quality; this requires a model that converts internal quality attributes into code reliability. A high degree of complexity in a component (function, subroutine, object, class, etc.) is worse than a low degree of complexity. Various internal code attributes can be used to indirectly assess code quality. In this paper, we analyze software complexity measures for regression testing, which enable the tester/developer to reduce software development cost and improve testing efficacy and software code quality. This analysis is based on static analysis and the different approaches presented in the software engineering literature.
As a software system continues to grow in size and complexity, it becomes increasingly difficult to understand and manage. Software metrics are units of software measurement. As improvements in coding tools allow software developers to produce larger amounts of software to meet ever-expanding requirements, a method to measure the software product, process and project must be used. In this article, we first introduce software metrics, including the definition of metrics and the history of the field. We aim at a comprehensive survey of the metrics available for measuring attributes related to software entities. Some classical metrics, such as lines of code (LOC), the Halstead complexity metric (HCM) and function point analysis, are discussed and analyzed. We then take up complexity metrics methods, such as the McCabe complexity metrics and object-oriented metrics (the C&K suite), with real-world examples. The comparison and relationship of these metrics are also presented.
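The McCabe metric mentioned above can be illustrated with a crude lexical approximation: V(G) = 1 + number of decision points in a routine. A real tool builds the control-flow graph; the keyword count below is only a sketch, and the sample function is invented:

```python
import re

# decision points counted: branching keywords plus short-circuit/ternary operators
DECISIONS = re.compile(r"\b(if|elif|for|while|case|catch)\b|&&|\|\||\?")

def cyclomatic(source):
    # approximate McCabe cyclomatic complexity: 1 + number of decision points
    return 1 + len(DECISIONS.findall(source))

snippet = """
int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}
"""
```

Two `if` branches give the sample routine a complexity of 3, matching the E - N + 2P count on its control-flow graph.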
Significance of software layered technology on size of projects – EditorJST
The objective of software engineering is to build software projects within budget and time and with the required quality. Software engineering is a layered paradigm comprising process, methods, tools and a quality focus as the bedrock for developing products. Software firms build projects of varying sizes, constrained by resources, time and functional requirements, and the impact of the software engineering layered technology may vary according to project size. Quantitative evaluation of layer significance with respect to project size is a complex task because it involves a collective decision over multiple criteria. The Analytic Hierarchy Process (AHP) provides an effective quantitative approach for finding the significance of the software layered technology on project size. This paper presents estimations through a quantitative approach on real-time data collected from several software firms. These findings help achieve better project management with respect to cost, time and resources when building a software project.
Software Cost Estimation Using Clustering and Ranking Scheme – Editor IJMTER
Software cost estimation is an important task in the software design and development process. Planning and budgeting tasks are carried out with reference to the software cost values. A variety of software properties are used in the cost estimation process, including hardware, product, technology and methodology factors. The quality of software cost estimation is measured with reference to its accuracy.
Software cost estimation is carried out using three types of techniques: regression based models, analogy based models and machine learning models. Each model family has a set of techniques for the cost estimation process; 11 cost estimation techniques under these 3 categories are used in the system. The Attribute Relational File Format (ARFF) is used to maintain the software product property values, and the ARFF file is the main input to the system.
The proposed system is designed to perform clustering and ranking of software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results for software cost estimation methods.
A reliability estimation framework from the OO design complexity perspective is developed in this paper. The proposed framework correlates object-oriented design constructs with complexity, and complexity with reliability. No framework available in the literature estimates the software reliability of an OO design by taking complexity into consideration. The framework bridges the gap between object-oriented design constructs, complexity and reliability. It measures and minimizes the complexity of the software design at an early stage of the software development life cycle, leading to a reliable end product. Reliability and complexity estimation models are proposed following the framework: the complexity estimation model takes OO design constructs into consideration, and the reliability estimation models take complexity into consideration when estimating the reliability of an OO design.
In the present paper, the applicability and capability of AI techniques for effort estimation prediction have been investigated. Neuro-fuzzy models are found to be very robust, characterized by fast computation and capable of handling distorted data. Given the non-linearity of the data, such a model is an efficient quantitative tool for predicting effort. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment.
The initial parameters of OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership functions are optimally determined using a hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed with the OHLANFIS technique performs well compared with the standard ANFIS model.
A simplified predictive framework for cost evaluation to fault assessment usi...IJECEIAES
Software engineering is an integral part of any software development scheme, which frequently encounters bugs, errors and faults. Predictive evaluation of software faults contributes to mitigating this challenge to a large extent; however, no benchmarked framework has been reported yet. This paper therefore introduces a computational framework for a cost evaluation method to facilitate better predictive assessment of software faults. Based on lines of code, the proposed scheme adopts a machine-learning approach to perform predictive analysis of faults. The scheme presents an analytical framework of a correlation-based cost model integrated with multiple standard machine learning (ML) models, e.g., linear regression, support vector regression and artificial neural networks (ANN). These learning models are executed and trained to predict software faults with higher accuracy. The study assesses the outcomes in detail using error-based performance metrics to determine how well and how accurately each learning model learns, and also examines the factors contributing to the training loss of the neural networks. The validation results demonstrate that, compared to logistic regression and support vector regression, the neural network achieves a significantly lower error score for software fault prediction.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
The Impact of Software Complexity on Cost and Quality - A Comparative Analysis Between Open Source and Proprietary Software
International Journal of Software Engineering & Applications (IJSEA), Vol.8, No.2, March 2017
DOI: 10.5121/ijsea.2017.8202
THE IMPACT OF SOFTWARE COMPLEXITY ON COST
AND QUALITY – A COMPARATIVE ANALYSIS
BETWEEN OPEN SOURCE AND PROPRIETARY
SOFTWARE
Anh Nguyen-Duc
IDI, NTNU, Norway
ABSTRACT
Early prediction of software quality is important for better software planning and control. In early
development phases, design complexity metrics are considered useful indicators of software testing
effort and some quality attributes. Although many studies investigate the relationship between design
complexity and cost and quality, it is unclear what we have learned beyond the scope of individual studies.
This paper presents a systematic review on the influence of software complexity metrics on quality
attributes. We aggregated Spearman correlation coefficients from 59 different data sets from 57 primary
studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most
frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used, but not all of
its metrics are good quality attribute indicators. Moreover, the impact of these metrics does not differ between
proprietary and open source projects. The result provides some implications for building quality models
across project types.
KEYWORDS
Design Complexity, Software Engineering, Open source software, Systematic literature review
1. INTRODUCTION
With the appearance of ultra-large software, systems-of-systems and the Internet-of-Things, software
has become much larger and more complex than it was before. As software becomes more and more
widespread, not only in computers but also in embedded systems, users demand more reliable and
powerful software. Software complexity has been shown to be one of the main contributing
factors to software development and maintenance effort [2]. Most experts today agree that
complexity is a major feature of computer software, and will increasingly be so in the future.
Software metrics, as a tool to measure software development progress, have become an integral
part of software development as well as software research activities. Tom DeMarco stated that
“You cannot control what you cannot measure” [1]. A large part of software measurement is,
in one way or another, to measure or estimate software complexity, due to its importance in
practice and research. Measurement of software complexity enhances our knowledge about the
nature of software and indirectly assesses and predicts the final quality of the product. It is claimed
that complexity metrics are practically meaningful if they can indicate project outcomes, i.e.,
software quality and effort.
The software engineering literature reveals a large number of proposed complexity metrics.
However, it is not obvious how to select an appropriate metric set that can predict given software
attributes. Which quality attributes are influenced by software complexity? Which metrics are
useful for predicting these quality attributes? Knowing the proper metrics in a specific context
would be beneficial for the software industry. Moreover, it is interesting to know whether these
metrics have the same effectiveness across different project contexts, for example open source and
closed source projects. Since open source is emerging as an alternative to closed source
development in the software industry [25, 26], empirical comparison between open source and
proprietary software from multiple perspectives, such as software complexity, quality and
development effort, is always of high interest.
These questions are non-trivial and can only be answered by proper empirical studies. Typically,
such studies validate the usage of metrics by building a prediction model. The predictive power
depends not only on the selection of the metric suite but also on the prediction techniques and evaluation
method. The available data and the features of the data set also influence the predictive result. While
prediction techniques and evaluation methods have been well studied and systematically aggregated
into a body of knowledge [3, 4, 5], there are very few papers summarizing evidence-based
knowledge about software complexity metrics [6]. Among prediction models with the
same prediction techniques, some studies yield different predictive results for the same metric suite,
even contradictory ones. This adds to the impression that, despite the large number of design
metrics used in quality prediction models, it is still unclear whether we have learned
anything from these studies. This paper presents a systematic review on the influence of software
complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from
59 different data sets from 57 primary studies using a tailored meta-analysis approach.
The rest of the paper is organized as follows. Section 2 presents the theory of cognitive complexity
and statistical methods in software engineering. Section 3 states our research questions and
research methodology. Section 4 presents the review process, and Section 5 provides the research
results and discussion. Section 6 identifies the threats to validity. The paper ends with conclusions
and future work.
2. BACKGROUND
2.1. Theory of cognitive complexity
The IEEE standard defines software complexity as “the degree to which a system or component has a
design or implementation that is difficult to understand and verify” [7]. The definition
differentiates two kinds of complexity: complexity of design artifacts, such as UML
diagrams, and complexity of implementation, such as source code. Basically, design complexity comprises
structural features of design artifacts that can be measured prior to the implementation stage. This
information is important for predicting the quality of final products in early phases of the software
development life cycle.
Figure 1: Theory of cognitive complexity (adapted from [9])
There is a theoretical basis for investigating the relation between design complexity and external
software quality attributes, namely the theory of cognitive complexity [8]. Briand et al. hypothesized
that the structural properties of a software component have an impact on its cognitive complexity
[8]. In their study, cognitive complexity is defined as “the mental burden of the individuals who
have to deal with the component, for example, the designer, developers, testers and maintainers”
[8]. Figure 1 presents a model of cognitive complexity theory, which states that high cognitive
complexity leads to increased effort to understand, implement and maintain the component
[9]. As a result, it could lead to undesirable external qualities, such as increased fault-proneness
and reduced maintainability.
2.2. Empirical investigation on software complexity
In the software engineering research literature, the emerging trend is explanatory empirical
methods, which use empirical evidence to confirm or model the causal relationships between
control variables and performance variables. The two common empirical investigation methods
for software complexity are correlation analysis and statistical regression.
Figure 2: Quantitative approaches for empirical validation in software engineering
Correlation analysis investigates the extent to which changes in the value of one attribute (such as the
value of a complexity metric in a class) are associated with changes in another attribute (such as the
number of defects in a class). A correlation coefficient is a numerical summary of the
degree of association between two variables, e.g., to what degree do high values of coupling in a
class go with a high number of defects in that class [27]. Although correlation between two variables
does not necessarily imply a causal effect [27], it is still an effective method for selecting candidate
variables for a causal relationship.
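As a minimal, self-contained illustration of this kind of correlation analysis, the sketch below computes Spearman's rho between a hypothetical class-level coupling metric and hypothetical defect counts. The values are made up for illustration only; they do not come from any primary study.

```python
# Spearman's rho = Pearson correlation computed on the ranks of the data.
# Pure-Python sketch with average ranks for ties.

def ranks(values):
    """Assign average 1-based ranks, handling ties."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        j = i
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[indexed[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman correlation between two equal-length sequences."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

coupling = [2, 5, 1, 8, 4, 9, 3, 7]   # hypothetical CBO value per class
defects  = [0, 2, 0, 5, 1, 6, 1, 3]   # hypothetical defect count per class
rho = spearman_rho(coupling, defects)
```

A strongly positive rho here would make the coupling metric a candidate for further causal investigation, exactly as described above.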
Regression goes beyond correlation by adding prediction capabilities. Regression analysis
includes techniques for modeling and analyzing several variables to identify the
relationship between a dependent variable and one or more independent variables [28]. Among the
many regression techniques used in the software engineering literature, the most
common are the linear regression model and the logistic regression model [28]. In both
methods, it is necessary to control all confounding variables in order to establish a meaningful
causal relationship.
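The logistic regression approach can be sketched on a toy example: predicting whether a class is faulty from a single complexity metric. The data and the single-predictor setup are illustrative only (real studies use multiple metrics and proper validation); this is a bare gradient-descent fit, not the procedure of any primary study.

```python
import math

def train_logistic(xs, ys, lr=0.1, iters=5000):
    """Fit P(faulty) = sigmoid(w*x + b) by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x   # gradient of cross-entropy loss w.r.t. w
            gb += (p - y)       # gradient w.r.t. b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Predicted probability that a class with metric value x is faulty."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# hypothetical WMC values and faultiness labels (1 = faulty class)
wmc    = [2, 3, 4, 5, 6, 11, 12, 13, 14, 15]
faulty = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
w, b = train_logistic(wmc, faulty)
```

After training, classes with low WMC get a low predicted fault probability and classes with high WMC a high one, mirroring the positive metric-to-quality relationships the reviewed studies test for.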
3. RESEARCH DESIGN
3.1. Research questions
Table 1 shows an adapted Goal-Question-Metric (GQM) template used to form the research objective.
The target of this research is software design complexity, in particular the design
complexity of OO systems. The purpose is to externally validate design complexity by
investigating its statistical relationship to external software quality. The topic is considered from the
viewpoint of measurement researchers as well as practitioners. The study population is the software
engineering literature.
Table 1: GQM template of research goals
Analyze: software design complexity
For the purpose of: confirming
With respect to their: statistical relationship to cost and external software quality
From the viewpoint of: measurement practitioners and researchers
In the context of: the software engineering literature
The main research objective is refined into four research questions:
• RQ1: Which quality attributes are influenced by design complexity metrics in the literature?
• RQ2: What are the most investigated design complexity metrics in the literature?
• RQ3: Which design complexity metrics are potential predictors of quality attributes?
• RQ4: How does the influence of design complexity on software quality differ between open
source and closed source software?
3.2. Research methodology
Figure 3 presents the research methods used to answer the stated questions:
• Systematic literature review: RQ1, RQ2
• Vote counting: RQ3
• Meta-analysis: RQ3, RQ4
Figure 3: Research methodology
3.2.1. Systematic literature review
A systematic literature review (often referred to as a systematic review) is a common method for
identifying, evaluating and interpreting all available research relevant to a particular research
question, topic area or phenomenon of interest. This method is very common in medicine and
social science research [10]. Individual studies contributing to a systematic review are called
primary studies; a systematic review is a form of secondary study [10]. The most common
reasons for conducting one are to summarize the existing knowledge about the research questions of
interest, to identify gaps in current research in order to suggest areas for further investigation, and to
provide a framework/background in order to appropriately position new research activities. The
systematic review involves the following steps, with a detailed description provided in [10]:
• Specifying research questions
• Data source selection
• Search strategy development
• Search string formation
• Study selection criteria identification
• Study quality assessment identification
• Study extraction strategy identification
There are two approaches to quantitatively summarizing results from a systematic review,
namely vote counting and meta-analysis [11].
3.2.2. Vote counting
Vote counting is an alternative to meta-analysis when insufficient statistical data are reported.
The main advantage of vote counting is that it is conceptually simple and can be used with
very little information [11]. Besides, it does not require the actual values of effect sizes for aggregation
[11]. The only assumption of the method is that the studies use the same kind of statistical test. Vote
counting, however, is not free from error. It can be strongly biased towards the
conclusion of a zero effect size in populations of studies with small effect sizes and low numbers of
subjects. Moreover, this bias is not reduced as the number of studies increases [12]. Therefore, the
result of vote counting should be confirmed by other statistical methods, such as meta-analysis.
3.2.3. Meta analysis
Meta-analysis is a statistical technique for identifying, summarizing and reviewing the results of
quantitative studies to arrive at conclusions about a body of research [13, 14]. A typical meta-analysis
summarizes all the research results on one topic and discusses the reliability of the
summary. Notably, meta-analysis has only recently been introduced in software engineering [11,
15, 16]. Meta-analysis was selected because of:
• Its well-founded mathematical background and statistical validity.
• Its ability to provide a numerical aggregation of studies. The outcome of a meta-analysis
is an average effect size with an indication of how variable that effect size is
between studies.
• Its ability to deal with conflicting studies.
• Its ability to deal with publication bias.
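A minimal sketch of how correlation coefficients can be pooled across datasets in a meta-analysis: Fisher z-transform each per-dataset rho, average the z-values with inverse-variance weights (n − 3), and back-transform to a pooled correlation with a 95% confidence interval. This is the standard textbook scheme and may differ in detail from the tailored procedure used in this paper; the (rho, n) pairs are illustrative, not the paper's data.

```python
import math

def pool_correlations(studies):
    """studies: list of (rho, n) pairs; returns (pooled_rho, ci_lo, ci_hi)."""
    # Fisher z-transform stabilizes the variance of correlation coefficients.
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r, _ in studies]
    ws = [n - 3 for _, n in studies]          # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = 1.0 / math.sqrt(sum(ws))             # standard error of pooled z
    lo_z, hi_z = z_bar - 1.96 * se, z_bar + 1.96 * se
    back = lambda z: (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)  # inverse transform
    return back(z_bar), back(lo_z), back(hi_z)

# e.g. three hypothetical datasets reporting rho(WMC, fault proneness)
pooled, lo, hi = pool_correlations([(0.30, 120), (0.25, 80), (0.35, 200)])
```

Larger datasets pull the pooled estimate towards their rho, which is the weighting behavior reflected by the box sizes in the forest plot discussed in Section 5.4.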
4. RESEARCH CONDUCT
Table 2 shows the search protocol. We chose Scopus, IEEE Xplore and ACM Digital Library
as search databases due to their functionality, reputation and familiarity to the reviewer. As is
common in systematic reviews [3, 4, 5], article title, keywords and abstract were used as search
fields in this work. With the objective of retrieving all relevant studies, we used a wide search range
across research on software and system engineering. The only limitation is that we only searched
publications written in English.
Table 2: Search protocol
Search database: Scopus, IEEE Xplore, ACM Digital Library
Search field: Title, Abstract, Keyword
Search topic: Software and system engineering
Language: English
Search string formula: (“software complexity” OR synonyms) AND (“impact” OR synonyms)
AND (“cost” OR “quality” OR synonyms) AND software
The search query on the Scopus database returned a total of 906 primary publications. We
excluded 625 of the 906 papers based on title and abstract, as they were clearly
out of scope and did not relate to any research question. The remaining 281 papers were subject
to detailed exclusion criteria: each paper was scanned to find out whether any exclusion criterion was
met. This stage left 85 papers, which were further filtered by reading the full
text. The studies were carefully read to determine whether the paper’s objective, study design, data
collection and analysis were relevant to answering our questions. This step left 31 papers. The second
search, in IEEE Xplore and ACM Digital Library, returned a total of 265 papers. Many papers in
this set duplicated the previous pool, because Scopus also indexes papers in
IEEE Xplore and ACM Digital Library. After eliminating duplicates, 85 papers remained.
Exclusion by title and abstract left 40 papers, and further exclusion based on the detailed criteria
rejected 25 of these 40. The remaining 15 papers were read, and 8 of them were finally selected.
No relevant papers were found prior to 1996, which was also when the
Empirical Software Engineering journal was first published. It is worth noting that in the late
90's there were not many publications; the most probable reason is that design
metrics were a new research area at that time, i.e., the Chidamber & Kemerer metric suite was first
introduced in 1991 and refined in 1996. The figure also shows an increasing number of studies
between 5-year periods. From 1995 to 1999 there are 9 relevant papers, while from 2000 to 2004
there are 15. The number is 33 for the 5-year period from 2005 to 2009, more than double that of
the previous period. This indicates the increasing interest of the research community in the topic.
5. RESULTS AND DISCUSSION
5.1. RQ1: Which quality attributes are influenced by design complexity metrics in the
literature?
a. Results
Figure 4: Investigated quality attributes
Figure 4 shows the external quality attributes investigated for their relationship with design
complexity. There are four main investigated cost and quality attributes, namely reliability,
maintainability, reusability and development effort, which are refined into 9 sub-categories as below:
• Defect density: studies that investigate the relation between total known defects divided by
size (defect density) and design complexity.
• Fault proneness: studies that investigate the relation between the probability of finding a
defect in a class (fault proneness) and design complexity.
• Vulnerability, security: studies that investigate the relation between the number of faults
leading to security failures and design complexity.
• Testability: studies that predict the effort required for software testing due to the level of
design complexity [17].
• Maintainability: studies that predict the time and effort required to correct bugs, improve
performance or adapt software to a changed environment [7].
• Volatility: studies that investigate the relation between the number of enhancements in a
class and design complexity.
• Debugging cost: studies that investigate the relation between the effort and time for fixing a
class and its design complexity.
• Development effort: studies that investigate the relation between the cost and time for
development and design complexity.
• Refactoring effort: studies that investigate the relation between the effort for refactoring a
class and its design complexity.
b. Discussion:
The descriptive statistics show that the most investigated cost and quality attributes are fault
proneness and maintainability. Although information about design complexity is theoretically
available in the design phase, many primary studies collect these data in the implementation or
maintenance phase. Hence, post-design-phase software attributes are a more popular subject of
research. Among reliability-related attributes, fault proneness is investigated much more than
defect density or security. The reason could be that data on the number of faults per
software unit are often insufficient, so quality models instead predict
whether a software unit is faulty or not. Due to the dataset-size requirements for aggregation,
only maintainability and fault proneness studies have sufficient combinable data. Hence, from the
next question onwards, we focus only on these two attributes.
5.2. RQ2: What are the most investigated design complexity metrics in literature?
a. Result:
Table 3 shows the metrics that are used in more than 50% of the selected primary
studies. The most frequently used metrics belong to the adjusted version of the Chidamber & Kemerer
metric suite, namely NOC, DIT, CBO, LCOM version 2, WMC and RFC version 1. Among the 120
investigated metrics, we found that the distribution of usage of design metrics follows a
Pareto-like 20/60 distribution: the 20% most common metrics are investigated in 60% of the
studies.
Table 3: List of frequently used metrics
NOC (inheritance): number of classes that directly inherit from a given class
DIT (inheritance): length of the longest path from the class to the root of the inheritance
hierarchy [18]
CBO (coupling): number of other classes to which the class is coupled, including
inheritance-based coupling [18]
LCOM2 (cohesion): LCOM1 minus the number of method pairs that do share attributes
(alternatively described as the average percentage of methods in a class using each attribute,
subtracted from 100%) [18]
WMC (size): number of all methods of a class [18]
RFC1 (coupling): variant of RFC that does not count methods indirectly invoked by methods
in M [18]
b. Discussion: The domination of the Chidamber & Kemerer metric set in correlation and prediction
studies could come from the fact that it was one of the first complete design metric suites
measuring all OO features [18]. The metric set has a strong foundation, since it was developed
based on Bunge's ontology and validated against Weyuker’s measurement theory principles
[18]. More importantly, Chidamber et al. claimed that these measures can aid users in
understanding design complexity, in detecting design flaws and in predicting certain project
outcomes and external software qualities such as software defects, testing and maintenance effort,
and they provided an empirical sample of these metrics from two commercial systems [8]. Besides, the
metric set is simple, well understood [19] and straightforward to collect.
These reasons seem to have invoked great enthusiasm among researchers and software engineers, and a
great number of empirical studies have been conducted to evaluate these metrics. Over time, this
metric suite has gradually grown in industry acceptance. In particular, the metrics are being
incorporated into development tools such as Rational Rose [20] and JBuilder [21]. Many other
tools, such as WebMetric [23] and JMetric [22], also offer automatic collection of the metric set,
making it very easy to collect.
5.3. RQ3: Which design complexity metrics are potential predictors of quality attributes?
To determine whether a design metric is a potential predictor of external attributes, we test each design
metric against the following hypothesis:
H0: There is no impact of metric X on quality attribute Y.
Due to the variation in the use of design metrics in primary studies, we use vote counting to test
the hypothesis H0. We assume that each study reports a significance value (p-value) for the correlation
coefficient or regression coefficient between design metric X and quality attribute Y. It is worth
noting that some studies report the actual value of the significance test, while others only
report the significance level, represented by + if the correlation test is significant at 0.05
and by ++ if it is significant at 0.01. To maximize the number of included studies, we select the
significance level of 0.05. The null hypothesis is rejected if the proportion of successes among the total
studies is greater than 0.5; otherwise, we accept the hypothesis.
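The vote-counting rule above can be sketched in a few lines: each dataset casts a "vote" for metric X if it reports a significant (p < 0.05) positive coefficient, and H0 is rejected when more than half the datasets vote yes. The dataset outcomes below are hypothetical, chosen only to show the mechanics.

```python
def vote_count(outcomes, alpha=0.05):
    """outcomes: list of (p_value, coefficient_sign) per dataset.
    Returns True if H0 'no positive impact of X on Y' is rejected."""
    successes = sum(1 for p, sign in outcomes if p < alpha and sign > 0)
    return successes / len(outcomes) > 0.5

# e.g. 8 hypothetical datasets testing a coupling metric vs. fault proneness
cbo_results = [(0.01, +1), (0.03, +1), (0.20, +1), (0.04, +1),
               (0.001, +1), (0.30, -1), (0.02, +1), (0.04, +1)]
rejected = vote_count(cbo_results)   # 6 of 8 significant positives
```

Note that only the significance verdicts are needed, not the effect sizes themselves, which is exactly why vote counting works with the sparsely reported data described in Section 3.2.2.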
Among the 120 design metrics investigated in fault proneness studies and the 59 design metrics
investigated in maintainability studies, we select the metrics whose significance level is reported
in at least 6 different datasets.
a. Result for fault proneness
Table 4 describes the null hypothesis test for a significant positive impact of several design metrics
on fault proneness based on Spearman correlation results. It presents the number of studies, the total
number of datasets, the number of datasets in which a metric has a significantly positive or negative
correlation coefficient, and the number of datasets in which a metric is non-significant.
There are 12 design metrics investigated in more than 6 data sets. Of these 12 metrics,
9 have a proportion of successes above 50%, namely CBO, WMC, RFC, WMC McCabe,
SDMC, AMC, NIM, NCM and NTM; the null hypothesis is therefore rejected for these metrics.
The 3 metrics with less than a 50% success proportion are NOC, DIT and LCOM. Hence, we
accept the null hypothesis: with the Spearman test, there is no positive impact of NOC, DIT or LCOM
on fault proneness.
Table 4: Hypothesis test for significant Spearman coefficients for fault proneness
b. Result for maintainability
Table 5 describes the null hypothesis test based on Spearman correlations in maintainability studies.
There are 6 design metrics investigated in more than 6 data sets. Of these, 4
metrics have a proportion of successes above 50 percent, so we can reject the null hypothesis for
them. The 2 metrics with less than a 50% success proportion are NOC and DIT.
Therefore, we accept the null hypothesis: with the Spearman test, there is no positive impact of NOC
or DIT on maintainability. The complete list of metrics is given in [29].
Table 5: Hypothesis test for significant Spearman coefficients for maintainability
c. Discussion:
For both fault proneness and maintainability studies, vote counting shows no evidence of a
relationship between NOC or DIT and external quality attributes. These metrics measure the
inheritance dimension of object-oriented systems. WMC and RFC are shown to be potentially good
predictors of external quality attributes in all cases. In fault proneness studies, NIM, NCM and
NTM are scale-based metrics, while CBO, SDMC and AMC are coupling-based metrics. In
maintainability studies, DAC and MPC are also coupling-based metrics. This observation adds to the
impression that design metrics capturing the scale and coupling dimensions are useful for indicating
final product quality attributes, such as fault proneness and maintainability. The result also
indicates that inheritance- and cohesion-based metrics are unlikely to show a significant
relationship with software quality attributes.
In comparison with the most frequently used design metrics, the results show that for fault proneness
and maintainability studies, only 50% of the frequently used metrics are potentially good ones.
In other words, the common metrics are not necessarily the good ones. This suggests that
researchers building quality models should evaluate metric effectiveness before adding metrics to the
model.
5.4. RQ4: How does the influence of design complexity on software quality differ between open
source and closed source software?
a. Result:
There are 35 private datasets, contributing 59% of the total number of datasets. Among them,
10 datasets come from academic projects and 24 datasets come from industry projects. The
remaining 24 datasets are public, contributing 41% of the total number of datasets. Figure 5
illustrates the aggregated results for the correlation between the metric WMC and fault proneness
using meta-analysis. The box center represents the correlation coefficient, the box size
represents the weight of the dataset (based on dataset size), and the line shows the 95% confidence
interval of the correlation coefficient.
Table 6: Correlation intervals in closed source vs. open source
Metric   Closed source 95% CI (#)   Open source 95% CI (#)   % diff.
NOC      [-0.18; -0.03]  (9)        [0.02; 0.05]   (9)       24%
DIT      [0.08; 0.23]    (13)       [0.04; 0.07]   (9)       0.2%
CBO      [0.16; 0.31]    (15)       [0.26; 0.30]   (9)       4.3%
LCOM     [0.17; 0.32]    (9)        [0.15; 0.18]   (9)       0.0%
RFC      [0.22; 0.37]    (8)        [0.24; 0.27]   (9)       2.9%
WMC      [0.20; 0.35]    (15)       [0.27; 0.29]   (18)      3.9%
Table 6 shows the influence of different metrics on fault proneness across project types. The table
reports the 95% confidence interval of the aggregated coefficients for each metric per project type
and the number of investigated datasets. The difference between the two intervals is calculated as
the variance explanation of the separator variable [13, 14]. From these statistics, we make the
following observations:
• The correlations between design metrics and fault proneness are at a trivial to medium level.
• The project type variable does not cluster the correlations between metrics and fault proneness,
since its explanation power is very small (0 to 4.3%). Only for NOC is the variance
explanation 24%, with two non-overlapping confidence intervals.
• The 95% confidence intervals of the metric correlations in closed source projects are larger
than those in open source projects.
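The explanation power of a separator variable can be illustrated as the share of total heterogeneity that lies between the two groups. The following is a simplified sketch on the Fisher-z scale, loosely following the decomposition in [13, 14]; the (r, n) pairs are hypothetical:

```python
import math

def q_statistic(datasets):
    """Weighted heterogeneity statistic Q on the Fisher-z scale."""
    zs = [(math.atanh(r), n - 3) for r, n in datasets]
    w_total = sum(w for _, w in zs)
    z_bar = sum(z * w for z, w in zs) / w_total
    return sum(w * (z - z_bar) ** 2 for z, w in zs)

def variance_explained(groups):
    """Share of total heterogeneity explained by the grouping
    variable: Q_between / Q_total = (Q_total - Q_within) / Q_total."""
    all_data = [d for g in groups for d in g]
    q_total = q_statistic(all_data)
    q_within = sum(q_statistic(g) for g in groups)
    return (q_total - q_within) / q_total

# Hypothetical (r, n) pairs split by project type
closed = [(0.30, 100), (0.22, 150), (0.35, 80)]
open_src = [(0.27, 200), (0.29, 120), (0.26, 90)]
print(f"explained: {variance_explained([closed, open_src]):.1%}")
```

A value near zero, as observed for most metrics here, means splitting the datasets by project type removes almost none of the spread among the correlation coefficients.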
b. Discussion:
The small variance explanation shows that project type is not a good separator variable. In other
words, there is no difference in the relationship between design metrics and fault proneness
between closed source and open source projects. However, the confidence intervals in open
source projects are smaller, which indicates a more homogeneous collection of datasets than the
closed source group.
Figure 5: Aggregated correlation coefficient of WMC in fault proneness studies
6. THREATS TO VALIDITY
The major threats to the study's validity come from the systematic review and data aggregation
process, namely the quality of the search process, the univariate analysis, and the aggregation of
heterogeneous data.
The first validity threat comes from the search quality. We use two measures to evaluate the
search result, namely recall and precision [24]. Since it is impossible to know all existing relevant
items, the recall of the search string was estimated by conducting a pilot search, which showed an
initial recall of 88% and, after a refinement of the search string, a recall of 100%. We did not
try to optimize the search string for precision, which is clearly reflected by the final, very
low, precision value of 6.3% (57 selected primary studies out of 906 documents after the removal
of duplicates).
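As a sketch of how these two figures are obtained (the precision counts are the ones reported above; recall denominators come from the pilot search and are not repeated here):

```python
def precision(selected, retrieved):
    """Fraction of retrieved documents that are relevant [24]."""
    return selected / retrieved

def recall(found, known_relevant):
    """Fraction of the known relevant documents the search retrieved."""
    return found / known_relevant

# Counts reported in this review: 57 primary studies selected
# from 906 documents after duplicate removal.
print(f"precision = {precision(57, 906):.1%}")  # -> 6.3%
```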
Another main limitation of our study is that it focuses on relationships between a single design
metric and a quality attribute, such as fault proneness or maintainability. However, a combination
of design metrics may lead to better predictions than single metrics, so multivariate models that
use several design metrics to predict fault proneness may be more meaningful for this purpose.
The existing multivariate models, however, are so heterogeneous that it may be impossible to
generalize from them.
Last but not least, the primary studies used in this paper stem from various sources of varying
quality. One typical critique of meta-analysis is that combining such studies equals mixing apples
and oranges. It can be argued, however, that mixing apples and oranges is beneficial, particularly
if one wants to generalize about fruit, and that studies that are exactly the same in all respects are
actually limited in generalizability [10].
7. CONCLUSIONS
Table 7 summarizes the results for the research questions.
Table 7: Summary of findings

RQ1: Which quality attributes are predicted using design complexity metrics?
     Reliability, maintainability, reusability and development effort.
RQ2: Which design complexity metrics are most frequently used in the literature?
     NOC, DIT, CBO, LCOM, WMC and RFC.
RQ3: Which design complexity metrics are potential predictors of quality attributes?
     Fault proneness: CBO, WMC, RFC, SDMC, AMC, NIM, NCM and NTM.
     Maintainability: WMC, RFC, MPC and DAC.
RQ4: How is the influence of design complexity on software quality in open source vs.
     closed source software?
     Larger variation in closed source projects; no difference in effectiveness
     toward fault proneness.
Our findings contribute to both the research community and software practitioners in many ways.
To researchers who are interested in software quality models, we confirm the importance of
software coupling and scale metrics. These two dimensions should receive more focus than
inheritance and cohesion in fault prediction models. We suggest a list of specific design metrics
that are potential predictors of fault proneness and maintainability. We also find that a limited
number of studies focus on other cost and quality attributes, such as development effort or fault
density; these attributes represent a research gap for further studies. The results indicate that
the impact of design complexity on quality is not heterogeneous across project types. For software
practitioners, since no separate regions were found for open source and closed source projects,
quality prediction models built on open source projects could be tried in closed source
environments. This is helpful for industrial practitioners, since they can learn from quality
models published using open source datasets.
The results of this paper can be considered a cornerstone for further study in the area of
correlation and regression analysis of design complexity. Future work could focus on
aggregating the multivariate quality models and investigating separator variables other than
project type. Besides, cost and quality attributes such as security, volatility or refactoring
effort are potential targets for future investigation of software complexity.
REFERENCES
[1] T. DeMarco, “A metric of estimation quality,” Proceedings of the May 16-19, 1983, national
computer conference, Anaheim, California: ACM, 1983, pp. 753-756.
[2] R.B. Grady, Practical Software Metrics for Project Management and Process Improvement, Hewlett-
Packard Professional Books, Prentice Hall, New Jersey,1992.
[3] C. Catal and B. Diri, “A systematic review of software fault prediction studies,” Expert Systems with
Applications, vol. 36, 2009, pp. 7346-7354.
[4] O. Gomez, H. Oktaba, M. Piattini, and F. García, “A systematic review measurement in software
engineering: State-of-the-art in measures”, 1st International Conference on Software and Data
Technologies (ICSOFT), 2006, pp. 224-231.
[5] E. Arisholm, L. Briand, and E. Johannessen, “A systematic and comprehensive investigation of
methods to build and evaluate fault prediction models,” Journal of Systems and Software, vol. 83,
2010, pp. 2-17.
[6] C. Bellini, R. Pereira, and J. Becker, “Measurement in software engineering: From the roadmap to the
crossroads,” International Journal of Software Engineering and Knowledge Engineering, vol. 18,
2008, pp. 37-64.
[7] IEEE, IEEE Standard Glossary of Software Engineering Terminology, report IEEE Std 610.12- 1990,
IEEE, 1990.
[8] L. Briand, J. Wuest, S. Ikonomovski, and H. Lounis, “A Comprehensive Investigation of Quality
Factors in Object-Oriented Designs: An Industrial Case Study” Technical Report ISERN-98-29,
International conference on Software Engineering, 1998.
[9] K. El Emam, W. Melo, and J. Machado, “The prediction of faulty classes using object-oriented design
metrics,” Journal of Systems and Software, vol. 56, 2001, pp. 63-75.
[10] B. A. Kitchenham, “Guidelines for performing Systematic Literature Reviews in Software
Engineering”, Ver 2.3, Keele University, EBSE Technical Report, 2007
[11] L.M. Pickard, B.A. Kitchenham, and P.W. Jones, “Combining empirical results in software
engineering,” Journal on Information and Software Technology, vol. 40, Dec. 1998, pp 811-821
[12] M. Ciolkowski, “Aggregation of Empirical Evidence,” Empirical Software Engineering Issues.
Critical Assessment and Future Directions, Springer Berlin / Heidelberg, 2007, p. 20
[13] LV. Hedges, I. Olkin, Statistical Methods for Meta-analysis. Orlando, FL: Academic Press, 1995.
[14] H. Cooper, L. Hedges, “Research synthesis as a scientific enterprise”, Handbook of research synthesis
(pp. 3-14). New York: Russell Sage, 1994.
[15] J. E. Hannay, T. Dybå, E. Arisholm, D. I. K. Sjøberg, “The Effectiveness of Pair-Programming: A
Meta-Analysis”, Journal on Information and Software Technology 55(7):1110-1122, 2009.
[16] M. Ciolkowski, “What do we know about perspective-based reading? An approach for quantitative
aggregation in software engineering”, 3rd IEEE International Symposium on Empirical Software
Engineering and Measurement, 2009, pp. 133-144.
[17] ISO, “International standard ISO/IEC 9126. Information technology: Software product evaluation:
Quality characteristics and guidelines for their use.” 1991
[18] S.R. Chidamber and C.F. Kemerer, “Towards a metrics suite for object oriented design,” SIGPLAN
Not., vol. 26, 1991, pp. 197-211.
[19] J. M. Scotto, W. Pedrycz, B. Russo, M. Stefanovic, and G. Succi, “Identification of defect-prone
classes in telecommunication software systems using design metrics,” Information Sciences, vol. 176,
2006, pp. 3711-3734.
[20] R. Subramanyam and M. Krishnan, “Empirical analysis of CK metrics for object-oriented design
complexity: Implications for software defects,” IEEE Transactions on Software Engineering, vol. 29,
2003, pp. 297-310.
[21] Y. Zhou and H. Leung, “Empirical analysis of object-oriented design metrics for predicting high and
low severity faults,” IEEE Transactions on Software Engineering, vol. 32, 2006, pp. 771-789.
[22] L. Briand, W. Melo, and J. Wurst, “Assessing the applicability of fault-proneness models across
object-oriented software projects,” IEEE Transactions on Software Engineering, vol. 28, 2002, pp.
706-720.
[23] G. Succi, W. Pedrycz, M. Stefanovic, and J. Miller, “Practical assessment of the models for
identification of defect-prone classes in object-oriented commercial systems using design metrics,”
Journal of Systems and Software, vol. 65, 2003, pp. 1-12.
[24] T. Saracevic, “Evaluation of evaluation in information retrieval”, 18th annual international ACM
SIGIR conference on Research and development in information retrieval, Seattle, Washington, United
States, 1995, pp. 138-146.
[25] Paulson, J.W.; Succi, G.; Eberlein, A., "An empirical study of open-source and closed-source
software products”, IEEE Transactions on Software Engineering, vol.30, no.4, pp. 246- 256, April
2004.
[26] A. Bachmann and A. Bernstein, “Software process data quality and characteristics: a historical view
on open and closed source projects,” Joint international and annual ERCIM workshops on Principles
of software evolution (IWPSE) and software evolution (Evol) workshops, Amsterdam, The
Netherlands: ACM, 2009, pp. 119-128.
[27] J. Cohen, Statistical Power Analysis for the Behavioral Sciences (2nd Edition), 2nd ed. Routledge
Academic, January 1988.
[28] J. Devore, Probability and Statistics for Engineering and the Sciences, 7th Ed. Thomson Brooks,
2008.
[29] N. D. Anh, “The impact of software design complexity on cost and quality”, Master thesis, [Available
ONLINE] http://www.bth.se/fou/cuppsats.nsf/$$Search
AUTHORS
Dr. Anh Nguyen-Duc is a researcher in the Norwegian University of Science and
Technology's Department of Computer and Information Science. His research
focuses on Empirical Software Engineering, Global Software Development,
Socio-technical Systems and Software Startups.