Metrics and measures are closely interrelated: a measure quantifies the amount, dimension, capacity or size of some attribute of a product, while a metric is the unit used to measure that attribute. Software quality is a major concern that needs to be addressed and measured, and object-oriented (OO) systems require effective metrics to assess it. This paper identifies attributes and measures that can help determine and influence quality attributes. It reports an empirical study on the public KC1 dataset from the NASA project database, validated by statistical techniques such as correlation analysis and regression analysis. The analysis finds that the metrics SLOC, RFC, WMC and CBO are significant and can be treated as quality indicators, while DIT and NOC are not significant. These results have a significant bearing on improving software quality.
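As a hedged illustration of the statistical validation described, the sketch below computes Pearson correlations and an ordinary least-squares fit between six OO metrics and a defect count. The sample size, metric values and coefficients are synthetic stand-ins, not the actual KC1 data.

```python
# Minimal sketch of correlation + regression between OO metrics and defects.
# All data below is synthetic and illustrative, not the KC1 dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 145  # KC1 is often reported with 145 classes; treat this as an assumption

sloc = rng.integers(10, 500, n).astype(float)
wmc  = rng.integers(1, 50, n).astype(float)
rfc  = rng.integers(1, 100, n).astype(float)
cbo  = rng.integers(0, 25, n).astype(float)
dit  = rng.integers(0, 6, n).astype(float)
noc  = rng.integers(0, 5, n).astype(float)
defects = 0.02 * sloc + 0.3 * cbo + 0.1 * rfc + rng.normal(0, 3, n)

for name, metric in [("SLOC", sloc), ("WMC", wmc), ("RFC", rfc),
                     ("CBO", cbo), ("DIT", dit), ("NOC", noc)]:
    r, p = stats.pearsonr(metric, defects)
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{name}: r={r:+.2f}, p={p:.3f} ({flag})")

# Multiple linear regression: defects ~ all six metrics
X = np.column_stack([np.ones(n), sloc, wmc, rfc, cbo, dit, noc])
coef, *_ = np.linalg.lstsq(X, defects, rcond=None)
print("OLS coefficients (intercept, SLOC, WMC, RFC, CBO, DIT, NOC):",
      np.round(coef, 3))
```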
A study of various viewpoints and aspects software quality perspective - eSAT Journals
Abstract: Software quality has been an important research area within software engineering for the last two decades. The software engineering paradigm is adopted by many organizations to develop high-quality software at affordable cost, and high-quality software is considered a key factor in the rapid growth of Global Software Development. Software metrics compute and evaluate quality characteristics and are used to make quantitative and qualitative decisions for risk assessment and reduction. Multiple stakeholders can view software quality from multiple angles and aspects. In this paper we present multiple views of software quality with respect to various quality aspects. Key Words: Stakeholders, Functional aspect, Structural aspect, Process aspect, Metrics.
Significance of software layered technology on size of projects - EditorJST
Software engineering is committed to building software projects within budget, on time and with the required quality. It is a layered paradigm comprising process, methods and tools, with a quality focus as the bedrock on which products are developed. Software firms build projects of varying sizes under constraints on resources, time and functional requirements, and the impact of the layered technology may vary with project size during development. Quantitatively evaluating the significance of each layer for a given project size is a complex task because it involves a collective decision over multiple criteria. The Analytic Hierarchy Process (AHP) provides an effective quantitative approach for determining the significance of the software layered technology for projects of different sizes. This paper presents such estimates, computed from real data collected from several software firms. The findings support better project management with respect to cost, time and resources when building a software project.
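A minimal AHP sketch of how layer weights could be derived: a pairwise comparison matrix is reduced to a priority vector via its principal eigenvector, with Saaty's consistency ratio as a sanity check. The judgement matrix below is hypothetical, not the firms' survey data.

```python
# AHP sketch: priority weights for the four layers from an assumed
# pairwise comparison matrix (quality focus, process, methods, tools).
import numpy as np

A = np.array([
    [1,   3,   5,   7],    # quality focus vs. others
    [1/3, 1,   3,   5],    # process
    [1/5, 1/3, 1,   3],    # methods
    [1/7, 1/5, 1/3, 1],    # tools
])

# Priority vector: principal eigenvector of A, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CR = CI / RI, with RI = 0.90 for n = 4 (Saaty's table)
n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)
CR = CI / 0.90
print("weights:", np.round(w, 3), " lambda_max: %.3f  CR: %.3f" % (lam_max, CR))
```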
Performance Evaluation of Software Quality Model - Editor IJMTER
With the advent of the Internet revolution and the emergence of knowledge-based systems, quality has acquired a wider and more challenging dimension. Quality has evolved from the inspection era, through the quality control regime and quality management, to the present TQM approach. At every stage of this transformation, "quality" has taken on a wider dimension with respect to customer focus and continual improvement, evolving to address customers' increasing demands in the delivery of products and services.
This document discusses the evolution of software metrics over time to measure quality as software has become more complex. It outlines how metrics have changed from measuring lines of code and function points to object-oriented and agile metrics that measure concepts like inheritance, cohesion, and velocity. The document also examines how process models like waterfall, spiral, and agile have influenced quality standards and the definition of new metrics. In summary, software metrics have evolved from size-based measures to more sophisticated metrics aligned with modern processes in order to more effectively measure quality attributes as software engineering has advanced.
Evaluating effectiveness factor of object oriented design a testability pers... - ijseajournal
Effectiveness is an important quality factor for measuring the testability of object-oriented software early in the development process, particularly at the design phase, where it supports delivery of a high-quality product. It reflects a developer's ability to achieve the specified functionality, characteristics, design quality and behavior using appropriate object-oriented design (OOD) concepts and procedures. A metric-based 'Effectiveness Quantification Model of Object Oriented Design' has been proposed by establishing the correlation between effectiveness and OOD constructs. The 'Effectiveness Quantification Model' is then empirically validated, and the statistical significance of the study, showing high correlation, supports the model's acceptance. The aim of this research is to encourage researchers and developers to adopt the effectiveness quantification model to assess and quantify the software effectiveness quality factor at design time.
An interactive approach to requirements prioritization using quality factors - ijfcstjournal
As the prevalence of software increases, so do the complexity and the number of requirements associated with a software project. This presents a dilemma for developers, who must clearly identify and prioritize the most important requirements in order to deliver the project within a given amount of resources and time. A number of prioritization methods have been proposed that provide consistent results, but they are difficult and complex to implement in practical scenarios and lack a proper structure for analyzing the requirements. In this study, users can provide their requirements in two forms: text-based story form and use case form. Moreover, existing prioritization techniques offer little or no interaction with users. This paper therefore attempts to make the prioritization process user-interactive by adding a second level of prioritization: after the developer has properly analyzed and ranked the requirements on the basis of quality attributes in the first level, the opinions of distinct users about the requirements priority sequence are collected. The developer then calculates the disagreement value associated with each user sequence in order to find the final priority sequence.
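The abstract does not define the disagreement value; one plausible reading, sketched below, is the Kendall tau distance, i.e. the count of requirement pairs that the developer and a user order differently. The requirement IDs and sequences are hypothetical.

```python
# Disagreement between a developer ranking and user sequences,
# measured as the number of pairwise order inversions (Kendall tau distance).
from itertools import combinations

def disagreement(dev_seq, user_seq):
    """Count requirement pairs ordered differently by developer and user."""
    pos = {req: i for i, req in enumerate(user_seq)}
    return sum(1 for a, b in combinations(dev_seq, 2) if pos[a] > pos[b])

developer = ["R1", "R2", "R3", "R4"]
users = [["R1", "R3", "R2", "R4"], ["R4", "R1", "R2", "R3"]]
scores = [disagreement(developer, u) for u in users]
print(scores)  # a final sequence could favor the ordering with least total disagreement
```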
Estimating the Software Product Value during the Development Process - Paul Houle
Software companies today face fierce competition to deliver better products while offering higher value to the customer. In this context, software product value has become a major concern in the software industry, motivating better knowledge and understanding of how to estimate software value in the early development phases. At the same time, companies encounter problems such as releasing products that were developed with high expectations but gradually fall into the category of mediocre products once they reach the market; these expectations are tightly related to the value expected by and offered to the customer. This paper presents an approach for estimating software product value, focusing on the development phases. We propose a value-indicators approach to quantify the real value of development products. The aim is to identify potential deviations in the real software value early, by comparing the estimated value against the expected one. We present an internal validation showing the feasibility of this approach and its potential benefits in industry projects.
Software requirements prioritization is a recognized practice in requirements engineering (RE) that facilitates the management of stakeholders' subjective views as specified in their requirements listing. Since the RE process is collaborative by nature, its intensiveness from both knowledge and human perspectives opens up the problem of decision making on requirements, which prioritization can facilitate. However, given the large volume of requirements elicited for an ultra-large-scale system, existing prioritization techniques suffer setbacks in efficiency, effectiveness and scalability. This paper employs a more efficient ranking algorithm for requirements prioritization, motivated by the limitations of existing techniques. The major objective is to provide a well-defined, analytically grounded ranking procedure suitable for prioritizing software requirements. The proposed technique was empirically evaluated using a typical scenario of the Pharmacy Information System at the Obafemi Awolowo University Teaching Hospital Complex (OAUTHC) as a case study. The results show the computation of the positive ideal solution (PIS), the negative ideal solution (NIS) and the closeness coefficient (CC) for 4 requirements across 3 stakeholders. The CC yields the final ranks of the requirements: R4, with 2.09 points, is the most valued requirement, while R1 and R2, with CCs of 1.37 and 1.05, are next in order of priority. The CC provides the medium through which multiple-criteria decision-making problems can be handled in order to determine the priority order of the available alternatives. The paper offers encouraging evidence to the software engineering community that the approach can resolve redundantly specified requirements, with the potential to facilitate effective and efficient decision making when handling differences among prioritized requirements. Prioritizing software requirements with the recommended ranking procedure during software development is thus crucial in order to reduce development cost.
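The PIS/NIS/CC computation reported is the standard TOPSIS procedure; a sketch under assumed inputs follows. The 4x3 decision matrix (4 requirements scored by 3 stakeholders) and equal weights are illustrative. Note that the paper's CC values exceed 1, which suggests some aggregation across stakeholders beyond the per-alternative coefficient shown here.

```python
# TOPSIS sketch: positive/negative ideal solutions and closeness coefficients.
# Decision matrix and weights are illustrative assumptions.
import numpy as np

X = np.array([      # rows: R1..R4, columns: stakeholder scores (benefit criteria)
    [7, 6, 8],
    [5, 7, 6],
    [6, 5, 5],
    [9, 8, 9],
], dtype=float)
w = np.full(X.shape[1], 1 / X.shape[1])

V = w * X / np.linalg.norm(X, axis=0)     # normalized, weighted matrix
pis, nis = V.max(axis=0), V.min(axis=0)   # positive / negative ideal solutions
d_pos = np.linalg.norm(V - pis, axis=1)
d_neg = np.linalg.norm(V - nis, axis=1)
cc = d_neg / (d_pos + d_neg)              # closeness coefficient per requirement

for i, c in enumerate(cc, 1):
    print(f"R{i}: CC = {c:.3f}")
print("priority order:", [f"R{i+1}" for i in np.argsort(-cc)])
```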
Effectiveness of software product metrics for mobile application - tanveer ahmad
This document discusses the effectiveness of software product metrics for mobile applications. It defines effectiveness and explores how metrics can be used to measure the quality, performance, and efficacy of mobile apps. The document reviews literature on software metrics and their importance. It also examines different types of product metrics like size, complexity, and defect metrics. Finally, it proposes using a statistical simulation approach and developing a new measurement tool called the Effectiveness Calculation Model for Mobile Applications to quantify mobile app performance using computational mathematics.
The most important aim of software engineering is to improve software productivity and product quality while reducing cost and time through engineering and management techniques. Broadly speaking, the software engineering initiative was introduced during the software crisis period to describe the collection of techniques that apply engineering and management skills to the construction and support of software processes and products. There is no universally agreed theory for software measurement, but software metrics are useful for obtaining information on the evaluation of processes and products in software engineering. They help plan and carry out improvement in software organizations and provide objective information about project performance, process capability and product quality. Process capability is extremely important for the software industry because the quality of products is largely determined by the quality of the processes. The use of existing metrics and the development of innovative software metrics will be important factors in future software engineering process and product development: time schedules, cost estimates and software quality can all be improved through software metrics. The permanent application of measurement-based methodologies to the software process and its products provides important and timely management information, together with the use of those techniques to improve the process and its products. This paper concentrates on an overview of the basics of software measurement and the fundamentals of software metrics in software engineering.
An Elite Model for COTS Component Selection Process - IJEACS
This document presents a multi-agent approach for selecting commercial off-the-shelf (COTS) software components. It proposes a semi-automated model called ABCS that uses multiple agents to identify suitable candidate components based on requirements. The agents each handle sub-tasks like matching requirements, evaluating security, cost-benefit analysis, and integration testing. They coordinate to produce a weighted list of candidates from which experts can select the most suitable component. The model aims to reduce the time and improve the knowledge involved in COTS component selection.
Transitioning IT Projects to Operations Effectively in Public Sector: A Case... - ijmpict
Operational effectiveness is measured by application availability to end-users and the extent to which they can conveniently use the application to perform their business functions. This paper demonstrates how varying the project transition process can affect operational effectiveness. This explanatory case study uses various projects in a South Australian government agency as candidates for evaluation. Across the applications in the production environment, end-users reported varying levels of satisfaction. The research analyses factors influencing the operational efficiency of projects in transition from project delivery into operations, and the evidence clearly demonstrates the criticality of the transition of applications from the delivery phase to the operations phase. The findings, specific to government agencies, are analysed and recommendations presented; they can help public sector agencies improve the availability of IT applications in operations.
IRJET - Factors in Selection of Construction Project Management Software i... - IRJET Journal
The document discusses factors to consider when selecting construction project management software in India. It conducted interviews with 15 experts in the construction industry with experience ranging from 5-30 years. The interviews aimed to understand the software selection process. Based on the literature review and interviews, the document proposes a model for software selection with 8 steps: 1) identify software options, 2) review organization policies, 3) analyze the project's needs, 4) analyze the client's needs, 5) inquire the purpose of planning, 6) analyze software performance and price, 7) check available skills, and 8) select and use software. The model categorizes factors as either project specific or general to guide effective software selection.
The performance of an algorithm can be improved using a parallel programming approach. In this study, the bubble sort algorithm was run on computers with various specifications. Experimental results show that parallel programming can reduce execution time by 61%-65% compared with serial programming.
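A hedged illustration of the serial-versus-parallel comparison: bubble sort applied to chunks in separate processes, then merged. This is not the study's implementation or hardware setup; the chunk count and data size are arbitrary.

```python
# Serial vs. parallel bubble sort timing sketch (illustrative only).
import random, time, heapq
from multiprocessing import Pool

def bubble_sort(a):
    a = list(a)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

if __name__ == "__main__":
    data = [random.random() for _ in range(4000)]

    t0 = time.perf_counter()
    bubble_sort(data)
    serial = time.perf_counter() - t0

    chunks = [data[i::4] for i in range(4)]      # 4-way split
    t0 = time.perf_counter()
    with Pool(4) as pool:
        sorted_chunks = pool.map(bubble_sort, chunks)
    merged = list(heapq.merge(*sorted_chunks))   # combine the sorted runs
    parallel = time.perf_counter() - t0

    print(f"serial {serial:.2f}s, parallel {parallel:.2f}s, "
          f"saving {100 * (1 - parallel / serial):.0f}%")
```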
The document presents a changeability evaluation model for object-oriented software. It begins with an introduction to changeability and its importance. It then reviews existing literature on measuring changeability. A relationship is established between changeability and object-oriented design properties like coupling, inheritance, and polymorphism. The paper then develops a changeability evaluation model using multiple linear regression. The model relates changeability as the dependent variable to object-oriented design metrics as independent variables. The model is validated experimentally using data from class diagrams, showing it is highly significant.
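A sketch of the model form described, assuming synthetic metric values: changeability as the dependent variable regressed on coupling, inheritance and polymorphism scores by ordinary least squares.

```python
# Multiple linear regression sketch: changeability ~ OO design metrics.
# All values and coefficients below are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(1)
n = 30                                   # hypothetical class diagrams
coupling     = rng.uniform(0, 20, n)
inheritance  = rng.uniform(0, 5, n)
polymorphism = rng.uniform(0, 10, n)
changeability = (50 - 1.8 * coupling + 0.9 * inheritance
                 + 1.2 * polymorphism + rng.normal(0, 2, n))

X = np.column_stack([np.ones(n), coupling, inheritance, polymorphism])
beta, *_ = np.linalg.lstsq(X, changeability, rcond=None)
print("Changeability = %.2f + %.2f*Coupling + %.2f*Inheritance + %.2f*Polymorphism"
      % tuple(beta))
```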
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in Science and Technology and to motivate researchers in challenging areas of Sciences and Technology.
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between existing techniques in software quality engineering and thereby designing a framework for rating software quality.
A Novel Optimization towards Higher Reliability in Predictive Modelling towar... - IJECEIAES
Although the area of software engineering has made remarkable progress in the last decade, less attention has been paid to code reusability. Code reusability is a subset of software reusability, one of the signature topics in software engineering. Our review of existing work finds no standard research approach to code reusability introduced in the last decade. Hence, this paper introduces a predictive framework for optimizing code-reusability performance. For this purpose, we introduce a case study of a near-real-time challenge and incorporate it into our modelling. We apply a neural network and the damped least-squares algorithm to perform optimization, with the sole target of computing and ensuring the highest possible reliability. The outcome of our model exhibits higher reliability and better computational response time.
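Damped least squares is another name for the Levenberg-Marquardt method. As a hedged sketch, a one-hidden-unit network is fitted to synthetic "reusability" data with SciPy's LM solver; the features, model size and data are assumptions, not the authors' framework.

```python
# Fitting a tiny neural model with damped least squares (Levenberg-Marquardt).
# Data and model are synthetic illustrations.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (50, 2))                    # two assumed code-level features
y = np.tanh(1.5 * x[:, 0] - 0.7 * x[:, 1]) + rng.normal(0, 0.05, 50)

def residuals(p):
    w1, w2, b, a = p                              # one hidden tanh unit
    return a * np.tanh(w1 * x[:, 0] + w2 * x[:, 1] + b) - y

fit = least_squares(residuals, x0=[0.1, 0.1, 0.0, 1.0], method="lm")
print("fitted parameters:", np.round(fit.x, 3), " cost:", round(fit.cost, 4))
```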
Software products are increasing rapidly and are used in almost all activities of human life. Consequently, measuring and evaluating the quality of a software product has become a critical task for many companies. Several models have been proposed to help diverse types of users with quality issues, and the development of techniques for building software has influenced the creation of models to assess quality. Since 2000, software construction has come to depend on generated or manufactured components, giving rise to new challenges for assessing quality. These components introduce new concerns such as configurability, reusability, availability, better quality and lower cost. The models are consequently classified into basic models, developed up to 2000, and component-based models, called tailored quality models. The purpose of this article is to describe the main models with their strengths and to point out some deficiencies. We conclude that, in the present age, communication aspects play an important role in software quality.
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN... - IJCSES Journal
With the sharp rise in software dependability requirements and failure costs, high quality has been in great demand. However, guaranteeing high quality in software systems that have grown in size and complexity, under the constraints imposed on their development, has become an increasingly difficult, time- and resource-consuming activity. It is nonetheless essential to deliver software with no serious faults. Object-oriented (OO) products, the de facto standard of software development, can contain faults that are hard to find or whose change impacts are hard to pinpoint. The earlier faults are identified and fixed, the lower the costs and the higher the quality. Software metrics are used to assess product quality, and many OO metrics have been proposed and developed. Furthermore, many empirical studies have validated the relationship between metrics and class fault proneness (FP). The challenge is knowing which metrics relate to class FP and what activities are performed. This study therefore brings together the state of the art in FP prediction using the CK and size metrics. We conducted a systematic literature review of relevant published empirical validation articles, and the results are analysed and presented: 29 relevant empirical studies exist, and measures of complexity, coupling and size were found to be strongly related to FP.
This thesis focuses on assessing, evaluating, and managing risks in software development projects for Indian software companies. It aims to identify project risks, analyze dimensions of specific risks through primary research, and study an existing project to identify risks and mitigation strategies. The document discusses related work, project and software development lifecycles, types of risks, the research methodology used, a case study of a project called ERWINA, and concludes with recommendations for improving risk management practices.
IRJET - Scrutinizing Attributes Influencing Role of Information Communication... - IRJET Journal
This document discusses attributes that influence the role of information communication technology (ICT) in building construction projects. It identifies 36 attributes from literature that were narrowed down to 31 relevant attributes through a pilot survey. The attributes were then analyzed using the Delphi method and analytical hierarchy process (AHP) to prioritize the most important ones. The Delphi method identified 24 key attributes from expert feedback. These attributes were then analyzed and ranked using AHP, which created a hierarchy and determined global and local weights to identify the most significant attributes. Management attributes like experience of work teams and training to use ICT tools were found to be most important based on the AHP analysis. The study aims to better understand how ICT can improve construction processes and benefits
Insights on Research Techniques towards Cost Estimation in Software Design - IJECEIAES
This document summarizes research on techniques for cost estimation in software design. It begins by describing common cost estimation techniques like Constructive Cost Modeling (COCOMO) and Function Point Analysis. It then analyzes research trends in cost estimation, effort estimation, and fault prediction based on literature from 2010 to present. Fewer than 50 papers were found related to overall cost estimation, fewer than 25 for effort estimation, and only 9 for fault prediction. The document then reviews existing research addressing general cost estimation, enhancement of Function Point Analysis, statistical modeling approaches, cost estimation for embedded systems, and estimation for fourth generation languages and NASA projects. Most techniques use COCOMO or extend existing models with techniques like fuzzy logic, neural networks, or statistical modeling.
Productivity Factors in Software Development for PC Platform - IJERA Editor
Identifying the most relevant factors influencing project performance is essential for implementing business strategies by selecting and adjusting proper improvement activities. The two major classification algorithms, CRT and ANN, recommended by the Auto Classifier tool in SPSS Modeler, were used to determine the most important variables (attributes) of software development in the PC environment. While their classification accuracies for productive versus non-productive cases are relatively close (72% vs 69%), their rankings of important variables differ: CRT ranks Programming Language as the most important variable and Function Points as the least important, whereas ANN ranks Function Points as the most important, followed by team size and Programming Language.
Using DEMATEL Method to Analyze the Causal Relations on Technological Innovat... - drboon
This study analyzes the technology innovation capabilities (TICs) evaluation factors of enterprises by applying the Decision Making Trial and Evaluation Laboratory (DEMATEL) method. Based on the literature, six main perspectives and sixteen criteria were extracted and then validated by six experts. A questionnaire was constructed and answered by eleven experts, and the DEMATEL method was applied to analyze the importance of the criteria and to construct the causal relations among them. The results show that the innovation management capability perspective is the most important and influences the remaining perspectives. This work also presents the significant criteria for each perspective.
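A compact sketch of the DEMATEL computation: normalize the direct-influence matrix, form the total-relation matrix T = N(I - N)^-1, and read prominence (r + c) and net cause/effect (r - c) from its row and column sums. The 4x4 matrix below is hypothetical, not the eleven experts' averaged answers.

```python
# DEMATEL sketch on an assumed direct-influence matrix (0-4 influence scale).
import numpy as np

D = np.array([
    [0, 3, 2, 4],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [1, 2, 1, 0],
], dtype=float)

N = D / D.sum(axis=1).max()               # normalize by the largest row sum
T = N @ np.linalg.inv(np.eye(4) - N)      # total-relation matrix T = N(I-N)^-1

r, c = T.sum(axis=1), T.sum(axis=0)
print("prominence (r+c):", np.round(r + c, 2))   # overall importance of each factor
print("relation   (r-c):", np.round(r - c, 2))   # >0 = net cause, <0 = net effect
```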
The document discusses key concepts for managing software projects including the four Ps of project management: People, Product, Process, and Project. It describes stakeholders and team structures, and emphasizes establishing clear objectives and scope, tracking progress, and learning lessons through post-mortem reviews. Metrics for both processes and products are discussed to assess status, risks, and quality in order to guide improvement.
Testability measurement model for object oriented design (tmmood) - ijcsit
Measuring testability early in the development life cycle, especially at the design phase, is of crucial importance to software designers, developers, quality controllers and practitioners. However, most available mechanisms for testability measurement apply only in the later phases of the development life cycle. Early estimation of testability, specifically at the design phase, helps designers improve their designs before coding starts, and practitioners regularly advocate that testability should be planned early in design. This study strongly emphasizes testability measurement early in the design phase, considering it significant for the delivery of quality software: it extensively reduces rework during and after implementation and facilitates effective test plans and better project and resource planning. This paper identifies the key factors contributing to testability measurement at the design phase and develops a testability measurement model to quantify software testability there. The relationship of testability with these factors has been tested and justified with the help of statistical measures, and the developed model has been validated in an experimental tryout. The empirical validation of the testability measurement model is the authors' most important contribution.
Evaluation of Changeability Indicator in Component Based Sof... - IJARTES
Maintaining a software system is a major cost concern, and the cost depends on how changes are made to the system. The maintainability of a system depends on the flow of the software, its design pattern and, for component-based software systems (CBSS), its composition. The maintainability phase of a software system has four parts: analysis, testing, stability, and the changes made to the system. In some application areas these systems have emerged very rapidly, and many companies purchase software instead of developing it; such companies have little interest in testing the system but want the system to flow smoothly while changes are applied.
Changeability is one of the characteristics of maintainability. Software changeability is associated with refactoring, which makes code simpler and easier to maintain and enables all programmers to improve their code. Factors that affect changeability include coupling between modules, lack of code comments, and the naming of functions and variables. Basically, changeability is the ability of a product to have the structure of its program changed: the rate at which the product allows modification of its components.
In this paper, changeability-based cost estimation is performed. Initially four components are taken and evaluated on coupling, cohesion and interface metrics. Then changes are made to the existing components and they are evaluated again. From these two evaluations, conclusions are drawn about changeability cost.
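A hedged sketch of the two-pass evaluation just described: each component's coupling, cohesion and interface scores before and after a change feed an assumed weighted-sum changeability cost. Component names, scores and weights are made up.

```python
# Changeability-cost sketch: compare component metrics before and after a change.
WEIGHTS = {"coupling": 0.5, "cohesion": -0.3, "interface": 0.2}  # assumed weights

def change_cost(before, after):
    """Weighted shift in metric values; higher = costlier change."""
    return sum(WEIGHTS[m] * (after[m] - before[m]) for m in WEIGHTS)

components = {
    "C1": ({"coupling": 4, "cohesion": 7, "interface": 3},
           {"coupling": 6, "cohesion": 6, "interface": 4}),
    "C2": ({"coupling": 2, "cohesion": 8, "interface": 2},
           {"coupling": 2, "cohesion": 8, "interface": 3}),
}
for name, (before, after) in components.items():
    print(name, "changeability cost:", round(change_cost(before, after), 2))
```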
As a software system continues to grow in size and complexity, it becomes increasingly difficult to understand and manage. Software metrics are units of software measurement. As improvements in coding tools allow software developers to produce larger amounts of software to meet ever-expanding requirements, a method to measure the software product, process and project must be used. In this article, we first introduce software metrics, including the definition of a metric and the history of the field. We aim at a comprehensive survey of the metrics available for measuring attributes of software entities. Classical metrics such as lines of code (LOC), the Halstead complexity metric (HCM) and Function Point analysis are discussed and analyzed. We then take up complexity metrics such as McCabe's complexity metrics and object-oriented metrics (the C&K suite), with real-world examples. The comparison of these metrics and the relationships among them are also presented.
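As a worked example of one classical metric from the survey, the sketch below approximates McCabe's cyclomatic complexity V(G) = decisions + 1 by counting decision nodes in a Python function's abstract syntax tree.

```python
# Approximate McCabe cyclomatic complexity by counting decision points in an AST.
import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.And, ast.Or,
             ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

code = """
def classify(x):
    if x < 0:
        return "neg"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            print(i)
    return "pos"
"""
print("V(G) =", cyclomatic_complexity(code))   # 1 + if + for + if + and = 5
```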
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... - ijseajournal
Early prediction of software quality is important for better software planning and control. In the early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity, cost and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 data sets in 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes and that the Chidamber & Kemerer metric suite is the most frequently used, though not all of its metrics are good quality-attribute indicators. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The results have implications for building quality models across project types.
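The tailored meta-analysis approach is not detailed in the abstract; a common way to pool correlation coefficients, shown below as an assumption, is the Fisher z-transformation with per-study weights of n - 3. The coefficients and sample sizes are illustrative.

```python
# Pooling per-study Spearman coefficients via Fisher z-transformation.
import numpy as np

rho = np.array([0.42, 0.55, 0.31, 0.60])   # illustrative per-study coefficients
n   = np.array([120, 85, 200, 60])          # illustrative sample sizes

z = np.arctanh(rho)                         # Fisher z-transform
z_bar = np.sum((n - 3) * z) / np.sum(n - 3) # inverse-variance weighted mean
rho_pooled = np.tanh(z_bar)                 # back-transform to correlation scale
print("pooled rho = %.3f" % rho_pooled)
```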
Effectiveness of software product metrics for mobile application tanveer ahmad
This document discusses the effectiveness of software product metrics for mobile applications. It defines effectiveness and explores how metrics can be used to measure the quality, performance, and efficacy of mobile apps. The document reviews literature on software metrics and their importance. It also examines different types of product metrics like size, complexity, and defect metrics. Finally, it proposes using a statistical simulation approach and developing a new measurement tool called the Effectiveness Calculation Model for Mobile Applications to quantify mobile app performance using computational mathematics.
The most important aim of software engineering is to improve software productivity and quality of software product and further reduce the cost of software and time using engineering and management techniques.Broadly speaking, software engineering initiative has been introduced during software crisis period to describe the collection of techniques that apply engineering and management skills to the construction and
support of software process and products. There is no universally agreed theory for software measurement and the software metrics are useful for obtaining the information on evaluation of process and product in software engineering. It helps to plan and carry out improvement in software organizations and to provide objective information about project performance, process capability and product quality. The process capability is extremely important for software industry because the quality of products is largely determined by the quality of the processes. The make use of of existing metrics and development of innovative software metrics will be important factors in future software engineering process and product development. In future, research work will be based on using software metrics in software development for the development of the time schedule, cost estimates and software quality and can be improved through software metrics. The permanent application of measurement based methodologies is used to the software process and its products to provide important and timely management information, together with the use of those techniques to improve that software process and its products. This research paper mainly concentrates on the overview of unique basics of software measurement and exclusive fundamentals of software metrics in software engineering.
An Elite Model for COTS Component Selection ProcessIJEACS
This document presents a multi-agent approach for selecting commercial off-the-shelf (COTS) software components. It proposes a semi-automated model called ABCS that uses multiple agents to identify suitable candidate components based on requirements. The agents each handle sub-tasks like matching requirements, evaluating security, cost-benefit analysis, and integration testing. They coordinate to produce a weighted list of candidates from which experts can select the most suitable component. The model aims to reduce the time and improve the knowledge involved in COTS component selection.
Transitioning IT Projects to Operations Effectively in Public Sector : A Case...ijmpict
Operational effectiveness is measured by the application availability to end-users and the extent of
convenient usage of the application to perform their business functions. This paper demonstrates how
varying project transition process can affect the operational effectiveness. This explanatory case study
uses various projects in a South Australian government agency as the candidates for evaluation. With
various applications that existed in the production environment, the end-users had varying levels of
satisfaction. This research analyses factors influencing the operational efficiency the projects in transition
from project delivery into operations. The evidence clearly demonstrates criticality of the transition
process of applications from project delivery phase to operations phase. The research analyses the
findings specific to government agencies and presents recommendations. These findings can be useful to
public sector agencies for improving availability of IT applications in operations.
IRJET- Factors in Selection of Construction Project Management Software i...IRJET Journal
The document discusses factors to consider when selecting construction project management software in India. It conducted interviews with 15 experts in the construction industry with experience ranging from 5-30 years. The interviews aimed to understand the software selection process. Based on the literature review and interviews, the document proposes a model for software selection with 8 steps: 1) identify software options, 2) review organization policies, 3) analyze the project's needs, 4) analyze the client's needs, 5) inquire the purpose of planning, 6) analyze software performance and price, 7) check available skills, and 8) select and use software. The model categorizes factors as either project specific or general to guide effective software selection.
The performance of an algorithm can be improved using a parallel computing programming approach. In this study, the performance of bubble sort algorithm on various computer specifications has been applied. Experimental results have shown that parallel computing programming can save significant time performance by 61%-65% compared to serial computing programming.
The document presents a changeability evaluation model for object-oriented software. It begins with an introduction to changeability and its importance. It then reviews existing literature on measuring changeability. A relationship is established between changeability and object-oriented design properties like coupling, inheritance, and polymorphism. The paper then develops a changeability evaluation model using multiple linear regression. The model relates changeability as the dependent variable to object-oriented design metrics as independent variables. The model is validated experimentally using data from class diagrams, showing it is highly significant.
The peer-reviewed International Journal of Engineering Inventions (IJEI) is started with a mission to encourage contribution to research in Science and Technology. Encourage and motivate researchers in challenging areas of Sciences and Technology.
Software Quality Engineering is a broad area that is concerned with various approaches to improve software quality. A quality model would prove successful when it suffices the requirements of the developers and the consumers. This research focuses on establishing semantics between the existing techniques related to the software quality engineering and thereby designing a framework for rating software quality.
A Novel Optimization towards Higher Reliability in Predictive Modelling towar...IJECEIAES
Although, the area of software engineering has made a remarkable progress in last decade but there is less attention towards the concept of code reusability in this regards.Code reusability is a subset of Software Reusability which is one of the signature topics in software engineering. We review the existing system to find that there is no progress or availability of standard research approach toward code reusability being introduced in last decade. Hence, this paper introduced a predictive framework that is used for optimizing the performance of code reusability. For this purpose, we introduce a case study of near real-time challenge and involved it in our modelling. We apply neural network and Damped-Least square algorithm to perform optimization with a sole target to compute and ensure highest possible reliability. The study outcome of our model exhibits higher reliability and better computational response time
Actually, software products are increasing in a fast way and are used in almost all activities of human life.
Consequently measuring and evaluating the quality of a software product has become a critical task for
many companies. Several models have been proposed to help diverse types of users with quality issues. The
development of techniques for building software has influenced the creation of models to assess the quality.
Since 2000 the construction of software started to depend on generated or manufactured components and
gave rise to new challenges for assessing quality. These components introduce new concepts such as
configurability, reusability, availability, better quality and lower cost. Consequently the models are
classified in basic models which were developed until 2000, and those based on components called tailored
quality models. The purpose of this article is to describe the main models with their strengths and point out
some deficiencies. In this work, we conclude that in the present age, aspects of communications play an
important factor in the quality of software.
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...IJCSES Journal
With the sharp rise in software dependability and failure cost, high quality has been in great demand.However, guaranteeing high quality in software systems which have grown in size and complexity coupled with the constraints imposed on their development has become increasingly difficult, time and resource consuming activity. Consequently, it becomes inevitable to deliver software that have no serious faults. In
this case, object-oriented (OO) products being the de facto standard of software development with their unique features could have some faults that are hard to find or pinpoint the impacts of changes. The earlier faults are identified, found and fixed, the lesser the costs and the higher the quality. To assess product quality, software metrics are used. Many OO metrics have been proposed and developed. Furthermore,
many empirical studies have validated metrics and class fault proneness (FP) relationship. The challenge is which metrics are related to class FP and what activities are performed. Therefore, this study bring together the state-of-the-art in fault prediction of FP that utilizes CK and size metrics. We conducted a systematic literature review over relevant published empirical validation articles. The results obtained are
analysed and presented. It indicates that 29 relevant empirical studies exist and measures such as complexity, coupling and size were found to be strongly related to FP.
This thesis focuses on assessing, evaluating, and managing risks in software development projects for Indian software companies. It aims to identify project risks, analyze dimensions of specific risks through primary research, and study an existing project to identify risks and mitigation strategies. The document discusses related work, project and software development lifecycles, types of risks, the research methodology used, a case study of a project called ERWINA, and concludes with recommendations for improving risk management practices.
IRJET - Scrutinizing Attributes Influencing Role of Information Communication...IRJET Journal
This document discusses attributes that influence the role of information communication technology (ICT) in building construction projects. It identifies 36 attributes from literature that were narrowed down to 31 relevant attributes through a pilot survey. The attributes were then analyzed using the Delphi method and analytical hierarchy process (AHP) to prioritize the most important ones. The Delphi method identified 24 key attributes from expert feedback. These attributes were then analyzed and ranked using AHP, which created a hierarchy and determined global and local weights to identify the most significant attributes. Management attributes like experience of work teams and training to use ICT tools were found to be most important based on the AHP analysis. The study aims to better understand how ICT can improve construction processes and benefits
Insights on Research Techniques towards Cost Estimation in Software Design IJECEIAES
This document summarizes research on techniques for cost estimation in software design. It begins by describing common cost estimation techniques like Constructive Cost Modeling (COCOMO) and Function Point Analysis. It then analyzes research trends in cost estimation, effort estimation, and fault prediction based on literature from 2010 to present. Fewer than 50 papers were found related to overall cost estimation, less than 25 for effort estimation, and only 9 for fault prediction. The document then reviews existing research addressing general cost estimation, enhancement of Function Point Analysis, statistical modeling approaches, cost estimation for embedded systems, and estimation for fourth generation languages and NASA projects. Most techniques use COCOMO or extend existing models with techniques like fuzzy logic, neural networks, or statistical
Productivity Factors in Software Development for PC PlatformIJERA Editor
Identifying the most relevant factors influencing project performance is essential for implementing business strategies by selecting and adjusting proper improvement activities. The two major classification algorithms CRT and ANN that were recommended by the Auto Classifier tool in SPSS Modeler used for determining the most important variables (attributes) of software development in PC environment. While the accuracy of classification of productive versus non-productive cases are relatively close (72% vs 69%), their ranking of important variables are different. CRT ranks the Programming Language as the most important variable and Function Points as the least important. On the other hand, ANN ranks the Function Points as the most important followed by team size and Programming Language.
Using DEMATEL Method to Analyze the Causal Relations on Technological Innovat...drboon
This study analyzes the technology innovation capabilities (TICs) evaluation factors of enterprises by applying the Decision Making Trial and Evaluation Laboratory (DEMATEL) method. Based on the literature reviews, six main perspectives and sixteen criteria were extracted and then validated by six experts. A questionnaire was constructed and answered by eleven experts. Then the DEMATEL method was applied to analyze the importance of criteria and the casual relations among the criteria were constructed. The result showed that the innovation management capability perspective was the most important perspective and influenced the remaining perspectives. This work also presents the significant criteria for each perspective.
The document discusses key concepts for managing software projects including the four Ps of project management: People, Product, Process, and Project. It describes stakeholders and team structures, and emphasizes establishing clear objectives and scope, tracking progress, and learning lessons through post-mortem reviews. Metrics for both processes and products are discussed to assess status, risks, and quality in order to guide improvement.
Testability measurement model for object oriented design (tmmood)ijcsit
Measuring testability early in the development life cycle especially at design phase is a criterion of crucial importance to software designers, developers, quality controllers and practitioners. However, most of the
mechanism available for testability measurement may be used in the later phases of development life cycle.
Early estimation of testability, absolutely at design phase helps designers to improve their designs before
the coding starts. Practitioners regularly advocate that testability should be planned early in design phase.
Testability measurement early in design phase is greatly emphasized in this study; hence, considered significant for the delivery of quality software. As a result, it extensively reduces rework during and after implementation, as well as facilitate for design effective test plans, better project and resource planning in a practical manner, with a focus on the design phase. An effort has been put forth in this paper to recognize the key factors contributing in testability measurement at design phase. Additionally, testability
measurement model is developed to quantify software testability at design phase. Furthermore, the relationship of Testability with these factors has been tested and justified with the help of statistical measures. The developed model has been validated using experimental tryout. Finally, it incorporates the empirical validation of the testability measurement model as the author’s most important contribution.
Ijartes v2-i1-001Evaluation of Changeability Indicator in Component Based Sof...IJARTES
The maintenance of a software system is a major cost concern, and maintaining a software system depends on how changes are made to it. The maintainability of a system depends on the flow of the software, its design pattern and the component-based software system (CBSS). The maintainability phase of a software system has four parts: analyzing, testing, stability, and the changes made to the system. In some application areas these systems have emerged very rapidly. There are many companies which purchase software instead of developing it; these companies have no interest in testing the system but want smoothness in the flow of the system during changes.
Changeability is one of the characteristics of maintainability. Software changeability is associated with refactoring, which makes code simpler and easier to maintain (enabling all programmers to improve their code). Factors that affect changeability include coupling between the modules, lack of code comments, and the naming of functions and variables. Basically, changeability is the ability of a product or software to change the structure of the program; it is the rate at which the product allows modification of its components.
In this paper, changeability-based cost estimation is performed. Initially four components are taken and evaluated based on coupling, cohesion and interface metrics. Next, some changes are made to the existing components and the components are evaluated again. On the basis of these two evaluations, a conclusion is drawn about changeability cost.
As a software system continues to grow in size and complexity, it becomes increasingly difficult to understand and manage. Software metrics are units of software measurement. As improvements in coding tools allow software developers to produce larger amounts of software to meet ever expanding requirements, a method to measure software products, processes and projects must be used. In this article, we first introduce software metrics, including the definition of metrics and the history of the field. We aim at a comprehensive survey of the metrics available for measuring attributes related to software entities. Some classical metrics, such as Lines of Code (LOC), the Halstead complexity metric (HCM), Function Point analysis and others, are discussed and analyzed. We then bring up complexity metric methods, such as McCabe complexity metrics and object oriented metrics (the C&K method), with real world examples. The comparison and relationship of these metrics are also presented.
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi...ijseajournal
Early prediction of software quality is important for better software planning and controlling. In early development phases, design complexity metrics are considered useful indicators of software testing effort and some quality attributes. Although many studies investigate the relationship between design complexity and cost and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 different data sets from 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good quality attribute indicators. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The results provide some implications for building quality models across project types.
Class quality evaluation using class quality scorecardsIAEME Publication
The document describes a Class Breakpoint Analyzer tool that evaluates software quality using metrics. The tool extracts metrics like Weighted Methods per Class (WMC), Depth of Inheritance Tree (DIT), Number of Children (NOC), and Lack of Cohesion in Methods (LCOM) from source code. Threshold values for each metric indicate if a class needs restructuring. The tool generates a scorecard to determine if a class is overloaded or saturated. This helps improve reusability of existing software and evaluate code quality for junior programmers. The tool uses metrics from the Chidamber and Kemerer (CK) suite to analyze classes and suggest where to break classes for better design.
Contributors to Reduce Maintainability Cost at the Software Implementation PhaseWaqas Tariq
This document discusses factors that can reduce software maintenance costs during the implementation phase. It identifies that maintenance costs are highest during software development phases. The objective is to define criteria to assess software quality characteristics and assist during implementation. This will help reduce maintenance costs by creating criteria groups to support writing standard code, developing a model to apply criteria, and increasing understandability. Student groups will study code standardization, write programs, and test software maintenance on programs to validate the model and proposed criteria.
This document analyzes and compares maintainability metrics for aspect-oriented software (AOS) and object-oriented software (OOS) using five projects. It discusses metrics like number of children, depth of inheritance tree, lack of cohesion of methods, weighted methods per class, and lines of code. The results show that for most metrics like NOC, DIT, LCOM, and WMC, the mean values are higher for OOS compared to AOS, indicating that AOS is generally more maintainable based on these metrics. LOC is also lower on average for AOS. The study concludes that an AOP version is more maintainable than an OOP version according to the chosen metrics.
AN IMPROVED REPOSITORY STRUCTURE TO IDENTIFY, SELECT AND INTEGRATE COMPONENTS...ijseajournal
An ultimate goal of software development is to build high quality products. The customers of the software industry always demand high-quality products quickly and cost effectively. Component-based development (CBD) is the most suitable methodology for software companies to meet the demands of the target market. To adopt CBD, software development teams have to customize generic components that are available in the market, and it is very difficult for development teams to choose suitable components from the millions of third party and commercial off the shelf (COTS) components. On the other hand, the development of an in-house repository is tedious and time consuming. In this paper, we propose an easy and understandable repository structure to provide helpful information about stored components, such as how to identify, select, retrieve and integrate them. The proposed repository will also provide previous assessments by developers and end-users of the selected component. It will help software companies by reducing customization effort, improving the quality of developed software and preventing the integration of unfamiliar components.
This document discusses requirement metrics that can be used to measure and improve software quality during the requirements engineering phase of the software development lifecycle. It describes several types of requirement metrics, including size metrics, traceability metrics, completeness metrics, and volatility metrics. These metrics provide insight into factors like the scope and complexity of requirements, consistency between requirement levels, requirements changes over time, and gaps or issues in requirements documentation. Tracking and analyzing these metrics during requirements can help software developers and analysts enhance software quality from the early stages of development.
ANALYSIS OF SOFTWARE QUALITY USING SOFTWARE METRICSijcsa
Software metrics have a direct link with measurement in software engineering. Correct measurement is a precondition in any engineering field, and software engineering is no exception: as the size and complexity of software increase, manual inspection of software becomes a harder task. Most software engineers worry about the quality of software and how to measure and enhance it. The overall objective of this study was to assess and analyze the software metrics used to measure software products and processes.
In this study, the researcher used a collection of literature from various electronic databases, available since 2008, to understand the software metrics. The study identifies software quality as a means of measuring how software is designed and how well the software conforms to that design. Some of the variables considered for software quality are correctness, product quality, scalability, completeness and absence of bugs. However, the quality standard used by one organization differs from that of others, so it is better to apply software metrics to measure the quality of software, together with the current most common software metrics tools, to reduce the subjectivity of fault assessment during the evaluation of software quality. The central contribution of this study is an overview of software metrics that illustrates the development of this area, together with a critical analysis of the main metrics found in the literature.
Software Metrics for Identifying Software Size in Software Development ProjectsVishvi Vidanapathirana
This paper identifies the software metrics best suited to measuring the size of software in the current information technology (IT) industry.
The document discusses software metrics and measuring software. It begins with an anecdote about a woman incorrectly trying to measure the length of software with a physical ruler. It then discusses that while software cannot be physically measured, it can be measured through various software metrics. The document goes on to describe different types of software metrics including product, process, and resource metrics and how they are used to measure characteristics of software and the development process.
One of the core quality assurance features, combining fault prevention and fault detection, is often known as the testability approach. Many assessment techniques and quantification methods have evolved for software testability prediction, which identify testability weaknesses or factors and thereby help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at various phases of the object oriented software development life cycle. The aim is to find the metrics suite best suited to improving software quality through software testability support. The ultimate objective is to establish the groundwork for reducing testing effort by improving software testability and its assessment, using well planned guidelines for object-oriented software development with the help of suitable metrics.
Decision Making Framework in e-Business Cloud Environment Using Software Metr...ijitjournal
Cloud computing is one of the most important technologies in the IT industry, enabling companies to offer access to their systems and application services on a pay-per-use basis. As a result, several enterprises, including Facebook, Microsoft, Google and Amazon, have started offering cloud services to their clients. Quality software is crucial in market competition, and this paper presents a hybrid framework based on the goal/question/metric paradigm to evaluate the quality and effectiveness of software products at the project, product and organization levels in a cloud computing environment. Our approach supports decision making at the project, product and organization levels using neural networks and three angular metrics: project metrics, product metrics, and organization metrics.
Relational Analysis of Software Developer’s Quality AssuresIOSR Journals
This document discusses a relational analysis of software developer quality and measures. It begins by introducing the importance of software architecture and development models in ensuring project success. It then discusses measuring processes, products and resources in software engineering: internal attributes like size and complexity can be measured from the product alone, while external attributes like reliability require executing the code. The research aims to measure internal attributes of the process. It outlines different types of process and product metrics used to measure properties and quality. Finally, it discusses specific defect and lines of code metrics used during implementation to estimate defects and code size.
A novel defect detection method for software requirements inspections IJECEIAES
The requirements form the basis for all software products, yet requirements are often imprecisely stated when scattered between development teams. As a result, software applications are released with bugs, missing functionalities, or loosely implemented requirements. In the literature, only a limited number of related works have been developed as tools for software requirements inspections. This paper presents a methodology to verify that the system design fulfills all functional requirements. The proposed approach contains three phases: requirements collection, facts collection, and a matching algorithm. The feedback results enable analysts and developers to make a decision about the initial application release while taking missing or over-designed requirements into consideration.
Hard work matters for everyone in everythinglojob95766
The document discusses various metrics for measuring software products and projects. It describes direct and indirect software measurement and different types of software metrics including product, process, and project metrics. It then outlines several specific metrics for measuring size, functions, objects, use cases, and web applications. These metrics can be used to assess quality, complexity, and effort required for software products and projects.
Algorithm Example: For the following task, use the random module (.docx)daniahendric
Algorithm Example
For the following task:
Use the random module to write a number guessing game.
The number the computer chooses should change each time you run the program.
Repeatedly ask the user for a number. If the number is different from the computer's let the user know if they guessed too high or too low. If the number matches the computer's, the user wins.
Keep track of the number of tries it takes the user to guess it.
An appropriate algorithm might be:
Import the random module
Display a welcome message to the user
Choose a random number between 1 and 100
Get a guess from the user
Set a number of tries to 0
As long as their guess isn’t the number
Check if guess is lower than computer
If so, print a lower message.
Otherwise, is it higher?
If so, print a higher message.
Get another guess
Increment the tries
Repeat
When they guess the computer's number, display the number and their tries count
Notice that each line in the algorithm corresponds to roughly a line of code in Python, but there is no actual code in the algorithm. Rather, the algorithm lays out what needs to happen, step by step, to achieve the program.
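For concreteness, here is one way the algorithm above might be translated to Python. This is a minimal sketch, not taken from the original document; one small deviation is that the first guess is counted here, so the final tally includes the winning guess.

```python
# A minimal sketch of the guessing-game algorithm above.
import random

print("Welcome to the number guessing game!")
number = random.randint(1, 100)   # the number changes each time the program runs
guess = int(input("Guess a number between 1 and 100: "))
tries = 1                         # the first guess counts as one try

while guess != number:            # as long as their guess isn't the number
    if guess < number:
        print("Too low.")
    else:
        print("Too high.")
    guess = int(input("Guess again: "))
    tries += 1                    # increment the tries

print(f"Correct! The number was {number}. You took {tries} tries.")
```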
Software Quality Metrics for Object-Oriented Environments
AUTHORS:
Dr. Linda H. Rosenberg, Unisys Government Systems, Goddard Space Flight Center, Bld 6 Code 300.1, Greenbelt, MD 20771 USA
Lawrence E. Hyatt, Software Assurance Technology Center, Goddard Space Flight Center, Bld 6 Code 302, Greenbelt, MD 20771 USA
I. INTRODUCTION
Object-oriented design and development are popular concepts in today’s software development
environment. They are often heralded as the silver bullet for solving software problems. While
in reality there is no silver bullet, object-oriented development has proved its value for systems
that must be maintained and modified. Object-oriented software development requires a
different approach from more traditional functional decomposition and data flow development
methods. This includes the software metrics used to evaluate object-oriented software.
The concepts of software metrics are well established, and many metrics relating to product
quality have been developed and used. With object-oriented analysis and design methodologies
gaining popularity, it is time to start investigating object-oriented metrics with respect to
software quality. We are interested in the answer to the following questions:
• What concepts and structures in object-oriented design affect the quality of the
software?
• Can traditional metrics measure the critical object-oriented structures?
• If so, are the threshold values for the metrics the same for object-oriented designs as for
functional/data designs?
• Which of the many new metrics found in the literature are useful to measure the critical
concepts of object-oriented structures?
II. METRIC EVALUATION CRITERIA
While metrics for the traditional functional decomposition and data analysis design appro ...
Identification, Analysis & Empirical Validation (IAV) of Object Oriented Design (OO) Metrics as Quality Indicators
International Journal on Recent and Innovation Trends in Computing and Communication, ISSN: 2321-8169, Volume 5, Issue 8, pp. 31-40. IJRITCC | August 2017, Available @ http://www.ijritcc.org
Identification, Analysis & Empirical Validation (IAV) of Object Oriented Design (OO) Metrics as Quality Indicators
Dr. Brajesh Kochar 1, Shailendra Singh Gaur 2, Dinesh Kumar Bhardwaj 3
1 Department of C.S.E., Bhagwan Parshuram Institute of Technology, G.G.S.I.P.U., Delhi, India
2 Department of I.T., Bhagwan Parshuram Institute of Technology, G.G.S.I.P.U., Delhi, India
3 Department of C.S.E., Bhagwan Parshuram Institute of Technology, G.G.S.I.P.U., Delhi, India
Abstract: Metrics and measures are closely inter-related. A measure is defined as a way of specifying the amount, dimension, capacity or size of some attribute of a product in a quantitative manner, while a metric is the unit used for measuring an attribute. Software quality is one of the major concerns that needs to be addressed and measured. Object oriented (OO) systems require effective metrics to assess the quality of software. This paper is designed to identify attributes and measures that can help in determining and affecting quality attributes.
The paper conducts an empirical study by taking the public dataset KC1 from the NASA project database. It is validated by applying statistical techniques like correlation analysis and regression analysis. After analysis of the data, it is found that the metrics SLOC, RFC, WMC and CBO are significant and can be treated as quality indicators, while the metrics DIT and NOC are not significant. These results have a significant impact on improving software quality.
Keywords: CK object oriented (OO) design metrics, software quality, C++, internal and external attributes.
1. INTRODUCTION
The metrics used in software engineering are employed by every industry to enhance, develop and maintain software. Software metrics are vast in number and depend on the type of software attribute the user wants to estimate. There are mainly two categories of software metrics: process metrics and product metrics. Process metrics deal with how much effort is required (person-months), the quantity of resources and the methodology involved. Product metrics deal with specifications of the project like requirements, complexity, cohesion, coupling, reliability and maintenance. Measuring a software product involves finding the number of lines of code (LOC), the complexity of the code and the test cases generated during the testing process.
It is not feasible to design a fully fault-free system, owing to changing business requirements and the complexity of software. A considerable amount of research is being done to improve software quality by innovating novel techniques. Object oriented (OO) metrics are commonly used for quality estimation. As the name suggests, OO metrics are related to the measurement of design characteristics like encapsulation, inheritance, information hiding and message passing. Software quality is measured in terms of metrics: a measured value assigned to a product to gauge its quality. So, it can be said that it is possible to predict software processes by relying on software metrics. Software metrics act as a crucial source of information for decision making [22]. Testing and validation of all parts of software is time consuming and costly, so it is essential to identify attributes that can predict fault proneness and act as measurable quality attributes. This requires proper selection of product design metrics.
The objective of the study is to explore various metrics and validate them by taking the public KC1 dataset from the NASA project database. The remainder of the paper is organized as follows: Section 2 introduces the methodology of the study. Section 3 covers terminology and the background of studies conducted by various researchers. Section 4 conducts the empirical study by collecting, processing and analyzing data using various statistical techniques. Section 5 provides answers to the research questions. Section 6 concludes the paper.
2. METHODOLOGY
The research elements are not limited to reviewing the literature; the analysis of the metrics and measures of the quality factors is studied under the unified approach of Kitchenham and Charters [1].
The methodology covers the following points:
To explore various studies conducted by researchers in the context of OO metrics and software quality.
To make users aware of software related terminologies by providing suitable examples.
To identify OO metrics that can act as quality indicators by predicting fault prone classes.
To validate the metrics by taking a dataset and evaluating them through statistical techniques.
2.1 Research questions to be answered at the end of the study
RQ1. How can effort be estimated using the Lines of Code (LOC) measure?
RQ2. How can the % error in program length and in estimated program level be measured if the operands and operators are given? Does an error in length affect the quality of software?
3. STATE OF ART & TERMINOLOGY
3.1 State of Art
Continuous efforts and studies have been conducted on finding and measuring metrics for object oriented (OO) design systems. External metrics deal with features like portability, reliability, usability, etc., while internal factors deal with the internal structure of software modules: cohesion and coupling.
Anna et al. proposed a framework for measuring reusability by refining the CK metrics [4, 5]. Zhao and Xu [6, 7] defined cohesion and coupling metrics that work on dependency graphs between software modules. Coupling is implemented with the help of AspectJ; the coupling measures are identified by dependencies between aspects and classes only. Kumar et al. [8] designed a framework for measuring the complexity of software, but the model failed to measure the interaction among the various components of software.
Several OO metrics have been proposed in order to predict software quality with the help of classification and prediction models. It is important to know the metrics that need to be measured, like effort, productivity, fault tolerance, etc. The use of OO methodology is widespread in all applications that need development of software. Previous research studies are referenced that report the empirical validation of various product metrics like NOC, LCOM and the McCabe metric [23], [24], [25]. Li and Henry [12] proposed a prediction model consisting of ten OO metrics, using statistical analysis techniques in order to derive the relationship between maintenance and metrics. Khoshgoftaar et al. [13] used neural networks (NN) to estimate software quality; they compared a parametric model and an ANN model to estimate accuracy. Giovani [14] relates static metrics to software fault proneness by computing static metrics (cyclomatic complexity) and dynamic metrics (dataflow coverage). Emam et al. [15] devised a model to predict faulty classes in a Java application. Rakesh K. and Gurvinder Kaur (2011) [16] studied the comparison of complexity in accordance with object oriented metrics; the study highlighted the object-oriented software metrics proposed in the 1990s by Chidamber and Kemerer, and several studies were conducted to validate the metrics and discovered several deficiencies. Gyimothy, T., Ferenc, R., & Siket, I. (2005) [17] conducted a study on the empirical validation of object-oriented metrics on open source software for fault prediction. The study is based on the source code of the well-known open source Web and e-mail suite Mozilla. It used modified versions of these metrics, adding one more object-oriented metric, Lack of Cohesion on Methods (LCOM), and the well-known lines of code metric (LOC), and applied logistic regression and machine learning methods to predict the fault proneness of the code. A few software organizations like TRW [26] and SEI [27] have designed their own product metrics to build cost and fault-proneness models.
3.2 Terminology
Before going into the literature study, it is necessary to make users aware of some software related terminology.
Software Complexity: According to Basili [2], complexity is defined as a measure of the resources needed by a system while performing a given task. Complexity has no units, as it is measured as a function of time and space, and it is measured on the basis of the number of inputs. There is quite a difference between the terms complicated and complex. Complicated means the solution is not known at present but may be found later. Complex specifies that the interactions between software modules are difficult to understand.
Entity and Attributes: Measurement is done on entities and attributes. An entity is defined as an object, event or action over a specific time. An attribute is a characteristic of an entity, like the size of a program. Entities and attributes are dependent on each other: saying only "measure a program" is a vague sentence, because the attribute is not specified, while "measure the size of a program" is a valid sentence holding both an entity and an attribute.
Table 1: Types of Attributes
Internal Attributes:
1. These attributes depend only on the entity.
2. They are easy to measure.
Examples: size, cost, effort, etc.
External Attributes:
1. They depend both on the entity and on the context of the entity.
2. They are difficult to measure, because several other factors and parameters have to be taken into consideration.
Examples: maintenance, reliability, etc.
Table 2: Definition of External Attributes
1. Effectiveness & Extendibility: Allows adaptive changes in the system design as per user requirements. It covers the inheritance metric.
2. Functionality & Reusability: Allows redefining an existing component in the software system and reusing it in order to achieve a higher outcome. It relates to the complexity metric.
3. Understandability: Measures the degree of complexity and the level of programmer needed to understand the design of the software. It relates to the coupling and cohesion metrics.
Types of Measures
Measures depend on the size of the program and on the structure and nature of its modules. A few measures commonly used in the software industry:
(a) Lines of Code (LOC): The simplest and best understood measure, used for the prediction of effort and fault proneness. LOC counts the number of executable lines in given code.
(b) Halstead's Software Science: Used to identify the basic elements of a program and measure them to predict their attributes. It states that a program is a combination of operators and operands.
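As background for RQ2, which asks how the % error in estimated program length can be measured when the operators and operands are given, the following minimal sketch (not part of this paper; the counts are invented) applies Halstead's standard length estimator and compares it with the actual length N = N1 + N2:

```python
# Illustrative sketch of Halstead's length estimate and % error. Standard
# formulas; the counts below are made-up example values, not the paper's data.
from math import log2

n1, n2 = 10, 16        # distinct operators, distinct operands
N1, N2 = 48, 62        # total operator and operand occurrences

actual_length = N1 + N2                           # N = N1 + N2
estimated_length = n1 * log2(n1) + n2 * log2(n2)  # N^ = n1*log2(n1) + n2*log2(n2)
pct_error = 100 * (estimated_length - actual_length) / actual_length

print(f"N = {actual_length}, N^ = {estimated_length:.1f}, error = {pct_error:+.1f}%")
```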
(c) Cyclomatic Complexity: Measures the complexity of given code from its control flow graph. It counts the base paths that start from one point and continue until the program finishes. This approach is used to find the number of independent paths through a program; an independent path is any path through the program that contains at least one new condition. The calculation of paths is given by McCabe's cyclomatic metric [3] using the formula:
V(G) = e - n + 2p (Equation 1)
where e = number of edges, n = number of nodes in the graph, and p = number of connected components; V(G) is the cyclomatic number of the graph.
Fig 1: Flow Graph
Each node in the graph represents a block of code in the program, and the arcs represent branches in the program. In a flow graph the flow is sequential, and it is based on the assumption that each node can be reached from the starting node and can reach the ending node. From the graph of Fig. 1, the value of the cyclomatic complexity is:
V(G) = 9 - 6 + 2 = 5, where e = 9, n = 6 and p = 1.
There are 5 independent paths in the flow graph:
Path 1: acf
Path 2: abef
Path 3: adcf
Path 4: abeacf
Path 5: abebef
There are two other methods for calculating the complexity:
(1) It is equal to the number of predicate (decision) nodes plus one: V(G) = P + 1, where P = number of predicate nodes in the graph.
(2) It is equal to the number of regions of the flow graph.
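To make the computation concrete, here is a small sketch (not from the paper) that computes V(G) = e - n + 2p from an edge list. The edges are reconstructed from the independent paths listed above for Fig. 1, so they are an assumption rather than data taken from the figure itself:

```python
# A minimal sketch: McCabe's cyclomatic complexity from an edge list.
# Edges reconstructed from the five independent paths for Fig. 1 (assumed).
edges = [("a", "c"), ("c", "f"), ("a", "b"), ("b", "e"), ("e", "f"),
         ("a", "d"), ("d", "c"), ("e", "a"), ("e", "b")]
nodes = {endpoint for edge in edges for endpoint in edge}
p = 1  # the flow graph is a single connected component

v_g = len(edges) - len(nodes) + 2 * p
print(v_g)  # -> 5, matching V(G) = 9 - 6 + 2
```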
Object Oriented (OO) Metrics
(a) CK Metrics
The word CK stands for Chidamber and Kemerer [21].
They are of six types:
(1) Depth of Inheritance Tree (DIT):- This metric is defined
as length of root to the deepest leaf node in tree. In this
metric, the class becomes more complex on going from top
to bottom. Depth of tree is also called as height of tree.
(2) Number of Children (NOC):- This metric counts the number of classes in the inheritance tree directly below a class. It affects the reusability attribute: if NOC increases, the amount of effort required in testing also increases.
(3) Coupling Between Objects (CBO):- It counts the number of other modules that are coupled to the current module, either as a client or as a supplier. An increase in CBO decreases reusability. It is used to measure complexity, reusability and quality.
(4) Response For a Class (RFC):- It is the number of methods that can be executed in response to a message received by an object of the class; these methods form the response set of the class. The greater the number of methods, the greater the complexity of the class.
(5) Lack of Cohesion On Methods (LCOM):- LCOM is the number of method pairs whose similarity is zero minus the number of method pairs whose similarity is not zero. It is not considered a good metric of quality.
(6) Weighted Methods per Class (WMC):- WMC is the number of all member functions and operators defined in each class. It is used to measure understandability, reusability, maintainability, complexity and quality.
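To make the inheritance metrics concrete, here is a small sketch (our own illustration over a hypothetical class hierarchy, not data from the study) that computes DIT and NOC from a parent map:

# DIT/NOC sketch over a hypothetical single-inheritance hierarchy.
# parent[c] is the direct superclass of c, or None for the root.
parent = {"Base": None, "Shape": "Base", "Circle": "Shape",
          "Square": "Shape", "Ellipse": "Circle"}

def dit(cls):
    # Depth of Inheritance Tree: path length from cls up to the root.
    depth = 0
    while parent[cls] is not None:
        cls = parent[cls]
        depth += 1
    return depth

def noc(cls):
    # Number of Children: classes whose direct superclass is cls.
    return sum(1 for c, p in parent.items() if p == cls)

print(dit("Ellipse"))  # 3: Ellipse -> Circle -> Shape -> Base
print(noc("Shape"))    # 2: Circle and Square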
(b) MOOD Metrics
Srinivasan et al. [9] describe the object-oriented design metrics commonly known as MOOD metrics. MOOD deals with structural aspects such as polymorphism, inheritance, cohesion and coupling. The metrics are defined as below:
(1) Method Inheritance Factor (MIF):- This metric is
defined as ratio of number of inherited methods (NIM) to
the sum of number of inherited methods and number of
defined methods (NDM) in the class.
Mathematically,
MIF = NIM / (NIM + NDM) (Eq. 2)
(2) Attribute Inheritance Factor (AIF):- This metric is
similar to MIF except the methods are replaced by
attributes. It is defined as ratio of number of inherited
attributes (NIA) to the sum of number of inherited attributes
and number of defined attributes (NDA) in the class.
Mathematically,
AIF = NIA / (NIA + NDA) (Eq. 3)
(3) Polymorphism Factor (PF):- This metric is defined as the ratio of the number of overriding methods to the total possible number of overridden methods in a given class. Its value ranges from 0 to 100%.
(4) Coupling Factor (CF):- Coupling means inter-relatedness among modules. This metric is defined as the ratio of the number of possible couplings (NPC) of a given class with other classes to the number of actual classes (NAC) minus 1.
Mathematically,
CF = NPC / (NAC - 1) (Eq. 4)
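A minimal sketch of Equations 2-4 follows; it is our illustration, and the counts NIM, NDM, NIA, NDA, NPC and NAC are hypothetical values that a static analyzer would supply per class:

# MOOD factor sketch using the definitions above.
def mif(nim: int, ndm: int) -> float:
    # Method Inheritance Factor (Eq. 2): inherited / (inherited + defined).
    return nim / (nim + ndm)

def aif(nia: int, nda: int) -> float:
    # Attribute Inheritance Factor (Eq. 3), same form for attributes.
    return nia / (nia + nda)

def cf(npc: int, nac: int) -> float:
    # Coupling Factor (Eq. 4): possible couplings over (classes - 1).
    return npc / (nac - 1)

print(mif(nim=6, ndm=4))   # 0.6
print(aif(nia=2, nda=8))   # 0.2
print(cf(npc=5, nac=11))   # 0.5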
Table 3: Quality factors belonging to various metrics and
measures
4. EMPIRICAL STUDY
4.1. Constituents of empirical study
The first step is to select the dependent and independent variables that need to be measured and validated using OO metrics. Prediction of fault-prone classes in a system can be treated as a good indicator of software quality, so fault proneness is taken as the dependent variable of our study. The relationship between the CK OO design metrics and this dependent variable is also presented in the paper.
The next step is to measure the size of the classes involved in the project. It is known that metrics such as LOC and FPA are language dependent [28, 29], so we interpret the metrics in the context of the C++ programming language. They are defined as follows:
Table 4: Definitions of OO metrics in context of C++
CK OO design metric | Definition in context of C++
WMC (Weighted Methods per Class) | Number of member functions and operators defined in each class. Member functions and operators inherited from a superclass are not counted.
DIT (Depth of Inheritance Tree) | Classes are organized into directed acyclic graphs (DAGs) instead of trees because C++ allows multiple inheritance. It is an inheritance-related OO metric.
NOC (Number of Children) | Number of direct children of each class. It is an inheritance-related OO metric.
CBO (Coupling Between Objects) | If a class uses the member functions of another class, the two are said to be coupled.
RFC (Response For a Class) | Number of C++ functions directly called by member functions or operators of a given class.
LCOM (Lack of Cohesion On Methods) | Number of member-function pairs (x, y) that share no variables minus the number of pairs that share variables.
4.2. Hypothesis
Hypothesis testing is performed in order to validate the relationship between the OO metrics and the fault-proneness dependent variable. The results are later used to describe the above metrics as quality indicators. For each metric, the hypothesis is given
below and is validated on the public data set in the following sections.
HWMC- A class with more member functions is more complex and thus more fault prone.
HDIT- A class located at a deeper level of the inheritance hierarchy is more fault prone.
HCBO- Highly coupled classes are more fault prone because they depend more on the methods and objects of other classes.
HNOC- Classes with a large number of children are more fault prone and require more testing.
HRFC- Classes with larger response sets are likely to be more fault prone.
HLCOM- Classes with low cohesion, i.e. whose member functions do not share variables, are more likely to be fault prone.
4.3. Data Collection
The following components need to be collected:
- LOC of the C++ programs at the end of the implementation phase of the developed project
- Data about the C++ programs
- List of erroneous data found in the testing phase
- Replaced source code of the C++ programs from the maintenance phase
Source of data:
The study uses the public dataset KC1 from the NASA IV&V Facility Metrics Data Program (MDP) repository [30]. The dataset consists of 43 KSLOC of C++ code with 145 classes and 2107 instances.
Data processing:
Empirical analysis of the CK OO metrics and the code metric (SLOC) has been carried out in order to predict the number of faults at different severity levels, using the public data set KC1 (NASA MDP database). The dataset records fault data at method level (faulty classes according to severity level), while the metric information is at class level.
Different severity levels represent the impact of a fault on the performance of the system. In the KC1 NASA dataset, the severity of faults decreases from the 1st to the 5th level, but for simplicity, faults that have a similar impact on the system can be treated as a single category. According to [31], the fault data is categorized into three levels: high (severity 1), medium (severity 2) and low (severities 3, 4 and 5).
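This grouping of the five severity levels into three bands can be written down directly; the following sketch is our illustration, and the list of per-fault severities is hypothetical:

# Severity grouping sketch following [31]: 1 -> high, 2 -> medium, 3-5 -> low.
from collections import Counter

def severity_band(level: int) -> str:
    if level == 1:
        return "high"
    if level == 2:
        return "medium"
    return "low"  # levels 3, 4 and 5 have a similar impact and are merged

faults = [1, 2, 2, 3, 5, 4, 2, 1]  # hypothetical per-fault severity levels
print(Counter(severity_band(l) for l in faults))
# Counter({'medium': 3, 'low': 3, 'high': 2})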
Table 5: Distribution of classes among three severity levels
[31]
Level | No. of faulty classes | % of faulty classes | No. of faults | % of faults
High | 24 | 16.55 | 49 | 7.63
Medium | 58 | 40 | 449 | 69.94
Low | 38 | 26.20 | 144 | 22.43
The intention of this study is to compare and validate the effect of the OO metrics on the quality of software.
4.4. Data Analysis
This section presents the statistical analysis of the six OO metrics and one code metric (SLOC) using descriptive statistics, correlation analysis and regression analysis.
(a) Descriptive Analysis
It reports, for each of the seven metrics, the Minimum (Min), Maximum (Max), Mean, Mode, Standard Deviation (SD) and Median. The table below shows the statistics of the 145 classes from the KC1 dataset.
Table 6: Descriptive statistics of 145 studied C++ classes of
KC1 [32]
Metric | Min | Max | Mean | Mode | S.D. | Median
WMC | 0 | 110 | 17.42 | 8 | 17.45 | 12
RFC | 0 | 222 | 34.38 | 7 | 36.20 | 28
CBO | 0 | 24 | 8.32 | 0 | 6.34 | 8
DIT | 1 | 7 | 2 | 1 | 1.26 | 2
NOC | 0 | 5 | 0.20 | 0 | 0.70 | 0
LCOM | 0 | 100 | 68.71 | 1 | 36.89 | 84
SLOC | 0 | 2313 | 211.1 | 0 | 345.6 | 108
In the above table, the high mean value of LCOM indicates that the classes are not very cohesive.
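Statistics such as those in Table 6 can be reproduced with a few lines of pandas once the class-level metrics are loaded; this is a sketch, and the file name "kc1_classes.csv" and its column names are our assumptions, not artifacts of the original study:

# Descriptive-statistics sketch for the seven metrics of Table 6.
import pandas as pd

df = pd.read_csv("kc1_classes.csv")  # assumed: 145 rows, one per C++ class
metrics = ["WMC", "RFC", "CBO", "DIT", "NOC", "LCOM", "SLOC"]

summary = pd.DataFrame({
    "Min": df[metrics].min(),
    "Max": df[metrics].max(),
    "Mean": df[metrics].mean(),
    "Mode": df[metrics].mode().iloc[0],  # first mode if several exist
    "S.D.": df[metrics].std(),
    "Median": df[metrics].median(),
})
print(summary.round(2))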
Fig 2: WMC metric distribution among faulty classes
Fig 3: Combined distribution results of all metrics
From the above graphs, the DIT and NOC metrics indicate that inheritance is not well suited to, and little used in, the KC1 project, while SLOC shows that the classes are large in size. The values of DIT and NOC are very low, so inheritance is barely observable across the classes of the KC1 dataset. The remaining features, such as abstraction, encapsulation and complexity, are observable.
(b) Correlation Analysis
This technique is used to find the dependency between the dependent and independent variables. The metrics are taken as independent variables, while fault proneness is the dependent variable. The formula is given by Spearman's rank function:
ρ = 1 - 6∑di² / (n(n² - 1)) [33]
where di = xi - yi and n is the sample size of the project.
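In practice, the pairwise rank correlations of Table 7 can be computed directly; the sketch below uses scipy and pandas under the same assumed CSV layout as before:

# Spearman rank-correlation sketch for the metric pairs of Table 7.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("kc1_classes.csv")  # assumed file and column names
metrics = ["WMC", "RFC", "CBO", "DIT", "NOC", "LCOM", "SLOC"]

rho, _ = spearmanr(df[metrics])      # 7x7 matrix of rank correlations
print(pd.DataFrame(rho, index=metrics, columns=metrics).round(3))

# pandas computes the same matrix directly:
print(df[metrics].corr(method="spearman").round(3))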
Table 7: Correlation results after Spearman's rank
Metric | WMC | RFC | CBO | DIT | NOC | LCOM | SLOC
WMC | 1
RFC | 0.528 | 1
CBO | 0.234 | 0.379 | 1
DIT | 0.134 | 0.654 | 0.460 | 1
NOC | 0.026 | 0.015 | -0.001 | -0.032 | 1
LCOM | 0.218 | 0.30 | 0.217 | 0.217 | -0.28 | 1
SLOC | 0.625 | 0.509 | 0.572 | 0.345 | -0.034 | 0.217 | 1
The categorization of correlation values according to Hopkins [34] is shown in Table 8.
Table 8: Category-wise distribution of values
Range of values | Category
< 0.1 | Trivial
0.1 - 0.3 | Minor
0.3 - 0.5 | Moderate
0.5 - 0.7 | Large
0.7 - 0.9 | Very large
0.9 - 1.0 | Perfect
From Table 7, it can be seen that SLOC, CBO, WMC and RFC are strongly correlated with each other. These metrics are therefore not independent, and they introduce redundancy.
(c) Logistic Regression Analysis
It is the most widely used statistical technique for predicting faulty classes in a system. It is of two types: univariate and multivariate.
Univariate logistic regression is used to analyze the individual effect of each independent variable on the dependent variable. It is given by the equation:
P(X) = e^(A0 + A1X) / (1 + e^(A0 + A1X)) [33]
where P is the probability that a fault is found in the system and the Ai are the regression coefficients.
Multivariate logistic regression is used to find the combined effect of all the independent variables together:
P(X1, X2, ..., Xn) = e^(A0 + A1X1 + ... + AnXn) / (1 + e^(A0 + A1X1 + ... + AnXn)) [33]
The details reported for each regression model include the coefficient (coeff), the constant, the p value (statistical significance), the R² value (coefficient of determination) and the standard error (std err) of the estimate.
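Both regression forms can be fitted with statsmodels; the sketch below keeps the same assumed CSV layout, and the 0/1 column "fault" (marking classes faulty at the chosen severity level) is our assumption:

# Univariate and multivariate logistic-regression sketch (statsmodels).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("kc1_classes.csv")  # assumed file and column names
metrics = ["WMC", "RFC", "CBO", "DIT", "NOC", "LCOM", "SLOC"]

# Univariate: one model per metric, reporting coefficient and p value.
for m in metrics:
    X = sm.add_constant(df[[m]])     # A0 (constant) + A1 * metric
    model = sm.Logit(df["fault"], X).fit(disp=0)
    print(m, round(model.params[m], 3), round(model.pvalues[m], 3))

# Multivariate: all metrics enter one model, so the coefficients reflect
# interactions between correlated metrics (cf. Tables 12 and 13).
X = sm.add_constant(df[metrics])
print(sm.Logit(df["fault"], X).fit(disp=0).summary())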
Table 9: Univariate regression analysis for severity fault as high
 | WMC | RFC | CBO | DIT | NOC | LCOM | SLOC
Coeff | 0.12 | 0.0004 | 0.016 | 0.010 | -0.39 | 0.002 | 0.001
Constant | 0.085 | 0.118 | -0.13 | 0.301 | 0.371 | 0.105 | 0.101
R² | 0.61 | 0.051 | 0.117 | 0.0 | 0.02 | 0.012 | 0.131
p value | 0.001 | 0.002 | 0.0 | 0.694 | 0.117 | 0.121 | 0
std err | 0.87 | 0.98 | 0.95 | 1.01 | 1 | 1.004 | 0.93
From the above table, the four metrics SLOC, CBO, RFC and WMC have low p values, so they are significant. SLOC has the maximum R² value, and NOC has a negative coefficient, which would mean that a large number of children leads to fewer faults. DIT, NOC and LCOM are non-significant metrics.
Table 10: Univariate regression analysis for severity fault as medium
 | WMC | RFC | CBO | DIT | NOC | LCOM | SLOC
Coeff | 0.200 | 0.051 | 0.412 | 0.116 | -1.11 | 0.013 | 0.16
Constant | -0.500 | 1.140 | -0.290 | 2.71 | 2.91 | 1.190 | 0.17
R² | 0.116 | 0.071 | 0.112 | 0.0 | 0.09 | 0.012 | 0.391
p value | 0 | 0.02 | 0.0 | 0.82 | 0.193 | 0.14 | 0
std err | 5.91 | 7.62 | 7.12 | 6.91 | 7.81 | 7.82 | 6.87
Table 11: Univariate regression analysis for severity fault as low
 | WMC | RFC | CBO | DIT | NOC | LCOM | SLOC
Coeff | 0.031 | 0.032 | 0.13 | 0.16 | -0.17 | 0.005 | 0.004
Constant | 0.210 | 0.510 | -0.18 | 0.65 | 1.1 | 0.512 | 0.100
R² | 0.110 | 0.031 | 0.147 | 0.007 | 0.002 | 0.008 | 0.421
p value | 0 | 0.012 | 0.0 | 0.2 | 0.5 | 0.34 | 0
std err | 2.14 | 2.18 | 1.98 | 2.27 | 2.17 | 2.27 | 1.95
The results show that overall four metrics (SLOC, RFC, CBO and WMC) are significant.
Table 12: Multivariate regression analysis for severity fault as high
 | WMC | RFC | CBO | DIT | SLOC
Coeff | -0.007 | 0.001 | 0.021 | -0.32 | 0.001
std err | 0.007 | 0.04 | 0.12 | 0.07 | 0.0
p value | 0.201 | 0.16 | 0.12 | 0.001 | 0.07
From the above table, the WMC and DIT coefficients are negative here, although they were positive in the univariate analysis. This occurs due to the interaction between the various metrics included in the regression model.
Table 13: Multivariate regression analysis for severity fault as medium
 | NOC | DIT | SLOC
Coeff | -0.9 | -1.40 | 0.12
std err | 0.59 | 0.41 | 0.01
p value | 0.15 | 0.41 | 0.0
NOC remains negative, and DIT is also negative here, which marks both of them as non-quality indicators.
5. ANSWERING RESEARCH QUESTIONS
RQ1. How to estimate effort using the Lines of Code (LOC) measure?
Boehm devised a formula for estimating maintenance costs that uses a quantity called Annual Change Traffic (ACT), which measures the fraction of the software changed per year in response to change requests. It is given as:
ACT = (KLOCadd + KLOCdel) / KLOCtot
Then the Annual Maintenance Effort is computed on the basis of ACT as:
AME (person-months) = ACT * SDE
where SDE is the software development effort (person-months).
Worked example of the above formula:
Given that the ACT of a software system is 25% per year, the initial development cost was Rs 20 lacs and the total lifetime of the software is 10 years, what is the total cost of the software system?
Total cost = development cost + lifetime * (development cost * ACT)
= 20 + 10 * (20 * 0.25)
= 70 lacs
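The same computation in a few lines (a sketch; the figures are simply those of the worked example above):

# Maintenance-cost sketch for RQ1: ACT and the total lifetime cost.
act = 0.25        # Annual Change Traffic: 25% of the code changes per year
dev_cost = 20.0   # initial development cost (Rs lacs)
lifetime = 10     # years

annual_maintenance = act * dev_cost   # cost analogue of AME = ACT * SDE
total_cost = dev_cost + lifetime * annual_maintenance
print(total_cost)                     # 70.0 lacs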
RQ2. How to measure the % error between the actual and estimated program length when the operands and operators are given? Does the error in length affect the quality of the software?
The table of operators and operands is given below:
Operator | Occurrences | Operand | Occurrences
main () | 1 | - | -
; | 1 | extern variable | 1
for | 2 | main function | 3
== | 3 | found | 2
!= | 4 | lim | 3
getchar | 1 | - | -
() | 1 | - | -
&& | 3 | e | 4
return | 1 | t | 2
++ | 4 | i | 1
printf | 6 | - | -
if | 1 | k | 3
getline | 1 | 0 | 4
while | 3 | MAXLINE | 2
Total (n1 = 14) | Total (N1 = 32) | Total (n2 = 10) | Total (N2 = 25)
Program vocabulary n = n1 + n2 = 24
Program length N = N1 + N2 = 57
Estimated length N^ = n1 log2 n1 + n2 log2 n2
= 14 log2 14 + 10 log2 10
≈ 86.5
% error = (86.5 - 57) / 57 = 0.518, i.e. about 52%
Yes, the error in length affects the quality of the software: the smaller the estimation error, the fewer the changes that have to be made to the software design.
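The RQ2 numbers follow directly from Halstead's formulas; the sketch below simply re-derives them from the counts tabulated above:

# Halstead length-estimation sketch for RQ2 (log base 2, per Halstead).
from math import log2

n1, n2 = 14, 10   # distinct operators, distinct operands
N1, N2 = 32, 25   # total operator and operand occurrences

vocabulary = n1 + n2                         # n = 24
length = N1 + N2                             # N = 57
est_length = n1 * log2(n1) + n2 * log2(n2)   # estimated length, about 86.5
error = abs(est_length - length) / length
print(vocabulary, length, round(est_length, 1), f"{error:.0%}")
# 24 57 86.5 52%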
6. CONCLUSIONS & FUTURE WORKS
Software developers and researchers have acknowledged that object-oriented design metrics play a vital role in software quality prediction. Metrics play an important role in software engineering. There are mainly two categories of software metrics, viz. process metrics and product metrics. Process metrics deal with how much effort is required (person-months), the quantity of resources and the methodology involved. Product metrics deal with properties of the product such as requirements, complexity, cohesion, coupling, reliability and maintainability. Categorization of faults on the basis of their severity is helpful in validating the OO design metrics used to predict the number of faults in a system. The paper validates the CK OO design metrics by employing statistical methods (correlation; univariate and multivariate regression) on the KC1 dataset from the NASA MDP.
After applying the statistical techniques, it is observed that the metrics SLOC, WMC, RFC and CBO are significant and related to each other at the low, medium and high severity levels. DIT and NOC are non-significant metrics, owing to the limited use of inheritance in the dataset and their high p values. The SLOC metric is the most significant and is treated as a quality indicator. The study also concludes that the metrics measuring class size, complexity, cohesion and coupling (SLOC, CBO, WMC, RFC) are more tightly linked to fault proneness than the inheritance metrics (DIT, NOC).
Future Works
As future work, this study can be replicated on industrial project databases in C++, Ada95 and Java. We can also extend this empirical investigation with soft computing methods, applying machine learning techniques (neuro-fuzzy systems, neural networks) in order to produce more refined results. This would lead to a better understanding of the prediction capabilities of the CK OO metrics suite. Selecting the best subsets of predictors in the multivariate regression model, for example by introducing Mallows' Cp into the multivariate analysis, can yield more significant metric values.
REFERENCES
[1] B. Kitchenham and S. Charters, "Guidelines for Performing Systematic Literature Reviews in Software Engineering" (version 2.3), Keele University, UK, Tech. Rep. EBSE-2007-01, 2007.
[2] Basili, V., Briand, L., & Melo, W., "A Validation of Object-Oriented Design Metrics as Quality Indicators", IEEE Transactions on Software Engineering, Vol. 22, No. 10, October 1996.
[3] Puja Sexsena, Monika Saini, "Empirical Studies to Predict Fault Proneness: A Review", International Journal of Computer Applications (0975-8887), Volume 22, No. 8, May 2011.
[4] Sant'Anna, C., Garcia, A., Chavez, C., Lucena, C., and Staa, A.V., "On the Reuse and Maintenance of Aspect-Oriented Software: An Assessment Framework", Proceedings of the Brazilian Symposium on Software Engineering (SEES'03), pp. 19-34, 2003.
[5] Chidamber, S. R. and Kemerer, C. F., A Metrics
Suite for Object-Oriented Design, IEEE
Transactions on Software Engineering, vol. 20 no.
6, pp. 476–493, 1994
[6] Zhao, J. and Xu, B., "Measuring Aspect Cohesion", Proceedings of the 7th International Conference on Fundamental Approaches to Software Engineering (FASE'04), Lecture Notes in Computer Science, Volume 2984, Springer-Verlag, pp. 54-68, 2004.
[7] Zhao, J., "Measuring Coupling in Aspect-Oriented Systems", Technical Report, Information Processing Society of Japan (IPSJ), 2004.
[8] Kumar, R., Grover, P.S., and Kumar, A., A Fuzzy
Logic Approach to Measure Complexity of Generic
Aspect-Oriented Systems, Journal of Object
Technology, Vol. 9, No. 3, 2010.
[9] K.P. Srinivasan, Dr. T.Devi, “A Complete and
Comprehensive Metrics Suite for Object-Oriented
Design Quality Assessment”, International Journal
of Software Engineering and Its Applications 8(2),
2014, 173-188.
[10] Vanitha N, “A Report on the Analysis of Metrics
and Measures on Software Quality Factors – A
Literature Study”, IJCSIT, Vol. 5, 2014
[11] Mukesh Bansal, Dr. C.P. Agarwal, Dr. P. Sasikala, "Predict Software Fault Proneness Using Object Oriented Metrics", International Journal of Computing, Intelligence and Communication Technology, ISSN 2319-748X, 2012.
[12] W. Li and S. Henry, "Object-Oriented Metrics that Predict Maintainability", Journal of Systems and Software, vol. 23, no. 2, pp. 111-122, 1993.
[13] T.M. Khoshgoftaar, E.B. Allen, J.P. Hudepohl and S.J. Aud, "Application of Neural Networks to Software Quality Modeling of a Very Large Telecommunications System", IEEE Transactions on Neural Networks, Vol. 8, No. 4, pp. 902-909, 1997.
[14] Giovanni Denaro, "Estimating Software Fault-Proneness for Tuning Testing Activities", Proceedings of the 22nd International Conference on Software Engineering (ICSE 2000), Limerick, Ireland, June 2000.
[15] K. El Emam, W. Melo and J.C. Machado, "The Prediction of Faulty Classes Using Object-Oriented Design Metrics", Journal of Systems and Software, Elsevier Science, pp. 63-75, 2001.
[16] Dr. Rakesh Kumar and Gurvinder Kaur, "Comparing Complexity in Accordance with Object Oriented Metrics", International Journal of Computer Applications, Foundation of Computer Science, 15(8):42-45, February 2011.
[17] Gyimothy, T., Ferenc, R., & Siket, I., "Empirical Validation of Object-Oriented Metrics on Open Source Software for Fault Prediction", IEEE Transactions on Software Engineering, Vol. 31, No. 10, October 2005.
[18] Rohitt Sharma, Paramjit Singh, Sumit Sharma, “An
Approach Oriented Towards Enhancing a Metric
Performance”, International Journal On Computer
Science And Engineering (IJCSE), 4(5), 2012, 743-
748
[19] Sonia Montagud, Silvia Abrahao, Emilio Insfran,
“A systematic review of quality attributes and
measures for software product line”, Software
Quality Journal, 20(3-4), 2012, 425-486.
[20] Sharma Aman Kumar, Kalia Arvind, Singh Hardeep, "Metrics Identification for Measuring Object Oriented Software Quality", International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume 2, Issue 5, November 2012.
[21] Chidamber, Shyam R., and Chris F. Kemerer, "A Metrics Suite for Object Oriented Design", IEEE Transactions on Software Engineering, 20(6): 476-493, 1994.
[22] W. Harrison, ”Software Measurement: A Decision-
Process Approach,” Advances in Computers, vol.
39, pp. 51-105,1994
[23] V. Basili and D. Hutchens, "Analyzing a Syntactic Family of Complexity Metrics", IEEE Trans. Software Eng., vol. 9, no. 6, pp. 664-673, 1983.
[24] V. Basili, R. Selby, and T.-Y. Phillips, "Metric Analysis and Data Validation Across Fortran Projects", IEEE Trans. Software Eng., vol. 9, no. 6, pp. 652-663, 1983.
[25] S.D. Conte, H.E. Dunsmore, and V.Y. Shen, Software Engineering Metrics and Models, Benjamin/Cummings, 1986.
[26] B. Boehm, Software Engineering Economics, Prentice-Hall, 1981.
[27] F. McGarry, R. Pajerski, G. Page, S. Waligora, V. Basili, and M. Zelkowitz, Software Process Improvement in the NASA Software Engineering Laboratory, Carnegie Mellon Univ., Software Eng. Inst., Technical Report CMU/SEI-95-TR-22, Dec. 1994.
[28] N.I. Churcher and M.J. Shepperd, "Comments on 'A Metrics Suite for Object-Oriented Design'", IEEE Trans. Software Eng., vol. 21, no. 3, pp. 263-265, Mar. 1995.
[29] Chidamber, S.R. and C.F. Kemerer, 1994. A
metrics suite for object-oriented design. IEEE
Trans. Software Eng., 20: 476-493. DOI:
10.1109/32.295895.
[30] Sayyad Shirabad, J. and Menzies, T.J. (2005) The
PROMISE Repository of Software Engineering
Databases., School of Information Technology and
Engineering, University of Ottawa, Canada.
Available:
http://promise.site.uottawa.ca/SERepository
[31] Yogesh Singh, Arvinder Kaur & Malhotra,
Ruchika (2010). Empirical validation of object-
oriented metrics for predicting fault proneness
models. Software Quality Journal, 18(1), 3-35.
[32] http://promise.site.uottawa.ca/SERepository/datasets/kc1.arff
[33] Akhilendra Singh Chouhan, Sanjay Kumar Dubey.
Analytical Review of Fault Proneness for Object
Oriented Systems. IJSER, ISSN 2229-5518, vol.3,
Issue 12, Dec 2012
[34] Hopkins, W.G. (2006). A New View of Statistics, Sportscience. Available online: http://sarpresults.nasagov/viewresearch
[35] R. Subramanyam and M.S. Krishnan, "Empirical Analysis of CK Metrics for Object-Oriented Design Complexity: Implications for Software Defects", IEEE Transactions on Software Engineering, 29(4): 297-310, 2003.