Selecting an appropriate decision is a significant issue that can lead to more satisfactory results. The difficulty arises when there are several alternatives and all of them have the same chance of being selected. It is therefore important to establish priorities among these alternatives in order to choose the most appropriate one. The analytic hierarchy process (AHP) is capable of structuring decision problems and deriving mathematically determined judgments built on knowledge and experience. This suggests that AHP should prove useful in agile software development, where complex decisions occur routinely. This paper presents an example of using the AHP to rank refactoring techniques based on external code quality attributes. XP encourages applying refactoring where the code smells bad. However, refactoring may consume considerable time and effort. Therefore, to maximize the benefits of refactoring in less time and with less effort, AHP has been applied. It was found that ranking the refactoring techniques helped the XP team to focus on the techniques that improve the code and the XP development process in general.
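The AHP ranking described above rests on pairwise comparisons between alternatives. As a minimal sketch, the priority vector can be approximated with the row geometric mean method; the technique names and matrix values below are hypothetical illustrations, not taken from the paper:

```python
import math

# Hypothetical pairwise comparison matrix for three refactoring techniques
# (Extract Method, Move Method, Rename), judged on one quality attribute.
# A[i][j] = how strongly technique i is preferred over technique j
# on Saaty's 1-9 scale; A[j][i] is the reciprocal.
A = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(matrix):
    """Approximate the AHP priority vector via row geometric means."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

weights = ahp_weights(A)  # one weight per technique, summing to 1
```

The resulting weights give the ranking of the techniques; in a full AHP application this would be preceded by a consistency check on the judgment matrix.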
SCHEDULING AND INSPECTION PLANNING IN SOFTWARE DEVELOPMENT PROJECTS USING MUL... (ijseajournal)
This document presents a multi-objective hyper-heuristic evolutionary algorithm (MHypEA) for scheduling and inspection planning in software development projects. The MHypEA incorporates twelve low-level heuristics based on selection, crossover, and mutation operations of evolutionary algorithms. The algorithm selects heuristics based on reinforcement learning with adaptive weights. An experiment on randomly generated test problems found that MHypEA explores and exploits the search space thoroughly to find high quality solutions, achieving better results than other multi-objective evolutionary algorithms in half the time.
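The reinforcement-learning selection of low-level heuristics can be sketched as adaptive weights over a heuristic pool. MHypEA's actual credit-assignment scheme is not reproduced here; the heuristic names, reward values, and update rule below are simple illustrative assumptions:

```python
import random

# Sketch: pick low-level heuristics by roulette wheel over adaptive weights,
# rewarding the ones that improved the current solution. All names and
# constants here are illustrative, not MHypEA's actual parameters.
weights = {"selection_h1": 1.0, "crossover_h1": 1.0, "mutation_h1": 1.0}

def choose_heuristic(weights, rng):
    """Roulette-wheel choice proportional to current weights."""
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for name, w in weights.items():
        acc += w
        if r <= acc:
            return name
    return name  # numerical fallback for rounding at the upper edge

def update_weight(weights, name, improved):
    """Reward a heuristic that improved the solution, penalize one that did not."""
    delta = 1.0 if improved else -0.5
    weights[name] = max(0.1, weights[name] + delta)  # keep every heuristic selectable

rng = random.Random(42)
picked = choose_heuristic(weights, rng)
update_weight(weights, picked, improved=True)
```

Keeping a floor on each weight ensures no heuristic is ever starved, which is a common choice in hyper-heuristic selection.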
Software quality is an important issue in the development of successful software applications. Many methods have been applied to improve software quality, and refactoring is one of them. However, the effect of refactoring on the full range of software quality attributes is ambiguous.
The goal of this paper is to find out the effect of various refactoring methods on quality attributes and to classify them based on their measurable effect on a particular software quality attribute. The paper focuses on studying the Reusability, Complexity, Maintainability, Testability, Adaptability, Understandability, Fault Proneness, Stability and Completeness attributes of software. This, in turn, will assist developers in determining whether to apply a certain refactoring method to improve a desired quality attribute.
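A classification like the one proposed can be represented as a simple lookup from refactoring method to its measured effect per attribute. The methods and effect values below are hypothetical placeholders, not the paper's results:

```python
# Hypothetical effect table: +1 = improves the attribute, -1 = degrades it,
# 0 = no measurable effect. Values are illustrative, not the paper's data.
EFFECTS = {
    "Extract Method": {"Reusability": +1, "Complexity": +1, "Testability": +1},
    "Inline Method": {"Reusability": -1, "Understandability": 0},
    "Move Method": {"Maintainability": +1, "Complexity": 0},
}

def methods_improving(attribute, effects=EFFECTS):
    """Return the refactoring methods with a positive effect on `attribute`."""
    return sorted(m for m, e in effects.items() if e.get(attribute, 0) > 0)
```

A developer wanting to improve, say, Reusability would then query the table for the methods classified as improving that attribute.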
This document provides an overview of software testing techniques and best practices covered in a course on the topic. It discusses the purpose of software testing, including verification, error detection, and validation. It then surveys common software testing methodologies like white box testing, black box testing, and unit testing. The document also includes two case studies, one on test automation and one on testing an intranet system. Finally, it provides a template for a software test plan and discusses several papers on software testing methods and techniques.
Application of Genetic Algorithm in Software Engineering: A Review (IRJESJOURNAL)
Abstract. Software engineering is a comparatively new and constantly changing field. The big challenge of meeting strict project schedules with high-quality software requires that the field of software engineering be automated to a large extent and that human intervention be minimized to an optimum level. To achieve this goal, researchers have explored the potential of machine learning approaches, since they are adaptable and have learning ability. In this paper, we look at how genetic algorithms (GA) can be used to build tools for software development and maintenance tasks.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM... (cscpconf)
Computing software complexity in the requirement analysis phase of the software development life cycle (SDLC) would be of enormous benefit for estimating the development and testing effort required for yet-to-be-developed software. A relationship between source code and the difficulty of developing it is also examined, in order to estimate the complexity of the proposed software for cost estimation, manpower build-up, and code and developer evaluation. This paper therefore presents a systematic and integrated approach for the estimation of software development and testing effort on the basis of improved requirement based complexity (IRBC) of the proposed software. The IRBC measure, obtained from the software requirement specification (SRS) of the proposed software, serves as the basis for estimating these development activities, enabling developers and practitioners to predict critical information about the intricacies of software development. For validation purposes, the proposed measures are compared with various established and prevalent practices proposed in the past. The results obtained validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, early-alarming, and compare well with previously proposed measures.
Testability is a core quality assurance feature that combines fault prevention and fault detection. Many assessment techniques and quantification methods have evolved for software testability prediction, which identify testability weaknesses or factors in order to help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at the various phases of the object-oriented software development life cycle. The aim is to find the metrics suite best suited to improving software quality through testability support. The ultimate objective is to lay the groundwork for reducing testing effort by improving software testability and its assessment, using well-planned guidelines for object-oriented software development with the help of suitable metrics.
Presented on 4 Dec 2014
@Jeju, South Korea
Title: An efficient method for assessing the impact of refactoring candidates on maintainability based on matrix computation
COMPARATIVE STUDY OF SOFTWARE ESTIMATION TECHNIQUES (ijseajournal)
Many information technology firms, among other organizations, have been working on how to estimate resources such as funding during software development. Software development life cycles involve many activities and skills, risks must be avoided, and the best software estimation technique should be employed. In this research, a comparative study was therefore conducted that considers the accuracy, usage, and suitability of existing methods; it will be useful to project managers and consultants throughout the software project development process. Both algorithmic and non-algorithmic techniques are covered: model, composite, and regression techniques underlie COCOMO, COCOMO II, SLIM, and linear multiple regression respectively, while expertise-based and rule-based approaches are applied in non-algorithmic methods. However, these techniques need further advancement to reduce the errors experienced during the software development process. This paper therefore proposes, in relation to software estimation techniques, a model that can be helpful to information technology firms, researchers, and other organizations that use information technology in processes such as budgeting and decision-making.
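For context, the algorithmic models named above share the same general shape: effort grows as a power of size. The basic COCOMO effort equation, effort = a * KLOC^b person-months with mode-dependent constants, can be sketched as follows (the constants are Boehm's classic basic-COCOMO values; the example project size is hypothetical):

```python
# Basic COCOMO: effort (person-months) = a * KLOC ** b, where the
# constants (a, b) depend on the project mode (Boehm's classic values).
COCOMO_MODES = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def basic_cocomo_effort(kloc, mode="organic"):
    """Estimated effort in person-months for a project of `kloc` thousand lines."""
    a, b = COCOMO_MODES[mode]
    return a * kloc ** b

effort = basic_cocomo_effort(32, "organic")  # hypothetical 32 KLOC project
```

The super-linear exponent is what makes early size and complexity estimates (as in the requirement-based approaches above) so valuable: errors in size compound in the effort estimate.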
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in Science and Technology and to motivate researchers in challenging areas of Science and Technology.
A METHOD FOR PRIORITIZING QUALITATIVE SCENARIOS IN EVALUATING ENTERPRISE ARCH... (ijcsit)
In the field of enterprise architecture (EA), qualitative scenarios are used to better understand quality characteristics. In order to reduce implementation cost, scenarios are prioritized so that effort can focus on the higher-priority, more important ones. There are different methods for evaluating enterprise architecture, including the Architecture Trade-off Analysis Method (ATAM), and prioritizing qualitative scenarios is one of the main phases of this method. Since none of the recent studies meets the requirements of prioritizing qualitative scenarios, both considering proper prioritization criteria and reaching an appropriate prioritization speed, the non-dominated sorting genetic algorithm II (NSGA-II) is used in this study. More criteria were considered in the proposed algorithm than in previous research; these sets of structures together act as genes and, in the form of a cell array, constitute a chromosome. The proposed algorithm is evaluated in two case studies, one in enterprise architecture and one in software architecture. The results showed better accuracy and more appropriate speed compared to previous works, including genetic algorithms.
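NSGA-II's core step, extracting the non-dominated front, can be sketched independently of the scenario-prioritization encoding. A minimal version for minimization objectives follows; the sample (cost, risk) points are hypothetical:

```python
# Pareto dominance for minimization: a dominates b if a is no worse in
# every objective and strictly better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the points not dominated by any other point (the first Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (cost, risk) pairs for four candidate scenario orderings.
candidates = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = non_dominated_front(candidates)  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

NSGA-II applies this sorting repeatedly to rank the population into successive fronts before selection; the quadratic scan shown here is the simplest correct version.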
Evolutionary Search Techniques with Strong Heuristics for Multi-Objective Fea... (Abdel Salam Sayyad)
This document summarizes Abdel Salam Sayyad's doctoral defense that addressed using evolutionary search techniques with strong heuristics for multi-objective feature selection in software product lines. The defense outlined modeling feature models, analyzing them automatically, formulating the multi-objective feature selection problem, and using multi-objective evolutionary algorithms. Results demonstrated scalability by increasing objectives, tuning parameters, and using heuristics like "PUSH" and "PULL" as well as population seeding. The defense concluded by discussing future work.
Research Activities: past, present, and future (Marco Torchiano)
Public seminar for Professor Position at Politecnico di Torino
- past research products
- current research activities
- future outlook
October 19, 2018
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between the existing techniques related to software quality engineering and thereby designing a framework for rating software quality.
Conceptualization of a Domain Specific Simulator for Requirements Prioritization (researchinventy)
This paper conceptualizes a domain-specific simulator for requirements prioritization; it aims at helping to identify appropriate prioritization strategies for the project at hand. The possible scenarios are difficult to analyze, as they involve different variables, such as the selection of stakeholders (their availability, expertise, and importance), prioritization criteria, and prioritization methods. To demonstrate the feasibility of the proposed simulator elements, a well-established general-purpose simulator called Arena was used. The results demonstrate that it is possible to build the suggested scenarios in order to study and draw inferences about the prioritization strategies.
EXTRACTING THE MINIMIZED TEST SUITE FOR REVISED SIMULINK/STATEFLOW MODEL (ijaia)
Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that as the model evolves, test suites tend to grow in size, making it too costly to execute entire suites. This paper proposes a practical approach to reducing the size of test suites for a modified Simulink/Stateflow (SL/SF) model, a notation popular for modeling software behavior in many industries, such as automobile manufacturing. The model describing a system is frequently modified until it is fixed. The proposed technique extracts a minimal-sized test suite, in terms of test coverage, by taking into account both the modified and the affected portions of the revised SL/SF model. Two real models for ECUs deployed in a commercial car are used in an empirical study.
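Keeping coverage while shrinking a suite is, at its core, a set-cover problem. A common greedy sketch is shown below; the coverage sets are hypothetical, and the paper's change-impact analysis of the revised model is not reproduced:

```python
# Greedy test-suite minimization: repeatedly pick the test covering the
# most still-uncovered items until everything coverable is covered.
def minimize_suite(coverage):
    """coverage: dict mapping test name -> set of covered model elements."""
    remaining = set().union(*coverage.values())
    chosen = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break  # nothing left is coverable
        chosen.append(best)
        remaining -= coverage[best]
    return chosen

# Hypothetical transition-coverage sets for four test cases.
suite = {
    "t1": {"a", "b"},
    "t2": {"b", "c", "d"},
    "t3": {"d"},
    "t4": {"a", "e"},
}
minimized = minimize_suite(suite)  # two tests suffice to keep full coverage
```

Greedy set cover is not guaranteed optimal, but it is the standard baseline for coverage-preserving suite reduction.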
This document discusses the creation and use of a process performance model (PPM) at a Chinese software supplier. It first describes the organization as specializing in security software for banks with 80 employees. It then breaks down the metrics used to measure business objectives and connects processes to specific indicators. The document walks through creating the PPM using cause-and-effect diagrams and relationships between attributes. Finally, it shows how the PPM is used to predict values for defect density in various phases using computed formulas.
Software requirements prioritization is a recognized practice in requirements engineering (RE) that facilitates the management of stakeholders' subjective views as specified in their requirements listing. Since the RE process is collaborative in nature, its intensiveness from both knowledge and human perspectives opens up the problem of decision making on requirements, which can be facilitated by requirements prioritization. However, due to the large volume of requirements elicited when considering an ultra-large-scale system, the prioritization techniques proposed so far suffer setbacks in terms of efficiency, effectiveness, and scalability. This paper employed a more efficient ranking algorithm for requirements prioritization, motivated by the limitations of existing techniques. The major objective is to provide a well-defined, analysis-based ranking procedure suitable for prioritizing software requirements.
An empirical evaluation of the proposed technique was made using a typical scenario of the Pharmacy Information System at the Obafemi Awolowo University Teaching Hospital Complex (OAUTHC) as a case study. The results showed the computation of the positive ideal solution (PIS) and negative ideal solution (NIS), as well as the closeness coefficient (CC), for 4 requirements across 3 stakeholders. The CC yielded the final ranks of the requirements: R4, with 2.09 points, is the most valued requirement, while R1 and R2, with CCs of 1.37 and 1.05, were next in order of priority. The CC provides the medium through which multiple-criteria decision-making problems can be handled, so as to determine the order of priority of the available alternatives. The paper conveys encouraging evidence for the software engineering community that the technique is capable of resolving redundantly specified requirements, thereby providing the potential to facilitate effective and efficient decision making in handling the differences amongst prioritized requirements. Thus, prioritizing software requirements with the recommended ranking procedure during software development is crucial in order to reduce development cost.
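The PIS/NIS/CC terminology above matches the TOPSIS family of methods, where an alternative's closeness coefficient is d⁻ / (d⁺ + d⁻): its distance from the negative ideal over the sum of its distances from both ideals. A minimal sketch for benefit criteria follows; the decision matrix and weights are hypothetical, not the case-study data:

```python
import math

def topsis_cc(matrix, weights):
    """Closeness coefficients for a decision matrix of benefit criteria.

    matrix: rows = alternatives (requirements), columns = criteria
    (higher scores are better); weights: one weight per criterion.
    """
    n_crit = len(weights)
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    pis = [max(col) for col in zip(*v)]  # positive ideal solution
    nis = [min(col) for col in zip(*v)]  # negative ideal solution
    ccs = []
    for row in v:
        d_pos = math.dist(row, pis)  # distance to the positive ideal
        d_neg = math.dist(row, nis)  # distance to the negative ideal
        ccs.append(d_neg / (d_pos + d_neg))
    return ccs

# Hypothetical scores of 3 requirements on 2 benefit criteria.
ccs = topsis_cc([[9, 7], [7, 5], [3, 8]], weights=[0.5, 0.5])
```

Sorting alternatives by descending CC gives the final priority order, which is how the ranks of R1 through R4 in the study would be derived.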
Using Multi-Criteria Decision and knowledge representation methodologies for ... (Nit Celesc)
This document describes a methodology used to evaluate innovation projects for an electric power company in Brazil. It uses multi-criteria decision making (MCDM) and the analytic hierarchy process (AHP) to establish criteria and weights for evaluating projects. Key criteria included originality, applicability, relevance, and cost reasonableness. The methodology decomposed the evaluation hierarchy and used pairwise comparisons to determine relative importance of criteria. This allowed establishing priority scores for projects and simulating different evaluation scenarios. The results provided an effective tool to support the company's evaluation and selection of research and development projects.
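Pairwise comparisons like those described are only trustworthy if the judgments are mutually consistent. Saaty's consistency ratio, CR = CI / RI with CI = (λmax − n)/(n − 1), can be sketched as follows; the judgment matrix is hypothetical, while the RI values are Saaty's standard random indices:

```python
import math

# Consistency check for an AHP pairwise comparison matrix.
# CI = (lambda_max - n) / (n - 1); CR = CI / RI; CR < 0.10 is acceptable.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random indices

def consistency_ratio(matrix):
    n = len(matrix)
    # Priority vector via normalized row geometric means.
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    w = [g / sum(gm) for g in gm]
    # Estimate lambda_max as the mean of (A w)_i / w_i.
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / RI[n]

# Hypothetical 3x3 judgment matrix; it is perfectly consistent (2 * 2 = 4),
# so its consistency ratio is zero.
cr = consistency_ratio([
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
])
```

In practice a CR above 0.10 signals that the evaluators' pairwise judgments should be revisited before the priority scores are used.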
Measurements & milestones for monitoring and controlling (Dhiraj Singh)
The document discusses various metrics that can be used to monitor testing progress, evaluate quality, and determine when to stop testing. It recommends defining meaningful metrics and milestones in the test plan. Metrics should track progress towards milestones, tester productivity, defects found, requirements coverage, and budgets. Graphs of trends can show phases of the monitoring process over time. Regular meetings allow managers to evaluate if testing is on schedule and on budget.
Distributed Software Development Process, Initiatives and Key Factors: A Syst... (zillesubhan)
The geographically distributed software development (GSD) process differs from the collocated software development (CSD) process in various technical aspects. It has been empirically shown that renowned process improvement initiatives applicable to CSD are not very effective for GSD. The objective of this research is to review the existing literature (both academic and industrial) to identify the initiatives and key factors that play a key role in the improvement and maturity of a GSD process. To achieve this goal we planned a Systematic Literature Review (SLR) following a standard protocol. Three highly respected sources were searched for relevant literature, resulting in a large number of TOIs (Titles of Interest). An inter-author custom protocol was outlined and followed to shortlist the most relevant articles for review, and data was extracted from this final set of selected articles. We performed both qualitative and quantitative analysis of the extracted data to obtain the results. The concluded results identify several initiatives and key factors involved in GSD and answer each research question posed by the SLR.
QUALITY METRICS OF TEST SUITES IN TEST-DRIVEN DESIGNED APPLICATIONS (ijseajournal)
New techniques for writing and developing software have evolved in recent years. One is Test-Driven Development (TDD), in which tests are written before code: no code should be written without first having a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be high.
In this work, we analyze applications written using TDD and traditional techniques. Specifically, we assess the quality of the associated test suites based on two quality metrics: 1) a structure-based criterion, and 2) a fault-based criterion. We find that test suites with high branch coverage also have high mutation scores, especially in the case of TDD applications. We found that Test-Driven Development is an effective approach that improves the quality of the test suite, both covering more of the source code and revealing more faults.
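The two criteria can be made concrete on a toy example: branch coverage is the structure-based metric, while the mutation score (killed mutants divided by total mutants) is the fault-based one. The function, mutants, and tests below are illustrative assumptions, not the paper's subjects:

```python
# Original function under test: returns the larger of two numbers.
def biggest(a, b):
    return a if a > b else b

# Hand-written mutants simulating small faults.
mutants = [
    lambda a, b: a if a < b else b,   # negated comparison (returns the smaller)
    lambda a, b: a,                   # always returns the first argument
    lambda a, b: a if a >= b else b,  # boundary change (equivalent for max)
]

# Test suite as (arguments, expected result) pairs.
tests = [((3, 1), 3), ((1, 3), 3), ((2, 2), 2)]

def mutation_score(mutants, tests):
    """Fraction of mutants 'killed' (detected) by at least one test."""
    killed = sum(
        any(m(*args) != expected for args, expected in tests)
        for m in mutants
    )
    return killed / len(mutants)

score = mutation_score(mutants, tests)  # the boundary mutant is equivalent, so 2/3
```

The third mutant illustrates why mutation scores rarely reach 1.0: some mutants are semantically equivalent to the original and no test can kill them.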
Configuration Navigation Analysis Model for Regression Test Case Prioritization (ijsrd.com)
Regression testing has been receiving increasing attention nowadays. Numerous regression testing strategies have been proposed. Most of them take into account various metrics like cost as well as the ability to find faults quickly thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
Prioritizing Test Cases for Regression Testing A Model Based Approach (IJTET Journal)
The document summarizes a model-based approach to prioritizing regression test cases. It involves generating test cases from UML models, prioritizing them based on the number of states and transitions covered, and clustering them by severity using a dendrogram approach. This helps decrease the time and cost of regression testing by focusing testing efforts on the most important and affected areas first. The proposed approach constructs models from requirements, identifies states, prioritizes flows, generates test cases, and prioritizes the test cases based on severity to improve regression testing efficiency.
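Ordering test cases by the number of model states and transitions they cover, as the approach above does before severity clustering, can be sketched simply; the model elements and test cases below are hypothetical:

```python
# Prioritize test cases by how many model states/transitions each covers.
def prioritize(test_coverage):
    """Return test names ordered by descending model coverage (ties by name)."""
    return sorted(test_coverage, key=lambda t: (-len(test_coverage[t]), t))

# Hypothetical coverage: test -> set of UML states and transitions exercised.
coverage = {
    "tc1": {"S1", "S2", "S1->S2"},
    "tc2": {"S1", "S2", "S3", "S1->S2", "S2->S3"},
    "tc3": {"S3", "S3->S3"},
}
order = prioritize(coverage)  # tc2 covers the most elements, so it runs first
```

Running the broadest tests first maximizes early model coverage, which is the rationale for coverage-count prioritization before any severity-based refinement.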
Comparative Analysis of Model Based Testing and Formal Based Testing - A Review (IJERA Editor)
Software testing is one of the most important steps in the process of software development. Testing provides a glimpse of the proper functioning of the system under different conditions. This makes choosing the best testing method a necessary step for a software system to be successful and accepted by a large number of people, as the market is very competitive these days and only error-free systems survive for a long period of time. This paper gives a comparative analysis of two major methods of testing, formal-specifications-based software testing and model-based software testing, which are widely used in the software development process. It brings out how these two methods can provide reliability to a software system, including the major uses, advantages, and disadvantages of both. It gives a detailed comparative analysis of the two methods, and it also brings out the situations where formal-specifications-based testing is more effective and efficient, while model-based testing is effective in others. This comparative analysis will help in deciding on the better testing technique, depending on the situation and requirements of the software, for the software to be successful in the long run.
This document presents a recommendation system for design patterns in software development. The system uses a knowledge base of design patterns and a Goal, Question, Metric methodology to ask users a series of weighted questions to determine the most suitable design pattern for their needs. An experiment was conducted with 8 users to evaluate the system. The system was able to correctly recommend a design pattern 50% of the time and was found to require fewer questions than similar existing systems. Future work will focus on expanding the knowledge base and further evaluating the system.
This document contains a concert score for the song "Silly Me" arranged for orchestra. It lists the instrumentation and provides musical notation for various sections of the orchestra. The score directs the parts for instruments such as flute, clarinet, trumpet, trombone, and others. It also provides performance markings such as dynamics, articulations, and techniques for playing specified sections of the song.
Learn why use selenium with 3 million dollar bugs! (Edureka!)
The document discusses million dollar bugs and the importance of rigorous testing to prevent them. It provides examples of three real million dollar bugs, including a software glitch that caused a recall of 1.9 million Toyota Prius cars costing over $2 billion. Best practices for avoiding bugs include following standards, using libraries, refactoring code, test-driven development, and automating testing. While complete elimination of bugs may not be possible, adhering to testing best practices can significantly reduce the chances of million dollar bugs occurring.
Comparative Computational Modelling of CO2 Gas Emissions for Three Wheel Vehi... (IJRES Journal)
The quest for a greener environment and energy conservation has led a number of research studies to increase fuel economy and reduce emissions in the design of vehicles. This study illustrates how a vehicle's body shape affects fuel consumption and gas emissions. Solid models for two different tricycles were made and simulated using SolidWorks Flow Xpress, and mathematical models were applied to compare the rate of fuel consumption and gas emission between the simulated models. The result shows that NASENI TC1 consumes less fuel and invariably emits less CO2 when compared with RFM 1.
COMPARATIVE STUDY OF SOFTWARE ESTIMATION TECHNIQUES ijseajournal
Many information technology firms among other organizations have been working on how to perform estimation of the sources such as fund and other resources during software development processes. Software development life cycles require lot of activities and skills to avoid risks and the best software estimation technique is supposed to be employed. Therefore, in this research, a comparative study was conducted, that consider the accuracy, usage, and suitability of existing methods. It will be suitable for the project managers and project consultants during the whole software project development process. In this project technique such as linear regression; both algorithmic and non-algorithmic are applied. Model, composite and regression techniques are used to derive COCOMO, COCOMO II, SLIM and linear multiple respectively. Moreover, expertise-based and linear-based rules are applied in non-algorithm methods. However, the technique needs some advancement to reduce the errors that are experienced during the software development process. Therefore, this paper in relation to software estimation techniques has proposed a model that can be helpful to the information technology firms, researchers and other firms that use information technology in the processes such as budgeting and decision-making processes.
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in science and technology, and to motivate researchers in challenging areas of the sciences and technology.
A METHOD FOR PRIORITIZING QUALITATIVE SCENARIOS IN EVALUATING ENTERPRISE ARCH...ijcsit
In the field of enterprise architecture (EA), qualitative scenarios are used to better understand quality attributes. To reduce implementation cost, scenarios are prioritized so that effort can focus on the higher-priority, more important ones. There are different methods to evaluate enterprise architecture, including the Architecture Trade-off Analysis Method (ATAM); prioritizing qualitative scenarios is one of the main phases of this method. Since none of the recent studies meets the requirements of prioritizing qualitative scenarios, namely considering proper prioritization criteria while reaching an appropriate speed, the non-dominated sorting genetic algorithm (NSGA-II) is used in this study. Beyond the criteria of previous research, additional criteria were considered in the proposed algorithm; these sets of structures together form genes and, in the form of a cell array, constitute a chromosome. The proposed algorithm is evaluated in two case studies in the fields of enterprise architecture and software architecture. The results showed better accuracy and more appropriate speed compared to previous works, including genetic algorithms.
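At the core of NSGA-II is non-dominated sorting based on Pareto dominance. The following minimal Python sketch shows the dominance test and the extraction of the first non-dominated front; the scenario names and objective values are invented for illustration and are not from the paper.

```python
# Pareto-dominance test at the heart of NSGA-II-style non-dominated sorting.
# Both objectives are assumed to be maximized; scenario names and scores
# are invented for illustration.
def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

scenarios = {"S1": (3, 5), "S2": (4, 4), "S3": (2, 2), "S4": (4, 5)}

# First non-dominated front: scenarios that no other scenario dominates.
front = [
    name for name, obj in scenarios.items()
    if not any(dominates(other, obj)
               for o, other in scenarios.items() if o != name)
]
print(front)
```

In a full NSGA-II, successive fronts are peeled off the same way and crowding distance breaks ties within a front.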
Evolutionary Search Techniques with Strong Heuristics for Multi-Objective Fea...Abdel Salam Sayyad
This document summarizes Abdel Salam Sayyad's doctoral defense that addressed using evolutionary search techniques with strong heuristics for multi-objective feature selection in software product lines. The defense outlined modeling feature models, analyzing them automatically, formulating the multi-objective feature selection problem, and using multi-objective evolutionary algorithms. Results demonstrated scalability by increasing objectives, tuning parameters, and using heuristics like "PUSH" and "PULL" as well as population seeding. The defense concluded by discussing future work.
Research Activities: past, present, and future.Marco Torchiano
Public seminar for Professor Position at Politecnico di Torino
- past research products
- current research activities
- future outlook
October 19, 2018
Conceptualization of a Domain Specific Simulator for Requirements Prioritizationresearchinventy
This paper conceptualizes a domain-specific simulator for requirements prioritization; it aims to help identify appropriate prioritization strategies for the project at hand. The possible existing scenarios are difficult to analyze; they involve different variables, like the selection of stakeholders (their availability, expertise, and importance), prioritization criteria, and prioritization methods. To demonstrate the feasibility of the proposed simulator elements, a well-established general-purpose simulator, called Arena, was used. The results demonstrate that it is possible to build the suggested scenarios in order to study and make inferences about the prioritization strategies.
EXTRACTING THE MINIMIZED TEST SUITE FOR REVISED SIMULINK/STATEFLOW MODELijaia
Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that as the model evolves, test suites tend to grow in size, making it too costly to execute entire test suites. This paper proposes a practical approach to reduce the size of test suites for a modified Simulink/Stateflow (SL/SF) model, which is popularly used for modeling software behavior in many industries, such as automobile manufacturing. The model describing a system is frequently modified until it is fixed. The proposed technique extracts a minimized test suite, in terms of test coverage, by taking into account both the modified and the affected portions of the revised SL/SF model. Two real models for ECUs deployed in a commercial car are used in an empirical study.
This document discusses the creation and use of a process performance model (PPM) at a Chinese software supplier. It first describes the organization as specializing in security software for banks with 80 employees. It then breaks down the metrics used to measure business objectives and connects processes to specific indicators. The document walks through creating the PPM using cause-and-effect diagrams and relationships between attributes. Finally, it shows how the PPM is used to predict values for defect density in various phases using computed formulas.
Software requirements prioritization is a recognized practice in requirements engineering (RE) that facilitates the management of stakeholders' subjective views as specified in their requirements listing. Since the RE process is naturally collaborative, its intensiveness from both knowledge and human perspectives opens up the problem of decision making on requirements, which can be facilitated by requirements prioritization. However, due to the large volume of requirements elicited when considering an ultra-large-scale system, existing prioritization techniques suffer setbacks in terms of efficiency, effectiveness, and scalability. This paper employs a more efficient ranking algorithm for requirements prioritization that addresses the limitations of existing techniques. The major objective is to provide a well-defined ranking procedure, grounded in analysis, suitable for prioritizing software requirements. An empirical evaluation of the proposed technique was made using a typical scenario of the Pharmacy Information System at the Obafemi Awolowo University Teaching Hospital Complex (OAUTHC) as a case study. The results showed the computation of the positive ideal solution (PIS) and negative ideal solution (NIS), as well as the closeness coefficient (CC), for 4 requirements across 3 stakeholders. The CC yielded the final ranks of the requirements: R4, with 2.09 points, is the most valued requirement, while R1 and R2, with CCs of 1.37 and 1.05, were next in the order of priority. The CC provides the medium through which problems of multiple-criteria decision making can be handled, so as to determine the order of priority of the available alternatives. The paper conveys encouraging evidence for the software engineering community that the approach is capable of resolving redundantly specified requirements, thereby providing the potential to facilitate effective and efficient decision making in handling the differences among prioritized requirements. Thus, prioritizing software requirements with the recommended ranking procedure during software development is crucial in order to reduce development cost.
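The PIS/NIS/closeness-coefficient computation mentioned above follows the standard TOPSIS procedure. A minimal Python sketch with an invented 4-requirement, 3-stakeholder score matrix (the numbers and weights are illustrative, not the paper's data):

```python
import math

# Invented scores: 4 requirements (rows) rated by 3 stakeholders (columns),
# all treated as benefit criteria; w holds assumed stakeholder weights.
X = [
    [7.0, 8.0, 6.0],  # R1
    [5.0, 6.0, 7.0],  # R2
    [4.0, 5.0, 5.0],  # R3
    [9.0, 9.0, 8.0],  # R4
]
w = [0.5, 0.3, 0.2]
m, n = len(X), len(w)

# 1. Vector-normalize each column and apply the weights.
norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
V = [[w[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]

# 2. Positive and negative ideal solutions (column-wise best and worst).
pis = [max(V[i][j] for i in range(m)) for j in range(n)]
nis = [min(V[i][j] for i in range(m)) for j in range(n)]

# 3. Separation measures and the closeness coefficient CC in [0, 1].
d_pos = [math.dist(V[i], pis) for i in range(m)]
d_neg = [math.dist(V[i], nis) for i in range(m)]
cc = [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]

ranking = sorted(range(m), key=lambda i: -cc[i])  # indices, best first
print([round(c, 3) for c in cc])
print([f"R{i + 1}" for i in ranking])
```

Note that the standard closeness coefficient lies in [0, 1]; the CC values above 1 reported in the abstract suggest a scaled variant, so this sketch illustrates the procedure rather than reproducing the paper's exact numbers.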
Using Multi-Criteria Decision and knowledge representation methodologies for ...Nit Celesc
This document describes a methodology used to evaluate innovation projects for an electric power company in Brazil. It uses multi-criteria decision making (MCDM) and the analytic hierarchy process (AHP) to establish criteria and weights for evaluating projects. Key criteria included originality, applicability, relevance, and cost reasonableness. The methodology decomposed the evaluation hierarchy and used pairwise comparisons to determine relative importance of criteria. This allowed establishing priority scores for projects and simulating different evaluation scenarios. The results provided an effective tool to support the company's evaluation and selection of research and development projects.
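The pairwise-comparison step described above has a standard numerical core: derive a priority vector from the comparison matrix and check its consistency. A minimal Python sketch using the common geometric-mean approximation follows; the 3x3 matrix and criterion interpretation are invented for illustration.

```python
import math

# Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale;
# rows/columns could correspond to criteria such as originality,
# applicability, and cost reasonableness (illustrative only).
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
n = len(A)

# Geometric-mean approximation of the principal eigenvector.
gm = [math.prod(row) ** (1.0 / n) for row in A]
total = sum(gm)
weights = [g / total for g in gm]

# Consistency check: lambda_max, consistency index, consistency ratio
# (random index RI = 0.58 for n = 3).
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lam_max = sum(Aw[i] / weights[i] for i in range(n)) / n
cr = ((lam_max - n) / (n - 1)) / 0.58

print([round(x, 3) for x in weights])  # priority vector, sums to 1
print(round(cr, 3))                    # CR < 0.1: acceptably consistent
```

A CR below 0.1 is the usual threshold for accepting the judgments; larger values suggest revisiting the pairwise comparisons.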
Measurements &milestones for monitoring and controllingDhiraj Singh
The document discusses various metrics that can be used to monitor testing progress, evaluate quality, and determine when to stop testing. It recommends defining meaningful metrics and milestones in the test plan. Metrics should track progress towards milestones, tester productivity, defects found, requirements coverage, and budgets. Graphs of trends can show phases of the monitoring process over time. Regular meetings allow managers to evaluate if testing is on schedule and on budget.
Distributed Software Development Process, Initiatives and Key Factors: A Syst...zillesubhan
The geographically distributed software development (GSD) process differs from the collocated software development (CSD) process in various technical aspects. It has been empirically shown that renowned process improvement initiatives applicable to CSD are not very effective for GSD. The objective of this research is to review the existing literature (both academic and industrial) to identify initiatives and key factors which play a key role in the improvement and maturity of a GSD process. To achieve this goal, we planned a systematic literature review (SLR) following a standard protocol. Three highly respected sources were selected to search for the relevant literature, which resulted in a large number of titles of interest (TOIs). An inter-author custom protocol was outlined and followed to shortlist the most relevant articles for review, and data was extracted from this set of finally selected articles. We performed both qualitative and quantitative analysis of the extracted data to obtain the results. The concluded results identify several initiatives and key factors involved in GSD and answer each research question posed by the SLR.
QUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONSijseajournal
New techniques for writing and developing software have evolved in recent years. One is Test-Driven Development (TDD), in which tests are written before code: no code should be written without first having a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be high.
In this work, we analyze applications written using TDD and traditional techniques. Specifically, we assess the quality of the associated test suites based on two quality metrics: 1) a structure-based criterion, and 2) a fault-based criterion. We find that test suites with high branch test coverage also have high mutation scores, especially in the case of TDD applications. We conclude that Test-Driven Development is an effective approach that improves the quality of the test suite, covering more of the source code and revealing more faults.
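The two criteria above reduce to simple ratios. A tiny Python sketch with made-up counts (the numbers are illustrative, not the study's data):

```python
# The two test-suite quality criteria reduce to simple ratios;
# the counts below are made up for illustration.
def branch_coverage(covered: int, total: int) -> float:
    """Structure-based criterion: fraction of branches exercised by tests."""
    return covered / total

def mutation_score(killed: int, total: int, equivalent: int = 0) -> float:
    """Fault-based criterion: killed mutants over non-equivalent mutants."""
    return killed / (total - equivalent)

print(branch_coverage(45, 50))     # 0.9
print(mutation_score(80, 100, 4))  # ~0.833
```

The study's observation is essentially that, for TDD suites, a high value of the first ratio tends to come with a high value of the second.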
Configuration Navigation Analysis Model for Regression Test Case Prioritizationijsrd.com
Regression testing has been receiving increasing attention nowadays. Numerous regression testing strategies have been proposed. Most of them take into account various metrics like cost as well as the ability to find faults quickly thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
Prioritizing Test Cases for Regression Testing A Model Based ApproachIJTET Journal
The document summarizes a model-based approach to prioritizing regression test cases. It involves generating test cases from UML models, prioritizing them based on the number of states and transitions covered, and clustering them by severity using a dendrogram approach. This helps decrease the time and cost of regression testing by focusing testing efforts on the most important and affected areas first. The proposed approach constructs models from requirements, identifies states, prioritizes flows, generates test cases, and prioritizes the test cases based on severity to improve regression testing efficiency.
Comparative Analysis of Model Based Testing and Formal Based Testing - A ReviewIJERA Editor
Software testing is one of the most important steps in the process of software development. Testing provides a glimpse of the proper functioning of the system under different conditions. This makes choosing the best testing method a necessary step for a software system to be successful and accepted by a large number of people, as the market is highly competitive these days and only error-free systems can survive for a longer period of time. This paper gives a comparative analysis of two major methods of testing: formal specification based software testing and model based software testing, both of which are widely used in the software development process. It brings out how these two methods can provide reliability to a software system, including the major uses, advantages, and disadvantages of both. It also identifies the situations where formal specification based testing is more effective and efficient, and those where model based testing is more effective. This comparative analysis will help in deciding on a better testing technique, depending upon the situation and the requirements of the software, for the software to be successful in the long run.
This document presents a recommendation system for design patterns in software development. The system uses a knowledge base of design patterns and a Goal, Question, Metric methodology to ask users a series of weighted questions to determine the most suitable design pattern for their needs. An experiment was conducted with 8 users to evaluate the system. The system was able to correctly recommend a design pattern 50% of the time and was found to require fewer questions than similar existing systems. Future work will focus on expanding the knowledge base and further evaluating the system.
Learn why use selenium with 3 million dollar bugs!Edureka!
The document discusses million dollar bugs and the importance of rigorous testing to prevent them. It provides examples of three real million dollar bugs, including a software glitch that caused a recall of 1.9 million Toyota Prius cars costing over $2 billion. Best practices for avoiding bugs include following standards, using libraries, refactoring code, test-driven development, and automating testing. While complete elimination of bugs may not be possible, adhering to testing best practices can significantly reduce the chances of million dollar bugs occurring.
A Model To Compare The Degree Of Refactoring Opportunities Of Three Projects ...acijjournal
Refactoring is applied to software artifacts so as to improve their internal structure while preserving their external behavior. Refactoring is an uncertain process, and it is difficult to assign units of measurement to it. The amount of refactoring that can be applied to source code depends upon the skills of the developer. In this research, we have perceived refactoring as a quantified object on an ordinal scale of measurement. We have proposed a model for determining the degree of refactoring opportunities in given source code. The model is applied to three projects collected from a company. UML diagrams are drawn for each project. The values of the source-code metrics that are useful in determining code quality are calculated for each UML diagram of the projects. Based on the nominal values of the metrics, each relevant UML diagram is represented on an ordinal scale. A machine learning tool, Weka, is used to analyze the dataset, imported in the form of an ARFF file, produced from the three projects.
A MODEL TO COMPARE THE DEGREE OF REFACTORING OPPORTUNITIES OF THREE PROJECTS ...acijjournal
This document presents a model for quantifying and comparing the degree of refactoring opportunities in three software projects. The model involves drawing UML diagrams for the projects, calculating source code metrics for each UML diagram, representing the diagrams on an ordinal scale based on the metrics, and using a machine learning tool (Weka) to analyze the resulting dataset. The tool uses a Naive Bayesian classifier to generate a confusion matrix for each project, allowing evaluation of the model's performance at classifying refactoring opportunities as low, medium, or high. The model is applied to three projects from a company to test its ability to measure and compare refactoring opportunities in code.
AN IMPROVED REPOSITORY STRUCTURE TO IDENTIFY, SELECT AND INTEGRATE COMPONENTS...ijseajournal
An ultimate goal of software development is to build high quality products. The customers of software
industry always demand for high-quality products quickly and cost effectively. The component-based
development (CBD) is the most suitable methodology for the software companies to meet the demands of
target market. To opt CBD, the software development teams have to customize generic components that are
available in the market and it is very difficult for the development teams to choose the suitable components
from the millions of third party and commercial off the shelf (COTS) components. On the other hand, the
development of in-house repository is tedious and time consuming. In this paper, we propose an easy and
understandable repository structure to provide helpful information about stored components like how to
identify, select, retrieve and integrate components. The proposed repository will also provide previous
assessments of developers and end-users about the selected component. The proposed repository will help
the software companies by reducing the customization effort, improving the quality of developed software
and preventing integrating unfamiliar components.
Working software measures progress. Basically, the Agile method involves interleaving specification, implementation, design, and testing. A series of versions is developed with the involvement of, and evaluation by, the stakeholders in each version. Agile methods aim at reducing software process overheads (like documentation) and concentrate more on code than on design. Customer involvement, incremental delivery, freedom for developers to evolve new working methods, change management, and, last but not least, simplicity are the basic essence of Agile development. Agile methodologies are well suited for small as well as medium-sized projects.
Pareto-Optimal Search-Based Software Engineering (POSBSE): A Literature SurveyAbdel Salam Sayyad
Paper presented at the 2nd International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE’13), San Francisco, USA. May 2013.
Performance assessment and analysis of development and operations based autom...IJECEIAES
Development and operations (DevOps), an accretion of automation tools, efficiently reaches the goals of software development, testing, release, and delivery in terms of optimization, speed, and quality. Diverse sets of alternative automation tools exist for different phases of software development, for which DevOps adopts several selection criteria to choose the best tool. This paper presents a performance evaluation and analysis of automation tools employed in the coding phase of the DevOps culture. We have taken the most commonly used source code management tools, BitBucket, GitHub Actions, and GitLab, into consideration. The current work assesses and analyzes their performance based on DevOps evaluation criteria, which are categorized into different dimensions. For the purpose of performance evaluation, weightage and an overall score are assigned to these criteria based on existing renowned literature and an industrial case study of TekMentors Pvt Ltd. On the basis of the performance outcome, the tool with the highest overall score is identified as the best source code automation tool. This performance analysis will be of great benefit to young researchers and students in understanding the modus operandi of the DevOps culture, particularly source code automation tools. As future research, other dimensions of selection criteria can also be considered for evaluation purposes.
Today, in the era of the software industry, there is no perfect software framework available for analysis and software development. An enormous number of software development processes currently exist that can be implemented to stabilize the process of developing a software system, but no perfect system has yet been recognized that can help software developers opt for the best software development process. This paper presents the framework of a skillful system combined with a Likert scale. With the help of the Likert scale, we define a rule-based model, delegate a mass score to every process, and develop a tool named MuxSet that will help software developers select an appropriate development process, which may enhance the probability of system success.
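The rule-based, Likert-weighted selection described above can be sketched as a weighted sum. All process names, factors, weights, and ratings below are invented for illustration; they are not MuxSet's actual rules.

```python
# Hypothetical rule-based scoring in the spirit of the described approach:
# each candidate process gets Likert ratings (1-5) on weighted project
# factors, and the highest weighted sum wins. Everything here is invented.
factors = {"team_size": 0.3, "requirements_volatility": 0.4, "criticality": 0.3}

ratings = {
    "XP":        {"team_size": 5, "requirements_volatility": 5, "criticality": 3},
    "Scrum":     {"team_size": 4, "requirements_volatility": 5, "criticality": 3},
    "Waterfall": {"team_size": 2, "requirements_volatility": 1, "criticality": 5},
}

scores = {p: sum(factors[f] * r[f] for f in factors) for p, r in ratings.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```

With these invented numbers, the volatile-requirements weighting favors the agile processes over Waterfall.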
An Adjacent Analysis of the Parallel Programming Model Perspective: A SurveyIRJET Journal
This document provides an overview and analysis of parallel programming models. It begins with an abstract discussing the growing demand for parallel computing and challenges with existing parallel programming frameworks. It then reviews several relevant studies on parallel programming models and architectures. The document goes on to describe several key parallel programming models in more detail, including the Parallel Random Access Machine (PRAM) model, Unrestricted Message Passing (UMP) model, and Bulk Synchronous Parallel (BSP) model. It discusses aspects of each model like architecture, communication methods, and associated cost models. The overall goal is to compare benefits and limitations of different parallel programming models.
Software Refactoring Under Uncertainty: A Robust Multi-Objective ApproachWiem Mkaouer
This document describes a multi-objective robust optimization approach for software refactoring that accounts for uncertainty in code smell severity levels and class importance. The approach formulates refactoring as a multi-objective problem to find solutions that maximize both quality, by correcting code smells, and robustness to changes in severity levels and importance. An evaluation on six open source projects found the approach generates refactoring solutions comparable in quality to existing approaches but with significantly better robustness across different scenarios.
An Elite Model for COTS Component Selection ProcessIJEACS
This document presents a multi-agent approach for selecting commercial off-the-shelf (COTS) software components. It proposes a semi-automated model called ABCS that uses multiple agents to identify suitable candidate components based on requirements. The agents each handle sub-tasks like matching requirements, evaluating security, cost-benefit analysis, and integration testing. They coordinate to produce a weighted list of candidates from which experts can select the most suitable component. The model aims to reduce the time and improve the knowledge involved in COTS component selection.
This document analyzes and compares maintainability metrics for aspect-oriented software (AOS) and object-oriented software (OOS) using five projects. It discusses metrics like number of children, depth of inheritance tree, lack of cohesion of methods, weighted methods per class, and lines of code. The results show that for most metrics like NOC, DIT, LCOM, and WMC, the mean values are higher for OOS compared to AOS, indicating that AOS is generally more maintainable based on these metrics. LOC is also lower on average for AOS. The study concludes that an AOP version is more maintainable than an OOP version according to the chosen metrics.
7.significance of software layered technology on size of projects (2)EditorJST
The objective of software engineering is to build software projects within budget, on time, and with the required quality. Software engineering is a layered paradigm comprising process, methods, tools, and a quality focus as the bedrock for developing the product. Software firms build software projects of varying sizes, constrained by resources, time, and functional requirements. The impact of software engineering's layered technology may vary according to the size of the project during its development. Quantitative evaluation of layer significance by project size is a complex task because it involves a collective decision on multiple criteria. The Analytic Hierarchy Process (AHP) provides an effective quantitative approach for finding the significance of software layered technology on the size of projects. This paper presents estimations through a quantitative approach on real-time data collected from several software firms. These findings help achieve better project management with respect to cost, time, and resources while building a software project.
A Review Of Code Reviewer Recommendation Studies Challenges And Future Direc...Sheila Sinclair
This document summarizes a systematic literature review of 29 studies on code reviewer recommendation published between 2009 and 2020. The review aims to categorize the methods, datasets, evaluation approaches, reproducibility, validity threats, and future directions discussed in the studies. Most studies used heuristic or machine learning approaches, evaluated models on open source projects, and discussed refining models and additional experiments as future work. Common challenges included ensuring reproducibility and dealing with threats to model generalizability and dataset integrity.
Framework for a Software Quality Rating SystemKarthik Murali
Software Quality Engineering is a broad area that is concerned with various approaches to improve software quality. A quality model would prove successful when it suffices the requirements of the developers and the consumers. This research focuses on establishing semantics between the existing techniques related to software quality engineering and thereby designing a framework for rating software quality.
A Comparative Analysis Of Various Methodologies Of Agile Project Management V...Brittany Allen
This document provides a comparative analysis of project management methodologies, specifically comparing the Project Management Body of Knowledge (PMBOK) and various agile project management approaches. It first describes the key processes and knowledge areas of PMBOK. It then outlines some popular agile methodologies like Scrum, Extreme Programming (XP), and Feature Driven Development (FDD). The document aims to identify similarities and differences between the traditional PMBOK framework and more flexible agile approaches.
the application of machine lerning algorithm for SEEKiranKumar671235
The document discusses using machine learning algorithms to accurately estimate software development effort (SDE). It proposes using a modified Jaya optimization algorithm to select important features which are then input to an extreme gradient boosting model for SDE estimation. The key objectives are to develop a novel feature selection method, propose an ensemble model for accurate prediction, and improve prediction ability using deep learning stacking. It reviews related work applying metaheuristic and machine learning techniques for SDE estimation and outlines the proposed approach of using modified Jaya optimization and extreme gradient boosting.
Extreme Programming (XP) is an agile methodology widely used for software development. However, XP is not as effective for medium and large projects due to weaknesses like poor documentation and lack of risk awareness. This paper reviews several studies on adapting XP for different project sizes through practices like extended planning, architecture design, and risk management. Case studies show the adapted XP approach can provide benefits to medium and large projects similar to what standard XP delivers for small projects.
MCDM Techniques for the Selection of Material Handling Equipment in the Autom...IJMER
Material Handling Equipments are utilized in different shops of an automobile industry.
For culling congruous Material Handling Equipment, it is felt that some Multi Criteria Decision
Making Methods must be used due to their ability of converting an intricate quandary to a paired
comparison. These methods are predicated on some relative Criteria and Sub-criteria. Certain
methods such as; Analytic Hierarchy Process (AHP), Fuzzy Analytic Hierarchy Process (FAHP), and
Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) Method have to be utilized
for solving the quandary of Material Handling Equipment cull in different shops of automobile
industry. For solving these quandaries, some criteria (Material, Move, and Method) are culled.
The main conclusions drawn from this study are that, Method criteria is more consequential for
culling Material Handling Equipment, and Conveyor System is more efficient and precise Equipment
for Handling the Material in shop floor of any automobile industry. The focus of this research is in
the area of Cull of Material Handling Equipment in automobile industry. Cull of congruous Material
Handling Equipment is very paramount for reducing manufacturing cycle time, and cost of
manufacturing
This document discusses the impact of aspect-oriented programming (AOP) on software maintainability based on a literature review and case studies. It summarizes several case studies that measured maintainability metrics like coupling, cohesion, and separation of concerns in object-oriented (OO) systems versus aspect-oriented (AO) systems. The studies found that AO systems generally had less coupling between components, higher separation of concerns, and were more changeable and maintainable than equivalent OO systems. The document also outlines various software metrics that have been used to measure maintainability attributes in AO systems like cohesion, coupling, size, and changeability.
This document discusses principles and approaches for modern software project management. It covers topics like iterative development processes, architecture-first approaches, risk management, and quality control. The document outlines phases in a lifecycle like inception, elaboration, construction, and transition. It also describes evaluating project artifacts like requirements, design, and implementation through consistency checks, traceability, and quality reviews.
Similar to Ranking The Refactoring Techniques Based on The External Quality Attributes (20)
Exploratory study on the use of crushed cockle shell as partial sand replacem...IJRES Journal
The increasing demand for natural river sand supply for the use in construction industry along
with the issue of environmental problem posed by the dumping of cockle shell, a by-product from cockle
business have initiated research towards producing a more environmental friendly concrete. This research
explores the potential use of cockle shell as partial sand replacement in concrete production. Cockle shell used
in this experimental work were crushed to smaller size almost similar to sand before mixed in concrete. A total
of six concrete mixtures were prepared with varying the percentages of cockle shell viz. 0%, 5%, 10%, 15%,
20% and 25%. All the specimens were subjected to continuous water curing. The compressive strength test was
conducted at 28 days in accordance to BS EN 12390. Finding shows that integration of suitable content of
crushed cockle shell of 10% as partial sand replacement able to enhance the compressive strength of concrete.
Adopting crushed cockle shell as partial sand replacement in concrete would reduce natural river sand
consumption as well as reducing the amount of cockle shell disposed as waste.
Congenital Malaria: Correlation of Umbilical Cord Plasmodium falciparum Paras...IJRES Journal
The vertical (trans-placental) transmission of the parasite Plasmodium falciparum from
pregnant mother to fetus during gestational period was investigated in a clinical research involving 43 full term
pregnant women in selected Hospitals in Jimeta Yola, Adamawa State Nigeria. During the observational study,
parasitemia was determined by light microscopic examination of umbilical and maternal peripheral blood film
for the presence of the trophozoites of Plasmodium falciparum. Correlational analysis was then carried on the
result obtained at p<0.05.><0.05) was established between maternal peripheral blood and umbilical cord
blood parasitemia with Pearson’s correlation coefficient of 0.762. Thus, in a malaria endemic area like Yola,
Adamawa State, Nigeria, with a stable transmission of parasite, there is a high probability of vertical
transmission of Plasmodium falciparum parasite from mother to fetus during gestation that can be followed by
the presentation of the symptoms of malaria by the newborn and other malaria related complications. Families
are advised to consistently sleep under appropriately treated insecticide mosquito net to avoid mosquito bite and
subsequent infestation.
Review: Nonlinear Techniques for Analysis of Heart Rate VariabilityIJRES Journal
Heart rate variability (HRV) is a measure of the balance between sympathetic mediators of heart
rate that is the effect of epinephrine and norepinephrine released from sympathetic nerve fibres acting on the
sino-atrial and atrio-ventricular nodes which increase the rate of cardiac contraction and facilitate conduction at
the atrio-ventricular node and parasympathetic mediators of heart rate that is the influence of acetylcholine
released by the parasympathetic nerve fibres acting on the sino-atrial and atrio-ventricular nodes leading to a
decrease in the heart rate and a slowing of conduction at the atrio-ventricular node. Sympathetic mediators
appear to exert their influence over longer time periods and are reflected in the low frequency power(LFP) of
the HRV spectrum (between 0.04Hz and 0.15 Hz).Vagal mediators exert their influence more quickly on the
heart and principally affect the high frequency power (HFP) of the HRV spectrum (between 0.15Hz and 0.4
Hz). Thus at any point in time the LFP:HFP ratio is a proxy for the sympatho- vagal balance. Thus HRV is a
valuable tool to investigate the sympathetic and parasympathetic function of the autonomic nervous system.
Study of HRV enhance our understanding of physiological phenomenon, the actions of medications and disease
mechanisms but large scale prospective studies are needed to determine the sensitivity, specificity and predictive
values of heart rate variability regarding death or morbidity in cardiac and non-cardiac patients. This paper
presents the linear and nonlinear to analysis the HRV.
Dynamic Modeling for Gas Phase Propylene Copolymerization in a Fluidized Bed ...IJRES Journal
The document presents a dynamic two-phase model for a fluidized bed reactor used to produce polypropylene. The model divides the reactor into an emulsion phase and bubble phase, with reaction assumed to occur in both phases. Simulation results show the temperature profile is lower than previous single-phase models due to considering both phases. Approximately 13% of the produced polymer comes from the bubble phase, demonstrating the importance of accounting for both phases.
Study and evaluation for different types of Sudanese crude oil propertiesIJRES Journal
Sudanese crude oil is regarded as one of the sweet types of crude in the world, Sulphur containing
compounds are un desirable in petroleum because they de activate the catalyst during the refining processes and
are the main source of acid rains and environmental pollution.(Mark Cullen 2001),Since it contains considerable
amount of salts and acids, it negatively impact the production facilities and transportation lines with corrosive
materials. However it suffers other problems in flow properties represented by the high viscosity and high
percentage of wax. Samples were collected after the initial and final treatment at CPF, and tested for
physical and chemical properties.wax content is in the range 23-31 weight % while asphalting content is about
0.1 weight% . Resin content is 13-7 weight % and deposits are 0.01 weight%. The carbon number distribution in
the crude is in the range 7-35 carbon atoms. The pour point vary between 39°C-42°C and the boiling point is in
the range 70 °C - 533 °C.
A Short Report on Different Wavelets and Their StructuresIJRES Journal
This article consists of basics of wavelet analysis required for understanding of and use of wavelet
theory. In this article we briefly discuss about HAAR wavelet transform their space and structures.
A Case Study on Academic Services Application Using Agile Methodology for Mob...IJRES Journal
Recently, Mobile Cloud Computing reveals many modern development areas in the Information
Technology industry. Several software engineering frameworks and methodologies have been developed to
provide solutions for deploying cloud computing resources on mobile application development. Agile
methodology is one of the most commonly used methodologies in the field. This paper presents the MCCAS a
Web and Mobile application that provide feature for the Palestinian higher education/academic institutions. An
Agile methodology was used in the development of the MCCAS but in parallel with emphasis on Cloud
computing resources deployment. Also many related issues is discussed such as how software engineering
modern methodologies (advances) influenced the development process.
Wear Analysis on Cylindrical Cam with Flexible RodIJRES Journal
Firstly, the kinetic equation of spatial cylindrical cam with flexible rod has been established. Then, an
accurate cylindrical cam mechanism model has been established based on the spatial modeling software
Solidworks. The dynamic effect of flexible rod on mechanical system was studied in detail based on the
mechanical system dynamics analytical software Adams, and Archard wear model is used to predict the wear of
the cam. We used Ansys to create finite element model of the cam link, extracted the first five order mode to
export into Adams. The simulation results show that the dynamic characteristics of spatial cylindrical cam
mechanical system with flexible rod is closed to ideal mechanism. During the cam rotate one cycle, the collision
in the linkage with a clearance occurs in some special location, others still keep a continuous contact, and the
prediction of wear loss is smaller than rigid body.
DDOS Attacks-A Stealthy Way of Implementation and DetectionIJRES Journal
Cloud Computing is a new paradigm provides various host service [paas, saas, Iaas over the internet.
According to a self-service,on-demand and pay as you use business model,the customers will obtain the cloud
resources and services.It is a virtual shared service.Cloud Computing has three basic abstraction layers System
layer(Virtual Machine abstraction of a server),Platform layer(A virtualized operating system, database and
webserver of a server and Application layer(It includes Web Applications).Denial of Service attack is an attempt
to make a machine or network resource unavailable to the intended user. In DOS a user or organization is
deprived of the services of a resource they would normally expect to have.A Successful DOS attack is a highly
noticeable event impacting the entire online user base.DOS attack is found by First Mathematical Metrical
Method (Rate Controlling,Timing Window,Worst Case and Pattern Matching)DOS attack not only affect the
Quality of the service and also affect the performance of the server. DDOS attacks are launched from Botnet-A
large Cluster of Connected device(cellphone,pc or router) infected with malware that allow remote control by an
attacker. Intruder using SIPDAS in DDOS to perform attack.SIPDAS attack strategies are detected using Heap
Space Monitoring Algorithm.
An improved fading Kalman filter in the application of BDS dynamic positioningIJRES Journal
Aiming at the poor dynamic performance and low navigation precision of traditional fading
Kalman filter in BDS dynamic positioning, an improved fading Kalman filter based on fading factor vector is
proposed. The fading factor is extended to a fading factor vector, and each element of the vector corresponds to
each state component. Based on the difference between the actual observed quantity and the predicted one, the
value of the vector is changed automatically. The memory length of different channel is changed in real time
according to the dynamic property of the corresponding state component. The actual observation data of BDS is
used to test the algorithm. The experimental results show that compared with the traditional fading Kalman filter
and the method of the third references, the positioning precision of the algorithm is improved by 46.3% and
23.6% respectively.
Positioning Error Analysis and Compensation of Differential Precision WorkbenchIJRES Journal
The document analyzes positioning errors in differential precision workbenches and proposes a compensation method. It discusses sources of error in workbench transmission systems and guides. Through theoretical analysis and experimentation, it is shown that positioning errors increase with travel distance due to factors like guideway errors. A method is developed to sample positioning at multiple points, compare values to identify errors, and implement reverse error correction through motion control cards. This allows positioning accuracy better than 15 micrometers over 150mm of travel to be achieved. The compensation method can improve precision for a range of machine tool designs.
Status of Heavy metal pollution in Mithi river: Then and NowIJRES Journal
The Mithi River runs through the heart of suburban Mumbai. Its path of flow has been severely
damaged due to industrialization and urbanization. The quality of water has been deteriorating ever since. The
Municipal and industrial effluents are discharged in unchecked amounts. The municipal discharge comprises
untreated domestic and sewage wastes whereas the industries are majorly discharge chemicals and other toxic
effluents which are responsible in increasing the metal load of the river. In the current study, the water is
analysed for heavy metals- Copper, Cadmium, Chromium, Lead and Nickel. It also includes a brief
understanding on the fluctuations that have occurred in the heavy metal pollution, through the compilation of
studies carried out in the area previously.
The Low-Temperature Radiant Floor Heating System Design and Experimental Stud...IJRES Journal
In order to analyze the temperature distribution of the low-temperature radiant floor heating system
that uses the condensing wall-hung boiler as the heat source, the heating system is designed according to a typical
house facing south in Shanghai. The experiments are carried out to study the effects of the supply water
temperature on the thermal comfort of the system. Eventually, the supply water temperature that makes people in
the room feel more comfortable is obtained. The result shows that in the condition of that the outside temperature
is 8~15℃ and the relative humidity is 30~70%RH, the temperature distribution in the room is from high to low
when the height is from bottom to top. The floor surface temperature is highest, but its uniformity is very poor.
When the heating system reaches the steady state, the air temperature of the room is uniform. When the supply
water temperature is 63℃ The room is relatively comfortable at the above experimental condition.
Experimental study on critical closing pressure of mudstone fractured reservoirsIJRES Journal
This study examines the critical closing pressure of fractures in mudstone reservoir cores from the Daqing oilfield in China. Laboratory experiments subjected fractured and unfractured mudstone cores to increasing external pressures while measuring permeability. The critical closing pressure is defined as the pressure when fractured core permeability matches unfractured permeability, indicating fracture closure. Results show fractured cores have higher permeability than unfractured cores due to fractures. Permeability generally decreases exponentially with increasing pressure. By calculating sensitivity equations relating permeability and production pressure difference, the study estimates critical closing pressures under reservoir conditions are lower than values from external pressure experiments. The study provides guidance but notes limitations in fully simulating complex in-situ stress conditions.
Correlation Analysis of Tool Wear and Cutting Sound SignalIJRES Journal
With the classic signal analysis and processing method, the cutting of the audio signal in time
domain and frequency domain analysis. We reached the following conclusions: in the time domain analysis,
cutting audio signals mean and the variance associated with tool wear state change occurred did not change
significantly, and tool wear is not high degree of correlation, and the mean-square value of the audio signal
changes in the size and tool wear the state has a good relationship.
Reduce Resources for Privacy in Mobile Cloud Computing Using Blowfish and DSA...IJRES Journal
Mobile cloud computing in light of the increasing popularity among users of mobile smart
technology which is the next indispensable that enables users to take advantage of the storage cloud computing
services. However, mobile cloud computing, the migration of information on the cloud is reliable their privacy
and security issues. Moreover, mobile cloud computing has limitations in resources such as power energy,
processor, Memory and storage. In this paper, we propose a solution to the problem of privacy with saving and
reducing resources power energy, processor and Memory. This is done through data encryption in the mobile
cloud computing by symmetric algorithm and sent to the private cloud and then the data is encrypted again and
sent to the public cloud through Asymmetric algorithm. The experimental results showed after a comparison
between encryption algorithms less time and less time to decryption are as follows: Blowfish algorithm for
symmetric and the DSA algorithm for Asymmetric. The analysis results showed a significant improvement in
reducing the resources in the period of time and power energy consumption and processor.
Resistance of Dryland Rice to Stem Borer (Scirpophaga incertulas Wlk.) Using ...IJRES Journal
Rice stem borer is one of the important pests that attack plants so as to reduce production. One way
to control pests is to use organic fertilizers that make the plant stronger and healthier. This study was conducted
to determine the effects of organic fertilizers with various doses without the use of pesticides in controlling stem
borer, Scirpophaga incertulas. Methods using split-split plot design which consists of two levels of the whole
plot factor (solid and liquid organic fertilizers), two levels of the subplot factor (conventional and industry,
Tiens and Mitraflora), and four levels of the sub-subplot factor of conventional and industry (5, 10, 15, 20
tonnes/ha), and one level of the sub-subplot factor of Tiens and Mitraflora (each 2 ml/l). Based on the results
Statistical analysis there were no significant differences among treatments and this shows that the use of organic
fertilizers that only a dose of 5 tonnes/ha is sufficient available nutrients that make plants more robust and
resistant to control stem borer, besides that can reduce production costs and friendly to the environment when
compared with using inorganic fertilizers.
A novel high-precision curvature-compensated CMOS bandgap reference without u...IJRES Journal
A novel high-precision curvature-compensated bandgap reference (BGR) without using op-amp
is presented in this paper. It is based on second-order curvature correction principle, which is a weighted sum of
two voltage curves which have opposite curvature characteristic. One voltage curve is achieved by first-order
curvature-compensated bandgap reference (FCBGR) without using op-amp and the other found by using W
function is achieved by utilizing a positive temperature coefficient (TC) exponential current and a linear
negative TC current to flow a linear resistor. The exponential current is gained by using anegative TC voltage to
control a MOSFET in sub-threshold region. In the temperature ranging from -40℃ to 125℃, experimental
results implemented with SMIC 0.18μm CMOS process demonstrate that the presented BGR can achieve a TC
as low as 2.2 ppm/℃ and power-supply rejection ratio(PSRR)is -69 dB without any filtering capacitor at 2.0 V.
While the range of the supply voltage is from 1.7 to 3.0 V, the output voltage line regulation is about1 mV/ V
and the maximum TC is 3.4 ppm/℃.
Structural aspect on carbon dioxide capture in nanotubesIJRES Journal
In this work we reported the carbon dioxide adsorption (CO2) in six different nanostructures in order
to investigate the capturing capacity of the materials at nanoscale. Here we have considered the three different
nanotubes including zinc oxide nanotube (ZnONT), silicon carbide nanotube (SiCNT) and single walled carbon
nanotube (SWCNT). Three different chiralities such as zigzag (9,0), armchair (5,5) and chiral (6,4) having
approximately same diameter are analyzed. The adsorption binding energy values under various cases are
estimated with density functional theory (DFT). We observed CO2 molecule chemisorbed on ZnONT and
SiCNT’s whereas the physisorption is predominant in CNT. To investigate the structural aspect, the tubes with
defects are studied and compared with defect free tubes. We have also analyzed the electrical properties of tubes
from HOMO, LUMO energies. Our results reveal the defected structure enhance the CO2 capture and is
predicted to be a potential candidate for environmental applications.
Thesummaryabout fuzzy control parameters selected based on brake driver inten...IJRES Journal
In this paper, the brake driving intention identification parameters based on the fuzzy control are
summarized and analyzed, the necessary parameters based on the fuzzy control of the brake driving intention
recognition are found out, and I pointed out the commonly corrupt parameters, and through the relevant
parameters , I establish the corresponding driving intention model.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
Ranking The Refactoring Techniques Based on The External Quality Attributes
International Journal of Research in Engineering and Science (IJRES)
ISSN (Online): 2320-9364, ISSN (Print): 2320-9356
www.ijres.org Volume 3 Issue 6 ǁ June 2015 ǁ PP.74-87
Ranking The Refactoring Techniques Based on The External
Quality Attributes
Sultan Alshehri 1 and Abdulmajeed Aljuhani 2
1,2 Department of Engineering, Software Engineering, University of Regina, Canada
ABSTRACT: The selection of appropriate decisions is a significant issue that can lead to more satisfactory
results. The difficulty arises when there are several alternatives and all of them have the same chance of
being selected. It is important, therefore, to establish priorities among these alternatives in order to
choose the most appropriate one. The analytic hierarchy process (AHP) is capable of structuring decision
problems and producing mathematically determined judgments built on knowledge and experience. This suggests
that the AHP should prove useful in agile software development, where complex decisions occur routinely. This
paper presents an example of using the AHP to rank refactoring techniques based on external code
quality attributes. XP encourages applying refactoring where the code smells bad. However, refactoring
may consume considerable time and effort. Therefore, the AHP has been applied to maximize the benefits of
refactoring with less time and effort. It was found that ranking the refactoring techniques
helped the XP team to focus on the techniques that improve the code and the XP development process in general.
KEYWORDS: Extreme programming, Refactoring; Refactoring techniques; Analytic Hierarchy Process.
I. Introduction:
Code refactoring is the process of re-designing existing code by changing its internal structure while keeping its external behavior [1]. Refactoring is a core practice in the extreme programming (XP) development process, which helps enhance software design while reducing the cost and effort of coding and testing [2].
1.1 Guidelines in the refactoring process
Several researchers, such as [3], have described the refactoring process in steps such as the following:
- Determining which part of the code should be refactored.
- Selecting the appropriate refactoring methods to apply.
- Applying the refactoring.
- Evaluating the influence of the applied refactoring methods on the code quality attributes [2].
Further steps can be found in [4, 5, 6].
Issues regarding refactoring tools:
Murphy-Hill et al. [7] conducted an empirical study that collected data about the refactoring process in order to assist in constructing a robust refactoring tool. Brunel et al. [8] analyzed several open-source Java systems (JasperReports, Tyrant, Velocity, MegaMek, PDFBox, HSQLDB, Antlr) in order to investigate the accuracy of refactoring tools. A refactoring characterization was introduced by Maticorna and Perez [9]; the authors also explained how to use it as a tool for comparing different refactoring definitions. Roberts et al. [10] studied the functional requirements and empirical criteria for refactoring tools, finding that accuracy and the ability to search across the entire program are the most important functional requirements, and that integration and speed are the most important empirical criteria. Simmonds and Mens [11] conducted a study of the strengths and weaknesses of four refactoring tools: SmalltalkWorks 7.0, Eclipse, Guru, and Together ControlCenter 6.0.
1.2 Identification of code smells to locate possible refactoring
Advani et al. [12] studied how complexity varies across systems when applying refactoring. In addition, the authors introduced a method to allocate testing effort. Bryton et al. [13] presented a Binary Logistic Regression (BLR) model to detect code smells, specifically the Long Method, objectively. An intelligent assistant and a refactoring agent were used by Sandalski et al. [14] to inspect the refactoring architecture and evaluate existing code in order to identify which parts of the code need to be refactored. Based on plug-ins for Eclipse, a technique was introduced by Hayashi et al. [15] to guide the development team on how and where to refactor based on the histories of program modifications.
1.3 The impact of refactoring on the code quality attributes:
Section 5 of this paper describes several studies that investigated the impact of refactoring on code quality attributes. Yet it is difficult and time-consuming to apply all possible refactoring techniques during a development iteration. Therefore, this paper provides a way of ranking these techniques that can improve code quality and transfer knowledge to the development team.
II. The Analytical Hierarchy Process:
AHP is a multi-criteria decision-making tool that structures a problem in a hierarchical model. The human factor plays a main role in AHP: people gather the elements of a problem and structure them hierarchically. Thomas Saaty introduced AHP as a robust and effective method for solving complex decision-making problems [16]. AHP consists of several steps:
- Breaking down the problem into a hierarchical model.
- Identifying the criteria and designing a criteria pairwise comparison matrix, so that the weight of each criterion is specified with respect to the other criteria.
- Designing an alternatives pairwise matrix with respect to the control criterion in each matrix.
- Calculating the consistency ratio in order to check the consistency of the judgments.
- Selecting the highest weighted average rating after calculating the average weight for each alternative.
In order to design a pairwise comparison, Saaty [17] presented an importance scale to assign to each criterion and alternative. Table 1 shows the importance scale.
Table 1 AHP Numerical Scale Developed by Saaty [17].

Scale                                       Numerical Rating   Reciprocal
Equal importance                            1                  1
Moderate importance of one over another     3                  1/3
Strong importance                           5                  1/5
Very strong or demonstrated importance      7                  1/7
Extreme importance                          9                  1/9
Intermediate values                         2, 4, 6, 8         1/2, 1/4, 1/6, 1/8
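The AHP steps above can be sketched numerically. The following is a minimal illustration (not the study's tool), assuming a made-up 4x4 judgment matrix over the four external attributes; the random-index table is the standard Saaty one, but the judgment values themselves are invented for this sketch:

```python
import numpy as np

# Illustrative pairwise comparison matrix for four criteria
# (reusability, flexibility, maintainability, understandability).
# The judgment values below are made up for this sketch.
A = np.array([
    [1,   3,   1/2, 1/4],
    [1/3, 1,   1/3, 1/5],
    [2,   3,   1,   1/2],
    [4,   5,   2,   1  ],
])

def ahp_weights(matrix):
    """Priority weights via the principal eigenvector of the matrix."""
    eigvals, eigvecs = np.linalg.eig(matrix)
    k = np.argmax(eigvals.real)              # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

def consistency_ratio(matrix):
    """CR = CI / RI; judgments are usually accepted when CR < 0.10."""
    n = matrix.shape[0]
    _, lambda_max = ahp_weights(matrix)
    ci = (lambda_max - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}[n]
    return ci / ri

weights, _ = ahp_weights(A)
print(weights.round(3), round(consistency_ratio(A), 3))
```

In this sketch the fourth criterion receives the largest weight because its row dominates the comparisons, and the consistency ratio stays below the conventional 0.10 threshold.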
III. Refactoring Techniques:
Several benefits can be achieved from refactoring, such as increasing programming speed and detecting bugs [1]. Fowler [1] identified more than 70 refactoring techniques (i.e., patterns), grouped into six categories: moving features between objects, simplifying conditional expressions, dealing with generalization, making method calls simpler, composing methods, and organizing data. Each refactoring pattern has a certain purpose and influence on the quality attributes. As mentioned above, using refactoring methods enhances the code and the design of the software; it is therefore important to determine the most important quality attributes in order to maximize the refactoring value. Given the lack of knowledge about how to use refactoring patterns, the procedure of choosing the appropriate patterns consumes time and leads to conflicting perspectives among team members. In this paper, we introduce AHP to rank various refactoring patterns according to their influence on code quality. In addition, we selected eight refactoring patterns from four of the categories introduced by Fowler in order to investigate the importance of the chosen refactoring patterns using AHP.
The following patterns were selected:
- Extract Method, Inline Method, and Inline Temp Method from “Composing Methods” category.
- Extract Class, Inline Class, and Move Method from “Moving Features Between Objects” category.
- Rename Method from “Making Method Calls Simpler” category.
- Pull Up Method from “Dealing with Generalization” category.
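To make the selected patterns concrete, here is a minimal, hypothetical sketch of the Extract Method pattern in Python (the function names are invented for illustration): a fragment of a long function is pulled into its own well-named method, leaving external behavior unchanged:

```python
# Before Extract Method: one function mixes computing and printing.
def print_invoice_before(name, amounts):
    total = 0
    for a in amounts:
        total += a
    print(f"Customer: {name}")
    print(f"Total owed: {total}")

# After Extract Method: the summation becomes a named helper.
# Behavior is identical, but the intent is clearer and the helper
# can now be reused and tested on its own.
def outstanding_total(amounts):
    return sum(amounts)

def print_invoice(name, amounts):
    print(f"Customer: {name}")
    print(f"Total owed: {outstanding_total(amounts)}")
```

The other patterns work analogously at the class level (Extract Class, Inline Class) or across the inheritance hierarchy (Pull Up Method).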
IV. Case Study Setup:
A part of this work has been published in [2]; therefore, the case study setup is similar to that in [2].
4.1 Research Questions and Propositions:
The primary objective in the refactoring practice is to investigate how AHP can be used in ranking the refactoring techniques based on the internal and external code quality attributes. The following research questions provided a focus for our case study investigation:
- How important is it to practice the refactoring using AHP?
- How can AHP rank the refactoring methods based on specific criteria?
- How can AHP affect the communication among the developers?
- How can AHP help in saving the developers’ time while refactoring?
The methodology used in this study comprises four case studies: two in an academic environment and two in industrial environments, with embedded units of analysis. The study propositions are outlined below:
- AHP captures important criteria and alternatives that need to be considered when refactoring.
- AHP facilitates the process of ranking and selection in refactoring.
- AHP involves an informative discussion and improves the communication among the developers.
- AHP provides a map to focus on the most important refactoring methods that increase the code quality.
- AHP resolves conflicting opinions among the developers when practicing the refactoring and trying to find code smells.
4.2 Unit of Analysis:
According to [18], the unit of analysis should be derived from the main research questions of the study. The main focus of this study is to rank the refactoring patterns based on the external quality attributes, so the ranking and the evaluation process are the units of analysis for this study. The developers' view of how AHP benefits each XP practice is another unit of analysis. As a result, this work is designed as multiple (embedded) case studies with multiple units of analysis.
4.3 Data Collection and Sources:
At the beginning of applying AHP to the refactoring practice, we proposed the criteria we wanted to investigate in order to examine the AHP tool's ability and benefits. These data were collected from the literature and previous studies. To increase the validity of this study, data triangulation was employed. The data sources in this study were:
- Archival records, such as a study plan from the graduate students.
- Questionnaires given to the participants while developing the XP project.
- Questionnaires given to experts from industry.
- Open-ended interviews with the participants.
- Feedback from the customer.
However, the most important data source of this study was an XP project conducted at the University of Regina in a software design class in fall 2012. In addition, three companies initially participated in evaluating some of the XP practices based on proposed criteria that affect each practice. Later on, two of the three companies were involved in validating the results.
4.4 Designing the Case Studies:
The educational case studies were performed as part of the Advanced Software Design class for graduate students taught in fall 2012 at the University of Regina. The participants were 12 master's students and a client from a local company in Regina. Participants had various levels of programming experience and good familiarity with XP and its practices. The students' background relevant to the experiment included several programming languages, such as Java, C, C#, and ASP.NET. They had previously implemented projects using various software process methodologies. The study was carried out over 15 weeks; students were divided into two teams. Both teams were assigned to build a project called "Issue Tracking System", brought by the client along with industrial requirements. It ran in 5 main iterations, and by the end of the semester all of the software requirements had been delivered. The students were paired based on their experience and knowledge, but we also had an opportunity to pair some experts with novice and average programmers for the purpose of the study. Participants were given detailed lectures and supporting study materials on extreme programming practices, focused on planning game activities including writing user stories, prioritizing the stories, estimating process parameters, and demonstrating developers' commitments. The students were not new to the concept of XP, but they gained more knowledge and foundation, specifically in iteration planning, release planning, and prioritizing user stories. In addition, the students were exposed to the AHP methodology and learned the processes necessary to conduct the pairwise comparisons and to do the calculations. Several papers and other materials about AHP and user stories were given to the students to train them and increase their skills in applying the methodology. In addition, a survey was distributed among the students to gather further information about their personal experience and knowledge.
The researchers visited three companies (two in Regina and one in Calgary, all in Canada) several times and met with the developers and team leaders to explain the purpose of the study and to collect data and feedback from real industry settings. To preserve their anonymity, A, B, and C replace the companies' real names. All the companies are familiar with the XP concept and currently practice refactoring during their development. In this study, eighteen experts with an average of 10 years of experience used their knowledge to evaluate the proposed pairwise alternatives using AHP.
V. AHP Use in Refactoring:
The AHP method can be used by the XP development team to rank the refactoring techniques based on the external code quality attributes. In the following sections, the AHP evaluation, structure, and process are presented.
5.1 Background
In this section, we highlight some of the previous related works that have studied the impact of refactoring on the external code quality attributes:
Leitch and Stroulia [19] studied the refactoring effects on software maintenance effort and costs using
dependency analysis. Alshayeb [20] aimed to validate/invalidate the refactoring effects on some of the external
quality attributes (adaptability, maintainability, understandability, reusability, and testability) in order to decide
whether the cost and time put into refactoring are worthwhile. He concluded his study with results that stated
that the refactoring does not necessarily improve these quality attributes.
Similarly, Raed and Li [21] used a hierarchical quality model to investigate the effect of refactoring activities on quality factors such as reusability, flexibility, extendibility, and effectiveness. They found that not all refactoring methods improve the quality factors.
Kataoka et al. [22] proposed a quantitative evaluation method to measure the maintainability enhancement by
refactoring. They focused only on coupling metrics in order to quantify the refactoring effects.
Elish and Alshayeb [23,24] classified the refactoring techniques based on their effects on the internal and
external quality attributes. The external quality metrics were adaptability, completeness, maintainability,
understandability, reusability, and testability.
Weber and Reichert [25] proposed 11 refactoring techniques to support the business process management at the
operational level and allow the designer to improve the quality of the process model. They focused specifically
on the process-aware information system model (PAIS) that provides schemes for process execution.
Moser et al. [26] conducted a case study in industrial projects and agile environment to analyze the impact of
refactoring on the reusability attribute. They analyzed how often part of the software (i.e. classes, methods, etc.)
is used in a product.
Stroulia and Leitch [27] proposed a method to estimate the expected software maintenance cost by predicting
the return on investment (ROI) for the refactoring activity. The authors claim that this would increase the
adoption of refactoring practices and improve the quality attributes of the software, such as performance and
maintainability. Finally, Moser et al. [28] developed a model to identify a refactoring effort during the
maintenance phase.
5.2 Proposed Criteria for Ranking the Refactoring
In order to rank the refactoring patterns, it is necessary to identify the external quality attributes that are most desirable to the development team or the organization. Each project can have a different set of criteria and refactoring methods to be ranked and evaluated. In this section, we have chosen four external quality attributes to be the core criteria for the refactoring ranking:
- Reusability: defined as the capability for a component and subsystems to be suitable for use in more
than one application, or in building other components, with little or no adaptation [29, 30].
- Flexibility: defined as “ the ability of a system to adapt to varying environments and situations, and to
cope with changes in business policies and rules. A flexible system is one that is easy to reconfigure or
adapt in response to different user and system requirements” [31].
- Maintainability: defined as the ability of a system to accept changes with a degree of ease. These changes could involve modifying a component or other attribute to correct faults, improve performance, or adapt to a new environment [32].
- Understandability: defined as the degree to which the meaning of a software component is clear to a
user [29].
5.3 AHP-refactoring structure for the external attributes
The top level is the main objective: ranking the refactoring techniques; the second level is the criteria: reusability, flexibility, maintainability, and understandability; the third level is the alternatives: Extract Method, Inline Method, Inline Temp Method, Extract Class, Inline Class, Move Method, Pull Up Method, and Rename Method. Figure 1 illustrates the AHP structure for the problem.
Figure 1 AHP Structure for the refactoring based on the external attributes.
5.4 Refactoring pairwise comparison process
All the participants had to apply the refactoring patterns to a real project to see the real impact on their code. Then they were required to evaluate the refactoring patterns based on certain criteria. For this purpose, sheets of paper with the appropriate AHP tables were handed to all participants in order to save time and facilitate the comparison process. The first page was dedicated to collecting general information about the evaluator, his/her experience, and the type and level of his/her programming skills. The participants compared the criteria using the Saaty scale from 1 to 9. For example, the participants were asked:
- Which is more important: reusability or flexibility, and by how much?
- Which is more important: reusability or maintainability, and by how much?
- Which is more important: reusability or understandability, and by how much?
- Which is more important: flexibility or maintainability, and by how much?
- Which is more important: flexibility or understandability, and by how much?
- Which is more important: maintainability or understandability, and by how much?
After finishing the criteria comparisons, the participants had to evaluate all the refactoring techniques against each criterion. For example:
In terms of flexibility, which is more important, Extract Method or Inline Method, and by how much?
Similarly, all of the following comparisons were conducted for each criterion:
(Extract Method X Inline Method), (Extract Method X Inline Temp Method), (Extract Method X Extract Class), (Extract Method X Inline Class), (Extract Method X Move Method), (Extract Method X Pull Up Method), (Extract Method X Rename Method).
(Inline Method X Inline Temp Method), (Inline Method X Extract Class), (Inline Method X Inline Class), (Inline Method X Move Method), (Inline Method X Pull Up Method), (Inline Method X Rename Method).
(Inline Temp Method X Extract Class), (Inline Temp Method X Inline Class), (Inline Temp Method X Move Method), (Inline Temp Method X Pull Up Method), (Inline Temp Method X Rename Method).
(Extract Class X Inline Class), (Extract Class X Move Method), (Extract Class X Pull Up Method), (Extract Class X Rename Method).
(Inline Class X Move Method), (Inline Class X Pull Up Method), (Inline Class X Rename Method).
(Move Method X Pull Up Method), (Move Method X Rename Method).
(Pull Up Method X Rename Method).
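Once all pairwise matrices have been evaluated, AHP synthesizes the overall ranking by weighting each technique's per-criterion priority by the criterion's weight. A small sketch of this synthesis step follows; every number below is invented for illustration (three techniques instead of eight, not the study's data):

```python
# Illustrative AHP synthesis: criteria weights sum to 1, and the
# techniques' weights sum to 1 within each criterion column.
criteria_weights = {
    "reusability": 0.22, "flexibility": 0.12,
    "maintainability": 0.30, "understandability": 0.36,
}

technique_weights = {
    "Extract Class": {"reusability": 0.50, "flexibility": 0.30,
                      "maintainability": 0.40, "understandability": 0.20},
    "Rename Method": {"reusability": 0.20, "flexibility": 0.40,
                      "maintainability": 0.25, "understandability": 0.50},
    "Inline Method": {"reusability": 0.30, "flexibility": 0.30,
                      "maintainability": 0.35, "understandability": 0.30},
}

def overall_scores(cw, tw):
    """Weighted sum of per-criterion priorities for each technique."""
    return {t: sum(cw[c] * w[c] for c in cw) for t, w in tw.items()}

scores = overall_scores(criteria_weights, technique_weights)
ranking = sorted(scores, key=scores.get, reverse=True)
```

The rankings reported in the following tables are the result of exactly this kind of weighted aggregation over the participants' judgments.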
VI. AHP Evaluation Results
6.1 Educational Case Studies
For Team 1, the rankings for the refactoring techniques based on all criteria (i.e. reusability, flexibility,
maintainability and understandability) are summarized as follows: First: Extract Class (18.33); Second: Rename
Method (15.17); Third: Pull Up Method (14.13); Fourth: Extract Method (13.95); Fifth: Inline Class (12.81);
Sixth: Move Method (12.25); Seventh: Inline Method (6.74); Eighth: Inline Temp Method (6.62). Table 2
summarizes the results. Figure 2 shows the importance of each criterion as follows: understandability (35.70),
maintainability (30.09), reusability (22.42), and flexibility (11.80).
Table 2 Ranking the refactoring patterns by team 1.

Refactoring Technique    All
Extract Class            18.33 %
Rename Method            15.17 %
Pull Up Method           14.13 %
Extract Method           13.95 %
Inline Class             12.81 %
Move Method              12.25 %
Inline Method            6.74 %
Inline Temp Method       6.62 %
The rankings for the prioritization of techniques by Team 2 are summarized as follows: First: Extract Class
(16.66); Second: Extract Method (16.37); Third: Rename Method (15.41); Fourth: Pull Up Method (15.16);
Fifth: Move Method (12.74); Sixth: Inline Temp Method (8.07); Seventh: Inline Method (7.92); Eighth: Inline
Class (7.66). Table 3 summarizes the results.
Figure 3 shows the importance of each criterion as follows: maintainability (30.85), understandability (29.7),
flexibility (21.77), and reusability (17.67).
Table 3 Ranking the refactoring patterns by team 2.

Refactoring Technique    All
Extract Class            16.66 %
Extract Method           16.37 %
Rename Method            15.41 %
Pull Up Method           15.16 %
Move Method              12.74 %
Inline Temp Method       8.07 %
Inline Method            7.92 %
Inline Class             7.66 %
6.1.1 Observations:
1. Considering all the criteria together, the Extract Class technique was ranked in the highest position by both teams.
2. The Rename Method was ranked in advanced positions: Team 1 ranked it second and Team 2 ranked it third.
3. The understandability and maintainability quality attributes were considered the most important by both teams.
4. Looking at the refactoring techniques for each criterion individually, both teams ranked the Rename Method in the top position for the understandability attribute. For the other criteria, each team ranked the refactoring techniques differently. See tables 4 and 5.
5. For the reusability quality attribute, Team 1 ranked the Extract Class in the highest position, while Team 2 ranked the Extract Method at the top.
6. For the flexibility quality attribute, Team 1 ranked the Rename Method in the highest position, while Team 2 ranked the Extract Class at the top.
7. For the maintainability quality attribute, Team 1 ranked the Extract Class in the highest position, while Team 2 ranked the Pull Up Method at the top.
8. The Extract Class was in the highest position for Team 1 for reusability and maintainability as individual criteria, while Team 2 placed it at the top only for the flexibility criterion.
Figure 2 Importance of the external criteria for Team 1.
Figure 3 Importance of the external criteria for Team 2.
Table 4 Refactoring techniques based on each external criterion by team 1.

Reusability:
Extract Class            22.43 %
Extract Method           15.53 %
Pull Up Method           14.54 %
Move Method              13.89 %
Inline Class             11.46 %
Rename Method            9.72 %
Inline Temp Method       6.35 %
Inline Method            6.09 %

Flexibility:
Rename Method            17.32 %
Extract Class            15.45 %
Pull Up Method           14.57 %
Inline Class             11.99 %
Extract Method           11.94 %
Inline Temp Method       10.34 %
Inline Method            9.43 %
Move Method              8.95 %

Maintainability:
Extract Class            23.8 %
Extract Method           16.82 %
Pull Up Method           13.34 %
Inline Class             10.87 %
Move Method              10.75 %
Rename Method            8.32 %
Inline Temp Method       8.25 %
Inline Method            7.86 %

Understandability:
Rename Method            25.13 %
Extract Class            15.47 %
Inline Class             13.26 %
Pull Up Method           11.91 %
Move Method              11.75 %
Extract Method           9.44 %
Inline Method            7.04 %
Inline Temp Method       6.00 %
Table 5 Refactoring techniques based on each external criterion by team 2.

Reusability:
Extract Method           22.43 %
Extract Class            15.53 %
Pull Up Method           14.54 %
Move Method              13.89 %
Rename Method            11.46 %
Inline Temp Method       9.72 %
Inline Method            6.35 %
Inline Class             6.09 %

Flexibility:
Extract Class            20.74 %
Pull Up Method           14.92 %
Move Method              14.08 %
Extract Method           12.92 %
Inline Method            10.52 %
Inline Class             10.38 %
Inline Temp Method       8.27 %
Rename Method            8.18 %

Maintainability:
Pull Up Method           21.75 %
Rename Method            17.18 %
Extract Method           14.4 %
Extract Class            14.03 %
Inline Temp Method       8.61 %
Move Method              8.6 %
Inline Class             8 %
Inline Method            7.43 %

Understandability:
Rename Method            22.82 %
Extract Method           17.39 %
Move Method              16.72 %
Extract Class            13.41 %
Inline Class             8.31 %
Pull Up Method           8.4 %
Inline Temp Method       7.67 %
Inline Method            7.27 %
6.2 Industrial case studies
The rankings for the prioritization of techniques by company A are summarized as follows: First: Inline
Class (15.71); Second: Extract Method (15.05); Third: Extract Class (14.4); Fourth: Move Method (13.28);
Fifth: Pull Up Method (12.78); Sixth: Inline Method (12.25); Seventh: Inline Temp Method (9.41); Eighth:
Rename Method (7.12). Table 6 summarizes the results.
Figure 4 shows the importance of each criterion as follows: flexibility (35.97), maintainability (28.51),
reusability (24.71), and understandability (10.81).
Table 6 Ranking the refactoring patterns by company A.

Refactoring Technique    All
Inline Class             15.71 %
Extract Method           15.05 %
Extract Class            14.4 %
Move Method              13.28 %
Pull Up Method           12.78 %
Inline Method            12.25 %
Inline Temp Method       9.41 %
Rename Method            7.12 %
The rankings for the prioritization of techniques by company B are summarized as follows: First: Extract Class
(27.84); Second: Extract Method (18.26); Third: Inline Class (11.07); Fourth: Move Method (10.37); Fifth:
Inline Method (8.9); Sixth: Rename Method (8.81); Seventh: Pull Up Method (8.76); Eighth: Inline Temp
Method (5.99). Table 7 summarizes the results. Figure 5 shows the importance of each criterion as follows:
understandability (30.5), reusability (29.42), maintainability (26.97), and flexibility (13.11).
Figure 4 Importance of the external criteria for company A.
Table 7 Ranking the refactoring patterns by company B.

Refactoring Technique    All
Extract Class            27.84 %
Extract Method           18.26 %
Inline Class             11.07 %
Move Method              10.37 %
Inline Method            8.9 %
Rename Method            8.81 %
Pull Up Method           8.76 %
Inline Temp Method       5.99 %

Figure 5 Importance of the external criteria for company B.

The rankings for the prioritization of techniques by company C are summarized as follows: First: Extract Class (21.7); Second: Inline Class (17.38); Third: Inline Method (11.83); Fourth: Extract Method (11.47); Fifth: Pull Up Method (10.86); Sixth: Move Method (9.7); Seventh: Inline Temp Method (9.35); Eighth: Rename Method (7.71). Table 8 summarizes the results. Figure 6 shows the importance of each criterion as follows: reusability (42.19), maintainability (26.34), understandability (18.76), and flexibility (12.71).

Table 8 Ranking the refactoring patterns by company C.

Refactoring Technique    All
Extract Class            21.7 %
Inline Class             17.38 %
Inline Method            11.83 %
Extract Method           11.47 %
Pull Up Method           10.86 %
Move Method              9.7 %
Inline Temp Method       9.35 %
Rename Method            7.71 %

Figure 6 Importance of the external criteria for company C.

6.2.1 Observations:
1. Considering all the criteria together, the Extract Class technique was ranked in the highest position by companies B and C, while company A ranked the Inline Class in the highest position.
2. The Extract Method was ranked in second position by companies A and B, while company C ranked the Inline Class in the second position.
3. The Rename Method was ranked in the last position by companies A and C, while company B ranked the Inline Temp Method in the last position. Also, A and C ranked the Inline Temp Method in the penultimate position.
4. Each company ranked the quality attributes differently. Company A ranked flexibility and maintainability in the two top positions. Company B considered understandability and reusability to be the top ones. Company C considered reusability and maintainability the highest concerns. As a result, companies A and C shared the same concerns about maintainability and ranked it high. On the other hand, companies B and C shared the concerns about reusability and ranked it as one of the top criteria.
5. Looking at the refactoring techniques for each criterion individually, company B ranked the Extract Class in the top position for all the quality attributes: reusability, flexibility, maintainability, and understandability. Company C also ranked the Extract Class in the top position in terms of reusability and understandability, while company A ranked it at the top only in terms of maintainability. See tables 9, 10, and 11.
6. Company C ranked the Inline Class in the first position in terms of flexibility and maintainability, while company A ranked it at the top in reusability.
7. Company A ranked the Extract Method at the top in flexibility and the Move Method at the top in understandability.

Table 9 Refactoring techniques based on each external criterion by company A.
Table 10 Refactoring techniques based on each external criterion by company B.
Table 11 Refactoring techniques based on each external criterion by company C.
6.3 Impact of refactoring on the quality attributes
Tables 12 and 13 show the impact of the refactoring on the external quality attributes reported by Team 1 and
Team 2.
Table 12 Refactoring impact on the external attributes by Team 1
Reusability Flexibility Maintainability Understandability
Extract Method + + + +
Inline Method + - - +
Inline Temp Method 0 0 0 0
Extract Class 0 0 0 0
Inline Class - - + 0
Move Method - - + 0
Pull Up Method + + + +
Rename Method 0 0 + +
Table 13 Refactoring impact on the external attributes by Team 2
Reusability Flexibility Maintainability Understandability
Extract Method + + - +
Inline Method - - + +
Inline Temp Method 0 0 0 -
Extract Class + + + +
Inline Class + - + +
Move Method - 0 + +
Pull Up Method 0 0 + +
Rename Method + 0 + +
VII. REFACTORING VALIDATION
From the previous AHP evaluations conducted by students and experts, we found that three refactoring techniques received high rankings: Extract Class, Inline Class, and Extract Method. In this section, we collected technical information from the two educational studies and two industrial studies to validate the AHP evaluation results obtained previously. The collected information included the following:
• The impact of the proposed refactoring techniques on the external quality attributes.
• How many times each refactoring technique was used in each iteration in each case study.
In addition, the participants were required to record the time spent on refactoring before and after using AHP.
7.1 Number of times using the refactoring techniques
Tables 14 and 15 summarize how many times each of the refactoring techniques were used in each iteration by
companies A and B.
Table 14 Number of times refactoring was used for each iteration by company A
Iteration 1 Iteration 2 Iteration 3 Iteration 4 Iteration 5 Total
Extract Method 27 34 21 49 41 172
Inline Method 5 13 17 8 22 65
InlineTemp Method 0 4 0 7 5 16
Extract Class 18 11 12 15 9 65
Inline Class 18 10 7 11 3 49
Move Method 8 6 0 0 20 34
Pull Up Method 21 17 9 27 13 87
Rename Method 11 0 17 22 5 55
Table 15 Number of times refactoring was used for each iteration by company B
Iteration 1 Iteration 2 Iteration 3 Iteration 4 Iteration 5 Total
Extract Method 9 5 2 1 1 18
Inline Method 2 1 0 0 0 3
InlineTemp Method 1 0 0 0 0 1
Extract Class 5 5 3 3 2 18
Inline Class 1 0 0 0 0 1
Move Method 2 0 0 0 0 2
Pull Up Method 7 5 2 2 1 17
Rename Method 4 1 0 0 0 5
7.2 Refactoring impact on the external quality attributes
Tables 16 and 17 summarize the external impact of each refactoring technique for companies A and B.
Table 16 Refactoring impact on the external attributes by company A
Reusability Flexibility Maintainability Understandability
Extract Method + + - +
Inline Method 0 - - -
InlineTemp Method - - - -
Extract Class + + + +
Inline Class - + + -
Move Method + + + +
Pull Up Method 0 0 + +
Rename Method NA + + +
Table 17 Refactoring impact on the external attributes by company B
Reusability Flexibility Maintainability Understandability
Extract Method + + - +
Inline Method 0 - - +
InlineTemp Method - - - +
Extract Class + + + +
Inline Class - - - 0
Move Method - - - 0
Pull Up Method + + + +
Rename Method 0 0 + +
7.3 AHP-refactoring impact on time
Both companies were asked to report the time spent on refactoring before and after using the AHP.
Table 18 shows that this time was reduced.
Table 18 Refactoring impact on time for company A and B
Time before AHP Time after AHP
Company A 3 Days 2 Days
Company B 12 hours 7 hours
7.4 Observations from the validation
- From the AHP evaluation results, Extract Class and Extract Method were mostly ranked in the top
positions for the external quality attributes. Tables 14 and 15 show that Extract Method and Extract Class
were commonly used, and Tables 16 and 17 show that they had largely positive effects. In Company A, the
most frequently used techniques were Extract Method (172 times), Pull Up Method (87), and Extract Class
and Inline Method (65 each). In Company B, overall usage was much lower, but the leading techniques
were the same: Extract Method and Extract Class (18 each) and Pull Up Method (17).
- Extract Method had a positive impact on all the attributes in both companies’ results: reusability,
flexibility, and understandability increased, and the maintainability effort was reduced. For both companies,
Extract Class increased reusability, flexibility, and understandability, which is a good result; however, the
maintainability effort also increased, which is a negative impact.
- Pull Up Method was used many times by the two companies, even though it did not show significant
results in the AHP evaluation. It also showed a generally positive impact on the external quality attributes.
- After narrowing the use of refactoring techniques and ranking them using AHP, the time for refactoring
for company A was reduced from 3 days to 2 days, while the refactoring time for company B was reduced
from 12 hours to 7 hours.
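Extract Method, the most frequently used technique in both companies, pulls a coherent fragment of a long method into its own named method. The following before/after sketch is purely illustrative (the invoice code is hypothetical, not taken from the case studies), showing why the technique tends to improve reusability and understandability:

```python
# Hypothetical sketch of the Extract Method refactoring.

# Before: one method mixes computing the total with formatting the output.
def print_invoice_before(items):
    total = 0.0
    for name, price in items:
        total += price
    return f"Invoice total: {total:.2f}"

# After: the totaling fragment is extracted into its own named method,
# which can now be reused and tested independently of the formatting.
def calculate_total(items):
    return sum(price for _, price in items)

def print_invoice_after(items):
    return f"Invoice total: {calculate_total(items):.2f}"
```

Behavior is unchanged by the refactoring; both versions produce the same output for the same input, which is the property the validation teams checked after each application.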
VIII. SEMI-STRUCTURED INTERVIEW RESULTS
The semi-structured interviews were conducted after showing the participants the results of the AHP
evaluation for the refactoring practices. Some of the results were surprising and others were expected. The
interviews included open questions to obtain the students’ general opinions about the AHP, the advantages and
disadvantages of using the AHP, and their best experiences with the AHP among all the XP practices. The data
was collected in the form of handwritten notes during the interviews. These notes were organized in a folder for
easy access and analysis. From the interviews, we received very positive feedback from the participants
regarding the AHP. The AHP resolved conflicting opinions and brought each team member’s voice into the
decision in a practical way. It also supported the XP value of courage by letting every opinion be heard. The
time and the number of comparisons were the participants’ main concerns. All of them recommended using the
AHP in the future when deciding which refactoring should be applied. There were a few additional
recommendations as well, such as developing an automated tool to reduce the time for the AHP calculations,
adding mobility features, performing cost and risk analysis, and trying the AHP in other XP areas and studying
the outcomes.
IX. QUESTIONNAIRES
Questionnaires were given to the participants in order to obtain their perceptions of and experiences with
the AHP. The questionnaires were divided into two main parts. The first part contained questions about the AHP
as a decision and ranking tool. The second part contained questions regarding the direct benefits to the XP
practice and investigated the participants’ satisfaction. We used a seven-point Likert scale to reflect the level of
acceptability of the AHP tool: (1) Totally unacceptable, (2) Unacceptable, (3) Slightly unacceptable, (4) Neutral,
(5) Slightly acceptable, (6) Acceptable, (7) Perfectly acceptable. Once the participants completed the
questionnaire, we calculated the results and present the total percentage of acceptability for each statement in
the evaluation below. The total percentage of acceptability (TPA) was calculated as follows:
- TPA = (average team score × 100) / 7
- Average team score = sum of the scores given by the team members / number of team members
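The TPA calculation can be sketched directly in code (the scores below are illustrative, not taken from the questionnaires):

```python
# Total percentage of acceptability (TPA) for one statement on the
# seven-point Likert scale.

def tpa(scores):
    """scores: the Likert scores (1-7) given by one team's members."""
    average = sum(scores) / len(scores)  # average score for the team
    return average * 100 / 7             # express as a percentage of 7

# Example: a hypothetical five-member team scoring one statement.
print(round(tpa([6, 6, 5, 7, 6]), 1))  # average 6.0 -> 85.7
```

A team that unanimously answers "Perfectly acceptable" (all 7s) scores 100%, which anchors the percentages reported in Table 19.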
9.1 Acceptability level of AHP as a decision and ranking tool
The AHP received positive ratings from the two teams and the three companies on most of the questions.
However, the lowest percentages across all studies concerned time efficiency (59%, 62%, 61%, 57%, and 57%).
See Table 19.
Table 19 Acceptability level of AHP as a decision and ranking tool
Team 1 Team 2 Company A Company B Company C
1-AHP as a Ranking tool in Refactoring
A- Decision Quality:
Capturing the needed information 76% 88% 86% 83% 88%
Clarity of the decision process 88% 86% 94% 94% 90%
Clarity of the criteria involved 81% 76% 88% 83% 88%
Clarity of the alternatives involved 81% 79% 88% 90% 95%
Goodness of the decision structure 86% 90% 98% 88% 71%
B- Practicality:
Understandability 83% 88% 98% 90% 85.66%
Simplicity 71% 86% 76% 74% 74%
Time Efficiency 59% 62% 61% 57% 57%
Reliability 74% 76% 81% 90% 90%
9.2 Acceptability level for the impact of AHP on the development
The following percentages show the acceptability level for the impact of the AHP on the development:
First, improving team communication: Team 1 (74%), Team 2 (93%), company A (93%), company B
(93%), and company C (94%).
Second, creating informative discussion and learning opportunities: Team 1 (71%), Team 2 (88%),
company A (86%), company B (95%), and company C (93%).
Third, clarifying the ranking problem: Team 1 (71%), Team 2 (90%), company A (86%), company B
(93%), and company C (83%).
Fourth, resolving conflicting opinions among members: Team 1 (64%), Team 2 (86%), company A
(86%), company B (93%), and company C (83%).
Fifth, elevating the team performance: Team 1 (76%), Team 2 (88%), company A (79%), company B
(86%), and company C (not studied).
X. VALIDITY
Construct validity, internal validity, external validity, and reliability describe common threats to the validity
of any performed study [33]. ”Empirical studies in general and case studies in particular are prone to biases and
validity threats that make it difficult to control the quality of the study to generalize its results” [34]. In this
section, the relevant threats to the validity of this work are described.
10.1 Construct validity
Construct validity deals with the correct operational measures for the concept being studied and researched. The
major construct validity threat to this study is the small number of participants in each case study. This threat
was mitigated by using several techniques in order to ensure the validity of the findings, as outlined below.
Data triangulation: A major strength of case studies is the possibility of using many different sources of
evidence [35]. This issue was taken into account through the use of surveys and interviews with different
types of participants from different environments with various levels of skills and experience, and through
the use of several observations as well as feedback from those involved in the study. By establishing a chain
of evidence, we were able to reach a valid conclusion.
Methodological triangulation: The research methods employed were a combination of a project conducted to
serve this purpose, interviews, surveys, AHP result comparisons, and researchers’ notes and observations.
Member checking: Presenting the results to the people involved in the study is always recommended,
especially for qualitative research. This was done by showing the final results to all participants to ensure the
accuracy of what was stated and to guard against researcher bias.
10.2 Internal validity
Internal validity is only a concern for an explanatory case study [35]. Internal validity focuses on establishing a
causal relationship between students and educational constraints. This issue can be addressed by relating the
research questions to the propositions and other data sources providing information regarding the questions.
10.3 External validity
External validity is related to the domain of the study and the possibilities of generalizing the results. To provide
external validity to this study, we will need to conduct an additional case study in the industry involving experts
and developers, and then observe the similarities and the differences in the findings of both studies. Thus, future
work will contribute to accruing external validity.
10.4 Reliability
Reliability deals with the data collection procedure and results: other researchers should arrive at the same case
study findings and conclusions if they follow the same procedure. We addressed this by making the research
questions, case study setup, and data collection and analysis procedure plan available for use by other researchers.
XI. CONCLUSIONS
After using the AHP to rank the refactoring techniques, we found it to be an important tool that gives
developers a clear view of where to apply the refactoring practice to improve the code. Considering reusability,
flexibility, maintainability, and understandability when ranking the refactoring techniques can bring many
advantages to the development team, such as enhancing the code in a short time and transferring knowledge to
the developers. Extract Class and Extract Method were the refactoring techniques that most improved the code
in our studies; however, the other refactoring techniques added value to the external code quality attributes as
well. More importantly, the AHP had a positive impact on the development process and on communication
among the team members. For example, the team could mathematically reconcile conflicting opinions regarding
the use of refactoring techniques. The AHP provides a cooperative decision-making environment, which could
maximize the effectiveness of the developed software.
References:
[1] Martin Fowler, Kent Beck, John Brant, William Opdyke, Don Roberts, “Refactoring: Improving the Design of Existing Code”,
Addison-Wesley Professional; 1 edition (July 8, 1999).
[2] Sultan Alshehri, Luigi Benedicenti, “Ranking the Refactoring Techniques Based on the Internal Quality Attributes”.
[3] Tom Mens, Tom Tourwe, “A Survey of Software Refactoring”, Journal IEEE Transactions on Software Engineering archive,
vol.30, no.2, February 2004, pp.126-139.
[4] J. A. Dallal and L. Briand, “A Precise Method-Method Interaction-Based Cohesion Metric for Object-Oriented Classes”, Journal
ACM Transactions on Software Engineering and Methodology (TOSEM), vol. 21, no. 2. March 2012.
[5] Tom Tourwé and Tom Mens, “Identifying Refactoring Opportunities Using Logic Meta Programming”, European Conference on
Software Maintenance and Reengineering, 2003.
[6] Liming Zhao, Jane Huffman Hayes, “Predicting Classes in Need of Refactoring: An Application of Static Metrics”, Department of
Computer Science, University of Kentucky, 2006.
[7] Emerson Murphy-Hill, Andrew P. Black, Danny Dig, Chris Parnin, “Gathering Refactoring Data: a Comparison of Four Methods”,
in Proceedings of the 2nd Workshop on Refactoring Tools, no.7, 2008.
[8] S. Counsell Brunel Y. Hassoun G. Loizou R. Najjar, “Common Refactorings, a Dependency Graph and some Code Smells: An
Empirical study of Java OSS”, in Proceedings of the ACM/IEEE international symposium on Empirical software engineering, 2006,
pp.288 – 296.
[9] Raul Maticorna, Javier Perez, “Assisting Refactoring Tools Development Through Refactoring Characterization”, in Proceedings of
the 6th International Conference on Software and Data Technologies, vol. 2, Seville, Spain, 18-21 July, 2011.
[10] Don Roberts and John Brant, Ralph Johnson, “A Refactoring Tool for Smalltalk”, Theory and Practice of Object Systems - Special
issue object-oriented software evolution and re-engineering archive, vol. 3, no. 4, 1997, pp. 253 – 263.
[11] Jocelyn Simmonds, Tom Mens, “A Comparison of Software Refactoring Tools”, Programming Technology Lab, November 2002.
[12] Deepak Advani, Youssef Hassoun & Steve Counsell, “Understanding the Complexity of Refactoring in Software Systems: a Tool-
Based Approach”, International Journal of General Systems, vol.35, no.3, 2006,329-346.
[13] Bryton, S.; Brito e Abreu, F.; Monteiro, M., “Reducing Subjectivity in Code Smells Detection: Experimenting with the Long
Method”, Quality of Information and Communications Technology (QUATIC), 2010 Seventh International Conference, Sept. 29
2010-Oct. 2 2010, pp.337- 342.
[14] Mincho Sandalski, Asya Stoyanova-Doycheva, Ivan Popchev, Stanimir Stoyanov, “Development of a Refactoring Learning
Environment”, Cybernetics and Information Technology, vol.11, no 2, 2011.
[15] Shinpei Hayashi, Motoshi Saeki, Masahito Kurihara, “Supporting Refactoring Activities Using Histories of Program Modification”,
IEICE - Transactions on Information and Systems archive, vol. E89-D no.4, April 2006, pp.1403-1412.
[16] Saaty, T.L., The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
[17] Saaty, T.L., “How to Make a Decision: The Analytic Hierarchy Process”, Interfaces, vol. 24, no. 6, 1994, pp. 19-43.
[18] Robert K. Yin, “Case Study Research: Design and Methods: Applied Social Research Methods”, SAGE Publications, Inc; 2nd
edition (March 18, 1994).
[19] Leitch, R.; Stroulia, E., “Assessing the Maintainability Benefits of Design Restructuring Using Dependency Analysis”, Software
Metrics Symposium, in Proceedings of the Ninth International, 3-5 Sept.2003, pp.309-322.
[20] M. Alshayeb, “Empirical Investigation of Refactoring Effect on Software Quality”, Information and Software Technology, vol. 51,
no.9, September 2009, pp.1319–1326.
[21] Raed Shatnawi, Wei Li, “ An Empirical Assessment of Refactoring Impact on Software Quality Using Hierarchical Quality Model”,
International Journal of Software Engineering and Its Applications, vol. 5, no.4, October 2011.
[22] Y. Kataoka, T. Imai, H. Andou, and T. Fukaya, “A Quantitative Evaluation of Maintainability Enhancement by Refactoring”, in
Proceeding of the International Conference Software Maintenance, October, 2002, pp. 576-585.
[23] Karim O. Elish and Mohammad Alshayeb, “Using Software Quality Attributes to Classify Refactoring to Patterns”, Journal of
Software, vol. 7, no. 2, February 2012.
[24] Alshayeb, M., H. Al-Jamimi and M. Elish, “Empirical Taxonomy of Refactoring Methods for Aspect-Oriented Programming”,
Journal of Software Maintenance and Evolution: Research and Practice, incorporating Software Process: Improvement and Practice,
accepted March 2011.
[25] Weber, B. and Reichert, M, “Keeping the Cost of Process Change Low through Refactoring”, Technical Report TR-CTIT-07-86,
Centre for Telematics and Information Technology University of Twente, Enschede, 2007.
[26] Raimund Moser, Alberto Sillitti, Pekka Abrahamsson, Giancarlo Succi “Does Refactoring Improve Reusability?”, in Proceedings of
the 9th international conference on Reuse of Off-the-Shelf Components, 2006, pp. 287-297.
[27] E Stroulia and Leitch, “Understanding the Economics of Refactoring”, in Proceedings of the Fifth ICSE Workshop on Economics-
Driven Software Engineering Research, 2003.
[28] Raimund Moser, Witold Pedrycz, Alberto Sillitti,Giancarlo Succi, “A Model to Identify Refactoring Effort during Maintenance by
Mining Source Code Repositories”, in Proceedings of the 9th international conference on Product-Focused Software Process
Improvement, 2008, pp. 360-370.
[29] W. Salamon and D. Wallace, “Quality Characteristics and Metrics for Reusable Software (preliminary Report)”, US DoC for US
DaD Ballistic Missile Defense Organization, NISTIR 5459, May 1994.
[30] Mohammad Alshayeb, “The Impact of Refactoring to Patterns on Software Quality Attributes”, The Arabian Journal for Science
and Engineering, June 2010.
[31] J.D. Meier, Alex Homer, David Hill, Jason Taylor, Prashant Bansode, Lonnie Wall, Rob Boucher, “Patterns & Practices
Application Architecture Guide 2.0”, 2008.
[32] IEEE Std. 610.12, IEEE Standard Glossary of Software Engineering Terminology, The Institute of Electrical and Electronics
Engineers, 1991.
[33] Robert K. Yin, “Qualitative Research from Start to Finish”, The Guilford Press; 1 edition (October 7, 2010).
[34] Rudiger, Lincke, “How do PhD Students Plan and Follow-up their Work? – A Case Study”, School of Mathematics and Systems
Engineering, University of Sweden.
[35] Robert K. Yin, “Case Study Research: Design and Methods: Applied Social Research Methods”, SAGE Publications, Inc; 2nd
edition (March 18, 1994).
Authors
Sultan Alshehri was born in Saudi Arabia in 1981. He is a PhD student in the Software Systems Engineering
Department at the University of Regina, Regina, Saskatchewan, Canada.
He worked as a computer science teacher in Riyadh, Saudi Arabia, in 2004-2005. Currently, he works as a
programmer analyst at SmarTech Company and, at the same time, holds the position of CEO of the LogicDots
Company in Regina. Mr. Alshehri has held a scholarship from the Ministry of Higher Education in Saudi
Arabia from 2005 until 2013.
Abdulmajeed Aljuhani is a PhD student in the Software Systems Engineering Department at the University of
Regina, Regina, Saskatchewan, Canada.
Currently, he works as a lecturer at Tibah University, Medina, Saudi Arabia. Mr. Aljuhani has held a
scholarship from the Ministry of Higher Education in Saudi Arabia from 2009 until 2017.