Although the field of software engineering has made remarkable progress in the last decade, comparatively little attention has been paid to code reusability. Code reusability is a subset of software reusability, which is one of the signature topics in software engineering. Our review of existing systems finds that no standard research approach toward code reusability has been introduced in the last decade. Hence, this paper introduces a predictive framework for optimizing the performance of code reusability. For this purpose, we introduce a case study of a near-real-time challenge and incorporate it into our modelling. We apply a neural network and the damped least-squares algorithm to perform optimization, with the sole target of computing and ensuring the highest possible reliability. The study outcome of our model exhibits higher reliability and better computational response time.
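As an illustration of the optimization step this abstract names, here is a minimal sketch, not the paper's actual model: a one-hidden-layer network fitted to synthetic module features with damped least squares (Levenberg-Marquardt) via SciPy. The features, target, and network size are all assumed for illustration.

```python
# Minimal sketch: fit a tiny one-hidden-layer network with damped least
# squares (Levenberg-Marquardt) via scipy.optimize.least_squares(method="lm").
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 3))                 # hypothetical module features
y = 0.6 * X[:, 0] + 0.3 * np.tanh(X[:, 1]) + 0.1 * X[:, 2]  # synthetic "reliability"

H = 4                                         # hidden units (assumed)
def unpack(p):
    W1 = p[:3 * H].reshape(H, 3)
    b1 = p[3 * H:4 * H]
    W2 = p[4 * H:5 * H]
    b2 = p[5 * H]
    return W1, b1, W2, b2

def residuals(p):
    W1, b1, W2, b2 = unpack(p)
    hidden = np.tanh(X @ W1.T + b1)           # one hidden layer
    pred = hidden @ W2 + b2
    return pred - y                           # LM minimizes the squared residuals

p0 = rng.normal(scale=0.1, size=5 * H + 1)
fit = least_squares(residuals, p0, method="lm")   # damped least squares
print("final RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```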
Application of Genetic Algorithm in Software Engineering: A Review (IRJESJOURNAL)
Abstract. Software engineering is a comparatively new and constantly changing field. The challenge of meeting strict project schedules while delivering high-quality software requires that software engineering be automated to a large extent and that human intervention be reduced to an optimum level. To achieve this goal, researchers have explored the potential of machine learning approaches, as they are adaptable and have the ability to learn. In this paper, we review how genetic algorithms (GA) can be used to build tools for software development and maintenance tasks.
COMPARATIVE STUDY OF SOFTWARE ESTIMATION TECHNIQUES (ijseajournal)
Many information technology firms, among other organizations, have been working on how to estimate resources such as funding and staffing during software development. A software development life cycle involves many activities and skills, and the most suitable estimation technique should be employed to avoid risk. Therefore, this research conducted a comparative study that considers the accuracy, usage, and suitability of existing methods, intended to serve project managers and consultants throughout the software project development process. Both algorithmic and non-algorithmic techniques are covered: model-based, composite, and regression techniques underlie COCOMO, COCOMO II, SLIM, and linear multiple regression respectively, while expertise-based and rule-based methods represent the non-algorithmic side. However, these techniques need further refinement to reduce the errors experienced during software development. Accordingly, this paper proposes a model for software estimation that can help information technology firms, researchers, and other organizations that rely on information technology in processes such as budgeting and decision making.
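For the algorithmic side mentioned above, a worked example of the basic COCOMO effort equation, effort = a * KLOC^b in person-months, using Boehm's published coefficients for the basic model; a real estimate would apply COCOMO II's cost drivers on top.

```python
# Basic COCOMO: effort = a * KLOC**b (person-months), Boehm's coefficients.
COEFFICIENTS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc: float, mode: str = "organic") -> float:
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

# e.g. a 32 KLOC organic project: 2.4 * 32**1.05 ~ 91 person-months
print(round(basic_cocomo_effort(32.0), 1))
```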
In the present paper, the applicability and capability of AI techniques for effort estimation prediction have been investigated. Neuro-fuzzy models are found to be very robust, characterized by fast computation, and capable of handling distorted data; given the non-linearity of such data, they are an efficient quantitative tool for predicting effort. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment. The initial parameters of the OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership functions are optimally determined using a hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed using the OHLANFIS technique performs better than the standard ANFIS model.
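Two of the ingredients this abstract names can be made concrete. The sketch below shows a Gaussian membership function and the potential computation subtractive clustering (Chiu's method) uses to pick its first cluster centre; it is not the OHLANFIS code, and the radius r_a and the data are assumed.

```python
# Gaussian membership function, plus the density "potential" used by
# subtractive clustering to choose the first cluster centre.
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership: exp(-(x - c)^2 / (2 sigma^2))."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def first_subtractive_center(X, ra=0.5):
    alpha = 4.0 / ra ** 2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    potential = np.exp(-alpha * d2).sum(axis=1)          # density potential per point
    return X[np.argmax(potential)]                       # densest point = first centre

X = np.random.default_rng(1).uniform(size=(40, 2))
print("first centre:", first_subtractive_center(X))
print("membership at centre:", gaussian_mf(0.3, c=0.3, sigma=0.1))
```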
Does a Hybrid Approach of Agile and Plan-Driven Methods Work Better for IT Sy... (IJERA Editor)
With a focus on large-scale IT system development projects in diverse enterprises, this research suggests that a hybrid approach combining agile and plan-driven management methods should fit a wider context of specific project characteristics. Although the extant research illuminates the advantages of the hybrid approach, very few empirical studies have actually suggested that it can improve the likelihood of project success. The results show that the hybrid approach is more scalable than the agile method and can provide better cost-benefit ratios than the traditional plan-driven method. These quantitative and qualitative findings offer a practical recommendation for project managers and project management offices to utilize the hybrid approach appropriately.
The performance of an algorithm can be improved using a parallel programming approach. In this study, the performance of the bubble sort algorithm was measured on various computer specifications. Experimental results show that parallel programming can reduce execution time by 61%-65% compared to serial programming.
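The abstract does not spell out its parallelization scheme; one common pattern is sketched below, bubble-sorting independent chunks in worker processes and merging the sorted runs. Actual speedups depend on data size and core count.

```python
# Hedged sketch: parallelize bubble sort by sorting chunks in worker
# processes, then k-way merging the sorted runs.
import heapq
import multiprocessing as mp
import random
import time

def bubble_sort(a):
    a = list(a)
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def parallel_sort(data, workers=4):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with mp.Pool(workers) as pool:
        sorted_chunks = pool.map(bubble_sort, chunks)   # chunks sorted concurrently
    return list(heapq.merge(*sorted_chunks))            # merge the sorted runs

if __name__ == "__main__":
    data = [random.random() for _ in range(4000)]
    t0 = time.perf_counter(); bubble_sort(data); t1 = time.perf_counter()
    parallel_sort(data);       t2 = time.perf_counter()
    print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")
```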
TOWARDS PREDICTING SOFTWARE DEFECTS WITH CLUSTERING TECHNIQUES (ijaia)
The purpose of software defect prediction is to improve the quality of a software project by building a predictive model that decides whether a software module is or is not fault prone. In recent years, much research into using machine learning techniques for this task has been performed. Our aim was to evaluate the performance of clustering techniques combined with feature selection schemes to address the software defect prediction problem. We analysed the National Aeronautics and Space Administration (NASA) dataset benchmarks using three clustering algorithms: (1) Farthest First, (2) X-Means, and (3) self-organizing map (SOM). In order to evaluate different feature selection algorithms, this article presents a comparative analysis of software defect prediction based on Bat, Cuckoo, Grey Wolf Optimizer (GWO), and Particle Swarm Optimizer (PSO). The results obtained with the proposed clustering models enabled us to build an efficient predictive model with a satisfactory detection rate and an acceptable number of features.
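A minimal sketch of the pipeline's shape, with clearly labeled stand-ins: scikit-learn ships neither X-Means nor Farthest First, so KMeans substitutes, and SelectKBest stands in for the Bat/Cuckoo/GWO/PSO selectors; a cluster is marked fault prone when the majority of its training modules are defective.

```python
# Clustering-based defect prediction sketch with stand-in components.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=300, n_features=20, random_state=0)  # NASA-style stand-in
X_sel = SelectKBest(f_classif, k=8).fit_transform(X, y)   # feature selection stage

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_sel)
labels = km.labels_
# Majority vote inside each cluster decides "fault prone" vs "not fault prone".
cluster_is_defective = {c: y[labels == c].mean() > 0.5 for c in set(labels)}
pred = np.array([cluster_is_defective[c] for c in labels])
print("detection rate:", (pred[y == 1] == 1).mean())
```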
Insights on Research Techniques towards Cost Estimation in Software Design (IJECEIAES)
Software cost estimation is one of the most challenging tasks in project management, needed to ensure smooth development operation and target achievement. Various standard tools and techniques for cost estimation have evolved and are practiced in industry at present; however, the overall effectiveness of such techniques has never been investigated to date. This paper begins by presenting taxonomies of conventional cost-estimation techniques and then investigates research trends toward the most frequently addressed problems. The paper also reviews the existing techniques in a well-structured manner in order to highlight the problems addressed, the techniques used, the advantages associated, and the limitations explored in the literature. Finally, we also summarize the open research issues as an added contribution of this manuscript.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM... (cscpconf)
The ability to compute software complexity in the requirement analysis phase of the software development life cycle (SDLC) would be of enormous benefit for estimating the development and testing effort of yet-to-be-developed software. A relationship between source code and the difficulty of developing it is also examined in order to estimate the complexity of the proposed software for cost estimation, manpower build-up, and code and developer evaluation. This paper therefore presents a systematic and integrated approach for estimating software development and testing effort on the basis of the improved requirement based complexity (IRBC) of the proposed software, which is obtained from its software requirement specification (SRS). The IRBC measure serves as the basis for estimating these development activities, enabling developers and practitioners to predict critical information about the intricacies of software development. For validation purposes, the proposed measures are systematically compared with various established and prevalent practices proposed in the past. The results validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, and early-alarming, and compare well with other measures proposed in the past.
ENSEMBLE REGRESSION MODELS FOR SOFTWARE DEVELOPMENT EFFORT ESTIMATION: A COMP... (ijseajournal)
As the demand for computer software continually increases, software scope and complexity are higher than ever, and the software industry is in real need of accurate estimates for projects under development. Software development effort estimation is one of the main processes in software project management, and both overestimation and underestimation may cause the software industry losses. This study determines which technique has better effort prediction accuracy and proposes combined techniques that could provide better estimates. Eight different ensemble models for estimating effort were compared with each other based on predictive accuracy, using the Mean Absolute Residual (MAR) criterion and statistical tests. The results indicate that the proposed ensemble models deliver high efficiency in contrast to their counterparts and produce the best responses for software project effort estimation. The proposed ensemble models should therefore help project managers develop quality software.
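A hedged sketch of the comparison protocol: several ensemble regressors scored by Mean Absolute Residual (computed here as mean absolute error) under cross-validation. The paper's eight ensembles and dataset are not reproduced; the three scikit-learn ensembles and the synthetic data below are stand-ins.

```python
# Compare ensemble regressors on the MAR (mean absolute residual) criterion.
from sklearn.datasets import make_regression
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=6, noise=10.0, random_state=0)
models = {
    "RandomForest": RandomForestRegressor(random_state=0),
    "GradientBoosting": GradientBoostingRegressor(random_state=0),
    "Bagging": BaggingRegressor(random_state=0),
}
for name, model in models.items():
    mar = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAR = {mar:.2f}")   # lower is better
```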
SCHEDULING AND INSPECTION PLANNING IN SOFTWARE DEVELOPMENT PROJECTS USING MUL... (ijseajournal)
This paper presents a Multi-objective Hyper-heuristic Evolutionary Algorithm (MHypEA) for the solution of scheduling and inspection planning in software development projects. Scheduling and inspection planning is a vital problem in software engineering whose main objective is to assign persons to activities in the software development process, such as coding, inspection, testing, and rework, in such a way that the quality of the software product is maximized while the project makespan and cost are minimized. The problem becomes challenging when the size of the project is huge. The MHypEA is an effective metaheuristic search technique for suggesting scheduling and inspection plans. It incorporates twelve low-level heuristics based on different methods of selection, crossover, and mutation operations of evolutionary algorithms. The mechanism for selecting a low-level heuristic is based on reinforcement learning with adaptive weights. The efficacy of the algorithm has been studied on randomly generated test problems.
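The selection mechanism described above can be sketched in isolation: low-level heuristics chosen by roulette wheel over adaptive weights, with a weight reinforced whenever its heuristic improves the incumbent solution. The heuristic names and the reward and decay constants below are hypothetical, not MHypEA's.

```python
# Adaptive-weight selection of low-level heuristics, in the spirit of
# reinforcement learning: reward heuristics that improve the solution.
import random

heuristics = ["tournament+1pt-crossover", "roulette+uniform-crossover",
              "random+swap-mutation"]       # stand-ins for the 12 low-level heuristics
weights = {h: 1.0 for h in heuristics}

def select_heuristic():
    total = sum(weights.values())
    r, acc = random.uniform(0, total), 0.0
    for h, w in weights.items():            # roulette wheel over current weights
        acc += w
        if r <= acc:
            return h
    return h

def update(h, improved, reward=0.5, decay=0.9):
    weights[h] = decay * weights[h] + (reward if improved else 0.0)

# Inside the evolutionary loop one would call:
#   h = select_heuristic(); improved = apply(h, population); update(h, improved)
for step in range(100):                     # toy loop: pretend heuristic 0 works best
    h = select_heuristic()
    update(h, improved=(h == heuristics[0] and random.random() < 0.7))
print(max(weights, key=weights.get), "accumulated the highest weight")
```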
A Model To Compare The Degree Of Refactoring Opportunities Of Three Projects ... (acijjournal)
Refactoring is applied to software artifacts so as to improve their internal structure while preserving their external behavior. Refactoring is an uncertain process, and it is difficult to assign units of measurement to it; the amount of refactoring that can be applied to source code depends on the skills of the developer. In this research, we treat refactoring as a quantified object on an ordinal scale of measurement and propose a model for determining the degree of refactoring opportunity in given source code. The model is applied to three projects collected from a company. UML diagrams are drawn for each project, and the values of source-code metrics that are useful in determining code quality are calculated for each UML diagram. Based on the nominal values of the metrics, each relevant UML diagram is represented on an ordinal scale. A machine learning tool, Weka, is used to analyze the dataset produced by the three projects, imported in the form of an ARFF file.
The effectiveness of test-driven development approach on software projects: A... (journalBEEI)
Over recent years, software teams and companies have attempted to achieve higher productivity and efficiency, and greater success in a competitive market, by employing proper software methods and practices. Test-driven development (TDD) is one of these practices, and the literature shows that it can lead to improvements in the software development process, although existing empirical studies report differing conclusions about its effects on quality and productivity. The present study briefly reports the results of a comparative multiple-case study of two software development projects in which the effect of TDD was examined within an industrial environment. Method: we conducted an experiment in an industrial setting with 18 professionals, measuring TDD effectiveness in terms of team productivity and code quality; we also measured a mood metric and cyclomatic complexity to compare our results with the literature. We found that the test cases written for a TDD task have higher defect detection ability than test cases written for an incremental non-TDD development task; additionally, discovering and fixing bugs became easier. The results show that TDD developers produce code of higher quality, which in turn increases team productivity compared to non-TDD developers.
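For readers unfamiliar with the practice under study, a minimal illustration of the TDD rhythm: the tests are written first and fail (red), then just enough code is added to make them pass (green). The function and scenario are invented for illustration; run with pytest.

```python
# In TDD order, the two tests below are written *first* and fail until the
# implementation above them is added. Kept in one file so it runs as-is.
def slugify(text: str) -> str:
    # Minimal implementation that turns the tests green.
    return "-".join(text.lower().split())

def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_already_clean_input_is_unchanged():
    assert slugify("ready") == "ready"
```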
An empirical evaluation of impact of refactoring on internal and external mea... (ijseajournal)
Refactoring is the process of improving the design of existing code by changing its internal structure without affecting its external behaviour, with the main aim of improving the quality of the software product. There is therefore a belief that refactoring improves quality factors such as understandability, flexibility, and reusability; however, there is limited empirical evidence to support such assumptions. The objective of this study is to validate or invalidate the claim that refactoring improves software quality. The impact of selected refactoring techniques was assessed using both external and internal measures. Ten refactoring techniques were evaluated through experiments assessing four external measures, the ISO external quality factors Resource Utilization, Time Behaviour, Changeability, and Analysability, and five internal measures: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The external measures did not show any improvement in code quality after the refactoring treatment. Among the internal measures, the maintainability index indicated better code quality in refactored code than in non-refactored code, while the other internal measures showed no positive effect of refactoring.
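As an analogous measurement on Python code, the sketch below computes two of the internal measures named above, cyclomatic complexity and the maintainability index, with the radon library (assuming pip install radon); the study itself used other tooling, so this is only an analogue.

```python
# Measure cyclomatic complexity and maintainability index before/after a
# refactoring, using radon's analysis of Python source.
from radon.complexity import cc_visit
from radon.metrics import mi_visit

before = """
def pay(kind, amount):
    if kind == "card":
        if amount > 100:
            return amount * 0.98
        return amount
    elif kind == "cash":
        return amount
    return None
"""

for block in cc_visit(before):
    print(block.name, "cyclomatic complexity:", block.complexity)
print("maintainability index:", round(mi_visit(before, multi=True), 1))
# Re-running on the refactored version shows whether MI rose or CC fell.
```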
Productivity Factors in Software Development for PC Platform (IJERA Editor)
Identifying the most relevant factors influencing project performance is essential for implementing business strategies, by selecting and adjusting the proper improvement activities. The two major classification algorithms recommended by the Auto Classifier tool in SPSS Modeler, CRT and ANN, were used to determine the most important variables (attributes) of software development in the PC environment. While their accuracies in classifying productive versus non-productive cases are relatively close (72% vs 69%), their rankings of important variables differ: CRT ranks Programming Language as the most important variable and Function Points as the least important, whereas ANN ranks Function Points as the most important, followed by team size and Programming Language.
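SPSS Modeler's CRT and ANN are proprietary; a rough open-source analogue is a decision tree versus a multilayer perceptron in scikit-learn, with the tree's feature importances standing in for the variable ranking the abstract reports. The data and feature names below are synthetic stand-ins.

```python
# Decision tree vs MLP as open-source analogues of CRT and ANN.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=5, n_informative=3,
                           random_state=0)
features = ["language", "function_points", "team_size", "tools", "platform"]

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(sorted(zip(tree.feature_importances_, features), reverse=True))

ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0))
print("ANN accuracy:", cross_val_score(ann, X, y, cv=5).mean().round(2))
```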
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and some quality attributes. Although many studies investigate the relationship between design complexity, cost, and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 different data sets drawn from 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes, and that the Chidamber & Kemerer metric suite is the most frequently used, although not all of its metrics are good indicators of quality attributes. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The results provide some implications for building quality models across project types.
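The aggregation step described above is conventionally done by Fisher z-transforming each correlation, averaging with weights n_i - 3, and back-transforming; a minimal sketch with invented (rho, n) pairs in place of the 59 real data sets.

```python
# Pool correlation coefficients via the Fisher z-transform.
import math

studies = [(0.42, 120), (0.55, 80), (0.31, 200)]   # (Spearman rho, sample size)

num = sum((n - 3) * math.atanh(r) for r, n in studies)  # weighted Fisher z
den = sum(n - 3 for _, n in studies)
pooled = math.tanh(num / den)                            # back-transform
print(f"pooled correlation: {pooled:.3f}")
```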
EARLY STAGE SOFTWARE DEVELOPMENT EFFORT ESTIMATIONS – MAMDANI FIS VS NEURAL N... (cscpconf)
Accurately estimating software size, cost, effort, and schedule is probably the biggest challenge facing software developers today. It has major implications for the management of software development, because both overestimates and underestimates directly damage software companies. Many models have been proposed over the years by various researchers for carrying out effort estimation, and several studies of early stage effort estimation underline the importance of early estimates. New paradigms offer alternatives for estimating software development effort, in particular Computational Intelligence (CI), which exploits mechanisms of interaction between humans and processes domain knowledge with the intention of building intelligent systems (IS). Among IS techniques, artificial neural networks and fuzzy logic are the two most popular soft computing approaches for software development effort estimation. In this paper, neural network models and a Mamdani FIS model have been used to predict early stage effort using a student dataset. It was found that the Mamdani FIS predicted the early stage effort more efficiently than the neural network based models.
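To make the Mamdani mechanism concrete, here is a hand-rolled one-input FIS: fuzzify the input, fire rules with min-implication, aggregate with max, and defuzzify by centroid. The membership functions and the two size-to-effort rules are invented for illustration, not taken from the paper.

```python
# One-input Mamdani FIS: triangular memberships, min-implication,
# max-aggregation, centroid defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership over points a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

effort_axis = np.linspace(0, 100, 501)
out_low = tri(effort_axis, 0, 20, 50)       # consequent "low effort"
out_high = tri(effort_axis, 40, 80, 100)    # consequent "high effort"

def predict_effort(size_kloc):
    # Rules: small size -> low effort, large size -> high effort
    mu_small = float(tri(np.array(size_kloc), 0, 10, 30))
    mu_large = float(tri(np.array(size_kloc), 20, 40, 60))
    agg = np.maximum(np.minimum(mu_small, out_low),    # min-implication,
                     np.minimum(mu_large, out_high))   # max-aggregation
    return float((effort_axis * agg).sum() / agg.sum())  # centroid

print(f"estimated effort for 25 KLOC: {predict_effort(25.0):.1f} person-months")
```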
Distributed Software Development Process, Initiatives and Key Factors: A Syst... (zillesubhan)
The Geographically Distributed Software Development (GSD) process differs from the Collocated Software Development (CSD) process in various technical aspects, and it has been empirically shown that renowned process improvement initiatives applicable to CSD are not very effective for GSD. The objective of this research is to review the existing literature (both academic and industrial) to identify initiatives and key factors that play a key role in the improvement and maturity of a GSD process. To achieve this goal, we planned a Systematic Literature Review (SLR) following a standard protocol. Three highly respected sources were selected to search for the relevant literature, which resulted in a large number of Titles of Interest (TOIs). An inter-author custom protocol was outlined and followed to shortlist the most relevant articles for review, and data were extracted from this final set of selected articles. We performed both qualitative and quantitative analysis of the extracted data. The concluded results identify several initiatives and key factors involved in GSD and answer each research question posed by the SLR.
A DATA EXTRACTION ALGORITHM FROM OPEN SOURCE SOFTWARE PROJECT REPOSITORIES FO... (ijseajournal)
Software project estimation is important for allocating resources and planning a reasonable work schedule. Estimation models are typically built using data from completed projects; while organizations maintain historical data repositories, it is difficult to obtain their collaboration due to privacy and competitive concerns. To overcome the lack of public access to private data repositories, this study proposes an algorithm to extract sufficient data from the GitHub repository for building duration estimation models. More specifically, the study extracts and analyses historical data on WordPress projects to estimate OSS project duration, using commits as an independent variable together with an improved classification of contributors based on the number of active days of each contributor within a release period. The results indicate that duration estimation models built from OSS repository data perform well and partially solve the problem of the lack of data encountered in empirical software engineering research.
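A sketch of the extraction idea, not the paper's exact algorithm: page through a repository's commit history via the GitHub REST API and derive each contributor's number of active days. Unauthenticated requests are rate-limited, and the repository choice below merely echoes the WordPress setting of the study.

```python
# Pull commits from the GitHub REST API and count active days per contributor.
from collections import defaultdict

import requests

def active_days(owner, repo, max_pages=5):
    days = defaultdict(set)
    for page in range(1, max_pages + 1):
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/commits",
            params={"per_page": 100, "page": page}, timeout=30)
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        for c in batch:
            author = (c.get("author") or {}).get("login") or \
                     c["commit"]["author"]["name"]
            day = c["commit"]["author"]["date"][:10]     # YYYY-MM-DD
            days[author].add(day)
    return {a: len(d) for a, d in days.items()}

print(active_days("WordPress", "WordPress"))   # classify core vs casual contributors
```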
Changeability is directly related to software maintainability and plays a major role in providing high-quality, maintainable, and trustworthy software. Changeability is a major factor when we design and develop software and its constituents: developing programs and their constituent components with good changeability continually improves and simplifies test operations and maintenance during and after implementation, and it encourages improvement in software quality at the design stage. This research highlights the importance of changeability broadly and as an important aspect of software quality.
In this paper, a correlation between the major attributes of object-oriented design and changeability is ascertained, and a changeability evaluation model using multiple linear regression is proposed for object-oriented design. The validation of the proposed model is demonstrated by means of experimental tests, and the results show that the model is highly significant.
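The modelling step lends itself to a short sketch: a multiple linear regression from object-oriented design metrics to a changeability score. The metric columns follow common OO suites, and the data and coefficients are synthetic; the paper's actual metric set is not reproduced.

```python
# Multiple linear regression from OO design metrics to changeability.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Columns: coupling (CBO), inheritance depth (DIT), cohesion (LCOM) -- stand-ins
X = rng.uniform(size=(60, 3))
changeability = (5.0 - 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
                 + rng.normal(scale=0.1, size=60))      # synthetic response

model = LinearRegression().fit(X, changeability)
print("coefficients:", model.coef_.round(2),
      "R^2:", round(model.score(X, changeability), 3))
```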
A Methodology To Manage Victim Components Using CBO Measure (ijseajournal)
Current software industry practice demands that highly productive software be developed within time and budget. The traditional approach of developing software from scratch requires a considerable amount of effort; to overcome this drawback, a reuse-driven software development approach is adopted. However, there is a dire need to realize effective software reuse. This paper presents several measures of reusability and a methodology for reconfiguring victim components, in which the CBO measure helps identify the components to be reconfigured. The proposed strategy is simulated using an HR-portal domain-specific component system.
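CBO (coupling between objects) counts, for each class, the other classes it is coupled to in either direction; a minimal sketch over a hand-written dependency map. The HR-portal component system is not reproduced, so the classes below are hypothetical.

```python
# Compute CBO from a class -> referenced-classes map; coupling is counted
# in both directions.
from collections import defaultdict

uses = {                       # class -> classes it references
    "Payroll":  {"Employee", "TaxTable"},
    "Employee": {"Address"},
    "TaxTable": set(),
    "Address":  set(),
}

def cbo(uses):
    coupled = defaultdict(set)
    for cls, targets in uses.items():
        for t in targets:
            coupled[cls].add(t)    # outgoing coupling
            coupled[t].add(cls)    # incoming coupling counts too
    return {cls: len(coupled[cls]) for cls in uses}

print(cbo(uses))   # classes with the highest CBO are reconfiguration candidates
```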
AN ITERATIVE HYBRID AGILE METHODOLOGY FOR DEVELOPING ARCHIVING SYSTEMS (ijseajournal)
With the massive growth of organizational files, an archiving system has become a must, yet a lot of time is consumed in collecting requirements from the organization to build one, and sometimes the resulting system does not meet the organization's needs. This paper proposes a domain-based requirement engineering system that efficiently and effectively develops different archiving systems, based on a newly suggested technique that merges the two most widely used agile methodologies: extreme programming (XP) and SCRUM. The technique was tested on a real case study. The results show that the time and effort consumed in analyzing and designing the archiving systems decreased significantly, and the proposed methodology also reduces the system errors that may arise in the early stages of system development.
An Approach to Calculate Reusability in Source Code Using Metrics (IJERA Editor)
Reusability is one of the best ways to increase development productivity and application maintainability. One must first search for well-tested, reusable software components: application software developed by one programmer can prove useful to others as a component, which shows that code specific to one application's requirements can also be reused in projects with similar requirements. The main aim of this paper is to propose a way to identify reusable modules: a process that takes source code as input and helps decide which particular software artefacts should or should not be reused.
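A sketch of a source-code-as-input reusability check in the spirit of the proposal above: parse a module with Python's ast module and compute simple indicators (function length, parameter count, docstring presence). The thresholds and decision rule are invented; the paper's own metric set is not specified here.

```python
# Parse source code and flag functions as reuse candidates via toy metrics.
import ast

SOURCE = '''
def normalize(values, lo=0.0, hi=1.0):
    """Scale values into [lo, hi]."""
    mn, mx = min(values), max(values)
    return [lo + (v - mn) * (hi - lo) / (mx - mn) for v in values]
'''

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        length = node.end_lineno - node.lineno + 1
        params = len(node.args.args)
        documented = ast.get_docstring(node) is not None
        reusable = length <= 30 and params <= 5 and documented  # toy decision rule
        print(f"{node.name}: lines={length} params={params} "
              f"docstring={documented} -> reuse candidate: {reusable}")
```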
A systematic mapping study of performance analysis and modelling of cloud sys... (IJECEIAES)
Cloud computing is a paradigm that uses utility-driven models to provide dynamic services to clients at all levels. Performance analysis and modelling are essential because of service level agreement guarantees, and studies of performance analysis and modelling across the cloud landscape, on issues like virtual machines and data storage, are increasing productively. The objective of this study is to conduct a systematic mapping study of performance analysis and modelling of cloud systems and applications; a systematic mapping study is useful for visualizing and summarizing the research carried out in an area of interest. The study provides a categorized overview of work on this subject. The results are presented in terms of research types, such as evaluation and solution, and contribution types, such as the tools and methods utilized. They show that optimization was discussed most in relation to tool, method, and process, with 6.42%, 14.29%, and 7.62% respectively. In addition, analysis based on designs in terms of models accounted for 14.29%, and publications relating optimization to evaluation research accounted for 9.77%, validation 7.52%, experience 3.01%, and solution 10.51%. Research gaps were identified that should motivate researchers to pursue further research directions.
An Empirical Study of the Improved SPLD Framework using Expert Opinion Technique (IJEACS)
Due to the growing need for high-performance, low-cost software applications and increasing competitiveness, the industry is under pressure to deliver products with low development cost, reduced delivery time, and improved quality. To address these demands, researchers have proposed several development methodologies and frameworks. One of the latest is the software product line (SPL), which utilizes concepts like reusability and variability to deliver successful products with shorter time-to-market and minimal development and maintenance cost, at high quality. This paper validates our proposed framework, Improved Software Product Line (ISPL), using the Expert Opinion Technique. An extensive survey based on a set of questionnaires covering various aspects and sub-processes of the ISPLD Framework was carried out. Analysis of the empirical data concludes that ISPL shows significant improvements over several aspects of contemporary SPL frameworks.
Integrated Analysis of Traditional Requirements Engineering Process with Agil... (zillesubhan)
In the past few years, the agile software development approach has emerged as one of the most attractive software development approaches. A typical CASE environment consists of a number of CASE tools operating on a common hardware and software platform, and there are several different classes of users of such an environment; some users, such as software developers and managers, wish to use CASE tools to support them in developing application systems and monitoring the progress of a project. The agile approach has quickly caught the attention of a large number of software development firms. However, it pays particular attention to the development side of a software project while neglecting critical aspects of the requirements engineering process: there is no standard requirements engineering process in this approach, and requirements engineering activities vary from situation to situation. As a result, a large number of problems emerge that can lead software development projects to failure. Another major drawback of the agile approach is that it suits small projects with limited team sizes and cannot be adopted as-is for large projects. We claim that the approach can be used for large projects if the traditional requirements engineering process is combined with the agile manifesto; indeed, this combination can also help resolve many of the problems that exist in agile development methodologies. In software development, the most important thing is to understand the customer's requirements clearly, including through modeling (data modeling, functional modeling, behavior modeling). Using UML, we are able to build an efficient system, starting from an abstract model and developing the required system in detail through different UML diagrams, each of which serves a different goal in implementing the whole project.
Abstract
Researchers in the fields of software engineering, business process improvement, and information engineering all want to drastically modernize software life-cycle processes and technologies in order to correct problems and improve software quality. Research goals have included ancillary issues, such as improving user services through conversion to new platforms and facilitating software processes by adopting automated tools. Automated tools for software development, understanding, maintenance, and documentation add to process maturity, leading to better quality and reliability of computing services and greater customer satisfaction. This paper focuses on the critical issues of legacy program improvement, which requires estimating the program from various perspectives. The paper highlights various elements of legacy program complexity that can be taken into account in further program development.
Keywords: Legacy, Program, Software complexity, Code, Integration
Agent-oriented software engineering is a promising new approach to software engineering that uses the notion of an agent as the primary entity of abstraction. The development of methodologies for agent-oriented software engineering is an area currently receiving much attention: several agent-oriented methodologies have been proposed recently, and survey papers are starting to appear. However, the authors feel that much work is still needed in this area, as current methodologies can be improved upon. This paper presents a new methodology, the Styx Agent Methodology, which guides the development of collaborative agent systems from the analysis phase through to system implementation and maintenance. A distinguishing feature of Styx is that it covers a wider range of software development life-cycle activities than other recently proposed agent-oriented methodologies. The key areas covered by the methodology are the specification of communication concepts, inter-agent communication, and each agent's behaviour activation, but it does not address the development of the application-specific parts of a system. It will be supported by a software tool that is currently in development.
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change
that is evident in unusual flooding and draughts, and global warming. Over
the past ten years, women's involvement in society has grown dramatically,
and they succeeded in playing a noticeable role in reducing climate change.
A bibliometric analysis of data from the last ten years has been carried out to
examine the role of women in addressing the climate change. The analysis's
findings discussed the relevant to the sustainable development goals (SDGs),
particularly SDG 7 and SDG 13. The results considered contributions made
by women in the various sectors while taking geographic dispersion into
account. The bibliometric analysis delves into topics including women's
leadership in environmental groups, their involvement in policymaking, their
contributions to sustainable development projects, and the influence of
gender diversity on attempts to mitigate climate change. This study's results
highlight how women have influenced policies and actions related to climate
change, point out areas of research deficiency and recommendations on how
to increase role of the women in addressing the climate change and
achieving sustainability. To achieve more successful results, this initiative
aims to highlight the significance of gender equality and encourage
inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from gridconnected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research is developing an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. Then, the data is sent in real-time to the internet of things (IoT) broker eclipse mosquito using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory network (LSTM) method and displayed in a web application using an application programming interface (API) service. Furthermore, the experimental results produce as many as 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted value on the test data. The best results are obtained using a two-layer LSTM configuration model, each with 60 neurons and a lookback setting 6. This model produces an R 2 value of 0.934, with a root mean square error (RMSE) value of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R 2 value was also evaluated for each attribute used as input, with a result of values between 0.590 and 0.845.
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees which both are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey and are known for having a unique flavor profile. Problem identified that many stingless bees collapsed due to weather, temperature and environment. It is critical to understand the relationship between the production of stingless bee honey and environmental conditions to improve honey production. Thus, this paper presents a review on stingless bee's honey production and prediction modeling. About 54 previous research has been analyzed and compared in identifying the research gaps. A framework on modeling the prediction of stingless bee honey is derived. The result presents the comparison and analysis on the internet of things (IoT) monitoring systems, honey production estimation, convolution neural networks (CNNs), and automatic identification methods on bee species. It is identified based on image detection method the top best three efficiency presents CNN is at 98.67%, densely connected convolutional networks with YOLO v3 is 97.7%, and DenseNet201 convolutional networks 99.81%. This study is significant to assist the researcher in developing a model for predicting stingless honey produced by bee's output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are the long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using a compact contiki/cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJoules at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The possibility of these characteristics is explored by examining how well timedomain signals work in the detection of epileptic signals using intracranial Freiburg Hospital (FH), scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) databases, and Temple University Hospital (TUH) EEG. To test the viability of this approach, two types of experiments were carried out. Firstly, binary class classification (preictal, ictal, postictal each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals had an accuracy of 79.7% and the highest accuracy of 94.02% for classification in the TUH EEG database when comparing interictal stage to preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but the increase in acceleration speeds and their maneuverability leads to a dangerous driving style for some drivers. In these conditions, the development of a method that allows you to track the behavior of the driver is relevant. The article provides an overview of existing methods and models for assessing the functioning of motor vehicles and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To do this, a set of input data was formed, including 20 descriptive features: About the environment, the driver's behavior and the characteristics of the functioning of the car, collected using OBD II. The generated data set is sent to the Kohonen network, where clustering is performed according to driving style and degree of danger. Getting the driving characteristics into a particular cluster allows you to switch to the private indicators of an individual driver and considering individual driving characteristics. The application of the method allows you to identify potentially dangerous driving styles that can prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution of greater areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. The HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, the HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been completed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve semantic object inherent features; This study addresses the research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining object intrinsic qualities; Lastly, a soft-margins kernel is proposed for multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
appropriateness of the design process depends heavily on the precision of the outcome achieved by the framework. Hence, investigation is needed in two directions: the first should focus on developing a novel model of code reusability, while the second should focus on incorporating optimization. The significant advantage of such modeling is that it is easy to adopt and applicable to a wide range of applications that use object-oriented systems. Fortunately, there is considerable scope for enhancing the optimization technique, given the wide range of optimization algorithms available today. However, little optimization has so far been carried out on code reusability systems. Existing approaches to software metrics could also be investigated further and combined with experimental data to explore better possibilities for code reusability, offering a good chance of arriving at a predictive system.
The proposed system therefore introduces an optimization mechanism in which code reusability is emphasized through the adoption of a realistic case study and analytical approaches. We also emphasize algorithm efficiency. Section 2 discusses the research methodology of the proposed system, followed by an elaborated discussion of the algorithm implementation in Section 3. Section 4 summarizes the accomplished results, and the paper is concluded in Section 5.
1.1. Background
This section discusses the existing approaches to code reusability. Our review work has already discussed the different techniques and trade-offs in existing systems [9]; here we add a further update on the research progress in this domain. First of all, there has been very little research progress on the topic of code reusability from the year 2010 to date; hence, we discuss only the related work from this perspective. Hudaib et al. [10] have presented a classification-based approach for software reusability using a self-organizing map. Khoshkbarforoushha et al. [11] have discussed software metrics for incorporating reusable components in programming languages. Tibermacine et al. [12] have addressed the constraints arising during the modeling of software reuse in architectural design. Tahir et al. [12] have presented a reusability-based framework that offers efficient selection of components.
Research on reusability has also appeared in different forms in more recent times. Vegt et al. [13] have presented an architecture that emphasizes the reusability of gaming components; the architecture has been shown to have very low dependence on conventional software patterns and on existing coding practices and protocols. Demraoui et al. [14] have presented a synchronicity-based approach for enhancing reusability performance over warehouses; the authors used case-based reasoning to enhance the software reusability process. Ahmaro et al. [15] have presented a discussion of the existing practical adoption of software reusability, considering the case study of software development organizations in Malaysia; the paper gives a fair idea of the various forms of reusable approaches, e.g., application product lines, design patterns, COTS integration, program generators, component-based development, configurable vertical applications, legacy system wrapping, aspect-oriented software development, etc. Efat et al. [16] have addressed the problems pertaining to component reuse and analyzed the problem using a data repository; the authors comment that, in order to address the reusability problem, the space-time cost of the software components should be addressed. Their technique also introduces two different algorithms, one focusing on the selection of attributes and the other on ranking and prioritizing them; the study outcomes were assessed using the success and failure cases of test outcomes. Mojica et al. [17] have presented an empirical investigation of software reusability, considering mobile applications as a case study; they comment that the design of present-day mobile applications makes use of various forms of classes, inheritance, and library reuse, which significantly assists developers in further upgrading the performance of software reusability. Nazir et al. [18] have presented a discussion on the selection of software components by introducing a unique analytical framework based on a case study; the authors essentially take an existing mechanism and modify it into an analytic network process that depends significantly on feedback (i.e., expert opinion) as well as on the dynamic structure of the network. The significant advantage of this study is its support for various types of reusable factors; it also assists decision making for complex networks, and the study outcome is assessed using graphical weights. Spoelstra et al. [19] have discussed the use of software reuse in organizations that practice agile methodologies; the authors also discuss various reuse factors on existing models and present the validated outcome of the study. Essentially, the study represents a type of software management tool to support the reusability concept. Zhu et al. [20] have discussed software reusability in terms of framework reusability with respect to distributed simulation processes; the solution provided is a framework with reusable components constructed by domain experts, and the authors also introduce multiple service-oriented interfaces for performance enhancement. The combined work of Manoj and Nandakumar [21] describes the code reusability concept for cost optimization in IT systems. The study provides an analytical model that relates the standard metric components to code reusability; a Markov model is utilized for significant information extraction, which provides the highest code reusability value.
1.2. The Problem
The prior section presented a brief discussion of some of the related work connected with code reusability. Code reusability is a subset of software reusability with a high rate of technical adoption by many software engineers, irrespective of the scale of the organization. However, in spite of such massive usage of code reusability, very little attention has been paid to this topic by the research community. Hence, there are various open-ended problems that are yet to be addressed.
The following are the open research problems identified after reviewing the existing literature:
- No solid research on code reusability in software engineering has been conducted to date.
- The majority of existing research lacks model validation, for which reason the study outcomes of existing systems cannot be said to be directly applicable.
- No benchmark models or techniques for code reusability or its optimization are available, which keeps this an open research issue.
- None of the existing techniques is found to adopt any mechanism that offers algorithm cost effectiveness.
Therefore, after reviewing the existing problems, it can be concluded that there is a need to work on the above-mentioned research problems. Hence, the problem statement of the proposed study is: "It is a significantly challenging task to formulate a system model that applies optimization in order to enhance the predictiveness of code reusability." The next section discusses the proposed system to address these issues.
1.3. The Proposed Solution
The proposed system discussed in this paper is a continuation of our prior work [22]. The prime purpose of this work is to introduce a framework in which code reusability can be tested and analyzed over varied ranges of near real-time problems associated with software development. The schematic architecture of the proposed system is shown in Figure 1.
Figure 1. Schematic architecture of the proposed system: the inputs Code Handling Capacity (α), Effective Effort (β), Resource for New Design (γ), and Error Probability (φ) are processed by the Damped-Least Square algorithm, with training and validation, to produce the reliability (σrel) and stability (σstab) attributes
There are four essential stages involved in designing the proposed framework: i) in the first stage, we take sample object-oriented source code from two software projects and extract its software metric values with respect to coupling between objects, response for classes, weighted methods per class, depth of inheritance tree, and number of children; ii) in the second stage, we introduce 4 different problems in software development as a case study; iii) in the third stage, we compute code reusability using the concept introduced in [22]; and iv) in the fourth stage, we apply an optimization technique to ensure the reliability of the outcome obtained for code reusability. This part of the study formulates an empirical approach for computing the coding attribute CA,
CA = R / (P - Q)    (1)
where the variable P represents the product of the software metrics, the total effort, and the hours of work; the variable Q represents the product of the software metrics, the effort for incorporating code reusability, and the hours of work; and R represents the difference between the cumulative effort days and the days required to incorporate code reusability. We also formulate a condition whereby software engineers have to deliver the software code with a certain range of code reusability included, to ensure production cost savings in the future. This predicted outcome of code reusability is then assessed using a machine learning approach to ensure a higher degree of reliability as well as stability in the outcome of the proposed system.
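As a toy illustration of (1), the following MATLAB fragment evaluates the coding attribute from assumed figures; every numerical value here (the metric product, the effort figures, and the day counts) is a hypothetical placeholder rather than data from the study:

% Toy illustration of the coding attribute CA in equation (1)
metricProduct = 12.5;              % assumed product of the software metric values
hoursPerDay   = 8;                 % hours of work per day
totalEffort   = 26;                % assumed total effort (days) without reuse
reuseEffort   = 18;                % assumed effort (days) when reusing code

P  = metricProduct * totalEffort * hoursPerDay;   % P: metrics x total effort x hours
Q  = metricProduct * reuseEffort * hoursPerDay;   % Q: metrics x reuse effort x hours
R  = totalEffort - reuseEffort;                   % R: difference in effort days
CA = R / (P - Q);                                 % coding attribute, equation (1)
fprintf('Coding attribute CA = %.6f\n', CA);

The next section discusses the research methodology adopted in the proposed study.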
2. RESEARCH METHODOLOGY
The design of the proposed research work is carried out using an analytical research methodology for enhancing code reusability in software engineering. As optimization is the prime motive of the proposed study, a machine learning approach is applied for this purpose. This job is done using a neural network for ranking optimization in software engineering; the prime reason for adopting a neural network for optimization is its multi-tasking capability. We construct a schematic architecture that takes as input 4 different types of problems associated with real-time software development and finally computes the reliability and stability attributes. The following is a brief description of the inputs to the model for code reusability:
- Code Handling Capacity (α): the maximum number of software codes that can be newly designed and delivered by a team in one month against the client's requirements.
  - Rationale: the attribute α directly quantifies the optimization technique applied to the proposed code reusability concept. An increase in the value of this attribute while keeping the other attributes fixed is a direct indicator of optimization.
- Effective Effort (β): the precise time consumed by the designer when a code is developed with code reusability incorporated within it.
  - Rationale: the attribute β represents the saving of production time when optimization is performed, showing how much time is required i) to build the code from scratch or ii) to apply the code reusability concept.
- Resource for New Design (γ): the number of resources utilized for developing new code with code reusability within it.
  - Rationale: the attribute γ directly represents the cost involved in production. The developer is assumed to design the new code such that it has a certain thresholded percentage of code reusability; hence, this value should be kept as low as possible.
- Error Probability (φ): in order to check the internal validity of the proposed model, we intentionally incorporate a certain range of error as an uncertainty.
  - Rationale: the entire concept of code reusability and its optimization is framed without knowing the form of the client's requirements; we call this factor uncertainty. The attribute φ therefore performs the computation of code reusability with the inclusion of an uncertainty factor, or error probability.
We use all the above-mentioned attributes as inputs to the processor, which upon processing gives the reliability and stability as output. We apply the Damped-Least Square algorithm as the optimization algorithm in order to perform the optimization using the neural network.
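As a point of reference (not the authors' own notation), the Damped-Least Square method is the procedure widely known as the Levenberg-Marquardt algorithm: given the Jacobian J of the network errors e with respect to the weight vector w and a damping factor λ, each weight increment Δw solves

(J^T J + λI) Δw = J^T e,    w ← w − Δw,

where a large λ pushes the step towards gradient descent and a small λ towards the Gauss-Newton step. The paper does not state its exact update rule, so this textbook form is given only to make the later choice of a Levenberg-Marquardt-style trainer concrete.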
3. ALGORITHM IMPLEMENTATION
The implementation of the proposed algorithm is carried out using Matlab. The algorithm design mainly focuses on improving the study outcomes of optimization, i.e., the reliability and the stability. The algorithm starts with the 4 different forms of input, i.e., Code Handling Capacity (α), Effective Effort (β), Resource for New Design (γ), and Error Probability (φ). The numerical values of these inputs are obtained by considering 2 software engineers capable of delivering 2 and 3 software codes for new projects in 26 working days, with a total of 8 hours of working time per day. We adhere to the standard neural-network-based optimization policy by associating weights with the input attributes. The cardinality of the input layer is 4, that of the intermediate layer is 24, and that of the output layer is 2.
Figure 2. Schema of the neural network used in the proposed system
The neural network node configuration is 4 (input)-24 (intermediate)-2 (output), as shown in Figure 2, where the cardinality of the input nodes is 4, the cardinality of the hidden nodes is 24, and the cardinality of the output nodes is 2. The essential steps involved in the proposed algorithm are as follows:
Optimization Algorithm for Code Reusability
Input: Code Handling Capacity (α), Effective Effort (β), Resource for New Design (γ), Error Probability (φ)
Output: σrel (reliability attribute), σstab (stability attribute)
Start:
1. init Δ = [α, β, γ, φ]
2. Compute weight (δ)
3. a = min-max(value(Δ))
4. norm(Δ, σ) → [(-4, +4), (0.1, 0.9)]
5. Apply training_Alg
6. Error ← min-max(No − To)
7. Compute σrel(c1, c2, Δ, δ) and σstab
End
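A minimal MATLAB sketch of Lines 1-6, assuming the Neural Network (now Deep Learning) Toolbox, is shown below; the training data are random placeholders because the paper's case-study dataset is not published, and trainlm is chosen because it is MATLAB's Levenberg-Marquardt (damped least-squares) trainer:

% Placeholder dataset: 4 input attributes x N samples, 2 output attributes
N     = 200;
Delta = rand(4, N);                           % stand-in for [alpha; beta; gamma; phi]
Sigma = rand(2, N);                           % stand-in targets [sigma_rel; sigma_stab]

% Lines 3-4: min-max normalization of inputs and outputs
DeltaN = mapminmax(Delta, -4, 4);             % inputs scaled to (-4, +4)
SigmaN = mapminmax(Sigma, 0.1, 0.9);          % outputs scaled to (0.1, 0.9)

% 4-24-2 feedforward network trained with damped least squares (trainlm)
net = feedforwardnet(24, 'trainlm');
net.divideParam.trainRatio = 0.8;             % larger share of data for training
net.divideParam.valRatio   = 0.2;             % smaller share for internal validation
net.divideParam.testRatio  = 0.0;
net.trainParam.epochs      = 5000;
net = train(net, DeltaN, SigmaN);             % Line 5: apply the training algorithm

% Line 6: error between network output No and training target To
No  = net(DeltaN);
To  = SigmaN;
err = mapminmax(No - To);                     % min-max applied to the difference
fprintf('Mean absolute normalized error: %.4f\n', mean(abs(err(:))));

The reliability computation of Line 7 is omitted here because the paper gives only its dependent variables (c1, c2, Δ, δ), not a closed-form expression.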
The above-mentioned algorithm adopts the design principle of Figure 1 and performs initialization of the multiple attributes associated with the case studies, i.e., (α, β, γ, φ). After initializing the input attributes (Line 1), the next important step of the algorithm is to compute the weight as {(4 (input layer) × 24 (hidden layer)) + (24 (hidden layer) × 2 (output layer)) = 144}. This weight is used for computing the output (Line 7). The prime target of this algorithm is to arrive at a number of perceptron layers for predicting the reliability and stability attributes. Various iterations were observed in order to look for an elite outcome of the optimization, and the algorithm performs its processing over various permutations of the input attributes. Apart from these data, we also constructed a training dataset in which an elongated series of training data associated with our case study is developed; these data are called during the application of the training algorithm. We split the data into two parts, where the bigger proportion is used for the training operation of the neural network and the smaller proportion for internal model validation. Both the input attribute Δ and the output attribute σ are subjected to min-max normalization: the numerical values of the input attributes Δ are obtained and the min-max algorithm is applied (Line 3), and the algorithm normalizes both the input and output numerical values to ensure that they remain statistically within probability limits (Line 4). For ease of computation, we choose the normalization range of the input as -4 to +4, while that of the output is 0.1 to 0.9. These values can be changed according to the test environment; however, we choose them for their close resemblance to our case study. A feedforward neural network is used as the medium of the training algorithm (Line 5), followed by computation of the output from the neural network, No, and the training target, To. We take the difference of these two intermediate outcomes and again apply the min-max algorithm in order to extract the possible error (Line 6); this error is used for internal model validation at a later stage of model checking. Finally, we compute the reliability attribute σrel, whose dependent variables are c1, c2, Δ, and δ (Line 7). The first variable c1 represents a higher-limit constant of 1/0.8, while the second variable c2 represents a lower-limit constant of value 0.1. These values can be suitably changed depending on the anticipated convergence outcomes; their choice is purely based on probability theory. The higher limit of 0.8 is adopted because, statistically, the permissible higher limit is 0.8, and values above it are considered impractical. The last variable δ is the weight, which is equal to 144 in our case. Once the reliability attribute is computed, we check the stability factor σstab to establish the total number of occurrences of reliable outcomes: if the frequencies of the majority of the generated values of σrel are nearly similar, we consider the proposed test scenario to be highly stable. The training performance was checked for 1000-5000 iterations in order to observe the convergence of both the experimental and hypothetical outcomes. An interesting fact about this algorithm is that it is prone to converge towards a low error rate as an elite outcome, which ensures that the proposed system offers an outcome whose validation leads to a highly authenticated result. This also means that the proposed system can truly optimize the concept of code reusability under various real-life challenges in software project development. The performance of the proposed algorithm is found to be nearly similar under different changes carried out while testing the algorithm, and at the same time the complexity of the algorithm is found to be quite low.
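Because no closed form for σstab is given, the MATLAB fragment below is only one plausible reading of this stability check: it bins repeated σrel outcomes and flags the scenario as stable when the dominant bin holds most of them. The 0.9 threshold is an assumption for illustration, not a value from the paper:

% Hypothetical stability check over repeated reliability outcomes
sigmaRel = 0.78 + 0.01 * randn(1, 50);   % placeholder sigma_rel values from 50 runs
counts   = histcounts(sigmaRel, 10);     % frequencies of outcomes across 10 bins
ratio    = max(counts) / numel(sigmaRel);
isStable = ratio >= 0.9;                 % assumed stability threshold
fprintf('Dominant-bin ratio = %.2f, stable = %d\n', ratio, isStable);

The next section discusses the outcomes accomplished by the proposed study.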
4. RESULTS AND DISCUSSION
As the proposed study focuses mainly on two factors, i.e., optimization and reliability, we test the study outcome of the proposed system mainly on the basis of the error performance of the proposed model, the validation performance, and the response time. The model is allowed to iterate for 20,000 rounds in order to check the convergence performance. Figure 3(a) shows that the trained result converges significantly with the best-fit result, thereby demonstrating the higher reliability of the proposed model; the model reliability is further supported by the linear trend of the validation curve in Figure 3(b). Although 20,000 epochs were allowed, convergence towards the elite outcome was seen within only 2,178 epochs.
Figure 3. Outcome of the proposed study: (a) error performance vs. epoch, (b) validation performance, (c) comparative response time, (d) comparative error performance
As there are various forms of optimization techniques, we compare our neural-network-based
optimization technique with other frequently used techniques, e.g. k-Nearest Neighbour (KNN) and Support
Vector Machine (SVM), under similar environment parameters. The analysis shows that the proposed study
offers a significantly lower response time (Figure 3 (c)) and higher reliability, as seen from the error
performance (Figure 3 (d)). The KNN algorithm does not give a better outcome because it relies on distance
computations over all the training data while performing optimization; consequently its response time is
quite high and its error is also too high. On the other hand, SVM performs better than KNN owing to its
ability to perform both linear and non-linear classification, and its kernel-based approach also helps it
achieve better error performance. However, the response time in SVM can be lowered only to a certain
extent, so although it offers a lower error, it is not cost-effective. The proposed system uses a neural network
whose accuracy can be progressively controlled in every epoch. In addition, the adoption of the damped
least squares algorithm significantly assists in solving the non-linear optimization problem; hence the
contribution of the neural network in the proposed system leans more towards error performance than
towards response time. Hence, it can be said that the proposed system offers a cost-effective as well as
highly reliable optimization of the framework that supports code reusability. An illustrative comparison in
this spirit is sketched below.
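As a rough illustration of such a comparison (not a reproduction of the study's experiment), the sketch below times KNN, SVM, and a neural network on the same synthetic regression task and reports error and response time. The dataset, the model settings, and the use of scikit-learn (whose MLP solver is L-BFGS, not damped least squares) are all assumptions made for illustration.

```python
import time
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in data; the study's actual project metrics are not public here.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(1000, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "SVM": SVR(kernel="rbf"),
    # Note: scikit-learn's MLP uses L-BFGS here, not damped least squares,
    # so this is only an approximate stand-in for the proposed optimizer.
    "NN": MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                       max_iter=20000, random_state=0),
}

for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    elapsed = time.perf_counter() - start
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.4f}, time={elapsed:.3f}s")
```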
5. CONCLUSION
The concept of software reusability received considerable attention from the research community
a decade ago, which resulted in the evolution of various standard software metrics. However, interest in
this domain has since declined, as reflected in the small number of research papers. Code reusability is a
part of software reusability that has rarely been investigated in the past, even though it is practiced in many
organizations without following any protocol; the prime reason is the absence of benchmarked models in
this regard. Therefore, we address this problem by taking a case study that mimics a real-time problem in
software development. We also use sample software projects and compute their conventional software-metric
values, which we use in our analytical modelling. We introduce a formulation to compute the code attribute
with the inclusion of code-reusability logic. A neural network is applied for optimization, and the proposed
system is found to offer a substantially lower error score and reduced computational time in comparison
with existing optimization techniques.
BIOGRAPHIES OF AUTHORS
Manoj H M is currently working as an Assistant Professor in the Dept. of CSE, Don Bosco Institute of
Technology, Bangalore, India. He has a total teaching experience of 6 years and 4 months. His
research domain is Software Engineering. He completed his B.E. in Information Science and
Engineering at Kalpataru Institute of Technology, Tiptur, India, and his M.Tech in Software
Engineering at East Point College of Engineering and Technology, Bangalore, India. He has
published 3 research papers in international journals and 5 technical papers in
international/national conferences. He is pursuing a Ph.D. in Computer Science & Engineering
at Jain University, Bangalore, India.
Dr. NandaKumar A N is a Professor in the Dept. of CSE, NHCE, Bangalore. He has more than 35
years of teaching experience. He completed his Ph.D. at Berhanpur University, his B.E. in
Electronics and Communication Engineering at Mysore University, Mysore, India, and his
M.Tech in Computer Science and Technology at Roorkee University, Roorkee. He has published
more than 50 research papers in various international journals and international conferences.
His areas of interest include Software Engineering, image processing, wireless sensor networks,
parallel computing, and others. He has also served as Principal in several reputed engineering
colleges in Karnataka and Andhra Pradesh, and as Dean (Research) in a reputed university in
Tamil Nadu.