Software cost estimation is an important task in the software design and development process. Planning and budgeting decisions are made with reference to the estimated cost values, and a variety of software properties feed into the estimation process: hardware, product, technology, and methodology factors. The quality of a cost estimate is measured by its accuracy.
Software cost estimation is carried out using three families of techniques: regression-based models, analogy-based models, and machine learning models. Each family comprises a set of techniques for the estimation process; the system uses 11 cost estimation techniques drawn from these 3 categories. The Attribute-Relation File Format (ARFF) is used to maintain the software product property values, and the ARFF file serves as the main input to the system.
The proposed system is designed to cluster and rank software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism, improving the accuracy of both the clustering and the ranking process and producing efficient ranking results for software cost estimation methods.
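The abstract does not specify the clustering algorithm. One common reading of "non-overlapping clustering with optimal centroid estimation" is hard (k-means-style) clustering with k-means++-style seeding; the sketch below is a minimal illustration under that assumption, and all names in it are hypothetical, not from the system described.

```python
import random

def seed_centroids(points, k, rng):
    """k-means++-style seeding: later centroids favour points far from earlier ones."""
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        # squared distance of every point to its nearest chosen centroid
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centroids)
              for px, py in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:  # pick proportionally to d2
                centroids.append(p)
                break
    return centroids

def kmeans(points, k, iters=20, seed=0):
    """Hard (non-overlapping) clustering: each point lands in exactly one cluster."""
    rng = random.Random(seed)
    centroids = seed_centroids(points, k, rng)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for px, py in points:
            i = min(range(k),
                    key=lambda i: (px - centroids[i][0]) ** 2 + (py - centroids[i][1]) ** 2)
            clusters[i].append((px, py))
        # recompute each centroid as the mean of its cluster
        centroids = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                     if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters
```

With two well-separated groups of points, the seeding plus a few Lloyd iterations recovers the natural grouping; the ranking step described in the abstract would then operate on the resulting clusters.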
A Software Measurement Using Artificial Neural Network and Support Vector Mac... (ijseajournal)
Today, software measurement is based on various techniques such as neural networks, genetic algorithms, fuzzy logic, etc. This study investigates the efficiency of applying a support vector machine with a Gaussian radial basis kernel function to the software measurement problem in order to increase performance and accuracy. Support vector machines (SVMs) are an innovative approach to constructing learning machines that minimize the generalization error. There is a close relationship between SVMs and Radial Basis Function (RBF) classifiers; both have found numerous applications such as optical character recognition, object detection, face verification, and text categorization. The results demonstrate that the accuracy and generalization performance of the SVM with a Gaussian radial basis kernel function are better than those of the RBFN. We also examine and summarize several points on which the SVM is superior to the RBFN.
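The Gaussian radial basis kernel named above has a simple closed form, K(x, y) = exp(-gamma * ||x - y||^2) with gamma = 1/(2*sigma^2). A small illustrative sketch of the kernel itself (not the paper's code; the gamma value is an arbitrary choice):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel: exp(-gamma * squared Euclidean distance)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def kernel_matrix(xs, gamma=0.5):
    """Gram matrix over a list of feature vectors; symmetric with unit diagonal."""
    return [[rbf_kernel(a, b, gamma) for b in xs] for a in xs]
```

An SVM built on this kernel measures similarity through these values: identical points score 1.0 and the score decays smoothly toward 0 with distance, which is what links it to RBF network classifiers.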
Cause-Effect Graphing: Rigorous Test Case Design (TechWell)
A tester’s toolbox today contains a number of test case design techniques—classification trees, pairwise testing, design of experiments-based methods, and combinatorial testing. Each of these methods is supported by automated tools. Tools provide consistency in test case design, which can increase the all-important test coverage in software testing. Cause-effect graphing, another test design technique, is superior from a test coverage perspective, reducing the number of test cases needed to provide excellent coverage. Gary Mogyorodi describes these black box test case design techniques, summarizes the advantages and disadvantages of each technique, and provides a comparison of the features of the tools that support them. Using an example problem, he compares the number of test cases derived and the test coverage obtained using each technique, highlighting the advantages of cause-effect graphing. Join Gary to see what new techniques you might want to add to your toolbox.
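Cause-effect graphing reduces a specification to Boolean causes feeding an effect, from which a decision table of test cases is derived. A toy sketch of that derivation; the login rule below is an invented example, not taken from the talk:

```python
from itertools import product

def decision_table(causes, effect_fn):
    """Enumerate every cause combination and record the resulting effect."""
    rows = []
    for values in product([False, True], repeat=len(causes)):
        env = dict(zip(causes, values))
        rows.append({**env, "effect": effect_fn(env)})
    return rows

# hypothetical rule: access is granted if (valid id AND valid pin) OR manual override
rows = decision_table(
    ["valid_id", "valid_pin", "override"],
    lambda e: (e["valid_id"] and e["valid_pin"]) or e["override"],
)
```

Each row is one candidate test case; tools based on this technique then prune the full table to a minimal covering subset, which is where the coverage advantage discussed above comes from.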
Application of the analytic hierarchy process (AHP) for selection of forecast... (Gurdal Ertek)
In this paper, we describe an application of the Analytic Hierarchy Process (AHP) for the ranking and selection of forecasting software. AHP is a multi-criteria decision making (MCDM) approach based on the pair-wise comparison of the elements of a given set with respect to multiple criteria. Even though there are applications of the AHP to software selection problems, we have not encountered a study that involves forecasting software. We started our analysis by filtering among forecasting software found on the Internet by undergraduate students as part of a course project. We then performed a second filtering step, where we reduced the number of software packages to be examined even further. Finally, we constructed the comparison matrices based upon the evaluations of three "semi-experts" and obtained a ranking of the selected forecasting software using the Expert Choice software. We report our findings and insights, together with the results of a sensitivity analysis.
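The pairwise-comparison step of AHP can be illustrated with the common column-normalization approximation of the principal eigenvector; the 3x3 comparison matrix below is invented for illustration, not taken from the paper:

```python
def ahp_priorities(M):
    """Approximate AHP priority vector: normalise each column, then average the rows."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    norm = [[M[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in norm]

# hypothetical judgements: alternative 0 moderately preferred over 1, strongly over 2
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
weights = ahp_priorities(M)
```

The weights sum to 1 and induce the ranking; tools such as Expert Choice additionally compute a consistency ratio to flag contradictory judgements, which this sketch omits.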
http://research.sabanciuniv.edu.
Machine learning approaches are well suited to problems for which little information is available. In most cases, software domain problems can be characterized as a learning process that depends on, and changes with, various circumstances. A predictive model is constructed using machine learning approaches to classify modules as defective or non-defective. Machine learning techniques help developers retrieve useful information after classification and enable them to analyse data from different perspectives; they have proven useful for software bug prediction. This study uses publicly available data sets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. The results show that most of the machine learning methods performed well on the software bug datasets.
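As an illustration of the defective/non-defective classification step such studies perform, here is a minimal k-nearest-neighbour classifier over invented module metrics (lines of code and cyclomatic complexity); the actual datasets, features, and models in the study differ:

```python
def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training modules.

    train: list of (feature_vector, label) pairs, e.g. label "defective" or "clean".
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: sq_dist(item[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# hypothetical (LOC, cyclomatic complexity) measurements per module
train = [
    ((120, 3), "clean"), ((150, 4), "clean"), ((90, 2), "clean"),
    ((900, 25), "defective"), ((700, 30), "defective"), ((1100, 22), "defective"),
]
```

A comparative analysis like the one described would fit several such models on the same public datasets and report accuracy, precision, and recall side by side.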
Software testing effort estimation with Cobb-Douglas function: a practical app... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
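The Cobb-Douglas form named in the title above models effort as a product of powers of cost drivers, E = A * x1^a * x2^b. A generic sketch of that functional form (the coefficients below are invented for illustration; the paper's own calibration differs):

```python
def cobb_douglas_effort(drivers, coefficients, scale=1.0):
    """Cobb-Douglas effort model: scale * product(driver_i ** coeff_i)."""
    effort = scale
    for x, a in zip(drivers, coefficients):
        effort *= x ** a
    return effort
```

In practice the scale and exponents are fitted to historical project data (e.g. by linear regression after taking logarithms), which is what turns the functional form into an estimation model.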
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between existing software quality engineering techniques and thereby designing a framework for rating software quality.
In the present paper, the applicability and capability of AI techniques for effort estimation prediction have been investigated. Neuro-fuzzy models prove to be very robust, characterized by fast computation and the ability to handle distorted data. Given the non-linearity of the data, they are an efficient quantitative tool for predicting effort. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment.
The initial parameters of the OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership function are optimally determined using a hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed with the OHLANFIS technique performs better than the standard ANFIS model.
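The Gaussian membership function whose parameters the hybrid algorithm tunes has the standard form mu(x) = exp(-(x - c)^2 / (2 * sigma^2)). A minimal evaluation sketch (the center and width values below are invented):

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian fuzzy membership: 1.0 at the center c, decaying with width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
```

In an ANFIS-style model, each input is passed through several such functions, and learning adjusts each (c, sigma) pair so the fuzzy rules fit the training data.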
Review on Algorithmic and Non-Algorithmic Software Cost Estimation Techniques (ijtsrd)
Effective software cost estimation is one of the most challenging and important activities in software development. Developers want a simple and accurate method of effort estimation. Estimating cost before work begins is a prediction, and predictions are not always accurate. Software effort estimation is a very critical task in software engineering, and a suitable estimation technique is crucial to controlling quality and efficiency. This paper reviews the various available software effort estimation methods, focusing mainly on algorithmic and non-algorithmic models. The existing methods for software cost estimation are illustrated and their aspects discussed. No single technique is best for all situations, so a careful comparison of the results of several approaches is most likely to produce a realistic estimate. The paper provides a detailed overview of existing software cost estimation models and techniques, presents the strengths and weaknesses of the various methods, and examines some of the relevant causes of inaccurate estimation. Pa Pa Win | War War Myint | Hlaing Phyu Phyu Mon | Seint Wint Thu, "Review on Algorithmic and Non-Algorithmic Software Cost Estimation Techniques", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26511.pdf. Paper URL: https://www.ijtsrd.com/engineering/-/26511/review-on-algorithmic-and-non-algorithmic-software-cost-estimation-techniques/pa-pa-win
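Among the algorithmic models such reviews cover, Boehm's basic COCOMO is the canonical example: effort in person-months is E = a * KLOC^b, with the coefficients (a, b) depending on the project mode. A sketch using the standard published coefficients:

```python
def basic_cocomo(kloc, mode="organic"):
    """Basic COCOMO effort in person-months: E = a * KLOC^b."""
    coefficients = {
        "organic": (2.4, 1.05),        # small teams, familiar problems
        "semi-detached": (3.0, 1.12),  # intermediate size and constraints
        "embedded": (3.6, 1.20),       # tight hardware/operational constraints
    }
    a, b = coefficients[mode]
    return a * kloc ** b
```

Non-algorithmic techniques (expert judgement, analogy, machine learning) skip such fixed formulas, which is exactly the division the review above is organized around.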
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity, cost, and quality, it is unclear what we have learned beyond the scope of the individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 data sets across 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good indicators of quality attributes. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The results carry implications for building quality models across project types.
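The Spearman coefficient aggregated in the review is simply the Pearson correlation of ranks. A self-contained sketch of computing it (the meta-analytic pooling step itself, e.g. a Fisher z transform across studies, is omitted):

```python
def rank(values):
    """Ranks starting at 1, with tied values receiving their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation applied to the ranks of xs and ys."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

A perfectly monotone relation between a complexity metric and, say, fault counts gives rho = 1; the review pools such coefficients across its 59 data sets.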
Software Defect Prediction Using Local and Global Analysis (Editor IJMTER)
Software defect factors are used to measure the quality of software, and software effort estimation is used to measure the effort required for the development process. The defect factor affects the software development effort, and software development and cost factors are likewise decided with reference to the defect and effort factors. Software defects are predicted with reference to module information, and module link information is used in the effort estimation process.
Data mining techniques are used in the software analysis process: clustering techniques for the property grouping process, and rule mining methods to learn rules from the clustered data values. The "WHERE" clustering scheme and the "WHICH" rule mining scheme are used in the defect prediction and effort estimation process, and the system uses the module information for both tasks.
The proposed system is designed to improve the defect prediction and effort estimation process. A Single Objective Genetic Algorithm (SOGA) is used in the clustering process, and the rule learning operations are carried out using the Apriori algorithm. The system improves cluster accuracy levels, and the defect prediction and effort estimation accuracy is also improved. The system is developed using the Java language and an Oracle relational database environment.
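The Apriori step named above mines frequent itemsets level-wise: count candidates of size k, keep those meeting a minimum support, and build size-(k+1) candidates only from the survivors. A compact stdlib-only sketch (the transaction data is invented; the system itself is described as Java-based):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frequent_itemset: support_count} via level-wise search."""
    transactions = [set(t) for t in transactions]
    level = [frozenset([i]) for i in sorted({i for t in transactions for i in t})]
    freq = {}
    size = 1
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(survivors)
        size += 1
        # candidate generation: unions of surviving itemsets one element larger
        level = list({a | b for a, b in combinations(survivors, 2) if len(a | b) == size})
    return freq
```

Association rules are then read off the frequent itemsets (e.g. {a} implies {b} with confidence support(ab)/support(a)), which is the "rule learning" role Apriori plays in the system above.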
The End User Requirement for Project Management Software Accuracy (IJECEIAES)
This research explains the relationship between end-user requirements and the accuracy of PMS (Project Management Software). The research aims to analyze PMS accuracy and to measure the probability of the PMS achieving an accuracy within ±1% of the end-user requirement. A bias statistical method, based on hypothesis testing, is used to assess the PMS accuracy. The results indicate the PMS is still accurate enough to be implemented in projects in the Aceh, Indonesia area that use the SNI (the Indonesian National Standard, the current method), with an accuracy index of ±7.5%. The probability of reaching the end-user requirement is still low, at ±21.77%. In the case of the PMS, the low achievement of the end-user requirement is caused not only by the low accuracy of the PMS but also by the amount of variability error, which is influenced by the amount of variation in project activity. In this study, we confirm that it is necessary to reconcile the PMS accuracy and the end-user requirements.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
PORM: Predictive Optimization of Risk Management to Control Uncertainty Probl... (IJECEIAES)
Despite the many research-based approaches to risk management, developing a precise risk management model remains computationally challenging owing to the critical and vague definition of where the problems originate. This work introduces a model called PORM, i.e. Predictive Optimization of Risk Management, from a software engineering perspective. The significant contribution of PORM is to offer a reliable computation of risk analysis by considering generalized practical scenarios of software development practice in the Information Technology (IT) industry. The proposed PORM system is also designed for better risk factor assessment with the aid of a machine learning approach, without heavy reliance on iteration. The study outcome shows that PORM offers a computationally cost-effective analysis of risk factors, assessed with respect to the different quality standards of the object-oriented systems involved in software projects.
Regression testing concentrates on finding defects after a major code change has occurred. Specifically, it exposes software regressions, i.e. old bugs that have reappeared. It is an expensive testing process, estimated to account for almost half of the cost of software maintenance. To improve the regression testing process, test case prioritization techniques organize the execution order of test cases, which improves the rate of fault identification when test suites cannot run to completion.
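The "additional coverage" greedy strategy is a standard way to realize such prioritization: repeatedly pick the test that covers the most not-yet-covered elements. A sketch under that assumption (the test-to-requirement mapping below is invented):

```python
def prioritize(test_coverage):
    """Greedy 'additional' prioritization.

    test_coverage: dict mapping test name -> set of covered requirements/branches.
    Returns test names ordered so coverage grows as fast as possible.
    """
    remaining = {t: set(c) for t, c in test_coverage.items()}
    covered = set()
    order = []
    while remaining:
        # pick the test contributing the most new coverage
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order
```

Running the suite in this order front-loads fault detection, so even a truncated run exercises most of the changed behaviour.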
Prioritizing Test Cases for Regression Testing: A Model Based Approach (IJTET Journal)
Abstract: Testing is an important quality control phase of the Software Development Life Cycle (SDLC). Various testing methodologies are involved in testing an application. Regression testing is done to ensure that a modified feature or bug fix has not impacted existing functionality; defects are identified by executing the set of test cases. When test suites grow large, regression test case selection alone cannot determine how much retesting is required to identify the deviations. Test cases are therefore prioritized to change the order of execution based on severity. In the proposed model-based approach, prioritized test cases are generated from UML diagrams (sequence and state chart); the modified features are reflected in the generated model and in the number of states and transitions covered. The prioritized test cases are then clustered by severity using a dendrogram approach, decreasing the time and cost of regression testing.
Conceptualization of a Domain Specific Simulator for Requirements Prioritization (researchinventy)
This paper conceptualizes a domain-specific simulator for requirements prioritization; it aims to help identify appropriate prioritization strategies for the project at hand. The possible scenarios are difficult to analyze because they involve different variables, such as the selection of stakeholders (their availability, expertise, and importance), of prioritization criteria, and of prioritization methods. To demonstrate the feasibility of the proposed simulator elements, a well-established general purpose simulator called Arena was used. The results demonstrate that it is possible to build the suggested scenarios in order to study and make inferences about the prioritization strategies.
One of the core quality assurance features, combining fault prevention and fault detection, is often known as the testability approach. Many assessment techniques and quantification methods have evolved for software testability prediction, identifying testability weaknesses or factors in order to help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at the various phases of the object-oriented software development life cycle. The aim is to find the metrics suite best suited to improving software quality through testability support. The ultimate objective is to lay the groundwork for reducing testing effort by improving software testability and its assessment, using well-planned guidelines for object-oriented software development with the help of suitable metrics.
Configuration Navigation Analysis Model for Regression Test Case Prioritization (ijsrd.com)
Regression testing has been receiving increasing attention, and numerous regression testing strategies have been proposed. Most of them take into account various metrics, such as cost and the ability to find faults quickly, thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed, which tries to consider all stakeholders and the various testing aspects while prioritizing regression test cases.
Comparative Performance Analysis of Machine Learning Techniques for Software ... (csandit)
Machine learning techniques can be used to analyse data from different perspectives and enable developers to retrieve useful information; they have proven useful for software bug prediction. In this paper, a comparative performance analysis of different machine learning techniques for software bug prediction is carried out on publicly available data sets. The results show that most of the machine learning methods performed well on the software bug datasets.
A Systematic Mapping Review of Software Quality Measurement: Research Trends,... (IJECEIAES)
Software quality is key to success in the information technology business. Hence, before software is marketed, its quality must be measured to confirm that it fulfils the user requirements. Several software quality analysis methods have been tested from different perspectives, and we present them from the points of view of users and experts. This study aims to map the methods of software quality measurement across quality models. Using the Systematic Mapping Study method, we searched and filtered papers using inclusion and exclusion criteria, obtaining 42 relevant papers. The mapping shows that although the ISO SQuaRE model has been widely used over the last five years and has evolved, researchers in Indonesia still used ISO 9126 until the end of 2016. The most commonly used software quality measurement method is the empirical method, and some researchers have applied AHP and fuzzy approaches to measuring software quality.
Software Quality Engineering is a broad area that is concerned with various approaches to improve software quality. A quality model would prove successful when it suffices the requirements of the developers and the consumers. This research focuses on establishing semantics between the existing techniques related to the software quality engineering and thereby designing a framework for rating software quality.
In the present paper, applicability and
capability of A.I techniques for effort estimation prediction has
been investigated. It is seen that neuro fuzzy models are very
robust, characterized by fast computation, capable of handling
the distorted data. Due to the presence of data non-linearity, it is
an efficient quantitative tool to predict effort estimation. The one
hidden layer network has been developed named as OHLANFIS
using MATLAB simulation environment.
Here the initial parameters of the OHLANFIS are
identified using the subtractive clustering method. Parameters of
the Gaussian membership function are optimally determined
using the hybrid learning algorithm. From the analysis it is seen
that the Effort Estimation prediction model developed using
OHLANFIS technique has been able to perform well over normal
ANFIS Model.
Review on Algorithmic and Non Algorithmic Software Cost Estimation Techniquesijtsrd
Effective software cost estimation is the most challenging and important activities in software development. Developers want a simple and accurate method of efforts estimation. Estimation of the cost before starting of work is a prediction and prediction always not accurate. Software effort estimation is a very critical task in the software engineering and to control quality and efficiency a suitable estimation technique is crucial. This paper gives a review of various available software effort estimation methods, mainly focus on the algorithmic model and non algorithmic model. These existing methods for software cost estimation are illustrated and their aspect will be discussed. No single technique is best for all situations, and thus a careful comparison of the results of several approaches is most likely to produce realistic estimation. This paper provides a detailed overview of existing software cost estimation models and techniques. This paper presents the strength and weakness of various cost estimation methods. This paper focuses on some of the relevant reasons that cause inaccurate estimation. Pa Pa Win | War War Myint | Hlaing Phyu Phyu Mon | Seint Wint Thu "Review on Algorithmic and Non-Algorithmic Software Cost Estimation Techniques" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5 , August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26511.pdfPaper URL: https://www.ijtsrd.com/engineering/-/26511/review-on-algorithmic-and-non-algorithmic-software-cost-estimation-techniques/pa-pa-win
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi...ijseajournal
Early prediction of software quality is important for better software planning and controlling. In early
development phases, design complexity metrics are considered as useful indicators of software testing
effort and some quality attributes. Although many studies investigate the relationship between design
complexity and cost and quality, it is unclear what we have learned beyond the scope of individual studies.
This paper presented a systematic review on the influence of software complexity metrics on quality
attributes. We aggregated Spearman correlation coefficients from 59 different data sets from 57 primary
studies by a tailored meta-analysis approach. We found that fault proneness and maintainability are most
frequently investigated attributes. Chidamber & Kemerer metric suite is most frequently used but not all of
them are good quality attribute indicators. Moreover, the impact of these metrics is not different in
proprietary and open source projects. The result provides some implications for building quality model
across project type.
Software Defect Prediction Using Local and Global AnalysisEditor IJMTER
The software defect factors are used to measure the quality of the software. The software
effort estimation is used to measure the effort required for the software development process. The defect
factor makes an impact on the software development effort. Software development and cost factors are
also decided with reference to the defect and effort factors. The software defects are predicted with
reference to the module information. Module link information are used in the effort estimation process.
Data mining techniques are used in the software analysis process. Clustering techniques are used
in the property grouping process. Rule mining methods are used to learn rules from clustered data
values. The “WHERE” clustering scheme and “WHICH” rule mining scheme are used in the defect
prediction and effort estimation process. The system uses the module information for the defect
prediction and effort estimation process.
The proposed system is designed to improve the defect prediction and effort estimation process.
The Single Objective Genetic Algorithm (SOGA) is used in the clustering process. The rule learning
operations are carried out sing the Apriori algorithm. The system improves the cluster accuracy levels.
The defect prediction and effort estimation accuracy is also improved by the system. The system is
developed using the Java language and Oracle relation database environment.
The End User Requirement for Project Management Software Accuracy IJECEIAES
This research explains the relationship between the end user requirement and accuracy of PMS (Project Management Software). The research aims are to analyze the PMS accuracy and measuring the probability of PMS accuracy in achieving ±1% of the end user requirement. The bias statistical method will be used to prove the PMS accuracy that based on the hypothesis testing. The result indicates the PMS is still accurate to be implemented in AcehIndonesia area projects that using the SNI (National Indonesia Standard as current method) with the accuracy index of ±7.5%. The achievement probability of reaching the end user requirement is still low of ±21.77%. In case of the PMS, the low achievement of the end user requirement is not only caused by the low accuracy of the PMS but also caused by the amount of variability error, which is influenced by the amount of variation of the project activity. In this study, we confirm that it is necessary to reconcile both conditions between the PMS accuracy and the end user requirements.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
PORM: Predictive Optimization of Risk Management to Control Uncertainty Probl... - IJECEIAES
Irrespective of the different research-based approaches toward risk management, developing a precise risk management model is computationally challenging owing to the critical and vague definition of where the problems originate. This research work introduces a model called PORM, i.e. Predictive Optimization of Risk Management, from the perspective of software engineering. The significant contribution of PORM is to offer a reliable computation of risk analysis by considering a generalized practical scenario of software development practices in the Information Technology (IT) industry. The proposed PORM system is also designed and equipped with better risk factor assessment with the aid of a machine learning approach, without requiring much iteration. The study outcome shows that the PORM system offers a computationally cost-effective analysis of risk factors, assessed with respect to the different quality standards of the object oriented systems involved in software projects.
Regression testing concentrates on finding defects after a major code change has occurred. Specifically, it
exposes software regressions, or old bugs that have reappeared. It is an expensive testing process that has
been estimated to account for almost half the cost of software maintenance. To improve the regression
testing process, test case prioritization techniques organize the execution order of test cases. This yields
an improved rate of fault identification when test suites cannot run to completion.
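Such prioritization is usually done greedily; a minimal sketch of the classic "additional" heuristic, which repeatedly picks the test exposing the most not-yet-detected faults, follows. The fault matrix is hypothetical:

```python
def prioritize(test_faults):
    """Greedy additional-coverage prioritization over a test -> faults mapping."""
    remaining = dict(test_faults)
    detected, order = set(), []
    while remaining:
        # Pick the test covering the most faults not yet detected
        # (ties broken alphabetically for determinism).
        best = max(sorted(remaining), key=lambda t: len(remaining[t] - detected))
        order.append(best)
        detected |= remaining.pop(best)
    return order

# Hypothetical matrix: which faults each test case exposes.
faults = {"t1": {1, 2}, "t2": {2, 3, 4}, "t3": {5}, "t4": {1}}
order = prioritize(faults)
```

Running the suite in this order front-loads fault detection, so an early cut-off still finds most faults.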
Prioritizing Test Cases for Regression Testing: A Model Based Approach - IJTET Journal
Abstract: Testing is an important phase of quality control in the Software Development Life Cycle (SDLC). Various testing methodologies are involved in testing an application. Regression testing is a type of testing done to ensure that a modified feature or bug fix has not impacted existing functionality. Defects are identified by executing a set of test cases. With regression test case selection it is not possible to conclude how much retesting is required to identify deviations when the test suites are large. Prioritization of test cases changes the order of test case execution based on severity. In the proposed model based approach, prioritized test cases are generated based on UML diagrams (sequence and state chart). The modified features are reflected in the model generation and in the number of states and transitions covered. Prioritized test cases are then clustered based on their severities using a dendrogram approach. This leads to a decrease in the time and cost of regression testing.
Conceptualization of a Domain Specific Simulator for Requirements Prioritization - researchinventy
This paper conceptualizes a domain specific simulator for requirements prioritization; it aims at helping to identify appropriate prioritization strategies for the project at hand. The possible existing scenarios are difficult to analyze; they involve different variables, like the selection of stakeholders (their availability, expertise, and importance), prioritization criteria, and prioritization methods. To demonstrate the feasibility of the proposed simulator elements, a well-established general purpose simulator, called Arena, was used. The results demonstrate that it is possible to build the suggested scenarios in order to study and make inferences about the prioritization strategies.
One of the core quality assurance features, combining fault prevention and fault detection, is often known as the testability approach. Many assessment techniques and quantification methods have evolved for software testability prediction, which identify testability weaknesses or factors and thereby help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at various phases of the object oriented software development life cycle. The aim is to find the metrics suite best suited to software quality improvement through software testability support. The ultimate objective is to establish the groundwork for finding ways to reduce testing effort by improving software testability and its assessment, using well planned guidelines for object-oriented software development with the help of suitable metrics.
Configuration Navigation Analysis Model for Regression Test Case Prioritization - ijsrd.com
Regression testing has been receiving increasing attention nowadays, and numerous regression testing strategies have been proposed. Most of them take into account various metrics, like cost, as well as the ability to find faults quickly, thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed, which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
Comparative Performance Analysis of Machine Learning Techniques for Software ... - csandit
Machine learning techniques can be used to analyse data from different perspectives and enable
developers to retrieve useful information. Machine learning techniques are proven to be useful
in terms of software bug prediction. In this paper, a comparative performance analysis of
different machine learning techniques is explored for software bug prediction on publicly
available data sets. Results showed that most of the machine learning methods performed well on
software bug datasets.
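A comparison of this sort can be reproduced in miniature without any ML library; here a 1-nearest-neighbour learner is compared against a majority-class baseline on an invented bug dataset (metrics and labels are illustrative only, not the public datasets the paper uses):

```python
def accuracy(predict, data):
    """Fraction of (features, label) pairs the predictor gets right."""
    return sum(predict(x) == y for x, y in data) / len(data)

def one_nn(train):
    """1-nearest-neighbour classifier using squared Euclidean distance."""
    def predict(x):
        return min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]
    return predict

def majority(train):
    """Baseline: always predict the most common training label."""
    labels = [y for _, y in train]
    mode = max(set(labels), key=labels.count)
    return lambda x: mode

# Invented module metrics (size, complexity) with buggy=1 / clean=0 labels.
train = [((1.0, 2.0), 1), ((0.9, 1.8), 1),
         ((0.2, 0.3), 0), ((0.1, 0.4), 0), ((0.3, 0.6), 0)]
test = [((0.95, 1.9), 1), ((0.15, 0.35), 0), ((0.3, 0.5), 0)]
scores = {"1-NN": accuracy(one_nn(train), test),
          "baseline": accuracy(majority(train), test)}
```

On real defect datasets the same harness, with stronger learners and cross-validation, yields the kind of side-by-side accuracy table such papers report.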
A Systematic Mapping Review of Software Quality Measurement: Research Trends,... - IJECEIAES
Software quality is a key to success in the information technology business. Hence, before being marketed, software needs quality measurement to ensure it fulfills user requirements. Some methods of software quality analysis have been tested from different perspectives, and we present the software methods from the points of view of users and experts. This study aims to map the methods of software quality measurement across quality models. Using the Systematic Mapping Study method, we searched and filtered papers using inclusion and exclusion criteria, obtaining 42 relevant papers. The result of the mapping shows that although the ISO SQuaRE model has been widely used over the last five years and experienced dynamics, researchers in Indonesia still used ISO 9126 until the end of 2016. The most commonly used software quality measurement method is the empirical method, and some researchers have applied AHP and Fuzzy approaches in measuring software quality.
A Complexity Based Regression Test Selection Strategy - CSEIJJournal
Software is unequivocally the foremost and indispensable entity in this technologically driven world.
Therefore quality assurance, and in particular, software testing is a crucial step in the software
development cycle. This paper presents an effective test selection strategy that uses a Spectrum of
Complexity Metrics (SCM). Our aim in this paper is to increase the efficiency of the testing process by
significantly reducing the number of test cases without having a significant drop in test effectiveness. The
strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class,
method, statement) and its characteristics. We use a series of experiments based on three applications with
a significant number of mutants to demonstrate the effectiveness of our selection strategy. For further
evaluation, we compare our approach to boundary value analysis. The results show the capability of our
approach to detect mutants as well as the seeded errors.
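One way to read the "spectrum of complexity metrics" idea is as a weighted score per product unit plus a cut-off on which units' tests are kept. The metric names, weights, classes and tests below are hypothetical stand-ins, not the paper's actual SCM:

```python
def complexity_score(metrics, weights):
    """Combine class/method/statement-level metrics into one weighted score."""
    return sum(weights[m] * v for m, v in metrics.items())

def select_tests(tests, class_metrics, weights, top_n):
    """Keep only the tests exercising the top_n most complex classes."""
    ranked = sorted(class_metrics,
                    key=lambda c: complexity_score(class_metrics[c], weights),
                    reverse=True)
    hot = set(ranked[:top_n])
    return [t for t, classes in tests.items() if classes & hot]

# Hypothetical metric spectrum: WMC (class), cyclomatic (method), LOC (statement).
metrics = {"Parser": {"wmc": 30, "cc": 12, "loc": 900},
           "Config": {"wmc": 4, "cc": 2, "loc": 120},
           "Engine": {"wmc": 22, "cc": 9, "loc": 700}}
weights = {"wmc": 1.0, "cc": 2.0, "loc": 0.01}
tests = {"test_parse": {"Parser"}, "test_cfg": {"Config"},
         "test_run": {"Engine", "Config"}}
kept = select_tests(tests, metrics, weights, top_n=2)
```

Dropping tests that only touch low-complexity code is how such a strategy shrinks the suite while keeping most of its fault-detection power.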
The recruitment of new personnel is one of the most essential business processes affecting the quality of
human capital within any company. It is highly essential for companies to ensure the recruitment of the
right talent to maintain a competitive edge over others in the market. However, IT companies often face
a problem while recruiting new people for their ongoing projects due to the lack of a proper framework
defining criteria for the selection process. In this paper we aim to develop a framework that would allow
any project manager to take the right decision in selecting new talent by correlating performance
parameters with the other domain-specific attributes of the candidates. Another important motivation
behind this project is to check the validity of the selection procedure often followed by various big
companies in both the public and private sectors, which focuses only on academic scores, GPA/grades
and other academic background. We test whether such a decision produces optimal results in industry
or whether there is a need for a change that offers a more holistic approach to the recruitment of new
talent in software companies. The scope of this work extends beyond the IT domain, and a similar
procedure can be adopted to develop a recruitment framework in other fields as well. Data-mining
techniques provide useful information from historical projects, on which the hiring manager can base
decisions for recruiting a high-quality workforce. This study aims to bridge this hiatus by developing a
data-mining framework based on an ensemble-learning technique to refocus the criteria for personnel
selection. The results from this research clearly demonstrate that there is a need to refocus the
selection criteria on quality objectives.
Determination of Software Release Instant of Three-Tier Client Server Softwar... - Waqas Tariq
The quality of any software system mainly depends on how much testing takes place, what kind of testing methodologies are used, how complex the software is, the amount of effort put in by software developers, and the type of testing environment, subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but testing cost will also increase. On the contrary, if testing time is too short, software cost can be reduced, provided the customers accept the risk of buying unreliable software. However, this will increase costs during the operational phase, since it is more expensive to fix an error during the operational phase than during the testing phase. Therefore it is essential to decide when to stop testing and release the software to customers based on cost and reliability assessment. In this paper we present a mechanism for deciding when to stop the testing process and release the software to end users, by developing a software cost model with a risk factor. Based on the proposed method, we specifically address how to decide when to stop testing and release software based on a three-tier client server architecture, which facilitates software developers in ensuring on-time delivery of a software product that meets the criteria of achieving a predefined level of reliability while minimizing cost. A numerical example is cited to illustrate the experimental results, showing significant improvements over conventional statistical models based on NHPP.
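The stop-testing trade-off can be made concrete with a toy cost model: linear testing cost plus a per-fault field cost driven by an NHPP-style exponential mean value function. All constants here are invented, not the paper's calibrated model:

```python
import math

def total_cost(T, a=100, b=0.05, c_test=2.0, c_field=50.0):
    """Testing cost grows linearly with time T; each fault still in the field
    costs c_field. Detected faults follow m(T) = a * (1 - exp(-b * T))."""
    detected = a * (1 - math.exp(-b * T))
    return c_test * T + c_field * (a - detected)

# The release instant minimizing total cost, found on a coarse time grid.
T_star = min(range(0, 201), key=total_cost)
```

Releasing too early pays heavily for field fixes, releasing too late pays for idle testing; the minimum sits where the marginal testing cost equals the marginal saving in field repairs.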
Comparative Analysis of Model Based Testing and Formal Based Testing - A Review - IJERA Editor
Software testing is one of the most important steps in the process of software development. Testing provides
a glimpse of the proper functioning of the system under different conditions. This makes it a necessary step to
choose the best testing method for a software system to be successful and accepted by a large number of
people, as the market is really competitive these days and only error free systems can survive for a longer period
of time. This paper gives a comparative analysis of two major methods of testing: Formal Specifications
Based Software Testing and Model Based Software Testing, which are used widely in the software
development process. It brings out how these two methods of testing can provide reliability to a software system,
including the major uses, advantages, and disadvantages of both testing methods. It gives a detailed
comparative analysis of these two methods of software testing. It also brings out the situations where formal
specifications based testing is more effective and efficient, while model based testing is effective in others.
This comparative analysis will help one decide on the better testing technique, depending upon the situation
and the requirements of the software, for the software to be successful in the long run.
Analyzing the solutions of DEA through information visualization and data min... - Gurdal Ertek
Data envelopment analysis (DEA) has proven to be a useful tool for assessing the efficiency or productivity of organizations, which is of vital practical importance in managerial decision making. DEA provides a significant amount of information from which analysts and managers derive insights and guidelines to improve their existing performance. Given this fact, effective and methodical analysis and interpretation of DEA solutions is very critical. The main objective of this study is thus to develop a general decision support system (DSS) framework to analyze the solutions of basic DEA models. The paper formally shows how the solutions of DEA models should be structured so that they can be examined and interpreted by analysts effectively through information visualization and data mining techniques. An innovative and convenient DEA solver, Smart DEA, is designed and developed in accordance with the proposed analysis framework. The developed software provides a DEA solution which is consistent with the framework and is ready to analyze with data mining tools, through a table-based structure. The developed framework is tested and applied in a real world project for benchmarking the vendors of a leading Turkish automotive company. The results show the effectiveness and efficacy of the proposed framework.
http://research.sabanciuniv.edu.
TRANSFORMING SOFTWARE REQUIREMENTS INTO TEST CASES VIA MODEL TRANSFORMATION - ijseajournal
Executable test cases originate at the onset of testing as abstract requirements that represent system
behavior. Their manual development is time-consuming, susceptible to errors, and expensive. Translating
system requirements into behavioral models and then transforming them into a scripting language has the
potential to automate their conversion into executable tests. Ideally, an effective testing process should
start as early as possible, refine the use cases with ample details, and facilitate the creation of test cases.
We propose a methodology that enables automation in converting functional requirements into executable
test cases via model transformation. The proposed testing process starts with capturing system behavior in
the form of visual use cases, using a domain-specific language, defining transformation rules, and
ultimately transforming the use cases into executable tests.
Insights on Research Techniques towards Cost Estimation in Software Design IJECEIAES
Software cost estimation is one of the most challenging tasks in project management, needed to ensure smooth development operation and target achievement. Various standards, tools and techniques for cost estimation have evolved and are practiced in the industry at present. However, the overall picture of the effectiveness of such techniques has never been investigated to date. This paper begins its contribution by presenting taxonomies of conventional cost-estimation techniques and then investigates the research trends toward the most frequently addressed problems. The paper also reviews the existing techniques in a well-structured manner in order to highlight the problems addressed, techniques used, advantages, and limitations found in the literature. Finally, we also outline the open research issues explored, as an added contribution to this manuscript.
COMPARATIVE STUDY OF SOFTWARE ESTIMATION TECHNIQUES ijseajournal
Many information technology firms, among other organizations, have been working on how to estimate resources such as funds during software development processes. Software development life cycles require many activities and skills to avoid risks, and the best suited software estimation technique should be employed. Therefore, in this research, a comparative study was conducted considering the accuracy, usage, and suitability of existing methods. It will be useful to project managers and project consultants during the whole software project development process. Both algorithmic and non-algorithmic techniques, such as linear regression, are covered. Model, composite and regression techniques are used to derive COCOMO, COCOMO II, SLIM and linear multiple regression respectively. Moreover, expertise-based and linear-based rules are applied in non-algorithmic methods. However, these techniques need some advancement to reduce the errors experienced during the software development process. Therefore, in relation to software estimation techniques, this paper proposes a model that can be helpful to information technology firms, researchers and other firms that use information technology in processes such as budgeting and decision making.
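Of the algorithmic families compared, Basic COCOMO is the easiest to show as a worked example. The coefficient table below is the published Basic COCOMO one; only the 32 KLOC input is an arbitrary illustration:

```python
def cocomo_basic(kloc, mode="organic"):
    """Basic COCOMO: effort in person-months = a * KLOC ** b."""
    a, b = {"organic": (2.4, 1.05),
            "semi-detached": (3.0, 1.12),
            "embedded": (3.6, 1.20)}[mode]
    return a * kloc ** b

effort = cocomo_basic(32, "organic")  # person-months for a 32 KLOC project
```

The exponent b > 1 encodes a diseconomy of scale: doubling project size more than doubles the estimated effort, and more so for embedded-mode projects.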
A survey of predicting software reliability using machine learning methods - IAESIJAI
In light of technical and technological progress, software has become an urgent need in every aspect of human life, including the medicine sector and industrial control. Therefore, it is imperative that software always works flawlessly. The information technology sector has witnessed rapid expansion in recent years, and software companies can no longer rely only on cost advantages to stay competitive in the market; programmers must provide reliable and high-quality software. In order to estimate and predict software reliability using machine learning and deep learning, a brief overview is presented of the important scientific contributions to the subject of software reliability, along with the researchers' findings of highly efficient methods and techniques for predicting software reliability.
Software testing effort estimation with Cobb-Douglas function - a practical ap... - eSAT Journals
Abstract: Effort estimation is one of the critical challenges in the Software Testing Life Cycle (STLC). It is the basis for the project's effort estimation, planning, scheduling and budget planning. This paper illustrates a model whose objective is to depict the accuracy and bias variation of an organization's estimates of software testing effort through the Cobb-Douglas function (CDF). The data variables selected for building the model were believed to be vital and to have a significant impact on the accuracy of estimates. Data were gathered for completed projects in the organization across about 13 releases. All variables in this model were statistically significant at the p < 0.05 and p < 0.01 levels. The Cobb-Douglas function was selected and used for the software testing effort estimation. The results achieved with CDF were compared with the estimates provided by the area expert. The model's estimation figures are more accurate than expert judgment. CDF is one of the appropriate techniques for estimating software testing effort. The CDF model's accuracy is 93.42%.
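The Cobb-Douglas form itself is simple; this sketch shows its shape with made-up inputs and parameters (A, alpha and beta are illustrative, not the calibrated values behind the 93.42% figure):

```python
def cobb_douglas_effort(size, complexity, A=3.0, alpha=0.7, beta=0.4):
    """Testing effort = A * size**alpha * complexity**beta (illustrative values)."""
    return A * size ** alpha * complexity ** beta

e_small = cobb_douglas_effort(size=10, complexity=2)
e_large = cobb_douglas_effort(size=40, complexity=2)
ratio = e_large / e_small  # quadrupling size less than quadruples effort
```

Because the exponents sum to close to 1, the model captures near-constant returns to scale, and taking logarithms turns calibration into an ordinary linear regression on historical release data.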
Efficiency of Women's Technical Institutions By Using BCC Model Through DEA A... - IOSR Journals
This paper applies the DEA methodology to the assessment of the performance of JNTUH colleges; the indicators include the faculty, students, infrastructure and placements of the technical institutions. The results reveal those institutions that carry out these activities more efficiently. The proposed method has been used for the selection of quality attributes in a technical education setting; the performance of an institute is likely to be influenced by the quality of teachers, quality of students, infrastructure, administration, extent of training and placement, and many other factors. It is felt that quality and performance evaluation is necessary not only for appraisal but also to improve overall service quality. Finally, we discuss the existence of differences in the strengths and weaknesses between the technical institutions.
Although there has been extensive study of delivering, increasing and maintaining software quality, there has not been enough of an aide-mémoire on 'Rating a Software's Quality'. This study presents the literature review so far and also sketches the scope and need for the evolution of a rating system for software quality in the future.
A New Model for Software Cost Estimation Using Harmony Search - ijfcstjournal
Accurate and realistic estimation is always considered a great challenge in the software industry.
Software Cost Estimation (SCE) is the standard practice used to manage software projects. The estimate
determined in the initial stages of the project drives the planning of the project's other activities.
In fact, estimation is confronted with a number of uncertainties and barriers, so assessing
previous projects is essential to solve this problem. Several models have been developed for the analysis of
software projects. The classical reference method is the COCOMO model, but other methods
are also applied, such as Function Points (FP) and Lines of Code (LOC); meanwhile, experts' opinions
matter in this regard. In recent years, the growth and combination of meta-heuristic algorithms with
high accuracy have brought about great achievements in software engineering. Meta-heuristic algorithms,
which can analyze data in multiple dimensions and identify the optimum solution among them, are
analytical tools for the analysis of data. In this paper, we have used the Harmony Search (HS) algorithm for
SCE. The proposed model has been assessed on a collection of 60 standard projects from the NASA60
dataset. The experimental results show that the HS algorithm is a good way of determining the weights of the
similarity measure factors of software effort and of reducing the MRE error.
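A bare-bones Harmony Search loop of the kind used for SCE can be written directly. Here it tunes the two parameters of a power-law effort model to minimize mean MRE on invented project data (not NASA60, and not the paper's similarity-weight formulation):

```python
import random

def mmre(params, projects):
    """Mean Magnitude of Relative Error of the model effort = a * size ** b."""
    a, b = params
    return sum(abs(act - a * size ** b) / act for size, act in projects) / len(projects)

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # draw the value from harmony memory
                x = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment (small local move)
                    x = min(hi, max(lo, x + rng.uniform(-0.05, 0.05) * (hi - lo)))
            else:                              # random consideration
                x = rng.uniform(lo, hi)
            new.append(x)
        worst = max(memory, key=objective)     # replace the worst harmony if better
        if objective(new) < objective(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=objective)

# Invented (size in KLOC, actual effort) pairs roughly following 3 * size**1.1.
projects = [(5, 17.6), (10, 37.8), (20, 81.2), (40, 174.4)]
best = harmony_search(lambda p: mmre(p, projects), bounds=[(0.1, 10.0), (0.5, 2.0)])
```

The memory-consideration/pitch-adjustment/random-consideration split is the defining HS mechanic; with MMRE as the objective this is a minimal version of using HS to calibrate an effort model.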
unstructured data values with high dimensional attributes. Document clustering group ups unlabeled text
documents into meaningful clusters. Traditional clustering methods require cluster count (K) for the
document grouping process. Clustering accuracy degrades drastically with reference to the unsuitable
cluster count.
Textual data elements are divided into two types’ discriminative words and nondiscriminative
words. Only discriminative words are useful for grouping documents. The involvement of
nondiscriminative words confuses the clustering process and leads to poor clustering solution in return.
A variation inference algorithm is used to infer the document collection structure and partition of
document words at the same time. Dirichlet Process Mixture (DPM) model is used to partition
documents. DPM clustering model uses both the data likelihood and the clustering property of the
Dirichlet Process (DP). Dirichlet Process Mixture Model for Feature Partition (DPMFP) is used to
discover the latent cluster structure based on the DPM model. DPMFP clustering is performed without
requiring the number of clusters as input.
Document labels are used to estimate the discriminative word identification process. Concept
relationships are analyzed with Ontology support. Semantic weight model is used for the document
similarity analysis. The system improves the scalability with the support of labels and concept relations
for dimensionality reduction process.
Testing of Matrices Multiplication Methods on Different ProcessorsEditor IJMTER
There are many algorithms we found for matrices multiplication. Until now it has been
found that complexity of matrix multiplication is O(n3). Though Further research found that this
complexity can be decreased. This paper focus on the algorithm and its complexity of matrices
multiplication methods.
Malware is a worldwide pandemic. It is designed to damage computer systems without
the knowledge of the owner using the system. Software‟s from reputable vendors also contain
malicious code that affects the system or leaks information‟s to remote servers. Malware‟s includes
computer viruses, spyware, dishonest ad-ware, rootkits, Trojans, dialers etc. Malware detectors are
the primary tools in defense against malware. The quality of such a detector is determined by the
techniques it uses. It is therefore imperative that we study malware detection techniques and
understand their strengths and limitations. This survey examines different types of Malware and
malware detection methods.
SURVEY OF TRUST BASED BLUETOOTH AUTHENTICATION FOR MOBILE DEVICEEditor IJMTER
Practical requirements for securely demonstrating identities between two handheld
devices are an important concern. The adversary can inject a Man-In- The-Middle (MITM) attack to
intrude the protocol. Protocols that employ secret keys require the devices to share private
information in advance, in which it is not feasible in the above scenario. Apart from insecurely
typing passwords into handheld devices or comparing long hexadecimal keys displayed on the
devices’ screen, many other human-verifiable protocols have been proposed in the literature to solve
the problem. Unfortunately, most of these schemes are unsalable to more users. Even when there are
only three entities attempt to agree a session key, these protocols need to be rerun for three times.
So, in the existing method a bipartite and a tripartite authentication protocol is presented using a
temporary confidential channel. Besides, further extend the system into a transitive authentication
protocol that allows multiple handheld devices to establish a conference key securely and efficiently.
But this method detects only the outsider attacks. Method does not consider the insider attacks. So,
in the proposed method trust score based method is introduced which computes the trust values for
the nodes and provide the security. The trust score is computed has a positive influence on the
confidence with which an entity conducts transactions with that node. Network the behavior of the
node will be monitored periodically and its trust value is also updated .So depending on the behavior
of the node in the network trust relation will be established between two nodes.
GLAUCOMA is a chronic eye disease that can damage optic nerve. According to WHO It
is the second leading cause of blindness, and is predicted to affect around 80 million people by 2020.
Development of the disease leads to loss of vision, which occurs increasingly over a long period of
time. As the symptoms only occur when the disease is quite advanced so that glaucoma is called the
silent thief of sight. Glaucoma cannot be cured, but its development can be slowed down by
treatment. Therefore, detecting glaucoma in time is critical. However, many glaucoma patients are
unaware of the disease until it has reached its advanced stage. In this paper, some manual and
automatic methods are discussed to detect glaucoma. Manual analysis of the eye is time consuming
and the accuracy of the parameter measurements also varies with different clinicians. To overcome
these problems with manual analysis, the objective of this survey is to introduce a method to
automatically analyze the ultrasound images of the eye. Automatic analysis of this disease is much
more effective than manual analysis.
Survey: Multipath routing for Wireless Sensor NetworkEditor IJMTER
Reliability is playing very vital role in some application of Wireless Sensor Networks
and multipath routing is one of the ways to increase the probability of reliability. More over energy
consumption is constraint. In this paper, we provide a survey of the state-of-the-art of proposed
multipath routing algorithm for Wireless Sensor Networks. We study the design, analyze the tradeoff
of each design, and overview several presenting algorithms.
Step up DC-DC Impedance source network based PMDC Motor DriveEditor IJMTER
This paper is devoted to the Quasi Z source network based DC Drive. The cascaded
(two-stage) Quasi Z Source network could be derived by the adding of one diode, one inductor,
and two capacitors to the traditional quasi-Z-source inverter The proposed cascaded qZSI inherits all
the advantages of the traditional solution (voltage boost and buck functions in a single stage,
continuous input current, and improved reliability). Moreover, as compared to the conventional qZSI,
the proposed solution reduces the shoot-through duty cycle by over 30% at the same voltage boost
factor. Theoretical analysis of the two-stage qZSI in the shoot-through and non-shoot-through
operating modes is described. The proposed and traditional qZSI-networks are compared. A
prototype of a Quasi Z Source network based DC Drive was built to verify the theoretical
assumptions. The experimental results are presented and analyzed.
SPIRITUAL PERSPECTIVE OF AUROBINDO GHOSH’S PHILOSOPHY IN TODAY’S EDUCATIONEditor IJMTER
The paper reflects the spiritual philosophy of Aurobindo Ghosh which is helpful in today’s
education. In 19th century he wrote about spirituality, in accordance with that it is a core and vital part
of today’s education. It is very much essential for today’s kid. Here I propose the overview of that
philosophy.At the utmost regeneration of those values in today’s generation is the great deal with
education system. To develop the values and spiritual education in the youngers is the great moto of
mine. It is the materialistic world and without value redefinition among them is the harder task but not
difficult.
Software Quality Analysis Using Mutation Testing SchemeEditor IJMTER
The software test coverage is used measure the safety measures. The safety critical analysis is
carried out for the source code designed in Java language. Testing provides a primary means for
assuring software in safety-critical systems. To demonstrate, particularly to a certification authority, that
sufficient testing has been performed, it is necessary to achieve the test coverage levels recommended or
mandated by safety standards and industry guidelines. Mutation testing provides an alternative or
complementary method of measuring test sufficiency, but has not been widely adopted in the safetycritical industry. The system provides an empirical evaluation of the application of mutation testing to
airborne software systems which have already satisfied the coverage requirements for certification.
The system mutation testing to safety-critical software developed using high-integrity subsets of
C and Ada, identify the most effective mutant types and analyze the root causes of failures in test cases.
Mutation testing could be effective where traditional structural coverage analysis and manual peer
review have failed. They also show that several testing issues have origins beyond the test activity and
this suggests improvements to the requirements definition and coding process. The system also
examines the relationship between program characteristics and mutation survival and considers how
program size can provide a means for targeting test areas most likely to have dormant faults. Industry
feedback is also provided, particularly on how mutation testing can be integrated into a typical
verification life cycle of airborne software. The system also covers the safety and criticality levels of
Java source code.
Single Phase Thirteen-Level Inverter using Seven Switches for Photovoltaic sy...Editor IJMTER
This paper proposes a single-phase thirteen-level inverter using seven switches, with a
novel pulse width-modulated (PWM) control scheme. The Proposed multilevel inverter output
voltage level increasing by using less number of switches driven by the multicarrier modulation
techniques. The inverter is capable of producing thirteen levels of output-voltage (Vdc, 5/6Vdc,
4/6Vdc, 3/6Vdc, 2/6Vdc, 1/6Vdc, 0, -5/6Vdc, -4/6Vdc, -3/6Vdc, -2/6Vdc, -1/6Vdc,-Vdc) from the
dc supply voltage. A digital multi carrier PWM algorithm was implemented in a Spartan 3E FPGA.
The proposed system was verified through simulation and implemented in a prototype.
SIMULATION AND ANALYSIS OF TIBC FOR INDUCTION MOTOR DRIVEEditor IJMTER
This project proposes a new low cost converter- inverter drive system for induction motor.
The converter is designed to drive a three-phase induction motor directly from PhotoVoltaic (PV)
energy source. The three phase induction motor is better because of its low cost, reliability, no
maintenance, high efficiency and good power factor. Two Inductor Boost Converter (TIBC) is used
in first stage along with the voltage doubler rectifier and snubber to develop the system. The reason
to choose TIBC is due to its high voltage gain and low input current ripple, which minimizes the
oscillations at the module operation point. To convert the boosted DC output voltage from PV
module into AC, a voltage source inverter is implemented to attain sufficient voltage to drive the
motor. As the PV cell posses the nonlinear behaviour, the Maximum Power Point Tracker (MPPT)
controller is needed to improve the utilization efficiency of the converter. The MPPT algorithm
proposed in this paper based on hill climbing algorithm, for matching the load and to boost the PV
module output voltage. PI Controller is used to control the speed of the Induction Motor. The entire
system is simulated using Matlab Simulink environment. The system is expected to be operated with
high efficiency and low cost for long lifetime.
Scientific Journal Impact Factor (SJIF): 1.711
International Journal of Modern Trends in Engineering and Research
www.ijmter.com
e-ISSN: 2349-9745
p-ISSN: 2393-8161
@IJMTER-2014, All rights Reserved 450
Software Cost Estimation Using Clustering and Ranking Scheme
Ms. S. Buvana1, Dr. M. Latha2, Mr. R. Subramanian3
1 Research Scholar
2 M.Sc., M.Phil., Ph.D., Associate Professor, Department of Computer Science
1,2 Sri Sarada College for Women, Salem, Tamilnadu, India
3 Erode Arts and Science College, Erode
Abstract- Software cost estimation is an important task in the software design and development process. Planning and budgeting are carried out with reference to the estimated cost values. A variety of software properties are used in the estimation process, including hardware, product, technology and methodology factors. The quality of a software cost estimate is measured by its accuracy.
Software cost estimation is carried out using three types of techniques: regression based models, analogy based models and machine learning models. Each type provides a set of techniques for the cost estimation process. Eleven cost estimation techniques under three different categories are used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product property values, and the ARFF file serves as the main input to the system.
The proposed system is designed to perform clustering and ranking of software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results for software cost estimation methods.
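The ARFF input mentioned above follows Weka's Attribute-Relation File Format. A minimal illustrative file might look as follows; the attribute names and values here are hypothetical, not taken from the system's actual dataset:

```text
% Hypothetical software-project records for cost estimation
@RELATION software_cost

@ATTRIBUTE team_size   NUMERIC
@ATTRIBUTE kloc        NUMERIC
@ATTRIBUTE methodology {agile, waterfall}
@ATTRIBUTE effort_pm   NUMERIC

@DATA
5, 12.4, agile, 18.0
12, 40.0, waterfall, 96.5
```

Lines beginning with % are comments; @RELATION names the dataset, each @ATTRIBUTE declares a property and its type, and @DATA introduces the records.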
I. INTRODUCTION
Cost estimation is one of the most difficult tasks in project management: accurately estimating the resources and schedules required for software development projects. The software estimation process includes estimating the size of the software product to be produced, estimating the effort required, developing preliminary project schedules and, finally, estimating the overall cost of the project. Many software cost estimation methods are now available, including algorithmic methods, estimation by analogy, expert judgment, the price-to-win method, the top-down method and the bottom-up method. No one method is necessarily better or worse than the others; their strengths and weaknesses are often complementary. Understanding these strengths and weaknesses is very important when users want to estimate their projects.
To improve the accuracy of cost estimation, software engineering is also calibrated against other fields. 2CEE is one of the cost estimation models developed by JPL for NASA [4]. This model combines data mining with software engineering; in this type of estimation, machine learning algorithms can be calibrated with cost estimation models. The main aim of the system is to survey all these models and methods and determine which generates the most accurate estimation.
II. RELATED WORK
Over the last decades there has been evolving research concerning the identification of the best SCE method [1]. Researchers have striven to introduce prediction techniques including expert judgment, algorithmic, statistical and machine learning methods. Miyazaki et al. claimed that the “de facto”
International Journal of Modern Trends in Engineering and Research (IJMTER)
Volume 02, Issue 01, [January - 2015] e-ISSN: 2349-9745, p-ISSN: 2393-8161
MMRE accuracy measure tends to favor models that underestimate the actual effort, while Kitchenham et al. indicated the variation of accuracy measures as a primary source of inconclusive studies. In this direction, Foss et al. investigated the basis of this criticism through a simulation study, proposing alternative accuracy indicators and concluding that there is a need to apply well-established statistical procedures when conducting SCE experiments.
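The MMRE criticism can be made concrete with a small sketch (the effort values are hypothetical): a model that halves every actual effort scores a lower MMRE than one that doubles it, even though both are wrong by the same factor.

```python
# Mean Magnitude of Relative Error (MMRE): mean of |actual - predicted| / actual.
# Relative error is bounded by 1.0 when a model underestimates (predicted < actual)
# but unbounded when it overestimates, so MMRE tends to favor underestimation.

def mmre(actual, predicted):
    """Return the MMRE of a set of effort predictions."""
    assert len(actual) == len(predicted) and actual
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 200.0, 400.0]
under = [50.0, 100.0, 200.0]   # underestimates every project by 50%
over = [200.0, 400.0, 800.0]   # overestimates every project by 100%

print(mmre(actual, under))  # 0.5
print(mmre(actual, over))   # 1.0
```

Both models are off by a factor of two, yet the underestimating one looks twice as accurate by MMRE, which is exactly the bias Miyazaki et al. pointed out.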
Myrtveit et al. extended the above-mentioned findings and pointed out that inconsistent results
are not caused only by accuracy measures but also by unreliable research procedures. Through a
simulation study, they studied the consequences of three main ingredients of the comparison process: the
single data sample, the accuracy indicator and the cross-validation procedure. Mittas and Angelis [5]
showed that the usual practice of promoting a model against a competitive one just by reporting an
indicator can lead to erroneous results since these indicators are single statistics of error distributions,
usually highly skewed and nonnormal. In this regard, they proposed resampling procedures for
hypothesis testing, such as permutation tests and bootstrap techniques for the construction of robust
confidence intervals.
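A percentile bootstrap interval of the kind referred to above can be sketched as follows. This is a generic illustration with made-up absolute-error values, not the authors' exact procedure:

```python
import random
import statistics

def bootstrap_ci(errors, stat=statistics.median, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic of the errors.

    Resamples the error list with replacement n_boot times, computes the
    statistic on each resample, and returns the alpha/2 and 1-alpha/2
    percentiles of the resulting distribution.
    """
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(errors) for _ in errors]) for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical absolute errors of one prediction model on ten projects:
abs_errors = [12.0, 5.0, 40.0, 8.0, 3.0, 22.0, 15.0, 7.0, 30.0, 10.0]
low, high = bootstrap_ci(abs_errors)
print(low, high)  # an interval around the sample median of 11.0
```

Because the interval is read off the resampled distribution directly, no normality assumption is needed, which is why such procedures suit the skewed error distributions discussed here.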
Menzies et al. [6] studied the problem of “conclusion instability” through the COSEEKMO
toolkit that supports 15 parametric learners with row and column preprocessors based on two different
sets of tuning parameters. The Scott-Knott test presented here was used in another context, for
combining classifiers applied to large databases. Specifically, the Scott-Knott test and other statistical
tests were used for the selection of the best subgroup among different classification algorithms and the
subsequent fusion of the models’ decisions in this subgroup via simple methods, like weighted voting.
In [7], Demsar discusses the issue of statistical tests for comparisons of several machine learning
classifiers on multiple datasets reviewing several statistical methodologies. The method proposed as
more suitable is the nonparametric analogue of ANOVA, i.e., the Friedman test, along with the
corresponding Nemenyi post hoc test. The Friedman test ranks all the classifiers separately for each
dataset and then uses the average ranks of algorithms to test whether all classifiers are equivalent. In
case of differences, the Nemenyi test performs all the pairwise comparisons between classifiers to
identify the significant differences. This method is used by Lessmann et al. [9] for the comparison of classifiers for the prediction of defective modules. The methodology described in our paper, apart from the fact that it is applied to a different problem, i.e., SCE, where cost and prediction errors are continuous variables, has fundamental differences regarding its goals, the way it is used and its output.
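The Friedman step described above can be sketched in a few lines of pure Python (the per-dataset error values are hypothetical, and this simplified version does no tie handling):

```python
def friedman_statistic(*models):
    """Friedman chi-square statistic for k models measured on n datasets.

    Ranks the models within each dataset (rank 1 = smallest error), then
    tests whether the rank sums differ from what equivalent models would give.
    """
    k, n = len(models), len(models[0])
    rank_sums = [0.0] * k
    for d in range(n):
        errors = [m[d] for m in models]
        order = sorted(range(k), key=lambda j: errors[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1)) - 3.0 * n * (k + 1)

# Hypothetical absolute errors of three models on six datasets:
model_a = [10, 12, 9, 14, 11, 13]
model_b = [15, 18, 14, 20, 16, 19]
model_c = [11, 13, 10, 15, 12, 14]

print(friedman_statistic(model_a, model_b, model_c))  # 12.0
```

Here the statistic (12.0) exceeds the chi-square critical value for k-1 = 2 degrees of freedom at the 5% level (about 5.99), so the models are not all equivalent, and a post hoc test such as Nemenyi's would then locate the pairwise differences.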
III. SOFTWARE COST ESTIMATION MODELS
The importance and the significant role of Software Cost Estimation (SCE) to the well-balanced
management of a forthcoming project are clearly portrayed through the introduction and utilization of a
large number of techniques during the past decades [1]. The rapidly increasing need for large-scale and complex software systems leads managers to regard SCE as one of the most vital activities, closely related to the success or failure of the whole development process. Inaccurate estimates can prove catastrophic to both developers and customers, since they can cause the delay of product deliverables or, even worse, the cancellation of a contract.
There is an imperative need to investigate the state of the art in statistics before deriving conclusions, which may otherwise be unstable, concerning the superiority of one prediction model over others for a particular dataset. This problem admits no unique answer, since the notion of “best” is quite subjective [2]. In fact, a practitioner can always rank the prediction models according to a predefined accuracy measure, but the critical issue is to identify how many of them are evidently the best, in the sense that their difference from all the others is statistically significant. The research question of finding the “best” prediction technique can thus be restated as the problem of identifying a subset or group of best techniques. The aim of the system is therefore to propose a statistical framework for comparative
SCE experiments concerning multiple prediction models. It is worth mentioning that the setup of the
current study was also inspired by an analogous attempt dealing with the problem of comparing
classification models in Software Defect Prediction, a research area that is also closely related to the
improvement of software quality.
The methodology is based on the analysis of a Design of Experiment (DOE) or Experimental
Design, a basic statistical tool in many applied research areas such as engineering, financial and medical
sciences. In the field of SCE it has not yet been used in a systematic manner. Generally, DOE refers to
the process of planning, designing and analyzing an experiment in order to derive valid and objective
conclusions effectively and efficiently by taking into account, in a balanced and systematic manner, the
sources of variation. In the present study [8], DOE analysis is used to compare different cost prediction
models by taking into account the blocking effect, i.e., the fact that they are applied repeatedly on the
same training-test datasets.
The statistical methodology is also based on an algorithmic procedure which is able to produce
nonoverlapping clusters of prediction models, homogeneous with respect to their predictive
performance. For this purpose, the system utilizes a specific test from the generic class of multiple
comparisons procedures, namely, the Scott-Knott test, which ranks the models and partitions them into
clusters.
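The clustering step can be illustrated with a heavily simplified sketch: sort the models' mean errors, find the binary split that maximizes the between-group sum of squares, and recurse on each side while the split explains enough variation. The real Scott-Knott test accepts a split via a likelihood-ratio statistic with a chi-square approximation; the fixed `min_bss` threshold below is only a stand-in for that test, and the error values are hypothetical.

```python
def best_split(values):
    """Index and between-group sum of squares (BSS) of the best binary split."""
    n = len(values)
    mean = sum(values) / n
    best_i, best_bss = 1, -1.0
    for i in range(1, n):
        ml = sum(values[:i]) / i
        mr = sum(values[i:]) / (n - i)
        bss = i * (ml - mean) ** 2 + (n - i) * (mr - mean) ** 2
        if bss > best_bss:
            best_i, best_bss = i, bss
    return best_i, best_bss

def scott_knott(means, min_bss=0.01):
    """Partition mean errors into homogeneous, non-overlapping clusters.

    Simplified: a split is accepted while its BSS exceeds min_bss; the real
    Scott-Knott test uses a chi-square approximation of a likelihood ratio.
    """
    means = sorted(means)
    if len(means) < 2:
        return [means]
    i, bss = best_split(means)
    if bss <= min_bss:
        return [means]
    return scott_knott(means[:i], min_bss) + scott_knott(means[i:], min_bss)

# Hypothetical mean errors of six prediction models:
print(scott_knott([0.35, 0.62, 0.31, 0.91, 0.33, 0.65]))
# → [[0.31, 0.33, 0.35], [0.62, 0.65], [0.91]]
```

The output groups models whose performance is statistically indistinguishable, which is exactly the ranked, non-overlapping partition the methodology relies on.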
The statistical framework is applied to a relatively large-scale set of 11 methods over six public domain datasets from the PROMISE repository and the International Software Benchmarking Standards Group (ISBSG) [3]. Finally, in order to address the disagreement on performance measures, the system applies the whole analysis to three functions of error that measure different important aspects of prediction techniques: accuracy, bias and spread of estimates.
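One common way to instantiate these three aspects is the magnitude, mean and standard deviation of the relative errors; the paper's exact error functions are not reproduced here, and the effort values below are hypothetical.

```python
from statistics import mean, stdev

def error_measures(actual, predicted):
    """Accuracy, bias and spread of relative errors (one common choice)."""
    rel = [(p - a) / a for a, p in zip(actual, predicted)]
    return {
        "accuracy": mean(abs(r) for r in rel),  # magnitude of relative error
        "bias": mean(rel),                      # signed: < 0 means underestimation
        "spread": stdev(rel),                   # variability of the errors
    }

actual = [100.0, 200.0, 300.0, 400.0]
predicted = [90.0, 210.0, 270.0, 380.0]
print(error_measures(actual, predicted))
# accuracy ≈ 0.075, bias ≈ -0.05 (mild underestimation), spread ≈ 0.071
```

Reporting all three together guards against a model that looks good on one function of error (say, accuracy) while being systematically biased or erratic on the others.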
IV. PROBLEM STATEMENT
The important role of well-established statistical comparisons in SCE is highlighted in many
recent studies, especially during the last decade, where findings are derived through formal
statistical hypothesis testing. Indeed, researchers use parametric as well as nonparametric
procedures, and there has also been increasing interest in more robust statistical tests such as
permutation tests and bootstrapping techniques for the construction of confidence intervals. The problem
belongs to a generic class in statistics known as “multiple hypothesis testing”, defined as the
procedure of testing more than one hypothesis simultaneously. Briefly, the
conclusions derived from a statistical hypothesis test are always subject to uncertainty. For this reason,
an acceptable maximum probability of rejecting the null hypothesis when it is true is fixed in advance;
such a false rejection is referred to as a “Type I error”. When several hypothesis tests are carried out
simultaneously, the probability that at least one Type I error occurs increases dramatically with
the number of hypotheses. Multiple Comparative Algorithms (MCA) are used to evaluate the quality of
software cost estimation measures. The following problems are identified in the current software cost
estimation models: the clustering accuracy of the MCA scheme is very low, its ranking prediction
accuracy is limited, and software product property relationships are not considered.
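The growth of the Type I error probability described above can be made concrete. For m independent tests each at significance level alpha, the family-wise error rate is 1 - (1 - alpha)^m; a small sketch (the value m = 55 corresponds to the number of pairwise comparisons among 11 models):

```python
# Family-wise error rate for m independent hypothesis tests at level alpha.
def family_wise_error_rate(alpha, m):
    """Probability of at least one false rejection (Type I error)."""
    return 1.0 - (1.0 - alpha) ** m

for m in (1, 5, 10, 55):   # 55 = C(11, 2) pairwise comparisons of 11 models
    print(m, round(family_wise_error_rate(0.05, m), 3))
```

At alpha = 0.05, the chance of at least one spurious finding already exceeds 90% for 55 comparisons, which is why a dedicated multiple-comparisons procedure is needed.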
V. ENHANCED MULTIPLE COMPARATIVE ALGORITHMS (EMCA)
The software cost estimation measures are used to estimate the software product cost for the
development process. The Multiple Comparative Algorithms are used to analyze the quality of the
software cost estimation measures. The algorithm ranks and clusters the cost prediction models based on
the errors measured for a particular dataset. Therefore, each dataset has its own set of “best” models.
This is more realistic in SCE practice since each software development organization has its own dataset
and wants to find the models that best fit its data, rather than trying to find a globally best model,
which is unfeasible [10]. Furthermore, the clustering output is different from the output of pairwise
comparison tests such as the Nemenyi test. For larger numbers of models, the overlapping homogeneous
groups resulting from pairwise tests are ambiguous and hard to interpret. A ranking and clustering
algorithm, on the other hand, provides clear groupings of models, designating the group of best
models for a particular dataset.
International Journal of Modern Trends in Engineering and Research (IJMTER), Volume 02, Issue 01, [January - 2015], e-ISSN: 2349-9745, p-ISSN: 2393-8161. @IJMTER-2014, All Rights Reserved.
The goal of the system is to further extend the research concerning the comparison and ranking
of multiple alternative SCE models. The system uses a framework for conducting comparative
experiments and presents an evaluation of this analysis over different datasets and prediction models. The
Multiple Comparative Algorithm (MCA) based quality analysis scheme for software cost estimation
measures is improved with an optimal centroid based clustering scheme. The system also uses the software
product property relationships in the cost estimation process. The Enhanced Multiple Comparative
Algorithm (EMCA) is proposed to measure and rank the cost estimation measures with high accuracy.
The MCA scheme measures the quality levels of the software cost prediction models, and a clustering
and ranking model evaluates the measures. The non-overlapping clustering mechanism used in the MCA
analysis scheme is enhanced in EMCA with an optimal centroid based clustering mechanism to improve
the cluster quality measures. The rank values are derived from the EMCA cluster results, and the
software cost estimation schemes are categorized with reference to the cluster results. The random
centroid based clustering scheme is replaced with an optimal centroid based model, improving both the
clustering accuracy and the prediction levels.
VI. SOFTWARE COST ESTIMATION USING CLUSTER ANALYSIS
The software cost estimation scheme analysis operations are evaluated with different statistical
models. The Multiple Comparative Algorithms (MCA) scheme ranks and clusters the software
cost estimation methods, and the Enhanced Multiple Comparative Algorithms (EMCA) scheme
improves the clustering accuracy and the prediction measure analysis. The system is divided into four
major modules: dataset analysis, cost estimation, the MCA scheme and the EMCA scheme. The dataset
analysis module fetches data from the datasets. The cost estimation mechanism measures
the software cost based on the software product information. The MCA scheme ranks and
clusters the software cost estimation models, and the EMCA scheme measures the quality of
the software cost estimation measures.
6.1. Dataset Analysis
The dataset analysis module is designed to load the dataset values from standard benchmark
data collections. The datasets are collected in the Attribute Relational File Format (ARFF) model,
which provides the software product details. The attributes for each ARFF file are provided in
attribute text files. The ARFF and text files are used as the input for the system. The ARFF file
details are converted into Comma Separated Values (CSV) format using the WEKA tool, and the CSV
file is used as the input for the software product cost estimation mechanism.
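The system performs this conversion with the WEKA tool, but the transformation itself is simple enough to sketch directly. A minimal ARFF-to-CSV converter, assuming a plain, dense ARFF file with no quoted commas in the values:

```python
# Minimal ARFF-to-CSV conversion sketch (dense ARFF, unquoted values).
import csv
import io

def arff_to_csv(arff_text):
    attributes, data_rows, in_data = [], [], False
    for line in arff_text.splitlines():
        line = line.strip()
        if not line or line.startswith("%"):      # skip blanks and comments
            continue
        lower = line.lower()
        if lower.startswith("@attribute"):
            attributes.append(line.split()[1])    # second token = name
        elif lower.startswith("@data"):
            in_data = True
        elif in_data:
            data_rows.append(line.split(","))
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(attributes)                   # header from @attribute lines
    writer.writerows(data_rows)
    return out.getvalue()

sample = """@relation effort
@attribute loc numeric
@attribute effort numeric
@data
1200,34.5
800,21.0"""
print(arff_to_csv(sample))
```

Real ARFF files can contain quoted or sparse values, which WEKA handles and this sketch does not.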
6.2. Cost Estimation
The cost estimation operations are carried out in the cost estimation module. The system uses
three different categories of cost estimation measures: the regression model, the analogy model and
the machine learning model. 11 different cost estimation measures are used in the system. The cost
values are estimated using the software product properties collected from the CSV input file. The cost values are
estimated and updated into the database. The cost estimation process is carried out for the selected
software product.
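The regression family is not specified in detail here; a common choice in SCE is a log-linear model, effort = a * size^b, fitted by least squares in log space. A minimal sketch with hypothetical numbers:

```python
# Log-linear regression cost model sketch: effort = a * size^b,
# fitted by ordinary least squares on log-transformed data.
import math

def fit_log_linear(sizes, efforts):
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope and intercept of the least-squares line in log space.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

def predict_effort(a, b, size):
    return a * size ** b

sizes = [10, 20, 40, 80]          # hypothetical sizes, e.g. KLOC
efforts = [25, 55, 120, 260]      # hypothetical efforts, e.g. person-months
a, b = fit_log_linear(sizes, efforts)
print(round(predict_effort(a, b, 30), 1))
```

Analogy-based and machine-learning estimators would replace the fitted formula while consuming the same CSV-derived property values.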
6.3. Ranking and Clustering with MCA
The ranking process is carried out with clustering functions for the analysis of software cost
estimation models. Multiple Comparative Algorithms are used for the ranking and clustering process,
with a non-overlapping clustering scheme and random centroids. The software cost estimation
measures are ranked by their property values, and the clustering and ranking results are produced for
the selected software product over all cost estimation measures.
6.4. Ranking and Clustering with EMCA
The Enhanced Multiple Comparative Algorithms (EMCA) scheme ranks and clusters the
software cost estimation measures using optimal centroids for the clustering process, which improves
the clustering quality. Each software cost estimation measure is analyzed with its cost and other
property values. The system produces better ranking and clustering solutions for the
software cost estimation process.
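The text does not specify how the optimal centroids are chosen; one standard alternative to random seeding is k-means++-style selection, which spreads the initial centroids out. A deterministic farthest-point variant of that idea, sketched on one-dimensional error scores:

```python
# Illustrative optimal-centroid seeding (farthest-point heuristic,
# an assumed stand-in for the paper's unspecified optimal scheme).
def optimal_centroids(points, k):
    """Pick k seeds: start near the overall mean, then repeatedly add
    the point farthest from every centroid chosen so far."""
    mean = sum(points) / len(points)
    centroids = [min(points, key=lambda p: abs(p - mean))]
    while len(centroids) < k:
        next_c = max(points, key=lambda p: min(abs(p - c) for c in centroids))
        centroids.append(next_c)
    return centroids

# Hypothetical 1-D error scores of cost estimation models.
scores = [0.10, 0.12, 0.13, 0.45, 0.48, 0.90]
print(optimal_centroids(scores, 3))
```

Unlike random seeding, this choice is reproducible and tends to place one seed in each natural group, which is the intuition behind the improved cluster quality claimed for EMCA.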
VII. PERFORMANCE ANALYSIS
The software cost estimation models are analyzed with the Multiple Comparative Algorithms
(MCA) and Enhanced Multiple Comparative Algorithms (EMCA) schemes. The clustering scheme is
improved in the EMCA model using the optimal centroid scheme. The system is tested with 11
estimation measures on standard benchmark datasets. The F-measure, purity and entropy measures are
used for the performance analysis.
7.1. Datasets
The datasets for the experimentation are derived from two sources, namely, the PROMISE
repository and the International Software Benchmarking Standards Group (ISBSG, release 10). The
main reason for this selection is that these datasets have been extensively used to empirically validate
or justify a large amount of research results, and they are also publicly available. Each dataset
contains a different number of projects and a set of independent variables with mixed-type
characteristics, whereas the dependent variable to be predicted is the actual effort. Another
criterion for the selection of the datasets was the ability to apply all the competitive prediction methods
on them; therefore, the system does not consider datasets with too many categorical variables, which
cause problems for certain methods such as regression and Neural Networks.
The ISBSG repository contains 4,106 software projects from more than 25 countries, but most of
the variables have a large number of missing values. Keeping in mind the ISBSG guidelines that
suggest filtering the data, the system discards the projects with missing values.
Moreover, an important issue in SCE is the utilization of high-quality datasets in the process of
evaluation and comparison of prediction models. Due to this fact, and following the instructions of the
ISBSG organization that low-quality projects should not be taken into account, only projects marked
with “A” in both Data Quality Rating and UFP Rating are retained. Finally, the independent
characteristics utilized in the construction of the alternative models are chosen so as to retain
compatibility with other studies.
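The filtering described above (discard projects with missing values, keep only “A”-rated projects) can be sketched as a simple predicate. The field names below are illustrative, not the actual ISBSG column names:

```python
# Sketch of the ISBSG project filter described in the text.
# Field names are hypothetical stand-ins for the real ISBSG columns.
def filter_projects(projects):
    return [
        p for p in projects
        if p.get("data_quality_rating") == "A"     # ISBSG quality rating "A"
        and p.get("ufp_rating") == "A"             # UFP rating "A"
        and all(v is not None for v in p.values()) # no missing values
    ]

projects = [
    {"effort": 3200, "size": 120, "data_quality_rating": "A", "ufp_rating": "A"},
    {"effort": None, "size": 80,  "data_quality_rating": "A", "ufp_rating": "A"},
    {"effort": 1500, "size": 60,  "data_quality_rating": "B", "ufp_rating": "A"},
]
print(len(filter_projects(projects)))   # only the first project survives
```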
7.2. Fitness Measure
The F-measure is a harmonic combination of the precision and recall values used in information
retrieval. Each cluster obtained can be considered as the result of a query, whereas each preclassified set
of documents can be considered as the desired set of documents for that query. Thus, the system can
calculate the precision P(i, j) and recall R(i, j) of each cluster j for each class i. If $n_i$ is the number of
members of class i, $n_j$ the number of members of cluster j, and $n_{ij}$ the number of members of
class i in cluster j, then P(i, j) and R(i, j) can be defined as
$$P(i, j) = \frac{n_{ij}}{n_j}, \qquad (1)$$

$$R(i, j) = \frac{n_{ij}}{n_i}. \qquad (2)$$
The corresponding F-measure F(i, j) is defined as
$$F(i, j) = \frac{2 \, P(i, j) \, R(i, j)}{P(i, j) + R(i, j)}. \qquad (3)$$
Then, the F-measure of the whole clustering result is defined as
$$F = \sum_i \frac{n_i}{n} \max_j F(i, j), \qquad (4)$$
where n is the total number of documents in the data set. In general, the larger the F-measure is, the
better the clustering result is [2].
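Equations (1)–(4) can be computed directly from the class/cluster contingency counts. A sketch, where labels[i] is the true class and clusters[i] the assigned cluster of object i (data values hypothetical):

```python
# Clustering F-measure per equations (1)-(4).
from collections import Counter

def clustering_f_measure(labels, clusters):
    n = len(labels)
    n_i = Counter(labels)                       # class sizes
    n_j = Counter(clusters)                     # cluster sizes
    n_ij = Counter(zip(labels, clusters))       # joint class/cluster counts
    total = 0.0
    for i in n_i:
        best = 0.0
        for j in n_j:
            nij = n_ij.get((i, j), 0)
            if nij == 0:
                continue
            p = nij / n_j[j]                    # precision P(i, j), eq. (1)
            r = nij / n_i[i]                    # recall R(i, j), eq. (2)
            f = 2 * p * r / (p + r)             # F(i, j), eq. (3)
            best = max(best, f)
        total += (n_i[i] / n) * best            # weighted maximum, eq. (4)
    return total

labels   = ["a", "a", "a", "b", "b", "b"]
clusters = [0, 0, 1, 1, 1, 1]
print(round(clustering_f_measure(labels, clusters), 3))
```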
Figure: 7.1. F-measure Analysis of MCA and EMCA Techniques
The F-measure analysis of the Multiple Comparative Algorithms (MCA) scheme and the
Enhanced Multiple Comparative Algorithms (EMCA) scheme is shown in figure 7.1. The results
show that the EMCA scheme increases the F-measure by 10% compared with the MCA scheme.
7.3. Purity
The purity analysis of the Multiple Comparative Algorithms (MCA) scheme and the
Enhanced Multiple Comparative Algorithms (EMCA) scheme is shown in figure 7.2. The results show
that the EMCA scheme increases the purity by 20% compared with the MCA scheme.
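The purity metric is not defined in this text; the standard definition assigns each cluster its majority class and measures the fraction of correctly assigned objects, purity = (1/n) Σ_j max_i n_ij. A sketch on hypothetical data:

```python
# Standard clustering purity (assumed definition, not given in the paper).
from collections import Counter, defaultdict

def purity(labels, clusters):
    by_cluster = defaultdict(list)
    for label, cluster in zip(labels, clusters):
        by_cluster[cluster].append(label)
    # Each cluster contributes the size of its majority class.
    correct = sum(Counter(members).most_common(1)[0][1]
                  for members in by_cluster.values())
    return correct / len(labels)

labels   = ["a", "a", "a", "b", "b", "b"]
clusters = [0, 0, 1, 1, 1, 1]
print(purity(labels, clusters))   # (2 + 3) / 6
```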
Figure: 7.2. Purity Analysis of MCA and EMCA Techniques
7.4. Entropy
Entropy reflects the homogeneity of a set of objects and thus can be used to indicate the
homogeneity of a cluster; this is referred to as cluster entropy. Lower cluster entropy indicates more
homogeneous clusters. On the other hand, the system can also measure the entropy of a prelabeled class
of objects, which indicates the homogeneity of a class with respect to the generated clusters. The less
fragmented a class is across clusters, the lower its entropy, and vice versa. This is referred to as class
entropy.
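Cluster entropy as described above can be computed as the entropy of each cluster's class distribution, averaged over clusters weighted by size; lower values indicate more homogeneous clusters. A sketch on hypothetical data:

```python
# Cluster entropy: size-weighted average entropy of the class
# distribution within each cluster (lower = more homogeneous).
import math
from collections import Counter, defaultdict

def cluster_entropy(labels, clusters):
    by_cluster = defaultdict(list)
    for label, cluster in zip(labels, clusters):
        by_cluster[cluster].append(label)
    n = len(labels)
    total = 0.0
    for members in by_cluster.values():
        counts = Counter(members)
        h = -sum((c / len(members)) * math.log2(c / len(members))
                 for c in counts.values())      # entropy of one cluster
        total += (len(members) / n) * h         # weight by cluster size
    return total

labels   = ["a", "a", "a", "b", "b", "b"]
clusters = [0, 0, 1, 1, 1, 1]
print(round(cluster_entropy(labels, clusters), 3))
```

Class entropy is obtained symmetrically by swapping the roles of labels and clusters in the same computation.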
Figure: 7.3. Entropy Analysis of MCA and EMCA Techniques
The entropy analysis of the Multiple Comparative Algorithms (MCA) scheme and the
Enhanced Multiple Comparative Algorithms (EMCA) scheme is shown in figure 7.3. The results show
that the EMCA scheme improves (reduces) the entropy by 25% compared with the MCA scheme.
VIII. CONCLUSION AND FUTURE WORK
The software cost estimation measures are analyzed using statistical methods, with ranking and
clustering operations carried out as part of the analysis. The system deals with a critical research
issue in software cost estimation: the simultaneous comparison of alternative prediction
models, i.e., ranking them and clustering them into groups of similar performance. The system examined the
predictive power of 11 models over six public domain datasets. The system uses the Multiple
Comparative Algorithms (MCA) for the clustering and ranking process. The MCA scheme is enhanced
with optimal centroid models to improve the accuracy level. The standard benchmark data values are
used to test the quality of the ranking and clustering system.
Software cost estimation is performed with different types of estimation methods. The
system analyzes the software cost estimation methods and produces ranked results based on
quality and accuracy factors. Regression, analogy and machine learning approaches are used in the cost
estimation process. The system can be enhanced with the following features:
o The system can be enhanced to measure the maintenance cost of software products.
o The system can be adapted to measure the effort and risk levels of software products.
o The system can be improved to analyze design quality issues using the ranking and clustering
scheme.
REFERENCES
[1] M. Jorgensen and M. Shepperd, “A Systematic Review of Software Development Cost Estimation Studies,” IEEE Trans.
Software Eng., Jan. 2007.
[2] Nikolaos Mittas and Lefteris Angelis, “Ranking and Clustering Software Cost Estimation Models through a Multiple
Comparisons Algorithm”, IEEE Transactions On Software Engineering, Vol. 39, No. 4, April 2013.
[3] ISBSG Data Set 10, http://www.isbsg.org. 2007.
[4] Qiang He, Jun Han, Jean-Guy Schneider and Steve Versteeg, “Formulating Cost-Effective Monitoring Strategies for
Service-Based Systems”, IEEE Transactions On Software Engineering, May 2014.
[5] N. Mittas and L. Angelis, “Comparing Cost Prediction Models by Resampling Techniques,” J. Systems and Software, vol.
81, no. 5, pp. 616-632, May 2008.
[6] T. Menzies, D. Baker and K. Lum, “Stable Rankings for Different Effort Models,” Automated Software Eng., vol. 17, no.
4, pp. 409-437, Dec. 2010.
[7] J. Demsar, “Statistical Comparisons of Classifiers over Multiple Data Sets,” J. Machine Learning Research, 2006.
[8] Ayse Tosun Misirli and Ayse Basar Bener, “Bayesian Networks For Evidence-Based Decision-Making in Software
Engineering”, IEEE Transactions On Software Engineering, June 2014
[9] S. Lessmann, C. Mues and S. Pietsch, “Benchmarking Classification Models for Software Defect Prediction: A Proposed
Framework and Novel Findings,” IEEE Trans. Software Eng., July/Aug. 2008.
[10] Juan Manuel Vara, Alvaro Jimenez and Esperanza Marcos, “Dealing with Traceability in the MDD of Model
Transformations”, IEEE Transactions On Software Engineering, June 2014.