This document summarizes a research paper that evaluates software quality using the Choquet integral approach. The paper introduces aggregation functions commonly used to evaluate software quality, such as the arithmetic mean and weighted arithmetic mean. It then presents the Choquet integral as a new approach that considers interactions among quality criteria. The paper applies the Choquet integral to evaluate software alternatives based on ISO 9126 quality attributes, using real data from case studies. Evaluation results are compared to other aggregation methods to demonstrate the Choquet integral's ability to model interactions among criteria.
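The mechanics of the discrete Choquet integral can be sketched in a few lines. The capacity values and attribute names below are invented for illustration (the paper's actual ISO 9126 data is not reproduced here): scores are sorted ascending, and each increment is weighted by the capacity of the set of criteria reaching at least that level.

```python
def choquet(scores, mu):
    """Discrete Choquet integral of criterion scores w.r.t. a capacity mu.

    scores: dict criterion -> score in [0, 1]
    mu: dict frozenset(criteria) -> weight, monotone, with
        mu(frozenset()) = 0 and mu(all criteria) = 1.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    for i, (_, x) in enumerate(items):
        # A_i: the criteria whose score is at least the i-th smallest score.
        a_i = frozenset(c for c, _ in items[i:])
        total += (x - prev) * mu[a_i]
        prev = x
    return total

# Hypothetical capacity over two ISO 9126 attributes (values invented).
# The pair's weight exceeds the sum of the singleton weights, modelling a
# positive interaction that a weighted mean cannot express.
mu = {
    frozenset(): 0.0,
    frozenset({"reliability"}): 0.3,
    frozenset({"usability"}): 0.4,
    frozenset({"reliability", "usability"}): 1.0,
}
print(choquet({"reliability": 0.6, "usability": 0.9}, mu))  # ≈ 0.72
```

With an additive capacity this reduces to the weighted arithmetic mean, which is exactly why the Choquet integral generalizes the aggregation functions the paper compares against.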
ANP-GP Approach for Selection of Software Architecture Styles (Waqas Tariq)
Abstract Selection of a software architecture for any system is a difficult task, as many different stakeholders are involved in the selection process. Stakeholders' views on quality requirements differ and may at times conflict. Selecting appropriate styles for the software architecture is also important, as styles impact characteristics of the software (e.g. reliability, performance). Moreover, styles influence how software is built, since they determine the architectural elements (e.g. components, connectors) and the rules for integrating these elements in the architecture. Selecting the best style is difficult because multiple factors are involved, such as project risk, corporate goals, and limited availability of resources. This study therefore presents a method, called SSAS, for the selection of software architecture styles. This selection is a multi-criteria decision-making problem in which different goals and objectives must be taken into consideration. In this paper, we suggest an improved selection methodology that reflects interdependencies among evaluation criteria and alternatives using the analytic network process (ANP) within a zero-one goal programming (ZOGP) model. Keywords: Software Architecture; Selection of Software Architecture Styles; Multi-Criteria Decision Making; Interdependence; Analytic Network Process (ANP); Zero-One Goal Programming (ZOGP)
Identifying Thresholds for Distance Design-based Direct Class Cohesion (D3C2)... (IJECEIAES)
Among the phases of software system development is the design phase, whose purpose is to ensure that the software requirements can be realized in accordance with customer needs. Design quality must be assured at this phase. One indicator of design quality is cohesion, the degree of relatedness among the elements of a component. A higher cohesion value indicates that a component is more modular, has its own resources, and depends less on other components; more independent components are easier to maintain. Many metrics measure the cohesion of a component, one of which is Distance Design-Based Direct Class Cohesion (D3C2). However, many practitioners are unable to apply such metrics because no threshold exists for categorizing cohesion values. This study aims to determine a threshold for the cohesion metric based on the class diagram. The results show that the threshold for the D3C2 metric is 0.41, the value with the highest level of agreement with the design experts.
A novel method for material selection in industrial applications (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Evaluating effectiveness factor of object oriented design a testability pers... (ijseajournal)
Effectiveness is an important quality factor for testability measurement of object-oriented software at an early stage of the software development process, particularly at the design phase, for a high-quality product. It helps developers' design capability to achieve the specified functionality, characteristics, better design quality, and behavior using appropriate object-oriented design (OOD) concepts and procedures. A metric-based 'Effectiveness Quantification Model of Object Oriented Design' has been proposed by establishing the correlation between effectiveness and OOD constructs. The model is then empirically validated, and the statistical significance of the study supports the high correlation required for model acceptance. The aim of this research work is to encourage researchers and developers to include the effectiveness quantification model to assess and quantify the software effectiveness quality factor at design time.
Evaluation of Process Capability Using Fuzzy Inference System (IOSR Journals)
In many industrial settings, product quality depends on a multitude of dependent characteristics; as a consequence, attention on capability indices shifts from the univariate to the multivariate domain. In this research a fuzzy inference system is used to determine the process capability index. Fuzzy sets can represent imprecise quantities as well as linguistic terms. A fuzzy inference system (FIS) is a method, based on fuzzy theory, that maps input values to output values; the mapping mechanism is based on a set of rules, a list of if-then statements. In this research a Mamdani fuzzy inference system is used to derive the overall process capability from six crisp inputs and one output. This paper presents a novel approach to evaluating process capability based on readily available information using a fuzzy inference system.
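The if-then mapping the abstract describes can be illustrated with a toy Mamdani system. The membership functions, the rule base, and the two-input (rather than six-input) setup below are all invented for illustration; a real capability study would fit them to process data. Rules fire with min/max operators, the clipped consequents are aggregated, and centroid defuzzification yields the crisp capability value.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical universes: inputs in [0, 1], output capability index in [0, 2].
def in_low(x):   return tri(x, -0.5, 0.0, 0.5)
def in_high(x):  return tri(x, 0.5, 1.0, 1.5)
def out_poor(y): return tri(y, -1.0, 0.0, 1.0)
def out_good(y): return tri(y, 1.0, 2.0, 3.0)

def mamdani(x1, x2):
    # R1: IF x1 is HIGH AND x2 is HIGH THEN capability is GOOD  (AND -> min)
    # R2: IF x1 is LOW  OR  x2 is LOW  THEN capability is POOR  (OR  -> max)
    w_good = min(in_high(x1), in_high(x2))
    w_poor = max(in_low(x1), in_low(x2))
    # Aggregate the clipped consequents (max) over a discretized output
    # universe [0, 2], then defuzzify by centroid.
    ys = [i * 0.01 for i in range(201)]
    agg = [max(min(w_good, out_good(y)), min(w_poor, out_poor(y))) for y in ys]
    den = sum(agg)
    return sum(y * m for y, m in zip(ys, agg)) / den if den else 0.0
```

For instance, `mamdani(0.9, 0.9)` defuzzifies to a value above 1 (toward GOOD), while `mamdani(0.2, 0.3)` lands below 0.5 (toward POOR).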
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity and cost and quality, it is unclear what we have learned beyond the scope of the individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 data sets drawn from 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good quality attribute indicators. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The results provide some implications for building quality models across project types.
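The abstract does not detail the paper's tailored meta-analysis approach; a standard way to pool correlation coefficients across studies, shown here only as a generic illustration with made-up numbers, is the fixed-effect Fisher z-transform:

```python
import math

def pooled_correlation(rs, ns):
    """Fixed-effect pooled correlation via Fisher's z transform.

    rs: per-study correlation coefficients; ns: per-study sample sizes.
    Each z_i = atanh(r_i) is weighted by n_i - 3, its inverse variance.
    """
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

# Three hypothetical studies relating a complexity metric to fault counts.
print(pooled_correlation([0.45, 0.52, 0.38], [40, 120, 60]))
```

The back-transformed pooled value always lies between the smallest and largest input correlations, with larger studies pulling it harder.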
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between existing techniques in software quality engineering and thereby designing a framework for rating software quality.
Application Of Analytic Hierarchy Process And Artificial Neural Network In Bi... (IJARIDEA Journal)
Abstract— An appropriate decision to bid initiates all bid preparation steps. Selective bidding reduces the number of proposals a contractor must submit and saves tender preparation time, which can then be used to refine the estimated cost. In industrial engineering applications, the final decision is usually based on the evaluation of many alternatives, which becomes very difficult when the criteria are expressed in different units or the pertinent data are not easily quantifiable. This paper emphasizes the use of the Analytic Hierarchy Process (AHP) for analyzing the risk degree of each factor, so that a decision can be taken accordingly on an appropriate bid. AHP helps to decide the best solution among various selection criteria. The study also suggests a much broader applicability of AHP and ANN techniques to bidding decisions.
Keywords— Analytic Hierarchy Process (AHP), Artificial Neural Network (ANN), Consistency Index (CI), Consistency Ratio (CR), Random Index (RI), Risk degree.
Software Cost Estimation Using Clustering and Ranking Scheme (Editor IJMTER)
Software cost estimation is an important task in the software design and development process. Planning and budgeting are carried out with reference to the estimated cost values. A variety of software properties are used in the cost estimation process, including hardware, product, technology, and methodology factors. The quality of a software cost estimate is measured by its accuracy.
Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models, and machine learning models. Each model family has its own set of techniques for the cost estimation process. Eleven cost estimation techniques under three different categories are used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product property values; the ARFF file is the main input to the system.
The proposed system is designed to cluster and rank software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient rankings of software cost estimation methods.
A Software Measurement Using Artificial Neural Network and Support Vector Mac... (ijseajournal)
Today, software measurement is based on various techniques such as neural networks, genetic algorithms, fuzzy logic, etc. This study examines the efficiency of applying a support vector machine with a Gaussian radial basis kernel function to the software measurement problem to increase performance and accuracy. Support vector machines (SVM) are an innovative approach to constructing learning machines that minimize generalization error. There is a close relationship between SVMs and radial basis function (RBF) classifiers; both have found numerous applications, such as optical character recognition, object detection, face verification, and text categorization. The results demonstrate that the accuracy and generalization performance of the SVM with a Gaussian radial basis kernel function are better than those of the RBFN. We also examine and summarize several points on which the SVM is superior to the RBFN.
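To make the role of the Gaussian radial basis kernel concrete, the sketch below (pure Python, toy data, all values invented) uses the same kernel inside a simple discriminant that sums label-weighted kernel values. Note this is a Parzen-style kernel classifier, not a trained SVM: an SVM would additionally solve a quadratic program for the support-vector coefficients.

```python
import math

def rbf(u, v, gamma=1.0):
    """Gaussian radial basis kernel k(u, v) = exp(-gamma * ||u - v||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def kernel_score(x, examples, gamma=1.0):
    """Parzen-style discriminant: sum of label-weighted kernel values."""
    return sum(y * rbf(x, xi, gamma) for xi, y in examples)

def classify(x, examples, gamma=1.0):
    return 1 if kernel_score(x, examples, gamma) >= 0 else -1

# Tiny toy set: class +1 near (0, 0), class -1 near (3, 3).
train = [((0.0, 0.0), 1), ((0.2, 0.1), 1), ((3.0, 3.0), -1), ((2.9, 3.1), -1)]
print(classify((0.3, 0.2), train))   # point near the +1 cluster
print(classify((2.8, 2.9), train))   # point near the -1 cluster
```

The `gamma` parameter controls the kernel width: larger values localize each training point's influence, which is the same bias-variance dial tuned in both SVM-RBF and RBFN classifiers.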
Software size estimation at early stages of project development holds great significance in meeting the competitive demands of the software industry. Software size is one of the most interesting internal attributes and has been used in several effort/cost models as a predictor of the effort and cost needed to design and implement the software. With the shift toward the object-oriented paradigm, it is essential to use an accurate methodology for measuring the size of object-oriented projects. The class point approach quantifies classes, the logical building blocks of the object-oriented paradigm. In this paper, we propose a class point based approach for software size estimation of On-Line Analytical Processing (OLAP) systems. OLAP is an approach to swiftly answering decision support queries based on a multidimensional view of data. Materialized views can significantly reduce the execution time of decision support queries. We perform a case study based on the TPC-H benchmark, which is representative of OLAP systems. We use a greedy approach to determine a good set of views to materialize. After finding the number of views, the class point approach is used to estimate the size of an OLAP system. The results of our approach are validated.
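The greedy view-selection step can be sketched as follows. The candidate views and per-query savings are invented for illustration; a real implementation would compute savings from the TPC-H query workload and the data-cube lattice.

```python
def greedy_views(candidates, k):
    """Greedily select k views to materialize.

    candidates: dict view -> dict(query -> cost saving if materialized).
    Each step picks the view with the largest marginal benefit: the extra
    saving beyond what already-selected views provide for each query.
    """
    selected, best_saving = [], {}
    for _ in range(k):
        def marginal(v):
            return sum(max(s - best_saving.get(q, 0), 0)
                       for q, s in candidates[v].items())
        v = max((v for v in candidates if v not in selected), key=marginal)
        selected.append(v)
        for q, s in candidates[v].items():
            best_saving[q] = max(best_saving.get(q, 0), s)
    return selected

# Hypothetical savings (arbitrary cost units) for three candidate views
# over three decision-support queries.
cands = {
    "v1": {"q1": 50, "q2": 30},
    "v2": {"q1": 45, "q3": 40},
    "v3": {"q2": 10},
}
print(greedy_views(cands, 2))  # ['v2', 'v1']
```

Note how v1's benefit shrinks after v2 is chosen, because v2 already covers most of q1's saving; this diminishing marginal benefit is what the greedy heuristic exploits.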
VISUALIZATION OF A SYNTHETIC REPRESENTATION OF ASSOCIATION RULES TO ASSIST EX... (csandit)
To help the expert validate association rules, several quality measures are proposed in the literature. We distinguish two categories: objective and subjective measures. The first depends on a fixed threshold and on the structure of the data from which the rules are extracted. The second has two subcategories: the first provides the expert with a tool for interactive rule exploration, presenting the rules in textual form; the second uses visualization systems to facilitate the rule mining task. However, this last subcategory assumes that experts have the statistical knowledge to interpret and validate association rules. Furthermore, statistical methods lack semantic representation and cannot help experts during the validation process. To solve this problem, we propose in this paper a method that presents to the expert a synthetic representation of association rules as a formal conceptual graph (FCG). The FCG represents the expert's area of interest and, thanks to its semantic richness, allows the rule mining task to be carried out easily.
Software cost estimation is a key open issue for the software industry, which frequently suffers from cost overruns. The most popular technique for object-oriented software cost estimation is the Use Case Points (UCP) method; however, it has two major drawbacks: the uncertainty of the cost factors and the abrupt classification. To address these two issues, this work refines the use case complexity classification using fuzzy logic theory, which mitigates the uncertainty of the cost factors and improves the accuracy of the classification.
Software estimation is a crucial task in software engineering. It encompasses cost, effort, schedule, and size. The importance of software estimation becomes critical in the early stages of the software life cycle, when the details of the software have not yet been revealed. Several commercial and non-commercial tools exist to estimate software in the early stages. Most software effort estimation methods require software size as one of the important metric inputs; consequently, software size estimation in the early stages becomes essential.
The proposed method presents a technique using fuzzy logic theory to improve the accuracy of the use case points method by refining the use case classification.
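The idea of replacing the abrupt UCP classification with a gradual one can be sketched as follows. The crisp class weights (simple=5, average=10, complex=15, by transaction count) follow the standard UCP method, but the membership breakpoints below are invented for illustration, not the paper's calibrated values.

```python
def mu_simple(t):
    # Fully "simple" up to 3 transactions, fading out by 5.
    return 1.0 if t <= 3 else max(0.0, (5 - t) / 2)

def mu_average(t):
    # Fades in over 3..5, plateau over 5..7, fades out over 7..9.
    if t <= 3 or t >= 9:
        return 0.0
    if t < 5:
        return (t - 3) / 2
    if t <= 7:
        return 1.0
    return (9 - t) / 2

def mu_complex(t):
    # Fully "complex" from 9 transactions, fading in from 7.
    return 1.0 if t >= 9 else max(0.0, (t - 7) / 2)

def fuzzy_use_case_weight(t):
    """Membership-weighted average of the crisp UCP weights 5, 10, 15."""
    mus = [(mu_simple(t), 5), (mu_average(t), 10), (mu_complex(t), 15)]
    return sum(m * w for m, w in mus) / sum(m for m, _ in mus)

# A use case with 4 transactions gets 7.5 instead of jumping from 5 to 10
# at the class boundary, which is the "abrupt classification" being fixed.
print(fuzzy_use_case_weight(2), fuzzy_use_case_weight(4),
      fuzzy_use_case_weight(10))  # 5.0 7.5 15.0
```

Away from the boundaries the fuzzy weight coincides with the crisp one, so the refinement only changes estimates for borderline use cases, where the crisp method is least trustworthy.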
The presentation slides for workshop 4, “Structuring online collaboration through 3 Ts: task, time & teams”, at the 2nd STELLARnet Alpine Rendez-Vous in the French Alps, 27th to 31st March 2011.
Predicting the Presence of Learning Motivation in Electronic Learning: A New ... (TELKOMNIKA JOURNAL)
Research affirms that electronic learning (e-learning) offers many advantages to the learning process: it provides learners with flexibility in terms of time, place, and pace. That easiness is no guarantee of good learning outcomes, however; even when students receive the same learning material and course, outcomes differ from one learner to another and are often unsatisfactory. One prerequisite for a successful e-learning process is the presence of learning motivation, which is not easy to identify. We propose a novel model to predict the presence of learning motivation in e-learning using attributes identified in previous research. The model was built with the WEKA toolkit by comparing fourteen tree-classifier algorithms under ten-fold cross-validation on 3,200 data sets. The best accuracy reached 91.1%, and four parameters were identified for predicting the presence of learning motivation in e-learning, with the aim of helping teachers identify whether a student needs motivation. The study also confirmed that tree classifiers still have the best accuracy for predicting and classifying academic performance, reaching an average of 90.48% across the fourteen algorithms.
Parallel and distributed genetic algorithm with multiple objectives to impro... (khalil IBRAHIM)
We argue that the timetabling problem reflects the problem of scheduling university courses: a range of time periods and a group of instructors must be assigned to a set of lectures so that one set of constraints is satisfied and the cost of violating the remaining constraints is reduced. This problem is NP-hard, meaning, informally, that the operations necessary to solve it grow exponentially with the size of the problem. Timetable construction is among the most complicated problems facing many universities, and it grows with the size of the university's data and the overlap of disciplines between colleges. When a traditional evolutionary algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, offers an opportunity to solve extremely high-dimensional problems through distributed coevolution using a divide-and-conquer mechanism. Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and facilitating multi-objective search. By employing different distribution models to parallelize the processing of EAs, we designed a genetic algorithm suited to a university environment and the constraints faced when building a lecture timetable.
For years, the machine learning community has focused on developing efficient algorithms that can produce very accurate classifiers. However, it is often much easier to find several good classifiers based on dataset combination than a single classifier applied to different datasets. The advantages of using classifier dataset combinations instead of a single one are twofold: they lower computational complexity by using simpler models, and they can improve classification accuracy and performance. Most data mining applications are based on pattern matching algorithms, so improving classification performance has a positive impact on the quality of the overall data mining task. Since combination strategies have proved very useful in improving performance, these techniques have become very important in applications such as cancer detection, speech technology, and natural language processing. The aim of this paper is to propose a proprietary metric, the Normalized Geometric Index (NGI), based on the latent properties of datasets, for improving the accuracy of data mining tasks.
Size and Time Estimation in Goal Graph Using Use Case Points (UCP): A Survey (IJERA Editor)
To achieve its ideal status and meet the demands of stakeholders, each organization should follow its vision and long-term plan. Goals and strategies are two fundamental bases of vision and mission. Goals identify the framework of the organization within which processes, rules and resources are designed. Goals are modelled as a graph structure by extracting, classifying and determining requirements and their relations. The goal graph shows the goals that should be satisfied to guarantee the right course for the organization. These goals can also be seen as predefined sub-projects that the business management unit should consider and analyse. If the approximate size and time of each part are known, better management plans can be designed, resulting in more success and fewer failures. This paper surveys how the use case points method is used to calculate size and time in a goal graph.
Hybrid Method HVS-MRMR for Variable Selection in Multilayer Artificial Neural... (IJECEIAES)
Variable selection is an important technique for reducing the dimensionality of data, frequently used in preprocessing for data mining. This paper presents a new variable selection algorithm that combines heuristic variable selection (HVS) with Minimum Redundancy Maximum Relevance (MRMR): we enhance the HVS method by incorporating the MRMR filter. Our algorithm follows a wrapper approach using a multi-layer perceptron; we call it the HVS-MRMR wrapper for variable selection. The relevance of a set of variables is measured by a convex combination of the relevance given by the HVS criterion and the MRMR criterion. We evaluate the performance of HVS-MRMR on eight benchmark classification problems. The experimental results show that HVS-MRMR selects fewer variables with higher classification accuracy than MRMR, HVS, and no variable selection on most datasets. HVS-MRMR can be applied to various classification problems that require high classification accuracy.
LOAD DISTRIBUTION COMPOSITE DESIGN PATTERN FOR GENETIC ALGORITHM-BASED AUTONO... (ijsc)
Current autonomic computing systems are ad hoc solutions designed and implemented from scratch. When designing software, in most cases two or more patterns must be composed to solve a bigger problem. A composite design pattern shows a synergy that makes the composition more than just the sum of its parts, leading to ready-made software architectures. As far as we know, there are no studies on the composition of design patterns for the autonomic computing domain. In this paper we propose a pattern-oriented software architecture for self-optimization in autonomic computing systems, using design pattern composition and multi-objective evolutionary algorithms, that software designers and programmers can exploit to drive their work. The main objective of the system is to reduce the load on the server by distributing the population to clients. We used the Case Based Reasoning, Database Access, and Master Slave design patterns. We evaluate the effectiveness of our architecture with and without design pattern composition. The use of composite design patterns in the architecture and quantitative measurements are presented, and a simple UML class diagram is used to describe the architecture.
Automatically Estimating Software Effort and Cost using Computing Intelligenc... (cscpconf)
In the IT industry, precisely estimating the effort, development cost, and schedule of each software project counts for much to a software company, so precise estimation of manpower is becoming more important. In the past, IT companies estimated the work effort of manpower through human experts using statistical methods; however, the outcomes often failed to satisfy the management level. Recently, whether computing intelligence techniques can do better in this field has become an interesting topic. This research uses computing intelligence techniques, such as the Pearson product-moment correlation coefficient and one-way ANOVA to select key factors, and the K-Means clustering algorithm for project clustering, to estimate software project effort. The experimental results show that using computing intelligence techniques to estimate software project effort yields more precise and more effective estimates than traditional human experts.
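The project-clustering step can be sketched with plain Lloyd's k-means on a single feature. The project sizes below are made-up; the paper's actual key factors, chosen via Pearson correlation and ANOVA, would replace them.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means on one-dimensional feature values."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to its cluster's mean.
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return sorted(centroids)

# Hypothetical project sizes (KLOC) with two obvious groups near 10 and 100.
sizes = [8, 9, 11, 12, 95, 102, 98, 105]
print(kmeans(sizes, 2))  # [10.0, 100.0]
```

Effort for a new project is then estimated from the cluster it falls into, e.g. by averaging the known effort of that cluster's members.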
Machine learning approaches are good at solving problems with limited information. In most cases, software domain problems are characterized as a learning process that depends on various circumstances and changes accordingly. A predictive model is constructed using machine learning approaches to classify modules as defective or non-defective. Machine learning techniques help developers retrieve useful information after classification and enable them to analyse data from different perspectives, and they have proven useful for software bug prediction. This study uses publicly available data sets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. The results show that most of the machine learning methods performed well on the software bug datasets.
Integrated bio-search approaches with multi-objective algorithms for optimiza...TELKOMNIKA JOURNAL
Optimal selection of features is difficult but crucial, particularly for classification tasks: traditional methods select features independently and generate collections of irrelevant features, which degrades classification accuracy. The goal of this paper is to leverage bio-inspired search algorithms, together with a wrapper, to optimize the multi-objective algorithms ENORA and NSGA-II so that they generate an optimal set of features. The main steps are to combine ENORA and NSGA-II with suitable bio-search algorithms in which multiple subset generation has been implemented, and then to validate the optimum feature set through subset evaluation. Eight comparison datasets of various sizes were deliberately selected for testing. The results show that combining ENORA and NSGA-II with the selected bio-inspired search algorithm is a promising way to reach a better optimal solution (i.e. the best features with higher classification accuracy) on the selected datasets. This finding implies that bio-inspired wrapper/filter algorithms can boost the efficiency of ENORA and NSGA-II for feature selection and classification.
Software reliability models (SRMs) are very important for estimating and predicting software reliability in the testing/debugging phase. The contributions of this paper are as follows. First, a historical review of the Gompertz SRM is given. Then, based on several software failure data sets, the parameters of the Gompertz software reliability model are estimated using two estimation methods, traditional maximum likelihood and least squares. The estimation methods are evaluated using the MSE and R-squared criteria. The results show that least-squares estimation is an attractive method in terms of predictive performance and can be used when the maximum likelihood method fails to give good prediction results.
A methodology to evaluate object oriented software systems using change requi...ijseajournal
It is a well-known fact that software maintenance plays a major role in, and is important to, the software development life cycle. As object-oriented programming has become the standard, it is very important to understand the problems of maintaining object-oriented software systems. This paper aims at evaluating object-oriented software systems through a change-requirement-traceability-based impact analysis methodology for non-functional requirements using functional requirements. The major issues have been related to change impact algorithms and inheritance of functionality.
It is a well-known fact that precise definitions play a significant role in the development of correct and robust software. It has been recognized and emphasized that an appropriately defined formal conceptual framework of the context/problem domain proves quite useful in ensuring precise definitions, including those for software metrics, that are consistent, unambiguous, and language independent. In this paper, a formal conceptual framework for defining metrics for component-based systems is proposed, where the framework formalises the behavioural aspects of the problem domain. The framework for structural aspects has been discussed in another paper.
A method for business process reengineering based on enterprise ontologyijseajournal
Business Process Reengineering (BPR) increases an enterprise's chance of surviving competition among organizations, but the failure rate among reengineering efforts is high, so new methods that decrease failure are needed. In this paper, a business process reengineering method is presented that uses an Enterprise Ontology for modelling the current system; its goal is to improve analysis of the current system and to decrease the failure rate of BPR as well as the cost and time of performing processes. In this method, instead of modelling only the processes, the processes are modelled in the enterprise ontology together with their interactions and relations, environment, staff, and customers. Also, in the step of choosing processes for reengineering, after the initial choice, the processes that, according to the enterprise ontology, have the most connections with the chosen ones are also selected for reengineering. Finally, the method is applied to a company, and the As-Is and To-Be processes are simulated and compared using the ARIS Report and Simulation Experiment tools.
Application Of Analytic Hierarchy Process And Artificial Neural Network In Bi...IJARIDEA Journal
Abstract — An appropriate decision to bid initiates all bid preparation steps. Selective bidding reduces the number of proposals a contractor must submit and saves tender preparation time, which can instead be used to refine the estimated cost. In industrial engineering applications the final decision is usually based on the evaluation of many alternatives, which becomes very difficult when the criteria are expressed in different units or the pertinent data are not easily quantifiable. This paper emphasizes the use of the Analytic Hierarchy Process (AHP) for analyzing the risk degree of each factor, so that the decision on an appropriate bid can be taken accordingly; AHP helps choose the best solution from various selection criteria. The study also suggests a much broader applicability of AHP and ANN techniques to bidding decisions.
Keywords — Analytic Hierarchy Process (AHP), Artificial Neural Network (ANN), Consistency Index (CI), Consistency Ratio (CR), Random Index (RI), Risk degree.
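The consistency check behind the CI and CR keywords can be made concrete. For an n×n pairwise comparison matrix, CI = (λmax − n)/(n − 1) and CR = CI/RI, where RI is Saaty's random index; a CR below 0.10 is conventionally acceptable. The example matrix of three bid-risk factors is hypothetical:

```python
def lambda_max(A, iters=100):
    """Principal eigenvalue of a positive matrix via power iteration,
    normalizing by the infinity norm each step."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Saaty's random index for matrices of order 1..5
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def consistency_ratio(A):
    n = len(A)
    ci = (lambda_max(A) - n) / (n - 1)   # consistency index
    return ci / RI[n]                    # consistency ratio

# hypothetical pairwise comparisons of three bid-risk factors
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
cr = consistency_ratio(A)   # below 0.10, so the judgments are acceptably consistent
```

For this matrix λmax ≈ 3.04, giving CI ≈ 0.02 and CR ≈ 0.03, well inside the acceptable range.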
Software Cost Estimation Using Clustering and Ranking SchemeEditor IJMTER
Software cost estimation is an important task in the software design and development process. Planning and budgeting tasks are carried out with reference to the software cost values. A variety of software properties are used in the cost estimation process, including hardware, product, technology, and methodology factors. The quality of a software cost estimate is measured by its accuracy.

Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models, and machine learning models, each with its own set of estimation techniques. Eleven cost estimation techniques under these three categories are used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product property values, and the ARFF file is the main input to the system.

The proposed system is designed to cluster and rank software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results for software cost estimation methods.
A Software Measurement Using Artificial Neural Network and Support Vector Mac...ijseajournal
Today, software measurement is based on various techniques such as neural networks, genetic algorithms, and fuzzy logic. This study examines the efficiency of applying a support vector machine with a Gaussian radial basis kernel function to the software measurement problem to increase performance and accuracy. Support vector machines (SVMs) are an innovative approach to constructing learning machines that minimize generalization error. There is a close relationship between SVMs and Radial Basis Function (RBF) classifiers; both have found numerous applications, such as optical character recognition, object detection, face verification, and text categorization. The results demonstrate that the accuracy and generalization performance of the SVM with a Gaussian radial basis kernel is better than that of the RBFN. We also examine and summarize several points on which the SVM is superior to the RBFN.
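The Gaussian radial basis kernel shared by both models is simple to state; a minimal sketch follows, where the γ value and sample points are illustrative:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel K(x, y) = exp(-gamma * ||x - y||^2):
    1.0 for identical inputs, decaying toward 0 as the inputs move apart."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

k_same = rbf_kernel((1.0, 2.0), (1.0, 2.0))   # identical inputs -> 1.0
k_near = rbf_kernel((1.0, 2.0), (1.5, 2.0))   # close inputs -> high similarity
k_far  = rbf_kernel((1.0, 2.0), (4.0, 6.0))   # distant inputs -> near 0
```

In an SVM this kernel replaces an explicit feature mapping, while in an RBFN the same Gaussian serves as the hidden-unit activation, which is the close relationship the abstract refers to.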
Software size estimation at early stages of project development holds great significance in meeting the competitive demands of the software industry. Software size is one of the most interesting internal attributes and has been used in several effort/cost models as a predictor of the effort and cost needed to design and implement the software. The whole world is moving toward the object-oriented paradigm, so it is essential to use an accurate methodology for measuring the size of object-oriented projects. The class point approach is used to quantify classes, the logical building blocks of the object-oriented paradigm. In this paper, we propose a class-point-based approach for software size estimation of On-Line Analytical Processing (OLAP) systems. OLAP is an approach to swiftly answering decision support queries based on a multidimensional view of data, and materialized views can significantly reduce the execution time of such queries. We perform a case study based on the TPC-H benchmark, which is representative of OLAP systems, using a greedy approach to determine a good set of views to materialize. After finding the number of views, the class point approach is used to estimate the size of the OLAP system. The results of our approach are validated.
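The greedy determination of views to materialize can be sketched as follows. This is a simplified benefit-based greedy in the spirit of classic view-selection algorithms, not the paper's exact procedure, and the view sizes and query sets are made up:

```python
def greedy_views(views, query_cost, k):
    """Pick up to k views to materialize, each round choosing the view whose
    materialization most reduces the total rows scanned by the queries.
    views: {name: (size_in_rows, set of queries the view can answer)}
    query_cost: {query: current cheapest source size (base table at start)}"""
    chosen, cost = [], dict(query_cost)
    for _ in range(k):
        def benefit(name):
            size, answerable = views[name]
            return sum(max(0, cost[q] - size) for q in answerable)
        candidates = [v for v in views if v not in chosen]
        best = max(candidates, key=benefit)
        if benefit(best) <= 0:       # no remaining view helps
            break
        chosen.append(best)
        size, answerable = views[best]
        for q in answerable:         # queries now use the cheaper source
            cost[q] = min(cost[q], size)
    return chosen

# hypothetical setup: base table of 1000 rows, three candidate aggregate views
views = {
    "v1": (100, {"q1", "q2", "q3"}),
    "v2": (50,  {"q1"}),
    "v3": (400, {"q1", "q2", "q3", "q4"}),
}
query_cost = {q: 1000 for q in ("q1", "q2", "q3", "q4")}
chosen = greedy_views(views, query_cost, k=2)
```

In this toy instance the first round picks v1 (largest total saving) and the second picks v3 (the only view that helps q4), so the greedy choice is ["v1", "v3"].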
VISUALIZATION OF A SYNTHETIC REPRESENTATION OF ASSOCIATION RULES TO ASSIST EX...csandit
To help the expert validate association rules, several quality measures have been proposed in the literature. We distinguish two categories: objective and subjective measures. The first depends on a fixed threshold and on the structure of the data from which the rules are extracted. The second has two subcategories: the first provides the expert with a tool for interactive rule exploration, presenting the rules in textual form; the second uses visualization systems to facilitate the rule mining task. However, this last subcategory assumes that experts have the statistical knowledge needed to interpret and validate association rules. Furthermore, statistical methods lack semantic representation and cannot help the experts during the validation process. To solve this problem, we propose in this paper a method that shows the expert a synthetic representation of association rules as a formal conceptual graph (FCG). The FCG represents the expert's area of interest and, thanks to its semantic richness, makes the rule mining task easy to carry out.
Software cost estimation is a key open issue for the software industry, which frequently suffers from cost overruns. The most popular technique for object-oriented software cost estimation is the Use Case Points (UCP) method; however, it has two major drawbacks: the uncertainty of the cost factors and the abrupt classification. To address these two issues, the use case complexity classification is refined using fuzzy logic theory, which mitigates the uncertainty of the cost factors and improves the accuracy of the classification.

Software estimation is a crucial task in software engineering, encompassing cost, effort, schedule, and size. Its importance is greatest in the early stages of the software life cycle, when the details of the software have not yet been revealed. Several commercial and non-commercial tools exist to estimate software in these early stages. Most software effort estimation methods require software size as one of their important metric inputs, so early software size estimation is essential.

The proposed method uses fuzzy logic theory to improve the accuracy of the use case points method by refining the use case classification.
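The "abrupt classification" drawback can be made concrete with the standard UCP use-case weights, where one extra transaction can jump the weight from 5 to 10. The fuzzified variant below is an illustrative linear interpolation, not the paper's actual membership functions:

```python
def crisp_weight(transactions):
    """Standard UCP use-case weights: <=3 transactions -> 5 (simple),
    4-7 -> 10 (average), >7 -> 15 (complex)."""
    if transactions <= 3:
        return 5.0
    if transactions <= 7:
        return 10.0
    return 15.0

def fuzzy_weight(transactions):
    """Illustrative smoothed weights: interpolate linearly between assumed
    class centres so the weight rises gradually instead of jumping."""
    centres = [(3.0, 5.0), (5.5, 10.0), (8.0, 15.0)]   # assumed, not from the paper
    if transactions <= centres[0][0]:
        return 5.0
    if transactions >= centres[-1][0]:
        return 15.0
    for (x0, w0), (x1, w1) in zip(centres, centres[1:]):
        if x0 <= transactions <= x1:
            return w0 + (w1 - w0) * (transactions - x0) / (x1 - x0)

jump = crisp_weight(4) - crisp_weight(3)      # the abrupt step: 5.0
smooth = fuzzy_weight(4) - fuzzy_weight(3)    # a gradual rise: 2.0
```

Going from 3 to 4 transactions moves the crisp weight by a full class (5 points) but the smoothed weight by only 2, which is the kind of boundary behaviour the fuzzy refinement targets.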
The presentation slides for workshop 4, “Structuring online collaboration through 3 Ts: task, time & teams”, at the 2nd STELLARnet Alpine Rendez-Vous in the French Alps, 27th to 31st March 2011.
Predicting the Presence of Learning Motivation in Electronic Learning: A New ...TELKOMNIKA JOURNAL
Research affirms that electronic learning (e-learning) offers many advantages to the learning process, providing learners with flexibility in time, place, and pace. That ease, however, does not guarantee good learning outcomes: even with the same material and course, outcomes differ from one learner to another and are often unsatisfactory. One prerequisite for a successful e-learning process is the presence of learning motivation, which is not easy to identify. We propose a novel model for predicting the presence of learning motivation in e-learning using attributes identified in previous research. The model was built with the WEKA toolkit by comparing fourteen tree-classifier algorithms under ten-fold cross-validation on 3,200 data sets. The best accuracy reached 91.1%, and four parameters were identified for predicting the presence of learning motivation in e-learning, with the aim of helping teachers identify whether a student needs motivation. The study also confirmed that tree classifiers retain the best accuracy for predicting and classifying academic performance, averaging 90.48% across the fourteen algorithms.
Parallel and distributed genetic algorithm with multiple objectives to impro...khalil IBRAHIM
We argue that the timetabling problem reflects the problem of scheduling university courses: a range of time periods and a group of instructors must be assigned to a set of lectures so as to satisfy a set of hard constraints and reduce the cost of violating soft ones. This is an NP-hard problem, a class of problems for which, informally, the number of operations needed to solve the problem grows exponentially with its size. Constructing a timetable is among the most complicated problems facing many universities, and the difficulty grows with the size of the university's data and the overlap of disciplines between colleges. When a traditional evolutionary algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, offers an opportunity to solve extremely high-dimensional problems through distributed coevolution using a divide-and-conquer mechanism. Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search. By employing different distribution models to parallelize the processing of EAs, we designed a genetic algorithm suited to a university environment and the constraints it faces when building a lecture timetable.
For years, the machine learning community has focused on developing efficient algorithms that produce very accurate classifiers. However, it is often much easier to find several good classifiers based on dataset combination than a single classifier applied to different datasets. The advantages of using classifier-dataset combinations instead of a single one are twofold: they lower computational complexity by using simpler models, and they can improve classification accuracy and performance. Most data mining applications are based on pattern matching algorithms, so improving classification performance has a positive impact on the quality of the overall data mining task. Since combination strategies have proved very useful in improving performance, these techniques have become important in applications such as cancer detection, speech technology, and natural language processing. The aim of this paper is to propose a proprietary metric, the Normalized Geometric Index (NGI), based on the latent properties of datasets, for improving the accuracy of data mining tasks.
Size and Time Estimation in Goal Graph Using Use Case Points (UCP): A SurveyIJERA Editor
In order to reach its ideal state and meet the demands of stakeholders, each organization should follow its vision and long-term plan. Goals and strategies are the two fundamental bases of vision and mission: goals identify the framework of the organization within which processes, rules, and resources are designed. Goals are modelled in a graph structure by extracting, classifying, and determining requirements and their relations. The goal graph shows the goals that should be satisfied to guarantee the organization stays on the right route. These goals can also be viewed as predefined sub-projects that the business management unit should consider and analyse. Knowing the approximate size and time of each part allows better management plans, more success, and fewer failures. This paper surveys how the use case points method is used to calculate size and time in a goal graph.
Hybrid Method HVS-MRMR for Variable Selection in Multilayer Artificial Neural...IJECEIAES
Variable selection is an important technique for reducing the dimensionality of data, frequently used in data preprocessing for data mining. This paper presents a new variable selection algorithm that uses Heuristic Variable Selection (HVS) and Minimum Redundancy Maximum Relevance (MRMR): we enhance the HVS method for variable selection by incorporating an MRMR filter. Our algorithm is based on a wrapper approach using a multi-layer perceptron; we call it the HVS-MRMR wrapper for variable selection. The relevance of a set of variables is measured by a convex combination of the relevance given by the HVS criterion and the MRMR criterion. This approach selects new relevant variables; we evaluate the performance of HVS-MRMR on eight benchmark classification problems. The experimental results show that HVS-MRMR selects fewer variables with higher classification accuracy than MRMR, HVS, and no variable selection on most datasets. HVS-MRMR can be applied to various classification problems that require high classification accuracy.
Project risk management model based on prince2 and scrum frameworksijseajournal
Agile methods grew out of the real-life project experiences of leading software professionals who had experienced the challenges and limitations of traditional waterfall development methodologies on project after project. The agile development frameworks are widely used, yet they contain no risk management techniques, because it is believed that short iterative development cycles will minimize any unpredictable impact on product development [1], [2]. However, in larger projects or during development of complex products, especially in a global environment, proper risk management is required. From the audit perspective, there is the clear control requirement “BAI01.10 Manage programme and project risk” defined by COBIT 5, which requires that project risks be systematically identified, analysed, responded to, monitored, and controlled, and that risks be centrally recorded [3, p. 125]. Additionally, controlling risk in software projects is considered a major contributor to project success [4].

The need to manage risks in agile project management has also been identified by various authors. The SOA principles, viewed from the agile project management perspective, were used to create a framework for understanding agile risk management strategies for global IT projects [5]. The main risk models and frameworks used by software engineers have been discussed, with the conclusion that risk management steps are required for the delivery of quality software [6], [7]. Agile methodologies do not cover the risk management knowledge area, which can be taken from project management frameworks like PMBOK [8]. Risks related to global software development projects using Scrum have been researched and a conceptual framework designed to mitigate them [9]. The increasing variety of security threats should also be managed as risks in agile development projects [10], [11].
Service-oriented computing has created new requirements for information systems development processes and methods. The adoption of service-oriented development requires service identification methods that match the challenges enterprises face. A wide variety of service identification methods (SIMs) have been proposed, but less attention has been paid to the actual requirements of such methods. This paper provides an ethnographic look at challenges in service identification based on data from 14 service identification sessions, providing insight into the practice of service identification. The findings identify two types of service identification session, and the results can be used to select the appropriate SIM based on the type of session.
Code cloning negatively affects industrial software and threatens intellectual property. This paper presents a novel approach to detecting cloned software using a bijective matching technique. The proposed approach focuses on increasing the range of similarity measures and thus enhancing the precision of the detection. This is achieved by extending the well-known stable marriage problem (SMP) and demonstrating how matches between code fragments of different files can be expressed. A prototype of the proposed approach is demonstrated on a representative scenario and shows a noticeable improvement in several aspects of clone detection, such as scalability and accuracy.
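The stable-marriage machinery at the core of the approach can be sketched with the classic Gale-Shapley algorithm; the fragment names and similarity-based preference lists below are hypothetical:

```python
def stable_match(prefs_a, prefs_b):
    """Gale-Shapley stable matching between two sets of code fragments,
    each ranking the other side by similarity (most similar first)."""
    free = list(prefs_a)                     # fragments still proposing
    next_pick = {a: 0 for a in prefs_a}      # next candidate index for each proposer
    engaged = {}                             # b -> a currently matched
    # precompute each b's ranking of the a-side for O(1) comparisons
    rank = {b: {a: i for i, a in enumerate(p)} for b, p in prefs_b.items()}
    while free:
        a = free.pop()
        b = prefs_a[a][next_pick[a]]         # propose to the best b not yet tried
        next_pick[a] += 1
        if b not in engaged:
            engaged[b] = a
        elif rank[b][a] < rank[b][engaged[b]]:
            free.append(engaged[b])          # b trades up; old partner is free again
            engaged[b] = a
        else:
            free.append(a)                   # b rejects a
    return {a: b for b, a in engaged.items()}

# hypothetical preference lists for fragments of two files, ordered by similarity
prefs_a = {"a1": ["b1", "b2"], "a2": ["b1", "b2"]}
prefs_b = {"b1": ["a2", "a1"], "b2": ["a1", "a2"]}
matching = stable_match(prefs_a, prefs_b)
```

No pair of fragments in the result would both prefer each other over their assigned partners, which is the stability property the clone-detection approach builds on.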
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in science and technology and to motivate researchers in challenging areas of science and technology.
A METHOD FOR WEBSITE USABILITY EVALUATION: A COMPARATIVE ANALYSISIJwest
ABSTRACT
Graphical user interface design in the software development process focuses on maximizing usability and the user's experience, in order to make interaction easy, flexible, and efficient for users. In this paper, we propose an approach for evaluating the usability satisfaction degree of a web-based system. The proposed method is carried out in two phases and was applied to an airline website as a case study. In the first phase, a website usability test is performed by a number of users; in the second phase, the results obtained are translated into charts for a final evaluation of the web-based system. The results achieved were satisfactory, since the weaknesses and gaps in the website were identified and solutions to avoid them were recommended. The authenticity of the results was confirmed by comparing them with user opinions acquired from a questionnaire, which demonstrates the precision with which the website is rated.
Threshold benchmarking for feature ranking techniques
In prediction modeling, the choice of features from the original feature set is crucial for accuracy and model interpretability. Feature ranking techniques rank features by their importance, but there is no consensus on the number of features at which to cut off. Thus, it becomes important to identify a threshold value or range for removing the redundant features. In this work, an empirical study is conducted to identify a threshold benchmark for feature ranking algorithms. Experiments are conducted on the Apache Click dataset with six popularly used ranker techniques and six machine learning techniques, to deduce a relationship between the total number of input features (N) and the threshold range. The area-under-the-curve analysis shows that ≃ 33-50% of the features are necessary and sufficient to yield a reasonable performance measure, with a variance of 2%, in defect prediction models. Further, we also find that log2(N) as the ranker threshold value represents the lower limit of the range.
Identification, Analysis & Empirical Validation (IAV) of Object Oriented Desi...
Metrics and measures are closely inter-related. A measure is defined as a way of quantifying the amount, dimension, capacity or size of some attribute of a product, while a metric is the unit used for measuring an attribute. Software quality is one of the major concerns that needs to be addressed and measured. Object-oriented (OO) systems require effective metrics to assess software quality. This paper is designed to identify attributes and measures that can help in determining and affecting quality attributes. The paper conducts an empirical study on the public dataset KC1 from the NASA project database, validated by applying statistical techniques like correlation analysis and regression analysis. After analysis of the data, it is found that the metrics SLOC, RFC, WMC and CBO are significant and can be treated as quality indicators, while the metrics DIT and NOC are not significant. These results have a significant impact on improving software quality.
A NEW HYBRID FOR SOFTWARE COST ESTIMATION USING PARTICLE SWARM OPTIMIZATION A...
Software Cost Estimation (SCE) is considered one of the most important areas in software engineering, as it has a well-deserved influence on the processes of cost and effort. The two factors of cost and effort in software projects determine the success and failure of projects: a project that is completed within a certain time and manpower is a successful one and will yield good profit to project managers. Most SCE techniques use algorithmic models such as the COCOMO model. The COCOMO model is not capable of estimating close approximations to the actual cost, because it is a linear model. So, models should be adapted that, along with the number of Lines of Code (LOC), have the ability to estimate effort factors in a fair and accurate fashion. Metaheuristic algorithms can be a good fit for SCE due to their ability for local and global search. In this paper, we have used a hybrid of Particle Swarm Optimization (PSO) and Differential Evolution (DE) for SCE. Test results on the NASA60 software dataset show that the Mean Magnitude of Relative Error (MMRE) of the hybrid model, in comparison with the COCOMO model, is reduced to about 9.55%.
An Elite Model for COTS Component Selection Process
Component-based software development (CBD) promises the development of high-quality, trustworthy software systems within a specified budget and deadline. The selection of the most appropriate component based on specific requirements plays a vital role in a high-quality software product. The Multi-Agent Software (MAS) engineering approach plays a crucial role in selecting the most appropriate component based on a specific requirement in a distributed environment. In this paper, a multi-agent technique is used for component selection, and a semi-automated solution to COTS component selection is proposed. It is evident from the results that MAS plays an essential role and is suitable for component selection in a distributed environment, keeping in view the system design and testing strategies.
EVALUATION OF THE SOFTWARE ARCHITECTURE STYLES FROM MAINTAINABILITY VIEWPOINT
In the process of software architecture design, different decisions are made that have system-wide impact. An important decision of the design stage is the selection of a suitable software architecture style. Lack of investigation on the quantitative impact of architecture styles on software quality attributes is the main problem in using such styles, so the use of architecture styles in design is based on the intuition of software developers. The aim of this research is to quantify the impacts of architecture styles on software maintainability. In this study, architecture styles are evaluated based on coupling, complexity and cohesion metrics and ranked by the analytic hierarchy process from a maintainability viewpoint. The main contribution of this paper is the quantification and ranking of software architecture styles from the perspective of the maintainability quality attribute at the architectural style selection stage.
VISUALIZATION OF A SYNTHETIC REPRESENTATION OF ASSOCIATION RULES TO ASSIST EX...
In order to help the expert validate association rules, some quality measures have been proposed in the literature. We distinguish two categories: objective and subjective measures. The first depends on a fixed threshold and on the structure of the data from which the rules are extracted. The second has two subcategories: the first consists of providing the expert with a tool for interactive rule exploration, presenting the rules in textual form; the second includes the use of visualization systems to facilitate the task of rule mining. However, this last subcategory assumes that experts have the statistical knowledge to interpret and validate association rules. Furthermore, the statistical methods lack semantic representation and cannot help the experts during the validation process. To solve this problem, we propose in this paper a method which presents to the expert a synthetic visual representation of association rules as a formal conceptual graph (FCG). The FCG represents the expert's area of interest and, thanks to its semantic richness, makes the task of rule mining easier.
Improving collaborative filtering using lexicon-based sentiment analysis
Since data is increasingly available on the Internet, efforts are needed to develop and improve recommender systems to produce a list of possible favorite items. In this paper, we expand our work to enhance the accuracy of Arabic collaborative filtering by applying sentiment analysis to user reviews; we also address major problems of the current work by applying effective techniques to handle the scalability and sparsity problems. The proposed approach consists of two phases: the sentiment analysis phase and the recommendation phase. The sentiment analysis phase estimates sentiment scores using a special lexicon for the Arabic dataset. Item-based and singular value decomposition-based collaborative filtering are used in the second phase. Overall, our proposed approach improves the experiments' results by reducing the average of the mean absolute and root mean squared errors on a large Arabic dataset consisting of 63,000 book reviews.
QUALITY ASSESSMENT MODEL OF THE ADAPTIVE GUIDANCE
The need for adaptive guidance systems is now recognized for all software development processes. The new needs generated by the mobility context for software development require these guidance systems to adapt both their quality and their capabilities to the possible variations of the development context. This paper deals with adaptive guidance quality to satisfy the developer's guidance needs. We propose a quality model for adaptive guidance that offers a more detailed description of the quality factors of guidance service adaptation. This description aims to assess the quality level of each guidance adaptation factor and thereby enables the evaluation of the adaptive guidance service quality.
Load Distribution Composite Design Pattern for Genetic Algorithm-Based Autono...
Current autonomic computing systems are ad hoc solutions that are designed and implemented from scratch. When designing software, in most cases two or more patterns must be composed to solve a bigger problem. A composite design pattern shows a synergy that makes the composition more than just the sum of its parts, which leads to ready-made software architectures. As far as we know, there are no studies on the composition of design patterns for the autonomic computing domain. In this paper we propose a pattern-oriented software architecture for self-optimization in an autonomic computing system, using design pattern composition and multi-objective evolutionary algorithms, which software designers and/or programmers can exploit to drive their work. The main objective of the system is to reduce the load on the server by distributing the population to clients. We used the Case Based Reasoning, Database Access, and Master Slave design patterns. We evaluate the effectiveness of our architecture with and without design pattern composition. The use of composite design patterns in the architecture and quantitative measurements are presented. A simple UML class diagram is used to describe the architecture.
AN APPROACH FOR SOFTWARE EFFORT ESTIMATION USING FUZZY NUMBERS AND GENETIC AL...
One of the most critical tasks during the software development life cycle is estimating the effort and time involved in the development of the software product. Estimation may be performed in many ways, such as expert judgment, algorithmic effort estimation, machine learning, and analogy-based estimation. Analogy-based software effort estimation is the process of identifying one or more historical projects that are similar to the project being developed and then deriving the estimates from them. Analogy-based estimation is integrated with fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of a software development lifecycle; because of the uncertainty associated with attribute measurement and data availability, fuzzy logic is introduced in the proposed model. But hardly any historical project is exactly the same as the project being estimated: in most cases even the most similar project still has a similarity distance from the project being estimated. Therefore, the effort needs to be adjusted when the most similar project has a similarity distance from the project being estimated. To adjust the reused effort, we build an adjustment mechanism whose algorithm can derive the optimal adjustment of the reused effort using a Genetic Algorithm. The proposed model combines fuzzy logic to estimate software effort in the early stages with a Genetic Algorithm-based adjustment mechanism, which may result in an estimate near the correct effort.
Testing-as-a-Service (TaaS) comes along with advances in technology to meet the various demands in software testing, currently on the rise as multiple organizations seek to adopt new technology and software tailored to their organizational needs. Information Technology (IT) has facilitated this rise as various organizations upgrade their systems, which demands continuous testing of the software, as exemplified by its multiple types, such as regression testing and penetration testing (PTaaS). TaaS offers various features and capabilities enabling software testing, presented by cutting-edge technology: external expertise provided to companies, public cloud, a test library, and community-driven and simplified infrastructure and operations.
International Journal of Software Engineering & Applications (IJSEA), Vol.6, No.1, January 2015
DOI: 10.5121/ijsea.2015.6110
ISO9126 BASED SOFTWARE QUALITY EVALUATION
USING CHOQUET INTEGRAL
Abdelkareem M. Alashqar, Ahmad Abo Elfetouh and Hazem M. El-Bakry
Information Systems Dept., Faculty of Computer and Information Sciences, Mansoura University, EGYPT
ABSTRACT
Evaluating software quality is an important and essential issue in the development process because it helps to deliver a competitive software product. Selecting the best software based on quality attributes is a type of multi-criteria decision-making (MCDM) process in which interactions among criteria should be considered. This paper presents and develops quantitative evaluations that consider interactions among criteria in MCDM problems. Aggregation methods such as the Arithmetic Mean (AM) and the Weighted Arithmetic Mean (WAM) are introduced, described and compared to the Choquet Integral (CI) approach, an aggregation based on a fuzzy measure used as a new method for MCDM. The comparisons are shown by evaluating and ranking software alternatives based on the six main quality attributes identified by the ISO 9126-1 standard. The evaluation experiments depend on real data collected from case studies.
KEYWORDS
Software Quality, Fuzzy Measure, Choquet Integral, Criteria Interaction, Multi Criteria Decision Making,
Aggregation Function, ISO9126.
1. INTRODUCTION
Software quality is an important and essential issue, especially given the current rapid change in the software development industry. The process of evaluating software quality offers various benefits during the development process, as it lets developers deliver a highly qualified software product. Quality is a relative term which differs with viewpoint: the transcendental view, product view, manufacturing view, user view and value-based view [17]. Several models have been proposed to evaluate generic software applications, like the ISO/IEC 9126 model [16], McCall model [20], Boehm model [6], FURPS model [15], Dromey model [9], and FURPS model [14]. A comparative study of software quality models can be found in [1] [26] [31]. Quality models define software product qualities as a hierarchy of factors, criteria and metrics. A quality factor represents behavioral characteristics of the system; a quality criterion is an attribute of a quality factor that is related to software production and design; while a quality metric is a measure that captures some aspect of a quality criterion [32]. Most of the works that focus on software quality evaluation do not reference the aggregation methods used to calculate the values of the different elements of the model. Sometimes a simple weighted average is used to summarize the various quality measurements into a single score, as proposed in [3] and [23]. Over the past decade, fuzzy decision support has emerged as a means of providing effective tools and techniques for solving MCDM problems. The authors in [11] used the aggregation function, a fuzzy decision-support technique, to support the MCDM process in a game theory context. An MCDM approach based on ordered weighted averaging (OWA) operators was proposed in [8] which permits a
sophisticated aggregation of measured atomic quality values using linguistic criteria to express human experts' evaluations. Recent studies, as in [21] and [32], also adopt an MCDM approach based on fuzzy measures to evaluate software quality. In this study, we introduce a fuzzy decision-support technique based on an aggregation function named the Choquet Integral (CI) into the MCDM problem of evaluating the quality of e-learning websites. The evaluation depends on the main quality attributes defined in the ISO/IEC 9126 standard, and takes into account the interactions between these quality attributes. In addition, the evaluations utilize real empirical data collected from previous studies.
The rest of this paper is organized as follows: Section 2 mentions related work. Section 3 explains the proposed method, with emphasis on the aggregation methods, especially CI. Section 4 presents the experimental results, and Section 5 provides the conclusion and future work.
2. RELATED WORK
Some research studies attempt to evaluate software quality using a fuzzy multi-criteria approach without considering interaction between criteria, as in [7] [29] and [30]. The work in [7] provides a method to estimate software quality criteria using a fuzzy multi-criteria approach; the method is used to quantify software quality for generic applications. The authors in [29] proposed an MCDM approach to software quality assessment using fuzzy measures to model software experts' decision-making processes and help them to predict/evaluate software quality; their approach helps to predict/evaluate software quality with consistently over 60% accuracy. The authors in [30] present a software quality prediction model based on a fuzzy neural network. The proposed model is a hybrid of Artificial Neural Network (ANN) and Fuzzy Logic (FL), which exploits the advantages of ANN and FL while eliminating their limitations.
A limited number of attempts apply fuzzy measure techniques to the modeling of the MCDM process in evaluating software quality attributes while considering the interaction between these quality attributes. An early study carried out by [32] presents a fuzzy integral approach as an aggregation operator to produce the overall assessment of quality attributes for web-based applications (WBA). Later work [22] proposes a methodology based on a fuzzy measure using CI for comparing different software solutions based on the software requirements specification (SRS) to a common problem. Our work is closest to the work done in [22] [32], which uses CI with a fuzzy measure to quantify the interacting software quality parameters, finds the best software among different alternatives, and also analyzes the effect of interaction among criteria on finding the best alternative. However, our work differs in that it uses CI to evaluate a specific type of software, namely e-learning websites, from the user perspective, depending on empirical data collected from previous studies. Furthermore, our evaluations are based on the ISO/IEC 9126 quality model, and while [22] assigned different values to ξ (an interaction degree among criteria as proposed by [27]) to show how it affects the evaluation results, we use real interaction values as stated in the literature for our evaluation purposes.
3. RESEARCH METHOD
This section presents a description of an aggregation function that is widely used by researchers in quantitative evaluations, followed by a description of the fuzzy measure method using CI as a new approach for evaluation.
3.1 Aggregation Function
MCDM is used when there is a set of alternatives and each is evaluated against several criteria. For example, if a user wants to select the best of several developed websites, he needs to consider the best usability, the best efficiency, and so on. The degree to which an alternative satisfies a criterion corresponds to a utility value [28]. The scores must then be combined in some way to produce an overall rating for that alternative. Such a process is very similar to an aggregation function or aggregation operator, which combines several inputs into a single representative output. Averaging aggregation is one of the most widely used types of aggregation functions.
The term “average” is commonly employed in everyday language when referring to the arithmetic mean (AM). The AM of n values is the function:

AM(x) = (1/n)(x_1 + x_2 + … + x_n) = (1/n) Σ_{i=1}^{n} x_i (1)

For example, for a given input x = (0.5, 0.2, 0.7), AM(x) = (0.5 + 0.2 + 0.7)/3 ≈ 0.47.
If some criteria are considered more important than others, it is common for the aggregation function to be additive and to take the form of a weighted arithmetic mean (WAM). The WAM is a linear function with respect to a positive weighting vector w with Σ_{i=1}^{n} w_i = 1:

WAM_w(x) = w_1 x_1 + w_2 x_2 + … + w_n x_n = Σ_{i=1}^{n} w_i x_i (2)

For example, if a given input x = (0.5, 0.2, 0.7) has corresponding weights w = (0.5, 0.3, 0.2), then WAM_w(x) = 0.5(0.5) + 0.3(0.2) + 0.2(0.7) = 0.25 + 0.06 + 0.14 = 0.45.
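The two worked examples above can be reproduced with a short runnable sketch of equations (1) and (2); the function names are ours, not from the paper:

```python
def arithmetic_mean(x):
    """AM(x) = (1/n) * sum of the inputs, Eq. (1)."""
    return sum(x) / len(x)

def weighted_arithmetic_mean(x, w):
    """WAM_w(x) = sum of w_i * x_i, Eq. (2); weights must sum to 1."""
    assert abs(sum(w) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(wi * xi for wi, xi in zip(w, x))

x = (0.5, 0.2, 0.7)
w = (0.5, 0.3, 0.2)
print(round(arithmetic_mean(x), 2))              # 0.47
print(round(weighted_arithmetic_mean(x, w), 2))  # 0.45
```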
Although the WAM is among the most often used aggregation operators [19], it is not appropriate when there is interaction among the criteria. Therefore, we choose the CI aggregation function, which uses a fuzzy measure to take the interaction among criteria into account [28].
3.2 Fuzzy Measures and Choquet Integral
A fuzzy measure (also called a capacity) μ on a set of input criteria X = {x_1, x_2, …, x_n} is a set function μ: 2^X → [0, 1] satisfying [12][28]:
(i) μ(∅) = 0 and μ(X) = 1 (boundary conditions).
(ii) A, B ⊆ X, A ⊆ B implies μ(A) ≤ μ(B) (monotonicity).
Furthermore, a capacity μ on X is said to be [12]:
• Additive if μ(A ∪ B) = μ(A) + μ(B) for all disjoint subsets A, B ⊆ X, and
• Cardinality-based if, for any B ⊆ X, μ(B) depends only on the cardinality of B.
Note that there is exactly one capacity on X that is both additive and cardinality-based. It is called the uniform capacity and denoted μ*. It is easy to verify that μ* is given by
μ*(B) = |B| / n, ∀B ⊆ X
In the framework of aggregation, for each subset of criteria A ⊆ X, the number μ(A) can be interpreted as the weight or the importance of A. The monotonicity of μ means that the weight of
a subset of criteria cannot decrease when new criteria are added to it. In the case X = {x_1, x_2, x_3}, a fuzzy measure is represented by the eight values:
μ(∅), μ({1}), μ({2}), μ({3}), μ({1,2}), μ({1,3}), μ({2,3}), μ({1,2,3})
The discrete CI with respect to a fuzzy measure μ is given by:

CI_μ(x) = Σ_{i=1}^{n} x_(i) [ μ({ j | x_j ≥ x_(i) }) − μ({ j | x_j ≥ x_(i+1) }) ] (3)

where x_(1), x_(2), …, x_(n) is a non-decreasing permutation of the input x, and x_(n+1) = ∞ by convention. For example, given an input x = (0.5, 0.2, 0.7) and the fuzzy measure values:
μ(∅) = 0
μ({1}) = 0.5
μ({2}) = 0.3
μ({3}) = 0.2
μ({1,2}) = 0.7
μ({1,3}) = 0.6
μ({2,3}) = 0.9
μ({1,2,3}) = 1
To calculate the CI, the input x is arranged in non-decreasing order, (0.2, 0.5, 0.7). The result obtained for the input x is:

CI_μ(x) = 0.2[μ({1,2,3}) − μ({1,3})] + 0.5[μ({1,3}) − μ({3})] + 0.7 μ({3})
= 0.2(1 − 0.6) + 0.5(0.6 − 0.2) + 0.7(0.2)
= 0.2(0.4) + 0.5(0.4) + 0.7(0.2) = 0.42

The resulting value falls between the minimum and maximum inputs, as expected from the bounding condition of averaging functions such as the AM.
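The worked example can be checked with a direct implementation of the discrete CI. This sketch is our own code, not from the paper; it uses the telescoping form equivalent to equation (3):

```python
def choquet_integral(x, mu):
    """Discrete Choquet integral of input x w.r.t. capacity mu.

    mu maps frozensets of 1-based criterion indices to [0, 1].
    Telescoping form: sum over the sorted inputs of
    (x_(k) - x_(k-1)) * mu({ j : x_j >= x_(k) }).
    """
    n = len(x)
    order = sorted(range(1, n + 1), key=lambda i: x[i - 1])  # non-decreasing
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        upper = frozenset(order[k:])           # criteria with value >= x_(k)
        total += (x[i - 1] - prev) * mu[upper]
        prev = x[i - 1]
    return total

# Fuzzy measure from the example above
mu = {
    frozenset(): 0.0,
    frozenset({1}): 0.5, frozenset({2}): 0.3, frozenset({3}): 0.2,
    frozenset({1, 2}): 0.7, frozenset({1, 3}): 0.6, frozenset({2, 3}): 0.9,
    frozenset({1, 2, 3}): 1.0,
}
print(round(choquet_integral((0.5, 0.2, 0.7), mu), 2))  # 0.42
```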
It should be noted that we adopted the additive fuzzy measure in this paper. A fuzzy measure is called additive if μ(A ∪ B) = μ(A) + μ(B) whenever A ∩ B = ∅, superadditive if μ(A ∪ B) ≥ μ(A) + μ(B) whenever A ∩ B = ∅, and subadditive if μ(A ∪ B) ≤ μ(A) + μ(B) whenever A ∩ B = ∅. Fuzzy measures form a rich and important family. Furthermore, when combined with integrals such as the CI, they generalize well-known aggregation operators such as the AM; in particular, the CI corresponds to the WAM when it is defined with respect to an additive fuzzy measure. We also adopted k-additivity, a concept recently developed by [10] to reduce the complexity of fuzzy measures. Interactions between criteria are then only considered for subsets of k elements or fewer, which reduces the number of variables needed to define the fuzzy measure and allows a trade-off between modelling ability and complexity. A decision maker can decide how complex a fuzzy
measure he or she wishes to consider by selecting a value k (1 ≤ k ≤ n). When k = 1 the CI becomes equivalent to the WAM, and when k = n the fuzzy measure is said to be unrestricted.
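To make the complexity reduction concrete: under the usual Möbius representation, a k-additive capacity on n criteria has at most Σ_{i=1}^{k} C(n, i) nonzero coefficients, versus one for every nonempty subset when unrestricted. A small sketch of this counting (our own, under that standard convention):

```python
from math import comb

def kadditive_coefficient_count(n, k):
    """Number of Möbius coefficients a k-additive capacity on n criteria
    may leave nonzero: one per nonempty subset of size at most k."""
    return sum(comb(n, i) for i in range(1, k + 1))

n = 6  # the six ISO/IEC 9126 attributes used in this paper
for k in (1, 2, n):
    print(k, kadditive_coefficient_count(n, k))
# k = 1 gives 6 (equivalent to the WAM); k = n gives 63 (unrestricted)
```

With n = 6, choosing k = 2 already cuts the 63 free coefficients down to 21 while still modelling all pairwise interactions.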
The Shapley value is an important concept related to the CI, used to measure the overall importance of each criterion across all coalitions. Let μ be a fuzzy measure and X = {x_1, x_2, …, x_n} be the set of criteria. The Shapley index for every criterion x_i ∈ X is defined as:

Φ_μ(i) = Σ_{A ⊆ X\{x_i}} [ (n − |A| − 1)! |A|! / n! ] [ μ(A ∪ {x_i}) − μ(A) ] (4)

The Shapley value is the vector Φ = (Φ_1, Φ_2, …, Φ_n). The index Φ_i can be interpreted as a kind of average contribution of criterion x_i over all groups of criteria. It also represents a true sharing of the total amount μ(X), since it satisfies Σ_{i=1}^{n} Φ_μ(i) = 1.
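Equation (4) can be evaluated by brute force over all coalitions. The sketch below (our own code) computes the Shapley indices of the three-criteria fuzzy measure used in the CI example above and checks that they sum to 1:

```python
from itertools import combinations
from math import factorial

def shapley_indices(mu, n):
    """Shapley index of each criterion 1..n under capacity mu, Eq. (4).

    mu maps frozensets of criterion indices to [0, 1].
    """
    phi = []
    for i in range(1, n + 1):
        rest = [j for j in range(1, n + 1) if j != i]
        total = 0.0
        for r in range(len(rest) + 1):
            for A in combinations(rest, r):
                A = frozenset(A)
                weight = factorial(n - len(A) - 1) * factorial(len(A)) / factorial(n)
                total += weight * (mu[A | {i}] - mu[A])
        phi.append(total)
    return phi

# Fuzzy measure from the CI example in Section 3.2
mu = {
    frozenset(): 0.0,
    frozenset({1}): 0.5, frozenset({2}): 0.3, frozenset({3}): 0.2,
    frozenset({1, 2}): 0.7, frozenset({1, 3}): 0.6, frozenset({2, 3}): 0.9,
    frozenset({1, 2, 3}): 1.0,
}
phi = shapley_indices(mu, 3)
print([round(p, 3) for p in phi])  # [0.333, 0.383, 0.283]
print(round(sum(phi), 6))          # 1.0
```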
A powerful capability of the fuzzy measure approach using the CI is the consideration of criteria interaction. The interaction indices are interpreted as the behaviour of criteria in groups, or as a measurement of the interaction among criteria in the decision-making process. If X = {x_1, x_2, …, x_n} is the set of criteria, then the interaction index for every set A ⊆ X is:

I_μ(A) = Σ_{B ⊆ X\A} [ (n − |B| − |A|)! |B|! / (n − |A| + 1)! ] Σ_{C ⊆ A} (−1)^{|A\C|} μ(B ∪ C) (5)

This measure covers all combinations of groups of criteria, with I_μ(A) ∈ [−1, 1]. The interaction index I_μ(ij) for each pair A = {x_i, x_j} of criteria is the most commonly used, owing to its convenient interpretation. For a pair of criteria x_i and x_j, if they have a positive interaction (complementarity), then I_μ(ij) > 0. Similarly, if x_i and x_j have a negative interaction (correlation), then I_μ(ij) < 0. When x_i and x_j have little or no interaction (independence), I_μ(ij) = 0. The index I_μ(ij) captures more than the interaction between the pair itself, because each pair is considered in the presence of all other groups of criteria.
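For pairs, equation (5) reduces to averaging μ(B ∪ {i, j}) − μ(B ∪ {i}) − μ(B ∪ {j}) + μ(B) over the subsets B containing neither i nor j. A sketch of this pairwise case (our own code), again using the fuzzy measure from the CI example in Section 3.2:

```python
from itertools import combinations
from math import factorial

def interaction_index(mu, n, i, j):
    """Pairwise interaction index I_mu(ij): Eq. (5) restricted to |A| = 2."""
    rest = [k for k in range(1, n + 1) if k not in (i, j)]
    total = 0.0
    for r in range(len(rest) + 1):
        for B in combinations(rest, r):
            B = frozenset(B)
            weight = factorial(n - len(B) - 2) * factorial(len(B)) / factorial(n - 1)
            total += weight * (mu[B | {i, j}] - mu[B | {i}] - mu[B | {j}] + mu[B])
    return total

# Fuzzy measure from the CI example in Section 3.2
mu = {
    frozenset(): 0.0,
    frozenset({1}): 0.5, frozenset({2}): 0.3, frozenset({3}): 0.2,
    frozenset({1, 2}): 0.7, frozenset({1, 3}): 0.6, frozenset({2, 3}): 0.9,
    frozenset({1, 2, 3}): 1.0,
}
# Criteria 2 and 3 interact positively; 1 and 2 interact negatively.
print(round(interaction_index(mu, 3, 2, 3), 2))  # 0.3
print(round(interaction_index(mu, 3, 1, 2), 2))  # -0.2
```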
3.3 A Comparison between WAM and CI
In the WAM aggregation method seen so far, only a single weight is considered for each data element, and there is no way to measure the importance of a set of sources. For example, in the multi-criteria decision-making problem of website evaluation, we can state that the efficiency criterion has an importance of 0.3 and that usability is more important, with a weight of 0.5; however, we have not considered the importance of efficiency and usability taken together. Fuzzy measures permit us to incorporate considerations not captured by the weights of the WAM operator. In particular, they can express redundancy, complementarity, and interactions among information sources or criteria. Therefore, tools that use fuzzy measures, such as the CI, to represent background knowledge permit the consideration of sources that are not independent. The advantage of the CI lies in the fact that the fuzzy measure can account for the importance of, and the interaction between, every subset of criteria [28].
Let us consider a decision maker in an organization who has to evaluate software products according to their level of functionality (F), usability (U), and efficiency (E). Suppose that the organization puts more importance on functionality and usability than on efficiency, with assigned weights of, for example, 0.4, 0.4, and 0.2, respectively. Applying the WAM, Product A is better than Products B and C, since it has the highest weighted score (Table 1), even though it is weak in characteristic E. If, however, the organization wants a product that performs well in all three characteristics, then Product C is the better choice, as it scores consistently across all of them. The WAM result arises because much importance is given to characteristics F and U, which are in a sense synergic. This problem can be solved by using the CI with a fuzzy measure as input. Since F and U are synergic characteristics, the weight assigned to the pair {F, U} should be less than the sum of the individual weights of F and U; hence μ(F, U) = 0.5. Since a product that is equally good at F and E should be favoured, the weight for the pair {F, E} must be greater than the sum of the individual weights of F and E, so μ(F, E) = μ(U, E) = 0.9, with μ(F, U, E) = 1.0. According to the CI approach, Product C is better than Products A and B, as it has the highest CI value (Table 1), whereas by the WAM score Product A was better than B and C. It can therefore be concluded that the fuzzy measure approach using the CI is preferable to the WAM, because it takes into account both the importance of each criterion and the interactions among criteria.
Table 1: Results of software products evaluation by WAM and CI
Product F U E WAM CI
Product A 9 8 5 7.8 6.9
Product B 4 6 8 6 6.3
Product C 7 7 8 7.2 7.9
We used the Kappalab R package [13], which includes implementations of several methods for identifying fuzzy measures (capacities) in Choquet-integral-based multi-attribute utility theory (MAUT). These methods are based on least-squares, maximum-split, minimum-variance, and minimum-distance approaches. Most capacity-identification methods proposed in the literature can be stated as optimization problems; they differ in their objective function and in the preferential information they require as input [12]. We developed a tool using RStudio [24] that utilizes these methods. The main parameter values used as input to the Kappalab methods are the number of criteria and their weights, the alternative instances, the interaction indices, and the k-additivity value. There are six criteria, representing the six ISO/IEC 9126 quality attributes. The instances of the alternatives, as well as the criteria weights, are drawn from empirical data in two previous case studies that evaluate e-learning websites, each with five instances. The criteria weights and interaction indices are identified based on industry experience. The k-additivity value corresponds to the number of selected criteria.
4. EXPERIMENTS AND RESULTS
The data used in this paper were collected from two separate previous case studies that provide evaluations of e-learning websites from the student perspective (see [23] and [3]). The evaluations in the first (which we call Case 1) were carried out in Thai universities, and those in the second (Case 2) in Jordanian universities. Each study evaluates five categories of e-learning websites using traditional averaging methods, though the names and categories differ between the studies. The categories in Case 1 are Educational and Social Science, Humanities, Agriculture, Science and Technology, and Business Administration, which we call A1, A2, A3, A4, and A5, respectively. The categories in Case 2 are Arts, Business, Educational and Social Sciences, Engineering, and Science, which we call A1, A2, A3, A4, and A5, respectively.
In this paper we use the data described so far to evaluate e-learning websites against the ISO/IEC 9126 quality attributes using the CI approach. The data are shown in Table 2 and Table 3, where ‘F’, ‘R’, ‘U’, ‘E’, ‘M’, and ‘P’ stand for “Functionality”, “Reliability”, “Usability”, “Efficiency”, “Maintainability”, and “Portability”, respectively.
Table 2: Case 1- Total quality attributes evaluations for 5 groups of subjects found in [23]
QA A1 A2 A3 A4 A5
F 60.46 49.44 57.66 54.13 53.98
R 82.76 80.09 74.60 76.11 83.27
U 68.86 63.26 64.41 56.64 62.72
E 37.72 40.13 44.08 36.01 42.88
M 36.03 17.75 19.96 38.16 19.54
P 40.08 34.91 37.49 39.95 36.24
AM 54.47 47.60 49.70 50.17 49.78
Table 3: Case 2 - Total quality attributes evaluations for 5 groups of subjects found in [3]
QA A1 A2 A3 A4 A5
F 71.67 76.04 74.69 67.81 72.19
R 55.83 58.33 56.67 54.17 57.92
U 72.19 67.60 71.46 67.40 68.96
E 63.33 63.33 63.33 54.17 61.25
M 70.21 67.50 73.44 67.71 68.75
P 66.67 67.08 64.17 58.75 60.83
AM 66.65 66.65 67.29 61.67 64.98
Software quality is a relative term that differs with viewpoint, so different perspectives of quality can be considered, such as the user view, developer view, and manager view. For example, from the system user’s point of view, usability is the most important attribute, while from the viewpoint of developers and maintainers, maintainability is. In addition, the overall quality of a software product can be expressed as a combination of different views. However, no software product can satisfy all stakeholders’ needs at the same time. In our work, the users’ viewpoint is considered. Given that there are trade-offs between software product capabilities and it is not possible to satisfy all software requirements simultaneously, it is essential to determine the weights of all quality aspects of a software product. Because the two previous case studies do not report the relative importance of the quality attributes as weights, the weights were adopted from [5], where they result from industrial experience and expert refinement, and we rely on them in evaluating the e-learning websites. The user-view weights are 0.3, 0.19, 0.24, 0.18, 0.05, and 0.04 for “Functionality”, “Reliability”, “Usability”, “Efficiency”, “Maintainability”, and “Portability”, respectively.
Relationships between quality attributes exist. Software product quality attributes are among the subjects that require trade-offs, especially when quality requirements or customer desires conflict, and conducting a correct and successful trade-off may require rigorous analysis [2]. The interactions between quality attributes are taken from [2] and [33], as depicted in Table 4, which shows the 15 relationships among each pair of the 6 quality attributes.
Table 4: Quality attributes relationships as identified in [2] and [33]
QA F R U E M
R +
U + +
E 0 0 -
M + + + -
P 0 0 0 - +
A positive relationship means there is a synergy between the two quality attributes; in other words, good values of one attribute result in good values of the other. A negative relationship means there is a conflict between the two attributes; a good value of one results in a bad value of the other. The synergic relationship is represented as a complementary relationship in the Kappalab R package, while the conflict relationship is represented as a substitutive relationship.
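As an illustration of how Table 4 feeds the capacity-identification step, its sign pattern can be encoded as signed interaction-index targets. This is a hedged sketch of ours: the ±0.1 magnitude is an illustrative placeholder, not a value from the paper or from Kappalab.

```python
# Lower-triangular entries of Table 4 (+: synergy, -: conflict, 0: none)
SIGNS = {
    ("F", "R"): "+", ("F", "U"): "+", ("F", "E"): "0", ("F", "M"): "+", ("F", "P"): "0",
    ("R", "U"): "+", ("R", "E"): "0", ("R", "M"): "+", ("R", "P"): "0",
    ("U", "E"): "-", ("U", "M"): "+", ("U", "P"): "0",
    ("E", "M"): "-", ("E", "P"): "-",
    ("M", "P"): "+",
}

MAGNITUDE = 0.1  # placeholder interaction strength (hypothetical)

def target_interactions(signs, magnitude=MAGNITUDE):
    """Turn +/-/0 relationships into signed interaction-index targets."""
    value = {"+": magnitude, "-": -magnitude, "0": 0.0}
    return {pair: value[s] for pair, s in signs.items()}

targets = target_interactions(SIGNS)
print(targets[("U", "E")])  # -0.1
```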
For calculating the CI values we used the minimum-distance (MD) approach, because it helps in finding, if it exists, the capacity closest to one defined by the decision maker (DM) and compatible with his or her initial preferences. This initial capacity is typically an additive capacity representing the DM’s prior idea of what the aggregation function should be; in the absence of clear requirements, a very natural choice for the fuzzy measure μ is the uniform capacity μ*. We also apply the CI with less restricted fuzzy measures, where the overall performance of the CI increases as k increases, achieving its best performance when k equals the number of criteria (6 in our study). In addition, the use of the CI with less restricted fuzzy measures can model the MCDM process in a way that is closer to reality [18].
Table 5: Quality Attributes Evaluations for the 5 groups of subjects of case 1 obtained from WAM and CI
QA A1 A2 A3 A4 A5
F 60.46 49.44 57.66 54.13 53.98
R 82.76 80.09 74.60 76.11 83.27
U 68.86 63.26 64.41 56.64 62.72
E 37.72 40.13 44.08 36.01 42.88
M 36.03 17.75 19.96 38.16 19.54
P 40.08 34.91 37.49 39.95 36.24
AM 54.47 47.60 49.70 50.17 49.78
WAM 60.58 54.74 57.36 54.28 57.21
CI 57.25 52.57 53.98 52.16 54.45
Table 6: Quality Attributes Evaluations for the 5 groups of subjects of Case 2 obtained from WAM and CI
QA A1 A2 A3 A4 A5
F 71.67 76.04 74.69 67.81 72.19
R 55.83 58.33 56.67 54.17 57.92
U 72.19 67.60 71.46 67.40 68.96
E 63.33 63.33 63.33 54.17 61.25
M 70.21 67.50 73.44 67.71 68.75
P 66.67 67.08 64.17 58.75 60.83
AM 66.65 66.65 67.29 61.67 64.98
WAM 67.01 67.58 67.96 62.30 66.11
CI 65.90 67.35 66.92 61.79 65.58
The results in Table 5 for Case 1, obtained with the CI by considering the Shapley values and the synergic and conflicting elements, show that the websites rank as A1 > A5 > A3 > A2 > A4, meaning that A1 followed by A5 outperforms the other alternatives, while the rank using the WAM is A1 > A3 > A5 > A2 > A4.
The results in Table 6 for Case 2, obtained in the same way, show that the websites rank as A2 > A3 > A1 > A5 > A4, while the rank using the WAM is A3 > A2 > A1 > A5 > A4.
It is important to mention that we compared the fuzzy measure evaluation results using the CI to the WAM only, because these methods take the weighting of criteria into account, whereas the AM does not.
5. CONCLUSION AND FUTURE WORK
Evaluating software quality is a continuing interest of researchers working to support users and developers in decision making. Traditional methods such as the AM and WAM cannot capture the MCDM process completely and accurately. In this paper we introduced a fuzzy measure technique that uses the CI approach to evaluate the quality attributes of e-learning websites based on ISO/IEC 9126 and real data, analysing the Shapley values and the interactions between criteria. The results provide a more useful evaluation for users and developers in decision making. The calculations conducted on the two case studies in Section 4 illustrate that using the CI approach with the consideration of synergic and conflicting elements can essentially change the ranking of the alternatives.
Extensions to this work include evaluating the sub-criteria of the quality attributes and considering different software product views. Experiments on larger amounts of empirical data are also needed to refine the results. Furthermore, a tool is needed that facilitates such experiments using the approaches provided by the Kappalab R package.
REFERENCES
[1] Al-Badareen, A. B., Selamat, M. H., Jabar, M. A., Din, J., & Turaev, S. (2011). Software Quality
Models: A Comparative Study. In Software Engineering and Computer Systems (pp. 46-55). Springer
Berlin Heidelberg.
[2] Aldaajeh, S., Asghar, T., Khan, A. A., & ZakaUllah, M. (2012). Communing Different Views on
Quality Attributes Relationships' Nature. European Journal of Scientific Research, 68(1), 101-109.
[3] Baklizi, M., & Alghyaline, S. (2011, May). Evaluation of E-Learning websites in Jordan universities
based on ISO/IEC 9126 standard. In Communication Software and Networks (ICCSN), 2011 IEEE
3rd International Conference on (pp. 71-73). IEEE.
[4] Barbacci, M., Klein, M. H., Longstaff, T. A., & Weinstock, C. B. (1995). Quality Attributes (No. CMU/SEI-95-TR-021). Carnegie Mellon University, Software Engineering Institute, Pittsburgh, PA.
[5] Behkamal, B., Kahani, M., & Akbari, M. K. (2009). Customizing ISO 9126 quality model for
evaluation of B2B applications. Information and software technology, 51(3), 599-609.
[6] Boehm, B. W., Brown, J. R., & Kaspar, H. (1978). Characteristics of software quality.
[7] Challa, J. S., Paul, A., Dada, Y., Nerella, V., Srivastava, P. R., & Singh, A. P. (2011). Integrated
Software Quality Evaluation: A Fuzzy Multi-Criteria Approach. JIPS, 7(3), 473-518.
[8] Davoli, P., Mazzoni, F., & Corradini, E. (2004). Quality assessment of cultural web sites with fuzzy
operators. Journal of Computer Information Systems, 46(1), 44.
[9] Dromey, R. G. (1995). A model for software product quality. Software Engineering, IEEE
Transactions on, 21(2), 146-162.
[10] Grabisch, M. (1997). k-order additive discrete fuzzy measures and their representation. Fuzzy sets and
systems, 92(2), 167-189.
[11] Grabisch, M., & Roubens, M. (2000). Application of the Choquet integral in multicriteria decision
making. Fuzzy measures and integrals, (40), 348-375.
[12] Grabisch, M., Kojadinovic, I., & Meyer, P. (2008). A review of methods for capacity identification in
Choquet integral based multi-attribute utility theory: Applications of the Kappalab R package.
European journal of operational research, 186(2), 766-785.
[13] Grabisch, M., Kojadinovic, I., & Meyer, P. (2012). kappalab: Non-additive measure and integral
manipulation functions. R package version 0.4-6, URL http://cran.r-
project.org/web/packages/kappalab/index.html.
[14] Grady, R. B. (1992). Practical software metrics for project management and process improvement.
Prentice-Hall, Inc..
[15] Grady, R. B. (1994). Successfully applying software metrics. Computer, 27(9), 18-25.
[16] ISO/IEC 9126-1, (2001), “Software engineering —Product quality — Part 1: Quality model”
[17] Kitchenham, B., & Pfleeger, S. L. (1996). Software quality: The elusive target. IEEE software, 13(1),
12-21.
[18] Li, G., Law, R., Vu, H. Q., & Rong, J. (2013). Discovering the hotel selection preferences of Hong
Kong inbound travelers using the Choquet Integral. Tourism Management, 36, 321-330.
[19] Marichal, J. L. (2000). An axiomatic approach of the discrete Choquet integral as a tool to aggregate
interacting criteria. Fuzzy Systems, IEEE Transactions on, 8(6), 800-807.
[20] McCall, J. A., Richards, P. K., & Walters, G. F. (1977). Factors in software quality. General Electric,
National Technical Information Service.
[21] Pasrija, V., Kumar, S., & Srivastava, P. R. (2012). Assessment of Software Quality: Choquet Integral
Approach. Procedia Technology, 6, 153-162.
[22] Pasrija, V., Kumar, S., & Srivastava, P. R. (2012). Assessment of Software Quality: Choquet Integral
Approach. Procedia Technology, 6, 153-162.
[23] Pruengkarn, R., Praneetpolgrang, P., & Srivihok, A. (2005, July). An evaluation model for e-learning
Websites in Thailand University. In Advanced Learning Technologies, 2005. ICALT 2005. Fifth
IEEE International Conference on (pp. 161-162). IEEE.
[24] RStudio, http://www.rstudio.com/, Accessed on 6/12/2015.
[25] Srivastava, P. R., Singh, A. P., & Vageesh, K. V. (2010). Assessment of Software Quality: A Fuzzy Multi-Criteria Approach. In Evolution of Computation and Optimization Algorithms in Software Engineering: Applications and Techniques, IGI Global USA, 200-219.
[26] Suman, and Wadhwa M. (2014). A Comparative Study of Software Quality Models. (IJCSIT)
International Journal of Computer Science and Information Technologies, Vol. 5 (4) , 2014, 5634-
5638
[27] Takahagi, E. (2005). λ fuzzy measure identification methods using λ and weights. Senshu University,
Japan.
[28] Torra, V., & Narukawa, Y. (2007). Modeling decisions. Information Fusion and Aggregation
Operators.
[29] Wang, X., Ceberio, M., Virani, S., Garcia, A., & Cummins, J. (2013). A Hybrid Algorithm to Extract
Fuzzy Measures for Software Quality Assessment. Journal of Uncertain Systems, 7(3), 219-237.
[30] Yang, B., Yao, L., & Huang, H. Z. (2007, August). Early software quality prediction based on a fuzzy
neural network model. In Natural Computation, 2007. ICNC 2007. Third International Conference on
(Vol. 1, pp. 760-764). IEEE.
[31] Youness, B., Abdelaziz, M, Habib, B. and Hicham M. (2013). Comparative Study of Software
Quality Models. IJCSI International Journal of Computer Science Issues, Vol. 10, Issue 6, No 1,
November 2013 ISSN (Print): 1694-0814 | ISSN (Online): 1694-0784.
[32] Zulzalil, H., Ghani, A. A. A., Selamat, M. H., & Mahmod, R. (2011). Using fuzzy integral to evaluate
the web-based applications. Malaysian Software Engineering Interest Group (MSEIG). Malaysia.
[33] Zulzalil, H., Ghani, A. A., Selamat, M. H., & Mahmod, R. (2008). A case study to identify quality
attributes relationships for web-based applications. IJCSNS, 8(11), 215.