The Ordered Weighted Averaging (OWA) operator was introduced by Yager [57] as a method for aggregating inputs that lies between the max and min operators. In this article, two probabilistic extensions of the OWA operator, POWA and FPOWA (introduced by Merigo [26], [27]), are considered as the basis of our generalizations in the environment of fuzzy uncertainty (Parts II and III of this work), where different monotone measures (fuzzy measures) are used as uncertainty measures instead of the probability measure. For the identification of the "classic" OWA operator and the new aggregation operators presented in Parts II and III, an information structure is introduced in which the incomplete information available in a general decision-making system is represented as a combination of an uncertainty measure, an imprecision variable, and an objective function of weights.
LOAD DISTRIBUTION COMPOSITE DESIGN PATTERN FOR GENETIC ALGORITHM-BASED AUTONO... (ijsc)
Current autonomic computing systems are ad hoc solutions designed and implemented from scratch. When designing software, two or more patterns must in most cases be composed to solve a larger problem. A composite design pattern exhibits a synergy that makes the composition more than just the sum of its parts, which leads to ready-made software architectures. As far as we know, there are no studies on the composition of design patterns for the autonomic computing domain. In this paper we propose a pattern-oriented software architecture for self-optimization in an autonomic computing system, using design pattern composition and multi-objective evolutionary algorithms, that software designers and/or programmers can exploit to drive their work. The main objective of the system is to reduce the load on the server by distributing the population to clients. We used the Case Based Reasoning, Database Access, and Master Slave design patterns. We evaluate the effectiveness of our architecture with and without design pattern composition. The use of composite design patterns in the architecture and quantitative measurements are presented. A simple UML class diagram is used to describe the architecture.
This document presents a proposed churn prediction model based on data mining techniques. The model consists of six steps: identifying the problem domain, data selection, investigating the data set, classification, clustering, and utilizing the knowledge gained. The authors apply their model to a data set of 5,000 mobile service customers using data mining tools. They train classification models using decision trees, neural networks, and support vector machines. Customers are classified as churners or non-churners. Churners are then clustered into three groups. The results are interpreted to gain insights into customer retention.
The document discusses multi-attribute decision making (MADM) and its application in selecting the optimal design for a circlip grooving operation. It describes identifying criteria such as material costs, manufacturing costs, and material properties. Utility functions are developed to evaluate alternatives based on the criteria. Three learning management systems (LMS) are evaluated and analyzed using the MADM model to select the best option. The analysis found that MADM can help design an optimal, cost-efficient circlip design that considers various parameters.
Invited talk at Focus Fortnight 8: "The analysis of discrete choice experiments", organized by the Centre for Bayesian Statistics in Health Economics, University of Sheffield (UK), September 2007.
This document provides an overview of key concepts in modeling and simulation for decision support. It defines complex systems, open and closed systems, and hierarchical systems. It describes the differences between hard and soft problems, and the characteristics of hard systems and soft systems approaches. It also defines static and dynamic systems, and different types of models. Finally, it discusses the relationship between modeling and simulation and the key steps in a simulation process.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Integrating Fuzzy Dematel and SMAA-2 for Maintenance Expenses (inventionjournals)
The majority of the allowances transferred to public institutions is spent on buying new equipment, materials, and facilities and on their maintenance and repair. Some public-sector institutions establish their own plants in order to reduce maintenance and repair costs and gain the ability to perform these activities themselves. However, developing technology and the variety of materials make repair and maintenance activities increasingly expensive for them. In this study, the vital criteria for a public institution are determined. Using the Fuzzy DEMATEL (Decision Making Trial and Evaluation Laboratory) method, the degree of importance is identified with two defuzzification methods, and the alternatives are ranked using SMAA-2 (Stochastic Multicriteria Acceptability Analysis) under three scenarios. The results show that different defuzzification methods change the order of preferences.
Modelling the expected loss of bodily injury claims using gradient boosting (Gregg Barrett)
This document summarizes an effort to model the expected loss of bodily injury claims using gradient boosting. Frequency and severity models are built separately and then combined to estimate expected loss. Gradient boosting is chosen as the modeling approach due to its flexibility. Tuning parameters like shrinkage, number of trees, and depth must be selected. The goal is predictive accuracy over interpretability. Performance is evaluated on a test set not used for model selection.
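The frequency-times-severity setup described above can be sketched as follows, assuming scikit-learn's `GradientBoostingRegressor`; the data is synthetic and illustrative, and the parameter values (shrinkage via `learning_rate`, number of trees via `n_estimators`, depth via `max_depth`) are assumptions, not the ones chosen in the study.

```python
# Hedged sketch: separate frequency and severity models combined into expected loss.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))                                # synthetic policy features
freq = np.exp(0.3 * X[:, 0]) * rng.poisson(1.0, 400)         # synthetic claim counts
sev = np.exp(5 + 0.5 * X[:, 1] + rng.normal(0, 0.2, 400))    # synthetic claim sizes

# Tuning parameters named in the abstract: shrinkage (learning_rate),
# number of trees (n_estimators) and tree depth (max_depth).
params = dict(learning_rate=0.05, n_estimators=200, max_depth=3)
freq_model = GradientBoostingRegressor(**params).fit(X, freq)
sev_model = GradientBoostingRegressor(**params).fit(X, sev)

# Expected loss = predicted frequency x predicted severity.
expected_loss = freq_model.predict(X) * sev_model.predict(X)
```

In practice the tuning parameters would be selected on a validation set, with the held-out test set reserved for the final accuracy estimate, as the abstract notes.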
Local coordination in online distributed constraint optimization problems - P... (Antonio Maria Fiscarelli)
I implemented a multiagent reinforcement learning algorithm for online distributed constraint optimization problems, using Java and R. Several agents had to agree on a common solution to the optimization problem and had to find the cooperation network most beneficial to group performance.
This document provides an introduction to decision making methods. It outlines an 8-step general decision making process: 1) define the problem, 2) determine requirements, 3) establish goals, 4) identify alternatives, 5) define criteria, 6) select a decision making tool, 7) evaluate alternatives against criteria, and 8) validate solutions. It then discusses single vs. multiple criteria decisions and finite vs. infinite alternatives. Finally, it summarizes several multi-attribute decision making methods, including cost-benefit analysis, elementary methods like pros/cons analysis, and MAUT methods like the simple multiattribute rating technique.
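The simple multiattribute rating technique mentioned above reduces to a weighted additive utility over normalized criterion scores. A minimal sketch, where the criteria, weights, and scores are hypothetical examples rather than anything from the document:

```python
# SMART-style weighted-sum scoring with hypothetical criteria and 0..1 utilities.
weights = {"cost": 0.5, "quality": 0.3, "delivery": 0.2}   # weights sum to 1
alternatives = {
    "A": {"cost": 0.9, "quality": 0.6, "delivery": 0.7},
    "B": {"cost": 0.5, "quality": 0.9, "delivery": 0.8},
}

def smart_score(utils, weights):
    """Weighted additive utility: sum_i w_i * u_i."""
    return sum(weights[c] * utils[c] for c in weights)

scores = {name: smart_score(u, weights) for name, u in alternatives.items()}
best = max(scores, key=scores.get)   # alternative with the highest total utility
```

Steps 4, 5, and 7 of the general process above correspond to listing the alternatives, fixing the criteria and weights, and computing the scores.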
Parallel and distributed genetic algorithm with multiple objectives to impro... (khalil IBRAHIM)
We argue that the timetabling problem reflects the problem of scheduling university courses: a range of time periods and a group of instructors must be assigned to a set of lectures so as to satisfy a set of hard constraints and reduce the cost of violating the soft constraints. This is an NP-hard problem, meaning informally that the number of operations needed to solve it grows exponentially with problem size. Timetable construction is among the most complicated problems facing many universities, and it grows with the size of the university's data and the overlap of disciplines between colleges. When a traditional evolutionary algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, offers an opportunity to solve extremely high-dimensional problems through distributed coevolution using a divide-and-conquer mechanism. Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search. By employing different distribution models to parallelize the processing of EAs, we designed a genetic algorithm suited to the university environment and the constraints faced when building a lecture timetable.
Integrated bio-search approaches with multi-objective algorithms for optimiza... (TELKOMNIKA JOURNAL)
Optimal selection of features is difficult and crucial to achieve, particularly for the task of classification. This is due to traditional feature-selection methods that operate independently and generate collections of irrelevant features, which in turn degrade classification accuracy. The goal of this paper is to leverage the potential of bio-inspired search algorithms, together with a wrapper, in optimizing the multi-objective algorithms ENORA and NSGA-II to generate an optimal set of features. The main steps are to combine ENORA and NSGA-II with suitable bio-search algorithms in which multiple subset generation has been implemented, and then to validate the optimal feature set through subset evaluation. Eight comparison datasets of various sizes were deliberately selected for testing. The results show that the combination of the multi-objective algorithms ENORA and NSGA-II with the selected bio-inspired search algorithm is promising for achieving a better optimal solution (i.e., the best features with higher classification accuracy) on the selected datasets. This finding implies that bio-inspired wrapper/filter algorithms can boost the efficiency of ENORA and NSGA-II for feature selection and classification.
This paper aims to build predictive models using the CRISP-DM framework to classify bank customers into predefined classes using a Portuguese marketing campaign dataset. It analyzes bank customer data containing over 45,000 instances and 16 features to classify customers as likely or not likely to subscribe to bank deposits. It uses multilayer perceptron and logistic regression algorithms for modeling. The results show that the multilayer perceptron model with a 70% training split provides the best average performance, accurately classifying customers 70% of the time.
Author: Mr Di Chen, École Polytechnique Fédérale de Lausanne, Financial Engineering Section
This paper shows that complexity influences stock returns. By establishing the complexity and resilience measure of the common stock and analyzing the relationship between return, momentum, size, complexity, book-to-market ratio and resilience, three measures (size, complexity and momentum) stand out as the factors that can influence stock returns.
Extended PSO algorithm for improvement problems of the k-means clustering algorithm (IJMIT JOURNAL)
Clustering is an unsupervised process and one of the most common data mining techniques. The purpose of clustering is to group similar data together, so that instances are most similar to each other within a cluster and most different from instances in other clusters. In this paper we focus on partitional k-means clustering which, owing to its ease of implementation and high speed on large data sets, remains very popular among the clustering algorithms developed over the past thirty years. To address the problem of k-means becoming trapped in local optima, we propose an extended PSO algorithm named ECPSO. Our new algorithm is able to escape local optima and, with high probability, produces the problem's optimal answer. The results show that the proposed algorithm performs better than other clustering algorithms, especially on two indices: the accuracy of clustering and the quality of clustering.
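To make the local-optimum issue concrete, here is a minimal pure-Python Lloyd's k-means for 2-D points; the final centers depend entirely on the random initial centers, which is the sensitivity the paper's PSO extension targets. The data and seed are illustrative assumptions.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means on 2-D points. The result depends on the
    random start, so different seeds can land in different local optima."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to its cluster's mean.
        new = []
        for i, cl in enumerate(clusters):
            if cl:
                new.append((sum(p[0] for p in cl) / len(cl),
                            sum(p[1] for p in cl) / len(cl)))
            else:
                new.append(centers[i])   # keep an empty cluster's center in place
        if new == centers:               # converged (a local optimum)
            break
        centers = new
    return centers

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
centers = kmeans(pts, 2)
```

A PSO-based extension like the paper's ECPSO would, roughly, treat candidate center sets as particles and search over initializations instead of committing to one.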
This document describes a study that examines simplifying multi-attribute decision making under uncertainty by replacing distributions of attribute values with their expected values. The study uses simulations to test how well simplified models perform compared to full distribution models under different conditions, such as changes in distributions, errors in expected values, and problem size. Certain simplified models are highly sensitive to extreme non-linear preferences, while others provide generally acceptable performance that is robust to various changes in conditions.
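The sensitivity to non-linear preferences can be illustrated with a small simulation: for a convex utility, u(E[X]) systematically underestimates E[u(X)] (Jensen's inequality), so replacing a distribution by its expected value misranks alternatives. The distribution and utility below are illustrative assumptions, not those used in the study.

```python
import random

random.seed(1)
# A skewed attribute-value distribution (lognormal), sampled.
samples = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]

def utility(x):
    """A convex (extreme non-linear) preference, the case the study flags."""
    return x ** 2

mean_x = sum(samples) / len(samples)
u_of_mean = utility(mean_x)                                   # simplified model: u(E[X])
mean_of_u = sum(utility(x) for x in samples) / len(samples)   # full model: E[u(X)]
gap = mean_of_u - u_of_mean                                   # Jensen gap, > 0 for convex u
```

For a near-linear utility the gap shrinks toward zero, which matches the finding that some simplified models are acceptable while extreme non-linear preferences break them.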
This document discusses human supervisory control in advanced manufacturing systems (AMS). It defines supervisory control as human operators programming and receiving information from a computer connected to controlled processes. The key functions of supervisory control are identified as plan, teach, monitor, intervene, and learn. Determinants of multitasking performance in AMS are discussed, including scheduling, switching, confusion, cooperation and limited processing resources. The multiple resources theory, which proposes three dimensions (stages, input modality, processing codes) along which resources can be allocated, is presented as explaining multitasking performance better than the single resource theory.
The document compares the predictive performance of classification trees and logistic regression models in determining whether individuals are insured or not based on demographic and socioeconomic characteristics. It first describes the data and techniques used, including classification trees, cost complexity pruning, logistic regression, bagging, and random forests. It then describes the research method of using different portions of the data for model training and testing. The results show that logistic regression performs as well as classification trees on nonlinear data and better on linear data. Both methods select income as an important predictor, while logistic regression favors dummy variables and classification trees favor continuous variables. Random forests have the highest predictive accuracy overall.
Towards a System Dynamics Modeling Method Based on DEMATEL (ijcsit)
This document proposes a new method for constructing system dynamics models that combines the Decision Making Trial and Evaluation Laboratory (DEMATEL) technique with system dynamics modeling. DEMATEL is first used to systematically define and quantify causal relationships between variables in a system. The results from DEMATEL, including impact relation maps and a total influence matrix, are then used to derive the causal loop diagram and define variable weights in the stock-flow chart equations of the system dynamics model. This combined method aims to overcome limitations and subjectivity in traditional system dynamics modeling that relies solely on decision makers' mental models.
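The DEMATEL step the method builds on can be sketched as follows: a direct-influence matrix is normalized and the total influence matrix is T = D(I - D)^-1, from which prominence (R + C) and relation (R - C) values feed the impact-relation map. The 3x3 expert-score matrix below is a hypothetical example.

```python
import numpy as np

# Hypothetical 3x3 direct-influence matrix (expert scores 0..4, zero diagonal).
A = np.array([[0, 3, 2],
              [1, 0, 4],
              [2, 1, 0]], dtype=float)

# Normalize by the largest row sum, then compute total influence T = D (I - D)^-1.
D = A / A.sum(axis=1).max()
T = D @ np.linalg.inv(np.eye(3) - D)

prominence = T.sum(axis=1) + T.sum(axis=0)   # R + C: how involved each factor is
relation = T.sum(axis=1) - T.sum(axis=0)     # R - C: net cause (+) or net effect (-)
```

In the proposed combined method, the signs in `relation` would orient the causal-loop diagram and the entries of `T` would weight the stock-flow equations.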
A MULTI-POPULATION BASED FROG-MEMETIC ALGORITHM FOR JOB SHOP SCHEDULING PROBLEM (acijjournal)
The Job Shop Scheduling Problem (JSSP) is a well-known practical planning problem in the manufacturing sector. We consider the JSSP with the objective of minimizing makespan. In this paper, we develop a three-stage hybrid approach called JSFMA to solve the JSSP. In JSFMA, using a method similar to the Shuffled Frog Leaping algorithm, we divide the population into several sub-populations and then solve the problem using a memetic algorithm. The proposed approach has been compared with other algorithms for job shop scheduling and evaluated, with satisfactory results, on a set of JSSP instances derived from classical job shop scheduling benchmarks. We solved 20 benchmark problems from Lawrence's datasets and compared the results obtained with those of algorithms established in the literature. The experimental results show that JSFMA attains the best known makespan on 17 out of 20 problems.
Multimodal authentication is one of the prime concerns in current real-world applications, and various approaches have been proposed. In this paper, an intuitive strategy is proposed as a framework for providing a more secure key in biometric security. First, features are extracted through PCA via SVD from the chosen biometric patterns; key components are then extracted using the LU factorization technique, selected with different key sizes, and combined using a convolution kernel method (Exponential Kronecker Product, eKP) as a Context-Sensitive Exponent Associative Memory model (CSEAM). Verification proceeds in the same way and is assessed with the MSE measure. This model gives a better outcome when compared with SVD factorization [1] for feature selection. The process is computed for different key sizes and the results are presented.
Employees are the backbone of corporate activities, and giving bonuses, job titles, and allowances to employees to motivate their work is essential. A company has many salesmen, and finding the best salesman cannot be done manually; this calls for a decision support system applying the TOPSIS method. With the implementation of the TOPSIS method, the expectations of top management are expected to be fulfilled.
IRJET- Optimization of Thickness in Wood Furniture Structure (IRJET Journal)
This document discusses optimizing the thickness of wood structures used in furniture. It describes using computer-aided engineering (CAE) tools like the finite element method to model and simulate stresses on wooden furniture components under different loads. The goal is to optimize thickness to reduce material costs while ensuring the strength and functionality of the furniture. Properties of Douglas pine wood are input into CAE software to analyze stresses and identify any needed thickness adjustments to keep stresses in the elastic range and prevent failure under normal and overload conditions. Optimizing thickness this way can lower costs while making sure furniture will not fail mechanically.
In context-aware trust evaluation, using an ontology tree is a popular approach to represent the relations between contexts. Usually, the similarity between two contexts is computed using these trees; therefore, the performance of trust evaluation depends heavily on the quality of the ontology trees. Fairness, or granularity consistency, is one of the major limitations affecting the quality of an ontology tree. This limitation refers to the inequality of semantic similarity in most ontology trees: the semantic similarity of every two adjacent nodes is unequal in these trees, which deteriorates the performance of context-similarity computation. We overcome this limitation by weighting tree edges according to their semantic similarity. The weight of each edge is computed using the Normalized Similarity Score (NSS) method, which is based on the frequencies of concept (word) co-occurrences in the pages indexed by search engines. Our experiments demonstrate the better performance of the proposed approach in comparison with established trust evaluation approaches. The suggested approach can enhance the efficiency of any solution that models semantic relations with an ontology tree.
The document discusses C4.5 algorithm for building univariate decision trees and methods for building multivariate decision trees. C4.5 uses entropy, gain, and pruning to build trees that classify instances based on one attribute per node. Multivariate trees can classify using linear combinations of attributes at nodes to better handle correlated attributes. Methods like absolute error correction and thermal perceptron are presented for training linear machines to construct multivariate trees. Examples of trees generated by both approaches are shown.
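The entropy and gain computations underlying those trees can be sketched as follows (C4.5 itself refines plain gain into gain ratio, which ID3 lacks); the tiny weather-style dataset is an illustrative assumption.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy H(S) = -sum_c p_c * log2(p_c) over class frequencies."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Gain(S, A) = H(S) - sum_v |S_v|/|S| * H(S_v): the split criterion
    a univariate tree evaluates for each candidate attribute."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(r[attr] for r in rows):
        subset = [lab for r, lab in zip(rows, labels) if r[attr] == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy example: the attribute separates the classes perfectly, so gain = H(S) = 1 bit.
rows = [{"wind": "weak"}, {"wind": "weak"}, {"wind": "strong"}, {"wind": "strong"}]
labels = ["yes", "yes", "no", "no"]
g = information_gain(rows, labels, "wind")
```

A multivariate tree would instead test a linear combination of attributes at each node, which is what the linear-machine training methods in the document provide.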
The Evaluation of Topsis and Fuzzy-Topsis Method for Decision Making System i... (IRJET Journal)
This document discusses using fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) as an analytical tool for decision making in data mining. Fuzzy TOPSIS extends the traditional TOPSIS method to handle uncertainties by using fuzzy set theory. It involves defining ratings and weights as linguistic variables represented by fuzzy numbers. The key steps are normalizing the fuzzy decision matrix, determining fuzzy positive and negative ideal solutions, calculating distances from the ideal solutions, and determining a closeness coefficient to rank the alternatives. The literature review discusses previous research applying fuzzy set concepts to TOPSIS to address limitations of crisp data in modeling real-world decision problems.
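The crisp TOPSIS steps listed above (normalize, weight, find ideal and anti-ideal solutions, measure distances, rank by closeness coefficient) can be sketched as follows; the fuzzy variant replaces the crisp entries with fuzzy numbers. The decision matrix and weights are hypothetical, and both criteria are treated as benefit criteria.

```python
import numpy as np

# Hypothetical crisp decision matrix: 3 alternatives x 2 benefit criteria.
X = np.array([[7.0, 9.0],
              [8.0, 7.0],
              [9.0, 6.0]])
w = np.array([0.6, 0.4])                       # criterion weights, sum to 1

# 1) vector-normalize each column, 2) apply weights.
V = w * X / np.linalg.norm(X, axis=0)
# 3) positive and negative ideal solutions (column-wise best and worst).
ideal, anti = V.max(axis=0), V.min(axis=0)
# 4) Euclidean distances to each ideal, 5) closeness coefficient in (0, 1).
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
ranking = np.argsort(-closeness)               # best alternative first
```

For cost criteria the ideal and anti-ideal columns would be swapped (minimum is best), and fuzzy TOPSIS computes the same distances on triangular or trapezoidal fuzzy numbers.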
Project risk management is integral to business survival. This research paper focuses on determining project risk factors using a genetic algorithm and fuzzy logic, motivated by the shortcomings of conventional approaches. The genetic algorithm helps optimize the parameter data items while fuzzy logic handles imprecision. The Unified Modeling Language was used to model the software system, depicting clearly the interaction between components and the dynamic aspects of the system. This paper demonstrates the practical application of metric-based soft computing techniques in the health sector for determining patient satisfaction.
Knowledge Identification using Rough Set Theory in Software Development Proce... (ijcnes)
A knowledge-processing system drives an organization's power in the world business race. Industries are adopting knowledge management systems for their human capital: the interaction that occurs among employees increases knowledge creation, identification, representation, and utilization. The complexity of the knowledge discovery process varies with the domain, the nature of the applications, the organizational system, and many other organizational policies. Processing time and data volume must be reduced for decision support; to this end, the knowledge discovery process uses rough set theory equivalence associations in the software development process of an Information Technology organization. The target factor variables that influence knowledge processing in the organization are determined, identified through the equivalence association of all combinations of the variables. The study observed a software development project that produced non-deterministic development results. This paper aims to find the relations among variables that could contribute more knowledge to the successful completion and delivery of projects, thereby improving software development delivery. The activity variables, in turn, help determine the set of activities carried out by the professional group and encourage more attention to the selected activities.
VISUALIZATION OF A SYNTHETIC REPRESENTATION OF ASSOCIATION RULES TO ASSIST EX... (cscpconf)
To help the expert validate association rules, several quality measures have been proposed in the literature. We distinguish two categories: objective and subjective measures. The first depends on a fixed threshold and on the structure of the data from which the rules are extracted. The second has two subcategories: the first consists of providing the expert with a tool for interactive rule exploration, presenting the rules in textual form; the second uses visualization systems to facilitate the task of rule mining. However, this last subcategory assumes that experts have the statistical knowledge needed to interpret and validate association rules. Furthermore, statistical methods lack semantic representation and cannot help the experts during the validation process. To solve this problem, we propose in this paper a method that shows the expert a synthetic representation of association rules as a formal conceptual graph (FCG). The FCG represents the expert's area of interest and, thanks to its semantic richness, makes the task of rule mining easier.
Local coordination in online distributed constraint optimization problems - P...Antonio Maria Fiscarelli
I implemented a multiagent reinforcement learning algorithm for online constrained optimization problems, using Java and R. Several agents had to agree on a common solution for the optimization problem and they had to find the best cooperation network that is beneficial for the group performance.
This document provides an introduction to decision making methods. It outlines an 8-step general decision making process: 1) define the problem, 2) determine requirements, 3) establish goals, 4) identify alternatives, 5) define criteria, 6) select a decision making tool, 7) evaluate alternatives against criteria, and 8) validate solutions. It then discusses single vs. multiple criteria decisions and finite vs. infinite alternatives. Finally, it summarizes several multi-attribute decision making methods, including cost-benefit analysis, elementary methods like pros/cons analysis, and MAUT methods like the simple multiattribute rating technique.
Parallel and distributed genetic algorithm with multiple objectives to impro...khalil IBRAHIM
we argue that the timetabling problem reflects the problem of scheduling university courses, So you must specify the range of time periods and a group of instructors for a range of lectures to check a set of constraints and reduce the cost of other constraints ,this is the problem called NP-hard, it is a class of problems that are informally, it’s mean that necessary operations to solve the problem will increase exponentially and directly proportional to the size of the problem, The construction of timetable is the most complicated problem that was facing many universities, and increased by size of the university data and overlapping disciplines between colleges, and when a traditional algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, it also offers an opportunity to solve extremely high dimensional problems through distributed coevolution using a divide-and-conquer mechanism, Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search, by employing different distribution models to parallelize the processing of EAs, we designed a genetic algorithm suitable for Universities environment and the constraints facing it when building timetable for lectures.
Integrated bio-search approaches with multi-objective algorithms for optimiza...TELKOMNIKA JOURNAL
Optimal feature selection is difficult yet crucial, particularly for classification tasks, because traditional methods select features independently and can generate collections of irrelevant features, which degrades classification accuracy. The goal of this paper is to leverage the potential of bio-inspired search algorithms, together with a wrapper, to optimize the multi-objective algorithms ENORA and NSGA-II and generate an optimal set of features. The main step is to combine ENORA and NSGA-II with suitable bio-search algorithms in which multiple subset generation is implemented; the optimal feature set is then validated through subset evaluation. Eight comparison datasets of various sizes were deliberately selected for testing. The results show that the combination of the multi-objective algorithms ENORA and NSGA-II with the selected bio-inspired search algorithms is promising, achieving a better optimal solution (i.e., a best feature set with higher classification accuracy) on the selected datasets. This finding implies that bio-inspired wrapper/filter algorithms can boost the efficiency of ENORA and NSGA-II for the task of selecting and classifying features.
This paper aims to build predictive models using the CRISP-DM framework to classify bank customers into predefined classes using a Portuguese marketing campaign dataset. It analyzes bank customer data containing over 45,000 instances and 16 features to classify customers as likely or not likely to subscribe to bank deposits. It uses multilayer perceptron and logistic regression algorithms for modeling. The results show that the multilayer perceptron model with a 70% training split provides the best average performance, accurately classifying customers 70% of the time.
Author: Mr Di Chen, École Polytechnique Fédérale de Lausanne, Financial Engineering Section
This paper shows that complexity influences stock returns. By establishing the complexity and resilience measure of the common stock and analyzing the relationship between return, momentum, size, complexity, book-to-market ratio and resilience, three measures (size, complexity and momentum) stand out as the factors that can influence stock returns.
Extended pso algorithm for improvement problems k means clustering algorithmIJMIT JOURNAL
Clustering is an unsupervised process and one of the most common data mining techniques. Its purpose is to group similar data together so that instances within a cluster are as similar to each other as possible and as different as possible from instances in other clusters. This paper focuses on partitional k-means clustering which, thanks to its ease of implementation and high-speed performance on large data sets, remains very popular among clustering algorithms more than thirty years after its introduction. To address the problem of k-means becoming trapped in local optima, we propose an extended PSO algorithm named ECPSO. The new algorithm is able to escape local optima and produces the problem's optimal answer with high probability. The experimental results show that the proposed algorithm outperforms other clustering algorithms, especially on two indices: clustering accuracy and clustering quality.
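The local-optimum behaviour of k-means that motivates the ECPSO extension can be seen in a minimal sketch of Lloyd's algorithm (1-D data, plain Python; an illustration, not the paper's implementation):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means on 1-D data. The result depends on the
    randomly chosen initial centroids, which is exactly the
    local-optimum sensitivity a PSO extension tries to escape."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        # recompute centroids as cluster means (keep old one if empty)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(kmeans(data, 2))  # roughly [1.0, 9.0] for this well-separated data
```

On well-separated data like this any initialization converges to the same centroids; on harder data different seeds yield different local optima, which is what a population-based search such as PSO is meant to overcome.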
This document describes a study that examines simplifying multi-attribute decision making under uncertainty by replacing distributions of attribute values with their expected values. The study uses simulations to test how well simplified models perform compared to full distribution models under different conditions, such as changes in distributions, errors in expected values, and problem size. Certain simplified models are highly sensitive to extreme non-linear preferences, while others provide generally acceptable performance that is robust to various changes in conditions.
This document discusses human supervisory control in advanced manufacturing systems (AMS). It defines supervisory control as human operators programming and receiving information from a computer connected to controlled processes. The key functions of supervisory control are identified as plan, teach, monitor, intervene, and learn. Determinants of multitasking performance in AMS are discussed, including scheduling, switching, confusion, cooperation and limited processing resources. The multiple resources theory, which proposes three dimensions (stages, input modality, processing codes) along which resources can be allocated, is presented as explaining multitasking performance better than the single resource theory.
The document compares the predictive performance of classification trees and logistic regression models in determining whether individuals are insured or not based on demographic and socioeconomic characteristics. It first describes the data and techniques used, including classification trees, cost complexity pruning, logistic regression, bagging, and random forests. It then describes the research method of using different portions of the data for model training and testing. The results show that logistic regression performs as well as classification trees on nonlinear data and better on linear data. Both methods select income as an important predictor, while logistic regression favors dummy variables and classification trees favor continuous variables. Random forests have the highest predictive accuracy overall.
TOWARDS A SYSTEM DYNAMICS MODELING METHOD BASED ON DEMATELijcsit
This document proposes a new method for constructing system dynamics models that combines the Decision Making Trial and Evaluation Laboratory (DEMATEL) technique with system dynamics modeling. DEMATEL is first used to systematically define and quantify causal relationships between variables in a system. The results from DEMATEL, including impact relation maps and a total influence matrix, are then used to derive the causal loop diagram and define variable weights in the stock-flow chart equations of the system dynamics model. This combined method aims to overcome limitations and subjectivity in traditional system dynamics modeling that relies solely on decision makers' mental models.
A MULTI-POPULATION BASED FROG-MEMETIC ALGORITHM FOR JOB SHOP SCHEDULING PROBLEMacijjournal
The Job Shop Scheduling Problem (JSSP) is a well known practical planning problem in the
manufacturing sector. We have considered the JSSP with an objective of minimizing makespan. In this
paper, we develop a three-stage hybrid approach called JSFMA to solve the JSSP. In JSFMA,
considering a method similar to Shuffled Frog Leaping algorithm we divide the population in several sub
populations and then solve the problem using a Memetic algorithm. The proposed approach has been
compared with other algorithms for the Job Shop Scheduling and evaluated with satisfactory results on a
set of the JSSP instances derived from classical Job Shop Scheduling benchmarks. We have solved 20
benchmark problems from Lawrence’s datasets and compared the results obtained with the results of the
algorithms established in the literature. The experimental results show that JSFMA could gain the best
known makespan in 17 out of 20 problems.
Multimodal authentication is one of the prime concepts in current real-world applications, and various approaches have been proposed in this area. In this paper, an intuitive strategy is proposed as a framework for providing a more secure key in biometric security. Initially, features are extracted through PCA via SVD from the chosen biometric patterns; key components are then extracted using the LU factorization technique, selected at different key sizes, and combined using a convolution kernel method (Exponential Kronecker Product, eKP) as a Context-Sensitive Exponent Associative Memory model (CSEAM). Verification proceeds in a similar way and is checked against the MSE measure. This model gives a better outcome when compared with SVD factorization [1] as feature selection. The process is computed for different key sizes and the results are presented.
Employees are the backbone of corporate activities, and giving bonuses, job titles, and allowances to employees is essential to motivating their work. A company may have many salesmen, and finding the best one cannot be done manually; this calls for the implementation of a decision support system. By applying the TOPSIS method in such a system, the expectations of top management can be met.
IRJET- Optimization of Thickness in Wood Furniture StructureIRJET Journal
This document discusses optimizing the thickness of wood structures used in furniture. It describes using computer-aided engineering (CAE) tools like the finite element method to model and simulate stresses on wooden furniture components under different loads. The goal is to optimize thickness to reduce material costs while ensuring the strength and functionality of the furniture. Properties of Douglas pine wood are input into CAE software to analyze stresses and identify any needed thickness adjustments to keep stresses in the elastic range and prevent failure under normal and overload conditions. Optimizing thickness this way can lower costs while making sure furniture will not fail mechanically.
In context-aware trust evaluation, using an ontology tree is a popular approach to represent the relations between contexts. Usually, the similarity between two contexts is computed using these trees, so the performance of trust evaluation depends heavily on the quality of the ontology trees. Fairness, or granularity consistency, is one of the major limitations affecting that quality: in most ontology trees the semantic similarity of adjacent node pairs is unequal, which deteriorates the performance of context-similarity computation. We overcome this limitation by weighting tree edges according to their semantic similarity. The weight of each edge is computed using the Normalized Similarity Score (NSS) method, which is based on the co-occurrence frequencies of concepts (words) in the pages indexed by search engines. Our experiments demonstrate the better performance of the proposed approach in comparison with established trust evaluation approaches. The suggested approach can enhance the efficiency of any solution that models semantic relations with an ontology tree.
The document discusses C4.5 algorithm for building univariate decision trees and methods for building multivariate decision trees. C4.5 uses entropy, gain, and pruning to build trees that classify instances based on one attribute per node. Multivariate trees can classify using linear combinations of attributes at nodes to better handle correlated attributes. Methods like absolute error correction and thermal perceptron are presented for training linear machines to construct multivariate trees. Examples of trees generated by both approaches are shown.
The Evaluation of Topsis and Fuzzy-Topsis Method for Decision Making System i...IRJET Journal
This document discusses using fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) as an analytical tool for decision making in data mining. Fuzzy TOPSIS extends the traditional TOPSIS method to handle uncertainties by using fuzzy set theory. It involves defining ratings and weights as linguistic variables represented by fuzzy numbers. The key steps are normalizing the fuzzy decision matrix, determining fuzzy positive and negative ideal solutions, calculating distances from the ideal solutions, and determining a closeness coefficient to rank the alternatives. The literature review discusses previous research applying fuzzy set concepts to TOPSIS to address limitations of crisp data in modeling real-world decision problems.
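The key steps listed above (normalize the decision matrix, find the positive and negative ideal solutions, compute distances, rank by closeness coefficient) can be sketched for the crisp case; the fuzzy variant replaces the crisp values with fuzzy numbers. A minimal illustration with made-up data:

```python
import math

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: vector-normalize each column, apply weights, find
    the ideal and anti-ideal solutions, and score each alternative by
    its closeness coefficient. benefit[j] is True when a larger value
    of criterion j is better."""
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
    # closeness coefficient: 1 = at the ideal, 0 = at the anti-ideal
    return [dist(r, anti) / (dist(r, anti) + dist(r, ideal)) for r in v]

# three alternatives, two equally weighted benefit criteria
scores = topsis([[7, 9], [8, 7], [9, 6]], [0.5, 0.5], [True, True])
print([round(s, 3) for s in scores])
```

In the fuzzy extension each matrix entry and weight becomes a (typically triangular) fuzzy number, and the distances are computed between fuzzy numbers rather than reals.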
Integrating Fuzzy Dematel and SMAA-2 for Maintenance Expensesinventionjournals
The majority of the allowances transferred to public institutions are spent on buying new equipment, materials, and facilities, and on their maintenance and repair. Some public sectors establish their own plants in order to reduce maintenance and repair costs and gain the ability to perform these activities themselves. However, developing technology and the variety of materials make repair and maintenance activities more expensive for them. In this study, the vital criteria for a public institution are determined. Using the Fuzzy DEMATEL (Decision Making Trial and Evaluation Laboratory) method, the degree of importance is identified with two defuzzification methods, and the alternatives are ranked using SMAA-2 (Stochastic Multicriteria Acceptability Analysis) in three scenarios. The results show that different defuzzification methods change the order of preferences.
Project risk management is integral to business survival. This research paper focuses on determining project risk factors using a genetic algorithm and fuzzy logic, motivated by the demerits of conventional approaches. The genetic algorithm helps optimize the parameter data items, while fuzzy logic handles imprecision. The Unified Modeling Language was used to model the software system, clearly depicting the interaction between the various components and the dynamic aspects of the system. The paper demonstrates the practical application of metric-based soft computing techniques in the health sector for determining patient satisfaction.
Knowledge Identification using Rough Set Theory in Software Development Proce...ijcnes
A knowledge processing system drives an organization's power in the world business race. Industries are adopting knowledge management systems for their human capital, and the level of interaction among employees in an industry increases knowledge creation, identification, representation, and utilization. The complexity of the knowledge discovery process varies with the domain, the nature of the applications, the organizational system, and many other organizational policies. Processing time and data volume must be reduced for decision support and knowledge data discovery; this paper applies rough set theory equivalence associations to the software development process in an Information Technology organization, determining the target factor variables that influence knowledge processing. The variables are identified based on the equivalence associations of all combinations of factor variables. The paper observed a software development project that produced non-deterministic development results. It aims to find the relations among variables that could contribute more knowledge to the successful completion and delivery of the project, thereby improving software process delivery. The activity variables help determine the set of activities carried out by the professional group and encourage the group to give more attention to the selected activities.
VISUALIZATION OF A SYNTHETIC REPRESENTATION OF ASSOCIATION RULES TO ASSIST EX...cscpconf
In order to help the expert validate association rules, several quality measures have been proposed in the literature. We distinguish two categories: objective and subjective measures. The first depends on a fixed threshold and on the structure of the data from which the rules are extracted. The second has two subcategories: the first provides the expert with a tool for interactive rule exploration, presenting the rules in textual form; the second uses visualization systems to facilitate the task of rule mining. However, this last subcategory assumes that experts have the statistical knowledge needed to interpret and validate association rules. Furthermore, statistical methods lack semantic representation and cannot help the experts during the validation process. To solve this problem, we propose in this paper a method that presents the expert with a synthetic representation of association rules as a formal conceptual graph (FCG). The FCG represents the expert's area of interest and, thanks to its semantic richness, makes the task of rule mining easier.
This document describes a study that develops a fuzzy inference system (FIS) to assess the sustainability of biomass production for energy purposes. The FIS uses four input parameters - energy output, energy balance ratio, fertilizer usage, and pesticide usage - with defined membership functions. Eighty-one IF-THEN rules were created relating the input parameters to a single output parameter, a fuzzy sustainability index (FSI). The FSI indicates the sustainability level as very low, low, medium, high or very high. The FIS provides a means to evaluate biomass sustainability that can handle uncertain input data, unlike other assessment methods. Graphs show the relationship between input parameters and the fuzzy output based on the rules.
This document provides an overview of a survey of multi-objective evolutionary algorithms for data mining tasks. It discusses key concepts in multi-objective optimization and evolutionary algorithms. It also reviews common data mining tasks like feature selection, classification, clustering, and association rule mining that are often formulated as multi-objective problems and solved using multi-objective evolutionary algorithms. The survey focuses on reviewing applications of multi-objective evolutionary algorithms for feature selection and classification in part 1, and applications for clustering, association rule mining and other tasks in part 2.
Formation control of non-identical multi-agent systemsIJECEIAES
The problem considered in this work is formation control for non-identical linear multi-agent systems (MASs) under a time-varying communication network. The size of the formation is scalable via a scaling factor determined by a leader agent. Past works on scalable formation are limited to identical agents under a fixed communication network. In addition, the formation scaling variable is updated under a leader-follower network. Differently, this work considers a leaderless undirected network in addition to a leader-follower network to update the formation scaling variable. The control law to achieve scalable formation is based on the internal model principle and consensus algorithm. A biased reference output, updated in a distributed manner, is introduced such that each agent tracks a different reference output. Numerical examples show the effectiveness of the proposed method.
This document proposes a new similarity measure for comparing spatial MDX queries in a spatial data warehouse to support spatial personalization approaches. The proposed similarity measure takes into account the topology, direction, and distance between the spatial objects referenced in the MDX queries. It defines the topological distance between spatial scenes referenced in queries based on a conceptual neighborhood graph. It also defines the directional distance between queries based on a graph of spatial directions and transformation costs. The similarity measure will be included in a recommendation approach the authors are developing to recommend relevant anticipated queries to users based on their previous queries.
Analysis of Agile and Multi-Agent Based Process Scheduling Modelirjes
As an answer to the long-growing frustration with the waterfall software development life cycle, the agile software development concept evolved in the 1990s. The most popular agile methodology is Extreme Programming (XP). Most software companies nowadays aim to produce efficient, flexible, and valuable software in a short time period with minimal costs, within unstable, changing environments. This complex problem can be modeled as a multi-agent based system in which agents negotiate resources; agents can represent projects and resources. Crucial for a multi-agent based project scheduling model is the availability of an effective algorithm for prioritizing and scheduling tasks. To evaluate the model, simulations were carried out with real-life and several generated data sets. The developed multi-agent based system provides optimized and flexible agile process scheduling and reduces overheads in the software process, as it responds quickly to changing requirements without excessive work in project scheduling.
Selection of Equipment by Using Saw and Vikor Methods IJERA Editor
This document discusses methods for selecting equipment using multi-criteria decision making approaches. It presents the SAW (Simple Additive Weighting) and VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje) methods for equipment selection. The document outlines the steps for both SAW and VIKOR methods. It also discusses consistency testing to validate the results and ensure less than 0.1 consistency ratio. The methods are then applied to a case study of equipment selection at a spring manufacturing unit to demonstrate the process.
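The SAW steps outlined above amount to column-wise normalization followed by a weighted sum per alternative; a minimal sketch (illustrative data, not the spring-manufacturing case-study values):

```python
def saw(matrix, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column
    (x/max for benefit criteria, min/x for cost criteria), then take
    the weighted sum for each alternative. Higher score = better."""
    normed_cols = []
    for j, col in enumerate(zip(*matrix)):
        if benefit[j]:
            top = max(col)
            normed_cols.append([x / top for x in col])
        else:
            low = min(col)
            normed_cols.append([low / x for x in col])
    # transpose back to rows and apply the criterion weights
    return [sum(w * x for w, x in zip(weights, row))
            for row in zip(*normed_cols)]

# two benefit criteria and one cost criterion (e.g. price)
scores = saw([[70, 80, 300], [90, 60, 250], [80, 70, 200]],
             [0.4, 0.3, 0.3], [True, True, False])
print(scores.index(max(scores)))  # → 2 (third alternative wins here)
```

VIKOR follows a similar normalize-then-aggregate pattern but ranks by a compromise between group utility and individual regret instead of a single weighted sum.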
The document discusses using k-means clustering on a life insurance customer dataset to predict customer preferences. It first provides background on k-means clustering and its application in data mining. It then describes applying k-means to a dataset of 14,180 customer records with 10 attributes from an Albanian insurance company. This identified 5 clusters characterizing different customer segments based on attributes like gender, age, and preferred insurance product type and amount. The results help the insurance company better understand customer preferences to improve performance.
The document discusses different types of matrices used in environmental impact assessments to identify interactions between project activities and environmental factors. It describes simple matrices, stepped matrices, and weighted matrices. Simple matrices cross-reference project phases with environmental elements and can use symbols to show impact scale. Stepped matrices consider how activities relate to resources and how one action's impacts can affect other resources. Weighted matrices allow ranking impacts by assigning weights to environmental components and scoring project impacts.
GRID COMPUTING: STRATEGIC DECISION MAKING IN RESOURCE SELECTIONIJCSEA Journal
The rapid development of computer networks around the world has created new areas, especially in computer instruction processing. In grid computing, instruction processing is performed by external processors available to the system. An important topic in this area is scheduling tasks to the available external resources; however, we do not deal with that topic here. In this paper we work on strategic decision making for selecting the best alternative resources for processing instructions with respect to criteria under special conditions, where the criteria might be security, political, technical, cost, etc. Resources should be selected with respect to the processing objectives of a program's instructions. This paper combines the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to rank and select the available resources according to the relevant criteria when allocating instructions to resources. Our findings will therefore help technical managers of organizations choose and rank candidate alternatives for processing program instructions.
Heuristics for the Maximal Diversity Selection ProblemIJMER
The problem of selecting k items from among a given set of N items such that the ‘diversity’
among the k items is maximum, is a classical problem with applications in many diverse areas such as
forming committees, jury selection, product testing, surveys, plant breeding, ecological preservation,
capital investment, etc. A suitably defined distance metric is used to determine the diversity. However,
this is a hard problem, and the optimal solution is computationally intractable. In this paper we present
the experimental evaluation of two approximation algorithms (heuristics) for the maximal diversity selection problem.
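A common family of heuristics for this problem is farthest-point greedy selection; a minimal sketch (a generic heuristic for illustration, not necessarily either of the two evaluated in the paper):

```python
import itertools

def greedy_diverse(points, k):
    """Greedy heuristic for maximal diversity selection: start from the
    farthest pair, then repeatedly add the item whose minimum distance
    to the already-chosen set is largest."""
    def d(a, b):
        # Euclidean distance as the diversity metric
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # seed with the two most distant items
    a, b = max(itertools.combinations(points, 2), key=lambda p: d(*p))
    chosen = [a, b]
    while len(chosen) < k:
        rest = [p for p in points if p not in chosen]
        chosen.append(max(rest, key=lambda p: min(d(p, c) for c in chosen)))
    return chosen

pts = [(0, 0), (0.1, 0), (5, 5), (10, 0), (0, 10)]
print(greedy_diverse(pts, 3))  # picks the three mutually far-apart corners
```

The greedy rule gives no optimality guarantee in general, which is why such methods are evaluated experimentally against known benchmarks, as the abstract describes.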
1) The document describes a study that uses a causal loop diagram (CLD) to model the impact of electronic data interchange (EDI) implementation on an operations system. The CLD identifies several feedback loops involving factors like error rate, work pressure, costs, and profit.
2) Implementing EDI reduces paperwork and the potential for errors, but increasing EDI use also raises IT costs. Higher error rates can increase costs and lower customer satisfaction and profit. This can increase work pressure and further raise error rates.
3) The CLD captures these complex relationships and feedback effects to provide insights into how changes in one part of the system, like implementing EDI, can reverberate through the entire operations system.
Operations research originated during World War II when scientists applied scientific methods to military operations. It has since been applied to many domains including business, transportation, and public health. Some key OR techniques include linear programming, transportation models, assignment problems, queuing theory, simulation, and inventory control models. The OR process involves formulating the problem, developing a mathematical model, selecting data inputs, solving the model, validating the model, and implementing the solution. Models can be classified as deterministic or stochastic, descriptive, predictive, or prescriptive, static or dynamic, and analytical or simulation-based. OR aims to help management make better decisions through quantitative analysis and optimization of systems and processes.
An Explanation Framework for Interpretable Credit Scoring gerogepatton
With the recent boosted enthusiasm in Artificial Intelligence (AI) and Financial Technology (FinTech),
applications such as credit scoring have gained substantial academic interest. However, despite the ever-growing achievements, the biggest obstacle in most AI systems is their lack of interpretability. This
deficiency of transparency limits their application in different domains including credit scoring. Credit
scoring systems help financial experts make better decisions regarding whether or not to accept a loan
application so that loans with a high probability of default are not accepted. Apart from the noisy and
highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the
`right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit
Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic
decisions are understandable and coherent. A recently introduced concept is eXplainable AI (XAI), which
focuses on making black-box models more interpretable. In this work, we present a credit scoring model
that is both accurate and interpretable. For classification, state-of-the-art performance on the Home
Equity Line of Credit (HELOC) and Lending Club (LC) Datasets is achieved using the Extreme Gradient
Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework,
which provides different explanations (i.e. global, local feature-based and local instance- based) that are
required by different people in different situations. Evaluation through the use of functionally-grounded,
application-grounded and human-grounded analysis shows that the explanations provided are simple and
consistent as well as correct, effective, easy to understand, sufficiently detailed and trustworthy.
The document proposes a new method for multiple attribute decision making problems with spherical fuzzy information. It introduces the concept of spherical fuzzy sets which can better capture uncertain and inconsistent information. It then defines spherical fuzzy cross-entropy as an extension of cross-entropy between fuzzy sets. Spherical fuzzy cross-entropy measures the discrimination between two spherical fuzzy sets based on the membership, non-membership, and hesitancy degrees. Finally, it presents a multiple attribute decision making model that uses spherical fuzzy weighted cross-entropy to rank alternatives based on their distance from an ideal alternative. An example on enterprise resource planning system selection is provided to demonstrate the approach.
1. The document describes a fuzzy logic-based decision-making system to predict the risk of a person being infected with COVID-19 based on their symptoms and parameters.
2. The system uses 8 input variables like fever, cough, breathing difficulty, etc. and 1 output variable of COVID-19 prognosis. It defines membership functions and 77 fuzzy rules to relate the inputs and output.
3. Testing on real patient data yielded an accuracy of 97.2%, sensitivity of 100%, and specificity of 96.2% in predicting low, moderate, and high risk of COVID-19 infection.
The aim of this paper is to investigate different definitions of soft points in the existing literature on soft set theory and its extensions in different directions. Then limitations of these definitions are illustrated with the help of examples. Moreover, the definition of soft point in the setup of fuzzy soft set, intervalvalued fuzzy soft set, hesitant fuzzy soft set and intuitionistic soft set are also discussed. We also suggest an approach to unify the definitions of soft point which is more applicable than the existing notions.
The Ordered Weighted Averaging (OWA) operator was introduced by Yager [34] to provide a method for aggregating inputs that lie between the max and min operators. In this article we continue to present some extensions of OWA-type aggregation operators. Several variants of the generalizations of the fuzzy-probabilistic OWA operator-FPOWA (introduced by Merigo [13], [14]) are presented in the environment of fuzzy uncertainty, where different monotone measures (fuzzy measure) are used as uncertainty measures. The considered monotone measures are: possibility measure, Sugeno additive measure, monotone measure associated with Belief Structure and Choquet capacity of order two. New aggregation operators are introduced: AsFPOWA and SA-AsFPOWA. Some properties of new aggregation operators and their information measures are proved. Concrete faces of new operators are presented with respect to different monotone measures and mean operators. Concrete operators are induced by the Monotone Expectation (Choquet integral) or Fuzzy Expected Value (Sugeno Integral) and the Associated Probability Class (APC) of a monotone measure. New aggregation operators belong to the Information Structure I6 (see Part I, Section 3). For the illustration of new constructions of AsFPOWA and SA-AsFPOWA operators an example of a fuzzy decision-making problem regarding the political management with possibility uncertainty is considered. Several aggregation operators (“classic” and new operators) are used for the comparing of the results of decision making.
We present the notion of Pythagorean Fuzzy Weak Bi-Ideals (PFWBI) and interval valued Pythagorean fuzzy weak bi-ideals of Γ-near-rings and studies some of its properties. We present the notion of interval valued Pythagorean fuzzy weak bi-ideal and establish some of its properties. We study interval valued Pythagorean fuzzy weak bi-ideals of Γ-near-ring using homomorphism.
A Quadripartitioned Neutrosophic Pythagorean (QNP) set is a powerful general format framework that generalizes the concept of Quadripartitioned Neutrosophic Sets and Neutrosophic Pythagorean Sets. In this paper, we apply the notion of quadripartitioned Neutrosophic Pythagorean sets to Lie algebras. We develop the concept of QNP Lie subalgebras and QNP Lie ideals. We describe some interesting results of QNP Lie ideals.
The main concept of neutrosophy is that any idea has not only a certain degree of truth but also a degree of falsity and indeterminacy in its own right. Although there are many applications of neutrosophy in different disciplines, the incorporation of its logic in education and psychology is rather scarce compared to other fields. In this study, the Satisfaction with Life Scale was converted into the neutrosophic form and the results were compared in terms of confirmatory analysis by convolutional neural networks. To sum up, two different formulas are proposed at the end of the study to determine the validity of any scale in terms of neutrosophy. While the Lawshe methodology concentrates on the dominating opinions of experts limited by a one-dimensional data space analysis, it should be advocated that the options can be placed in three-dimensional data space in the neutrosophic analysis . The effect may be negligible for a small number of items and participants, but it may create enormous changes for a large number of items and participants. Secondly, the degree of freedom of Lawshe technique is only 1 in 3D space, whereas the degree of freedom of neutrosophical scale is 3, so researchers have to employ three separate parameters of 3D space in neutrosophical scale while a resarcher is restricted in a 1D space in Lawshe technique in 3D space. The third distinction relates to the analysis of statistics. The Lawhe technical approach focuses on the experts' ratio of choices, whereas the importance and correlation level of each item for the analysis in neutrosophical logic are analysed. The fourth relates to the opinion of experts. The Lawshe technique is focused on expert opinions, yet in many ways the word expert is not defined. In a neutrosophical scale, however, researchers primarily address actual participants in order to understand whether the item is comprehended or opposed to or is imprecise. 
In this research, an alternative technique is presented to construct a valid scale in which the scale first is transformed into a neutrosophical one before being compared using neural networks. It may be concluded that each measuring scale is used for the desired aim to evaluate how suitable and representative the measurements obtained are so that its content validity can be evaluated.
This document discusses interval-valued Atanassov's intuitionistic fuzzy sets and their application to multi-attribute group decision making. It proposes a new extension of the weighted average and ordered weighted average operators for interval-valued Atanassov's intuitionistic fuzzy sets based on the best interval representation. A total order is also introduced for interval-valued Atanassov's intuitionistic fuzzy values. The new operators and total order are used to develop a method for multi-attribute group decision making that considers uncertainty in experts' assessments at every step. Two examples are provided to demonstrate the application of the proposed method.
Examining the trend of the global economy shows that global trade is moving towards high-tech products. Given that these products generate very high added value, countries that can produce and export these products will have high growth in the industrial sector. The importance of investing in advanced technologies for economic and social growth and development is so great that it is mentioned as one of the strong levers to achieve development. It should be noted that the policy of developing advanced technologies requires consideration of various performance aspects, risks and future risks in the investment phase. Risk related to high-tech investment projects has a meaning other than financial concepts only. In recent years, researchers have focused on identifying, analyzing, and prioritizing risk. There are two important components in measuring investment risk in high-tech industries, which include identifying the characteristics and criteria for measuring system risk and how to measure them. This study tries to evaluate and rank the investment risks in advanced industries using fuzzy TOPSIS technique based on verbal variables.
The document presents a new algorithm for solving fully fuzzy transportation problems with trapezoidal fuzzy numbers. Transportation problems aim to minimize transportation costs while meeting supply and demand constraints. Existing methods often provide precise solutions rather than fuzzy solutions that account for uncertainty in costs, supplies, and demands. The proposed two-step method converts the fuzzy transportation problem into two interval transportation problems, whose optimal solutions provide the optimal fuzzy solution to the original problem. This allows the method to consider all parameters as fuzzy numbers and provide a fuzzy optimal solution and cost without negative components.
This paper presents a time series analysis of a novel coronavirus, COVID-19, discovered in China in December 2019 using intuitionistic fuzzy logic system with neural network learning capability. Fuzzy logic systems are known to be universal approximation tools that can estimate a nonlinear function as closely as possible to the actual values. The main idea in this study is to use intuitionistic fuzzy logic system that enables hesitation and has membership and non-membership functions that are optimized to predict COVID-19 outbreak cases. Intuitionistic fuzzy logic systems are known to provide good results with improved prediction accuracy and are excellent tools for uncertainty modelling. The hesitation-enabled fuzzy logic system is evaluated using COVID-19 pandemic cases for Nigeria, being part of the COVID-19 data for African countries obtained from Kaggle data repository. The hesitation-enabled fuzzy logic model is compared with the classical fuzzy logic system and artificial neural network and shown to offer improved performance in terms of root mean squared error, mean absolute error and mean absolute percentage error. Intuitionistic fuzzy logic system however incurs a setback in terms of the high computing time compared to the classical fuzzy logic system.
Hypersoft set is an extension of the soft set where there is more than one set of attributes occur and it is very much helpful in multi-criteria group decision making problem. In a hypersoft set, the function F is a multi-argument function. In this paper, we have used the notion of Fuzzy Hypersoft Set (FHSS), which is a combination of fuzzy set and hypersoft set. In earlier research works the concept of Fuzzy Soft Set (FSS) was introduced and it was applied successfully in various fields. The FHSS theory gives more flexibility as compared to FSS to tackle the parameterized problems of uncertainty. To overcome the issue where FSS failed to explain uncertainty and incompleteness there is a dire need for another environment which is known as FHSS. It works well when there is more complexity involved in the parametric data i.e the data that involves vague concepts. This work includes some basic set-theoretic operations on FHSSs and for the reliability and the authenticity of these operations, we have shown its application with the help of a suitable example. This example shows that how FHSS theory plays its role to solve real decision-making problems.
One of the most important issues that organizations have to deal with is the timely identification and detection of risk factors aimed at preventing incidents. Managers’ and engineers’ tendency towards minimizing risk factors in a service, process or design system has obliged them to analyze the reliability of such systems in order to minimize the risks and identify the probable errors. Concerning what was just mentioned, a more accurate Failure Mode and Effects Analysis (FMEA) is adopted based on fuzzy logic and fuzzy numbers. Fuzzy TOPSIS is also used to identify, rank, and prioritize error and risk factors. This paper uses FMEA as a risk identification tool. Then, Fuzzy Risk Priority Number (FRPN) is calculated and triangular fuzzy numbers are prioritized through Fuzzy TOPSIS. In order to have a better understanding toward the mentioned concepts, a case study is presented.
The mp-quantales were introduced in a previous paper as an abstraction of the lattices of ideals in mp-rings and the lattices of ideals in conormal lattices. Several properties of m-rings and conormal lattices were generalized to mp-quantales. In this paper we shall prove new characterization theorems for mp-quantales and for semiprime mp-quantales (these last structures coincide with the P F-quantales). Some proofs reflect the way in which the reticulation functor (from coherent quantales to bounded distributive lattices) allows us to export some properties from conormal lattices to mp-quantales.
Transportation Problem (TP) is an important network structured linear programming problem that arises in several contexts and has deservedly received a great deal of attention in the literature. The central concept in this problem is to find the least total transportation cost of a commodity in order to satisfy demands at destinations using available supplies at origins in a crisp environment. In real life situations, the decision maker may not be sure about the precise values of the coefficients belonging to the transportation problem. The aim of this paper is to introduce a formulation of TP involving Triangular fuzzy numbers for the transportation costs and values of supplies and demands. We propose a two-step method for solving fuzzy transportation problem where all of the parameters are represented by non-negative triangular fuzzy numbers i.e., an Interval Transportation Problems (TPIn) and a Classical Transport Problem (TP). Since the proposed approach is based on classical approach it is very easy to understand and to apply on real life transportation problems for the decision makers. To illustrate the proposed approach two application examples are solved. The results show that the proposed method is simpler and computationally more efficient than existing methods in the literature.
One of the most important issues concerning the designing a supply chain is selecting the supplier. Selecting proper suppliers is one of the most crucial activities of an organization towards the gradual improvement and a promotion in performance. This intricacy is because suppliers fulfil a part of customer’s expectancy and selecting among them is multi-criteria decision, which needs a systematic and organized approach without which this decision may lead to failure. The purpose of this research is proposing a new method for assessment and rating the suppliers. We have identified several evaluation criteria and attributes; the selection among them was by the Simple Multi-Attribute Rating Technique (SMART) method, then we have specified the connection and the influence of the criteria on each other by DEMATEL method. After that, suppliers were graded by using the Fuzzy Analytical Network Process (FANP) approach and the most efficient one was selected. The innovation of this research is combining the SMART method, DEMATEL method, and Analytical Network Process in Fuzzy state which lead to more exact and efficient results which is proposed for the first time by the researchers of this study.
Interval Type-2 Fuzzy Logic Systems (IT2 FLSs) have shown popularity, superiority, and more accuracy in performance in a number of applications in the last decade. This is due to its ability to cope with uncertainty and precisions adequately when compared with its type-1 counterpart. In this paper, an Interval Type-2 Fuzzy Logic System (IT2FLS) is employed for remote vital signs monitoring and predicting of shock level in cardiac patients. Also, the conventional, Type-1 Fuzzy Logic System (T1FLS) is applied to the prediction problems for comparison purpose. The cardiac patients’ health datasets were used to perform empirical comparison on the developed system. The result of study indicated that IT2FLS could coped with more information and handled more uncertainties in health data than T1FLS. The statistical evaluation using performance metrices indicated a minimal error with IT2FLS compared to its counterpart, T1FLS. It was generally observed that the shock level prediction experiment for cardiac patients showed the superiority of IT2FLS paradigm over T1FLS.
Statistics mainly concerned with data that may be qualitative or quantitative. Earlier we have used the notion of statistics in the classical sense where we assign values that are crisp. But in reality, we find some areas where the crisp concept is not sufficient to solve the problem. So, it seems difficult to assign a definite value for each parameter. For this, fuzzy sets and logic have been introduced to give the flexibility to analyze and classify statistical data. Moreover, we may come across such parameters that are indeterminate, uncertain, imprecise, incomplete, unknown, unsure, approximate, and even completely unknown. Intuitionistic fuzzy set explain uncertainty at some extent. But itis not sufficient to study all sorts of uncertainty present in real-life. It means that there exists data which are neutrosophic in nature. So, neutrosophic data plays a significant role to study the concept of indeterminacy present in a data without any restriction. The main objective of preparing this article is to highlighting the importance of neutrosophication of statistical data in a study to assess the symptoms related to Reproductive Tract Infections (RTIs) or Sexually Transmitted Infections (STIs) among women by sampling estimation.
In real life situations, there are many issues in which we face uncertainties, vagueness, complexities and unpredictability. Neutrosophic sets are a mathematical tool to address some issues which cannot be met using the existing methods. Neutrosophic soft matrices play a crucial role in handling indeterminant and inconsistent information during decision making process. The main focus of this article is to discuss the concept of neutrosophic sets, neutrosophic soft sets and neutrosophic soft matrices theory which are very useful and applicable in various situations involving uncertainties and imprecisions. Thereafter our intention is to find a new method for constructing a decision matrix using neutrosophic soft matrices as an application of the theory. A neutrosophic soft matrix based algorithm is considered to solve some problems in the diagnosis of a disease from the occurrence of various symptoms in patients. This article deals with patient-symptoms and symptoms-disease neutrosophic soft matrices. To come to a decision, a score matrix is defined where multiplication based on max-min operation and complementation of neutrosophic soft matrices are taken into considerations.
The escalation of COVID-19 curves is high and the researchers worldwide are working on diagnostic models, in the way this article proposes COVID-19 diagnostic model using Plithogenic cognitive maps. This paper introduces the new concept of Plithogenic sub cognitive maps including the mediating effects of the factors. The thirteen study factors are categorized as grouping factors, parametric factors, risks factors and output factor. The effect of one factor over another is measured directly based on neutrosophic triangular representation of expert’s opinion and indirectly by computing the mediating factor’s effects. This new approach is more realistic in nature as it takes the mediating effects into consideration together with contradiction degree of the factors. The possibility of children, adult and old age with risk factors and parametric factors being infected by corona virus is determined by this diagnostic model.The escalation of COVID-19 curves is high and the researchers worldwide are working on diagnostic models, in the way this article proposes COVID-19 diagnostic model using Plithogenic cognitive maps. This paper introduces the new concept of Plithogenic sub cognitive maps including the mediating effects of the factors. The thirteen study factors are categorized as grouping factors, parametric factors, risks factors and output factor. The effect of one factor over another is measured directly based on neutrosophic triangular representation of expert’s opinion and indirectly by computing the mediating factor’s effects. This new approach is more realistic in nature as it takes the mediating effects into consideration together with contradiction degree of the factors. The possibility of children, adult and old age with risk factors and parametric factors being infected by corona virus is determined by this diagnostic model.
More from Journal of Fuzzy Extension and Applications (20)
Sexuality - Issues, Attitude and Behaviour - Applied Social Psychology - Psyc...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita...Advanced-Concepts-Team
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
Basics of crystallography, crystal systems, classes and different forms
Journal of Fuzzy Extension and Applications
www.journal-fea.com
J. Fuzzy. Ext. Appl. Vol. 2, No. 2 (2021) 130-143.
E-ISSN: 2717-3453 | P-ISSN: 2783-1442
http://dx.doi.org/10.22105/jfea.2021.275084.1080

Paper Type: Research Paper

New View of Fuzzy Aggregations. Part I: General Information Structure for Decision-Making Models

Gia Sirbiladze*
Department of Computer Sciences, Javakhishvili Tbilisi State University, Tbilisi; gia.sirbiladze@tsu.ge

Received: 25/02/2021 | Reviewed: 09/04/2021 | Revised: 17/05/2021 | Accepted: 01/06/2021

Citation: Sirbiladze, G. (2021). New view of fuzzy aggregations. Part I: General information structure for decision-making models. Journal of Fuzzy Extension and Application, 2(2), 130-143.

Abstract

The Ordered Weighted Averaging (OWA) operator was introduced by Yager [57] to provide a method for aggregating inputs that lie between the max and min operators. In this article, two variants of probabilistic extensions of the OWA operator, POWA and FPOWA (introduced by Merigo [26] and [27]), are considered as a basis for our generalizations in the environment of fuzzy uncertainty (Parts II and III of this work), where different monotone measures (fuzzy measures) are used as uncertainty measures instead of the probability measure. For the identification of the "classic" OWA operator and the new aggregation operators (presented in Parts II and III), an Information Structure is introduced in which the incomplete information available in a general decision-making system is represented as a condensation of an uncertainty measure, an imprecision variable and an objective function of weights.

Keywords: Mean aggregation operators, Fuzzy aggregations, Fuzzy measure, Fuzzy numbers, Fuzzy decision making.

Licensee Journal of Fuzzy Extension and Applications. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0).

1 | Introduction

It is well recognized that intelligent decision support systems and technologies have been playing an important role in improving almost every aspect of human society. Intensive study over the past several years has resulted in significant progress in both the theory and applications of optimization and decision sciences.

Optimization and decision-making problems are traditionally handled by either the deterministic or the probabilistic approach. When working with complex systems, in parallel with the classical approaches to their modelling, the most important matter is to account for fuzziness ([3], [6], [13], [15]-[32], [35]-[43], [49]-[62] and others). All this is connected with the complexity of studying complex and vague processes and events in nature and society, which are caused by a lack or
shortage of objective information and when expert data are essential for construction of credible decisions.
With the growth of complexity of information our ability to make credible decisions from possible
alternatives with complex states of nature reduces to some level, below which some dual characteristics
such as precision and certainty become mutually conflicting ([3], [11], [20]-[22], [36]-[38], [41], [49], [51],
[54], [55] and others). When working on real, complex decision systems, an exact or stochastic quantitative analysis is often inconvenient, so the use of fuzzy methods becomes necessary: a systems approach to the development of the information structure of the investigated decision system [20], [36], [37] with combined fuzzy-stochastic uncertainty enables us to construct convenient intelligent decision-support instruments. Obviously, the source for obtaining combined objective + fuzzy + stochastic samplings is the population of fuzzy characteristics of experts' knowledge ([22], [36], [38], [42], [51] and others). Our research is concerned with the quantitative-information analysis of this complex uncertainty and its use for modelling more precise decisions with minimal decision risks from the point of view of systems research. The main objects of our attention are 1) the analysis of the Information Structures of experts' knowledge, their uncertainty measure and imprecision variable, and 2) the construction of aggregation operators, which condense both characteristics of incomplete information, an uncertainty measure and an imprecision variable, into the scalar ranking values of the possible alternatives in the decision-making system. The first problem is considered in this paper. The second problem will be presented in Parts II and III of this work.
Making decisions under uncertainty is a pervasive task faced by many Decision-Making Persons (DMP),
experts, investigators or others. The main difficulty is that a selection must be made between alternatives
in which the choice of alternative doesn’t necessarily lead to well determined payoffs (experts’ valuations,
utilities and so on) to be received as a result of selecting an alternative. In this case the DMP is faced with the problem of comparing multifaceted objects whose complexity often exceeds his/her ability to compare uncertain alternatives. One approach to addressing this problem is to use valuation functions (or aggregation operators). These valuation functions convert the multifaceted uncertain outcome associated with an alternative into a single (scalar) value. This value provides a characterization of the DMP's or expert's perception of the worth of the possible uncertain alternative being evaluated. The problems of Decision
Making Under Uncertainty (DMUU) [51] were discussed and investigated by many well-known authors
([1]-[6], [9], [10], [13], [15]-[18], [23]-[60], [62] and others). In this work our focus is directed to the construction of new generalizations of the Ordered Weighted Averaging (OWA) aggregation operator in the fuzzy-probabilistic uncertainty environment.
In Section 2 some preliminary concepts are presented: the OWA operator; the arithmetic of triangular fuzzy numbers; and some extensions of the OWA operator, the POWA and FPOWA operators under probabilistic uncertainty (developed by Merigo [26] and [27]), together with their information measures (Section 3). In Section 4 a new conceptual Information Structure (IS) of a General Decision-Making System (GDMS) with fuzzy-probabilistic uncertainty is defined. This IS classifies some aggregation operators and the new generalizations of the OWA operator defined in parts II and III of this work.
2|On the OWA Operator and Some of Its Fuzzy-Probabilistic Generalizations
In this type of problem, the DMP has a collection D = {d_1, d_2, ..., d_n} of possible uncertain alternatives from which he must select one, or some ranking of decisions by some expert's preference relation values. Associated with this problem is a variable of characteristics, activities, symptoms and so on, which acts on the decision procedure. This variable is normally called the state of nature, which affects the payoffs, utilities, valuations and others of the DMP's preferences or subjective activities. This variable is assumed to take its values (the states of nature) from some set S = {s_1, s_2, ..., s_m}. As a result, the DMP knows that if he selects d_i and the state of nature assumes the value s_j, then his payoff (valuation, utility and so on) is a_ij. The objective of the decision is to select the "best" alternative and get the biggest payoff (valuation, utility and so on). But in DMUU [51] the selection procedure becomes more difficult. In this case each alternative can be seen as
corresponding to a row vector of possible payoffs. To make a choice, the DMP must compare these vectors, a problem which generally doesn't lead to a compelling solution. Assume d_i and d_k are two alternatives such that a_ij >= a_kj for all j, j = 1, 2, ..., m (Table 1). In this case there is no reason to select d_k. In this situation we shall say that d_i dominates d_k (d_i ≻ d_k). Furthermore, if there exists one alternative that dominates all the alternatives, then it will be the optimal solution and, as a result, we call it Pareto optimal. Faced with the general difficulty of comparing vector payoffs, we must provide some means of comparing these vectors. Our focus in this work is on the construction of a valuation function (aggregation operator) F that can take a collection of m values and convert it into a single value, F: R^m -> R^1.
Once we apply this function to each of the alternatives, we select the alternative with the largest scalar value. The construction of F involves considerations of two aspects. The first is the satisfaction of some rational, objective properties naturally required of any function used to convert (aggregate) a vector of payoffs (valuations, utilities and so on) into an equivalent scalar value. The second aspect is the inclusion of characteristics particular to the DMP's subjective properties or preferences, dependences with respect to risks and other main external factors.
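The row-wise dominance test just described can be sketched directly; the payoff rows below are invented for illustration:

```python
def dominates(row_i, row_k):
    # d_i dominates d_k when a_ij >= a_kj for every state of nature s_j
    return all(a_ij >= a_kj for a_ij, a_kj in zip(row_i, row_k))

d_i = [4, 7, 5]   # payoffs of alternative d_i over states s_1, s_2, s_3
d_k = [3, 7, 2]   # payoffs of alternative d_k
print(dominates(d_i, d_k))  # True: there is no reason to select d_k
print(dominates(d_k, d_i))  # False: d_k does not dominate d_i
```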
Table 1. Decision matrix.

        s_1    s_2    ...   s_k    ...   s_m
 d_1    a_11   a_12   ...   a_1k   ...   a_1m
 d_2    a_21   a_22   ...   a_2k   ...   a_2m
 ...    ...    ...    ...   ...    ...   ...
 d_i    a_i1   a_i2   ...   a_ik   ...   a_im
 ...    ...    ...    ...   ...    ...   ...
 d_n    a_n1   a_n2   ...   a_nk   ...   a_nm

First, we shall consider the objective properties required of the valuation function (aggregation operator) F [51].
1) The first property is the satisfaction of Pareto optimality. To ensure this, we require that if a_ij >= a_kj for j = 1, 2, ..., m, then

 F(a_i1, a_i2, ..., a_im) >= F(a_k1, a_k2, ..., a_km).   (1)

An aggregation operator satisfying this condition is said to be monotonic.
2) A second condition is that the value of an alternative should be bounded by its best payoff (valuation, utility) and its worst possible one:

 min_{j=1,...,m} {a_ij} <= F(a_i1, a_i2, ..., a_im) <= max_{j=1,...,m} {a_ij},  i = 1, 2, ..., n.   (2)

This condition is said to be bounded.
3) Remark: if a_ij = a_i for all j, then from Eq. (2) min_{j=1,...,m} {a_ij} = max_{j=1,...,m} {a_ij} = a_i and F(a_i1, a_i2, ..., a_im) = a_i.
This condition is said to be idempotent.
4) The final objective condition is that the indexing of the states of nature shouldn't affect the answer:

 F(a_i1, a_i2, ..., a_im) = F(Permutation(a_i1, a_i2, ..., a_im)),   (3)

where Permutation(·) is some permutation of the set {a_i1, a_i2, ..., a_im}. An aggregation function satisfying this is said to be symmetric (or commutative).
Finally, we have required that our aggregation function satisfy four conditions: monotonicity, boundedness,
idempotency and symmetricity. Such functions are called mean or averaging operators [51].
In determining which of the many possible aggregation operators to select as our valuation function, we
need some guidance from the DMP. The choice of a valuation function, from among the aggregation
operators is essentially a “subjective” act reflecting the preferences of the DMP for one vector of payoffs
over another. What is needed are tools and procedure to enable a DMP to reflect their subjective
preferences into valuations. There are important problems in expert knowledge engineering for which we
often use such intelligent technologies as neural networks, machine learning, fuzzy logic control systems,
knowledge representations and others.
These problems may be solved by introducing information measures of aggregation operators ([1], [2], [4],
[12], [13], [15], [16], [26]-[33], [35], [38], [40]-[42], [45]-[60], [62] and others). In this paper we will present
new extensions of information measures of operators constructed bellow.
As an example, we present some mean aggregation operators. Assume we have an m-tuple of values {a_1, a_2, ..., a_m}. Then F(a_1, a_2, ..., a_m) = min_{i=1,...,m} {a_i} is one mean aggregation operator. The use of the Min operator corresponds to a pessimistic attitude, one in which the DMP assumes the worst thing will happen. Another example of a mean aggregation operator is F(a_1, a_2, ..., a_m) = max_{i=1,...,m} {a_i}. Here we have very optimistic valuations. Another example is the simple average:

 Mean(a_1, a_2, ..., a_m) = (1/m) · sum_{i=1}^{m} a_i.
In [57] Yager introduced a class of mean operators called the OWA operator.
Definition 1. [57]. An OWA operator of dimension m is a mapping OWA: R^m -> R^1 that has an associated weighting vector W of dimension m with w_j ∈ [0, 1] and sum_{j=1}^{m} w_j = 1, such that

 OWA(a_1, ..., a_m) = sum_{j=1}^{m} w_j b_j,   (4)

where b_j is the jth largest of the {a_i}, i = 1, 2, ..., m.
Note that different properties could be studied, such as the distinction between descending and ascending orders, different measures for characterizing the weighting vector and different families of the OWA operator ([1], [4], [26]-[33], [45], [47]-[52], [56], [57], [59], [60], [62] and others).
The OWA operator and its modifications are among the best-known mean aggregation operators for the construction of DMUU valuation functions. These aggregations are generalizations of such known instruments
as Choquet Integral ([5], [7], [23], [38], [41], [51], [53], [54], [57] and others), Sugeno integral ([14], [17],
[24], [25], [36], [42], [44] and others) or induced mean functions ([2], [12], [60], [62] and others).
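As a sketch of Definition 1, Eq. (4) can be computed directly; the argument values and weighting vectors below are illustrative, not taken from the article. Note how the extreme weighting vectors recover the Min, Max and Mean operators discussed above.

```python
def owa(values, weights):
    """OWA operator of Eq. (4): a weighted sum over the arguments
    sorted in descending order; weights are non-negative and sum to 1."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    b = sorted(values, reverse=True)  # b_j is the jth largest argument
    return sum(w_j * b_j for w_j, b_j in zip(weights, b))

a = [0.4, 0.9, 0.1, 0.6]
print(owa(a, [1, 0, 0, 0]))    # pure "or" (Max operator): 0.9
print(owa(a, [0, 0, 0, 1]))    # pure "and" (Min operator): 0.1
print(owa(a, [0.25] * 4))      # simple average, approximately 0.5
```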
Fuzzy Numbers (FNs) have been studied by many authors ([11], [19] and others). An FN can represent an imprecision variable of incomplete information in a more complete way, because it can take into account the maximum and the minimum and the possibility that intermediate values may occur.
Definition 2. [19]. A function ã(t): R^1 -> [0, 1] is called a Fuzzy Number (FN), which can be considered as a generalization of an interval number:

          | 1                         if t ∈ [a'_2, a''_2],
          | (t - a_1)/(a'_2 - a_1)    if t ∈ [a_1, a'_2],
 ã(t) =   | (a_3 - t)/(a_3 - a''_2)   if t ∈ [a''_2, a_3],
          | 0                         otherwise,
                                                             (5)

where a_1 <= a'_2 <= a''_2 <= a_3, a_i ∈ R^1.
In the following we review the arithmetic operations on triangular FNs (TFNs) [20] (in Eq. (5), a'_2 = a''_2 = a_2). Let ã and b̃ be two TFNs, where ã = (a_1, a_2, a_3) and b̃ = (b_1, b_2, b_3). Then:

 1: ã + b̃ = (a_1 + b_1, a_2 + b_2, a_3 + b_3);
 2: ã - b̃ = (a_1 - b_3, a_2 - b_2, a_3 - b_1);
 3: ã · k = (k·a_1, k·a_2, k·a_3), k > 0;
 4: ã^k = (a_1^k, a_2^k, a_3^k), k > 0, a_i > 0;
 5: ã · b̃ = (a_1·b_1, a_2·b_2, a_3·b_3), a_1 > 0, b_1 > 0;
 6: b̃^(-1) = (1/b_3, 1/b_2, 1/b_1), b_1 > 0;
 7: ã > b̃ if a_2 > b_2; if a_2 = b_2, then ã > b̃ if (a_1 + a_3)/2 > (b_1 + b_3)/2, and otherwise ã = b̃.
                                                             (6)

The set of all TFNs is denoted by ψ and the set of positive TFNs (a_i > 0) by ψ⁺.
Note that other operations and ranking methods could be studied ([19] and others).
Now we consider some extensions of the OWA operator, mainly developed by [26], [27] and [29], because our future investigations concern extensions of Merigo's aggregation operators constructed on the basis of the OWA operator.
Definition 3. [29]. Let ψ be the set of TFNs. A fuzzy OWA operator (FOWA) of dimension m is a mapping FOWA: ψ^m -> ψ that has an associated weighting vector W of dimension m with w_j ∈ [0, 1] and sum_{j=1}^{m} w_j = 1, such that

 FOWA(ã_1, ã_2, ..., ã_m) = sum_{j=1}^{m} w_j b̃_j,   (7)

where b̃_j is the jth largest of the {ã_i}_{i=1}^{m}, and ã_i ∈ ψ, i = 1, 2, ..., m.
The FOWA operator is an extension of the OWA operator that uses imprecise information in the arguments, represented in the form of TFNs. The reason for using this aggregation operator is that sometimes the available information presented by the DMP and formalized in payoffs (valuations, utilities and others) can't be assessed with exact numbers and it is necessary to use other techniques such as TFNs. So, in this aggregation, incomplete information is presented by an imprecision variable of the expert's reflections and formalized in TFNs. Sometimes the available information presented by the DMP (or expert) also has an uncertain character, which is presented by a probability distribution on the states of nature consequent on the payoffs of the DMP.
Fuzzy-probabilistic aggregations based on the OWA operator were constructed by Merigo and others. We present one of the variants here:

 POWA(a_1, a_2, ..., a_m) = sum_{j=1}^{m} p̂_j b_j,   (8)

where b_j is the jth largest of the {a_i}, i = 1, 2, ..., m; each argument a_i has an associated probability p_i with sum_{i=1}^{m} p_i = 1 and 0 <= p_i <= 1; p̂_j = β·w_j + (1 - β)·p_j with β ∈ [0, 1], where p_j is the probability p_i reordered according to b_j, that is, according to the jth largest of the a_i.
Note that if β = 0, we get the usual probabilistic mean aggregation (the mathematical expectation E_p with respect to the probability distribution {p_i}_{i=1}^{m}), and if β = 1, we get the OWA operator. An equivalent representation of Eq. (8) may be defined as:

 POWA(a_1, a_2, ..., a_m) = β·sum_{j=1}^{m} w_j b_j + (1 - β)·sum_{i=1}^{m} p_i a_i
                          = β·OWA(a_1, a_2, ..., a_m) + (1 - β)·E_p(a_1, a_2, ..., a_m).   (9)

We often use probabilistic information in decision-making systems and, consequently, in their aggregation operators. Many fuzzy-probabilistic aggregations have been researched in OWA and other operators ([5], [17], [18], [26]-[32], [35]-[42], [49]-[53], [59], [60], [62] and others). In the following we present one of them, defined in [27]:
Definition 4. [27]. Let ψ be the set of TFNs. A fuzzy-probabilistic OWA operator (FPOWA) of dimension m is a mapping FPOWA: ψ^m -> ψ that has an associated weighting vector W of dimension m with w_j ∈ [0, 1] and sum_{j=1}^{m} w_j = 1, according to the following formula:

 FPOWA(ã_1, ã_2, ..., ã_m) = sum_{j=1}^{m} p̂_j b̃_j,   (10)

where b̃_j is the jth largest of the {ã_i}_{i=1}^{m}, which are TFNs, and each ã_i has an associated probability p_i = P(ã = ã_i), with sum_{i=1}^{m} p_i = 1 and 0 <= p_i <= 1; p̂_j = β·w_j + (1 - β)·p'_j, β ∈ [0, 1], where p'_j is the probability reordered according to b̃_j (p'_j = P(ã = b̃_j)), that is, according to the jth largest of the {ã_i}_{i=1}^{m}.
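A minimal computational sketch of Definition 4, using the componentwise TFN operations of Eq. (6) and ranking rule 7 for the reordering; the TFNs, probabilities, weights and β below are invented for illustration.

```python
def tfn_add(a, b):    # Eq. (6), operation 1: componentwise TFN addition
    return tuple(x + y for x, y in zip(a, b))

def tfn_scale(a, k):  # Eq. (6), operation 3: product with a scalar k > 0
    return tuple(k * x for x in a)

def fpowa(tfns, probs, weights, beta):
    """FPOWA of Eq. (10): reorder the TFNs by rule 7 of Eq. (6)
    (center first, then the mean of the endpoints) and aggregate
    with p_hat_j = beta * w_j + (1 - beta) * p'_j."""
    order = sorted(range(len(tfns)), reverse=True,
                   key=lambda i: (tfns[i][1], (tfns[i][0] + tfns[i][2]) / 2))
    result = (0.0, 0.0, 0.0)
    for j, i in enumerate(order):
        p_hat = beta * weights[j] + (1 - beta) * probs[i]
        result = tfn_add(result, tfn_scale(tfns[i], p_hat))
    return result

tfns = [(0.2, 0.4, 0.6), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5)]
probs = [0.5, 0.3, 0.2]      # p_i associated with each argument
weights = [0.4, 0.3, 0.3]    # OWA weighting vector W
print(fpowa(tfns, probs, weights, beta=0.0))  # expectation E_p, ~(0.27, 0.47, 0.67)
print(fpowa(tfns, probs, weights, beta=1.0))  # pure FOWA, ~(0.29, 0.49, 0.69)
```

With β = 0 the order of the arguments no longer matters and the result is the fuzzy mathematical expectation, matching the note after Eq. (9).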
Analogously to Eq. (9), we present the equivalent form of the FPOWA operator as a weighted sum of the OWA operator and the mathematical expectation E_p:

 FPOWA(ã_1, ã_2, ..., ã_m) = β·sum_{j=1}^{m} w_j b̃_j + (1 - β)·sum_{i=1}^{m} p_i ã_i
                           = β·OWA(ã_1, ã_2, ..., ã_m) + (1 - β)·E_p(ã_1, ã_2, ..., ã_m).   (11)

In [27] the semi-boundary condition of the aggregation operator (11) was proved. The semi-boundary condition of some operator F is defined as:

 β·min_i {ã_i} + (1 - β)·E_p(ã_1, ..., ã_m) <= F(ã_1, ..., ã_m) <= β·max_i {ã_i} + (1 - β)·E_p(ã_1, ..., ã_m).   (12)

So, the FPOWA operator is monotonic, bounded, idempotent, symmetric and semi-bounded.
3|On the Information Measures of the POWA and FPOWA Operators
As preliminary concepts of our investigation, we present four probabilistic information measures of the POWA and FPOWA operators defined in [27], following a methodology similar to the one developed for the OWA operator ([1], [2], [3], [6], [47], [48], [50], [52] and others).
The Orness parameter classifies the POWA and FPOWA operators with regard to their location between the "and" and the "or":

 α(p̂_1, p̂_2, ..., p̂_m) = β·sum_{j=1}^{m} w_j (m - j)/(m - 1) + (1 - β)·sum_{j=1}^{m} p'_j (m - j)/(m - 1).   (13)

The Entropy (dispersion) measures the amount of information being used in the aggregation:

 H(p̂_1, p̂_2, ..., p̂_m) = -(β·sum_{j=1}^{m} w_j ln w_j + (1 - β)·sum_{i=1}^{m} p_i ln p_i).   (14)

The Divergence of the weighting vector W measures the divergence of the weights against the degree of Orness:

 Div(p̂_1, p̂_2, ..., p̂_m) = β·sum_{j=1}^{m} w_j ((m - j)/(m - 1) - α(W))^2 + (1 - β)·sum_{j=1}^{m} p'_j ((m - j)/(m - 1) - α(P))^2,   (15)

where α(W) is the Orness measure of the OWA or FOWA operators (β = 1):

 α(W) = sum_{j=1}^{m} w_j (m - j)/(m - 1),   (16)

and α(P) is the Orness measure of the fuzzy-probabilistic aggregation (β = 0):

 α(P) = sum_{j=1}^{m} p'_j (m - j)/(m - 1).   (17)

The Balance parameter measures the balance of the weights against the Orness or the Andness:

 Bal(p̂_1, p̂_2, ..., p̂_m) = β·sum_{j=1}^{m} w_j (m + 1 - 2j)/(m - 1) + (1 - β)·sum_{j=1}^{m} p'_j (m + 1 - 2j)/(m - 1).   (18)
4|General Decision-Making System (GDMS) and Its Information Structure (IS)
In parts II and III of this work we will focus on the construction of new generalizations of the POWA and FPOWA fuzzy-probabilistic aggregation operators induced by the ME (Choquet integral [5], [7], [23], [38], [41], [51], [53], [54], [57] and others) or the FEV (Sugeno integral [14], [17], [24], [25], [36], [42], [44] and others) with respect to different monotone measures (fuzzy measures [8], [14], [21], [22], [36]-[38], [43], [44], [53]-[55], [61] and others). When trying to functionally describe insufficient expert data, in many real situations the property of additivity remains unrevealed for a measurable representation of a set, and this creates an additional restriction. Hence, to study such data, it is better to use monotone measures (estimators) instead of additive ones. So, we will construct new generalizations of the POWA and FPOWA operators with respect to different monotone measures (instead of the probability measure) and different mean operators.
We introduce the definition of a monotone measure (fuzzy measure) [44] adapted to the case of a finite referential.
Definition 5. Let S = {s_1, s_2, ..., s_m} be a finite set and g be a set function g: 2^S -> [0, 1]. We say g is a monotone measure on S if it satisfies

 (i) g(∅) = 0, g(S) = 1;
 (ii) for all A, B ⊆ S, if A ⊆ B, then g(A) <= g(B).

A monotone measure is a normalized and monotone set function. It can be considered as an extension of the probability concept, where additivity is replaced by the weaker condition of monotonicity. Non-additive but monotone measures were first used in fuzzy analysis in the 1980s [44] and are well
investigated ([8], [14], [21], [22], [36]-[38], [43], [44], [53]-[55], [61] and others). Therefore, in order to classify OWA-type aggregation operators with probabilistic uncertainty (the POWA and FPOWA operators and others) or with fuzzy uncertainty (defined in parts II and III), it is necessary to define an information structure of these operators. The different cases of incompleteness (uncertainty measure + imprecision variable) and objectivity (objective weighting function) will be considered in our new aggregation operators. Therefore, from the point of view of the systems approach, it is necessary to describe and formally present the scheme of a GDMS in an uncertain-objective environment. The GDMS gives us the possibility to identify the different levels of incompleteness and objectivity of the available information, which as a whole define the aggregation procedure.
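A minimal sketch of Definition 5: the checker below verifies the boundary and monotonicity conditions of a set function on a small finite referential; the possibility measure used as the example is an assumption for illustration.

```python
from itertools import chain, combinations

def subsets(S):
    xs = list(S)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def is_monotone_measure(g, S):
    """Definition 5: g(empty set) = 0, g(S) = 1, and A subset of B
    implies g(A) <= g(B) for all subsets A, B of S."""
    if g(frozenset()) != 0 or g(frozenset(S)) != 1:
        return False
    subs = subsets(S)
    return all(g(A) <= g(B) for A in subs for B in subs if A <= B)

S = {"s1", "s2", "s3"}
poss = {"s1": 0.4, "s2": 1.0, "s3": 0.7}            # possibility distribution
g = lambda A: max((poss[s] for s in A), default=0)  # possibility measure
print(is_monotone_measure(g, S))                    # True
print(is_monotone_measure(lambda A: 0.5, S))        # False: g(empty set) != 0
```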
Now we define the general decision-making system and its information structure which will be
considered in the aggregation problems of parts II and III.
Definition 6. The GDMS that combines decision-making technologies and methods of construction of decision functions (aggregation operators) may be presented by the following 8-tuple:

 <D, S, a, g, W, I, F, Im>,   (19)

where D = {d_1, d_2, ..., d_n} is a set of all possible alternatives (decisions, diagnoses and so on) that are made by a Decision-Making Person (DMP).
S = {s_1, s_2, ..., s_m} is a set of the system's states of nature (actions, activities, factors, symptoms and so on) that act on the possible alternatives in the decision procedure.
a is an imprecision or precision variable of payoffs (utilities, valuations, some degrees of satisfaction of a fuzzy set, prices and so on), which will be defined by the DMP's subjective properties of preferences, dependences with respect to risks and other external factors. As a result, the variable a constructs some decision matrix (binary relation) on D × S.
g is an uncertainty measure on 2^S (g: 2^S -> [0, 1]). In our case it may be some monotone measure.
W is an objective weighting function (or vector) on the states of nature S.
I is the Information Structure on the data of the states of nature. Cases of different levels of information incompleteness (uncertainty measure + imprecision variable) and objectivity (objective weighting function) on the states of nature will be considered as:
I = Information Structure (on S) = imprecision (on S) + uncertainty (on S) + objectivity (on S), where:
Imprecision on S may be presented by some inexact (stochastic, fuzzy, fuzzy-stochastic or other) variable.
Uncertainty on S may be presented by the levels of belief, credibility, probability, possibility and other monotone measures on 2^S. These levels identify the possibility of occurrence of some groups (events, focal elements and others) of the states of nature.
Objectivity on S is defined by the objective importance of the states of nature in the decision-making procedure. As usual, the objective function is presented by a weighting function (vector) W: S -> R^+_0.
Now we may classify the cases of the Information Structure I:
I1: The case:
Imprecision is presented by some exact variable a: S -> R^1.
The measure of uncertainty does not exist.
Objectivity is presented by the weights W = {w_1, w_2, ..., w_m}.
Examples: the OWA and MEAN operators belong to I1.
I2: The case:
Imprecision is presented by some fuzzy variable: ã ∈ ψ; ã: S -> [0, 1].
The measure of uncertainty does not exist.
Objectivity is presented by the weights W = {w_1, w_2, ..., w_m}.
Example: the FOWA operator belongs to I2.
I3: The case:
Imprecision is presented by some stochastic variable: a: S -> R^1.
The measure of uncertainty is presented by the concerning probability distribution on S (P: 2^S -> [0, 1]), p_i = P{s_i}, i = 1, 2, ..., m.
Objectivity is presented by the weights W = {w_1, w_2, ..., w_m}.
Example: the POWA operator belongs to I3.
I4: The case:
Imprecision is presented by some fuzzy-stochastic variable: ã ∈ ψ; ã: S -> [0, 1].
The measure of uncertainty is presented by the concerning probability distribution on S (P: 2^S -> [0, 1]), p_i = P{s_i}, i = 1, 2, ..., m.
Objectivity is presented by the weights W = {w_1, w_2, ..., w_m}.
Example: the FPOWA operator belongs to I4.
I5: The case:
Imprecision is presented by some exact variable: a: S -> R^1.
The measure of uncertainty is defined by some monotone measure (a possibility measure [11], [14], [21], [22], a λ-additive measure [44] and so on): g: 2^S -> [0, 1].
Objectivity is presented by the weights W = {w_1, w_2, ..., w_m}.
Examples: the SEV (Yager [51]) operator belongs to I5; the SEV-POWA, AsPOWA, SA-POWA and SA-AsPOWA operators (to be defined in part II of this work) belong to I5.
I6: The case:
Imprecision is presented by some fuzzy variable: ã ∈ ψ; ã: S -> [0, 1].
The measure of uncertainty is presented by some monotone measure g: 2^S -> [0, 1].
Objectivity is presented by the weights W = {w_1, w_2, ..., w_m}.
Examples: the SEV-FOWA, AsFPOWA and SA-AsFPOWA operators (to be defined in part III of this work) belong to I6.
Note that some other cases may be considered in the Information Structure I (for example, cases in which the weights are not present in the structure, and others).
7) F is an aggregation operator (in our case of OWA type) for ranking the possible alternatives by the outcome values calculated by F. Following the Information Structure I on the states of nature, for every possible alternative d ∈ D, F(d) is a ranking value. In general, F(d) is defined as the converted (or condensed) information of the imprecision values plus the uncertainty measure and the objective weights:

 F(d) = aggregation(a(d), g, W).

We say that alternative d_j is more preferred (dominant) than d_k, d_j ≻ d_k, if F(d_j) > F(d_k), and that d_j is equivalent to d_k, d_j ~ d_k, if F(d_j) = F(d_k). So, the aggregation operator F induces some preference binary relation ≻ on all the possible alternatives D.
8) Im is a set of information measures of an aggregation operator F:

 Im = {Orness, Dispersion, Divergence, Balance}.   (20)

In order to classify OWA-type aggregation operators {F} it is necessary to investigate their information measures (Eq. (20)). This analysis also gives us some information on the inherent subjectivity of the choice of the decision aggregation operator by the DMP [6].
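As an illustrative sketch of item 7, the scalar values F(d) induce the preference ranking over D; here F is taken to be the POWA operator of Eq. (8), and all payoffs, probabilities, weights and β are invented.

```python
def powa(values, probs, weights, beta):
    """POWA of Eq. (8): p_hat_j = beta * w_j + (1 - beta) * p'_j over
    the descending reordering of the arguments."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    return sum((beta * weights[j] + (1 - beta) * probs[i]) * values[i]
               for j, i in enumerate(order))

payoffs = {"d1": [7, 2, 9], "d2": [5, 5, 5], "d3": [1, 8, 6]}  # decision matrix rows
probs, weights, beta = [0.2, 0.5, 0.3], [0.5, 0.3, 0.2], 0.4

F = {d: powa(row, probs, weights, beta) for d, row in payoffs.items()}
ranking = sorted(F, key=F.get, reverse=True)  # d_j preferred to d_k iff F(d_j) > F(d_k)
print(ranking)  # ['d3', 'd1', 'd2']
```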
5|Conclusion
This paper has a conceptual and introductory character. The main preliminary concepts were presented. Definitions of the OWA operator and of the POWA and FPOWA operators, as fuzzy-probabilistic extensions of the OWA operator, were introduced. Their information measures, Orness, Entropy, Divergence and Balance, were considered. From the point of view of the systems approach, the scheme of a GDMS in an uncertain-objective environment and its Information Structure were described and formally presented. The new GDMS gives us the possibility to identify the different levels of incompleteness and objectivity of the available information, which as a whole define the aggregation procedure. The main results on the construction of new generalizations of the POWA and FPOWA operators will be presented in Parts II and III of this work.
Acknowledgment
This work was supported by Shota Rustaveli National Science Foundation of Georgia (SRNSFG) [FR-
18-466].
References
Beliakov, G. (2005). Learning weights in the generalized OWA operators. Fuzzy optimization and decision
making, 4(2), 119-130. https://doi.org/10.1007/s10700-004-5868-3
Beliakov, G., Pradera, A., & Calvo, T. (2007). Aggregation functions: A guide for practitioners (Vol. 221).
Heidelberg: Springer.
Bellman, R. E., & Zadeh, L. A. (1970). Decision-making in a fuzzy environment. Management science, 17(4), B-141-B-164. https://doi.org/10.1287/mnsc.17.4.B141
Calvo, T., & Beliakov, G. (2008). Identification of weights in aggregation operators. In Fuzzy sets and their
extensions: representation, aggregation and models (pp. 145-162). Berlin, Heidelberg: Springer.
https://doi.org/10.1007/978-3-540-73723-0_8
De Campos Ibañez, L. M., & Carmona, M. J. B. (1989). Representation of fuzzy measures through
probabilities. Fuzzy sets and systems, 31(1), 23-36. https://doi.org/10.1016/0165-0114(89)90064-X
Carlsson, C., & Fullér, R. (2012). Fuzzy reasoning in decision making and optimization. Physica-Verlag
Heidelberg-New York.
Choquet, G. (1954). Theory of capacities. Annals of the Fourier Institute, 5, 131-295.
DOI: https://doi.org/10.5802/aif.53
Denneberg, D. (2013). Non-additive measure and integral (Vol. 27). Springer Science & Business Media.
Dong, Y., Xu, Y., Li, H., & Feng, B. (2010). The OWA-based consensus operator under linguistic
representation models using position indexes. European journal of operational research, 203(2), 455-463.
https://doi.org/10.1016/j.ejor.2009.08.013
Dubois, D., Marichal, J. L., Prade, H., Roubens, M., & Sabbadin, R. (2001). The use of the discrete Sugeno
integral in decision-making: A survey. International journal of uncertainty, fuzziness and knowledge-based
systems, 9(05), 539-561. https://doi.org/10.1142/S0218488501001058
Dubois, D., & Prade, H. (2007). Possibility theory. Scholarpedia, 2(10), 2074.
Mesiar, R., Calvo, T., & Mayor, G. (2002). Aggregation operators: new trends and applications. Physica-Verlag.
Gil-lafuente, A. M., & Merigo-lindahl, J. M. (Eds.). (2010). Computational Intelligence in Business and
Economics. Proceedings of The Ms' 10 International Conference (Vol. 3). World Scientific.
Grabisch, M., Sugeno, M., & Murofushi, T. (2010). Fuzzy measures and integrals: theory and applications.
Heidelberg: Physica. http://hdl.handle.net/10637/3294
Greco, S., Pereira, R. A. M., Squillante, M., & Yager, R. R. (Eds.). (2010). Preferences and Decisions: Models and
Applications (Vol. 257). Springer.
Kacprzyk, J., & Zadrożny, S. (2009). Towards a general and unified characterization of individual and
collective choice functions under fuzzy and nonfuzzy preferences and majority via the ordered weighted
average operators. International journal of intelligent systems, 24(1), 4-26. https://doi.org/10.1002/int.20325
Kandel, A. (1980). On the control and evaluation of uncertain processes. IEEE transactions on automatic
control, 25(6), 1182-1187. DOI: 10.1109/TAC.1980.1102544
Kandel, A. (1978). Fuzzy statistics and forecast evaluation. IEEE transactions on systems, man, and
cybernetics, 8(5),396-401.
http://pascal-rancis.inist.fr/vibad/index.php?action=getRecordDetail&idt=PASCAL7930008529
Kaufmann, A., & Gupta, M. M. (1985). Introduction to fuzzy arithmetic: Theory and applications. Van Nostrand Reinhold Company.
Klir, G. J. (2013). Architecture of systems problem solving. Springer Science & Business Media.
Klir, G. J., & Folger, T. A. (1998). Fuzzy sets, uncertainty and information. Prentice Hall, Englewood Cliffs
Klir, G. J., & Wierman, M. J. (2013). Uncertainty-based information: elements of generalized information
theory (Vol. 15). Physica.
Marichal, J. L. (2000). An axiomatic approach of the discrete Choquet integral as a tool to aggregate
interacting criteria. IEEE transactions on fuzzy systems, 8(6), 800-807. DOI: 10.1109/91.890347
Marichal, J. L. (2000). On Choquet and Sugeno integrals as aggregation functions. Fuzzy measures and
integrals-theory and applications, 247-272.
Marichal, J. L. (2000). On Sugeno integral as an aggregation function. Fuzzy sets and systems, 114(3), 347-365.
https://doi.org/10.1016/S0165-0114(98)00116-X
Merigo, J. M. (2011). The uncertain probabilistic weighted average and its application in the theory of
expertons. African journal of business management, 5(15), 6092-6102.
Merigó, J. M. (2011). Fuzzy multi-person decision making with fuzzy probabilistic aggregation
operators. International journal of fuzzy systems, 13(3), p163-174.
Merigó, J. M., & Casanovas, M. (2011). The uncertain induced quasi‐arithmetic OWA operator. International
journal of intelligent systems, 26(1), 1-24. https://doi.org/10.1002/int.20444
Merigo, J. M., & Casanovas, M. (2010). Fuzzy generalized hybrid aggregation operators and its application
in fuzzy decision making. International journal of fuzzy systems, 12(1), 15-24.
Merigo, J. M., & Casanovas, M. (2010). The fuzzy generalized OWA operator and its application in
strategic decision making. Cybernetics and systems: an international journal, 41(5), 359-370.
https://doi.org/10.1080/01969722.2010.486223
Merigó, J. M., & Casanovas, M. (2009). Induced aggregation operators in decision making with the
Dempster‐Shafer belief structure. International journal of intelligent systems, 24(8), 934-954.
https://doi.org/10.1002/int.20368
Merigo, J. M., Casanovas, M., & Martínez, L. (2010). Linguistic aggregation operators for linguistic
decision making based on the Dempster-Shafer theory of evidence. International journal of
uncertainty, fuzziness and knowledge-based systems, 18(03), 287-304.
https://doi.org/10.1142/S0218488510006544
Mesiar, R., & Špirková, J. (2006). Weighted means and weighting functions. Kybernetika, 42(2), 151-
160.
Shafer, G. (1976). A mathematical theory of evidence (Vol. 42). Princeton university press.
Sikharulidze, A., & Sirbiladze, G. (2008). Average misbilief criterion on the minimal fuzzy
covering problem. Proceedings of the 9th WSEAS international conference on fuzzy systems (pp. 42-48).
Sirbiladze, G. (2012). Extremal fuzzy dynamic systems: Theory and applications (Vol. 28). Springer
Science & Business Media.
Sirbiladze, G. (2005). Modeling of extremal fuzzy dynamic systems. Part III. Modeling of extremal
and controllable extremal fuzzy processes. International journal of general systems, 34(2), 169-198.
https://doi.org/10.1080/03081070512331325204
Sirbiladze, G., & Gachechiladze, T. (2005). Restored fuzzy measures in expert decision-
making. Information sciences, 169(1-2), 71-95. https://doi.org/10.1016/j.ins.2004.02.010
Sirbiladze, G., Ghvaberidze, B., Latsabidze, T., & Matsaberidze, B. (2009). Using a minimal fuzzy
covering in decision-making problems. Information sciences, 179(12), 2022-2027.
https://doi.org/10.1016/j.ins.2009.02.004
Sirbiladze, G., Sikharulidze, A., & Sirbiladze, N. (2010, February). Generalized weighted fuzzy
expected values in uncertainty environment. Recent advances in artificial intelligence, knowledge
engineering and data bases: proceedings of the 9th WSEAS international conference on artificial intelligence,
knowledge engineering and data bases (AIKED'10) (pp. 54-64).
Sirbiladze, G., & Sikharulidze, A. (2003). Weighted fuzzy averages in fuzzy environment: Part I.
Insufficient expert data and fuzzy averages. International Journal of Uncertainty, Fuzziness and
Knowledge-Based Systems, 11(02), 139-157.
Sirbiladze, G., Sikharulidze, A., Ghvaberidze, B., & Matsaberidze, B. (2011). Fuzzy-probabilistic
aggregations in the discrete covering problem. International journal of general systems, 40(02), 169-196.
https://doi.org/10.1080/03081079.2010.508954
Sirbiladze, G., & Zaporozhets, N. (2003). About two probability representations of fuzzy measures
on a finite set. Journal of fuzzy mathematics, 11(3), 549-566.
Sugeno, M. (1974). Theory of fuzzy integrals and its applications (Doctoral Thesis, Tokyo Institute of
Technology). (In Japanese). https://ci.nii.ac.jp/naid/10017209011/
Torra, V. (1997). The weighted OWA operator. International journal of intelligent systems, 12(2), 153-
166. https://doi.org/10.1002/(SICI)1098-111X(199702)12:2<153::AID-INT3>3.0.CO;2-P
Torra, V., & Narukawa, Y. (2007). Modeling decisions: information fusion and aggregation operators.
Springer Science & Business Media.
Yager, R. R. (2009). Weighted maximum entropy OWA aggregation with applications to decision
making under risk. IEEE transactions on systems, man, and cybernetics-part a: systems and humans, 39(3),
555-564. DOI: 10.1109/TSMCA.2009.2014535
Yager, R. R. (2009). On the dispersion measure of OWA operators. Information sciences, 179(22), 3908-
3919. https://doi.org/10.1016/j.ins.2009.07.015
Yager, R. R. (2007). Aggregation of ordinal information. Fuzzy optimization and decision making, 6(3),
199-219. https://doi.org/10.1007/s10700-007-9008-8
Yager, R. R. (2004). Generalized OWA aggregation operators. Fuzzy optimization and decision
making, 3(1), 93-107. https://doi.org/10.1023/B:FODM.0000013074.68765.97
Yager, R. R. (2002). On the evaluation of uncertain courses of action. Fuzzy optimization and decision
making, 1(1), 13-41. https://doi.org/10.1023/A:1013715523644
New View of Fuzzy Aggregations. Part I: General Information Structure for Decision-Making Models
Yager, R. R. (2002). Heavy OWA operators. Fuzzy optimization and decision making, 1(4), 379-397.
https://doi.org/10.1023/A:1020959313432
Yager, R. R. (2002). On the cardinality index and attitudinal character of fuzzy measures. International journal
of general systems, 31(3), 303-329. https://doi.org/10.1080/03081070290018047
Yager, R. R. (2000). On the entropy of fuzzy measures. IEEE transactions on fuzzy systems, 8(4), 453-461.
DOI: 10.1109/91.868951
Yager, R. R. (1999). A class of fuzzy measures generated from a Dempster–Shafer belief
structure. International journal of intelligent systems, 14(12), 1239-1247. https://doi.org/10.1002/(SICI)1098-
111X(199912)14:12<1239::AID-INT5>3.0.CO;2-G
Yager, R. R. (1993). Families of OWA operators. Fuzzy sets and systems, 59(2), 125-148.
https://doi.org/10.1016/0165-0114(93)90194-M
Yager, R. R. (1988). On ordered weighted averaging aggregation operators in multicriteria
decisionmaking. IEEE transactions on systems, man, and cybernetics, 18(1), 183-190. DOI: 10.1109/21.87068
Yager, R., Fedrizzi, M., & Kacprzyk, J. (1994). Advances in the Dempster-Shafer theory of evidence. New York:
John Wiley & Sons.
Yager, R. R., & Kacprzyk, J. (Eds.). (2012). The ordered weighted averaging operators: theory and applications.
Springer Science & Business Media.
Yager, R. R., Kacprzyk, J., & Beliakov, G. (Eds.). (2011). Recent developments in the ordered weighted averaging
operators: theory and practice (Vol. 265). Springer. DOI: 10.1007/978-3-642-17910-5
Wang, Z., & Klir, G. J. (2009). Generalized measure theory. IFSR international series of systems science and
engineering 25, 1st edition. Springer. DOI: 10.1007/978-0-387-76852-6
Xu, Z., & Da, Q. L. (2003). An overview of operators for aggregating information. International journal of
intelligent systems, 18(9), 953-969. https://doi.org/10.1002/int.10127