In fuzzy decision-making processes based on linguistic information, operations on discrete fuzzy numbers are commonly performed. Aggregation and defuzzification are among the most frequently used of these operations. Many aggregation and defuzzification operators produce results independent of the decision-maker's strategy. The Weighted Average Based on Levels (WABL) approach, by contrast, can take into account the level weights and the decision-maker's "optimism" strategy. This gives the WABL operator flexibility: through machine learning, it can be trained toward the decision-maker's strategy, producing more satisfactory results. However, determining the WABL value requires calculating certain integrals. In this study, the concept of WABL for discrete trapezoidal fuzzy numbers is investigated, and analytical formulas are proven that facilitate the calculation of the WABL value for these fuzzy numbers. Trapezoidal fuzzy numbers and their special form, triangular fuzzy numbers, are the most commonly used fuzzy number types in fuzzy modeling, which is why such numbers are studied here. Computational examples illustrating the theoretical results are provided.
This document summarizes a project on recognizing handwritten digits using machine learning classifiers. The researchers used the MNIST dataset and preprocessed the images before extracting features. They then applied Naive Bayes and Logistic Regression classifiers and evaluated their performance based on accuracy and confusion matrices. Logistic Regression significantly outperformed Naive Bayes. Regularization was also investigated for Logistic Regression, with cross-validation used to select the optimal regularization parameter.
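The comparison described can be sketched in a few lines of scikit-learn. As an assumption for self-containment, the small 8x8 digits set stands in for MNIST, and the classifier settings are illustrative rather than the project's actual configuration:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# The 8x8 sklearn digits set stands in for MNIST here.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Naive Bayes vs. Logistic Regression, scored by accuracy and confusion matrix.
nb_acc = accuracy_score(y_te, GaussianNB().fit(X_tr, y_tr).predict(X_te))
lr = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
lr_acc = accuracy_score(y_te, lr.predict(X_te))
cm = confusion_matrix(y_te, lr.predict(X_te))
```

On this stand-in data, too, logistic regression typically beats Gaussian Naive Bayes by a wide margin; cross-validating over the `C` regularization parameter would mirror the project's regularization study.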
PREDICTIVE EVALUATION OF THE STOCK PORTFOLIO PERFORMANCE USING FUZZY CMEANS A... (ijfls)
The aim of this paper is to investigate the trend of the return of a portfolio formed either randomly or by some specific technique. The approach uses two fuzzy techniques: the fuzzy c-means (FCM) algorithm and the fuzzy transform, where the rules used in the fuzzy transform arise from the application of the FCM algorithm. The results show that the proposed methodology is able to predict the trend of the return of a stock portfolio, as well as the tendency of the market index. Real financial-market data from 2004 to 2007 are used.
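The FCM algorithm at the core of this approach alternates a weighted center update with a distance-based membership update. A plain-NumPy sketch of that loop (the fuzzifier m, iteration count, and data are illustrative; the paper's financial pipeline adds the fuzzy-transform step on top):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain-NumPy sketch of the fuzzy c-means update loop.

    Returns (centers, U) where U[i, k] is the membership of X[i]
    in cluster k; m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m                                    # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                    # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated blobs; memberships should be near-crisp here.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(10, 0.5, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
```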
The purpose of this research is to develop the Multi-Criteria Group Decision Making (MCGDM) decision model into Interval Value Fuzzy Multi-Criteria Group Decision Making (IV-FMCGDM); the specific purpose is to construct an Adaptive Interval Value Fuzzy Analytic Hierarchy Process (AIV-FAHP) decision-making model that uses Triangular Fuzzy Numbers (TFN), with group decisions aggregated using Interval Value Geometric Means Aggregation (IV-GMA). The novelty of this research lies in studying the concept of group decision making by improving the middle point of the Interval Value Triangular Fuzzy Number (IV-TFN), which provides more accurate modeling, better rating performance, and more effective linguistic representation. This research produced a new decision-making model and algorithm based on AIV-FAHP, used to measure the quality of e-learning.
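The geometric-mean aggregation step underlying IV-GMA can be sketched for plain TFNs; the interval-valued variant in the paper extends this to pairs of TFNs, which is not shown here:

```python
import numpy as np

def aggregate_tfn_geometric(tfns):
    """Aggregate expert judgments given as triangular fuzzy numbers (l, m, u)
    with the component-wise geometric mean, a common group-aggregation step
    in fuzzy AHP."""
    arr = np.asarray(tfns, dtype=float)          # shape (n_experts, 3)
    return tuple(np.prod(arr, axis=0) ** (1.0 / len(arr)))

# Two experts' judgments on one pairwise comparison:
agg = aggregate_tfn_geometric([(1, 2, 3), (2, 3, 4)])
```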
LATTICE-CELL: HYBRID APPROACH FOR TEXT CATEGORIZATION (csandit)
The document proposes a new text categorization framework called LATTICE-CELL that is based on concept lattices and cellular automata. It models concept structures using a Cellular Automaton for Symbolic Induction (CASI) in order to reduce the time complexity of categorization caused by concept lattices. The framework consists of a preprocessing module that creates a vector representation of documents and a categorization module that generates the categorization model by representing the concept lattice structure as a cellular lattice. Experiments show the approach improves performance while reducing categorization time compared to algorithms such as Naive Bayes and k-nearest neighbors.
Sparse representation based classification of MR images of brain for alzheime... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
This presentation guides you through Linear Discriminant Analysis (LDA): an overview, the assumptions of LDA, and preparing the data for LDA. For more topics, stay tuned with Learnbay.
An Interval Type-2 Fuzzy Approach for Process Plan Selection (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
TYPE-2 FUZZY LINEAR PROGRAMMING PROBLEMS WITH PERFECTLY NORMAL INTERVAL TYPE-... (ijceronline)
In this paper, the Perfectly Normal Interval Type-2 Fuzzy Linear Programming (PnIT2FLP) model is considered. This model is reduced to a crisp linear programming model; the transformation is performed by a proposed ranking method. Based on the proposed fuzzy ranking method and arithmetic operations, the solution of the PnIT2FLP model is obtained from the solutions of the crisp linear programming model with the help of MATLAB. Finally, the method is illustrated by numerical examples.
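The ranking-based reduction itself is specific to the paper, but the final step, solving the resulting crisp LP, is routine. A sketch with SciPy standing in for MATLAB, on a hypothetical crisp LP (the coefficients below are illustrative, not taken from the paper's examples):

```python
from scipy.optimize import linprog

# Hypothetical crisp LP obtained after a ranking-based reduction:
# maximize x1 + 2*x2  subject to  x1 + x2 <= 4,  x1 <= 2,  x1, x2 >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-1, -2],
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 2],
              bounds=[(0, None)] * 2)
best = -res.fun  # optimal value of the maximization
```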
This document discusses fuzzy logical databases and an efficient algorithm for evaluating fuzzy equi-joins. It begins with an introduction to fuzzy concepts in databases, including representing imprecise data using fuzzy sets and membership functions. It then defines a new measure for fuzzy equality that is used to define a fuzzy equi-join. The document proposes a sort-merge join algorithm that sorts relations based on a partial order of intervals to efficiently evaluate the fuzzy equi-join in two phases: sorting and joining. Experimental results are said to show a significant improvement in efficiency when using this algorithm.
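The ingredients of such a join can be sketched directly. Here the fuzzy-equality measure is the possibility measure sup_x min(μ_A(x), μ_B(x)), which is only one common choice (the paper defines its own measure), and a naive nested-loop join stands in for the paper's two-phase sort-merge algorithm:

```python
def fuzzy_eq(a, b):
    """Degree of equality of two discrete fuzzy sets ({value: membership}),
    here the possibility measure sup_x min(mu_A(x), mu_B(x))."""
    return max((min(m, b[x]) for x, m in a.items() if x in b), default=0.0)

def fuzzy_equijoin(r, s, key, threshold=0.5):
    """Naive nested-loop fuzzy equi-join; keeps pairs whose fuzzy key values
    match to at least `threshold`. The paper's sort-merge algorithm avoids
    this quadratic scan by sorting on a partial order of intervals."""
    return [(t1, t2, fuzzy_eq(t1[key], t2[key]))
            for t1 in r for t2 in s
            if fuzzy_eq(t1[key], t2[key]) >= threshold]

r = [{"name": "tom", "age": {30: 1.0, 31: 0.5}}]
s = [{"name": "amy", "age": {31: 1.0}}, {"name": "bob", "age": {40: 1.0}}]
pairs = fuzzy_equijoin(r, s, "age")
```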
MIXED 0−1 GOAL PROGRAMMING APPROACH TO INTERVAL-VALUED BILEVEL PROGRAMMING PR... (cscpconf)
This document presents a mixed 0-1 goal programming approach to solve interval-valued fractional bilevel programming problems using a bio-inspired computational algorithm. It formulates the problem using goal programming to minimize regret intervals for target intervals of achieving goals. A genetic algorithm is used to determine target intervals and optimal decisions by distributing decision powers hierarchically. It presents the problem formulation, design of the genetic algorithm using fitter codon selection and two-point crossover, and formulation of interval-valued goals by determining best and worst solutions for objectives of decision makers at different levels using the genetic algorithm.
A step-by-step, complete guide to the Logistic Regression classifier, covering its decision/activation function, objective function, and objective-function optimization procedures.
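The three pieces that guide names, activation, objective, and its optimization, fit in a short from-scratch sketch (batch gradient descent on the cross-entropy objective; learning rate and epoch count are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # activation function

def train_logreg(X, y, lr=0.1, epochs=5000):
    """Binary logistic regression by batch gradient descent on the
    mean cross-entropy objective; returns (weights, bias)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)            # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)   # gradient of the objective w.r.t. w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return (sigmoid(X @ w + b) >= 0.5).astype(int)  # decision function

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w, b = train_logreg(X, y)
preds = predict(X, w, b)
```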
Concepts in order statistics and Bayesian estimation (Alexander Decker)
This academic article discusses concepts in order statistics and Bayesian estimation. It provides definitions and formulas related to order statistics, including probability density functions for order statistics. It also defines survival functions, life probability functions, life probability density functions, and failure rate functions. Additionally, it covers concepts in Bayesian statistics such as loss functions, risk functions, and prior distribution functions.
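As a concrete instance of the order-statistic densities mentioned, the pdf of the k-th order statistic from an i.i.d. sample of size n with cdf F and pdf f takes the standard form:

```latex
f_{X_{(k)}}(x) \;=\; \frac{n!}{(k-1)!\,(n-k)!}\,
  \bigl[F(x)\bigr]^{k-1}\bigl[1-F(x)\bigr]^{n-k} f(x)
```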
Bayesian analysis of shape parameter of Lomax distribution using different lo... (Premier Publishers)
The Lomax distribution, also known as the Pareto distribution of the second kind or the Pearson Type VI distribution, has been used in the analysis of income data and business failure data. It may describe the lifetime of a decreasing-failure-rate component as a heavy-tailed alternative to the exponential distribution. In this paper we consider the estimation of the parameter of the Lomax distribution. The Bayes estimator is obtained using Jeffery's prior and an extension of Jeffery's prior under the squared error loss function, Al-Bayyati's loss function and the precautionary loss function. Maximum likelihood estimation is also discussed. These methods are compared by mean square error through a simulation study with varying sample sizes. The study aims to find a suitable estimator of the parameter of the distribution. Finally, we analyze one data set for illustration.
PREDICTING CLASS-IMBALANCED BUSINESS RISK USING RESAMPLING, REGULARIZATION, A... (IJMIT JOURNAL)
We aim at developing and improving imbalanced business risk modeling by jointly using proper evaluation criteria, resampling, cross-validation, classifier regularization, and ensembling techniques. The Area Under the Receiver Operating Characteristic Curve (AUC of ROC) is used for model comparison, based on 10-fold cross-validation. Two undersampling strategies, random undersampling (RUS) and cluster centroid undersampling (CCUS), as well as two oversampling methods, random oversampling (ROS) and the Synthetic Minority Oversampling Technique (SMOTE), are applied. Three highly interpretable classifiers are implemented: logistic regression without regularization (LR), L1-regularized LR (L1LR), and decision tree (DT). Two ensembling techniques, Bagging and Boosting, are applied to the DT classifier for further model improvement. The results show that Boosting on DT using the oversampled data containing 50% positives via SMOTE is the optimal model, achieving AUC, recall, and F1 scores of 0.8633, 0.9260, and 0.8907, respectively.
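The winning pipeline shape, oversample the minority class to 50% positives, then boost trees, and score by AUC, can be sketched with scikit-learn. For self-containment, synthetic data stands in for the business-risk dataset, plain random oversampling stands in for SMOTE (the `SMOTE` class from the imbalanced-learn package would slot in at that step), and gradient boosting stands in for the paper's specific boosted DT:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data standing in for the business-risk dataset.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling of the minority class up to 50% positives.
rng = np.random.default_rng(0)
pos, neg = np.flatnonzero(y_tr == 1), np.flatnonzero(y_tr == 0)
extra = rng.choice(pos, size=len(neg) - len(pos), replace=True)
idx = np.concatenate([neg, pos, extra])

# Boosted trees on the balanced data, scored by AUC on the held-out split.
model = GradientBoostingClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```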
A NEW PERSPECTIVE OF PARAMODULATION COMPLEXITY BY SOLVING 100 SLIDING BLOCK P... (ijaia)
A sliding puzzle is a combination puzzle in which a player slides pieces along specific routes on a board to reach a certain end configuration. In this paper, we propose a novel measurement of the complexity of 100 sliding puzzles using paramodulation, an inference method of automated reasoning. It turns out that by counting the number of clauses yielded by paramodulation, we can evaluate the difficulty of each puzzle. In the experiment, we generated 100 8-puzzles that passed a solvability check based on counting inversions. By doing this, we can distinguish the complexity of 8-puzzles by the number of clauses generated with paramodulation. For example, board [2,3,6,1,7,8,5,4, hole] is the easiest with score 3008 and board [6,5,8,7,4,3,2,1, hole] is the most difficult with score 48653. Besides, we succeeded in observing several layers of complexity (the number of clauses generated) across the 100 puzzles. We conclude that the proposed method provides a new perspective on paramodulation complexity with respect to sliding block puzzles.
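The inversion-counting solvability check mentioned above is a standard parity argument: for a puzzle of odd board width such as the 3x3 8-puzzle, a configuration is solvable if and only if the number of inversions among its tiles is even. A minimal sketch:

```python
def is_solvable_8puzzle(tiles):
    """Solvability check for the 3x3 sliding puzzle by counting inversions:
    for odd board width, a board is solvable iff the number of inversions
    among the eight tiles (hole omitted) is even."""
    inv = sum(1
              for i in range(len(tiles))
              for j in range(i + 1, len(tiles))
              if tiles[i] > tiles[j])
    return inv % 2 == 0

# The paper's easiest and hardest boards (hole omitted) both pass the check.
easy = [2, 3, 6, 1, 7, 8, 5, 4]
hard = [6, 5, 8, 7, 4, 3, 2, 1]
```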
The document discusses C4.5 algorithm for building univariate decision trees and methods for building multivariate decision trees. C4.5 uses entropy, gain, and pruning to build trees that classify instances based on one attribute per node. Multivariate trees can classify using linear combinations of attributes at nodes to better handle correlated attributes. Methods like absolute error correction and thermal perceptron are presented for training linear machines to construct multivariate trees. Examples of trees generated by both approaches are shown.
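The entropy and gain calculations at the heart of C4.5's univariate splits can be sketched briefly (C4.5 additionally normalizes the gain by the split's own entropy to get the gain ratio, which is not shown here):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Gain of a candidate split: parent entropy minus the size-weighted
    entropy of the child groups."""
    n = len(labels)
    return entropy(labels) - sum(len(g) / n * entropy(g) for g in groups)

# A perfectly separating split on a balanced two-class node gains 1 bit.
g = information_gain(list("yyyynnnn"), [list("yyyy"), list("nnnn")])
```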
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
INTERVAL TYPE-2 INTUITIONISTIC FUZZY LOGIC SYSTEM FOR TIME SERIES AND IDENTIF... (ijfls)
This paper proposes sliding-mode-control-based learning of an interval type-2 intuitionistic fuzzy logic system for time series and identification problems. Until now, derivative-based algorithms such as gradient descent back propagation, the extended Kalman filter, the decoupled extended Kalman filter and a hybrid of the decoupled extended Kalman filter and gradient descent have been utilized for optimizing the parameters of interval type-2 intuitionistic fuzzy logic systems. The proposed model is based on a Takagi-Sugeno-Kang inference system. The model is evaluated using both real-world and artificially generated datasets. Analysis of the results reveals that the proposed system, trained with the (derivative-free) sliding mode control learning algorithm, outperforms some existing models in terms of test root mean squared error while competing favourably with other models in the literature. Moreover, compared to the derivative-based models, the proposed model may be a good choice for real-time applications where running time is paramount.
CATEGORY TREES – CLASSIFIERS THAT BRANCH ON CATEGORY (ijaia)
This paper presents a batch classifier that splits a dataset into tree branches depending on the category type. It improves on the earlier version and fixes a mistake in the earlier paper. Two important changes have been made. The first is to represent each category with a separate classifier. Each classifier then classifies its own subset of data rows, using batch input values to create the centroid that also represents the category itself. If a classifier contains data from more than one category, however, it needs to create new classifiers for the incorrect data. The second change, therefore, is to allow the classifier to branch to new layers when there is a split in the data, creating new classifiers there for the data rows that are incorrectly classified. Each layer can therefore branch like a tree - not on distinguishing features, but on distinguishing categories. The paper then suggests a further innovation: representing some data columns with fixed value ranges, or bands. When considering features, it is shown that some of the data can be classified directly through fixed value ranges, while the rest must be classified using a classifier technique; this idea allows the paper to discuss a biological analogy with neurons and neuron links. Tests show that the method can successfully classify a diverse set of benchmark datasets, performing better than the state of the art.
Sparse Observability using LP Presolve and LTDL Factorization in IMPL (IMPL-S... (Alkis Vazacopoulos)
Presented in this short document is a description of our technology, which we call "Sparse Observability". Observability is the estimatability metric (Bagajewicz, 2010) used to structurally determine whether an unmeasured variable or regressed parameter is uniquely solvable (observable) or otherwise unsolvable (unobservable) in data reconciliation and regression (DRR) applications. Ultimately, our purpose in using efficient sparse matrix techniques is to solve large industrial DRR flowsheets quickly and accurately.
Most other implementations of observability calculation use dense linear algebra such as reduced row echelon form (RREF), Gauss-Jordan decomposition (Crowe et al., 1983; Madron, 1992), QR factorization, which can now be considered semi-sparse (Swartz, 1989; Sanchez and Romagnoli, 1996), Schur complements, Cholesky factorization (Kelly, 1998a) and singular value decomposition (SVD) (Kelly, 1999). A sparse LU decomposition with complete pivoting from Albuquerque and Biegler (1996) was used for dynamic data reconciliation observability computation, but it is uncertain whether complete pivoting causes extreme "fill-in" of the lower and upper triangular matrices, essentially making them near-dense. There is another sparse observability method, using an LP sub-solver, found in Kelly and Zyngier (2008), but it requires solving as many LP sub-problems as there are unmeasured variables, which can be considered somewhat inefficient.
IMPL's sparse observability technique uses the variable classification and nomenclature found in Kelly (1998b): if we partition or separate the unmeasured variables into independent (B12) and dependent (B34) sub-matrices, then all dependent unmeasured variables are by definition unobservable. If any independent unmeasured variable is a (linear) function of any dependent variable, then this independent variable is of course also unobservable, because it depends on another unobservable variable.
Logistic regression for ordered dependent variable with more than 2 levels (Arup Guha)
This document discusses multinomial logistic regression models. Multinomial logistic regression can handle dependent variables with more than two categories that may be ordinal (ordered categories) or nominal (unordered categories). The document focuses on proportional odds cumulative logit models, which model ordinal dependent variables by considering the natural ordering of categories. It provides an example of using SAS code to fit a proportional odds model to model the impact of radiation exposure on human health.
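The proportional odds cumulative logit model described above can be written, for an ordinal response Y with categories 1, ..., J, as below; the sign convention on the slope term varies by software, so the form here is one common parameterization:

```latex
\log\frac{P(Y \le j)}{1 - P(Y \le j)} \;=\; \alpha_j - \beta^{\top} x,
\qquad j = 1, \dots, J-1
```

The "proportional odds" assumption is visible in the formula: the intercepts α_j vary by category boundary, but a single slope vector β is shared across all of them.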
This document discusses the team's approach to solving the Higgs Boson Machine Learning Challenge on Kaggle. It first provides background on the particle physics problem and the goal of classifying events as signal or background. It then describes the team's data preprocessing steps, including handling missing values, data normalization, and feature selection/derivation. Finally, it discusses the machine learning techniques tested, including Random Forest, Gradient Boosting, Neural Networks, and XGBoost classifiers. The team aimed to predict event weights to enable both classification and ranking of test events. Random Forest achieved an initial private score of 2.90576 but struggled with memory usage, leading the team to explore other techniques.
1. The researcher wants to examine whether three dimensions of sports team cohesion (correct task knowledge, ability evaluation, and emotional awareness) can predict competition ranking using discriminant analysis.
2. Competition ranking is the categorical dependent variable.
3. Discriminant analysis will be used to find the dimensions that discriminate between competition rankings and develop classification functions to predict group membership.
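The analysis plan above maps directly onto scikit-learn's LDA. As an assumption for self-containment, the iris data stands in for the sports-cohesion data: its numeric predictors play the role of the three cohesion dimensions, and species plays the role of competition ranking:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Iris as a stand-in: predictors ~ cohesion dimensions, classes ~ rankings.
X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()   # classification performance
coefs = lda.fit(X, y).coef_                     # discriminant weights per class
```

The fitted coefficients indicate which predictors discriminate between the groups, which is the substantive question in step 3.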
This document proposes using truncated non-negative matrix factorization (NMF) with sparseness constraints for privacy-preserving data perturbation. NMF is used to distort individual data values while preserving statistical distributions. Experimental results on breast cancer and ionosphere datasets show that the method effectively conceals sensitive information while maintaining data mining performance after distortion, as measured by a k-nearest neighbors classifier's accuracy. The degree of data distortion and privacy can be controlled by varying the NMF rank, sparseness constraint, and truncation threshold.
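The perturb-then-mine evaluation described can be sketched with scikit-learn's NMF on the breast cancer data (one of the two datasets named). The sparseness constraint and truncation threshold from the paper are omitted here; only the low-rank reconstruction that distorts individual values is shown:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import NMF
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)   # non-negative features

# Low-rank NMF reconstruction perturbs individual values while preserving
# the gross statistical structure; rank controls the distortion degree.
nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
X_pert = nmf.fit_transform(X) @ nmf.components_

# Utility check: does a kNN classifier still work on the distorted data?
knn = KNeighborsClassifier()
acc_orig = cross_val_score(knn, X, y, cv=5).mean()
acc_pert = cross_val_score(knn, X_pert, y, cv=5).mean()
```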
An improvised similarity measure for generalized fuzzy numbers (journalBEEI)
A similarity measure between two fuzzy sets is an important tool for comparing various characteristics of the fuzzy sets. It is preferred over distance methods, since the defuzzification involved in obtaining the distance between fuzzy sets incurs a loss of information. Many similarity measures have been introduced, but most of them are not capable of discriminating certain types of fuzzy numbers. In this paper, an improvised similarity measure for generalized fuzzy numbers incorporating several essential features is proposed. The features under consideration are geometric mean averaging, the Hausdorff distance, the distance between elements, the distance between centers of gravity, and the Jaccard index. The new similarity measure is validated using some benchmark sample sets. The proposed similarity measure is found to be consistent with other existing methods, with the advantage of being able to solve some discrimination problems that other methods cannot. An analysis of the advantages of the improvised similarity measure is presented and discussed. The proposed similarity measure can be incorporated into decision-making procedures in a fuzzy environment for ranking purposes.
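One of the listed ingredients, the Jaccard index, is easy to sketch for fuzzy numbers sampled on a common grid; the paper's improvised measure combines this with the other features (geometric mean averaging, Hausdorff distance, and so on), which are not shown here:

```python
import numpy as np

def jaccard_similarity(mu_a, mu_b):
    """Jaccard index of two fuzzy sets sampled on a common grid:
    sum of min-memberships over sum of max-memberships."""
    return np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum()

x = np.linspace(0, 4, 401)

def tri(a, b, c):
    """Membership of the triangular fuzzy number (a, b, c) on the grid x."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

s_same = jaccard_similarity(tri(1, 2, 3), tri(1, 2, 3))    # identical -> 1
s_shift = jaccard_similarity(tri(1, 2, 3), tri(2, 3, 4))   # overlapping -> (0, 1)
```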
ANALYTICAL FORMULATIONS FOR THE LEVEL BASED WEIGHTED AVERAGE VALUE OF DISCRET... (ijsc)
A Robust Method Based On LOVO Functions For Solving Least Squares Problems (Dawn Cook)
The document presents a new robust method for solving least squares problems based on Lower Order-Value Optimization (LOVO) functions. The method combines a Levenberg-Marquardt algorithm adapted for LOVO problems with a voting schema to estimate the number of possible outliers without requiring it as a parameter. Numerical results show the algorithm is able to detect and ignore outliers to find better model fits to data compared to other robust algorithms.
Iterative Determinant Method for Solving Eigenvalue Problems (ijceronline)
Multimodal Biometrics Recognition by Dimensionality Diminution Method (IJERA Editor)
Multimodal biometric systems utilize two or more biometric modalities, e.g., face, ear, fingerprint, signature, or palmprint, to improve the recognition accuracy of conventional unimodal methods. We propose a new dimensionality reduction method called Dimension Diminish Projection (DDP) in this paper. DDP can not only preserve local information by capturing the intra-modal geometry, but also effectively extract between-class relevant structures for classification. Experimental results show that our proposed method performs better than other algorithms, including PCA, LDA and MFA.
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE... (gerogepatton)
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper, a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method for solving the bi-objective quadratic programming problem. An important contribution of the proposed bi-objective quadratic programming model is that changing the weighting values yields different efficient support vectors. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions.
This document discusses fuzzy logical databases and an efficient algorithm for evaluating fuzzy equi-joins. It begins with an introduction to fuzzy concepts in databases, including representing imprecise data using fuzzy sets and membership functions. It then defines a new measure for fuzzy equality that is used to define a fuzzy equi-join. The document proposes a sort-merge join algorithm that sorts relations based on a partial order of intervals to efficiently evaluate the fuzzy equi-join in two phases: sorting and joining. Experimental results are said to show a significant improvement in efficiency when using this algorithm.
MIXED 0−1 GOAL PROGRAMMING APPROACH TO INTERVAL-VALUED BILEVEL PROGRAMMING PR...cscpconf
This document presents a mixed 0-1 goal programming approach to solve interval-valued fractional bilevel programming problems using a bio-inspired computational algorithm. It formulates the problem using goal programming to minimize regret intervals for target intervals of achieving goals. A genetic algorithm is used to determine target intervals and optimal decisions by distributing decision powers hierarchically. It presents the problem formulation, design of the genetic algorithm using fitter codon selection and two-point crossover, and formulation of interval-valued goals by determining best and worst solutions for objectives of decision makers at different levels using the genetic algorithm.
A step-by-step complete guide for Logistic Regression Classifier especially mentioning its Decision/Activation Function, Objective Function and Objective Function Optimization procedures.
Concepts in order statistics and bayesian estimationAlexander Decker
This academic article discusses concepts in order statistics and Bayesian estimation. It provides definitions and formulas related to order statistics, including probability density functions for order statistics. It also defines survival functions, life probability functions, life probability density functions, and failure rate functions. Additionally, it covers concepts in Bayesian statistics such as loss functions, risk functions, and prior distribution functions.
Bayesian analysis of shape parameter of Lomax distribution using different lo...Premier Publishers
The Lomax distribution, also known as the Pareto distribution of the second kind or the Pearson Type VI distribution, has been used in the analysis of income data and business-failure data. It may describe the lifetime of a decreasing-failure-rate component as a heavy-tailed alternative to the exponential distribution. In this paper we consider estimation of the parameter of the Lomax distribution. Bayes' estimators are obtained under Jeffreys' prior and an extension of Jeffreys' prior, using the squared error loss function, Al-Bayyati's loss function, and the precautionary loss function. Maximum likelihood estimation is also discussed. These methods are compared by mean square error through a simulation study with varying sample sizes. The study aims to identify a suitable estimator of the parameter of the distribution. Finally, we analyze one data set for illustration.
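For reference, the standard form of the Lomax density whose shape parameter the paper estimates is:

```latex
% Lomax (Pareto type II) density: shape \alpha > 0, scale \lambda > 0
f(x;\alpha,\lambda) \;=\; \frac{\alpha}{\lambda}\Bigl(1+\frac{x}{\lambda}\Bigr)^{-(\alpha+1)}, \qquad x \ge 0
```

Its tail decays polynomially rather than exponentially, which is what makes it a heavy-tailed alternative to the exponential distribution.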
PREDICTING CLASS-IMBALANCED BUSINESS RISK USING RESAMPLING, REGULARIZATION, A...IJMIT JOURNAL
We aim to develop and improve imbalanced business risk modeling by jointly using proper
evaluation criteria, resampling, cross-validation, classifier regularization, and ensembling techniques.
Area Under the Receiver Operating Characteristic Curve (AUC of ROC) is used for model comparison
based on 10-fold cross validation. Two undersampling strategies including random undersampling (RUS)
and cluster centroid undersampling (CCUS), as well as two oversampling methods including random
oversampling (ROS) and Synthetic Minority Oversampling Technique (SMOTE), are applied. Three highly
interpretable classifiers, including logistic regression without regularization (LR), L1-regularized LR
(L1LR), and decision tree (DT) are implemented. Two ensembling techniques, including Bagging and
Boosting, are applied to the DT classifier for further model improvement. The results show that Boosting on DT using the oversampled data containing 50% positives via SMOTE is the optimal model; it achieves AUC, recall, and F1 scores of 0.8633, 0.9260, and 0.8907, respectively.
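For concreteness, the simplest of the four resampling schemes, random oversampling to 50% positives, can be sketched in plain Python (a hedged illustration only; the paper presumably used established library implementations, and the function and parameter names here are invented):

```python
import random

def oversample_to_ratio(X, y, positive=1, ratio=0.5, seed=0):
    # Random oversampling (ROS): duplicate minority rows until the
    # positive class makes up `ratio` of the resampled data.
    rng = random.Random(seed)
    pos = [(xi, yi) for xi, yi in zip(X, y) if yi == positive]
    neg = [(xi, yi) for xi, yi in zip(X, y) if yi != positive]
    # Solve n_pos / (n_pos + n_neg) = ratio for the target positive count
    target_pos = int(round(ratio * len(neg) / (1 - ratio)))
    extra = [rng.choice(pos) for _ in range(max(0, target_pos - len(pos)))]
    resampled = pos + extra + neg
    rng.shuffle(resampled)
    return [xi for xi, _ in resampled], [yi for _, yi in resampled]

# 1 positive vs 9 negatives -> balanced 9 vs 9 after oversampling
X = [[i] for i in range(10)]
y = [1] + [0] * 9
Xr, yr = oversample_to_ratio(X, y)
```

SMOTE differs in that it interpolates synthetic minority points between neighbours rather than duplicating rows.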
A NEW PERSPECTIVE OF PARAMODULATION COMPLEXITY BY SOLVING 100 SLIDING BLOCK P...ijaia
A sliding puzzle is a combination puzzle where a player slides pieces along specific routes on a board to reach a certain end configuration. In this paper, we propose a novel measurement of the complexity of 100 sliding puzzles with paramodulation, an inference method of automated reasoning. It turns out that by counting the number of clauses yielded with paramodulation, we can evaluate the difficulty of each puzzle. In the experiment, we generated 100 8-puzzles that passed a solvability check based on counting inversions. By doing this, we can distinguish the complexity of the 8-puzzles by the number of clauses generated with paramodulation. For example, board [2,3,6,1,7,8,5,4, hole] is the easiest with score 3008 and board [6,5,8,7,4,3,2,1, hole] is the most difficult with score 48653. Besides, we have succeeded in observing several layers of complexity (the number of clauses generated) in the 100 puzzles. We conclude that the proposed method provides a new perspective on paramodulation complexity for sliding block puzzles.
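The solvability check by counting inversions that the generated boards passed can be sketched as follows (for the standard 3x3 goal with the hole in the last cell, a board is solvable iff its inversion count is even; the function names are ours):

```python
def inversions(board):
    # Count pairs (i, j), i < j, where tile i is greater than tile j;
    # the hole (None) is ignored.
    tiles = [t for t in board if t is not None]
    return sum(1 for i in range(len(tiles))
                 for j in range(i + 1, len(tiles))
                 if tiles[i] > tiles[j])

def solvable_8puzzle(board):
    # A 3x3 sliding puzzle is solvable iff its inversion count is even
    # (relative to the goal 1..8 with the hole in the last cell).
    return inversions(board) % 2 == 0
```

Both boards quoted in the abstract pass this check: [2,3,6,1,7,8,5,4, hole] has 10 inversions and [6,5,8,7,4,3,2,1, hole] has 24.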
The document discusses C4.5 algorithm for building univariate decision trees and methods for building multivariate decision trees. C4.5 uses entropy, gain, and pruning to build trees that classify instances based on one attribute per node. Multivariate trees can classify using linear combinations of attributes at nodes to better handle correlated attributes. Methods like absolute error correction and thermal perceptron are presented for training linear machines to construct multivariate trees. Examples of trees generated by both approaches are shown.
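The entropy and gain computations at the heart of C4.5's attribute selection can be sketched in a few lines (C4.5 itself normalizes gain into a gain ratio and adds pruning; this minimal sketch shows plain information gain):

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a class-label multiset, in bits
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    # Gain of splitting on attribute index `attr`: entropy before the
    # split minus the size-weighted entropy of each branch.
    n = len(labels)
    branches = {}
    for row, lab in zip(rows, labels):
        branches.setdefault(row[attr], []).append(lab)
    after = sum(len(b) / n * entropy(b) for b in branches.values())
    return entropy(labels) - after
```

An attribute that splits the classes perfectly gets gain equal to the parent entropy; an uninformative attribute gets gain 0.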
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
INTERVAL TYPE-2 INTUITIONISTIC FUZZY LOGIC SYSTEM FOR TIME SERIES AND IDENTIF...ijfls
This paper proposes sliding mode control-based learning of an interval type-2 intuitionistic fuzzy logic system for time series and identification problems. Until now, derivative-based algorithms such as gradient descent back propagation, the extended Kalman filter, the decoupled extended Kalman filter, and a hybrid of the decoupled extended Kalman filter and gradient descent methods have been utilized for optimizing the parameters of interval type-2 intuitionistic fuzzy logic systems. The proposed model is based on a Takagi-Sugeno-Kang inference system. The model is evaluated using both real-world and artificially generated datasets. Analysis of the results reveals that the proposed interval type-2 intuitionistic fuzzy logic system trained with the (derivative-free) sliding mode control learning algorithm outperforms some existing models in terms of test root mean squared error while competing favourably with other models in the literature. Moreover, the proposed model may be a good choice for real-time applications where running time is paramount, compared to the derivative-based models.
CATEGORY TREES – CLASSIFIERS THAT BRANCH ON CATEGORYijaia
This paper presents a batch classifier that splits a dataset into tree branches depending on the category type. It improves on an earlier version and fixes a mistake in the earlier paper. Two important changes have been made. The first is to represent each category with a separate classifier. Each classifier then classifies its own subset of data rows, using batch input values to create the centroid and also to represent the category itself. If the classifier contains data from more than one category, however, it needs to create new classifiers for the incorrect data. The second change therefore is to allow the classifier to branch to new layers when there is a split in the data, and to create new classifiers there for the data rows that are incorrectly classified. Each layer can therefore branch like a tree, not on distinguishing features, but on distinguishing categories. The paper then suggests a further innovation: representing some data columns with fixed value ranges, or bands. When considering features, it is shown that some of the data can be classified directly through fixed value ranges, while the rest must be classified using a classifier technique; this idea allows the paper to discuss a biological analogy with neurons and neuron links. Tests show that the method can successfully classify a diverse set of benchmark datasets better than the state-of-the-art.
Sparse Observability using LP Presolve and LTDL Factorization in IMPL (IMPL-S...Alkis Vazacopoulos
Presented in this short document is a description of our technology, which we call “Sparse Observability”. Observability is the estimatability metric (Bagajewicz, 2010) used to structurally determine whether an unmeasured variable or regressed parameter is uniquely solvable (observable) or otherwise unsolvable (unobservable) in data reconciliation and regression (DRR) applications. Ultimately, our purpose in using efficient sparse matrix techniques is to solve large industrial DRR flowsheets quickly and accurately.
Most other implementations of observability calculation use dense linear algebra such as reduced row echelon form (RREF), Gauss-Jordan decomposition (Crowe et al., 1983; Madron, 1992), QR factorization, which can now be considered semi-sparse (Swartz, 1989; Sanchez and Romagnoli, 1996), Schur complements, Cholesky factorization (Kelly, 1998a), and singular value decomposition (SVD) (Kelly, 1999). A sparse LU decomposition with complete pivoting from Albuquerque and Biegler (1996) was used for dynamic data reconciliation observability computation, but it is uncertain whether complete pivoting causes extreme “fill-in” of the lower and upper triangular matrices, essentially making them near-dense. There is another sparse observability method, using an LP sub-solver, found in Kelly and Zyngier (2008), but it requires solving as many LP sub-problems as there are unmeasured variables, which can be considered somewhat inefficient.
IMPL’s sparse observability technique uses the variable classification and nomenclature found in Kelly (1998b) given that if we partition or separate the unmeasured variables into independent (B12) and dependent (B34) sub-matrices then all dependent unmeasured variables by definition are unobservable. If any independent unmeasured variable is a (linear) function of any dependent variable then this independent variable is of course also unobservable because it is dependent on another non-observable variable.
Logistic regression for an ordered dependent variable with more than 2 levelsArup Guha
This document discusses multinomial logistic regression models. Multinomial logistic regression can handle dependent variables with more than two categories that may be ordinal (ordered categories) or nominal (unordered categories). The document focuses on proportional odds cumulative logit models, which model ordinal dependent variables by considering the natural ordering of categories. It provides an example of using SAS code to fit a proportional odds model to model the impact of radiation exposure on human health.
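A proportional-odds cumulative logit model keeps one intercept per cutpoint and a single shared slope; the probability calculation can be sketched as follows (illustrative only, not the SAS code the document uses; the parameter values are made up):

```python
import math

def cumulative_probs(x, alphas, beta):
    # Proportional-odds model: logit P(Y <= j | x) = alpha_j - beta * x,
    # with one intercept alpha_j per cutpoint and a single shared slope beta.
    return [1.0 / (1.0 + math.exp(-(a - beta * x))) for a in alphas]

def category_probs(x, alphas, beta):
    # Differencing the cumulative probabilities gives the per-category
    # probabilities P(Y = j | x); they sum to 1 by construction.
    cum = cumulative_probs(x, alphas, beta) + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Invented parameters: two cutpoints -> three ordered categories
probs = category_probs(0.0, [-1.0, 1.0], 0.5)
```

With a positive slope, increasing x shifts probability mass toward the higher ordered categories, which is how the model respects the natural ordering.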
This document discusses the team's approach to solving the Higgs Boson Machine Learning Challenge on Kaggle. It first provides background on the particle physics problem and the goal of classifying events as signal or background. It then describes the team's data preprocessing steps, including handling missing values, data normalization, and feature selection/derivation. Finally, it discusses the machine learning techniques tested, including Random Forest, Gradient Boosting, Neural Networks, and XGBoost classifiers. The team aimed to predict event weights to enable both classification and ranking of test events. Random Forest achieved an initial private score of 2.90576 but struggled with memory usage, leading the team to explore other techniques.
1. The researcher wants to examine whether three dimensions of sports team cohesion (correct task knowledge, ability evaluation, and emotional awareness) can predict competition ranking using discriminant analysis.
2. Competition ranking is the categorical dependent variable.
3. Discriminant analysis will be used to find the dimensions that discriminate between competition rankings and develop classification functions to predict group membership.
This document proposes using truncated non-negative matrix factorization (NMF) with sparseness constraints for privacy-preserving data perturbation. NMF is used to distort individual data values while preserving statistical distributions. Experimental results on breast cancer and ionosphere datasets show that the method effectively conceals sensitive information while maintaining data mining performance after distortion, as measured by a k-nearest neighbors classifier's accuracy. The degree of data distortion and privacy can be controlled by varying the NMF rank, sparseness constraint, and truncation threshold.
An improvised similarity measure for generalized fuzzy numbersjournalBEEI
A similarity measure between two fuzzy sets is an important tool for comparing various characteristics of the fuzzy sets. It is a preferred approach compared to distance methods, as the defuzzification involved in obtaining the distance between fuzzy sets incurs a loss of information. Many similarity measures have been introduced, but most of them are not capable of discriminating certain types of fuzzy numbers. In this paper, an improvised similarity measure for generalized fuzzy numbers that incorporates several essential features is proposed. The features under consideration are geometric mean averaging, the Hausdorff distance, the distance between elements, the distance between centers of gravity, and the Jaccard index. The new similarity measure is validated using some benchmark sample sets. The proposed similarity measure is found to be consistent with other existing methods, with the advantage of being able to solve some discrimination problems that other methods cannot. An analysis of the advantages of the improvised similarity measure is presented and discussed. The proposed similarity measure can be incorporated into fuzzy decision-making procedures for ranking purposes.
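One of the features the improvised measure incorporates, the Jaccard index, reduces for discrete membership vectors to the ratio of summed pointwise minima to summed pointwise maxima (a sketch of that single ingredient, not the full proposed measure):

```python
def jaccard_similarity(mu_a, mu_b):
    # Jaccard index of two fuzzy sets given as membership vectors over the
    # same universe: sum of pointwise minima over sum of pointwise maxima.
    num = sum(min(a, b) for a, b in zip(mu_a, mu_b))
    den = sum(max(a, b) for a, b in zip(mu_a, mu_b))
    return num / den if den else 1.0
```

Identical sets score 1, disjoint supports score 0, and partial overlap falls in between.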
ANALYTICAL FORMULATIONS FOR THE LEVEL BASED WEIGHTED AVERAGE VALUE OF DISCRET...ijsc
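For the trapezoidal fuzzy numbers studied here, the WABL value can be approximated numerically by averaging, over membership levels, a convex combination of each alpha-cut's endpoints (a sketch assuming uniform level weights p(alpha) = 1; the paper's contribution is closed-form analytical formulas that avoid this integration):

```python
def wabl_trapezoidal(a1, a2, a3, a4, optimism=0.5, levels=1000):
    # Numeric WABL of a trapezoidal fuzzy number (a1, a2, a3, a4):
    # for each level alpha, take optimism * right + (1 - optimism) * left
    # over the alpha-cut endpoints, then average over levels (midpoint rule),
    # which corresponds to uniform level weights p(alpha) = 1.
    total = 0.0
    for k in range(levels):
        alpha = (k + 0.5) / levels         # midpoint rule on [0, 1]
        left = a1 + alpha * (a2 - a1)      # left end of the alpha-cut
        right = a4 - alpha * (a4 - a3)     # right end of the alpha-cut
        total += optimism * right + (1.0 - optimism) * left
    return total / levels
```

With uniform level weights the closed form is optimism * (a3 + a4) / 2 + (1 - optimism) * (a1 + a2) / 2, so for the trapezoid (1, 2, 3, 4) a neutral decision maker (optimism 0.5) gets 2.5, and a fully optimistic one gets 3.5.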
A Robust Method Based On LOVO Functions For Solving Least Squares ProblemsDawn Cook
The document presents a new robust method for solving least squares problems based on Lower Order-Value Optimization (LOVO) functions. The method combines a Levenberg-Marquardt algorithm adapted for LOVO problems with a voting schema to estimate the number of possible outliers without requiring it as a parameter. Numerical results show the algorithm is able to detect and ignore outliers to find better model fits to data compared to other robust algorithms.
Iterative Determinant Method for Solving Eigenvalue Problemsijceronline
Multimodal Biometrics Recognition by Dimensionality Diminution MethodIJERA Editor
Multimodal biometric systems utilize two or more biometric modalities, e.g., face, ear, fingerprint, signature, and palmprint, to improve the recognition accuracy of conventional unimodal methods. We propose a new dimensionality reduction method called Dimension Diminish Projection (DDP) in this paper. DDP can not only preserve local information by capturing the intra-modal geometry, but also effectively extract between-class relevant structures for classification. Experimental results show that our proposed method performs better than other algorithms including PCA, LDA, and MFA.
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...gerogepatton
A support vector machine (SVM) learns the decision surface from two different classes of input points; in several applications, some of the input points are misclassified. In this paper a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method for solving the bi-objective quadratic programming problem. An important contribution of the proposed model is that different efficient support vectors are obtained by changing the weighting values. The numerical examples give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution from the generated efficient solutions.
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...ijaia
IRJET- Diabetic Haemorrhage Detection using DWT and Elliptical LBPIRJET Journal
This document presents a method for detecting diabetic hemorrhages in retinal images using Discrete Wavelet Transform (DWT) and Elliptical Local Binary Pattern (ELBP). Retinal images are preprocessed using mean filtering for illumination correction. ELBP is applied to extract texture features, then DWT is used to reduce features. A Support Vector Machine (SVM) classifier is trained on features to classify images as hemorrhage or healthy. Testing on a retinal image database achieved 92.3% accuracy for hemorrhage detection.
Constructing a classification model is important in machine learning for a particular task. A classification process involves assigning objects into predefined groups or classes based on a number of observed attributes related to those objects. The artificial neural network is one of the classification algorithms which can be used in many application areas. This paper investigates the potential of applying the feed-forward neural network architecture to the classification of medical datasets. The migration-based differential evolution algorithm (MBDE) is chosen and applied to the feed-forward neural network to enhance the learning process, and the network learning is validated in terms of convergence rate and classification accuracy. In this paper, the MBDE algorithm with various migration policies is proposed for medical diagnosis classification problems.
MEDICAL DIAGNOSIS CLASSIFICATION USING MIGRATION BASED DIFFERENTIAL EVOLUTION...cscpconf
THE NEW HYBRID COAW METHOD FOR SOLVING MULTI-OBJECTIVE PROBLEMSijfcstjournal
In this article, using the Cuckoo Optimization Algorithm and the simple additive weighting method, the hybrid COAW algorithm is presented to solve multi-objective problems. The cuckoo algorithm is an efficient and structured method for solving nonlinear continuous problems. The Pareto frontiers created by the proposed COAW algorithm are exact and have good dispersion. The method finds Pareto frontiers quickly and identifies their beginning and end points properly. To validate the proposed algorithm, several experimental problems were analyzed; the results indicate the effectiveness of the COAW algorithm for solving multi-objective problems.
KNN and ARL Based Imputation to Estimate Missing Valuesijeei-iaes
Missing data are the absence of data items for a subject; they hide information that may be important. In practice, missing data have been one major factor affecting data quality, so missing value imputation is needed. Methods such as hierarchical clustering and K-means clustering are not robust to missing data and may lose effectiveness even with a few missing values. In this paper, KNN- and ARL-based imputation are introduced to impute missing values, and the accuracy of both algorithms is measured using normalized root mean square error. The results show that ARL is the more accurate and robust method for missing value estimation.
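A minimal KNN imputation can be sketched in plain Python (an illustration of the general idea only; the paper's exact algorithm, distance measure, and the ARL variant are not reproduced here, and all names are ours):

```python
import math

def knn_impute(rows, k=2):
    # Fill each missing entry (None) with the mean of that column over the
    # k nearest donor rows, using Euclidean distance computed only on the
    # columns both rows have observed.
    def dist(r, s):
        shared = [(a, b) for a, b in zip(r, s)
                  if a is not None and b is not None]
        if not shared:
            return float('inf')
        return math.sqrt(sum((a - b) ** 2 for a, b in shared))

    filled = [list(r) for r in rows]
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            if v is None:
                donors = [r for r in rows if r[j] is not None and r is not row]
                donors.sort(key=lambda r: dist(row, r))
                neighbours = donors[:k]
                filled[i][j] = sum(r[j] for r in neighbours) / len(neighbours)
    return filled

# The row [1.0, None] is closest to [1.0, 2.0], so with k=1 the gap becomes 2.0
filled = knn_impute([[1.0, 2.0], [1.0, None], [10.0, 20.0]], k=1)
```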
A combined-conventional-and-differential-evolution-method-for-model-order-red...Cemal Ardil
The document proposes a mixed method for model order reduction of single-input single-output systems. The method combines a conventional technique using Mihailov stability criterion with a differential evolution technique. In the conventional part, the reduced denominator polynomial is derived using Mihailov stability criterion, while the numerator is obtained by matching continued fraction expansions. Then, the denominator polynomial is recalculated using differential evolution optimization to minimize integral squared error between the original and reduced models. The method is demonstrated on a numerical example and shown to produce superior results compared to using only the conventional method.
A parsimonious SVM model selection criterion for classification of real-world ...o_almasi
This paper proposes and optimizes a two-term cost function consisting of a sparseness term and a generalized v-fold cross-validation term using a new adaptive particle swarm optimization (APSO). APSO updates its parameters adaptively based on dynamic feedback from the success rate of each particle's personal best. Since the proposed cost function favors choosing fewer support vectors, the complexity of the SVM models decreases while accuracy remains in an acceptable range. Therefore, testing time decreases, making SVM more applicable to practical applications on real data sets. A comparative study on data sets from the UCI database is performed between the proposed cost function and a conventional cost function to demonstrate the effectiveness of the proposed cost function.
The document describes a Stata package of programs for estimating panel vector autoregression (VAR) models. The package allows for convenient estimation, model selection, inference and other analyses of panel VAR models using generalized method of moments in a Stata environment. The programs address panel VAR specification, estimation, model selection criteria, impulse response analyses, and forecast error variance decomposition. The syntax and outputs of the commands are designed to be similar to Stata's built-in VAR commands for time series data.
The New Hybrid COAW Method for Solving Multi-Objective Problemsijfcstjournal
A study of the Behavior of Floating-Point Errorsijpla
The dangers of programs performing floating-point computations are well known, due to numerical reliability issues resulting from rounding errors arising during the computations. In general, these round-off errors are neglected because they are small. However, they can accumulate and propagate and lead to faulty execution and failures. Typically, in critical embedded systems, such faults may cause dramatic damage (e.g., the failures of the Ariane 5 launch and the Patriot missile mission). The ufp (unit in the first place) and ulp (unit in the last place) functions are used to estimate the maximum value of round-off errors. In this paper, the idea is to study the behavior of round-off errors, check their numerical stability using a set of constraints, and ensure that the computed round-off errors do not become larger when solving constraints on the ufp and ulp values.
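The ufp and ulp functions mentioned here are easy to probe from Python (assuming IEEE-754 binary64 floats and Python 3.9+, which provides math.ulp; the ufp helper below is our own):

```python
import math

def ufp(x):
    # Unit in the first place: the weight of the most significant bit of |x|,
    # i.e. the largest power of two not exceeding |x|.
    return 0.0 if x == 0 else 2.0 ** math.floor(math.log2(abs(x)))

# math.ulp(x) gives the unit in the last place: the gap from x to the
# next representable float. For a normal binary64 value with a 53-bit
# significand the two are related by ulp(x) == ufp(x) * 2**-52.
# ufp(10.0) -> 8.0, and math.ulp(10.0) == 8.0 * 2**-52
```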
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATAijejournal
A well-constructed classification model highly depends on the input feature subsets from a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be worse when dealing with medical datasets. The main aim of feature selection as a pre-processing task is to eliminate these features and select the most effective ones. In the literature, metaheuristic algorithms show successful performance in finding optimal feature subsets. In this paper, two binary metaheuristic algorithms, named the S-shaped binary Sine Cosine Algorithm (SBSCA) and the V-shaped binary Sine Cosine Algorithm (VBSCA), are proposed for feature selection from medical data. In these algorithms, the search space remains continuous, while a binary position vector is generated for each solution by two transfer functions, S-shaped and V-shaped. The proposed algorithms are compared with four recent binary optimization algorithms on five medical datasets from the UCI repository. The experimental results confirm that both bSCA variants enhance classification accuracy on these medical datasets compared to the four other algorithms.
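The two transfer-function families can be sketched as follows (a generic illustration: the literature uses several S- and V-shaped variants, and V-shaped transfers are often used to flip the current bit rather than set it; for simplicity this sketch sets the bit in both cases):

```python
import math
import random

def s_shaped(v):
    # S-shaped transfer: sigmoid of the continuous position component
    return 1.0 / (1.0 + math.exp(-v))

def v_shaped(v):
    # V-shaped transfer: |tanh(v)|, symmetric about zero
    return abs(math.tanh(v))

def binarize(position, transfer, rng):
    # Map a continuous position vector to a binary feature-selection vector:
    # select a feature when a uniform draw falls below the transfer value.
    return [1 if rng.random() < transfer(v) else 0 for v in position]
```

With these definitions s_shaped(0) = 0.5 (a coin flip near the origin) while v_shaped(0) = 0 (no change near the origin), which is exactly the behavioural difference between the SBSCA and VBSCA variants.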
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATAacijjournal
IMPROVING SUPERVISED CLASSIFICATION OF DAILY ACTIVITIES LIVING USING NEW COST...cscpconf
The growing population of elders in society calls for a new approach to caregiving. By inferring what activities elderly people are performing in their houses, it is possible to determine their physical and cognitive capabilities. In this paper we show the potential of important discriminative classifiers, namely soft-margin Support Vector Machines (C-SVM), Conditional Random Fields (CRF), and k-Nearest Neighbors (k-NN), for recognizing activities from sensor patterns in a smart home environment. We also address the class imbalance problem in the activity recognition field, which is known to hinder the learning performance of classifiers. Cost-sensitive learning is attractive under most imbalanced circumstances, but it is difficult to determine precise misclassification costs in practice. We introduce a new criterion for selecting a suitable cost parameter C for the C-SVM method. Through our evaluation on four real-world imbalanced activity datasets, we demonstrate that C-SVM based on our proposed criterion outperforms state-of-the-art discriminative methods in activity recognition.
Similar to ANALYTICAL FORMULATIONS FOR THE LEVEL BASED WEIGHTED AVERAGE VALUE OF DISCRETE TRAPEZOIDAL FUZZY NUMBERS (20)
Road construction is not as easy as it seems; it includes various steps, starting with design and structure, including consideration of traffic volume. The base layer is prepared by bulldozers and levelers, after which the base surface coating is done. To give the road a smooth, flexible surface, asphalt concrete is used. Asphalt requires an aggregate sub-base material layer, and then a base layer, to be put in place first. Asphalt road construction is formulated to support heavy traffic loads and climatic conditions. It is 100% recyclable, saving non-renewable natural resources.
With the advancement of technology, asphalt technology provides assurance of a good drainage system, and with its skid resistance it can be used where safety is necessary, such as outside schools.
The largest use of asphalt is in making asphalt concrete for road surfaces. It is widely used in airports around the world; due to its sturdiness and ability to be repaired quickly, it is widely used for runways dedicated to aircraft landing and take-off. Asphalt is normally stored and transported at about 150 °C (300 °F).
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main objective of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh were selected to conduct the survey on households and showrooms, covering both energy consumers and sellers. The survey uses the data to derive regression equations from which energy efficiency knowledge can be predicted. The data are analyzed and calculated based on five important criteria. The initial target was to find factors that help predict a person's energy efficiency knowledge. The survey finds that energy efficiency awareness among the people of the country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh: low-education households indicate they primarily save electricity for the environment, while high-education households indicate they are motivated by environmental concerns.
Generative AI Use cases applications solutions and implementation.pdf (mahaffeycheryld)
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
A high-Speed Communication System is based on the Design of a Bi-NoC Router, ... (DharmaBanothu)
The Network on Chip (NoC) has emerged as an effective
solution for intercommunication infrastructure within System on
Chip (SoC) designs, overcoming the limitations of traditional
methods that face significant bottlenecks. However, the complexity
of NoC design presents numerous challenges related to
performance metrics such as scalability, latency, power
consumption, and signal integrity. This project addresses the
issues within the router's memory unit and proposes an enhanced
memory structure. To achieve efficient data transfer, FIFO buffers
are implemented in distributed RAM and virtual channels for
FPGA-based NoC. The project introduces advanced FIFO-based
memory units within the NoC router, assessing their performance
in a Bi-directional NoC (Bi-NoC) configuration. The primary
objective is to reduce the router's workload while enhancing the
FIFO internal structure. To further improve data transfer speed,
a Bi-NoC with a self-configurable intercommunication channel is
suggested. Simulation and synthesis results demonstrate
guaranteed throughput, predictable latency, and equitable
network access, showing significant improvement over previous
designs
Build the Next Generation of Apps with the Einstein 1 Platform.
Join Philippe Ozil for a workshop session that will guide you through the details of the Einstein 1 platform, the importance of data for building artificial intelligence applications, and the various tools and technologies Salesforce offers to bring you the full benefits of AI.
Applications of artificial Intelligence in Mechanical Engineering.pdf (Atif Razi)
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Accident detection system project report.pdf (Kamal Acharya)
The rapid growth of technology and infrastructure has made our lives easier. The advent of technology has also increased traffic hazards, and road accidents take place frequently, causing huge loss of life and property because of poor emergency facilities. Many lives could have been saved if emergency services could get accident information and reach the scene in time. Our project provides an optimum solution to this drawback. A piezoelectric sensor can be used as a crash or rollover detector of the vehicle during and after a crash. With signals from a piezoelectric sensor, a severe accident can be recognized. In this project, when a vehicle meets with an accident or rolls over, the piezoelectric sensor immediately detects the signal. Then, with the help of a GSM module and a GPS module, the location is sent to the emergency contact. After confirming the location, the necessary action is taken. If the person meets with a small accident or there is no serious threat to anyone's life, the alert message can be terminated by the driver via a provided switch, in order to avoid wasting the valuable time of the medical rescue team.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (ijaia)
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN)
with a Long Short-Term Memory (LSTM) network. We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Blood finder application project report (1).pdf (Kamal Acharya)
Blood Finder is an emergency app with which a user can search for blood banks as well as registered blood donors around Mumbai. The application also provides an opportunity for its users to become registered donors; for this, a user has to enroll through a donor request from the application itself. If the admin wishes to make a user a registered donor, it can be done after completing some formalities with the organization. A special feature of this application is that the user does not have to register or sign in to search for blood banks and blood donors; this can be done simply by installing the application on a mobile device.
The purpose of this application is to save the user's time when searching for blood of the needed blood group during an emergency.
This is an Android application developed in Java and XML with SQLite database connectivity. It provides most of the basic functionality required of an emergency application. All the details of blood banks and blood donors are stored in the SQLite database.
The application gives the user all the information regarding blood banks and blood donors, such as name, number, address, and blood group, rather than searching for it on different websites and wasting precious time. The application is effective and user friendly.
ANALYTICAL FORMULATIONS FOR THE LEVEL BASED WEIGHTED AVERAGE VALUE OF DISCRETE TRAPEZOIDAL FUZZY NUMBERS
International Journal on Soft Computing (IJSC) Vol.9, No.2/3, August 2018
DOI: 10.5121/ijsc.2018.9301
Resmiye Nasiboglu 1*, Rahila Abdullayeva 2
1 Department of Computer Science, Dokuz Eylul University, Izmir, Turkey
2 Department of Informatics, Sumgait State University, Sumgait, Azerbaijan
ABSTRACT
In fuzzy decision-making processes based on linguistic information, operations on discrete fuzzy numbers are commonly performed. Aggregation and defuzzification are among the most frequently used of these operations. Many aggregation and defuzzification operators produce results independent of the decision maker's strategy. In contrast, the Weighted Average Based on Levels (WABL) approach can take into account the level weights and the decision maker's "optimism" strategy. This gives the WABL operator flexibility: through machine learning, it can be trained toward the decision maker's strategy, producing more satisfactory results for the decision maker. However, determining the WABL value requires calculating certain integrals. In this study, the concept of WABL for discrete trapezoidal fuzzy numbers is investigated, and analytical formulas are proven that facilitate the calculation of the WABL value for such fuzzy numbers. Trapezoidal fuzzy numbers and their special form, triangular fuzzy numbers, are the most commonly used fuzzy number types in fuzzy modeling, so this study focuses on them. Computational examples illustrating the theoretical results are provided.
KEYWORDS
Fuzzy number; Trapezoidal; Weighted level-based averaging; Defuzzification.
1. INTRODUCTION
First introduced by Lotfi A. Zadeh in 1965, fuzzy logic and fuzzy set theory enabled the integration of verbal linguistic information into mathematical models [1]. In fuzzy decision-making models based on linguistic information, operations are usually performed on discrete fuzzy numbers [2, 3]. In [2], in order to merge subjective evaluations, a compensatory class of aggregation functions on the finite chain from [4] is used. Then the ranking method proposed by L. Chen and H. Lu in [5] is used to choose the best alternative, i.e., to exploit the collective linguistic preference (see [6]). This ranking method is based on the left and right dominance values of alternatives, which are defined as the average difference of the left and right spreads at some discrete levels. Here, an index of optimism is used to reflect a decision maker's degree of optimism. In our study, a more sophisticated form of this approach, based on the Weighted Average Based on Levels (WABL) defuzzification operator, is investigated.
Generally, defuzzification, i.e., determining the crisp representative of a fuzzy number (FN), is one of the basic operations in fuzzy inference systems, fuzzy decision-making systems, and many other fuzzy-logic-based systems. Defuzzification methods remain an active research topic, and various recent studies on them are available in the literature [7-10].
The well-known basic defuzzification methods are the Center of Area (COA), the Mean of Maxima (MOM), the Bisector of Area (BOA), etc. This group of methods is based on integrals over the real axis on which the fuzzy number is defined. However, there is another group of methods based on integrals over the [0, 1] axis of membership degrees. The most general representative of the latter group is the Weighted Average Based on Levels (WABL) method. This method builds on the study of the mean value of a fuzzy number proposed in the pioneering work [10]. Later research on this method has continued in many studies [11-13]. More detailed investigations of the WABL approach were carried out by Nasibov in [14] and continued in [15-18].
The main advantage of the WABL method is that it can be adjusted according to the decision-making strategy, or its parameters can be calculated via machine learning. In addition, the WABL parameters can be tuned so that the operator behaves like well-known methods such as COA, MOM, etc. [13, 19]. In [13], a WABL-type level-based method called SLIDE is presented. The advantage of the SLIDE method is that its parameters can be adjusted to give better results in fuzzy controllers. In [13], a machine learning approach is also given to optimally adjust the parameters of the SLIDE method. It reduces to the COA and MOM defuzzification methods as special cases.
The WABL approach and its variations are used for various purposes in many other papers. In many studies, the WABL approach is employed to find the best approximations of fuzzy numbers [20-26]. Many other studies use the WABL approach for choice and ranking as well as for determining distances between fuzzy numbers [27, 28]. In [29], an approach is proposed to obtain trapezoidal approximations of fuzzy numbers with respect to a weighted distance based on WABL. In [30, 31], step-type and piecewise linear approximations are also investigated.
In all of the previous studies, the WABL operator is presented and investigated for fuzzy numbers with a continuous universe of levels in the interval [0, 1]. In this study, we investigate the WABL for a discrete universe of levels in the same interval:

Λ = {α_0, α_1, …, α_n | α_i ∈ [0, 1]; α_0 < α_1 < ⋯ < α_n}. (1.1)

The basic forms of fuzzy numbers, such as triangular and trapezoidal fuzzy numbers with a discrete universe of levels and with different patterns of level weights, are investigated in this study, and some analytical formulas for calculating the WABL values are presented.
The rest of the paper is organized as follows. The next section recalls the definition of the WABL operator and the analytical formulas for calculating the WABL values of continuous fuzzy numbers with various types of level weight functions. In Section 3, discrete leveled trapezoidal fuzzy numbers are defined. In Section 4, pattern functions for constructing the levels' weights are investigated. In Section 5, the WABL values for discrete leveled trapezoidal FNs with various level weight patterns are investigated and some analytical formulas are proven. These formulas allow simple calculation of the WABL value of a fuzzy number without more complicated integral calculations. In Section 6, computational examples calculating WABL values of discrete fuzzy numbers are illustrated. Finally, the conclusion highlighting the benefits of this study completes the paper.
2. PRELIMINARIES

According to the α-representation, any fuzzy subset A of the real axis R, i.e., any fuzzy number, can be defined as follows:

A = ⋃_{α ∈ (0, 1]} (α / A_α), (2.1)

where

A_α = [L_A(α), R_A(α)] = {x ∈ R | L_A(α) ≤ x ≤ R_A(α)}, (2.2)

and ∀α ∈ (0, 1], [L_A(α), R_A(α)] is a continuous closed interval. In this connection, it is assumed that A_1 ≠ ∅, i.e., A is a normal fuzzy number.

Let A be a fuzzy number given via its α-representation. The density function of the degrees' importance (in short, the degree-importance function) is a function p(α) that satisfies the following normality constraints:

∫_0^1 p(α) dα = 1, (2.3)
p(α) ≥ 0, ∀α ∈ (0, 1]. (2.4)

Definition 2.1. The Weighted Averaging Based on Levels (WABL) operator for a continuous fuzzy number A is calculated as below:

WABL(A; β, p) = ∫_0^1 (β R_A(α) + (1 − β) L_A(α)) p(α) dα, (2.5)

where β ∈ [0, 1] is the "optimism" coefficient of the decision maker's strategy and the degree-importance function p satisfies the normality constraints (2.3)-(2.4).
Based on this definition, many methods can be constructed for obtaining the WABL parameters (i.e., the degree-importance function p and the optimism parameter β). These parameters give the method its flexibility. One method for calculating the parameters of the WABL operator is developed using a system of equations [18]. For simplicity, we will write WABL(A) instead of WABL(A; β, p) from now on.

Notice that any function p(α) satisfying constraints (2.3) and (2.4) can serve as a continuous degree-importance function. The following pattern of this function is handled in [17, 19]:

p(α) = (k + 1) α^k, k = 0, 1, 2, … (2.6)

Clearly, according to the parameter k, the degrees' importance (weights) will be constant (for k = 0), increase linearly (for k = 1), quadratically (for k = 2), etc., with respect to the level cuts.

In [19] it is shown that most of the well-known defuzzification operators can be simulated using the WABL operator. Simple analytical formulas for the WABL values of continuous triangular and trapezoidal fuzzy numbers are also derived in [19]. Some of these formulas are given below.
Fig. 1. a) A = (a, b, c) triangular, and b) A = (a, b1, b2, c) trapezoidal fuzzy numbers.
Definition 2.2. A fuzzy number with membership function of the form

μ_A(x) = (x − a)/(b − a), x ∈ [a, b),
μ_A(x) = (c − x)/(c − b), x ∈ [b, c],
μ_A(x) = 0, otherwise, (2.7)

is called a triangular fuzzy number A = (a, b, c) (Fig. 1a).

The LR functions of the triangular fuzzy number A = (a, b, c) are as follows:

L_A(α) = a + α(b − a) and R_A(α) = c − α(c − b), ∀α ∈ [0, 1]. (2.8)

Theorem 2.1 [19]. Let A = (a, b, c) be a triangular fuzzy number and suppose that the degree-importance function is of the form (2.6). Then the following formula for the WABL is valid:

WABL(A) = β [c − ((k + 1)/(k + 2)) (c − b)] + (1 − β) [a + ((k + 1)/(k + 2)) (b − a)], (2.9)

where k is the parameter of the degree-importance function.
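The closed-form value (2.9) can be cross-checked against a direct numerical evaluation of the defining integral (2.5). The following Python sketch is ours, not from the paper; it assumes the notation reconstructed above (triangular FN A = (a, b, c), optimism coefficient β passed as `beta`, and degree-importance pattern p(α) = (k + 1)α^k):

```python
def wabl_triangular(a, b, c, beta, k):
    # Closed-form WABL of a triangular FN, eq. (2.9)
    r = (k + 1) / (k + 2)
    return beta * (c - r * (c - b)) + (1 - beta) * (a + r * (b - a))

def wabl_numeric(a, b, c, beta, k, steps=20000):
    # Midpoint-rule evaluation of the integral (2.5),
    # using L_A(alpha) and R_A(alpha) from eq. (2.8)
    total, h = 0.0, 1.0 / steps
    for i in range(steps):
        alpha = (i + 0.5) * h
        left = a + alpha * (b - a)
        right = c - alpha * (c - b)
        p = (k + 1) * alpha ** k
        total += (beta * right + (1 - beta) * left) * p * h
    return total

exact = wabl_triangular(0.0, 1.0, 2.0, beta=0.5, k=1)   # symmetric case -> 1.0
approx = wabl_numeric(0.0, 1.0, 2.0, beta=0.5, k=1)
```

For the symmetric triangle (0, 1, 2) with β = 0.5 both evaluations agree at 1.0, matching the intuition that a neutral decision maker defuzzifies a symmetric FN to its peak.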
Definition 2.3. A fuzzy number with membership function of the form

μ_A(x) = (x − a)/(b1 − a), x ∈ [a, b1),
μ_A(x) = 1, x ∈ [b1, b2),
μ_A(x) = (c − x)/(c − b2), x ∈ [b2, c),
μ_A(x) = 0, otherwise, (2.10)

is called a trapezoidal fuzzy number A = (a, b1, b2, c) (Fig. 1b).

The LR functions of the trapezoidal fuzzy number A = (a, b1, b2, c) are as follows:

L_A(α) = a + α(b1 − a) and R_A(α) = c − α(c − b2), ∀α ∈ [0, 1]. (2.11)
Theorem 2.2 [19]. Suppose A = (a, b1, b2, c) is a trapezoidal fuzzy number and let the degree-importance function be of the form (2.6). Then the following formula is valid for the WABL:

WABL(A) = β [c − ((k + 1)/(k + 2)) (c − b2)] + (1 − β) [a + ((k + 1)/(k + 2)) (b1 − a)], (2.12)

where k is the parameter of the degree-importance function.
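Formula (2.12) admits a one-line implementation. The sketch below is our own illustration (symbols follow the reconstruction above); note that setting b1 = b2 recovers the triangular formula (2.9):

```python
def wabl_trapezoidal(a, b1, b2, c, beta, k):
    # Closed-form WABL of a trapezoidal FN, eq. (2.12):
    # beta weights the right side, (1 - beta) the left side,
    # and r = (k + 1)/(k + 2) comes from integrating alpha * (k + 1) * alpha**k
    r = (k + 1) / (k + 2)
    return beta * (c - r * (c - b2)) + (1 - beta) * (a + r * (b1 - a))

# Uniform level weights (k = 0) and a rather optimistic beta = 0.8
value = wabl_trapezoidal(10, 14, 15, 23, beta=0.8, k=0)  # -> 17.6
```

The degenerate trapezoid (0, 1, 1, 2) gives the same value as `wabl_triangular` would for (0, 1, 2), which is a convenient sanity check.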
3. WABL OF A DISCRETE FUZZY NUMBER

As mentioned above, decision-making processes based on linguistic information mostly operate with discrete fuzzy numbers [2, 3]. In our case, discrete fuzzy numbers with a given discrete universe U = {x_1, x_2, …, x_N | x_j ∈ R, j = 1, …, N} and with given discrete values of the membership degrees

Λ = {α_0, α_1, …, α_n | α_i ∈ [0, 1]; α_0 < α_1 < ⋯ < α_n} (3.1)

are handled. Such fuzzy numbers can be represented as follows:

A = ⋃_{x ∈ U} μ(x)/x, (3.2)

where μ(x) ∈ Λ, ∀x ∈ U. We call a fuzzy number of this form a discrete valued fuzzy number. If only the constraint (3.1) is satisfied, we call the FN a discrete leveled fuzzy number.

Definition 3.1. A discrete triangular FN A = (a, b, c) is a FN with discrete universe U such that

A = ∑_{x_j ∈ U} μ_A(x_j)/x_j, (3.3)

where

μ_A(x_j) = (x_j − a)/(b − a), x_j ∈ [a, b),
μ_A(x_j) = (c − x_j)/(c − b), x_j ∈ [b, c],
μ_A(x_j) = 0, otherwise. (3.4)

Definition 3.2. A discrete trapezoidal FN A = (a, b1, b2, c) is a FN with discrete universe U such that

A = ∑_{x_j ∈ U} μ_A(x_j)/x_j, (3.5)

where

μ_A(x_j) = (x_j − a)/(b1 − a), x_j ∈ [a, b1),
μ_A(x_j) = 1, x_j ∈ [b1, b2),
μ_A(x_j) = (c − x_j)/(c − b2), x_j ∈ [b2, c),
μ_A(x_j) = 0, otherwise. (3.6)

Let A_{α_i} = {x_j ∈ U | μ(x_j) ≥ α_i} be the α_i-level set of the fuzzy number A. Then

L_A(α_i) = min{x_j | x_j ∈ A_{α_i}}, (3.7)
R_A(α_i) = max{x_j | x_j ∈ A_{α_i}}. (3.8)
Let us denote

m_A(α_i) = (1 − β) L_A(α_i) + β R_A(α_i), (3.9)

where β ∈ [0, 1] is the "optimism" coefficient of the WABL operator and m_A(α_i) is the mean value, according to the optimism coefficient β, for the level α_i. Then the WABL value of the fuzzy number A is calculated as follows:

WABL(A) = ∑_{α ∈ Λ} p(α) (β R_A(α) + (1 − β) L_A(α)) = ∑_{α ∈ Λ} p(α) m_A(α), (3.10)
∑_{α ∈ Λ} p(α) = 1, (3.11)
p(α) ≥ 0, ∀α ∈ Λ, (3.12)

where p(α), α ∈ Λ, is the degree-importance mass function.
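Equations (3.7)-(3.10) translate directly into code. The helper below is our own sketch (not from the paper): the FN is given as a dict mapping each x ∈ U to μ(x), and the level weights as a dict mapping each α ∈ Λ to p(α):

```python
def discrete_wabl(memberships, level_weights, beta):
    """WABL value per eq. (3.10) for a discrete fuzzy number.

    memberships  : dict {x: mu(x)} over the discrete universe U
    level_weights: dict {alpha: p(alpha)}, weights summing to 1
    beta         : optimism coefficient in [0, 1]
    """
    total = 0.0
    for alpha, p in level_weights.items():
        # alpha-level set, eqs. (3.7)-(3.8): all x with membership >= alpha
        cut = [x for x, mu in memberships.items() if mu >= alpha]
        left, right = min(cut), max(cut)                  # L_A(alpha), R_A(alpha)
        total += p * (beta * right + (1 - beta) * left)   # p(alpha) * m_A(alpha)
    return total

# Tiny symmetric example: both levels have level mean 1.0
mu = {0: 0.5, 1: 1.0, 2: 0.5}
w = {0.5: 0.5, 1.0: 0.5}
print(discrete_wabl(mu, w, beta=0.5))  # -> 1.0
```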
4. USING PATTERN FUNCTIONS FOR CONSTRUCTING DISCRETE LEVEL WEIGHTS

We consider the discrete FN for the case where the levels' set Λ is a discrete set on [0, 1], such as Λ = {α_0, α_1, …, α_n}. In this case, similarly to formula (2.6), the level weights (i.e., degree-importance) can be produced according to various patterns such as constant, linear, quadratic, etc. For this purpose we can use a general pattern function:

γ(α_i) ≡ γ_i = i^k, i = 0, 1, …, n. (4.1)

Obviously, different values of the parameter k = 0, 1, 2, … produce different patterns such as constant, linear, quadratic, etc. To obtain p(α_i) ≡ p_i, i = 0, 1, …, n, the following must be taken into account:

p_i = γ_i / S, i = 0, 1, …, n, (4.2)

where

S = ∑_{j=0}^n γ_j. (4.3)

It is clear that the non-negativity and normality conditions are satisfied:

p_i ≥ 0, i = 0, 1, …, n, (4.4)
∑_{i=0}^n p_i = 1. (4.5)

Some special cases of the level weights are handled below.

a. The level weights are constant. In this case k = 0 in the weights' pattern function, so

γ_i = i^0 = 1, i = 0, 1, …, n, (4.6)

and

S = ∑_{i=0}^n 1 = n + 1 (4.7)

will be satisfied. Considering eq. (4.2), the level weights take the form
p_i = γ_i/S = 1/(n + 1), i = 0, 1, …, n. (4.8)

b. The weights increase linearly w.r.t. levels. In this case k = 1 in the weights' pattern function, consequently

γ_i = i, i = 0, 1, …, n, (4.9)

so

S = ∑_{i=0}^n i = n(n + 1)/2 (4.10)

will be satisfied. Considering eq. (4.2), the level weights take the form

p_i = γ_i/S = 2i/(n(n + 1)), i = 0, 1, …, n. (4.11)

c. The weights increase quadratically w.r.t. levels. In this case k = 2 in the weights' pattern function, consequently

γ_i = i², i = 0, 1, …, n, (4.12)

so

S = ∑_{i=0}^n i² = n(n + 1)(2n + 1)/6 (4.13)

will be satisfied. Considering eq. (4.2), the level weights take the form

p_i = γ_i/S = 6i²/(n(n + 1)(2n + 1)), i = 0, 1, …, n. (4.14)
In the next section, analytical formulas are developed to calculate the WABL value for a discrete trapezoidal fuzzy number in the case of equally distributed discrete levels and with different weight pattern functions.
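The three weight patterns (4.8), (4.11), and (4.14) all follow from the single normalization (4.2)-(4.3), which a few lines of Python make explicit (our own sketch; note that in Python `0 ** 0` evaluates to 1, which matches γ_i = i^0 = 1 in the constant case):

```python
def level_weights(n, k):
    # gamma_i = i**k, eq. (4.1); p_i = gamma_i / S with S = sum of gamma_j, eqs. (4.2)-(4.3)
    gamma = [i ** k for i in range(n + 1)]
    s = sum(gamma)
    return [g / s for g in gamma]

print(level_weights(4, 0))  # constant, eq. (4.8):  [0.2, 0.2, 0.2, 0.2, 0.2]
print(level_weights(4, 1))  # linear, eq. (4.11):   [0.0, 0.1, 0.2, 0.3, 0.4]
print(level_weights(4, 2))  # quadratic, eq. (4.14): p_i = i**2 / 30 for n = 4
```

Whatever the value of k, the returned weights are non-negative and sum to 1, as required by (4.4)-(4.5).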
5. DETERMINING THE WABL VALUE FOR A DISCRETE LEVELED TRAPEZOIDAL FUZZY NUMBER IN THE CASE OF EQUALLY DISTRIBUTED LEVELS

Let the given levels be equally distributed, i.e., the levels' set is Λ = {α_0, α_1, …, α_n} with Δα = const. Then the following equalities are satisfied:

Δα = 1/n ⇒ α_i = iΔα = i/n, i = 0, 1, …, n. (5.1)

Definition 5.1. Consider the trapezoidal FN A = (a, b1, b2, c). Suppose that the level sets A_{α_i}, i = 0, 1, …, n, are constructed according to the discrete values α_i ∈ [0, 1], i = 0, 1, …, n. Such fuzzy numbers we call trapezoidal discrete leveled fuzzy numbers (Fig. 2).
Fig. 2. The discrete leveled trapezoidal FN A = (a, b1, b2, c) with levels α_i ∈ [0, 1], i = 0, 1, …, n.
Proposition 5.1. The following equality is satisfied for a trapezoidal FN A = (a, b1, b2, c) for any level α ∈ [0, 1]:

m(α) = m(0) + α[m(1) − m(0)]. (5.2)

Proof: The left and right side functions of a trapezoidal FN A = (a, b1, b2, c) are as follows:

L_A(α) = a + α(b1 − a), α ∈ [0, 1], (5.3)
R_A(α) = c − α(c − b2), α ∈ [0, 1]. (5.4)

So, according to eq. (3.9),

m(α) = (1 − β)L_A(α) + βR_A(α) = (1 − β)(a + α(b1 − a)) + β(c − α(c − b2)). (5.5)

Considering that

m(0) = (1 − β)a + βc, (5.6)

and

m(1) = (1 − β)b1 + βb2, (5.7)

we can write:

m(α) = (1 − β)a + α(1 − β)(b1 − a) + βc − αβ(c − b2) (5.8)
= m(0) + α(1 − β)(b1 − a) − αβ(c − b2)
= m(0) + α[(1 − β)(b1 − a) − β(c − b2)]
= m(0) + α[(1 − β)b1 − (1 − β)a − βc + βb2]
= m(0) + α[m(1) − m(0)]. (5.9)
Proposition 5.2. For any discrete leveled trapezoidal FN A = (a, b1, b2, c), when Δα = const, the following is valid:

∑_{i=0}^n m(α_i) = ((n + 1)/2)(m(0) + m(1)). (5.10)

Proof: Considering Proposition 5.1 in the case of a discrete leveled trapezoidal FN A = (a, b1, b2, c), we can write
∑_{i=0}^n m(α_i) = ∑_{i=0}^n [m(0) + α_i(m(1) − m(0))] = (n + 1)m(0) + (m(1) − m(0)) ∑_{i=0}^n α_i. (5.11)

Since Δα = const, we can write

Δα = 1/n ⇒ α_i = iΔα = i/n, i = 0, 1, …, n, (5.12)

so

∑_{i=0}^n α_i = (1/n) ∑_{i=0}^n i = (1/n) · n(n + 1)/2 = (n + 1)/2. (5.13)

Substituting eq. (5.13) into (5.11), we obtain:

∑_{i=0}^n m(α_i) = (n + 1)m(0) + (m(1) − m(0)) · (n + 1)/2
= ((n + 1)/2)(2m(0) + m(1) − m(0))
= ((n + 1)/2)(m(0) + m(1)), (5.14)

which completes the proof.
Proposition 5.3. For any discrete leveled trapezoidal FN A = (a, b1, b2, c), when Δα = const, the following is valid:

∑_{i=0}^n i·m(α_i) = (n + 1)[3n·m(0) + (2n + 1)(m(1) − m(0))]/6. (5.15)

Proof:

∑_{i=0}^n i·m(α_i) = ∑_{i=0}^n i[m(0) + α_i(m(1) − m(0))] = m(0) ∑_{i=0}^n i + (m(1) − m(0)) ∑_{i=0}^n i·α_i. (5.16)

Considering α_i = i/n, i = 0, 1, …, n, when Δα = const, and the well-known equality

∑_{i=0}^n i² = n(n + 1)(2n + 1)/6, (5.17)

eq. (5.16) can be continued as follows:

∑_{i=0}^n i·m(α_i) = m(0) · n(n + 1)/2 + (m(1) − m(0)) · (1/n) ∑_{i=0}^n i²
= m(0) · n(n + 1)/2 + (m(1) − m(0)) · (n + 1)(2n + 1)/6
= [3n·m(0)(n + 1) + (m(1) − m(0))(n + 1)(2n + 1)]/6
= (n + 1)[3n·m(0) + (2n + 1)(m(1) − m(0))]/6, (5.18)
which completes the proof.
Theorem 5.1. If Δα = const and the level weights are equally distributed, then the WABL of the discrete leveled trapezoidal FN A = (a, b1, b2, c) is as follows:

WABL(A) = (m(0) + m(1))/2. (5.19)

Proof: It is clear that

WABL(A) = ∑_{α ∈ Λ} p(α)(βR_A(α) + (1 − β)L_A(α)) = ∑_{α ∈ Λ} p(α)m(α). (5.20)

We assume that Λ = {α_0, α_1, …, α_n}, so equation (5.20) can be written as follows:

WABL(A) = ∑_{i=0}^n p_i m(α_i), (5.21)

where p_i ≡ p(α_i), i = 0, 1, …, n.

In the case of equally distributed level weights, the weights have the form (4.8). According to Proposition 5.2, the following is valid:

∑_{i=0}^n m(α_i) = ((n + 1)/2)(m(0) + m(1)), (5.22)

so we can write

WABL(A) = ∑_{i=0}^n p_i m(α_i) = (1/(n + 1)) ∑_{i=0}^n m(α_i) = (m(0) + m(1))/2, (5.23)

which completes the proof.
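Theorem 5.1 can be verified numerically by comparing the direct sum (5.21) with the closed form (5.19). The sketch below is our own, using a hypothetical trapezoid and the notation reconstructed above (α_i = i/n, constant weights p_i = 1/(n + 1)):

```python
def m_level(a, b1, b2, c, beta, alpha):
    # Level mean m(alpha) = (1 - beta) L_A(alpha) + beta R_A(alpha), eqs. (3.9), (5.3)-(5.5)
    left = a + alpha * (b1 - a)
    right = c - alpha * (c - b2)
    return (1 - beta) * left + beta * right

def wabl_equal_weights(a, b1, b2, c, beta, n):
    # Direct sum (5.21) with constant weights p_i = 1/(n + 1), eq. (4.8)
    return sum(m_level(a, b1, b2, c, beta, i / n) for i in range(n + 1)) / (n + 1)

# Hypothetical trapezoid A = (0, 2, 3, 6), beta = 0.3, n = 5
m0 = m_level(0, 2, 3, 6, 0.3, 0.0)
m1 = m_level(0, 2, 3, 6, 0.3, 1.0)
direct = wabl_equal_weights(0, 2, 3, 6, 0.3, 5)
closed = (m0 + m1) / 2  # theorem 5.1, eq. (5.19)
```

Both `direct` and `closed` evaluate to (1.8 + 2.3)/2 = 2.05 here, illustrating the theorem.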
Let us consider the linearly increasing distribution of the levels' weights as in (4.9). Then the level weights must be as in (4.11).

Theorem 5.2. If Δα = const and the level weights increase linearly w.r.t. levels according to the pattern (4.9), then the WABL of the discrete leveled trapezoidal FN A = (a, b1, b2, c) is as follows:

WABL(A) = m(0) + ((2n + 1)/(3n))(m(1) − m(0)). (5.24)

Proof: Considering that

Δα = const ⇒ α_i = i/n, i = 0, 1, …, n, (5.25)

that the level weights have the pattern (4.11), and that m(α_i) can be calculated as in Proposition 5.1, the following equalities can be written:

WABL(A) = ∑_{i=0}^n p_i m(α_i) = (2/(n(n + 1))) ∑_{i=0}^n i·m(α_i). (5.26)

Considering from Proposition 5.3 that

∑_{i=0}^n i·m(α_i) = (n + 1)[3n·m(0) + (2n + 1)(m(1) − m(0))]/6, (5.27)

we can write
WABL(A) = (2/(n(n + 1))) · (n + 1)[3n·m(0) + (2n + 1)(m(1) − m(0))]/6
= [3n·m(0) + (2n + 1)(m(1) − m(0))]/(3n)
= m(0) + ((2n + 1)/(3n))(m(1) − m(0)), (5.28)

which completes the proof.
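As with Theorem 5.1, Theorem 5.2 can be checked by comparing the direct sum (5.21) under linear weights (4.11) against the closed form (5.24). This is our own sketch with a hypothetical trapezoid:

```python
def m_level(a, b1, b2, c, beta, alpha):
    # Level mean m(alpha) = (1 - beta) L_A(alpha) + beta R_A(alpha), eq. (3.9)
    return (1 - beta) * (a + alpha * (b1 - a)) + beta * (c - alpha * (c - b2))

def wabl_linear_weights(a, b1, b2, c, beta, n):
    # Direct sum (5.21) with linear weights p_i = 2i/(n(n + 1)) = i/S, eq. (4.11)
    s = n * (n + 1) / 2
    return sum((i / s) * m_level(a, b1, b2, c, beta, i / n) for i in range(n + 1))

# Hypothetical trapezoid A = (0, 2, 3, 6), beta = 0.3, n = 5
m0 = m_level(0, 2, 3, 6, 0.3, 0.0)
m1 = m_level(0, 2, 3, 6, 0.3, 1.0)
direct = wabl_linear_weights(0, 2, 3, 6, 0.3, 5)
closed = m0 + (2 * 5 + 1) / (3 * 5) * (m1 - m0)  # theorem 5.2, eq. (5.24)
```

Here both evaluations give 32.5/15 ≈ 2.1667, slightly above the equal-weights value since linear weights favor the higher levels.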
It is clear that when b1 = b2, the trapezoidal FN becomes a triangular one. Thus, all of the previous propositions and theorems are also valid for triangular fuzzy numbers.
6. COMPUTATIONAL EXAMPLES

The first example concerns the calculation of the WABL value for a discrete fuzzy number with an arbitrary discrete set of levels (without the assumption that Δα = const).
Example 6.1. Let us calculate the WABL value of the discrete fuzzy number given below:

A = 0.1/(−2) + 0.4/0 + 0.7/1 + 1.0/2 + 0.7/4 + 0.5/5, (6.1)

when the "optimism" parameter is β = 0.2. Suppose that the level weights are as follows:

p(0.1) = 0.1, p(0.4) = 0.3, p(0.5) = 0.3, p(0.7) = 0.2, p(1.0) = 0.1. (6.2)

It is clear from the conditions of the example that the discrete universe is U = {−2, 0, 1, 2, 4, 5} and the levels' set is Λ = {0.1, 0.4, 0.5, 0.7, 1.0}.

So we can calculate (denoting m(α) ≡ m_A(α)):

L_A(0.1) = min{−2, 0, 1, 2, 4, 5} = −2; R_A(0.1) = max{−2, 0, 1, 2, 4, 5} = 5, (6.3)
m(0.1) = 0.8 · (−2) + 0.2 · 5 = −0.6; (6.4)
L_A(0.4) = min{0, 1, 2, 4, 5} = 0; R_A(0.4) = max{0, 1, 2, 4, 5} = 5, (6.5)
m(0.4) = 0.8 · 0 + 0.2 · 5 = 1.0; (6.6)
L_A(0.5) = min{1, 2, 4, 5} = 1; R_A(0.5) = max{1, 2, 4, 5} = 5, (6.7)
m(0.5) = 0.8 · 1 + 0.2 · 5 = 1.8; (6.8)
L_A(0.7) = min{1, 2, 4} = 1; R_A(0.7) = max{1, 2, 4} = 4, (6.9)
m(0.7) = 0.8 · 1 + 0.2 · 4 = 1.6; (6.10)
L_A(1.0) = min{2} = 2; R_A(1.0) = max{2} = 2, (6.11)
m(1.0) = 0.8 · 2 + 0.2 · 2 = 2.0. (6.12)

Therefore,
WABL(A) = ∑_{α ∈ Λ} p(α) m(α) = 0.1 · (−0.6) + 0.3 · 1.0 + 0.3 · 1.8 + 0.2 · 1.6 + 0.1 · 2.0 = 1.3. (6.13)
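Example 6.1 can be reproduced in a few lines (our own sketch; the memberships below are read off from the level sets in (6.3)-(6.12)):

```python
mu = {-2: 0.1, 0: 0.4, 1: 0.7, 2: 1.0, 4: 0.7, 5: 0.5}   # discrete FN A, eq. (6.1)
p = {0.1: 0.1, 0.4: 0.3, 0.5: 0.3, 0.7: 0.2, 1.0: 0.1}   # level weights, eq. (6.2)
beta = 0.2                                                # optimism coefficient

wabl = 0.0
for alpha, weight in p.items():
    cut = [x for x, m in mu.items() if m >= alpha]        # alpha-level set, eqs. (3.7)-(3.8)
    m_alpha = beta * max(cut) + (1 - beta) * min(cut)     # level mean m(alpha), eq. (3.9)
    wabl += weight * m_alpha                              # accumulate eq. (3.10)

print(round(wabl, 6))  # -> 1.3
```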
The next example concerns the calculation of the WABL for a discrete leveled trapezoidal fuzzy number with equally distributed level weights, under the assumption that Δα = const.

Example 6.2. Let us calculate the WABL value of the discrete leveled trapezoidal fuzzy number A = (10, 14, 15, 23) (Fig. 3), and assume that the "optimism" parameter is β = 0.8. Suppose that the levels are equally distributed and the levels' weights are generated according to the pattern function γ_i = i^0 = 1, i = 0, 1, …, 4, so

S = ∑_{i=0}^4 γ_i = 5. (6.14)

Fig. 3. The discrete leveled trapezoidal FN A = (10, 14, 15, 23) and its WABL value.

According to eq. (4.8), the level weights take the form

p_i = γ_i/S = 1/5, i = 0, 1, …, 4. (6.15)

Now we calculate m(0) and m(1):

m(0) = (1 − β)a + βc = 0.2 · 10 + 0.8 · 23 = 20.4, (6.16)
m(1) = (1 − β)b1 + βb2 = 0.2 · 14 + 0.8 · 15 = 14.8. (6.17)

So, according to Theorem 5.1, we can calculate the WABL value quickly:

WABL(A) = (m(0) + m(1))/2 = (20.4 + 14.8)/2 = 17.6. (6.18)
Finally, the following example concerns the calculation of the WABL for a discrete trapezoidal fuzzy number under the assumption that Δα = const, with the levels' weights increasing linearly w.r.t. the levels.

Example 6.3. Let us calculate the WABL value of the same discrete trapezoidal fuzzy number A = (10, 14, 15, 23), and assume that the "optimism" parameter is again β = 0.8. Now suppose that the levels' weights are generated according to the pattern function:

γ_i = i, i = 0, 1, …, n, (6.19)
with n = 4, so

S = ∑_{i=0}^4 γ_i = ∑_{i=0}^4 i = 10. (6.20)

According to eq. (4.11), the level weights take the form

p_i = γ_i/S = i/10, i = 0, 1, …, 4, (6.21)

i.e.,

p_0 = 0, p_1 = 1/10, p_2 = 2/10, p_3 = 3/10, p_4 = 4/10. (6.22)

Because the "optimism" parameter β is the same as in Example 6.2, the values of m(0) and m(1) are the same as in the previous example: m(0) = 20.4 and m(1) = 14.8. Finally, according to Theorem 5.2, we can calculate the WABL value quickly:

WABL(A) = m(0) + ((2n + 1)/(3n))(m(1) − m(0)) = 20.4 + (9/12) · (14.8 − 20.4) = 16.2. (6.23)
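Both trapezoidal examples can be recomputed directly from the theorem formulas (our own sketch; symbols follow the reconstruction above):

```python
a, b1, b2, c = 10, 14, 15, 23     # the trapezoidal FN A = (10, 14, 15, 23)
beta, n = 0.8, 4

m0 = (1 - beta) * a + beta * c    # m(0) = 20.4, eq. (6.16)
m1 = (1 - beta) * b1 + beta * b2  # m(1) = 14.8, eq. (6.17)

# Example 6.2: equal level weights, theorem 5.1
wabl_equal = (m0 + m1) / 2        # -> 17.6

# Example 6.3: linearly increasing level weights, theorem 5.2
wabl_linear = m0 + (2 * n + 1) / (3 * n) * (m1 - m0)
```

Because the linear pattern shifts weight toward the higher levels and m(1) < m(0) here, the linear-weights value falls below the equal-weights value.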
7. CONCLUSION
In this study, we have considered discrete fuzzy numbers, which are used in various types of fuzzy
decision-making systems with linguistic information. Trapezoidal fuzzy numbers and their special
form, triangular fuzzy numbers, are the most commonly used fuzzy number types in fuzzy
modeling, so such discrete fuzzy numbers have been the subject of this study. The WABL
operator, which takes into account the level weights and the decision maker's "optimism"
coefficient, has been defined and investigated for these numbers. Note that the flexibility of the
WABL operator makes it possible, through machine learning, to train its parameters according to
the decision maker's strategy, producing more satisfactory results for the decision maker. In this
study, simple analytical formulas have been derived for the calculation of WABL values of
discrete trapezoidal fuzzy numbers A = (a_1, a_2, a_3, a_4) with constant, linear, and quadratic
pattern functions of level weights. Examples illustrating the use of the theoretical formulas have
been presented. Moreover, since a trapezoidal fuzzy number reduces to the triangular one
A = (a_1, a_2, a_4) when a_2 = a_3, all the results are also valid for discrete triangular fuzzy
numbers.
In our future studies, we plan to develop analytical formulas that facilitate the calculation of
WABL for parametric trapezoidal discrete fuzzy numbers, which are a more general form of the
trapezoidal discrete fuzzy numbers considered here.
ACKNOWLEDGEMENTS
This study is partially funded by the Scientific Research Projects Coordination Office of Dokuz
Eylul University under grant 2017.KB.FEN.015.