Interval type-2 fuzzy logic systems (IT2FLSs) have recently shown great potential in applications
with dynamic uncertainties. The additional degree of freedom provided by IT2FLSs is believed to allow
a better representation of the uncertainty and vagueness present in prediction models. However,
determining the parameters of the membership functions is crucial for obtaining optimum
system performance. Particle Swarm Optimization (PSO) has attracted the interest of researchers
due to its simplicity, effectiveness, and efficiency in solving real-world optimization problems. In this
paper, a novel optimal IT2FLS is designed and applied to predicting winning chances in elections. PSO is
used as the optimization algorithm to tune the parameters of the primary membership functions of the IT2FLS,
improving the performance and accuracy of the interval type-2 fuzzy sets. Simulation results show the superiority
of the PSO-IT2FLS over a comparable non-optimized IT2FLS, with an increase in prediction accuracy.
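The tuning loop described above can be sketched generically. Below is a minimal PSO applied to a stand-in objective: fitting the centre and width of a Gaussian membership function to target data, a simplification of tuning the primary membership functions of an IT2FLS. The swarm size, inertia `w`, acceleration constants `c1`/`c2`, and the objective itself are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(objective, dim, n_particles=30, iters=100,
                 bounds=(0.0, 1.0), w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: returns the best parameter vector."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Hypothetical objective: fit the centre c and width s of one Gaussian
# membership function so it tracks a target curve (a stand-in for the
# paper's prediction-error objective).
xs = np.linspace(0, 1, 50)
target = np.exp(-((xs - 0.3) ** 2) / 0.02)

def objective(p):
    c, s = p
    mf = np.exp(-((xs - c) ** 2) / max(abs(s), 1e-6))
    return np.mean((mf - target) ** 2)

best = pso_minimize(objective, dim=2)
```

The same loop applies unchanged when the parameter vector holds the upper and lower membership-function parameters of an interval type-2 set; only `objective` changes.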
This document discusses using various artificial intelligence techniques, such as neural networks and fuzzy inference systems, to predict the direction of stock prices for Microsoft and Intel over a 13-year period. It evaluates the performance of different models, including backpropagation neural networks and fuzzy inference systems trained with neural learning and genetic algorithms. The best models correctly predicted the direction of Microsoft stock prices 63% of the time, resulting in returns of up to 103%. While prediction for Intel was more difficult, achieving the highest returns required selecting the stock with the best-performing model.
GA-CFS APPROACH TO INCREASE THE ACCURACY OF ESTIMATES IN ELECTIONS PARTICIPATION (ijfcstjournal)
This paper proposes a combined GA-CFS approach to increase the accuracy of predicting participation in elections by identifying and removing noisy features. The approach uses a genetic algorithm as the search method to select important features for election participation, guided by correlation-based feature selection (CFS), which evaluates feature subsets. When applied to a dataset on election-participation factors, the combined method improved the predictive accuracy of classification algorithms compared to using the full feature set. The results demonstrate that the GA-CFS approach is effective at identifying and removing irrelevant and noisy features to enhance predictive performance.
A preliminary survey on optimized multiobjective metaheuristic methods for da... (ijcsit)
The present survey provides the state of the art of research devoted to Evolutionary Approaches
(EAs) for clustering, exemplified with a diversity of evolutionary computations. The survey provides a
nomenclature that highlights aspects that are particularly important in the context of evolutionary data
clustering. The paper examines the clustering trade-offs addressed by a wide range of Multi-Objective
Evolutionary Approach (MOEA) methods. Finally, this study addresses the open challenges of
MOEA design and data clustering, and closes with conclusions and recommendations for novice and
experienced researchers, positioning the most promising paths for future research.
This document presents a proposed churn prediction model based on data mining techniques. The model consists of six steps: identifying the problem domain, data selection, investigating the data set, classification, clustering, and utilizing the knowledge gained. The authors apply their model to a data set of 5,000 mobile service customers using data mining tools. They train classification models using decision trees, neural networks, and support vector machines. Customers are classified as churners or non-churners. Churners are then clustered into three groups. The results are interpreted to gain insights into customer retention.
PREDICTING CLASS-IMBALANCED BUSINESS RISK USING RESAMPLING, REGULARIZATION, A... (IJMIT JOURNAL)
We aim to develop and improve imbalanced business-risk models by jointly using proper
evaluation criteria, resampling, cross-validation, classifier regularization, and ensembling techniques.
The Area Under the Receiver Operating Characteristic Curve (AUC of ROC) is used for model comparison
based on 10-fold cross-validation. Two undersampling strategies, random undersampling (RUS)
and cluster centroid undersampling (CCUS), as well as two oversampling methods, random
oversampling (ROS) and the Synthetic Minority Oversampling Technique (SMOTE), are applied. Three highly
interpretable classifiers are implemented: logistic regression without regularization (LR), L1-regularized LR
(L1LR), and decision tree (DT). Two ensembling techniques, Bagging and
Boosting, are applied to the DT classifier for further model improvement. The results show that Boosting
on DT using data oversampled to 50% positives via SMOTE is the optimal model, achieving
AUC, recall, and F1 scores of 0.8633, 0.9260, and 0.8907, respectively.
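The oversampling step described above can be sketched without library dependencies. The snippet below implements the core SMOTE idea, synthesizing minority samples by interpolating each chosen sample toward one of its nearest minority-class neighbours, on hypothetical toy data; the class sizes, feature dimensions, and `k` are assumptions, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

def smote(X_min, n_new, k=5):
    """Generate n_new synthetic minority samples by interpolating each chosen
    sample toward a random one of its k nearest minority-class neighbours
    (the core SMOTE idea, sketched without library dependencies)."""
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # ignore self-distance
    neighbours = np.argsort(d, axis=1)[:, :k]
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))               # pick a minority sample
        j = rng.choice(neighbours[i])              # and one of its neighbours
        gap = rng.random()                         # interpolation factor
        synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synth)

# Imbalanced toy data: 90 negatives, 10 positives.
X_neg = rng.normal(0.0, 1.0, (90, 2))
X_pos = rng.normal(3.0, 1.0, (10, 2))

# Oversample positives until the training set contains 50% positives.
X_new = smote(X_pos, n_new=80, k=5)
X = np.vstack([X_neg, X_pos, X_new])
y = np.array([0] * 90 + [1] * 90)
```

In practice a maintained implementation such as imbalanced-learn's `SMOTE` would be used; the sketch only shows why the synthetic points stay inside the minority region rather than being exact duplicates as in ROS.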
TOWARDS A SYSTEM DYNAMICS MODELING METHOD BASED ON DEMATEL (ijcsit)
This document proposes a new method for constructing system dynamics models that combines the Decision Making Trial and Evaluation Laboratory (DEMATEL) technique with system dynamics modeling. DEMATEL is first used to systematically define and quantify causal relationships between variables in a system. The results from DEMATEL, including impact relation maps and a total influence matrix, are then used to derive the causal loop diagram and define variable weights in the stock-flow chart equations of the system dynamics model. This combined method aims to overcome limitations and subjectivity in traditional system dynamics modeling that relies solely on decision makers' mental models.
Fuzzy Logic Approach to Identify Deprivation Index in Peninsular Malaysia (journalBEEI)
Deprivation indices are similar to inequality indices or indices of disadvantage: they are built to measure access to basic necessities in a specific study area or region. Many such indices have been constructed in previous studies. However, because these indices depend mostly on two factors, the socio-economic conditions and the geography of the study area, different results are generated in different areas. The objective of this study is to construct a new index based on these factors for Peninsular Malaysia using a fuzzy logic approach. The study employed twelve variables describing facility conditions, obtained from Malaysia's 2000 census report, as input parameters to the fuzzy logic system. The data were turned into linguistic variables and shaped into rules in IF-THEN form. The centroid-of-area method was then applied to obtain the final deprivation index for each district in Peninsular Malaysia. The results showed that less developed states such as Kelantan and Kedah generated lower indices, while more developed states such as Selangor and W.P. Kuala Lumpur generated higher indices.
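The centroid-of-area defuzzification mentioned above reduces an aggregated fuzzy output set to a single crisp index. A minimal sketch, with a hypothetical triangular output set standing in for a district's aggregated deprivation memberships:

```python
import numpy as np

def centroid_of_area(x, mu):
    """Centroid (centre-of-gravity) defuzzification: the crisp output is the
    membership-weighted mean over the universe of discourse x."""
    return float(np.sum(x * mu) / np.sum(mu))

# Hypothetical aggregated output set for one district: a triangular
# membership function peaking at a deprivation score of 0.4.
x = np.linspace(0.0, 1.0, 101)
mu = np.maximum(0.0, 1.0 - np.abs(x - 0.4) / 0.3)

index = centroid_of_area(x, mu)   # crisp deprivation index, here 0.4
```

For a symmetric output set the centroid coincides with the peak; after rule aggregation the set is usually asymmetric and the centroid shifts accordingly.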
Exploring the behavioral intention to use e-government services: Validating t... (Mark Anthony Camilleri)
This study explores online users’ behavioral intention to use governments’ websites and their electronic services. The research methodology validates the measurement items from the unified theory of acceptance and use of technology (UTAUT) to better understand the participants’ attitudes toward performance expectancy, effort expectancy, social norms, facilitating conditions, and behavioral intention to use electronic government (e-gov) services. The findings from the structural equation modeling approach reported a satisfactory fit for this study’s research model. The results suggest that there were highly significant, direct effects from the UTAUT constructs, where utilitarian motives predicted the online users’ behavioral intentions to use e-gov. Moreover, there were significant moderating influences from the demographic variables, including age, gender, and experience, that affected the individuals’ usage of the governments’ online services. In conclusion, this contribution identifies its limitations and suggests possible research avenues to academia.
PRE-PROCESSING TECHNIQUES FOR FACIAL EMOTION RECOGNITION SYSTEM (IAEME Publication)
Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions. Recognizing human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, the automatic recognition of facial expressions using image-template matching suffers from the natural variability of facial features and recording conditions. In spite of the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature extraction and classification technique for emotion recognition is still an open problem. Image pre-processing and normalization are a significant part of face recognition systems; changes in lighting conditions produce a dramatic decrease in recognition performance. In this paper, pre-processing techniques based on K-Nearest Neighbor, a Cultural Algorithm, and a Genetic Algorithm are used to remove noise from the facial image and enhance emotion recognition. The performance of the pre-processing techniques is evaluated with various performance metrics.
Tuition discounts for students are usually granted at the same nominal amount for everyone. In this paper, the tuition discount for less well-off students is instead differentiated, depending on the parents' income and the number of dependent children. Parents with an income of IDR 1,500,000 qualify for a tuition discount, and the number of children in the family also determines the size of the discount obtained. A Tsukamoto fuzzy system is the model used in this paper: each input variable is divided into three membership functions, and nine Tsukamoto fuzzy rules are applied. The system also allows the consequent parameters to be changed if the current parameter values need updating. The smaller the parents' income, and the more children covered, the larger the discount obtained.
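Tsukamoto inference as described, with monotone consequents and an output equal to the firing-strength-weighted average of each rule's crisp consequent, can be sketched with two illustrative rules. The membership ranges and discount bounds below are assumptions for illustration, not the paper's nine-rule parameterization.

```python
def ramp_down(x, a, b):
    """Membership 1 at x <= a, 0 at x >= b, linear in between."""
    return max(0.0, min(1.0, (b - x) / (b - a)))

def ramp_up(x, a, b):
    """Membership 0 at x <= a, 1 at x >= b, linear in between."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

def tsukamoto_discount(income_m, children):
    """income_m: parental income in IDR millions; returns a discount in %.
    Two illustrative rules stand in for the paper's nine."""
    low = ramp_down(income_m, 1.5, 5.0)    # 'low income' membership
    many = ramp_up(children, 1, 5)         # 'many children' membership
    # Rule 1: IF income is low AND children are many THEN discount is LARGE.
    # Rule 2: IF income is high AND children are few THEN discount is SMALL.
    a1 = min(low, many)
    a2 = min(1.0 - low, 1.0 - many)
    # Tsukamoto consequents are monotone, so each firing strength maps back
    # to a unique crisp discount via the inverse membership function.
    z1 = 10.0 + a1 * (80.0 - 10.0)   # inverse of rising 'large discount' MF
    z2 = 80.0 - a2 * (80.0 - 10.0)   # inverse of falling 'small discount' MF
    den = a1 + a2
    return (a1 * z1 + a2 * z2) / den if den else 45.0

# Low income, many children -> large discount; high income, few -> small.
big = tsukamoto_discount(1.0, 5)   # 80.0
small = tsukamoto_discount(6.0, 1) # 10.0
```

The weighted-average step is what distinguishes Tsukamoto from Mamdani inference: no aggregation and defuzzification of output sets is needed, since each rule already yields a crisp value.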
This document provides an overview of data mining techniques designed for imbalanced datasets. It discusses how imbalanced datasets, where one class is greatly underrepresented compared to others, pose challenges for machine learning algorithms. Several approaches have been proposed to address this issue, including sampling methods like oversampling the minority class and undersampling the majority class, as well as cost-sensitive methods that assign higher misclassification costs. The document reviews common sampling strategies used for imbalanced datasets, such as SMOTE, and cost-sensitive approaches involving cost matrices and cost curves. Overall, it examines how various sampling and cost-based methods can help improve classification of imbalanced datasets in fields such as medical diagnosis and text classification.
This document discusses research methodology and processing of data. It covers editing, coding, classification, and tabulation as important steps in processing data collected during research. Editing involves correcting errors and omissions in the data. Coding assigns standardized codes to responses for efficient analysis. Classification groups the data based on common characteristics. Tabulation arranges the classified data in an organized table for analysis. The document also defines hypothesis and discusses types of hypotheses, characteristics of a good hypothesis, and the procedure for testing hypotheses using statistical techniques. Finally, it defines interpretation as drawing inferences from analyzed data and discusses techniques for proper interpretation.
FORECASTING MACROECONOMICAL INDICES WITH MACHINE LEARNING: IMPARTIAL ANALYSIS... (ijscai)
The importance of economic freedom has often been stressed by supporters of liberalism, but can its actual effect be observed in a data-driven, objective way? To analyze this relation, the Economic Freedom of the World (EFW) index and the Human Development Index (HDI) were examined with modern machine learning algorithms and a wide-ranging approach. Considering the EFW index’s preference for a liberalism-oriented economic policy, an objective recommendation for creating an economic policy that improves people’s everyday lives might be derived from the analysis results. It was found that these more advanced algorithms achieve a considerably stronger correlation between both indices than pure statistical means, yet leave some room for interpretation toward a counter-liberalistic implementation of demand-driven economic policy.
PARTICIPATION ANTICIPATING IN ELECTIONS USING DATA MINING METHODS (IJCI JOURNAL)
Anticipating the political behavior of people would be of considerable help to election candidates in
assessing their chances of success and understanding the public's motivations for selecting them. In this
paper, we provide a general schematic of the architecture of a participation-anticipation system for
presidential elections using KNN, classification trees, and Naïve Bayes, implemented with the Orange
toolkit following the CRISP methodology, which produced promising output. To test and assess the
proposed model, we conducted a case study of 100 qualified persons who attended the 11th presidential
election of the Islamic Republic of Iran, anticipating their participation in Kohkiloye & Boyerahmad.
We show that KNN performs the anticipation and classification with high accuracy compared with the
two other algorithms.
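The KNN step at the heart of the proposed system is straightforward to sketch: classify a voter by majority vote among the k most similar training examples. The feature encoding below (two hypothetical normalized voter attributes) is an assumption for illustration only, not the paper's feature set.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples
    (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)   # distance to every sample
    votes = y_train[np.argsort(d)[:k]]        # labels of the k nearest
    return int(np.bincount(votes).argmax())   # majority label

# Hypothetical voter features (e.g. normalized age, political interest);
# label 1 = will participate, 0 = will not.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.7, 0.7],
              [0.1, 0.2], [0.2, 0.1], [0.15, 0.3]])
y = np.array([1, 1, 1, 0, 0, 0])

pred = knn_predict(X, y, np.array([0.85, 0.75]), k=3)   # -> 1
```

With features normalized to a common scale, as here, no per-feature weighting is needed; otherwise distances are dominated by the widest-ranged attribute.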
IRJET - Analyzing Voting Results using Influence Matrix (IRJET Journal)
This document discusses analyzing voting results using an influence matrix. It proposes modeling voting outcomes as the result of an opinion dynamics process, where opinions evolve according to social influence, and formulates the estimation of the maximum a posteriori opinions and influence matrix from voting data. It demonstrates vote prediction and dynamic result visualization based on the estimated influence matrix. Future work could explore modeling stubborn agents' topic-dependent beliefs rather than treating topics independently.
This document provides an overview of a survey of multi-objective evolutionary algorithms for data mining tasks. It discusses key concepts in multi-objective optimization and evolutionary algorithms. It also reviews common data mining tasks like feature selection, classification, clustering, and association rule mining that are often formulated as multi-objective problems and solved using multi-objective evolutionary algorithms. The survey focuses on reviewing applications of multi-objective evolutionary algorithms for feature selection and classification in part 1, and applications for clustering, association rule mining and other tasks in part 2.
Consideration the effect of e banking on bank profitability; case study selec... (Alexander Decker)
This document summarizes a study that examines the effect of e-banking (electronic banking) on bank profitability in selected Asian countries between 1990 and 2010. The study uses panel data and panel cointegration tests to analyze the long-run relationship between e-banking and two measures of profitability: return on assets (ROA) and return on equity (ROE). The results show that e-banking starts contributing to banks' ROE after three years, but a negative impact is observed after one year.
NATIONAL CULTURAL DIMENSIONS AND ELECTRONIC CLINICAL RECORDS ACCEPTANCE: AN E... (IJMIT JOURNAL)
The purpose of the present paper is to describe the development of a measurement scale to assess the impact of national cultural factors on the acceptance of electronic clinical records at the Ibn Sina Hospital Center in Morocco. The methodology adopted is based on the Churchill paradigm (1979). Our contribution focuses on the exploratory phase, in which the items were analyzed by the principal components method and an internal consistency method using Cronbach’s Alpha (α). The results show a satisfactory factorial structure and excellent reliability of all the items, except for a few that were eliminated.
Modelling the expected loss of bodily injury claims using gradient boosting (Gregg Barrett)
This document summarizes an effort to model the expected loss of bodily injury claims using gradient boosting. Frequency and severity models are built separately and then combined to estimate expected loss. Gradient boosting is chosen as the modeling approach due to its flexibility. Tuning parameters like shrinkage, number of trees, and depth must be selected. The goal is predictive accuracy over interpretability. Performance is evaluated on a test set not used for model selection.
Efficient Facial Expression and Face Recognition using Ranking Method (IJERA Editor)
Expression detection is useful as a non-invasive method of lie detection and behaviour prediction; however, facial expressions may be difficult to detect for the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity, and with the human face used as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using standard databases covering surprise, sadness, and happiness. The universally accepted principal emotions to be recognized are surprise, sadness, and happiness, along with neutral.
Integrated bio-search approaches with multi-objective algorithms for optimiza... (TELKOMNIKA JOURNAL)
Optimal selection of features is difficult and crucial to achieve, particularly for classification tasks. Traditional methods select features independently and generate collections of irrelevant features, which degrades classification accuracy. The goal of this paper is to leverage the potential of bio-inspired search algorithms, together with a wrapper, in optimizing the multi-objective algorithms ENORA and NSGA-II to generate an optimal set of features. The main steps are to combine ENORA and NSGA-II with suitable bio-search algorithms in which multiple subset generation has been implemented, and then to validate the optimal feature set by conducting a subset evaluation. Eight comparison datasets of various sizes were deliberately selected for testing. The results show that combining the multi-objective algorithms ENORA and NSGA-II with the selected bio-inspired search algorithms is promising for achieving a better optimal solution (i.e., the best features with higher classification accuracy) for the selected datasets. This finding implies that bio-inspired wrapper/filter algorithms can boost the efficiency of ENORA and NSGA-II for feature selection and classification.
Importance of the neutral category in fuzzy clustering of sentiments (ijfls)
This document summarizes a research paper on the importance of including a neutral category in sentiment analysis using fuzzy clustering. It discusses related work on sentiment analysis and clustering algorithms. It then presents a model that uses fuzzy c-means clustering to analyze tweets about the International Criminal Court in Kenya. The model represents sentiments in a 3D plot to capture varying levels of positivity, negativity, and neutrality, rather than just positive or negative. The goal is to gain more information from sentiments than a simple two-category classification.
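The fuzzy c-means step can be sketched directly. Below, 1-D sentiment scores (a simplification of the paper's 3-D representation) are clustered into three overlapping groups so that each tweet receives graded memberships in negative, neutral, and positive; the score distribution and fuzzifier m = 2 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fuzzy_cmeans(X, c=3, m=2.0, iters=100):
    """Minimal fuzzy c-means: returns cluster centres and the membership
    matrix U, where each row sums to 1 across the c clusters."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)             # random memberships
    for _ in range(iters):
        W = U ** m                                     # fuzzified weights
        centres = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None] - centres[None], axis=-1) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))             # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Hypothetical 1-D sentiment scores in [-1, 1]: three groups near the
# negative, neutral, and positive regions.
scores = np.concatenate([rng.normal(-0.8, 0.1, 30),
                         rng.normal(0.0, 0.1, 30),
                         rng.normal(0.8, 0.1, 30)])[:, None]

centres, U = fuzzy_cmeans(scores, c=3)
```

A borderline tweet with a score near 0.3 would receive substantial membership in both the neutral and positive clusters, which is exactly the extra information a hard two-category classifier discards.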
Intelligent analysis of the effect of internet (IJCI JOURNAL)
This paper analyzes the effect of information technology on professionals, academics, and students with
respect to their relationships, education, work, health, entertainment, and electronic business, and the
changes these can bring to society. Technology can have both positive and negative consequences for
people of different walks of life at different times. The need is to understand the true impact of the
internet on society so that people can start thinking about and building a healthy society. In this paper,
an empirical study of 60 persons is considered; a causal loop model relating the parameters is formed on
the basis of the data collected. These parameters are used to form a fuzzy dynamic model to analyze the
effect of the internet on society. The model is analyzed and suitable solutions are proposed to counter
the negative effects of the internet on our society.
Assessment of the main features of the model of dissemination of information ... (IJECEIAES)
Social networks provide a fairly wide range of data that allows one way or another to evaluate the effect of the dissemination of information. This article presents the results of a study that describes methods for determining the key parameters of the model needed to analyze and predict the dissemination of information in social networks. An approach based on the analysis of statistical data on user behavior in social networks is proposed. The process of evaluating the main features of the model is described, including the mathematical methods used for data analysis and information dissemination modeling. The study aims to understand the processes of information dissemination in social networks and develop recommendations for the effective use of social networks as a communication and brand promotion tool, as well as to consider the analytical properties of the classical susceptible-infected-removed (SIR) model and evaluate its applicability to the problem of information dissemination. The results of the study can be used to create algorithms and techniques that will effectively manage the process of information dissemination in social networks.
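The classical SIR model whose applicability the study evaluates can be sketched as a discrete-time simulation: S users have not yet seen the message, I users are actively sharing it, and R users have stopped. The population size and the rates beta and gamma below are illustrative assumptions, not values fitted to social-network data.

```python
def sir(n, i0, beta, gamma, steps):
    """Discrete-time SIR simulation; returns the (S, I, R) trajectory."""
    s, i, r = float(n - i0), float(i0), 0.0
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i / n   # shares reaching susceptible users
        new_rem = gamma * i          # sharers losing interest
        s, i, r = s - new_inf, i + new_inf - new_rem, r + new_rem
        history.append((s, i, r))
    return history

# Illustrative run: 10,000 users, 10 initial sharers, beta/gamma = 4,
# so the message spreads widely before dying out.
h = sir(n=10_000, i0=10, beta=0.4, gamma=0.1, steps=200)
```

The ratio beta/gamma plays the role of the basic reproduction number: when it exceeds 1 the message cascades, and the final number of users who never saw it follows from the standard final-size relation.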
304 Part II • Predictive AnalyticsMachine LearningIntrodu.docxpriestmanmable
304 Part II • Predictive Analytics/Machine Learning
Introduction and Motivation
Analytics has been used by many businesses, organizations, and government agencies to learn from past experiences to more effectively and efficiently use their limited resources to achieve their goals and objectives. Despite all the promises of analytics, however, its multidimensional and multidisciplinary nature can sometimes disserve its proper, full-fledged application. This is particularly true for the use of predictive analytics in several social science disciplines because these domains are traditionally dominated by descriptive analytics (causal-explanatory statistical modeling) and might not have easy access to the set of skills required to build predictive analytics models. A review of the extant literature shows that drug court is one such area. While many researchers have studied this social phenomenon, its characteristics, its requirements, and its outcomes from a descriptive analytics perspective, there currently is a dearth of predictive analytics models that can accurately and appropriately predict who would (or would not) graduate from intervention and treatment programs. To fill this gap and to help authorities better manage the resources, and to improve the outcomes, this study sought to develop and compare several predictive analytics models (both single models and ensembles) to identify who would graduate from these treatment programs.
Ten years after President Richard Nixon first declared a “war on drugs,” President Ronald Reagan signed an executive order leading to stricter drug enforcement, stating, “We’re taking down the surrender flag that has flown over so many drug efforts; we are running up a battle flag.” The reinforcement of the war on drugs resulted in an unprecedented 10-fold surge in the number of citizens incarcerated for drug offences during the following two decades. The skyrocketing number of drug cases inundated court dockets, overloaded the criminal justice system, and overcrowded prisons. The abundance of drug-related caseloads, aggravated by a longer processing time than that for most other felonies, imposed tremendous costs on state and federal departments of justice. In response to the increased demand, court systems started to look for innovative ways to accelerate the inquest of drug-related cases. Perhaps analytics-driven decision support systems are the solution to the problem. To support this claim, the current study’s goal was to build and compare several predictive models that use a large sample of data from drug courts across different locations to predict who is more likely to complete the treatment successfully. The researchers believed that this endeavor might reduce the costs to the criminal justice system and local communities.
Methodology
The methodology used in this research effort included a multi-step process that employed predictive analytics.
PRE-PROCESSING TECHNIQUES FOR FACIAL EMOTION RECOGNITION SYSTEMIAEME Publication
Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions. Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, the automatic recognition of facial expressions using image template matching techniques suffers from natural variability in facial features and recording conditions. In spite of the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature extraction and classification technique for emotion recognition is still an open problem. Image pre-processing and normalization are a significant part of face recognition systems, since changes in lighting conditions produce a dramatic decrease in recognition performance. In this paper, pre-processing techniques based on K-Nearest Neighbor, the Cultural Algorithm and the Genetic Algorithm are used to remove noise from the facial image to enhance emotion recognition. The performance of the pre-processing techniques is evaluated with various performance metrics.
Tuition discounts for students are usually given at the same nominal amount for everyone. In this paper, the tuition discount for less well-off students is instead determined individually, depending on how much the parents earn and on the number of children they support. Parents with an income around IDR 1,500,000 receive a discounted tuition fee, and the number of children in the family also determines the size of the discount obtained. The Tsukamoto fuzzy system is the model used in this paper: each input variable is divided into three membership functions, and nine Tsukamoto fuzzy rules have been applied. The system also provides for changing the consequent parameters if the current parameter values need to be updated. The smaller the parents' income, the greater the discount obtained; likewise, the more children supported, the greater the discount acquired.
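For readers unfamiliar with Tsukamoto inference, a minimal sketch follows: each rule's consequent is a monotonic membership function that is inverted at the rule's firing strength, and the crisp output is the strength-weighted average of those inverted values. The membership functions and the two rules below are invented for illustration and are not the paper's actual nine-rule base:

```python
def tri_low(x, a, b):
    """Decreasing membership: 1 at or below a, 0 at or above b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def tri_high(x, a, b):
    """Increasing membership: 0 at or below a, 1 at or above b."""
    return 1.0 - tri_low(x, a, b)

def tsukamoto_discount(income, children):
    """Crisp discount (%) via Tsukamoto inference; units and breakpoints are made up."""
    rules = [
        # "low income AND many children -> big discount"
        # consequent mu(z) = (z - 20) / 60 on [20, 80], inverted: z = 20 + 60 * w
        (min(tri_low(income, 1.0, 3.0), tri_high(children, 1, 4)), lambda w: 20 + 60 * w),
        # "high income AND few children -> small discount"
        # consequent mu(z) = (80 - z) / 60 on [20, 80], inverted: z = 80 - 60 * w
        (min(tri_high(income, 1.0, 3.0), tri_low(children, 1, 4)), lambda w: 80 - 60 * w),
    ]
    num = sum(w * z(w) for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

A low-income family with many children fires the first rule fully and receives the largest discount, matching the monotonic behaviour the abstract describes.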
This document provides an overview of data mining techniques designed for imbalanced datasets. It discusses how imbalanced datasets, where one class is greatly underrepresented compared to others, pose challenges for machine learning algorithms. Several approaches have been proposed to address this issue, including sampling methods like oversampling the minority class and undersampling the majority class, as well as cost-sensitive methods that assign higher misclassification costs. The document reviews common sampling strategies used for imbalanced datasets, such as SMOTE, and cost-sensitive approaches involving cost matrices and cost curves. Overall, it examines how various sampling and cost-based methods can help improve classification of imbalanced datasets in fields such as medical diagnosis and text classification.
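As a concrete illustration of the SMOTE idea mentioned above, the sketch below generates synthetic minority-class samples by interpolating between a minority point and one of its nearest minority neighbours; this is a simplified, dependency-free version of the algorithm, not the original implementation:

```python
import random

def smote(minority, k=2, n_synthetic=4, seed=0):
    """Generate synthetic minority samples by interpolating toward k nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        # k nearest neighbours of `base` by squared Euclidean distance (excluding itself).
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        neighbour = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, neighbour)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_samples = smote(minority)
```

Because each synthetic point lies on a segment between two real minority points, the method densifies the minority region rather than merely duplicating examples, which is what distinguishes SMOTE from plain random oversampling.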
This document discusses research methodology and processing of data. It covers editing, coding, classification, and tabulation as important steps in processing data collected during research. Editing involves correcting errors and omissions in the data. Coding assigns standardized codes to responses for efficient analysis. Classification groups the data based on common characteristics. Tabulation arranges the classified data in an organized table for analysis. The document also defines hypothesis and discusses types of hypotheses, characteristics of a good hypothesis, and the procedure for testing hypotheses using statistical techniques. Finally, it defines interpretation as drawing inferences from analyzed data and discusses techniques for proper interpretation.
FORECASTING MACROECONOMICAL INDICES WITH MACHINE LEARNING: IMPARTIAL ANALYSIS...ijscai
The importance of economic freedom has often been stressed by supporters of liberalism, but can its actual effect be observed in a data-driven, objective way? To analyze this relation, the Economic Freedom of the World (EFW) index and the Human Development Index (HDI) were examined with modern machine learning algorithms and a wide-ranging approach. Considering the EFW index’s preference for a liberalism-oriented economic policy, an objective recommendation for creating an economic policy that improves people’s everyday lives might be derived from the analysis results. It was found that these more advanced algorithms achieve a considerably stronger correlation between the two indices than pure statistical means, yet leave a small amount of room for interpretation towards a counter-liberal implementation of demand-driven economic policy.
PARTICIPATION ANTICIPATING IN ELECTIONS USING DATA MINING METHODSIJCI JOURNAL
Anticipating the political behavior of people would be of considerable help to election candidates in assessing their chances of success and in learning about the public's motivations for selecting them. In this paper, we provide a general schematic of the architecture of a participation-anticipating system for a presidential election using KNN, classification trees and Naïve Bayes, implemented with the Orange toolkit on the basis of the CRISP methodology, which produced promising output. To test and assess the proposed model, we conducted a case study, selecting 100 qualified persons who attended the 11th presidential election of the Islamic Republic of Iran and anticipating their participation in Kohkiloye & Boyerahmad. We show that KNN performs the anticipation and classification processes with high accuracy compared with the two other algorithms.
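The KNN step used above can be sketched in a few lines: classify a voter by majority vote among the k most similar voters already labeled. The feature vectors and labels below are hypothetical stand-ins for the study's survey attributes:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k training points closest to the query (Euclidean)."""
    neighbours = sorted(
        train, key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query))
    )[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical (feature_vector, participation_label) training pairs.
train = [
    ((1.0, 1.0), "participates"),
    ((1.0, 2.0), "participates"),
    ((2.0, 1.0), "participates"),
    ((5.0, 5.0), "abstains"),
    ((6.0, 5.0), "abstains"),
]
```

Because distances are computed lazily per query, this sketch needs no training phase, which is the usual trade-off of KNN: cheap to fit, costly to query.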
IRJET- Analyzing Voting Results using Influence MatrixIRJET Journal
This document discusses analyzing voting results using an influence matrix. It proposes modeling voting outcomes as the result of an opinion dynamics process in which opinions evolve according to social influence, and it formulates the estimation of the maximum a posteriori opinions and influence matrix from voting data. It demonstrates vote prediction and dynamic visualization of results based on the estimated influence matrix. Future work could explore modeling stubborn agents' topic-dependent beliefs instead of treating topics independently.
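A minimal sketch of the opinion dynamics idea, assuming a DeGroot-style update in which each agent's opinion becomes a weighted average of all opinions under a row-stochastic influence matrix; the matrix below is a made-up example, not one estimated from voting data:

```python
def evolve_opinions(W, x, steps=50):
    """Iterate x <- W x, the standard DeGroot update with row-stochastic influence matrix W."""
    for _ in range(steps):
        x = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]
    return x

# Three agents; agent 0 is fairly stubborn (large self-weight), so the
# eventual consensus is pulled toward its initial opinion.
W = [
    [0.8, 0.1, 0.1],
    [0.3, 0.4, 0.3],
    [0.1, 0.3, 0.6],
]
consensus = evolve_opinions(W, [1.0, 0.0, -1.0])
```

For a connected, aperiodic influence matrix the iteration converges to a consensus weighted by the matrix's stationary distribution, which is why a stubborn (high self-weight) agent drags the outcome toward its own view.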
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document provides an overview of a survey of multi-objective evolutionary algorithms for data mining tasks. It discusses key concepts in multi-objective optimization and evolutionary algorithms. It also reviews common data mining tasks like feature selection, classification, clustering, and association rule mining that are often formulated as multi-objective problems and solved using multi-objective evolutionary algorithms. The survey focuses on reviewing applications of multi-objective evolutionary algorithms for feature selection and classification in part 1, and applications for clustering, association rule mining and other tasks in part 2.
Consideration the effect of e banking on bank profitability; case study selec...Alexander Decker
This document summarizes a study that examines the effect of e-banking (electronic banking) on bank profitability in selected Asian countries between 1990 and 2010. The study uses panel data and panel cointegration tests to analyze the long-run relationship between e-banking and two measures of profitability: return on assets (ROA) and return on equity (ROE). The results show that e-banking starts contributing to banks' ROE after three years, but a negative impact is observed after one year.
Engineering Research Publication
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
NATIONAL CULTURAL DIMENSIONS AND ELECTRONIC CLINICAL RECORDS ACCEPTANCE: AN E...IJMIT JOURNAL
The purpose of the present paper is to describe the development of a measurement scale to assess the impact of national cultural factors on the acceptance of electronic clinical records at the Ibn Sina Hospital Center in Morocco. The methodology adopted is based on the Churchill paradigm (1979). Thus, our contribution focuses on the exploratory phase, in which the items were analyzed by the principal components method and an internal consistency method using Cronbach’s alpha (α). The results show a satisfactory factorial structure and excellent reliability of all the items, except for a few which were eliminated.
Modelling the expected loss of bodily injury claims using gradient boostingGregg Barrett
This document summarizes an effort to model the expected loss of bodily injury claims using gradient boosting. Frequency and severity models are built separately and then combined to estimate expected loss. Gradient boosting is chosen as the modeling approach due to its flexibility. Tuning parameters like shrinkage, number of trees, and depth must be selected. The goal is predictive accuracy over interpretability. Performance is evaluated on a test set not used for model selection.
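The tuning parameters mentioned above (shrinkage and number of trees) are easiest to see in a bare-bones least-squares gradient boosting sketch built from depth-1 trees; this is a didactic toy, not the modelling code from the study:

```python
def fit_stump(x, residuals):
    """Best threshold split of a 1-D feature minimizing squared error on residuals."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    return best[1], best[2], best[3]

def gradient_boost(x, y, n_trees=50, shrinkage=0.1):
    """Least-squares gradient boosting with stumps; returns fitted training values."""
    pred = [sum(y) / len(y)] * len(y)  # start from the constant mean model
    for _ in range(n_trees):
        # For squared-error loss the negative gradient is just the residual.
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        t, lv, rv = fit_stump(x, residuals)
        pred = [p + shrinkage * (lv if xi <= t else rv) for xi, p in zip(x, pred)]
    return pred

fitted = gradient_boost([1, 2, 3, 4, 5, 6], [1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
```

Lowering the shrinkage slows each tree's contribution and typically requires more trees, which is exactly the trade-off the document says must be tuned; a separate frequency and severity model of this shape could then be multiplied to give expected loss.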
Efficient Facial Expression and Face Recognition using Ranking MethodIJERA Editor
Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However, these facial expressions may be difficult to detect with the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity. With the human face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using standard databases. The three universally accepted principal emotions to be recognized are surprise, sadness and happiness, along with neutral.
Integrated bio-search approaches with multi-objective algorithms for optimiza...TELKOMNIKA JOURNAL
Optimal selection of features is very difficult and crucial to achieve, particularly for the task of classification, because traditional methods select features independently and generate collections of irrelevant features, which degrades classification accuracy. The goal of this paper is to leverage the potential of bio-inspired search algorithms, together with a wrapper, in optimizing the multi-objective algorithms ENORA and NSGA-II to generate an optimal feature set. The main steps are to idealize the combination of ENORA and NSGA-II with suitable bio-search algorithms in which multiple subset generation has been implemented, and then to validate the optimal feature set through subset evaluation. Eight comparison datasets of various sizes were deliberately selected for testing. The results show that combining the multi-objective algorithms ENORA and NSGA-II with the selected bio-inspired search algorithm is promising for achieving a better optimal solution (i.e., a feature set with higher classification accuracy) on the selected datasets. This finding implies that bio-inspired wrapper/filter search algorithms can boost the efficiency of ENORA and NSGA-II for the tasks of feature selection and classification.
Importance of the neutral category in fuzzy clustering of sentimentsijfls
This document summarizes a research paper on the importance of including a neutral category in sentiment analysis using fuzzy clustering. It discusses related work on sentiment analysis and clustering algorithms. It then presents a model that uses fuzzy c-means clustering to analyze tweets about the International Criminal Court in Kenya. The model represents sentiments in a 3D plot to capture varying levels of positivity, negativity, and neutrality, rather than just positive or negative. The goal is to gain more information from sentiments than a simple two-category classification.
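The fuzzy c-means step can be sketched for one-dimensional sentiment scores: soft memberships replace hard assignments, so a tweet can be partly positive and partly neutral. The update equations are the standard FCM rules, while the data, initial centers and cluster count below are illustrative:

```python
def fuzzy_cmeans(points, centers, m=2.0, iters=20):
    """1-D fuzzy c-means: returns final centers and the soft membership matrix."""
    u = []
    for _ in range(iters):
        # Membership of each point in each cluster (inverse-distance weighting).
        u = []
        for x in points:
            d = [abs(x - c) + 1e-9 for c in centers]  # epsilon avoids division by zero
            u.append([
                1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(len(centers)))
                for i in range(len(centers))
            ])
        # Recompute centers as membership-weighted means.
        centers = [
            sum((u[p][i] ** m) * points[p] for p in range(len(points)))
            / sum(u[p][i] ** m for p in range(len(points)))
            for i in range(len(centers))
        ]
    return centers, u

# Sentiment scores in [-1, 1]; three clusters: negative, neutral, positive.
scores = [-0.9, -0.85, -0.8, -0.05, 0.0, 0.05, 0.8, 0.85, 0.9]
centers, memberships = fuzzy_cmeans(scores, [-1.0, 0.1, 1.0])
```

The middle center settles near zero, which is the point of the paper: without a neutral cluster those mid-range scores would be forced into the positive or negative group.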
Accident Analysis Models And Methods Guidance For Safety ProfessionalsLeslie Schulte
This document provides an overview of accident analysis models and methods. It discusses three categories of analysis techniques: sequential, epidemiological, and systemic. Sequential techniques view accidents as resulting from time-ordered causal events, but cannot adequately account for organizational and human factors. Epidemiological techniques were developed to consider organizational influences, and view accidents as stemming from latent failures within a system. More recently, systemic techniques have emerged that treat socio-technical systems holistically and focus on the interactions between system components. The document aims to inform readers on different analysis approaches and factors that influence model selection.
Developing of decision support system for budget allocation of an r&d organiz...eSAT Publishing House
1) The document describes developing a decision support system for budget allocation of an R&D organization using a performance-based goal programming model.
2) It analyzes nine years of budget data from the organization and finds a wide gap between allocated funds and funds utilized.
3) The proposed model assesses R&D programs based on priority and risk factors using fuzzy set theory, and aims to allocate budgets in a more realistic and accurate manner than the previous approach.
IRJET- Deep Neural Network based Mechanism to Compute Depression in Socia...IRJET Journal
The document describes a proposed system to analyze social media posts using deep neural networks to detect signs of depression. It involves collecting social media posts from users over a period of 90 days. A 3-layer deep neural network would analyze the posts to identify emotions and habits. Regression analysis of the neural network outputs over time would determine a "depression quotient" score for each user, indicating their risk of depression. The system aims to provide automated advice and prognosis to help depressed users.
A Clustering Method for Weak Signals to Support Anticipative IntelligenceCSCJournals
This document proposes a clustering method to analyze weak signals, which are short texts that may indicate future trends when analyzed together. The method involves preprocessing weak signals by removing stop words, stemming words, and identifying synonyms. It then clusters the weak signals using the K-medoids algorithm based on the number of similar words between signals, including identical words, stemmed words, and synonyms. The method was tested on a database of weak signals related to bioenergy. The clustering is intended to group similar weak signals to help form hypotheses about potential future changes or opportunities.
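A dependency-free sketch of the described clustering, using word overlap as the similarity and K-medoids for grouping; the deterministic initialization and the example texts are simplifying assumptions (the paper also stems words and resolves synonyms before counting):

```python
def shared_words(a, b):
    """Similarity between two short texts: number of common (lower-cased) words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def k_medoids(signals, k=2, iters=10):
    """Plain K-medoids over texts, maximizing word overlap with the medoid."""
    medoids = signals[:k]  # deterministic initialization for the sketch
    clusters = {}
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for s in signals:
            best = max(medoids, key=lambda m: shared_words(s, m))
            clusters[best].append(s)
        # New medoid: the member with the highest total similarity to its cluster.
        new_medoids = [
            max(members, key=lambda c: sum(shared_words(c, o) for o in members))
            for members in clusters.values() if members
        ]
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return clusters

clusters = k_medoids([
    "biofuel crops yield",
    "biofuel yield rising",
    "solar panel cost",
    "solar cost falling",
])
```

Grouping by shared vocabulary is what lets several individually weak signals about the same topic reinforce one another into a hypothesis.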
Interpreting available data is a focal issue in data mining. Gathering of primary data is a difficult and expensive affair for assessing the trends for any business decision, especially when multiple players are present. There is no uniform formula-type work procedure to deduce information from a vast data set, especially if the data formats in the secondary sources are not uniform and need enormous cleansing to mend the data for statistical analysis. In this paper, an incremental approach to cleanse data using a simple yet extended procedure is presented, and it is shown how to deduce conclusions to facilitate business decisions. Freely available Indian Telecom Industry data over a year is used to illustrate this process. It is shown how to conclude the superiority of one telecom service provider over the others, comparing different parameters like network availability, customer service quality etc. using a relative parameter quantification technique. It is found that this method is computationally less costly than the other known methods.
DALL-E 2 - OpenAI imagery automation first developed by Vishal Coodye in 2021...MITAILibrary
The document provides a review of machine learning interpretability methods. It begins with an introduction to explainable artificial intelligence and a discussion of key concepts like interpretability and explainability. It then presents a taxonomy of interpretability methods that are divided into four main categories: methods for explaining black-box models, creating white-box models, promoting fairness, and analyzing model sensitivity. Specific machine learning interpretability techniques are summarized within each category.
Predicting user behavior using data profiling and hidden Markov modelIJECEIAES
Mental health disorders affect many aspects of patients’ lives, including emotions, cognition, and especially behaviors. E-health technology helps to collect a wealth of information in a non-invasive manner, which represents a promising opportunity to construct health behavior markers. Combining such user behavior data can provide a more comprehensive and contextual view than questionnaire data. With behavioral data, we can train machine learning models to understand the data pattern and use prediction algorithms to anticipate the next state of a person’s behavior. The remaining challenges are how to apply mathematical formulations to textual datasets and how to find metadata that helps identify a person’s life pattern and predict the next state of their comportment. The main idea of this work is to use a hidden Markov model (HMM) to predict user behavior from social media applications. To achieve this goal, we analyze and detect the states and symbols from the user behavior dataset, convert the textual data to numerical matrices, and finally apply the HMM to predict the hidden user behavior states. In testing, the log-likelihood was higher when the model fit the data well, and the results of the study indicated that the program was suitable for the purpose and yielded valuable data.
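The HMM decoding step described above can be sketched with the Viterbi algorithm, which recovers the most likely hidden-state sequence from observed symbols; the states, observations and probabilities below are invented stand-ins for the paper's behavior states and symbols:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence (Viterbi)."""
    # v[t][s]: probability of the best path that ends in state s at time t.
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        v.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: v[t - 1][p] * trans_p[p][s])
            v[t][s] = v[t - 1][prev] * trans_p[prev][s] * emit_p[s][obs[t]]
            back[t][s] = prev
    # Backtrack from the best final state.
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ["active", "withdrawn"]
path = viterbi(
    ["posts", "posts", "silent", "silent"],
    states,
    start_p={"active": 0.6, "withdrawn": 0.4},
    trans_p={"active": {"active": 0.7, "withdrawn": 0.3},
             "withdrawn": {"active": 0.3, "withdrawn": 0.7}},
    emit_p={"active": {"posts": 0.8, "silent": 0.2},
            "withdrawn": {"posts": 0.1, "silent": 0.9}},
)
```

Two days of posting followed by two days of silence decodes to an active-then-withdrawn state sequence, which is the kind of hidden behavioral shift the paper aims to detect.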
Running head USING IT TO MODEL BEHAVIOR FOR POLICY MAKING .docxjenkinsmandie
Running head: USING IT TO MODEL BEHAVIOR FOR POLICY MAKING
Using IT to Model Behaviour for Policy Making
Naga Devika Cheekati
University of The Cumberlands
Annotated Bibliography
Li, W., & Zhang, X. (2014). Simulation of the smart grid communications: Challenges, techniques, and future trends. Computers & Electrical Engineering, 40(1), 270-288.
Li and Zhang (2014) investigate how technology can be used in a simulation that would aid in determining how effectively smart grid technology can be implemented. The successful implementation of smart grid technology requires the combination of several different frameworks that rely on information communication technology to aid in the regulation of power created and supplied. A simulation of possible communication networks that can be used is made in the study as a way of testing the viability of a smart grid system and its application in reality. The paper successfully identifies various simulation frameworks that can be used to successfully gauge how the system can be created. The findings show that information technology can play an integral role in creating simulations that can support policymaking.
Sarabando, C., Cravino, J. P., & Soares, A. A. (2014). Contribution of a computer simulation to students' learning of the physics concepts of weight and mass. Procedia Technology, 13, 112-121.
Sarabando, Cravino, and Soares (2014) investigate the use of computer simulation to analyse how students learn key concepts of physics. Software is used to analyse common learning processes used in teaching physics. Students in the sample population were asked to carry out learning activities ordinarily on the traditional learning environment. The results were then compared to learning activities that were carried out using computer software. The findings showed that the use of computer simulation in learning improved the retention rate, while the language used by teachers also impacted the speed of learning. The findings of the study can be used in the formulation of learning policies, which shows that IT simulation can be successfully used in the formulation of public policy.
Mensah, P., Merkuryev, Y., & Longo, .. F. (2015). Using ICT in Developing a Resilient Supply Chain Strategy. Procedia Computer Science, 43, 101-108.
Mensah, Merkuryev, and Longo (2015) analyse how simulation can be used to improve supply chain performance. According to the study, many different factors impact on the performance of a supply chain, some of which are not taken into consideration when designing supply chain activities. Through simulation aided by information technology, all key factors that influence the performance of the supply chain can be analysed in-depth and included in simulation models. The models are then used to analyse how a supply chain will perform under different c.
PhD Presentation on 23rd April 2020 Before Viva Voce Bord Dr. Heera Lal IAS
The document discusses a study analyzing the role of ICT in achieving good governance in India, specifically related to the Mid-Day Meal Scheme in Uttar Pradesh. It outlines the study's objectives to analyze stakeholder perceptions of ICT and good governance dimensions, identify importance of dimensions, and analyze the impact of ICT interventions like IVRS. The conceptual framework shows ICT as the independent variable and good governance dimensions like participation, transparency, etc. as dependent variables. Hypotheses are presented and tested regarding differences in perceptions across stakeholders and the impact of ICT on good governance.
This document presents a system for monitoring suicidal behavior that uses data mining algorithms to predict individuals who may be at higher risk of suicide. The system collects information from users about factors like depression, stress, and social activity levels. It then applies the ID3 and Apriori algorithms to the data to generate personalized solutions and predictions about a user's risk. The system is intended to help educate people about suicide prevention by providing advice tailored to their individual characteristics and circumstances. It aims to reduce healthcare costs by providing this information directly to users without needing to visit a doctor. The end goal is to create a simple mobile and web application that can help prevent suicide through early detection of at-risk individuals.
Human Resource Recruitment using Fuzzy Decision Making MethodIJSRD
Traditional way of recruiting personnel was through a group decision making problem under multiple criteria containing subjectivity, imprecision and vagueness. In order to keep up with increasing competition of globalization and fast technological improvements and changes, Human resource field needs the best fit employee for a job vacancy. This paper proposes a Fuzzy Decision Making (FDM) method for recruitment process of Human Resource management, which is based on the Fuzzy set theory. Different criteria of human resource recruitment process will be given fuzzy linguistic terms such as absolutely low, extremely low, very low, low, medium, high, very high, extremely high, absolutely high depends upon the importance given by the Human Resource manager. Fuzzy Scaled Weight (FSW) will be calculated for all the criteria. Cumulative Fuzzy Value (CFV) is calculated for all the applicants from Fuzzy Scaled Weight values. Interview Fuzzy Value (IFV) is given by the human resource manager for all the applicants attending interviews. Final Fuzzy Value (FFV) will be calculated aggregating both Cumulative Fuzzy Value and Interview Fuzzy Value. Finally, all the applicant records are arranged in the order of FFV values. Then, Human resource manager can select the applicants easily from the arranged list. Fuzzy Decision Making method system is developed to illustrate the above FDM method for the Human Resource recruitment process.
This document discusses a study that uses a mixed logit model to predict firm financial distress. Mixed logit is an advanced discrete choice modeling technique that relaxes assumptions of standard logit models. It allows for observed and unobserved heterogeneity across firms. The study aims to demonstrate the empirical usefulness of mixed logit in financial distress prediction by comparing its performance to standard logit models. Results and out-of-sample forecasts show mixed logit outperforms standard logit models by significant margins in predicting firm financial distress.
Statistics is the systematic collection, organization, analysis, and interpretation of data. It plays an important role in decision making by helping extract meaningful information from raw data. There are two main types of statistics - descriptive statistics which summarizes and presents data, and inferential statistics which makes inferences, tests hypotheses, and determines relationships in the data. Statistics has many applications in fields like business, medicine, economics and more. It helps simplify complex data, enable comparisons, identify trends, and aid decision making. Common statistical terms include population, sample, variables, attributes, and parameters. Data can be collected through various methods including direct observation, interviews, questionnaires, and more.
Similar to PSO OPTIMIZED INTERVAL TYPE-2 FUZZY DESIGN FOR ELECTIONS RESULTS PREDICTION (20)
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Applications of artificial Intelligence in Mechanical Engineering.pdf
PSO OPTIMIZED INTERVAL TYPE-2 FUZZY DESIGN FOR ELECTIONS RESULTS PREDICTION

International Journal of Fuzzy Logic Systems (IJFLS) Vol.9, No.1, January 2019
DOI: 10.5121/ijfls.2019.9101
Uduak Umoh1, Samuel Udoh1, Etebong Isong2, Regina Asuquo1, and Emmanuel Nyoho1

1 Department of Computer Science, University of Uyo, PMB 1017, Akwa Ibom State, Nigeria
2 Department of Computer Science, Akwa Ibom State University, Mkpatenin, Nigeria
ABSTRACT
Interval type-2 fuzzy logic systems (IT2FLSs) have recently shown great potential in various applications with dynamic uncertainties. The additional degree of uncertainty provided by IT2FL allows for a better representation of the uncertainty and vagueness present in prediction models. However, determining the parameters of the membership functions of an IT2FL is important for achieving optimum system performance. Particle Swarm Optimization (PSO) has attracted the interest of researchers due to its simplicity, effectiveness, and efficiency in solving real-world optimization problems. In this paper, a novel optimal IT2FLS is designed and applied to predicting winning chances in elections. PSO is used as the optimization algorithm to tune the parameters of the primary membership functions of the IT2FL, improving performance and increasing the accuracy of the IT2F set. Simulation results show the superiority of the PSO-IT2FL over a similar non-optimal IT2FL system, with an increase in prediction accuracy.
KEYWORDS
Type-2 Fuzzy logic system, Particle swarm optimization, optimum fuzzy membership function, politics,
democracy, election prediction accuracy
1. INTRODUCTION
In a technical sense, an election is a process by which an office is assigned to a person by an act of voting that involves the simultaneous expression of opinion by many people. Elections have been the usual mechanism by which modern representative democracy has operated since the 17th century. An election serves as a process of changing the government of a country through peaceful means; the process and its properties may vary across sectors and countries. Election prediction, on the other hand, aims at forecasting the outcome of elections. The major challenge in election prediction is the task of producing an accurate result. The goal of predicting election outcomes requires forecasters to pay attention to the political and economic indicators that vary across a series of electoral contests, such as macroeconomic conditions and popularity ratings. There are three leading approaches to election prediction: opinion polls, prediction markets, and models [1].
Opinion polls are usually designed to represent the opinions of a population by conducting a
series of questions and then extrapolating generalities in ratio or within confidence intervals.
Prediction markets are betting markets where people buy and sell candidate futures based on who
they think will win the election. Models use non-polling aggregate data to predict election
outcomes with the use of measures of government and economic performance. All three are
united in that they are driven by some methodological theory, but models represent the most
accurate and reliable method of predicting elections to date [2].
Predicting an accurate election result with respect to different candidates and parties is quite a challenging task, since the system is linguistic, vague, and dynamic in nature. Predicting more accurately also requires more information and more parameters, which have been lacking. A variety of predictive models, such as statistical models, regression analysis, and prediction from past experience, have been proposed for election prediction and have given inaccurate results. Due to changing times and trends, statistical methods are unable to predict elections realistically, owing to their inability to represent the fuzzy information that characterizes election processes. Traditional rigorous mathematical approaches are inappropriate for modeling this kind of humanistic system because of the imprecision of its linguistic expressions. In general, economic conditions and public opinion surveys are used to build the models for predicting the election outcome. [3] [4] are some typical contributions on election forecasting from political scientists. [5], an economist, also developed models based on economic variables to predict election outcomes. In numerous cases, the existing legal framework establishes that political parties should "democratically" elect their candidates, but this concept is very vague, and there are few, if any, applicable legal provisions.
However, the election systems are inherently vague and cannot be expressed easily in precise
numbers. The vagueness is both in the values of the variables and in which variable is important
and thus should be considered. For example, public opinion surveys and economic conditions are
frequently expressed linguistically as “very important, important, not important” or “very good,
good, not good”. These linguistic expressions cannot be converted into precise numbers without
loss of some of the original meanings. The concept of information is inherently associated with
the concept of uncertainty which affects decision-making due to some deficiency. Uncertain
information may be incomplete, imprecise, fragmentary, not fully reliable, vague, contradictory,
or deficient in some other way [6] [7] [8]. Recently, optimization techniques inspired by social behaviours have evolved as an alternative to statistical and regression-analysis models for prediction, through the use of information about real or synthetic data [9]. Some of these methods are genetic algorithms [10] [11], type-1 fuzzy logic [12] [13], type-2 fuzzy logic [7], particle swarm optimization [14] [15], etc. Basically, the interest in the evolutionary approach is due to easy code building and implementation, no use of gradient information, and the capacity to escape from local optima [9] [16].
T1FL, developed by [12], is a form of many-valued logic based on fuzzy set theory. The initial idea of T1FL is to generalize the notion of membership to a continuous range [0, 1], called the membership grade, to indicate its gradual nature, where the membership grade of each point x in the universe is a single point μ(x) ∈ [0, 1]. T1FL deals with reasoning that is approximate rather than fixed and exact, and helps in modeling knowledge through the use of if-then fuzzy rules. A T1FL system is made up of fuzzifier, rule-base, fuzzy inference engine, and defuzzification units. It has been applied in many scientific and engineering fields, including social science in the area of politics [17] [18] [19] [20] [21] [22] [23] [24] [25]. However, T1FL has limited capabilities to directly and adequately handle uncertainties and imprecision, because a T1F set has a membership grade that is crisp.
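The crisp-grade limitation can be seen in a minimal sketch: a type-1 Gaussian membership function returns exactly one grade per input. The centre and width values here are illustrative only, not taken from the paper.

```python
import math

def t1_gaussian(x, c, sigma):
    """Type-1 Gaussian membership: one crisp grade per input x."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# Each input maps to a single number in [0, 1] -- there is no room
# to express uncertainty about the grade itself.
grade = t1_gaussian(7.0, c=5.0, sigma=2.0)
```

Because the grade is a single point, any uncertainty about the membership function itself cannot be represented, which motivates the type-2 extension below.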
The T2FL, an extension of T1FL [7], evolved to overcome this limitation of T1FL. In T2F sets, the membership grade μ(x) is a distribution over [0, 1] rather than a single point. However, estimating this distribution is difficult and computationally intensive, which makes T2FL hard to work with. The IT2FL, a special case of T2FL, attempts to capture the uncertainty of T2F sets using T1F sets. The IT2FL has all the benefits of the T1FL system; in addition, it has a type-reduction unit that generates a T1F set as output. The IT2FLS is again characterized by IF-THEN rules, but its antecedent or consequent sets are now type-2.
The main idea of IT2FL in politics resides in the fact that the political expert reasoning and
election data are strongly based on vague and uncertain statements so common in the social field.
IT2F sets are employed in this paper because they give the theoretical benefits of full T2F sets,
capable of handling uncertainties and imprecision in the parameters better than T1FL, while remaining practical to use. Thus, they can be used when the circumstances are too uncertain to determine exact membership grades, in order to improve result accuracy, tractability, robustness, and low-cost solutions, as in the case of election results prediction. However, in order to find the optimal intelligent prediction of election results, a bio-inspired method is applied to the optimization of the membership function parameters of the IT2FL.
Particle swarm optimization (PSO), introduced by Kennedy and Eberhart [26], is a metaheuristic approach very useful in optimization problems. PSO maintains a swarm of particles, each of which represents a possible solution. These particles "fly" through a multidimensional search space, where the position of each particle is adjusted according to its own experience and that of its neighbours [27]. In this paper, an IT2FL is used to predict election results, and this predictor is optimized by the Particle Swarm Optimization (PSO) algorithm for an optimal result. The remainder of this paper is organized as follows: Section 2 presents related literature, section 3 gives the proposed design, results and discussions are presented in section 4, section 5 presents the conclusion, and section 6 gives the references.
2. THE INTERVAL TYPE-TWO CONTROLLERS
The T1FL, developed by Professor L. A. Zadeh [12], is a form of many-valued logic that deals with reasoning that is approximate rather than fixed and exact. FL has rapidly become one of the most successful of today's technologies for developing sophisticated control systems because of its ability to handle vague statements and uncertain or imprecise information, and for its use in decision-making systems (DMS). In recent years, the T1FLS has seen increasing application in solving real-world problems, because fuzzy systems can deal with linguistic data alongside numerical data. T1FL has been applied in scientific, industrial, healthcare, and other domains for the design, development, and implementation of algorithms and models. A T1FL design can accommodate the ambiguities of real-world human language and logic, unlike traditional approaches, which require accurate equations to model real-world behaviours.
Although a T1FLS can handle the uncertainties related to imprecise data, in some cases a higher degree of precision is required. This triggered the introduction of the IT2FLS concept [7], an extension of T1FL. An IT2F set is characterized by a fuzzy membership value for each element of the set, which is a fuzzy number with an interval in [0, 1]. Such sets can deal with uncertainties about the fuzzy membership value itself. The uncertainty in the data can be represented by the footprint of uncertainty (FOU). IT2FL theory has been applied to different fields of human endeavour. The IT2FL handles uncertainty more adequately than its T1FL counterpart, with greater reliability, capability, and robustness.
The structure of an IT2FL, as shown in Figure 1, is made up of five components: the fuzzification, rule-base, inference engine, type-reduction, and defuzzification units. Fuzzification maps inputs (real values) to fuzzy values. The inference engine applies a fuzzy reasoning mechanism to obtain a fuzzy output. The knowledge base contains a set of fuzzy rules of the form R^i: IF x_1 is F_1^i and ... and x_n is F_n^i THEN y is G^i, i = 1, 2, ..., M, together with a set of membership functions known as the database. The type reducer transforms a type-2 fuzzy set into a type-1 fuzzy set, and defuzzification maps the output to precise values. An interval type-2 fuzzy set (IT2FS) Ã is characterized by a membership interval in the universe of discourse X as:

Ã = {((x, u), μ_Ã(x, u)) | ∀x ∈ X, ∀u ∈ J_x ⊆ [0, 1]}    (1)
Fig. 1: Structure of an IT2FL Model for Election Results Prediction
Ã = ∫_{x∈X} ∫_{u∈J_x} 1 / (x, u),  J_x ⊆ [0, 1]    (2)

where x, the primary variable, has domain X; u ∈ U, the secondary variable, has domain J_x at each x ∈ X; J_x is called the primary membership of x; and the secondary grades of Ã all equal 1 [28] [29]. Uncertainty about Ã is conveyed by the union of all the primary memberships, called the footprint of uncertainty (FOU) of Ã, which encompasses all the embedded primary membership functions J_x of Ã as shown in (3):

FOU(Ã) = ⋃_{x∈X} J_x = {(x, u) : u ∈ J_x ⊆ [0, 1]}    (3)

FOU(Ã) is bounded by an upper membership function (UMF) μ̄_Ã(x) and a lower membership function (LMF) μ_Ã(x), ∀x ∈ X, which are respectively the maximum and minimum of the membership functions of the embedded T1FSs in the FOU:

μ̄_Ã(x) ≡ sup FOU(Ã), ∀x ∈ X    (4)

μ_Ã(x) ≡ inf FOU(Ã), ∀x ∈ X    (5)

For an IT2FS, J_x = [μ_Ã(x), μ̄_Ã(x)], ∀x ∈ X    (6)
We employ an IT2 Gaussian membership function (GMF) with fixed mean c and uncertain standard deviation σ ∈ [σ1, σ2] to evaluate the degree of membership (DoM) of the input variables. The GMF is suitable for a highly dynamic random system such as election results prediction.
μ_Ã(x) = exp(−(x − c)² / (2σ²)),  σ ∈ [σ1, σ2]    (7)

The upper and lower membership functions are calculated using:

μ̄_Ã(x) = exp(−(x − c)² / (2σ2²)) = N(c, σ2; x)    (8)

μ_Ã(x) = exp(−(x − c)² / (2σ1²)) = N(c, σ1; x)    (9)
where c is the centre (mean), σ is the standard deviation, and x is the input vector. The parameters σ1 and σ2 are premise parameters that define the DoM of each element of the fuzzy set Ã and the FOU of the IT2FS. A detailed description is found in [30] [28]. The fuzzy rules are defined as:
as;
5. International Journal of Fuzzy Logic Systems (IJFLS) Vol.9, No.1, January 2019
5
` @ , P @ P , a[b, … , a[b _ @ _ cde[@f =
min @ , P @ P , … @ _ , max @ , P @ P , … @ _ (10)
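The interval type-2 Gaussian membership function of Eqs. (7)-(9) can be sketched as follows. This is a minimal illustration; the function name and the parameter values in the usage comment are ours, not the paper's.

```python
import math

def it2_gaussian(x, c, sigma1, sigma2):
    """IT2 Gaussian MF with fixed mean c and uncertain standard
    deviation sigma in [sigma1, sigma2] (Eqs. 7-9).
    Returns the (lower, upper) membership bounds at x."""
    lower = math.exp(-((x - c) ** 2) / (2 * sigma1 ** 2))  # LMF: N(c, sigma1; x)
    upper = math.exp(-((x - c) ** 2) / (2 * sigma2 ** 2))  # UMF: N(c, sigma2; x)
    return lower, upper

# Illustrative evaluation: with sigma1 < sigma2 the LMF never
# exceeds the UMF, and both equal 1 at the mean.
lo, up = it2_gaussian(7.0, c=5.0, sigma1=1.0, sigma2=2.0)
```

The gap between the two bounds at each x is exactly the footprint of uncertainty introduced by the uncertain standard deviation.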
The firing intervals for the lower and upper membership functions are evaluated using [31] as:

F^i(x′) = [μ_F1^i(x′_1) ∗ ... ∗ μ_Fn^i(x′_n), μ̄_F1^i(x′_1) ∗ ... ∗ μ̄_Fn^i(x′_n)] ≡ [f^i, f̄^i], i = 1, 2, ..., M    (11)

where F^i(x′) is the firing interval of the antecedent of rule i, and μ_Fj^i(x′) is the DoM of x′ in F_j^i. μ̄_Fj^i(x) and μ_Fj^i(x) are the upper and lower MFs of μ_Fj^i. Instead of the product, the minimum can be used as the operator in (11). Typically, the firing interval over the low, medium, and high membership functions of each input variable is evaluated as:
f_MF(x_n) = [(μ_boundL1 + μ_boundL2) / 2, (μ_boundR1 + μ_boundR2) / 2] = [f_MF(x_n), f̄_MF(x_n)]    (12)

where μ_boundL1 and μ_boundL2 are the left-hand-side uncertainty region boundaries, and μ_boundR1 and μ_boundR2 are the right-hand-side uncertainty region boundaries, of the membership function variables.
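The firing-interval computation of Eq. (11) can be sketched as a small helper. This is our illustration; the paper does not prescribe an implementation, and the default t-norm here is the minimum (the product works equally well, as noted above).

```python
def firing_interval(lower_grades, upper_grades, tnorm=min):
    """Firing interval of one rule (Eq. 11): combine the antecedent
    membership bounds across all inputs with a t-norm.
    lower_grades/upper_grades: per-input LMF and UMF degrees."""
    f_lower = lower_grades[0]
    f_upper = upper_grades[0]
    for lo, up in zip(lower_grades[1:], upper_grades[1:]):
        f_lower = tnorm(f_lower, lo)  # lower bound of the firing interval
        f_upper = tnorm(f_upper, up)  # upper bound of the firing interval
    return f_lower, f_upper
```

For a two-antecedent rule with lower grades (0.2, 0.5) and upper grades (0.6, 0.9), the minimum t-norm gives the interval [0.2, 0.6].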
The inference engine combines the fired rules and gives a mapping from input IT2FSs to output IT2FSs. Type-reduction then combines F^i(x′) with its consequent. In this paper, we explore the center-of-sets (COS) type-reducer. The details are as follows:

Y_cos(x) = [y_l, y_r] = ⋃_{f^i ∈ F^i(x′), y^i ∈ Y^i} (Σ_{i=1}^{M} f^i y^i) / (Σ_{i=1}^{M} f^i)    (14)

where

y_r = (Σ_{i=1}^{M} f_r^i y_r^i) / (Σ_{i=1}^{M} f_r^i)    (15)

y_l = (Σ_{i=1}^{M} f_l^i y_l^i) / (Σ_{i=1}^{M} f_l^i)    (16)

The crisp output is computed as:

y = (y_l + y_r) / 2    (17)
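The COS type-reduction of Eqs. (14)-(16) and the defuzzification of Eq. (17) can be sketched by enumerating the switch point of the Karnik-Mendel endpoints. The paper cites the COS reducer but does not list an algorithm, so this brute-force enumeration is one standard way to obtain y_l and y_r; the firing intervals and consequent centroids in the usage comment are illustrative.

```python
def cos_type_reduce(f_lower, f_upper, y):
    """Center-of-sets type reduction (Eqs. 14-16) by enumerating the
    Karnik-Mendel switch point, then defuzzification (Eq. 17).
    y: rule consequent centroids, sorted ascending (lists);
    f_lower/f_upper: per-rule firing-interval bounds (lists)."""
    M = len(y)
    candidates_l, candidates_r = [], []
    for k in range(M + 1):
        # y_l: upper firing strengths before the switch point, lower after
        wl = f_upper[:k] + f_lower[k:]
        # y_r: lower firing strengths before the switch point, upper after
        wr = f_lower[:k] + f_upper[k:]
        candidates_l.append(sum(w * yi for w, yi in zip(wl, y)) / sum(wl))
        candidates_r.append(sum(w * yi for w, yi in zip(wr, y)) / sum(wr))
    y_l, y_r = min(candidates_l), max(candidates_r)
    return (y_l + y_r) / 2.0  # Eq. 17: crisp output

# Illustrative use: two rules with centroids 1 and 3 and
# firing intervals [0.4, 0.8] each.
crisp = cos_type_reduce([0.4, 0.4], [0.8, 0.8], [1.0, 3.0])
```

Enumerating all M + 1 switch points and taking the minimum (for y_l) and maximum (for y_r) reproduces the Karnik-Mendel endpoints without the iterative procedure, at O(M²) cost, which is acceptable for small rule bases.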
3. THE PARTICLE SWARM OPTIMIZATION
Particle swarm optimization (PSO) [26] is a population-based stochastic optimization technique, inspired by the social behaviour of bird flocking, fish schooling, or human grouping, that finds the optimal solution using a population of particles. The fundamentals of PSO are the population of particles, the interconnection topologies, the search algorithm, and the evaluation rules, which cooperate in finding the optimal solution to the problem. PSO searches through an n-dimensional problem space with the aim of minimizing or maximizing the objective function of the problem. In the PSO algorithm, for example, the birds in a flock are symbolically represented as "particles", considered simple agents "flying" through a problem space. A particle's location in the multi-dimensional problem space represents one solution to the problem. When a particle moves to a new location, a different problem solution is generated. This solution is evaluated by a fitness function that provides a quantitative value of the solution's utility.
Each individual or particle of the population has both an adaptable velocity (position change), according to which it moves in the search space, and a memory, remembering the best position of the search space it has ever visited. Thus, the basic concept of PSO lies in accelerating each particle towards the best individual of a topological neighbourhood with a random weighted acceleration at each time step. PSO is an algorithm with a simple structure, simple parameter setting, and fast convergence speed. It is widely applied to solve various function optimization problems, mathematical modeling, system control, and problems that can be transformed into function optimization problems [32] [33] [34] [35]. Over the past years, PSO has demonstrated efficient and successful application in many research and application areas, delivering better results faster and more cheaply than other approaches. Additionally, because PSO adjusts only a few parameters, a slight modification of one version can be employed in a wide variety of applications. PSO has been used to solve a wide range of problems across many applications, including fuzzy controller design [25] [36].
In the PSO algorithm, each particle has a position vector (Pi) and a velocity vector (vi) in the search space, together with an inertia weight (w), a parameter employed to control the influence of previous velocities on the current velocity. First, the particles are initialized randomly; the algorithm then finds the best solution by iteration. Each particle flies through the solution space of the problem and adjusts its flying velocity to search for the global optimum according to its own and social historical experience. At every learning cycle, each particle's position and velocity are updated using the two best positions: the best solution found by the particle itself, called the personal best value (Pbest), and the best solution found by the whole swarm, called the global best value (Gbest). The particle velocity is updated by (18), the position of the particle is calculated as presented in (19), and the inertia weight is calculated in (20):
v_i(t + 1) = w·v_i(t) + c1·r1·(Pb − X_i(t)) + c2·r2·(Pg − X_i(t))    (18)

X_i(t + 1) = X_i(t) + v_i(t + 1)    (19)

where:
i - particle index
t - iteration number
v_i(t) - current (previous) particle velocity (the displacement of the particle's movement)
v_i(t + 1) - updated particle velocity
X_i(t) - current particle position within the problem domain
c1, c2 - two positive constants
r1, r2 - normalized unit random numbers in the range [0, 1]
Pb - individual best candidate solution for particle i (Pbest)
Pg - global best candidate solution (Gbest)

The inertia weight is calculated as:

w = w_max − gen·(w_max − w_min) / max_gen    (20)

where w is the inertia weight, w_max = 0.8, w_min = 0.4, and gen is the evolutionary generation number.

First, the parameter values of i, v_i(1), X_i(1), c1, c2, w, Pb, and Pg are initialized to begin the particles' movements within the problem space. The overall steps of the PSO algorithm are presented in Figure 2. The procedure described in Figure 2 is applied to find an optimal IT2FLC combined with PSO to achieve a better space of solutions.
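Eqs. (18)-(20) can be sketched as a minimal PSO minimizer. The sphere fitness in the usage comment is illustrative only, since the paper's fitness is the IT2FLS prediction error; likewise, the acceleration constants c1 = c2 = 1.5 and the search bounds are assumptions, not values stated in the paper.

```python
import random

def pso(fitness, dim, n_particles=20, max_gen=100,
        c1=1.5, c2=1.5, w_max=0.8, w_min=0.4, bounds=(-5.0, 5.0)):
    """Minimal PSO minimizer following Eqs. (18)-(20)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                     # personal best positions
    pbest_val = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for gen in range(max_gen):
        w = w_max - gen * (w_max - w_min) / max_gen          # Eq. (20)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]                       # Eq. (18)
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]                           # Eq. (19)
            val = fitness(X[i])
            if val < pbest_val[i]:                           # update Pbest
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:                          # update Gbest
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

# Illustrative use: minimize the 2-D sphere function
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the paper's setting, `fitness` would evaluate the IT2FLS with a candidate set of membership-function parameters and return the prediction error on the election data.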
Fig. 2: The PSO Algorithm
4. THE PARTICLE INTERVAL TYPE-2 FUZZY LOGIC DESIGN FOR ELECTION
RESULTS PREDICTION
The IT2FL model is designed to predict election results. The concept of an IT2FL inference system is used to calculate the percentage chance of a candidate winning the election. Fuzzy inference systems (FIS) use fuzzy sets and "if-then" rules over those fuzzy sets to make decisions about incomplete or vague information. The two most common fuzzy inference systems, Mamdani and Sugeno, are both present in MATLAB. This study employs the Mamdani type of IT2FIS to evaluate the chances of selection of a candidate. The IT2FIS executes in five major steps: fuzzification, rule-base, inference, type-reduction, and defuzzification, as discussed in section 2.
In this study, thirteen input parameters are initially collected as factors, ranked by their level of importance for a candidate to win an election. After analysis with experts and removing or grouping dependent factors, only six parameters are selected and used as input variables. The study considers popularity, strength of the political party, credibility from past performance, financial/economic power, number of years in active politics and educational achievement as input parameters, while winning chance is the output variable, representing the level of a candidate's chances of winning an election. The universe of discourse (UOD) for the input and output variables, the domain intervals of the variables used in the developed fuzzy models, and the range of each variable are defined in Table 1. These are partitioned according to the lower and upper values used in controlling the models. The fuzzy sets of the input and output variables and their associated values and labels are presented in Table 2.
Table 1: Domain Intervals of Input and Output Variables

Variable                | Lower Bound | Upper Bound
Popularity              | 0           | 10
Strength of Party       | 0           | 1
Credibility             | 0           | 100
Financial State         | 0           | 10
Years of Active Service | 0           | 10
Educational Achievement | 0           | 8
Winning Chances         | 0           | 100
Table 2: Input and Output MF Variable Fuzzy Sets

Variable                 | Fuzzy Set | Lower σ1 | Upper σ2 | Center c | Symbol
Popularity               | Low       | 1.566    | 2.14     | 0.0      | LO
Popularity               | Medium    | 0.93     | 1.36     | 5.0      | ME
Popularity               | High      | 1.347    | 1.908    | 10.0     | HI
Strength of Party        | Weak      | 0.121    | 0.17     | 0.0      | LO
Strength of Party        | Average   | 0.069    | 0.11     | 0.5      | ME
Strength of Party        | Strong    | 0.089    | 0.137    | 1.0      | HI
Credibility              | Low       | 11.56    | 16.89    | 0.0      | LO
Credibility              | Moderate  | 6.21     | 10.12    | 50.0     | ME
Credibility              | High      | 10.19    | 15.39    | 100.0    | HI
Years of Active Politics | Low       | 0.936    | 1.443    | 0.0      | LO
Years of Active Politics | Moderate  | 0.708    | 1.135    | 5.0      | ME
Years of Active Politics | High      | 1.06     | 1.607    | 10.0     | HI
Financial/Economic Power | Weak      | 1.019    | 1.539    | 0.0      | LO
Financial/Economic Power | Average   | 0.779    | 1.241    | 5.0      | ME
Financial/Economic Power | Strong    | 0.923    | 1.402    | 10.0     | HI
Educational Achievement  | Low       | 0.957    | 1.4      | 0.0      | LO
Educational Achievement  | Moderate  | 0.59     | 0.918    | 4.0      | ME
Educational Achievement  | High      | 0.848    | 1.286    | 8.0      | HI
Winning Chances          | Very Low  | 10.2     | 15.06    | 0.0      | VL
Winning Chances          | Low       | 8.49     | 12.03    | 33.2     | LO
Winning Chances          | Moderate  | 6.702    | 10.02    | 50.0     | MO
Winning Chances          | High     | 8.26     | 11.73    | 66.7     | HI
Winning Chances          | Very High | 9.36     | 13.6     | 100.0    | VH
Fuzzification: this module maps the crisp values to interval type-2 fuzzy sets (IT2FSs) using a defined Gaussian membership function method. Each input variable has three membership functions: low, moderate and high. The output variable (the level of chances of winning) has five membership functions: very low, low, moderate, high and very high. The linguistic variables of the six input memberships represented in Table 2 can be written as follows:
Input Variable (Popularity)
[μ_lower(Popularity, Low), μ_upper(Popularity, Low)], i.e. the lower and upper membership functions of input Popularity for the Low set
[μ_lower(Popularity, Moderate), μ_upper(Popularity, Moderate)], i.e. the lower and upper membership functions of input Popularity for the Moderate set
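For a Gaussian primary MF with fixed center c and an uncertain standard deviation σ in [σ1, σ2] (the form parameterized in Table 2), the lower and upper membership grades bounding the footprint of uncertainty can be computed as in this small sketch. It is illustrative only; the function name and the use of the Table 2 Popularity "Medium" parameters in the usage note are assumptions.

```python
import math

def it2_gaussian(x, c, sigma1, sigma2):
    """Interval type-2 Gaussian MF: fixed center c, uncertain standard
    deviation in [sigma1, sigma2] with sigma1 <= sigma2.  Returns the
    (lower, upper) membership grades for the crisp input x."""
    mu_low = math.exp(-0.5 * ((x - c) / sigma1) ** 2)  # narrow Gaussian: lower bound
    mu_up = math.exp(-0.5 * ((x - c) / sigma2) ** 2)   # wide Gaussian: upper bound
    return mu_low, mu_up
```

For example, with the Table 2 parameters for the "Medium" set of Popularity (c = 5.0, σ1 = 0.93, σ2 = 1.36), `it2_gaussian(6.0, 5.0, 0.93, 1.36)` returns an interval of membership grades rather than a single degree, which is what gives the IT2FS its extra degree of uncertainty.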
The inference engine combines the fired rules into output IT2F sets and then performs a center-of-sets calculation using the iterative Karnik-Mendel (KM) algorithm in (14) to produce an interval type-1 fuzzy set (the type-reduced set) determined by its two endpoints, yL and yR. Finally, we perform defuzzification by averaging the two endpoints yL and yR in (17) to obtain the crisp value (winning chance). After obtaining the IT2FLC design, we apply the PSO technique to find the parameters of the IT2FLC for each candidate.
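The KM type-reduction and defuzzification steps just described can be sketched as follows. This is a simplified illustration for rule centroids with interval firing strengths, under assumed function names, not the exact formulation of (14) and (17) in the paper.

```python
def km_endpoint(y, f_low, f_up, left=True):
    """Karnik-Mendel iteration for one endpoint (yL or yR) of the
    type-reduced set.  y: rule consequent centroids; f_low/f_up: the
    lower/upper firing strengths of each rule."""
    order = sorted(range(len(y)), key=lambda i: y[i])
    y = [y[i] for i in order]
    f_low = [f_low[i] for i in order]
    f_up = [f_up[i] for i in order]
    f = [(a + b) / 2.0 for a, b in zip(f_low, f_up)]  # start from midpoints
    while True:
        yp = sum(fi * yi for fi, yi in zip(f, y)) / sum(f)
        k = 0
        for i in range(len(y) - 1):       # switch point: y[k] <= yp <= y[k+1]
            if y[i] <= yp <= y[i + 1]:
                k = i
                break
        if left:   # yL: upper firing strengths below the switch point
            f = [f_up[i] if i <= k else f_low[i] for i in range(len(y))]
        else:      # yR: lower firing strengths below the switch point
            f = [f_low[i] if i <= k else f_up[i] for i in range(len(y))]
        y_new = sum(fi * yi for fi, yi in zip(f, y)) / sum(f)
        if abs(y_new - yp) < 1e-9:        # converged: endpoint found
            return y_new

def defuzzify(y, f_low, f_up):
    """Crisp output (winning chance) as the average of yL and yR."""
    yl = km_endpoint(y, f_low, f_up, left=True)
    yr = km_endpoint(y, f_low, f_up, left=False)
    return (yl + yr) / 2.0
```

With symmetric firing intervals, e.g. centroids [0, 10] and firing strengths [0.4, 0.8] for both rules, yL falls below and yR above the crisp weighted average, and their mean recovers it.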
5. THE PSO OPTIMIZED INTERVAL TYPE-2 FUZZY LOGIC DESIGN FOR
ELECTION RESULTS
In order to design the optimized IT2FL for election result prediction presented in section 4 above, the PSO algorithm is applied to search for globally optimal parameters of the primary membership functions (MFs) of the IT2FL, to improve the performance and increase the accuracy of the IT2F set. The main idea is to use the PSO algorithm to dynamically adjust the MFs of the IT2FL election predictor. The structure of the optimized IT2FL controller with PSO is presented in Figure 3.
In this paper, six linguistic input variables are used: popularity, strength of the political party, credibility from past performance, financial/economic power, number of years in active politics and educational achievement. The IT2FL-PSO algorithm for election result prediction is shown in Figure 4. The velocity of the particle is updated and the position of the particle is calculated using (18) and (19), while the inertia weight is computed using (20). The IT2FL-PSO parameters are given in Table 3. As shown in Figure 4, the PSO algorithm is used to train the IT2FLC parameters and extract the best values that aid the prediction of election results by dynamically adjusting the MFs of the IT2FL system. The best individual/particle is injected into the population of the worst method every four iterations and vice versa. PSO uses the population/swarm in each iteration to obtain the best individual/particle. The maximum number of iterations/generations is used to stop the algorithm, and the best result obtained from the IT2FL-PSO process is kept. The IT2FL system generating the latest Gbest is the optimal IT2FL system. Table 4 shows the input and output MF variable fuzzy sets tuned with the PSO algorithm.
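One way to connect the two components, sketched below under assumed names (`decode`, `fitness`, and a `predict` callback standing in for the IT2FL inference engine), is to flatten every MF's (σ1, σ2) pair into a particle vector and let PSO minimize the mean prediction error over training data:

```python
def decode(particle, n_mfs):
    """Split a flat particle vector into (sigma_lower, sigma_upper) pairs,
    enforcing sigma_lower <= sigma_upper so each pair is a valid IT2 MF."""
    params = []
    for i in range(n_mfs):
        a, b = abs(particle[2 * i]), abs(particle[2 * i + 1])
        params.append((min(a, b), max(a, b)))
    return params

def fitness(particle, predict, data):
    """Mean absolute prediction error of the IT2FLS built from `particle`.
    `predict(params, x)` is assumed to run the fuzzy inference and return
    the predicted winning chance; `data` holds (input, target) pairs."""
    params = decode(particle, len(particle) // 2)
    return sum(abs(predict(params, x) - t) for x, t in data) / len(data)
```

A PSO run would then minimize `fitness` over the particle vector, and the gbest particle would be decoded back into the tuned MF parameters of Table 4.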
Table 3: IT2FL-PSO Parameters

S/N | Parameter | Description               | Value
1   | w         | Inertia weight            | 1.2
2   | c1        | Acceleration constant     | 2
3   | c2        | Acceleration constant     | 2
4   | r1        | Random number             | [0, 1]
5   | v(t)      | Particle velocity         |
6   | v(t+1)    | Updated particle velocity |
Fig. 3: PSO Optimized IT2FL for Election Result Prediction
Fig. 4: IT2FL-PSO Algorithm for Election Results Prediction
Table 4: Input and output MF variable fuzzy sets after tuning with the PSO algorithm

Variable                 | Fuzzy Set | Lower σ1 | Upper σ2 | Center c | Symbol
Popularity               | Low       | 1.19     | 1.64     | 0.0      | LO
Popularity               | Medium    | 1.02     | 1.43     | 5.5      | ME
Popularity               | High      | 1.02     | 1.46     | 10.0     | HI
Strength of Party        | Weak      | 0.11     | 0.15     | 0.0      | LO
Strength of Party        | Average   | 0.11     | 0.16     | 0.5      | ME
Strength of Party        | Strong    | 0.08     | 0.12     | 1.1      | HI
Credibility              | Low       | 10.7     | 16.02    | 0.0      | LO
Credibility              | Moderate  | 9.11     | 14.15    | 47.3     | ME
Credibility              | High      | 14.4     | 20.24    | 100.0    | HI
Years of Active Politics | Low       | 1.24     | 1.68     | 0.0      | LO
Years of Active Politics | Moderate  | 0.7      | 1.14     | 4.77     | ME
Years of Active Politics | High      | 1.35     | 2.02     | 10.0     | HI
Financial/Economic Power | Weak      | 1.01     | 1.53     | 0.0      | LO
Financial/Economic Power | Average   | 1.08     | 1.6      | 5.24     | ME
Financial/Economic Power | Strong    | 1.35     | 1.99     | 10.0     | HI
Educational Achievement  | Low       | 1.24     | 1.68     | 0.0      | LO
Educational Achievement  | Moderate  | 0.68     | 1.01     | 4.36     | ME
Educational Achievement  | High      | 0.84     | 1.28     | 8.0      | HI
Winning Chances          | Very Low  | 10.2     | 15.06    | 0.0      | VL
Winning Chances          | Low       | 8.49     | 12.03    | 33.2     | LO
Winning Chances          | Moderate  | 6.702    | 10.02    | 50.0     | MO
Winning Chances          | High      | 8.26     | 11.73    | 66.7     | HI
Winning Chances          | Very High | 9.36     | 13.6     | 100.0    | VH
6. SIMULATION RESULTS
In this paper, an IT2FL election result prediction system is developed and applied to predict the chances of a candidate winning an election in Akwa Ibom State, Nigeria. Input data are generated based on the six variables (popularity, strength of party, credibility, financial state, years of active service and educational achievement), while winning chances is the desired output. For each input, Gaussian membership functions with fixed mean and uncertain standard deviation are used. Secondly, we optimize the IT2FL controllers using the PSO approach: we combine IT2FL and PSO to obtain the best design of an optimal IT2FL and apply it to the prediction problem.
6.1 ELECTION RESULT PREDICTION OBTAINED USING THE IT2FLC APPROACH
Figure 5 shows the membership function evaluation based on the six input variables. The input and output membership function plots showing the degrees of membership are given in Figures 6(a)-(g). Figure 7 gives the IT2FL rule viewers for the chances of winning an election, while Table 5 presents the results of IT2FL for election winning chance. Figure 9 shows the graph of the IT2FL election result predictions in Table 5.
Fig. 5: Membership Function Evaluation
Fig. 6: IT2FL Input and output Membership function plots (a) Popularity (b) Strength of Party (c)
Credibility (d) Financial /Economic power (e) Years in active politics (f) Educational achievement and (g)
Winning Chances
Fig. 7: IT2FL Rule Viewers of Chances of Winning Election
Table 5: Results of IT2FLC for Election Winning Chance
Fig. 9: The graph of the results of IT2FL for election result prediction
6.2 OPTIMAL IT2FLC RESULTS OBTAINED USING THE PSO METHOD
In this section, we present simulation results of the IT2FL controllers obtained with the PSO method. Figures 10(a)-(g) present the IT2FL input and output membership function plots after tuning with the PSO algorithm. Table 6 shows the results of IT2FL-PSO for election winning chance, and Figure 11 shows the corresponding graph. Table 7 contains the configuration values of the IT2FL-PSO, the execution time of the hybrid IT2FL-PSO and the average error for each configuration. The goal of using the optimal controller is to obtain the best steady-state error. Figure 12 shows the behaviour of the individuals/particles of the PSO approach that gives the best IT2FLC for predicting the election result. The first row shows the best IT2FLC obtained by PSO, with a minimum error of 0.0128.
Fig. 10: IT2FL Input and output Membership Function plots after tuning with PSO (a) Popularity (b)
Strength of Party (c) Credibility (d) Financial/Economic power (e) Years in active politics (f) Educational
achievement and (g) Winning Chances
Table 6: IT2FL-PSO Election Winning Chance Results
Fig. 11: The graph of the results of IT2FL-PSO for election result prediction
Table 7: Results of the IT2FL obtained by PSO approach with Execution Time and Average Error for
Election Prediction
No. | Particles | Iterations/Generations | C1   | C2   | Inertia | Execution Time | Error
1   | 200       | 200                    | 0.84 | 0.77 | 0.33980 | 4:48:34        | 0.0128
2   | 60        | 70                     | 1.2  | 0.94 | 0.49484 | 4:42:04        | 0.0411
3   | 50        | 100                    | 2    | 2    | 0.36603 | 4:29:30        | 0.4344
4   | 150       | 45                     | 0.58 | 0.97 | 0.63619 | 4:43:39        | 0.5412
5   | 70        | 80                     | 1.3  | 1.56 | 0.54752 | 4:35:03        | 0.5991
6   | 40        | 75                     | 0.63 | 0.79 | 0.47059 | 4:40:30        | 0.0834
7   | 25        | 80                     | 1.4  | 0.98 | 0.44752 | 4:38:15        | 0.0985
8   | 200       | 70                     | 0.81 | 0.91 | 0.13141 | 3:33:46        | 0.1232
9   | 90        | 50                     | 1.76 | 1.85 | 0.60501 | 4:45:48        | 1.9758
10  | 200       | 80                     | 0.73 | 0.15 | 0.84235 | 3:48:34        | 0.12
Fig. 12: The behaviour of the individuals/particles of the PSO approach with optimized IT2FL at the Best
PSO of 0.01278
The comparison of the IT2FL and IT2FL-PSO methods is summarized in Table 8. Figure 13 shows the graph of the comparison of the IT2FL and IT2FL-PSO approaches in election result prediction, with a best PSO error of 0.01278.
Table 8: Comparison of the IT2FL and IT2FL-PSO methods in election result prediction
Figure 13: The graph of the comparison of the IT2FL and IT2FL-PSO approaches in election result prediction
7. CONCLUSION
In this paper, we describe the application of a bio-inspired method to design an optimized IT2FL controller using the PSO method. To test the optimized IT2FLC, a prediction of winning chances in an election was carried out using a case study of Akwa Ibom State in Nigeria. To achieve our objective, an IT2FL system was used which considers all the important parameters that can affect the prediction of a candidate's winning chances in an election. Six parameters were defined using the Gaussian membership function approach, and 99 rules were defined based on "if-then" conditions and stored in the rule base. The results of both IT2FLC and IT2FL-PSO with respect to election prediction were presented. From the IT2FL-PSO results, it was observed that the behaviour of the individuals/particles of the PSO approach with the optimized IT2FL performed best at a PSO error value of 0.01278.
To compare the optimization method, we present a final table of results reporting the average error over 10 tests of each IT2FL controller applied to predicting the winning chances in an election. The plots of the results showed that the optimal IT2FLC could reach stability in less than 10 seconds. The IT2FLC obtained by PSO performed better than the ordinary IT2FLC in terms of stability and average error.
Generally, the simulation results indicate that the IT2FLC obtained using the PSO approach, as applied to the election prediction system, improved the results with less error and better prediction. Given these satisfactory results, the study can be used as an automatic prediction evaluator in the field of elections and other related processes. In the future, more input parameters could be added to the system to achieve more accurate results. A hybrid approach could also be employed to improve the fuzzy logic controller design.

Table 8 (data): Predicted winning chances of the IT2FL and IT2FL-PSO methods

No. | IT2FL    | IT2FL-PSO
1   | 60.3     | 62.28683
2   | 60.19783 | 62.25202
3   | 50.18605 | 52.81921
4   | 17.64278 | 21.93618
5   | 60.22269 | 62.03462
6   | 78.73049 | 79.79031
7   | 59.20954 | 64.85969
8   | 75.47522 | 76.99285
9   | 60.31628 | 63.44625
10  | 60.3     | 62.76658
11  | 58.04733 | 62.83292
12  | 60.3     | 62.28683
13  | 80.30631 | 81.90493
14  | 80.17248 | 82.07171
15  | 60.3     | 62.28683
16  | 75.36295 | 78.17228
17  | 79.85012 | 80.78207
18  | 43.09218 | 46.12043
19  | 63.61258 | 67.74694
20  | 66.39356 | 74.3621
…   | …        | …
39  | 60.038   | 62.3788
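As a quick numerical check of the Table 8 trend, the per-candidate gain of IT2FL-PSO over plain IT2FL can be averaged. The sketch below uses only the first five rows of Table 8 as sample data.

```python
# First five rows of Table 8: predicted winning chances (%)
it2fl = [60.3, 60.19783, 50.18605, 17.64278, 60.22269]
it2fl_pso = [62.28683, 62.25202, 52.81921, 21.93618, 62.03462]

# PSO-tuned prediction minus plain IT2FL prediction, per candidate
gains = [p - b for b, p in zip(it2fl, it2fl_pso)]
mean_gain = sum(gains) / len(gains)
```

Every sampled gain is positive, consistent with the claim that the PSO-tuned system predicts higher winning chances than the non-optimized IT2FL throughout Table 8.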