The Analytic Hierarchy Process (AHP) has been a useful methodology for multi-criteria decision making (MCDM) environments, with substantial applications in recent years. A weakness of the traditional AHP method, however, lies in its reliance on subjective, judgement-based assessment and a standardized scale for constructing the pairwise comparison matrix. This paper proposes a Condorcet voting theory based AHP method for solving multi-criteria decision making problems: the Analytic Hierarchy Process is combined with a Condorcet-theory-based preferential voting technique, followed by a quantitative ratio method for framing the comparison matrix in place of the standard importance scale of the traditional AHP approach. The consistency ratio (CR) is calculated for both approaches to determine and compare their consistency. The results reveal the Condorcet-AHP method to be superior, generating a lower consistency ratio and a more accurate ranking of the criteria for solving MCDM problems.
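The consistency ratio mentioned above is a standard AHP computation. A minimal pure-Python sketch follows; the 3x3 pairwise comparison matrix is hypothetical, not taken from the paper, and the priority vector is derived with the common geometric-mean method.

```python
import math

# Saaty's random index values, used to scale the consistency index.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def ahp_weights_and_cr(A):
    n = len(A)
    # Priority vector via the geometric-mean method, normalized to sum to 1.
    gm = [math.prod(row) ** (1 / n) for row in A]
    w = [g / sum(gm) for g in gm]
    # lambda_max estimated as the mean ratio of (A w) to w.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)            # consistency index
    return w, ci / RI[n]                # consistency ratio

weights, cr = ahp_weights_and_cr([[1, 3, 5],
                                  [1/3, 1, 3],
                                  [1/5, 1/3, 1]])
# A CR below 0.1 is conventionally taken as acceptably consistent.
```

For this illustrative matrix the CR comes out well under 0.1, so the comparisons would be accepted as consistent.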
PRIORITIZING THE BANKING SERVICE QUALITY OF DIFFERENT BRANCHES USING FACTOR A...ijmvsc
In recent years, India’s service industry has been developing rapidly. The objective of the study is to explore the dimensions of customer-perceived service quality in the context of the Indian banking industry. In order to categorize customer needs into quality dimensions, factor analysis (FA) has been carried out on customer responses obtained through a questionnaire survey. The Analytic Hierarchy Process (AHP) is employed to determine the weights of the banking service quality dimensions. The priority structure of the quality dimensions gives bank management a basis for allocating resources effectively to improve customer satisfaction. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is used to obtain the final ranking of the different branches.
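The TOPSIS ranking step used above can be sketched in a few lines. This is an illustrative implementation with made-up scores for three branches on two criteria, both treated as benefit criteria; the real study's criteria and weights would come from the FA/AHP stages.

```python
import math

def topsis(X, w):
    # X: alternatives x criteria matrix; w: criterion weights summing to 1.
    m, n = len(X), len(X[0])
    # Vector normalization, then weighting.
    norm = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[w[j] * X[i][j] / norm[j] for j in range(n)] for i in range(m)]
    # Ideal best/worst per criterion (all criteria treated as benefits here).
    best = [max(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) for j in range(n)]
    d_pos = [math.dist(V[i], best) for i in range(m)]
    d_neg = [math.dist(V[i], worst) for i in range(m)]
    # Relative closeness to the ideal solution: higher is better.
    return [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]

scores = topsis([[9, 9], [7, 7], [5, 5]], [0.5, 0.5])
```

The alternative that dominates on every criterion gets closeness 1.0 and the dominated one 0.0, which is the ordering the final branch ranking would follow.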
Application of Fuzzy Analytic Hierarchy Process and TOPSIS Methods for Destin...ijtsrd
Destination selection has become an extremely popular topic. Sometimes the terms tourism and tourist are used pejoratively to indicate a shallow interest in the societies or islands that travellers tour. This system presents the use of fuzzy AHP and TOPSIS for destination selection, here the selection of an island. Eight South East Asian countries are covered: Thailand, Singapore, Malaysia, Indonesia, the Philippines, Vietnam, Cambodia, and Brunei. First, the user chooses a specific country in which to select an island, and preferences such as attraction, environment, accommodation, transportation, restaurant, activity, entertainment, and other facilities are taken as inputs; the system then displays the list of alternatives that match the user's preferences. The fuzzy analytic hierarchy process is used to determine the weights of criteria and alternatives, and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used to determine the final ranking of the alternatives. Finally, the system shows the list of destinations based on the user's preferences. Hnin Min Oo | Su Hlaing Hnin "Application of Fuzzy Analytic Hierarchy Process and TOPSIS Methods for Destination Selection" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd27975.pdf Paper URL: https://www.ijtsrd.com/computer-science/data-processing/27975/application-of-fuzzy-analytic-hierarchy-process-and-topsis-methods-for-destination-selection/hnin-min-oo
Data analysis is a process that involves gathering, modeling, and transforming data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making. The document describes several major techniques for data analysis, including correlation analysis, regression analysis, factor analysis, cluster analysis, correspondence analysis, conjoint analysis, CHAID analysis, discriminant/logistic regression analysis, multidimensional scaling, and structural equation modeling.
This document provides information about data interpretation and different ways to present data. It discusses numerical data tables including time series tables, spatial tables, frequency distribution tables, and cumulative frequency tables. Examples are given to show how to calculate capacity utilization, sales growth percentages, and solve other problems using the data in tables. Cartesian graphs are also introduced as a way to show the variation of a quantity with respect to two parameters on the X and Y axes.
PROVIDING A METHOD FOR DETERMINING THE INDEX OF CUSTOMER CHURN IN INDUSTRYIJITCA Journal
Customer churn is one of the most important issues in customer relationship management and marketing, especially in industries such as telecommunications, finance, and insurance. In recent decades much research has been done in this area. In this research, the set of indicators for the reasons that customers churn is of particular importance. This study is intended to provide a formula for a customer churn index, the better to understand the reasons for customer churn. Therefore, in order to evaluate the proposed formula, six classification methods (QUEST, C5.0, CHAID, and CART decision trees, Bayesian network, and neural network) are applied together with the individual indicators.
Engineering Research Publication
Best International Journals, High Impact Journals,
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
A Formal Machine Learning or Multi Objective Decision Making System for Deter...Editor IJCATR
Decision making typically needs mechanisms to compromise among opposing criteria. When multiple objectives are involved in machine learning, a vital step is to relate the weights of individual objectives to system-level performance. Determining the weights of multiple objectives is an analysis process, and it has typically been treated as an optimization problem. However, our preliminary investigation has shown that existing methodologies for managing the weights of multiple objectives have some obvious limitations: the determination of weights is treated as a single optimization problem, the result supporting such an optimization is limited, and it can even be unreliable when knowledge about the multiple objectives is incomplete, for example due to poor data. The constraints on weights are also discussed. Variable weights are natural in decision-making processes. Here, we develop a systematic methodology for determining variable weights of multiple objectives. The roles of weights in multi-objective decision making or machine learning are analyzed, and the weights are determined with the help of a standard neural network.
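The basic role of the weights described above is to aggregate normalized per-objective values into one system-level score. A minimal sketch, with invented objective names and weights (the paper's own weight-determination step would replace the hand-picked weights):

```python
def system_score(objectives, weights):
    # Weighted aggregation of normalized objective values (all in [0, 1]).
    # Weights must sum to 1 so the score stays in [0, 1].
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(objectives[k] * weights[k] for k in weights)

# Hypothetical objectives for a learned model: higher is better for each.
score = system_score({"accuracy": 0.90, "latency": 0.60, "cost": 0.40},
                     {"accuracy": 0.5, "latency": 0.3, "cost": 0.2})
```

Changing the weight vector changes which candidate solution wins, which is exactly why determining (and varying) the weights is the sensitive step the abstract discusses.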
This document summarizes a study that used machine learning algorithms to predict happiness based on survey data from over 4,600 users. The researchers analyzed data from 100+ survey questions along with demographics to predict if users identified as happy or unhappy. They tested multiple algorithms including naive Bayes, decision trees, random forests, gradient boosting, and support vector machines. Gradient boosting achieved the highest AUC score of 0.68 on test data, outperforming other algorithms. The researchers concluded machine learning can effectively predict happiness from subjective survey responses, with ensemble methods like gradient boosting and random forests performing best.
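The AUC score reported above can be read as a rank statistic: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A small pure-Python sketch with illustrative scores (not the study's data):

```python
def auc(pos_scores, neg_scores):
    # AUC as the probability that a positive outranks a negative;
    # ties count as half a win (equivalent to the Wilcoxon/Mann-Whitney view).
    pairs = [(p, n) for p in pos_scores for n in neg_scores]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)
```

An AUC of 0.68, as the gradient-boosting model achieved, means a randomly chosen "happy" respondent is scored above a randomly chosen "unhappy" one about 68% of the time.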
This document summarizes an article that proposes a novel cost-free learning (CFL) approach called ABC-SVM to address the class imbalance problem. The approach aims to maximize the normalized mutual information of the predicted and actual classes to balance errors and rejects without requiring cost information. It optimizes misclassification costs, SVM parameters, and feature selection simultaneously using an artificial bee colony algorithm. Experimental results on several datasets show the method performs effectively compared to sampling techniques for class imbalance.
Evaluation metrics play a critical role in achieving the optimal classifier during classification training. Thus, selecting a suitable evaluation metric is an important key to discriminating among candidate solutions and obtaining the optimal classifier. This paper systematically reviews the evaluation metrics that are specifically designed as discriminators for optimizing generative classifiers. Generally, many generative classifiers employ accuracy as the measure to discriminate the optimal solution during classification training. However, accuracy has several weaknesses: low distinctiveness, low discriminability, low informativeness, and bias toward majority-class data. This paper also briefly discusses other metrics that are specifically designed for discriminating the optimal solution, along with their shortcomings. Finally, this paper suggests five important aspects that must be taken into consideration in constructing a new discriminator metric.
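The majority-class bias of accuracy is easy to demonstrate. On a hypothetical 95:5 imbalanced set, a degenerate classifier that always predicts the majority class scores high accuracy while never detecting a single minority example:

```python
# Toy imbalanced labels: 95 majority-class (0) and 5 minority-class (1).
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # degenerate "always predict majority" classifier

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
minority_recall = (sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
                   / y_true.count(1))
# accuracy is 0.95 even though minority recall is 0.0
```

This is precisely the scenario where the discriminator metrics surveyed in the paper are meant to do better than raw accuracy.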
Classification By Clustering Based On Adjusted ClusterIOSR Journals
This document summarizes a research paper that proposes a new technique called "Classification by Clustering" (CbC) to define decision trees based on cluster analysis. The technique is tested on two large HR datasets. CbC involves running a clustering algorithm on the dataset without using the target variable, calculating the target variable distribution in each cluster, setting a threshold to classify entities, fine-tuning the results by weighting important attributes, and testing the results on new data. The paper finds that CbC can provide meaningful decision rules even when conventional decision trees fail to do so, and in some cases CbC performs better. A new evaluation measure called Weighted Group Score is also introduced to assess models when conventional measures cannot be used.
This document provides an introduction to statistics and its uses in business. It outlines two main branches of statistics - descriptive statistics which involves collecting, summarizing and presenting data, and inferential statistics which uses data from a sample to draw conclusions about a larger population. The document then discusses key statistical concepts like variables, data, populations, samples, parameters and statistics. It explains how descriptive and inferential statistics are used to summarize data, draw conclusions, make forecasts and improve business processes. Finally, it introduces the DCOVA process for examining and concluding from data which involves defining variables, collecting data, organizing data, visualizing data and analyzing data.
Exploratory data analysis and data visualization:
Exploratory Data Analysis (EDA) is an approach/philosophy for data analysis that employs a variety of techniques (mostly graphical) to
Maximize insight into a data set.
Uncover underlying structure.
Extract important variables.
Detect outliers and anomalies.
Test underlying assumptions.
Develop parsimonious models.
Determine optimal factor settings.
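Several of the goals above (insight into the data, outlier detection, assumption checking) start with simple numeric summaries. A first-pass EDA sketch using only the standard library, on an invented sample:

```python
import statistics

def eda_summary(xs):
    # First-pass EDA numbers: center, spread, and Tukey-fence outliers
    # (points beyond 1.5 * IQR outside the quartiles).
    q1, med, q3 = statistics.quantiles(xs, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {"mean": statistics.mean(xs), "median": med, "iqr": iqr,
            "outliers": [x for x in xs if x < lo or x > hi]}

summary = eda_summary([10, 12, 11, 13, 12, 11, 95])
```

Note how the single anomalous value drags the mean far from the median while the IQR-based fence isolates it, the kind of structure EDA graphics (box plots, histograms) would surface visually.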
The approaches used in the literature for solving combinatorial optimization problems have applied a specific methodology, or a specific combination of methodologies, to solve them. However, less importance is attached to modeling the solution for the given problem systematically. Modeling helps in analyzing the various parts of the solution clearly, thereby identifying which parts of the applied methodology or combination of methodologies are efficient or inefficient. To find how efficient the different parts of the applied methodology or methodologies are, it may be better to solve the given problem using the notion of hyper-heuristics: the different parts of the problem are solved with many different methodologies realized, implemented, and benchmarked, enabling the best hybrid methodology to be chosen. A theoretical model or representation of the problem's solution may facilitate clear proposal and realization of the different methodologies for the various parts of the solution. The literature reveals a need for a generic model that could be used to represent the solution of combinatorial optimization problems. Therefore, inspired by the basic problem-solving behavior exhibited by animals in their day-to-day life, a new bio-inspired hyper-heuristic generic model for solving combinatorial optimization problems has been proposed. To demonstrate the application of the proposed generic model, a problem-specific model is derived that solves the web service selection/composition problem. This specialized model has been realized with a trip-planning case study, and the results are discussed.
The document discusses data analysis and interpretation. It describes the different scales of measurement used in data analysis including nominal, ordinal, interval, and ratio scales. It also discusses various methods used for interpreting qualitative and quantitative data, such as using statistical techniques like mean and standard deviation for quantitative data. Finally, it covers different visualization techniques used in data interpretation like bar graphs, pie charts, tables, and line graphs.
The document provides an overview of a business statistics course, including topics covered, applications in different business fields, and examples of descriptive statistics. The course covers topics such as data collection, descriptive statistics, statistical inference, and the use of computers for analysis. Descriptive statistics are used to summarize parts cost data from 50 car tune-ups, finding an average cost of $79. Inferential statistics are used to estimate population characteristics based on sample data.
Selection of Equipment by Using SAW and VIKOR Methods IJERA Editor
This document discusses methods for selecting equipment using multi-criteria decision making approaches. It presents the SAW (Simple Additive Weighting) and VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje) methods for equipment selection and outlines the steps of both. It also discusses consistency testing to validate the results and to ensure a consistency ratio below 0.1. The methods are then applied to a case study of equipment selection at a spring manufacturing unit to demonstrate the process.
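The SAW method mentioned above is the simpler of the two: normalize each criterion linearly, then take a weighted sum. A sketch with invented data for two machines scored on capacity (a benefit criterion) and price (a cost criterion):

```python
def saw_scores(X, w, benefit):
    # Simple Additive Weighting: linear-scale normalization per criterion
    # (x/max for benefit criteria, min/x for cost criteria), then weighted sum.
    cols = list(zip(*X))
    R = [[x[j] / max(cols[j]) if benefit[j] else min(cols[j]) / x[j]
          for j in range(len(x))] for x in X]
    return [sum(wj * rj for wj, rj in zip(w, row)) for row in R]

# Capacity weighted 0.6 (benefit), price weighted 0.4 (cost); values invented.
scores = saw_scores([[100, 5], [80, 3]], [0.6, 0.4], [True, False])
```

Here the cheaper machine wins despite lower capacity, showing how the weight vector and the benefit/cost designation jointly drive the ranking.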
Accounting for variance in machine learning benchmarksDevansh16
Accounting for Variance in Machine Learning Benchmarks
Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Naz Sepah, Edward Raff, Kanika Madan, Vikram Voleti, Samira Ebrahimi Kahou, Vincent Michalski, Dmitriy Serdyuk, Tal Arbel, Chris Pal, Gaël Varoquaux, Pascal Vincent
Strong empirical evidence that one machine-learning algorithm A outperforms another algorithm B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization, and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in the light of this variance. We show the counter-intuitive result that adding more sources of variation to an imperfect estimator better approaches the ideal estimator, at a 51-times reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
This document discusses applying machine learning algorithms to three datasets: a housing dataset to predict prices, a banking dataset to predict customer churn, and a credit card dataset for customer segmentation. For housing prices, linear regression, regression trees and gradient boosted trees are applied and evaluated on test data using R2 and RMSE. For customer churn, logistic regression and random forests are used with sampling to address class imbalance, and evaluated using confusion matrix metrics. For credit card data, k-means clustering with PCA is used to segment customers into groups.
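The segmentation step described above (k-means on the credit-card data) reduces to Lloyd's algorithm. A dependency-free toy version on a single invented feature, standing in for the PCA-reduced features a real pipeline would cluster:

```python
import random

def kmeans_1d(points, k, iters=20, seed=42):
    # Plain Lloyd's algorithm on one feature (e.g. annual card spend):
    # assign each point to its nearest center, then move each center to
    # the mean of its cluster, and repeat.
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

centers = kmeans_1d([1.0, 1.5, 2.0, 10.0, 10.5, 11.0], 2)
```

With two well-separated groups of spenders, the centers settle at the two group means, which is the "segments" output the document describes.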
This document discusses preliminary data analysis techniques. It begins by explaining that data analysis is done to make sense of collected data. The basic steps of preliminary analysis are editing, coding, and tabulating data. Editing involves checking for errors and inconsistencies. Coding transforms raw data into numerical codes for analysis. Tabulation involves counting how many cases fall into each coded category. Examples of tabulations like simple counts and cross-tabulations are provided to show relationships between variables. Preliminary analysis helps detect errors and develop hypotheses for further statistical testing.
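The coding-then-tabulating sequence above can be shown in a few lines: coded survey records are counted into a cross-tabulation relating two variables. The region/response values here are invented for illustration:

```python
from collections import Counter

# Coded survey records: (region, response) pairs after editing and coding.
coded = [("North", "Yes"), ("North", "No"), ("South", "Yes"),
         ("North", "Yes"), ("South", "No"), ("South", "No")]

# Cross-tabulation: cell counts for each (region, response) combination.
xtab = Counter(coded)
# Marginal (row) totals, the "simple count" tabulation.
row_totals = Counter(region for region, _ in coded)
```

Reading the cells against the marginals is what reveals the relationship between variables that the document says preliminary analysis is after.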
Decision Support Systems in Clinical EngineeringAsmaa Kamel
This document provides an overview of the Analytic Hierarchy Process (AHP) decision support system and presents a case study on using AHP to make medical equipment scrapping decisions. The key points are:
1) AHP breaks down a complex decision problem into a hierarchy, then uses pairwise comparisons to determine criteria weights and rank alternatives. It was used in this case study to evaluate 9 dialysis machines for potential scrapping.
2) Criteria for the dialysis machine scrapping decision included age, performance, safety record, and costs. Data was incomplete so the study simulated different scenarios to examine the impact.
3) AHP derived local and global priorities to determine each machine's overall priority for scrapping.
This document discusses classification using decision tree models. It begins with an introduction to classification, describing it as assigning objects to predefined categories. Decision trees are then overviewed as a powerful classifier that uses a hierarchical structure to split a dataset. Important parameters for evaluating model accuracy are covered, such as precision, recall, and AUC. The document also describes an exercise using the Weka tool to build decision trees on a dataset about term deposit subscriptions. It concludes with discussing uses of decision trees for applications like marketing and medical diagnosis.
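Of the evaluation parameters named above, precision and recall fall straight out of the confusion-matrix counts. A minimal sketch on toy predictions (labels invented; 1 could stand for "subscribed to a term deposit"):

```python
def precision_recall(y_true, y_pred, positive=1):
    # Confusion-matrix counts for the positive class.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    # Precision: fraction of positive predictions that were right.
    # Recall: fraction of actual positives that were found.
    return tp / (tp + fp), tp / (tp + fn)

prec, rec = precision_recall([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

Here the tree-like classifier made one false positive and missed one positive, so both precision and recall come out at 2/3, the sort of trade-off the AUC then summarizes across thresholds.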
The document discusses various methods for analyzing and interpreting data. It describes descriptive analysis which helps summarize data patterns. Statistical analysis techniques like clustering, regression, and cohorts are explained. Inferential analysis makes judgments about differences between groups. Qualitative and quantitative methods are outlined for interpreting data through coding and establishing relationships. The purpose of data analysis and interpretation is to answer research questions and determine trends to support decision making.
Software Cost Estimation Using Clustering and Ranking SchemeEditor IJMTER
Software cost estimation is an important task in the software design and development process.
Planning and budgeting tasks are carried out with reference to the software cost values. A variety of
software properties are used in the cost estimation process. Hardware, products, technology and
methodology factors are used in the cost estimation process. The software cost estimation quality is
measured with reference to the accuracy levels.
Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models, and machine learning models. Each model has a set of techniques for the software cost estimation process. Eleven cost estimation techniques under the three categories are used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product property values, and the ARFF file is the main input to the system.
The proposed system is designed to perform clustering and ranking of software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results on software cost estimation methods.
A preliminary survey on optimized multiobjective metaheuristic methods for da...ijcsit
The present survey provides the state of the art of research devoted to Evolutionary Approaches (EAs) for clustering, exemplified with a diversity of evolutionary computations. The survey provides a nomenclature that highlights some aspects that are very important in the context of evolutionary data clustering. The paper examines the clustering trade-offs addressed by wide-ranging Multi-Objective Evolutionary Approach (MOEA) methods. Finally, this study addresses the potential challenges of MOEA design and data clustering, along with conclusions and recommendations for novices and researchers, positioning the most promising paths of future research.
MULTI-PARAMETER BASED PERFORMANCE EVALUATION OF CLASSIFICATION ALGORITHMSijcsit
Diabetes is amongst the most common diseases in India. It affects patients' health and also leads to other chronic diseases. Prediction of diabetes plays a significant role in saving lives and cost. Predicting diabetes in the human body is a challenging task because it depends on several factors. A few studies have reported the performance of classification algorithms in terms of accuracy, but their results are difficult and complex for medical practitioners to understand and also lack visual aids, as they are presented in pure text format. This survey uses ROC and PRC graphical measures to improve understanding of the results. A detailed parameter-wise discussion of the comparison is also presented, which is lacking in other reported surveys. Execution time, accuracy, TP rate, FP rate, precision, recall, and F-measure are used for comparative analysis, and a confusion matrix is prepared for a quick review of each algorithm. Ten-fold cross validation is used for estimation of the prediction model. Different sets of classification algorithms are analyzed on a diabetes dataset acquired from the UCI repository.
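The ten-fold cross validation used above amounts to partitioning the example indices into ten disjoint test folds and training on the rest each time. A minimal index-splitting sketch (fold assignment by stride is one common convention; shuffling first is equally valid):

```python
def kfold_indices(n, k=10):
    # Yield (train, test) index lists for k-fold cross-validation;
    # fold i takes every k-th example starting at offset i.
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(kfold_indices(20, k=10))
```

Each example lands in exactly one test fold, so averaging the per-fold scores gives the unbiased-style estimate the survey relies on.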
Selecting the correct Data Mining Method: Classification & InDaMiTe-RIOSR Journals
This document describes an intelligent data mining assistant called InDaMiTe-R that aims to help users select the correct data mining method for their problem and data. It presents a classification of common data mining techniques organized by the goal of the problem (descriptive vs predictive) and the structure of the data. This classification is meant to model the human decision process for selecting techniques. The document then describes InDaMiTe-R, which uses a case-based reasoning approach to recommend techniques based on past user experiences with similar problems and data. An example of its use is provided to illustrate how it extracts problem metadata, gets user restrictions, recommends initial techniques, and learns from the user's evaluations to improve future recommendations. A small evaluation
Ijaems apr-2016-23 Study of Pruning Techniques to Predict Efficient Business ...INFOGAIN PUBLICATION
This document summarizes research on using various pruning techniques with decision trees to predict business decisions for a shopping mall. It discusses post-pruning techniques like using a validation set, error estimation, and significance testing. It also covers pre-pruning techniques like setting a threshold on the splitting measure. The document analyzes parameters like overfitting and underfitting. It presents experiments on diabetes and glass datasets in Weka, showing the effect of parameters like minimum number of objects and number of folds on accuracy and tree size. Increasing the minimum number of objects decreased tree size through more pruning while maintaining similar accuracy.
Grant Selection Process Using Simple Additive Weighting Approachijtsrd
Selection of an educational grant is a key success factor for student learning and academic performance. This paper addresses a real problem of selecting educational grants using data from grant application forms with one of the popular multi-criteria decision making models, the SAW method. The paper introduces nine qualitative, positive criteria for selecting grants for students amongst fifteen application forms, and ranks them. Finally, the proposed method is demonstrated in a case study on selecting educational grants for students. Kyi Kyi Myint | Tin Tin Soe | Myint Myint Toe "Grant Selection Process Using Simple Additive Weighting Approach" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd25169.pdf
Paper URL: https://www.ijtsrd.com/computer-science/simulation/25169/grant-selection-process-using-simple-additive-weighting-approach/kyi-kyi-myint
Multi Criteria Decision Making Methodology on Selection of a Student for All ... (ijtsrd)
Selecting a student for an all-round excellence award is based on a complex, elaborate combination of abilities and skills. AHP, a multi-criteria decision-making method, is used to help make the decision consistently by carrying out a pairwise comparison matrix process between criteria based on the selected alternatives and determining the priority order of criteria and alternatives. The results of these calculations are used to determine the outstanding student receiving a scholarship based on the final results of the AHP calculation. The results demonstrated that the student ranking is most influenced by the relative importance of management, leadership and motivation, with the sub-criteria education, cooperation, innovation, discipline, attendance, knowledge, sports activity, social activity and awards. Kyi Kyi Mynit "Multi Criteria Decision Making Methodology on Selection of a Student for All Round Excellent Award" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26428.pdf
Paper URL: https://www.ijtsrd.com/management/research-method/26428/multi-criteria-decision-making-methodology-on-selection-of-a-student-for-all-round-excellent-award/kyi-kyi-mynit
This document summarizes an article that proposes a novel cost-free learning (CFL) approach called ABC-SVM to address the class imbalance problem. The approach aims to maximize the normalized mutual information of the predicted and actual classes to balance errors and rejects without requiring cost information. It optimizes misclassification costs, SVM parameters, and feature selection simultaneously using an artificial bee colony algorithm. Experimental results on several datasets show the method performs effectively compared to sampling techniques for class imbalance.
Evaluation metrics play a critical role in achieving the optimal classifier during classification training. Thus, selecting a suitable evaluation metric is key to discriminating and obtaining the optimal classifier. This paper systematically reviews the evaluation metrics that are specifically designed as discriminators for optimizing generative classifiers. Many generative classifiers employ accuracy as the measure for discriminating the optimal solution during classification training. However, accuracy has several weaknesses: it is less distinctive, less discriminable, less informative, and biased toward majority-class data. This paper also briefly discusses other metrics specifically designed for discriminating the optimal solution, along with their shortcomings. Finally, it suggests five important aspects that must be taken into consideration when constructing a new discriminator metric.
Classification By Clustering Based On Adjusted Cluster (IOSR Journals)
This document summarizes a research paper that proposes a new technique called "Classification by Clustering" (CbC) to define decision trees based on cluster analysis. The technique is tested on two large HR datasets. CbC involves running a clustering algorithm on the dataset without using the target variable, calculating the target variable distribution in each cluster, setting a threshold to classify entities, fine-tuning the results by weighting important attributes, and testing the results on new data. The paper finds that CbC can provide meaningful decision rules even when conventional decision trees fail to do so, and in some cases CbC performs better. A new evaluation measure called Weighted Group Score is also introduced to assess models when conventional measures cannot be used
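The cluster-then-threshold step of CbC described above can be sketched in a few lines. The records and the 0.5 threshold below are hypothetical, and the clustering itself is assumed to have already assigned each record a cluster id:

```python
# Sketch of the "Classification by Clustering" (CbC) idea: cluster records
# without the target variable, then label each cluster by the proportion of
# positive targets it contains. Data and threshold are invented.

def cbc_labels(records, threshold=0.5):
    """records: list of (cluster_id, target) pairs; returns {cluster_id: label}."""
    counts = {}
    for cluster_id, target in records:
        pos, total = counts.get(cluster_id, (0, 0))
        counts[cluster_id] = (pos + (1 if target == 1 else 0), total + 1)
    # A cluster is labelled positive when its share of positive targets
    # meets the chosen threshold.
    return {c: (1 if pos / total >= threshold else 0)
            for c, (pos, total) in counts.items()}

records = [(0, 1), (0, 1), (0, 0),   # cluster 0: 2/3 positive
           (1, 0), (1, 0), (1, 1)]   # cluster 1: 1/3 positive
print(cbc_labels(records))  # {0: 1, 1: 0}
```

Fine-tuning by attribute weighting, as the paper describes, would happen inside the clustering step before this labelling pass.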
This document provides an introduction to statistics and its uses in business. It outlines two main branches of statistics - descriptive statistics which involves collecting, summarizing and presenting data, and inferential statistics which uses data from a sample to draw conclusions about a larger population. The document then discusses key statistical concepts like variables, data, populations, samples, parameters and statistics. It explains how descriptive and inferential statistics are used to summarize data, draw conclusions, make forecasts and improve business processes. Finally, it introduces the DCOVA process for examining and concluding from data which involves defining variables, collecting data, organizing data, visualizing data and analyzing data.
Exploratory data analysis and data visualization:
Exploratory Data Analysis (EDA) is an approach/philosophy for data analysis that employs a variety of techniques (mostly graphical) to
Maximize insight into a data set.
Uncover underlying structure.
Extract important variables.
Detect outliers and anomalies.
Test underlying assumptions.
Develop parsimonious models.
Determine optimal factor settings.
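A minimal sketch of two of these goals, summarising a data set and detecting outliers, in pure Python. The sample data and the Tukey 1.5×IQR fences are illustrative choices, not prescribed by the text above:

```python
# EDA sketch: summary statistics plus outlier flagging with Tukey's
# interquartile-range fences. The data are made up for illustration.
import statistics

def iqr_outliers(data):
    """Return values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < low or x > high]

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]
print(statistics.mean(data), statistics.median(data))  # central tendency
print(iqr_outliers(data))  # [100]
```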
The approaches used in the literature for solving combinatorial optimization problems apply a specific methodology, or a specific combination of methodologies, to the problem at hand. However, less attention is paid to modeling the solution systematically. Modeling helps in analyzing the various parts of the solution clearly, thereby identifying which parts of the applied methodology, or combination of methodologies, are efficient or inefficient. To find out how efficient the different parts of the applied methodology or methodologies are, it may be better to solve the given problem using the notion of hyper-heuristics: solving the different parts of the problem with many different methodologies realized, implemented and benchmarked, enabling the best hybrid methodology to be chosen. A theoretical model or representation of the problem's solution may facilitate clear proposal and realization of the different methodologies for the various parts of the solution. The literature reveals the need for a generic model that can represent the solution of combinatorial optimization problems. Therefore, inspired by the basic problem-solving behavior exhibited by animals in their day-to-day life, a new bio-inspired hyper-heuristic generic model for solving combinatorial optimization problems is proposed. To demonstrate its application, a problem-specific model is derived that solves the web services selection/composition problem. This specialized model has been realized with a trip-planning case study and the results are discussed.
The document discusses data analysis and interpretation. It describes the different scales of measurement used in data analysis including nominal, ordinal, interval, and ratio scales. It also discusses various methods used for interpreting qualitative and quantitative data, such as using statistical techniques like mean and standard deviation for quantitative data. Finally, it covers different visualization techniques used in data interpretation like bar graphs, pie charts, tables, and line graphs.
The document provides an overview of a business statistics course, including topics covered, applications in different business fields, and examples of descriptive statistics. The course covers topics such as data collection, descriptive statistics, statistical inference, and the use of computers for analysis. Descriptive statistics are used to summarize parts cost data from 50 car tune-ups, finding an average cost of $79. Inferential statistics are used to estimate population characteristics based on sample data.
Selection of Equipment by Using Saw and Vikor Methods (IJERA Editor)
This document discusses methods for selecting equipment using multi-criteria decision-making approaches. It presents the SAW (Simple Additive Weighting) and VIKOR (VIseKriterijumska Optimizacija I Kompromisno Resenje) methods for equipment selection and outlines the steps for both. It also discusses consistency testing to validate the results and ensure a consistency ratio below 0.1. The methods are then applied to a case study of equipment selection at a spring manufacturing unit to demonstrate the process.
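As a sketch of the SAW steps mentioned above (normalise the decision matrix, apply the criteria weights, and sum), under hypothetical alternatives, weights and criterion types:

```python
# Simple Additive Weighting: benefit criteria are normalised as x / max,
# cost criteria as min / x, then weighted and summed per alternative.
# The matrix, weights and criterion types below are invented.

def saw_scores(matrix, weights, benefit):
    """matrix[i][j]: alternative i on criterion j; benefit[j]: True for
    benefit criteria (higher is better), False for cost criteria."""
    cols = list(zip(*matrix))
    scores = []
    for row in matrix:
        s = 0.0
        for j, x in enumerate(row):
            if benefit[j]:
                r = x / max(cols[j])        # benefit: x / column max
            else:
                r = min(cols[j]) / x        # cost: column min / x
            s += weights[j] * r
        scores.append(round(s, 4))
    return scores

matrix = [[60, 3.0], [90, 4.5], [80, 3.2]]   # e.g. quality (benefit), price (cost)
weights = [0.6, 0.4]
print(saw_scores(matrix, weights, benefit=[True, False]))  # [0.8, 0.8667, 0.9083]
```

The alternative with the highest score (here the third) would be selected.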
Accounting for variance in machine learning benchmarks (Devansh16)
Accounting for Variance in Machine Learning Benchmarks
Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Naz Sepah, Edward Raff, Kanika Madan, Vikram Voleti, Samira Ebrahimi Kahou, Vincent Michalski, Dmitriy Serdyuk, Tal Arbel, Chris Pal, Gaël Varoquaux, Pascal Vincent
Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in the light of this variance. We show the counter-intuitive result that adding more sources of variation to an imperfect estimator better approaches the ideal estimator, at a 51-times reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
This document discusses applying machine learning algorithms to three datasets: a housing dataset to predict prices, a banking dataset to predict customer churn, and a credit card dataset for customer segmentation. For housing prices, linear regression, regression trees and gradient boosted trees are applied and evaluated on test data using R2 and RMSE. For customer churn, logistic regression and random forests are used with sampling to address class imbalance, and evaluated using confusion matrix metrics. For credit card data, k-means clustering with PCA is used to segment customers into groups.
This document discusses preliminary data analysis techniques. It begins by explaining that data analysis is done to make sense of collected data. The basic steps of preliminary analysis are editing, coding, and tabulating data. Editing involves checking for errors and inconsistencies. Coding transforms raw data into numerical codes for analysis. Tabulation involves counting how many cases fall into each coded category. Examples of tabulations like simple counts and cross-tabulations are provided to show relationships between variables. Preliminary analysis helps detect errors and develop hypotheses for further statistical testing.
Decision Support Systems in Clinical Engineering (Asmaa Kamel)
This document provides an overview of the Analytic Hierarchy Process (AHP) decision support system and presents a case study on using AHP to make medical equipment scrapping decisions. The key points are:
1) AHP breaks down a complex decision problem into a hierarchy, then uses pairwise comparisons to determine criteria weights and rank alternatives. It was used in this case study to evaluate 9 dialysis machines for potential scrapping.
2) Criteria for the dialysis machine scrapping decision included age, performance, safety record, and costs. Data was incomplete so the study simulated different scenarios to examine the impact.
3) AHP derived local and global priorities to determine each machine's overall priority for scrapping.
This document discusses classification using decision tree models. It begins with an introduction to classification, describing it as assigning objects to predefined categories. Decision trees are then overviewed as a powerful classifier that uses a hierarchical structure to split a dataset. Important parameters for evaluating model accuracy are covered, such as precision, recall, and AUC. The document also describes an exercise using the Weka tool to build decision trees on a dataset about term deposit subscriptions. It concludes with discussing uses of decision trees for applications like marketing and medical diagnosis.
The document discusses various methods for analyzing and interpreting data. It describes descriptive analysis which helps summarize data patterns. Statistical analysis techniques like clustering, regression, and cohorts are explained. Inferential analysis makes judgments about differences between groups. Qualitative and quantitative methods are outlined for interpreting data through coding and establishing relationships. The purpose of data analysis and interpretation is to answer research questions and determine trends to support decision making.
Software Cost Estimation Using Clustering and Ranking Scheme (Editor IJMTER)
Software cost estimation is an important task in the software design and development process.
Planning and budgeting tasks are carried out with reference to the software cost values. A variety of
software properties are used in the cost estimation process. Hardware, products, technology and
methodology factors are used in the cost estimation process. The software cost estimation quality is
measured with reference to the accuracy levels.
Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models and machine learning models. Each model has a set of techniques for the software cost estimation process. Eleven cost estimation techniques under the three categories are used in the system. The Attribute Relational File Format (ARFF) is used to maintain the software product property values. The ARFF file is used as the main input for the system.
The proposed system is designed to perform clustering and ranking of software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results for software cost estimation methods.
A preliminary survey on optimized multiobjective metaheuristic methods for da... (ijcsit)
The present survey provides the state of the art of research devoted to Evolutionary Approaches (EAs) for clustering, exemplified with a diversity of evolutionary computations. The survey provides a nomenclature that highlights aspects that are very important in the context of evolutionary data clustering. The paper examines the clustering trade-offs addressed by wide-ranging Multi-Objective Evolutionary Approach (MOEA) methods. Finally, this study addresses the potential challenges of MOEA design and data clustering, along with conclusions and recommendations for novices and researchers, by positioning the most promising paths of future research.
MULTI-PARAMETER BASED PERFORMANCE EVALUATION OF CLASSIFICATION ALGORITHMS (ijcsit)
Diabetes is amongst the most common diseases in India. It affects patients' health and also leads to other chronic diseases. Prediction of diabetes plays a significant role in saving lives and cost. Predicting diabetes in the human body is a challenging task because it depends on several factors. Few studies have reported the performance of classification algorithms in terms of accuracy. Results in these studies are difficult and complex for medical practitioners to understand and also lack visual aids, as they are presented in pure text format. This survey uses ROC and PRC graphical measures to improve understanding of the results. A detailed parameter-wise discussion of the comparison is also presented, which is lacking in other reported surveys. Execution time, Accuracy, TP Rate, FP Rate, Precision, Recall and F-Measure parameters are used for comparative analysis, and a Confusion Matrix is prepared for a quick review of each algorithm. Ten-fold cross validation is used for estimation of the prediction model. Different sets of classification algorithms are analyzed on a diabetes dataset acquired from the UCI repository.
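The comparison parameters listed above (TP Rate, FP Rate, Precision, Recall, F-Measure) all derive from a binary confusion matrix; a small sketch with hypothetical counts:

```python
# Standard binary-classification metrics computed from confusion-matrix
# counts. The counts below are invented for illustration.

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also the TP rate
    fp_rate = fp / (fp + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "fp_rate": fp_rate, "f_measure": round(f_measure, 4)}

print(metrics(tp=80, fp=20, fn=10, tn=90))
```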
Selecting the correct Data Mining Method: Classification & InDaMiTe-R (IOSR Journals)
Application Of Analytic Hierarchy Process And Artificial Neural Network In Bi... (IJARIDEA Journal)
Abstract— An appropriate decision to bid initiates all bid preparation steps. Selective bidding reduces the number of proposals to be submitted by the contractor and saves tender preparation time, which can be utilized for refining the estimated cost. Usually, in industrial engineering applications, the final decision is based on the evaluation of many alternatives. This becomes a very difficult problem when the criteria are expressed in different units or the pertinent data are not easily quantifiable. This paper emphasizes the use of the Analytic Hierarchy Process (AHP) for analyzing the risk degree of each factor, so that the decision can be taken accordingly in deciding an appropriate bid. AHP helps to decide the best solution from various selection criteria. The study also suggests a much broader applicability of AHP and ANN techniques to bidding decisions.
Keywords— Analytic Hierarchy Process(AHP), Artificial Neural Network(ANN), Consistency Index(CI),
Consistency Ratio(CR), Random Index(RI), Risk degree.
GRID COMPUTING: STRATEGIC DECISION MAKING IN RESOURCE SELECTION (IJCSEA Journal)
The rapid development of computer networks around the world has generated new areas, especially in computer instruction processing. In grid computing, instruction processing is performed by external processors available to the system. An important topic in this area is scheduling tasks to available external resources; however, we do not deal with that topic here. In this paper we work on strategic decision making for selecting the best alternative resources for processing instructions with respect to criteria in special conditions, where the criteria might be security, political, technical, cost, etc. Grid computing resources should be determined with respect to the processing objectives of a program's instructions. This paper seeks a way, through combining the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), to rank and select available resources according to the relevant criteria when allocating instructions to resources. Our findings will therefore help technical managers of organizations in choosing and ranking candidate alternatives for processing program instructions.
This document discusses measuring the importance and weight of decision makers in group decision making processes. It introduces a method to calculate weights for decision makers based on the number of iterations their pairwise comparison matrices take to reach convergence when using the eigenvector method for criteria weighting. A case study applies this method to determine the weights of three decision makers (A, B, and C) who provide pairwise comparison matrices for weighting four criteria related to selecting a form. The number of iterations for each decision maker's matrix to converge determines their absolute and relative weights in the group decision making process.
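The iteration-count idea described above can be sketched with the power method on each decision maker's pairwise comparison matrix. The abstract does not give the exact formula mapping iteration counts to decision-maker weights; the inverse-proportional rule and all matrices below are assumptions for illustration:

```python
# Power-method iteration count as a proxy for a decision maker's
# consistency: a perfectly consistent matrix stabilises almost
# immediately. The inverse-proportional weighting rule is an assumption.

def iterations_to_converge(M, tol=1e-9, max_iter=100):
    """Count power-method iterations until the priority vector stabilises."""
    n = len(M)
    w = [1.0 / n] * n
    for k in range(1, max_iter + 1):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]
        if max(abs(a - b) for a, b in zip(v, w)) < tol:
            return k
        w = v
    return max_iter

# A perfectly consistent 2x2 matrix converges in two iterations.
print(iterations_to_converge([[1, 2], [0.5, 1]]))  # 2

# Hypothetical iteration counts for decision makers A, B, C, weighted
# inversely to their counts and normalised.
iters = {"A": 5, "B": 10, "C": 20}
total = sum(1 / k for k in iters.values())
dm_weights = {d: round((1 / k) / total, 4) for d, k in iters.items()}
print(dm_weights)  # {'A': 0.5714, 'B': 0.2857, 'C': 0.1429}
```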
Comparison of Max100, SWARA and Pairwise Weight Elicitation Methods (IJERA Editor)
Decision making is used in every part of life and is realised by each action taken. The presence of correct and satisfactory solutions to problems is very important for people, institutions and organizations. Multiple Criteria Decision Making (MCDM) techniques were developed for this purpose. Former studies show that the weight elicitation methods used in solving MCDM problems play an important role in defining the importance of criteria and obtaining the best and most satisfying results for decision makers. The aim of the paper is to compare the range variability between criteria for the Max100, Stepwise Weight Assessment Ratio Analysis (SWARA) and Pairwise Comparison weight elicitation methods, and to give suggestions about the conditions for using each method. This is the first time SWARA has been compared with the Pairwise Comparison and Max100 methods, which makes this study different. Considering the results of the study, the variability of the Pairwise Comparison method is higher than that of the Max100 and SWARA methods. Besides, Max100 is found to be the easiest method to use, and the Pairwise Comparison method's way of scoring is judged the most reliable. In light of the results obtained from the methods, some conditions of usage are suggested.
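As a sketch of one of the compared methods: SWARA derives weights from expert-supplied comparative-importance values s_j for criteria already ranked from most to least important (k_j = s_j + 1, q_j = q_{j-1} / k_j, weights obtained by normalising the q values). The s-values below are hypothetical:

```python
# SWARA weight derivation from comparative-importance values. The
# expert inputs (s-values) are invented for illustration.

def swara_weights(s):
    """s: comparative-importance values s_2..s_n for criteria already
    ranked from most to least important (s has n-1 entries)."""
    q = [1.0]                       # q_1 = 1 for the top-ranked criterion
    for sj in s:
        q.append(q[-1] / (1.0 + sj))  # q_j = q_{j-1} / (s_j + 1)
    total = sum(q)
    return [round(x / total, 4) for x in q]

print(swara_weights([0.25, 0.5]))  # [0.4286, 0.3429, 0.2286]
```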
Implementation of AHP-MAUT and AHP-Profile Matching Methods in OJT Student Pl... (Gede Surya Mahendra)
ABSTRACT
To improve the quality and quantity of employment, OJT is much needed by Monarch Bali students, but the process, which is still manual, means that decisions are taken less quickly, accurately, effectively and efficiently. In line with the Monarch Bali roadmap, an automation system needs to be developed to improve decision-making performance for OJT student placement by building a DSS. The methods used in this research are AHP-MAUT and AHP-PM. There were 3 decision makers in this study and, out of a total of 500 OJT students, 8 OJT students for the F&B class, 12 for the Housekeeping class, 13 for the Catering class, and 17 for the Food Management class, for a total of 50 OJT students. In the AHP-MAUT implementation, the OJT student from the F&B class with the code StudentD04 has the highest preference value of 0.5724, and the OJT student from the beverage class with the code StudentA02 has a preference value of 4.1155 calculated using AHP-PM, each being ranked first.
Keywords:
Analytical Hierarchy Process, Multi-Attribute Utility Theory, Profile Matching,
CRISP-DM,
On the Job Training
Analytical Hierarchy Process Approach An Application Of Engineering Education (Bria Davis)
This document describes how the analytical hierarchy process (AHP) was used to select a student from an engineering college in India to receive an all-round excellence award. Seven criteria were used to evaluate five student branches - electronics and communications engineering, electronics and electrical engineering, mechanical engineering, civil engineering, and computer science engineering. The AHP involved establishing a hierarchy, making pairwise comparisons between criteria and alternatives, calculating weights using the eigenvector method, and ensuring consistency. It was found that a student from the electronics and communications engineering branch received the award.
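The eigenvector and consistency steps mentioned above can be sketched as follows. The row geometric-mean approximation stands in for the full eigenvector computation, and the 3x3 judgement matrix is invented for illustration:

```python
# AHP priority weights via the row geometric-mean approximation of the
# principal eigenvector, plus the consistency ratio CR = CI / RI.
# The judgement matrix below is hypothetical.
import math

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's RI values

def ahp_weights_and_cr(M):
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]   # row geometric means
    w = [g / sum(gm) for g in gm]                     # normalised weights
    # Estimate lambda_max from (M w)_i / w_i averaged over rows.
    mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(mw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                          # consistency index
    return [round(x, 4) for x in w], round(ci / RANDOM_INDEX[n], 4)

# Hypothetical judgements: criterion 1 moderately beats 2, strongly beats 3.
M = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
weights, cr = ahp_weights_and_cr(M)
print(weights, cr)  # CR below 0.1 indicates acceptable consistency
```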
Implementation of AHP and TOPSIS Method to Determine the Priority of Improvi... (AM Publications)
Many asset management problems found in audit results indicate weaknesses in the implementation of asset management that adversely affect the Audit Board of the Republic of Indonesia. An evaluation of the implementation of asset management is the first step in improving its quality. This study aims to build a decision support system to help prioritize improvements in the management of government assets using the AHP and TOPSIS methods. The integration of AHP and TOPSIS is used to perform the weighting and ranking of alternatives. Weights are obtained from comparisons of the importance of criteria carried out by experts, while alternatives are ranked so that the best alternative has the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution. The results of this analysis yield a system used for prioritization based on the defined criteria. Testing shows that the system can provide an alternative ordering with an accuracy rate of 83% and an average score of 4.91 on a 5-point evaluation scale covering effectiveness, efficiency and user satisfaction.
Decision support systems, Supplier selection, Information systems, Boolean al... (ijmpict)
For organizations operating with number of products/services and number of suppliers, to select the right
supplier meeting all their requirements will be a challenging job. Such organizations need a good
decision support system to evaluate the suppliers effectively. Several decision support systems have been
reported to deal with complex selection process to decide the right supplier. Many mathematical models
have also been developed. This paper presents a new method, named as Bit Decision Making (BDM)
method, which treats such complex system of decision making as a collection and sequence of reasonable
number of meaningful and manageable sub-systems by identifying and processing the relevant decision
criteria in each sub-system. Boolean logic and Boolean algebra are used to assign binary digit
values to the selection criteria and to generate mathematical equations that correlate the inputs to the output
at each stage of decision making. Each sub-system with its own mathematical model has been treated as a
standardized decision sub-system for that phase of making decision in evaluating suppliers. The sequence
and connectivity of the sub-systems along with their outputs finally lead to selection of the best supplier. A
real-world case of evaluation of information technology (IT) tenders has been dealt with for application of
the proposed method. The paper discusses in detail the theory, methodology, application and features of
the new method.
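The binary-criteria idea above can be sketched with one hypothetical decision sub-system. The particular criteria and the Boolean equation below are invented for illustration; the paper's actual sub-system equations are not reproduced here:

```python
# Bit Decision Making sketch: criteria take binary values and are
# combined by a Boolean equation for one stage of the decision.
# Criteria, equation and supplier data are all hypothetical.

def stage_decision(quality_ok, delivery_ok, certified, price_ok):
    """One invented sub-system: pass if the supplier meets quality and
    delivery, or is certified with an acceptable price."""
    return (quality_ok & delivery_ok) | (certified & price_ok)

suppliers = {
    "S1": (1, 1, 0, 0),
    "S2": (0, 1, 1, 1),
    "S3": (0, 0, 1, 0),
}
passed = [s for s, bits in suppliers.items() if stage_decision(*bits)]
print(passed)  # ['S1', 'S2']
```

In the paper's scheme, the suppliers passing one sub-system would feed into the next sub-system in the sequence until the best supplier remains.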
Supplier ranking and selection in a bakery (IOSR Journals)
This document describes a study that used the Analytic Hierarchy Process (AHP) to evaluate and rank three suppliers (Supplier A, B, C) of a key ingredient for a bakery in Nigeria. The AHP involves breaking the decision down into a hierarchy of criteria and sub-criteria. Five main criteria were identified: quality, performance, service, cost, and supplier profile. Pairwise comparisons were made between criteria to calculate weights. Each supplier was then scored on each sub-criterion and an overall weight calculated. It was found that Supplier C provided the best overall value, though the suppliers were generally quite close in performance.
Selecting Best Tractor Ranking Wise by Software using MADM(Multiple –Attribut... (IRJET Journal)
This document discusses using a multi-attribute decision making (MADM) technique called TOPSIS to select the best tractor among alternatives. It begins with an introduction to MADM problems and the TOPSIS method. The TOPSIS method involves normalizing a decision matrix, determining ideal and negative ideal alternatives, calculating distances from each alternative to the ideals, and ranking alternatives based on relative closeness. The document then applies this process to select the best tractor among 6 models, using attributes like price, power, fuel efficiency, and maintenance cost to evaluate the alternatives. It aims to help buyers select the tractor best suited to their application needs based on technical specifications.
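The TOPSIS steps enumerated above (normalise the decision matrix, weight it, find the ideal and negative-ideal alternatives, compute distances, and rank by relative closeness) can be sketched as follows; the tractor data, weights and criterion types are invented for illustration:

```python
# TOPSIS: vector normalisation, weighting, ideal/negative-ideal points,
# Euclidean distances, and relative-closeness scores. Data are invented.
import math

def topsis(matrix, weights, benefit):
    """matrix[i][j]: alternative i on criterion j."""
    cols = list(zip(*matrix))
    # 1. Vector normalisation, then weighting.
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    V = [[weights[j] * row[j] / norms[j] for j in range(len(weights))]
         for row in matrix]
    vcols = list(zip(*V))
    # 2. Ideal and negative-ideal values per criterion.
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(vcols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(vcols)]
    # 3. Distances to each, then relative closeness d- / (d+ + d-).
    scores = []
    for row in V:
        d_pos = math.sqrt(sum((v - a) ** 2 for v, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((v - b) ** 2 for v, b in zip(row, worst)))
        scores.append(round(d_neg / (d_pos + d_neg), 4))
    return scores

# Hypothetical tractors scored on power (benefit) and price (cost).
matrix = [[40, 6.0], [50, 7.5], [45, 6.5]]
print(topsis(matrix, [0.5, 0.5], benefit=[True, False]))  # third ranks first
```

Alternatives are ranked by descending closeness score, exactly as the abstract describes.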
This document provides a draft proposal for a graduation project that aims to determine a quantitative definition for "significantly more" in the context of comparisons made using the Analytic Hierarchy Process (AHP) method. The proposal outlines that AHP uses a 1-9 scale to compare criteria, but this can sometimes lead to inconsistent results. The project will conduct surveys to understand if inconsistencies are due to the scale itself or individuals' judgments. Based on survey results, the team will develop an algorithm to customize the scale for each individual, reducing inconsistencies. The proposal describes the project tasks, methods, constraints, risks and schedule.
This document summarizes a knowledge engineering approach using analytic hierarchy process (AHP) to resolve conflicts between experts in risk-related decision making. It proposes using a modified version of AHP to increase transparency in the analysis procedure. This allows identification of major causes of inter-expert discrepancy, which are differences in unstated assumptions and subjective weightings of risk factors. The document demonstrates how AHP can systematically decompose complex decision problems, evaluate alternatives based on multiple criteria, and aggregate results to provide an overall evaluation that incorporates differing expert opinions in a consistent manner.
AHP technique a way to show preferences amongst alternativesijsrd.com
This article presents a review of the applications of the Analytic Hierarchy Process (AHP). AHP is a multiple criteria decision-making tool that has been used in almost all applications related to decision-making. Decisions involve many intangibles that need to be traded off. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales. It is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements that represents how much more one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible, to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding over all such nodes. An illustration is also included.
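The synthesis step described above, multiplying each local priority by the priority of its parent node and summing, can be sketched with hypothetical numbers (the criteria, alternatives, and weights below are illustrative, not from the article):

```python
# Illustrative criterion weights and, per criterion, local alternative priorities.
criterion_w = {"quality": 0.5, "cost": 0.3, "service": 0.2}
local = {
    "quality": {"A": 0.6, "B": 0.4},
    "cost":    {"A": 0.3, "B": 0.7},
    "service": {"A": 0.5, "B": 0.5},
}

# Synthesis: each alternative's global priority is the weight of each
# parent criterion times the alternative's local priority under it, summed.
global_priority = {
    alt: sum(criterion_w[c] * local[c][alt] for c in criterion_w)
    for alt in ["A", "B"]
}
# global priorities still sum to 1 because each local vector sums to 1
```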
The Weights Detection of Multi-Criteria by using Solver IJECEIAES
Multiple criteria, generally used for decision analysis, have certain characteristics that relate to the purpose of the decision. They have complex structures and carry different weights depending on the assessors and the purpose of the decision. Expert judgment is used to detect the criteria weights applied by assessors. The aim of this study is a model that detects criteria weights and biases in subcontractor selection and identifies the significant weights as decisive criteria. The method used to model the weight detection is the Solver application. Data totaling 40 sets were collected, consisting of the assessors' assessments and the experts' judgments. The result is a pattern of weight and bias detection: the proposed model was able to detect 20 criteria weights and biases, consisting of 4 criteria with a total weight of 60% (the decisive criteria) and 16 criteria with a total weight of 40%. The model was built by a training process performed with the Solver, yielding an MSE of 9.73711e-08 for training and 0.00900528 for validation. The novelty of the study is a model that detects the pattern of criteria weights and biases in subcontractor selection by transferring the expert's judgment using the Solver application.
The pertinent single-attribute-based classifier for small datasets classific...IJECEIAES
Classifying a dataset using machine learning algorithms can be a big challenge when the target is a small dataset. The OneR classifier can be used for such cases due to its simplicity and efficiency. In this paper, we revealed the power of a single attribute by introducing the pertinent single-attribute-based heterogeneity-ratio classifier (SAB-HR), which uses a pertinent attribute to classify small datasets. The SAB-HR's feature selection method uses the Heterogeneity-Ratio (H-Ratio) measure to identify the most homogeneous attribute among the attributes in the set. Our empirical results on 12 benchmark datasets from the UCI machine learning repository showed that the SAB-HR classifier significantly outperformed the classical OneR classifier for small datasets. In addition, using the H-Ratio as a feature selection criterion for selecting the single attribute was more effectual than traditional criteria such as Information Gain (IG) and Gain Ratio (GR).
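As context for the comparison, the classical OneR baseline the paper benchmarks against can be sketched as below: pick the single attribute whose value-to-majority-class rule makes the fewest training errors (toy data, not the UCI benchmarks):

```python
from collections import Counter, defaultdict

def one_r(rows, labels, n_attrs):
    """OneR: for each attribute, build a value -> majority-class rule and
    count its training errors; keep the attribute with the fewest errors."""
    best = None
    for a in range(n_attrs):
        buckets = defaultdict(Counter)
        for row, y in zip(rows, labels):
            buckets[row[a]][y] += 1
        rule, errors = {}, 0
        for value, counts in buckets.items():
            cls, n = counts.most_common(1)[0]
            rule[value] = cls
            errors += sum(counts.values()) - n   # rows the majority rule misses
        if best is None or errors < best[1]:
            best = (a, errors, rule)
    return best  # (attribute index, training errors, value -> class rule)

# toy example: attribute 0 (weather) perfectly predicts the label
rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "hot"), ("rain", "mild")]
labels = ["no", "no", "yes", "yes"]
attr, errors, rule = one_r(rows, labels, n_attrs=2)
```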
A decision support system is an interactive system that supports the decision-making process through alternatives derived from the processing of data, information and design models. This research builds a decision support system model for determining scholarship admission, a problem that often creates obstacles in distribution and fails to reach the intended recipients. To give better results and overcome these obstacles, the problem is resolved through a Fuzzy approach to the Analytic Hierarchy Process (AHP), modeled within a decision support system. Fuzzy membership functions represent the assessment criteria, and the resulting values are combined with the weight vector given by the Analytic Hierarchy Process (AHP); the AHP ranking process then determines the best alternatives to be selected as scholarship recipients. The Fuzzy AHP approach, applied to the determination of scholarship admission in decision support system modeling, gives very good results focused on the intended goal.
Fuzzy Optimization Method In The Search And Determination of Scholarship Reci...Editor IJCATR
This document describes an electronic doorbell system that uses a keypad and GSM for home security. The system consists of a doorbell connected to a microcontroller that triggers a GSM module to send an SMS to the homeowner when the doorbell is pressed. The homeowner can then respond via button press to open the door, or a message will be displayed if they do not respond. An authorized person can enter a password on the keypad, and if multiple wrong passwords are entered, a message will be sent to the homeowner about a potential burglary attempt. The system aims to provide notification to homeowners and prevent unauthorized access through use of passwords and messaging capabilities.
Augmented reality, the new-age technology, has widespread applications in every field imaginable. This technology has proven to be an inflection point in numerous verticals, improving lives and performance. In this paper, we explore the various possible applications of Augmented Reality (AR) in the field of Medicine. The objective of using AR in medicine, or generally in any field, is that AR helps motivate the user, makes sessions interactive, and assists in faster learning. We discuss the applicability of AR in the field of medical diagnosis: augmented reality technology reinforces remote collaboration, allowing doctors to diagnose patients from a different locality. Additionally, we believe that a much more pronounced effect can be achieved by bringing together the cutting-edge technology of AR and the lifesaving field of medical sciences. AR is a mechanism that could be applied in the learning process too. Similarly, virtual reality could be used in fields where more practical experience is needed, such as driving, sports, and neonatal care training.
Image fusion is a subfield of image processing in which more than one image is fused to create an image where all the objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene: multi-sensor images are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, the objects closer to the camera are in focus while the farther objects are blurred; conversely, when the farther objects are focused, the closer objects are blurred. To obtain an image where all the objects are in focus, fusion is performed either in the spatial domain or in a transformed domain. In recent times, the applications of image processing have grown immensely, yet due to the limited depth of field of optical lenses, especially at greater focal lengths, it is often impossible to capture an image with all objects in focus. Fusion therefore plays an important role in enabling other image processing tasks such as image segmentation, edge detection, stereo matching, and image enhancement. Hence, a novel feature-level multi-focus image fusion technique is proposed. Results of extensive experimentation are presented to highlight the efficiency and utility of the proposed technique, and the work further compares fuzzy-based image fusion with a neuro-fuzzy fusion technique, along with quality evaluation indices.
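A minimal spatial-domain sketch of the underlying idea: for each block, keep the source whose block has higher local variance, treating variance as a focus measure. This illustrates block-wise multi-focus fusion in general, not the paper's feature-level technique, and assumes two grayscale arrays of matching shape:

```python
import numpy as np

def fuse_by_local_variance(img_a, img_b, block=8):
    """Block-wise multi-focus fusion: local variance is used as a focus
    measure, and each output block is copied from the sharper source."""
    out = np.empty_like(img_a)
    h, w = img_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            pa = img_a[i:i+block, j:j+block]
            pb = img_b[i:i+block, j:j+block]
            out[i:i+block, j:j+block] = pa if pa.var() >= pb.var() else pb
    return out

# demo: each input is "sharp" (non-constant) in a different corner block,
# so the fused image should keep the detail from both
a = np.zeros((16, 16)); a[0, 0] = 100.0
b = np.zeros((16, 16)); b[15, 15] = 100.0
fused = fuse_by_local_variance(a, b)
```

Real systems add a consistency check between neighboring blocks to avoid blocking artifacts; the sketch omits that for brevity.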
Graphs have become the dominant representation for many tasks, as they provide a structure to capture entities and their relations. A powerful role of networks/graphs is to bridge local features that exist in vertices as they develop into patterns that help explain how nodal relations and their edges ripple into complex effects across a graph. User clusters are formed as a result of interactions between entities, yet many users today can hardly categorize their contacts into groups such as "family", "friends" or "colleagues". Analyzing a user's social graph via implicit clusters therefore enables dynamism in contact management. This study implements that dynamism via a comparative study of a deep neural network and a friend-suggestion algorithm. We analyze a user's implicit social graph and seek to automatically create custom contact groups using metrics that classify contacts based on the user's affinity to them. Experimental results demonstrate the importance of both the implicit group relationships and the interaction-based affinity in suggesting friends.
This paper applies the Gryllidae Optimization Algorithm (GOA) to solve the optimal reactive power problem. The proposed GOA approach is based on the chirping characteristics of Gryllidae (crickets). Commonly, male Gryllidae chirp, though some females do as well; males attract females with the sound they produce and also warn other Gryllidae against danger with it. The hearing organs of the Gryllidae are housed in an expansion of their forelegs, through which they orient toward the produced fluttering sounds. The proposed GOA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the algorithm reduces the real power loss considerably.
In the wake of the sudden replacement of wood and kerosene by gas cookers for several purposes in Nigeria, gas leakage has caused extensive damage in homes and laboratories, among other places. Installation of gas leakage detection devices has been globally adopted to eliminate accidents related to gas leakage. We present an alternative approach to developing a device that can automatically detect and control gas leakages and also monitor temperature. The system detects leakage of LPG (Liquefied Petroleum Gas) using a gas sensor, then triggers the control system response, which employs a ventilator system, a mobile phone alert, and an alarm when the LPG concentration in the air exceeds a certain level. The performance of two gas sensors (MQ5 and MQ6) was tested for a guided decision. Also, when the temperature of the environment poses a danger, an LED indicator, a buzzer, and a 16x2 LCD display indicate temperature and gas leakage status in degrees Celsius and PPM respectively. Attention was given to the response time of the control system, and it was ascertained that this system significantly increases the chances and efficiency of eliminating gas-leakage-related accidents.
The feature selection problem is one of the main problems in the text and data mining domain. This paper presents a comparative study of feature selection methods for Arabic text classification. Five feature selection methods were selected: ICHI square, CHI square, Information Gain, Mutual Information, and Wrapper. They were tested with five classification algorithms: Bayes Net, Naive Bayes, Random Forest, Decision Tree, and Artificial Neural Networks. An Arabic data collection consisting of 9,055 documents was used, and the methods were compared by four criteria: Precision, Recall, F-measure, and time to build the model. The results showed that the improved ICHI feature selection achieved almost all the best results in comparison with the other methods.
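As a reference point, the classical CHI-square criterion for a binary term/class contingency table can be computed as below. This sketches plain CHI square for text feature selection, not the paper's improved ICHI variant:

```python
import numpy as np

def chi2_score(feature_present, labels):
    """Chi-square statistic for one binary term against a binary class:
    chi2 = N * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)) over the 2x2 table."""
    f = np.asarray(feature_present, dtype=bool)
    y = np.asarray(labels, dtype=bool)
    n = len(y)
    a = np.sum(f & y)     # term present, class positive
    b = np.sum(f & ~y)    # term present, class negative
    c = np.sum(~f & y)    # term absent, class positive
    d = np.sum(~f & ~y)   # term absent, class negative
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# a term perfectly correlated with the class scores chi2 = N;
# a term independent of the class scores 0
```

Ranking terms by this score and keeping the top-k is the usual filter-style selection step before training any of the classifiers listed above.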
The document proposes the Gentoo Penguin Algorithm (GPA) to solve the optimal reactive power problem. The goal is to minimize real power loss. GPA is inspired by the natural behaviors of Gentoo penguins. In GPA, penguin positions represent potential solutions. Penguins move toward other penguins with lower "cost" or higher heat concentration, representing better solutions. Cost is defined by heat concentration and distance between penguins. Heat radiation decreases with distance. The algorithm is tested on the IEEE 57 bus system and reduces real power loss effectively compared to other methods.
08 20272 academic insight on applicationIAESIJEECS
This research has thrown up many questions in need of further investigation. A descriptive quantitative-qualitative design was used, based on a common survey form. An interview item was also applied to discover whether participants felt the media-based approach supplements their learning in academic English writing classes. Data on academics' attitudes toward using Skype as a supporting tool for delivering lessons were analyzed against chosen variables: occupation, year of education, and experience with Skype. No statistically significant differences in the use of Skype units were found due to medical academics' major or experience, while statistically significant differences were found due to the experience-with-Skype variable, in favor of academics with no prior Skype use. Skype as an instructional medium is a positive tool for delivering academic medical writing content and supporting education. Academics who do not have enough time to participate in classes feel comfortable using the Skype-based approach for scientific writing, and those who took part in the course attributed their approval of the medium to learning innovative academic medical writing.
Cloud computing has had a sweeping impact on human productivity. Today it is used for computing, storage, predictions, and intelligent decision-making, among other things. Intelligent decision-making using machine learning has pushed cloud services to be even faster, more robust, and more accurate. Security remains one of the major concerns affecting cloud computing growth; moreover, there exist various research challenges in cloud computing adoption, such as the lack of well-managed service level agreements (SLAs), frequent disconnections, resource scarcity, interoperability, privacy, and reliability. A tremendous amount of work still needs to be done to explore the security challenges arising from the widespread use of cloud deployment with containers. We also discuss the impact of cloud computing and cloud standards. Hence, in this research paper, a detailed survey of cloud computing concepts, architectural principles, key services, and the implementation, design, and deployment challenges of cloud computing is presented, and important future research directions in the era of machine learning and data science are identified.
A notary is an official authorized to make authentic deeds regarding all deeds, agreements, and stipulations required by a general rule. Activities carried out at a notary office, such as recording client data and file data, still use traditional systems that tend to be manual. The resulting problem is inefficiency in data processing and in providing information to clients. Clients have difficulty getting information about the progress of documents being handled at the notary's office and must take the time to come to the office repeatedly to check on the progress of their document files. The purpose of this study is to make it easier for clients to obtain information about work in progress, and for employees to process incoming documents, by implementing an administrative system. The system was developed with the waterfall development method and uses multi-channel access technology integrated into the website to simplify delivering and requesting information to and from clients via Telegram and an SMS gateway. Clients come to the office only when the system notifies them via Telegram or SMS that they must appear in person, saving time and avoiding excessive transportation costs. Alpha testing showed that all system functions work properly, and beta testing, conducted by distributing a feasibility questionnaire to end users, showed that 96% of users agree the system is feasible to implement.
In this work, the Tundra Wolf Algorithm (TWA) is proposed to solve the optimal reactive power problem. In the proposed TWA, to prevent the search agents from becoming trapped in local optima, the convergence toward the global optimum is divided based on two different conditions. The omega tundra wolf is taken as a search agent instead of being obliged to pursue only the first three best candidates. Increasing the number of search agents improves the exploration capability of the tundra wolves over a wide range. The proposed TWA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the proposed algorithm reduces the real power loss effectively.
In this work, the Predestination of Particles Wavering Search (PPS) algorithm is applied to solve the optimal reactive power problem. The PPS algorithm is modeled on the motion of particles in the exploration space, where movement is based on gradient and swarming motion. In gradient-based progress, particles are permitted to move at a steady velocity, but when an outcome is poor compared with the previous result, the particle's velocity is immediately reversed with half of its magnitude; this helps reach a local optimal solution and is expressed as wavering movement. The proposed PPS algorithm is evaluated on the standard IEEE 14-, 30-, 57-, 118-, and 300-bus systems, and simulation results show that PPS reduces the power loss efficiently.
In this paper, the Mine Blast Algorithm (MBA) is hybridized with the Harmony Search (HS) algorithm to solve the optimal reactive power dispatch problem. MBA is based on the explosion of landmines and HS on the creative process of musicians; the two are combined to solve the problem. In MBA, the initial distance of the shrapnel pieces is reduced gradually to let the mine bombs search the probable global minimum location and thereby amplify the global exploration capability. Harmony search imitates the music improvisation process, where musicians adjust their instruments' pitch in searching for a best state of harmony. Hybridizing the Mine Blast Algorithm with Harmony Search (MH) improves the search in the solution space: the mine blast algorithm improves exploration and the harmony search algorithm augments exploitation, so the proposed algorithm starts with exploration and gradually moves to the exploitation phase. The proposed hybridized MH algorithm has been tested on the standard IEEE 14- and 300-bus test systems, where it reduces real power loss considerably. MH was then tested on the IEEE 30-bus system (considering the voltage stability index), attaining real power loss minimization, voltage deviation minimization, and voltage stability index enhancement.
Artificial Neural Networks have proved their efficiency in a large number of research domains. In this paper, we apply Artificial Neural Networks to Arabic text for language modeling, text generation, and missing-text prediction. On the one hand, we adapt Recurrent Neural Network architectures to model the Arabic language in order to generate correct Arabic sequences. On the other hand, Convolutional Neural Networks are parameterized, based on some specific features of Arabic, to predict missing text in Arabic documents. We demonstrate the power of our adapted models in generating and predicting correct Arabic text compared to the standard models. The models were trained and tested on well-known free Arabic datasets, with promising results and sufficient accuracy.
In present-day communications, speech signals get contaminated by various sorts of noise that degrade speech quality and adversely impact speech recognition performance. To overcome these issues, a novel approach for speech enhancement using modified Wiener filtering is developed: power spectrum computation is applied to the degraded signal to obtain the noise characteristics from a noisy spectrum. In the next phase, an MMSE technique is applied in which the Gaussian distribution of each signal, i.e. the original and the noisy signal, is analyzed. The Gaussian distribution provides the spectrum estimation and spectral coefficient parameters that can be used for probabilistic model formulation. Moreover, a-priori SNR computation is incorporated for coefficient updating and noise presence estimation, operating similarly to conventional VAD. However, the conventional VAD scheme is based on a hard threshold that cannot deliver satisfactory performance, so a soft-decision-based threshold is developed to improve speech enhancement. An extensive simulation study is carried out in MATLAB on the NOIZEUS speech database, and a comparative study shows the proposed approach performing better than the existing technique.
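The Wiener gain stage at the core of such systems can be sketched as follows. The decision-directed smoothing and the soft-decision threshold described above are omitted, so this is a plain per-bin estimate under stated assumptions (a fixed noise power spectrum and a maximum-likelihood a-priori SNR):

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=1e-3):
    """Classical spectral Wiener gain G = SNR / (1 + SNR), with the
    a-priori SNR approximated by max(posterior SNR - 1, 0); a small
    spectral floor avoids musical noise from fully zeroed bins."""
    snr_prio = np.maximum(noisy_power / noise_power - 1.0, 0.0)
    gain = snr_prio / (1.0 + snr_prio)
    return np.maximum(gain, floor)

# bins where the noisy power barely exceeds the noise estimate are
# strongly attenuated; clearly voiced bins pass nearly unchanged
g = wiener_gain(np.array([2.0, 1.0, 101.0]), np.array([1.0, 1.0, 1.0]))
```

In a full enhancer this gain multiplies the noisy magnitude spectrum frame by frame before the inverse transform, and the noise estimate is updated only in frames the VAD marks as speech-absent.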
Previous research has highlighted that the neuro-signals of Alzheimer's disease patients are less complex and show lower synchronization than those of healthy, normal subjects. The changes in the EEG signals of Alzheimer's subjects start at an early stage but are not clinically observed and detected. To detect these abnormalities, three synchrony measures and wavelet-based features were computed and studied on an experimental database. After computing these measures, it is observed that phase synchrony and coherence-based features are able to distinguish between Alzheimer's disease patients and healthy subjects. A Support Vector Machine classifier is used for classification, giving 94% accuracy on the experimental database used. Combining these synchrony features with other relevant features can yield a reliable system for diagnosing Alzheimer's disease.
Attenuation correction for PET/MR hybrid imaging systems and dose planning for MR-based radiation therapy remain challenging due to the lack of high-energy photon attenuation information. We present a new method that uses learned nonlinear local descriptors and feature matching to predict pseudo-CT images from T1w and T2w MRI data. The nonlinear local descriptors are obtained by projecting linear descriptors into a nonlinear high-dimensional space using an explicit feature map and low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched within a constrained spatial range of the MR images in the training dataset, and the pseudo-CT patches are then estimated through k-nearest-neighbor regression. The proposed method for pseudo-CT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects.
The aim of the cognitive radio paradigm is to alleviate the scarcity of spectral resources for wireless communication through intelligent sensing and quick resource allocation techniques. Secondary users (SUs) actively obtain spectrum access opportunities by supporting primary users (PUs) in cognitive radio networks (CRNs). At present, spectrum access is enabled through the cooperative, link-level frame-based cooperation (LLC) principle, in which SUs independently act as relays for PUs to earn spectrum access opportunities. Unfortunately, this LLC approach cannot fully exploit spectrum access opportunities to enhance the throughput of CRNs and fails to motivate PUs to join the spectrum sharing process. To overcome this drawback, the network-level cooperation (NLC) principle was introduced, where SUs collaborate with PUs jointly, session by session, instead of frame by frame, for spectrum access opportunities. The NLC approach addresses the challenges facing LLC. In this paper, we survey several models that have been proposed to tackle the problems of LLC, showing the relevant aspects of each model in order to characterize the parameters that should be taken into account to achieve a spectrum access opportunity.
In this paper, the author provides insights and lessons that can be learned from colleagues at American universities about their online education experiences. The literature review and previous studies of online education's gains are explored and summarized in this research. Emerging trends in online education are discussed in detail, and strategies to implement these trends are explained. The author provides several tools and strategies that enable universities to ensure the quality of online education. At the end of this research paper, the researcher provides examples from Arab universities that have successfully implemented online education and expanded their impact on society. This research provides a strategy and a model that universities in the Middle East can use as a roadmap to implement online education in their regions.
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket Management is a stand-alone J2EE program developed using Eclipse Juno. This project contains all the necessary information about maintaining a supermarket billing system. The core idea of the project is to minimize paperwork and centralize the data. All communication is handled in a secure manner: in this application the information is stored on the client itself, and for further security the database is stored in a back-end Oracle server so that no intruders can access it.
Height and depth gauge linear metrology.pdfq30122000
Height gauges may also be used to measure the height of an object by using the underside of the scriber as the datum. The datum may be permanently fixed, or the height gauge may have provision to adjust the scale. This is done by sliding the scale vertically along the body of the height gauge by turning a fine feed screw at the top of the gauge; then, with the scriber set to the same level as the base, the scale can be matched to it. This adjustment allows different scribers or probes to be used, as well as compensating for any errors in a damaged or resharpened probe.
Mechatronics is a multidisciplinary field that refers to the skill sets needed in the contemporary, advanced automated manufacturing industry. At the intersection of mechanics, electronics, and computing, mechatronics specialists create simpler, smarter systems. Mechatronics is an essential foundation for the expected growth in automation and manufacturing.
Mechatronics deals with robotics, control systems, and electro-mechanical systems.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Home security is of paramount importance in today's world, where we rely more on technology, home
security is crucial. Using technology to make homes safer and easier to control from anywhere is
important. Home security is important for the occupant’s safety. In this paper, we came up with a low cost,
AI based model home security system. The system has a user-friendly interface, allowing users to start
model training and face detection with simple keyboard commands. Our goal is to introduce an innovative
home security system using facial recognition technology. Unlike traditional systems, this system trains
and saves images of friends and family members. The system scans this folder to recognize familiar faces
and provides real-time monitoring. If an unfamiliar face is detected, it promptly sends an email alert,
ensuring a proactive response to potential security threats.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...Transcat
Join us for this solutions-based webinar on the tools and techniques for commissioning and maintaining PV Systems. In this session, we'll review the process of building and maintaining a solar array, starting with installation and commissioning, then reviewing operations and maintenance of the system. This course will review insulation resistance testing, I-V curve testing, earth-bond continuity, ground resistance testing, performance tests, visual inspections, ground and arc fault testing procedures, and power quality analysis.
Fluke Solar Application Specialist Will White is presenting on this engaging topic:
Will has worked in the renewable energy industry since 2005, first as an installer for a small east coast solar integrator before adding sales, design, and project management to his skillset. In 2022, Will joined Fluke as a solar application specialist, where he supports their renewable energy testing equipment like IV-curve tracers, electrical meters, and thermal imaging cameras. Experienced in wind power, solar thermal, energy storage, and all scales of PV, Will has primarily focused on residential and small commercial systems. He is passionate about implementing high-quality, code-compliant installation techniques.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Blood finder application project report (1).pdfKamal Acharya
Blood Finder is an emergency time app where a user can search for the blood banks as
well as the registered blood donors around Mumbai. This application also provide an
opportunity for the user of this application to become a registered donor for this user have
to enroll for the donor request from the application itself. If the admin wish to make user
a registered donor, with some of the formalities with the organization it can be done.
Specialization of this application is that the user will not have to register on sign-in for
searching the blood banks and blood donors it can be just done by installing the
application to the mobile.
The purpose of making this application is to save the user’s time for searching blood of
needed blood group during the time of the emergency.
This is an android application developed in Java and XML with the connectivity of
SQLite database. This application will provide most of basic functionality required for an
emergency time application. All the details of Blood banks and Blood donors are stored
in the database i.e. SQLite.
This application allowed the user to get all the information regarding blood banks and
blood donors such as Name, Number, Address, Blood Group, rather than searching it on
the different websites and wasting the precious time. This application is effective and
user friendly.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
IJEECS ISSN: 2502-4752
A Condorcet Voting theory based AHP approach for MCDM problems (Sweta Bhattacharya)
For example, there could be a situation wherein the decision maker finds it difficult to
distinguish among the scale points and conclude whether an alternative is 6 or 7 times more
important than the other.
This paper focuses on creating a decision support system for analyzing the factors
using the Analytical Hierarchy Process (AHP) in combination with Condorcet theory. The unique
contribution of the paper lies in its use of the Condorcet voting approach, wherein the
comparison matrix is generated based on a voting method to decide on the importance of one
criterion over another, followed by a quantitative ratio based approach to frame the pairwise
comparison matrix instead of the standard importance scale. In the paper, the traditional AHP
approach and the Condorcet-AHP approach are applied to the same data set to compare the
results generated by the two techniques. The accuracy of the results is compared based
on the consistency ratio value and the rankings of the criteria in similar studies conducted in the
same domain. The computed consistency ratio (CR) value and the final ranking of
the factors in this paper reveal the Condorcet-AHP method to be a better approach for solving
multi-criteria decision making problems. The next section provides a detailed description of
both approaches.
The research method section discusses the detailed steps involved in the traditional
AHP and Condorcet-AHP approaches. The results computed for both approaches are also
presented in the results section, which helps in understanding the advantages and
disadvantages of each approach and in concluding on the superiority of the Condorcet-AHP
based approach.
2. Research Method
2.1 Analytical Hierarchy Process (AHP)
The Analytical Hierarchy Process is a method developed by Thomas Saaty in 1980 at the
Wharton School of Business which helps in solving complex decision making problems
involving subjective evaluation measurements [4-6]. It breaks a complex problem
into multiple hierarchical levels as depicted in Figure 1 [5-6]. AHP is based on the method of
“measurement through pairwise comparisons relying on the judgements of experts to derive
priority scales” [7-8]. The process uses a hierarchical structure which measures and analyses
the various factors, simplifying the problem's complexity, and combines the factors to arrive at
the final decision [8]. The AHP process consists of six major phases. The first phase involves
defining the problem, considering all the associated assumptions and perspectives on which the
decision would be based [9]. The second phase creates the hierarchical structure defining the
overall goal and the intermediate levels [9-10]. The third stage encompasses a pairwise
comparison of the criteria in terms of the goal [10]. The pairwise comparison helps to
compute the relative weight of the criteria with respect to the goal, and thus the pairwise
comparison matrix is created. Quantitative data can be used for performing the comparison by
finding the ratio; if quantitative data is not available, a qualitative comparison can be performed
using the standard importance scale of Thomas Saaty (1980) as shown in Table 1 [11].
The accuracy of the decision depends on the expert's ability to quantify the comparison
of the criteria using the relevant values from the scale and to construct the pairwise matrix. This
matrix acts as the basis of all the computations involved in the AHP process. The fourth phase
involves calculation of the relative weights of the elements at each level. This is done by adding
the elements in each column and then deriving the normalized matrix by
dividing each column element by its column sum. The normalized average of each criterion is
calculated by computing the average of each row. Finally, the consistency ratio is calculated to
check for consistency issues; if any are found, the comparison matrix is reviewed and revised by
the subject matter experts and decision takers to achieve consistency [11-12].
Table 1. Importance Scale and descriptions
Importance Scale Importance Description
1 Equal importance of “i” and “j”
3 Moderate importance of “i” over “j”
5 Strong importance of “i” over “j”
7 Very Strong importance of “i” over “j”
9 Absolute Importance of “i” over “j”
IJEECS Vol. 7, No. 1, July 2017 : 276 – 286
Figure 1. Structure of AHP
Assuming the size of the comparison matrix A is n × n, where n denotes the number of
criteria being compared with respect to a specific goal or parent criterion, the components of
the matrix are the pairwise judgements a_ij. These elements need to be transitive and
reciprocal in order to make the matrix consistent [13]:

a_ij × a_jk = a_ik (1)

a_ij = 1 / a_ji (2)

where i, j and k index the elements of the matrix A. The comparison matrix, in which every
diagonal element a_ii = 1, would look like the following [13]:

A = [a_ij] =
| 1       a_12    ...  a_1n |
| 1/a_12  1       ...  a_2n |
| ...     ...     ...  ...  |
| 1/a_1n  1/a_2n  ...  1    | (3)

The matrix A is normalized to a matrix N in order to perform a consistency check. The N
matrix would look like the following [13]:

N = [n_ij], where n_ij = a_ij / c_j (4)

c_j = Σ_{i=1..n} a_ij (5)

where c_j is the sum of column j. The relative weight of each criterion is computed by dividing
the sum of the normalized values of each row by n. Hence the weight w_i of criterion “i” can
be computed using the following formula [13]:

w_i = (1/n) Σ_{j=1..n} n_ij (6)

The matrix A is considered consistent if A × w = n × w; the equation is treated as an
eigenvalue problem wherein the largest eigenvalue λ_max ≥ n. The consistency ratio (CR) is
calculated as [13]:
CR = CI / RI = Consistency Index / Random Index (7)

CI = (λ_max − n) / (n − 1) (8)

RI = random index for a matrix of size n (Saaty's tabulated value; 0.66 is used for n = 3 in
this paper) (9)

The value of CR determines the consistency of the judgement. A CR value of less
than 0.10 is considered acceptable; otherwise, revision of the values of a_ij is recommended [14].
The fifth step involves reviewing the final decisions and ensuring that they coincide with
the expectations of the decision taker. The sixth and final stage involves complete
documentation of all the above mentioned stages for the purpose of record keeping and
analysis of the process for continuous process improvement.
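The normalization and consistency computations of the third and fourth phases can be sketched in Python. This is a minimal illustration, with function and variable names of our own choosing and an illustrative 3 × 3 reciprocal matrix:

```python
import numpy as np

def ahp_weights_and_cr(A, RI):
    """Compute AHP priority weights and the consistency ratio (CR)
    for an n x n pairwise comparison matrix A, given a random index RI."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    col_sums = A.sum(axis=0)        # column sums, as in the normalization step
    N = A / col_sums                # normalized matrix: divide by column sum
    w = N.mean(axis=1)              # relative weights: average of each row
    lam_max = col_sums @ w          # approximation of the largest eigenvalue
    CI = (lam_max - n) / (n - 1)    # consistency index, Eq. (8)
    return w, CI / RI               # weights and CR = CI / RI, Eq. (7)

# Illustrative 3x3 reciprocal matrix; RI = 0.66 for n = 3 as used in this paper
A = [[1, 1/2, 1/4],
     [2, 1,   1/4],
     [4, 4,   1]]
w, CR = ahp_weights_and_cr(A, RI=0.66)
```

Because no intermediate rounding occurs, the computed weights and CR may differ in the third decimal place from values obtained by hand with rounded intermediate results.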
Analytical Hierarchy Process (AHP) has been widely used to solve various multi-criteria
decision making problems. Some of the most popular domains of AHP application have been in
selection, ranking and prioritization, design, evaluation, measurement and allocation related
issues. As an example, AHP was used to resolve supplier selection problem where three main
criteria and six sub criteria for supplier selection were evaluated to decide upon the best
supplier. The results of the selection were further validated by the supplier management team in
Carglass, Turkey resulting in better productivity in the company [15]. AHP was also used for the
selection of the best team leader for a project which yielded better accuracy and robustness in
choosing the most fitting leader [16]. Insurance decisions are often the most difficult ones to
make and AHP method was used to resolve such dilemmatic issues and rank insurance
companies in Turkey. The companies were ranked based on their financial criteria using AHP
which helped to choose the best insurance provider. The final results were validated performing
sensitivity analysis which helped to evaluate the robustness of the solution [17]. AHP method
has been used in electrical engineering domain for optimized multi-generated planning under
various uncertain factors which has improved voltage quality variation issues and reduced
power loss [18]. An uncommon application of AHP has been in the improvement of polygraph
survey results which are used as a tool for lie detection. AHP was used to improve the accuracy
of lie detection using actual survey results in order to obtain more authenticity of the results of
the tests [19]. AHP has been widely used in the manufacturing industry where design plays a
major role. The decision regarding design of a product development is extremely critical
especially in the manufacturing of real world products and systems. AHP in this sector has
immensely helped in the layout selection process where the best layout is selected from various
layout options [20]. AHP has also been used to resolve mining engineering problems. Ten
design parameters which impacted the usage of a backfill support system were identified for a
platinum mining project. These ten parameters were weighted as per their relative importance,
and the results of the AHP evaluation helped to validate that a backfill support system
was more beneficial than a conventional support system in mining engineering [21].
Although AHP plays a major role in decision making and in evaluating qualitative and
quantitative measures, the entire method hinges on its third stage, wherein the comparison
matrix is created based on subjective analysis of the criteria involved using the standardized
importance scale. Hence there exist chances of biased judgement lacking in accuracy. The
Condorcet voting theory algorithm discussed in the following section helps to deal with this
issue, thereby increasing the accuracy of the AHP technique.
2.2 Condorcet Theory Voting Algorithm
Condorcet theory is a method for pairwise comparison developed by the famous
philosopher and mathematician Marquis de Condorcet [22-23]. The theory is mainly used in
voting, wherein candidates are compared pairwise and each voter submits a ballot ranking the
nominated candidates in order of preference. It is a majoritarian method wherein a candidate is
considered the winner of an election if it is able to win over every other candidate in pairwise
comparison; that is, the candidate with the higher number of votes in each pairwise
comparison is declared the winner. The same theory is used here for the pairwise comparison
of the criteria. As part of the study, in a manner similar to a voting method, a poll data set was
used to find the preference among the factors that affected teaching career choice pertinent to
this case study. Each score from the respondents in the data set was considered and the Condorcet voting
algorithm was applied. In the Condorcet voting algorithm each vote is given a point based on
the preference when comparing each criterion with the others. If one criterion is preferred over
the other, then the preferred one wins a point over the non-preferred criterion. Votes with equal
priority given to both candidates are eliminated from consideration. The points across all
responses in the data set are then tallied to compute the total scores; hence each vote
counts. After the scores are computed, the quantitative ratio method is used to
determine the importance of one criterion over the other. Since the comparison is based on
actual poll scores in addition to the application of Condorcet voting theory, the comparison
matrix thus framed proves to be more accurate in terms of the consistency ratio and in
deriving the ranking of the criteria involved [22-23]. A detailed description of the Condorcet
theory algorithm is given in the methodology section. Once the comparison matrix is
framed, the regular AHP steps of normalized matrix creation and normalized average
computation are followed.
The next section defines the problem with the existing approach and highlights the
contribution of the paper in solving multi-criteria decision making problems. Both approaches
are applied to a higher education data set pertaining to teaching career choice. The data set
contains three factors (Altruistic, Extrinsic and Intrinsic) that affect teaching career choice
among engineering graduates. The traditional AHP and Condorcet-AHP methods rank
the importance of these three factors in teaching career related decisions. The resultant
rankings of the factors generated using the two methods are compared based on the
consistency ratio value and the aptness of the most prioritized factor as reported in similar
career choice related studies.
2.3 Problem Definition and Research Gap
The major disadvantage of the AHP method is the artificial constraint of the 9-point
scale, which makes it difficult for decision makers to differentiate scale points when concluding
on the importance of one criterion or factor over another. The AHP method has also been
modified to a 2-point scale to accommodate the time constraints of decision makers, wherein
the decision maker decides only whether an alternative is more or less important than another.
But all of these methods share the common disadvantage of being solely dependent on the
individual judgement of the decision maker, giving room for biased and erroneous results. The
application of Condorcet theory in the AHP process helps to resolve this issue by providing a
more accurate comparison of the alternatives and creation of the pairwise comparison matrix
based on actual voting results on the preferred alternatives, supported by a quantitative
ratio-based method.
3. Results and Analysis
The higher education data set pertaining to teaching career choice was chosen for
the implementation of both approaches. As per the data set, the main contributing factors that
affected career choice decisions among engineering graduates were altruistic, extrinsic and
intrinsic factors. It was extremely important to rank the three contributing factors and decide
on the most important one from the perspective of existing graduates so that policy related
development could be targeted at that specific factor. This would be extremely beneficial for
higher education research, as faculty shortage is a glaring issue in the Indian higher education
system. The ranking would highlight the most significant reasons affecting teaching career
choice decisions, and necessary policy amendments addressing the highest prioritized factor
would enable more individuals to choose a teaching career, alleviating faculty shortage issues
in the country.
The source of the data set was a voting poll conducted on 300 students, wherein the
students were asked to rate each of the three criteria and provide scores for the same. Out of
300 responses, 250 were complete and were considered for the analysis. Once the scores
were received, three steps were performed:
1. Traditional AHP based ranking and computation of the consistency ratio (CR)
2. Condorcet-AHP based ranking and computation of the consistency ratio (CR)
3. Comparison of both approaches based on the consistency ratio (CR) value
In the AHP based approach, the average score received for each criterion was
used as the basis of the subjective judgement for creating the comparison matrix. That is,
the subjective assessment of deciding on the standard importance scale point in the
traditional AHP method was based on the average scores received for each criterion in the
voting poll. The steps followed in the traditional AHP approach were [24]:
1. Determination of the relative importance of the criteria based on scores received from the
responses for the criteria acting as the basis of the subjective judgement as per the scale of
Thomas Saaty (1980) from 1 to 9
2. Creation of pairwise comparison matrix based on the ratings of subjective judgement
3. Normalization of the matrix to perform consistency check by dividing each element of the
matrix by its column sum
4. Computation of the relative weight of each row of the matrix by dividing the sum of the
values of each row by the total number of rows
5. Calculation of the Consistency Ratio (CR) to evaluate the accuracy of the pairwise
comparison of the criteria and also the ranking computed
Hence, the entire computation was based on the importance scale point decision, which
was subjective, so there existed scope for error when relating the average scores to the
standard importance scale in the traditional approach. This was the main point of consideration
in the second approach, wherein actual poll results were fed into the Condorcet voting
theory to decide on the most preferred criteria, followed by the ratio based quantitative
approach for creating the pairwise comparison matrix instead of the 9-point importance scale.
In the Condorcet-AHP approach, all the traditional steps of the AHP method as
mentioned above were followed, except that the pairwise comparison matrix was created
using the Condorcet voting algorithm together with the quantitative ratio method.
The algorithm used for the Condorcet voting method is:
Step 1: count = 0
Step 2: for each of the P respondents Pi do
Step 3: If Pi ranks C1 above C2, count++
Step 4: If Pi ranks C2 above C1, count--
Step 5: If count > 0, rank C1 better than C2
Step 6: Else rank C2 better than C1
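The steps above can be implemented directly; a minimal Python sketch follows (the ballots and criterion labels here are illustrative, not the paper's data set):

```python
from itertools import combinations

def condorcet_pairwise(ballots, criteria):
    """For each pair of criteria, count how many respondents score one
    above the other; ballots giving equal scores to a pair are eliminated
    from that pair's count, as described above."""
    results = {}
    for c1, c2 in combinations(criteria, 2):
        wins_c1 = sum(1 for b in ballots if b[c1] > b[c2])
        wins_c2 = sum(1 for b in ballots if b[c2] > b[c1])
        results[(c1, c2)] = (wins_c1, wins_c2)   # ties contribute to neither
    return results

# Illustrative ballots: each respondent scores the three factors A, E, I
ballots = [
    {"A": 5, "E": 3, "I": 4},
    {"A": 2, "E": 4, "I": 4},   # E-I tie: eliminated from the E-I count
    {"A": 4, "E": 1, "I": 5},
]
results = condorcet_pairwise(ballots, ["A", "E", "I"])
```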
The total scores of the criteria were computed using Condorcet theory, and the
comparison matrix was framed using the quantitative ratio based approach instead of the
traditional scale of Thomas Saaty. The consistency ratio was then compared for both
approaches. As already mentioned, a CR value of less than 0.10 is considered acceptable;
the lower the value of the consistency ratio (CR), the more accurate the ranking is considered
to be.
3.1. Traditional AHP based Ranking
The average scores from the responses revealed the following comparisons for the
three criteria:
1. The extrinsic factor was more than equally important but slightly less than moderately
important compared to the altruistic factor, based on subjective judgement. Hence the
importance scale value of extrinsic to altruistic was “2”
2. In the pairwise comparison between the intrinsic and altruistic factors, intrinsic was slightly
more than moderately important but less than strongly important relative to the altruistic
factor. Hence the importance rating was “4”
3. The intrinsic factor was likewise slightly more than moderately important but less than
strongly important relative to the extrinsic factor. Hence the importance rating was “4”
The pairwise comparison matrix based on the subjective judgement is shown in Table 3
followed by the process of normalized matrix creation as shown in Table 4 and Table 5. The
computation of the relative weights for generation of ranking of the criteria is shown in Table 6.
Table 3. Pairwise Comparison Matrix
A E I
A 1 1/2 1/4
E 2 1 1/4
I 4 4 1
Table 4. Summation of the Columns
A E I
A 1 1/2 1/4
E 2 1 1/4
I 4 4 1
𝚺7 𝚺5.5 𝚺1.5
Table 5. Normalized Matrix (Dividing each element with the column sum)
0.143 0.090 0.167
0.286 0.180 0.167
0.571 0.721 0.667
Table 6. Computation of Relative Weights for ranking
Altruistic 0.143 0.090 0.167 Average = 0.133 Rank 3
Extrinsic 0.286 0.180 0.167 Average = 0.211 Rank 2
Intrinsic 0.571 0.721 0.667 Average = 0.653 Rank 1
Hence, based on the relative weights, it was concluded that the intrinsic factor is the
highest ranked, followed by extrinsic, with the altruistic factor ranked lowest. The
consistency ratio (CR) was then computed, which required calculation of the consistency
index (CI) and the random index (RI). In order to compute the consistency index (CI), the
largest eigenvalue (λ_max) needed to be calculated:
λ_max = (7 × 0.133 + 5.5 × 0.211 + 1.5 × 0.653) = 3.071
We thus considered the matrix to be consistent and treated the equation as an eigenvalue
problem wherein the largest eigenvalue λ_max ≥ n, with n = 3 for a 3 × 3 matrix.
CI = (λ_max − n) / (n − 1) = (3.071 − 3) / (3 − 1) = 0.0355, RI = 0.66
Hence the value of CR = 0.0355/0.66 = 0.053
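The arithmetic above can be re-checked in a few lines of Python using the rounded column sums and weights from Tables 4 and 6:

```python
# Rounded values from the tables above (Table 4 column sums, Table 6 weights)
col_sums = [7, 5.5, 1.5]
weights = [0.133, 0.211, 0.653]
n = 3

lam_max = sum(c * w for c, w in zip(col_sums, weights))  # largest eigenvalue estimate
CI = (lam_max - n) / (n - 1)     # consistency index
CR = CI / 0.66                   # RI = 0.66 for n = 3, as used in the paper
```

This reproduces λ_max = 3.071 and CI = 0.0355, with CR below the 0.10 acceptance threshold.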
3.2. Condorcet-AHP based Ranking
In the Condorcet-AHP based approach, the individual votes expressing preferences
among the three factors from the 250 responses were considered. The opinions in which equal
preference was given to both factors of a pair were eliminated from the counts. The
pairwise comparison of the votes was done for each criteria pair, namely A-E, A-I and E-I. The
consolidated results of the preferences are shown in Table 7, Table 8 and Table 9.
Table 7. Comparison of Altruistic (A) and Extrinsic (E) factor
Votes where A is given more priority than E 135
Votes where E is given more priority than A 104
Votes with equal priority to A and E (eliminated from count) 11
Quantitative Ratio of A:E = 135/104 1.298
Quantitative Ratio of E:A = 104/135 0.770
Table 8. Comparison of Altruistic (A) and Intrinsic (I) factor
Votes where A is given more priority than I 133
Votes where I is given more priority than A 106
Votes with equal priority to A and I eliminated 11
Quantitative Ratio of A:I = 133/106 1.254
Quantitative Ratio of I:A = 106/133 0.796
Table 9. Comparison of Extrinsic (E) and Intrinsic (I) factor
Votes where E is given more priority than I 83
Votes where I is given more priority than E 156
Votes with equal priority to E and I eliminated 11
Quantitative Ratio of E:I= 83/156 0.532
Quantitative Ratio of I:E= 156/83 1.879
The pairwise comparison matrix generated from the quantitative ratio is shown in Table 10.
Table 10. Pairwise Comparison Matrix
A E I
A 1 1.298 1.254
E 0.770 1 0.532
I 0.796 1.879 1
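Framing such a reciprocal matrix from the pairwise win counts can be sketched as follows (a minimal illustration using the counts from Tables 7-9; variable names are our own):

```python
# Pairwise win counts from Tables 7-9: (wins of first, wins of second)
wins = {("A", "E"): (135, 104),
        ("A", "I"): (133, 106),
        ("E", "I"): (83, 156)}
criteria = ["A", "E", "I"]

# Start from a matrix of 1s on the diagonal (each criterion vs itself)
matrix = [[1.0] * len(criteria) for _ in criteria]
for (c1, c2), (w1, w2) in wins.items():
    i, j = criteria.index(c1), criteria.index(c2)
    matrix[i][j] = w1 / w2   # quantitative ratio c1 : c2
    matrix[j][i] = w2 / w1   # reciprocal ratio c2 : c1
```

The resulting entries match Table 10 up to rounding in the third decimal place.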
The normalized matrix is shown in Table 11 and Table 12.
Table 11. Summation of the Columns
A E I
A 1 1.298 1.254
E 0.770 1 0.532
I 0.796 1.879 1
𝚺2.566 𝚺4.177 𝚺2.786
Table 12. Normalized Matrix (Dividing each element with the column sum)
0.389 0.310 0.450
0.300 0.239 0.190
0.310 0.449 0.358
The computation of the relative weights for ranking is shown in Table 13.
Table 13. Computation of Relative Weights for ranking
Altruistic 0.389 0.310 0.450 Average = 0.383 Rank 1
Extrinsic 0.300 0.239 0.190 Average = 0.243 Rank 3
Intrinsic 0.310 0.449 0.358 Average = 0.372 Rank 2
The results of the Condorcet-AHP based ranking reveal the altruistic factor as the
highest ranked, followed by intrinsic, with the extrinsic factor ranked lowest.
The consistency ratio (CR) was then computed based on the consistency index (CI) and
the random index (RI):
λ_max = (2.566 × 0.383 + 4.177 × 0.243 + 2.786 × 0.372) = 3.033
Thus the matrix can be considered consistent, the equation being treated as an eigenvalue
problem wherein the largest eigenvalue λ_max ≥ n, with n = 3 for a 3 × 3 matrix.
CI = (λ_max − n) / (n − 1) = (3.033 − 3) / (3 − 1) = 0.0165, RI = 0.66
Hence the value of Consistency Ratio (CR) = 0.0165/0.66 = 0.025
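As with the traditional approach, the arithmetic can be re-checked using the rounded column sums and weights from Tables 11 and 13; small third-decimal differences against the hand-rounded values above are expected:

```python
col_sums = [2.566, 4.177, 2.786]   # Table 11 column sums
weights = [0.383, 0.243, 0.372]    # Table 13 relative weights
n = 3

lam_max = sum(c * w for c, w in zip(col_sums, weights))  # largest eigenvalue estimate
CI = (lam_max - n) / (n - 1)       # consistency index
CR = CI / 0.66                     # RI = 0.66 for n = 3
```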
The results clearly revealed that:
Condorcet-AHP approach CR<Traditional AHP CR
The comparison of the Consistency Ratio (CR) for the traditional AHP based approach
and the Condorcet-AHP approach showed that the later one is a better one having a lower
consistency ratio yielding more accurate and significant ranking of the criterion factors.
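The consistency check can be sketched in a few lines, using the column totals of Table 11 and the weights of Table 13, with RI = 0.66 for n = 3 as used in the paper. Because the paper rounds intermediate values, this script yields λmax ≈ 3.034 and CR ≈ 0.026, matching the reported 3.033 and 0.025 to rounding:

```python
# Consistency check for the Condorcet-AHP comparison matrix.
col_sums = [2.566, 4.177, 2.786]   # column totals (Table 11)
weights = [0.383, 0.243, 0.372]    # relative weights (Table 13)

n = 3
# lambda_max approximated as the weighted sum of the column totals
lambda_max = sum(c * w for c, w in zip(col_sums, weights))
ci = (lambda_max - n) / (n - 1)    # consistency index
ri = 0.66                          # random index for n = 3, as used in the paper
cr = ci / ri                       # consistency ratio

print(f"lambda_max = {lambda_max:.3f}, CI = {ci:.4f}, CR = {cr:.3f}")
```

A CR below 0.10 indicates acceptable consistency, so the Condorcet-AHP matrix comfortably passes the check.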
3.3. Contribution of the proposed Condorcet-AHP based approach
The ranking generated using the Condorcet-AHP approach coincides with the results of career-choice studies conducted in various contexts. The study by Pop & Turner (2009) showed the altruistic factor to be the major motive behind teaching career choices [25]. The result of the proposed method is also supported by the studies of Richardson & Watt (2005) and Young (1995), in which the altruistic factor was identified as the most significant factor motivating teaching career choices among graduates [26-27]. Similar findings were reported by Kyriacou & Coulthard (2000), where the altruistic and intrinsic factors were identified as the most significant factors in teaching career choices [28]. In all the above-mentioned studies, the intrinsic factor was the second most important factor affecting teaching career decisions. Hence the ranking based on the Condorcet-AHP approach can be considered more accurate than that of the traditional AHP method, as it identified the altruistic factor as the highest-priority reason, followed by the intrinsic factor as the second most important.
The contribution of the proposed Condorcet-AHP based approach is visible in the consistency ratio, which is much lower for the proposed method (0.025) than for the traditional method (0.053). A consistency ratio below 0.10 indicates a consistent judgement, and the lower the value, the more consistent the judgement. Hence, based on the CR value, the Condorcet-AHP based approach can be considered more consistent than the traditional AHP method.
The accuracy of the proposed method also lies in the determination of the priority weights of the graduates who gave opinions on the factors, which helps to identify their likelihood of choosing or rejecting the career option. To this end, the overall priority weight of each graduate was computed using the following formula:
Overall Priority Weight = (Normalized average score of a factor) × (Individual score of the respondent in that factor) / (Total score of all respondents in that factor)   (10)
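Equation (10) can be sketched as a small helper function. The respondent scores below are purely hypothetical illustration values, since the individual survey responses are not reproduced in this section:

```python
# Overall priority weight of a respondent for one factor, per Eq. (10).
# All scores below are hypothetical illustration values, not data
# from the study.

def overall_priority_weight(factor_weight, individual_score, total_score):
    """(Normalized average factor score) * (individual score) / (total score
    of all respondents in that factor)."""
    return factor_weight * individual_score / total_score

# Hypothetical example: the altruistic factor has weight 0.383 (Table 13);
# a respondent scored 18 on that factor, out of a total of 300 summed
# across all respondents for that factor.
w_altruistic = overall_priority_weight(0.383, 18, 300)
```

Summing such per-factor terms for each graduate gives the overall priority weight used for the ranking; a higher-weighted factor contributes more for the same individual score.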
The graduates are ranked by overall priority weight, with the highest weight receiving the first rank and the remaining graduates ranked in the same manner. The difference in ranking between the traditional AHP and Condorcet-AHP based approaches is shown for 20 students in Figure 2, which further supports the conclusion that the Condorcet-AHP based approach is more accurate in solving multi-criteria decision making problems. The Condorcet-AHP based ranking was also compared with the actual career choices of these 20 students, and their post-graduate choices matched the ranking to a significant degree.
Figure 2. Ranking Difference of Traditional AHP and Condorcet-AHP based methods
4. Conclusion
This paper addresses the major weakness of the traditional AHP method: its reliance on subjective judgement and a standard importance scale in framing the pairwise comparison matrix, which often leads to erroneous results and ambiguity. The Condorcet voting theory based AHP method eliminates this issue by using an unbiased voting method to decide on the importance of the criteria, followed by a quantitative ratio based approach for creating the pairwise comparison matrix. In this paper, both approaches were applied to a career-choice data set in the context of the Indian higher education system. The major factors considered in the career-choice decisions were the altruistic, extrinsic and intrinsic factors, which were ranked using both approaches, and the respective consistency ratios were generated for both methods. The ranking generated using the traditional AHP based approach identified the intrinsic factor as the highest ranked factor affecting teaching career choice decisions. On the contrary, the Condorcet-AHP based approach identified the altruistic factor as the most important motivating factor behind teaching career choice decisions. This result coincided with other studies in the teaching-career spectrum conducted in various contexts, which supports the accuracy of the result generated using the Condorcet-AHP based approach. The consistency ratio also revealed the Condorcet-AHP method to be more consistent, with a lower value than that of the traditional AHP based approach. Hence it can be concluded that the Condorcet-AHP based approach is more accurate and successful in ranking and resolving multi-criteria decision making problems, and can be used to resolve similar MCDM cases.
As an extension of this study and as future research, deep learning methods and techniques can be combined with the proposed approach to provide more insight and to reveal other patterns and predictions from the same data.
References
[1] Nallusamy S, Sri Lakshmana Kumar D, Balakannan K, Chakraborty PS. MCDM Tools Application for Selection of Suppliers in Manufacturing Industries using AHP, Fuzzy Logic and ANN. International Journal of Engineering Research in Africa. 2016; 19: 130-137.
[2] Khan A, Maity K. A Novel MCDM Approach for Simultaneous Optimization of Some Correlated Machining Parameters in Turning of CP-Titanium Grade 2. International Journal of Engineering Research in Africa. 2016; 22: 94-111. DOI: 10.4028/www.scientific.net/JERA.22.94.
[3] Mousavi-Nasab SH, Sotoudeh-Anvari A. A comprehensive MCDM-based approach using TOPSIS, COPRAS and DEA as an auxiliary tool for material selection problems. Materials and Design. 2017; 121: 237-253. DOI: 10.1016/j.matdes.2017.02.041.
[4] Wind Y, Saaty TL. Marketing applications of the analytic hierarchy process. Management Science. 1980; 26(7): 641-658. DOI: 10.1287/mnsc.26.7.641.
[5] Ho W. Integrated analytic hierarchy process and its applications: A literature review. European Journal of Operational Research. 2008; 186(1): 221-228. DOI: 10.1016/j.ejor.2007.01.004.
[6] Dweiri F, Al-Oqla FM. Material selection using analytical hierarchy process. International Journal of Computer Applications in Technology. 2006; 26(4): 182-189. DOI: 10.1504/IJCAT.2006.010763.
[7] Saaty TL. How to make a decision: the analytic hierarchy process. European Journal of Operational Research. 1990; 48(1): 9-26. DOI: 10.1016/0377-2217(90)90057-I.
[8] Russo RFSM, Camanho R. Criteria in AHP: a systematic review of literature. Procedia Computer Science. 2015; 55: 1123-1132. DOI: 10.1016/j.procs.2015.07.081.
[9] Haller W, Tiedeman E, Whitaker R. Expert Choice - User Manual. Pittsburgh, PA: Expert Choice. 1996.
[10] Saaty TL. Decision Making with the Analytic Hierarchy Process. International Journal of Services Sciences. 2008; 1(1): 83-98. DOI: 10.1504/IJSSci.2008.01759.
[11] Bhushan S, Bharath A. Network QoS Aware Service Ranking Using Hybrid AHP-Promethee Method in Multi-Cloud Domain. International Journal of Engineering Research in Africa. 2016; 24: 137-152. DOI: 10.4028/www.scientific.net/JERA.24.153.
[12] He MX, An X. Information Security Risk Assessment Based on Analytic Hierarchy Process. Indonesian Journal of Electrical Engineering and Computer Science. 2016; 1(3): 656-664. DOI: 10.11591/ijeecs.v1.i3.pp656-664.
[13] Dweiri F, Kumar S, Khan SA, Jain V. Designing an integrated AHP based decision support system for supplier selection in automotive industry. Expert Systems with Applications. 2016; 62: 272-283. DOI: 10.1016/j.eswa.2016.06.030.
[14] Sun Q, Gao Z. The Small and Medium-sized Enterprises Performance Evaluation Model based on DEA and AHP Method. Indonesian Journal of Electrical Engineering and Computer Science. 2013; 11(11): 6400-6405. DOI: 10.11591/telkomnika.v11i11.3470.
[15] Koc E, Burhan HA. An Analytical Hierarchy Process (AHP) Approach to a Real World Supplier Selection Problem: A Case Study of Carglass Turkey. Global Business Management Research. 2014; 6: 11-14.
[16] Muhisn ZAA, Omar M, Ahmad M, Muhisn SA. Team Leader Selection by Using an Analytic Hierarchy Process (AHP) Technique. Journal of Software. 2015; 10(10): 1216-1227. DOI: 10.17706/jsw.10.10.1216-1227.
[17] Akhisar I. Performance Ranking of Turkish Insurance Companies: The AHP Application. Proceedings of the 8th International Management Conference. 2014: 27-34. DOI: 10.14784/JFRS.2014117324.
[18] Sheng W, Zhang L, Tang W, Wang J, Fang H. Optimal Multi-Distributed Generators Planning under Uncertainty using AHP and GA. Indonesian Journal of Electrical Engineering and Computer Science. 2014; 12(4): 2582-2591. DOI: 10.11591/telkomnika.v12i4.4824.
[19] Jiang Z, Liu Y, Meng P. Polygraph Survey and Evaluation Based on Analytic Hierarchy Process. Indonesian Journal of Electrical Engineering and Computer Science. 2014; 12(7): 5585-5590.
[20] Wei CC, Chien CF, Wang MJJ. An AHP-Based Approach to ERP System Selection. International Journal of Production Economics. 2005; 96(1): 47-62. DOI: 10.1016/j.ijpe.2004.03.004.
[21] Kluge P, Malan DF. The application of the analytical hierarchical process in complex mining engineering design problems. The Journal of the South African Institute of Mining and Metallurgy. 2011; 111: 847-855.
[22] Fernandes JEM, Gomes LFAM, Soares de Mello JCCB, Gomes Junior SF. Commuter Aircraft Choice using a Modified Borda Method using the Median. Journal of Transport Literature. 2013; 7(2): 71-91. DOI: 10.1590/S2238-10312013000200009.
[23] Morais DC, Almeida AT. Group decision making on water resources based on analysis of individual rankings. Omega. 2012; 40: 42-52.
[24] Yin Q. An Analytical Hierarchy Process Model for the Evaluation of College Experimental Teaching Quality. Journal of Technology and Science Education. 2013; 3(2): 59-65. DOI: 10.3926/jotse.66.
[25] Pop MM, Turner JE. To be or not to be… a teacher? Exploring levels of commitment related to perceptions of teaching among students enrolled in a teacher education program. Teachers and Teaching: Theory and Practice. 2009; 15(6): 683-700.
[26] Richardson R, Watt H. I've decided to become a teacher: Influences on career change. Teaching and Teacher Education. 2005; 21(5): 475-489. DOI: 10.1016/j.tate.2005.03.007.
[27] Lee YJ, De Young R, Marans RW. Factors influencing individual recycling behavior in office settings: A study of office workers in Taiwan. Environment and Behavior. 1995; 27(3): 380-403. DOI: 10.1177/0013916595273006.
[28] Kyriacou C, Coulthard M. Undergraduates' views of teaching as a career choice. Journal of Education for Teaching: International Research and Pedagogy. 2000; 26(2): 117-126. DOI: 10.1080/02607470050127036.