This document summarizes and compares several multiple attribute decision making methods that can be used for decision making in manufacturing environments. It describes improved versions of the Analytic Hierarchy Process (AHP), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and Data Envelopment Analysis (DEA) methods. The improvements include normalizing attribute values, eliminating unnecessary comparison matrices in AHP, and determining attribute weights in a systematic way using AHP for TOPSIS. DEA is also briefly introduced as a performance measurement technique to evaluate alternative efficiencies.
This chapter presents several examples that demonstrate an integrated approach to optimization. The first example is a simple freight transfer problem that shows how inference methods from constraint programming and relaxation methods from mixed integer linear programming can work together to accelerate branching search. The freight transfer problem is formulated as an integer programming model and inference is applied through bounds propagation and generating valid knapsack cuts. The problem is also relaxed into a linear programming problem to provide bounds for the search.
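The linear-programming bound mentioned above can be sketched for a single-constraint freight model: minimize total vehicle cost subject to carrying at least the demanded tonnage, with fractional vehicles allowed in the relaxation. The vehicle data below is illustrative, not taken from the chapter; for one covering constraint, the LP optimum is obtained greedily by using the cheapest capacity (cost per ton) first.

```python
# LP relaxation bound for: minimize sum(c[i]*x[i])
# subject to sum(a[i]*x[i]) >= demand, 0 <= x[i] <= u[i].

def lp_relaxation_bound(costs, capacities, limits, demand):
    """Greedy LP optimum: fill demand with the cheapest cost-per-unit capacity first."""
    items = sorted(zip(costs, capacities, limits), key=lambda t: t[0] / t[1])
    remaining, total_cost = demand, 0.0
    for c, a, u in items:
        if remaining <= 0:
            break
        x = min(u, remaining / a)   # fractional vehicles allowed in the LP
        total_cost += c * x
        remaining -= a * x
    return total_cost

# Four vehicle types (cost per vehicle, capacity in tons, availability), demand 42 tons.
bound = lp_relaxation_bound([90, 60, 50, 40], [7, 5, 4, 3], [3, 3, 3, 3], 42)
```

Any integer solution costs at least this bound, which is what lets the branching search prune nodes whose relaxation exceeds the incumbent.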
This document discusses the team's approach to solving the Higgs Boson Machine Learning Challenge on Kaggle. It first provides background on the particle physics problem and the goal of classifying events as signal or background. It then describes the team's data preprocessing steps, including handling missing values, data normalization, and feature selection/derivation. Finally, it discusses the machine learning techniques tested, including Random Forest, Gradient Boosting, Neural Networks, and XGBoost classifiers. The team aimed to predict event weights to enable both classification and ranking of test events. Random Forest achieved an initial private score of 2.90576 but struggled with memory usage, leading the team to explore other techniques.
The document analyzes the relationship between world coffee market prices and retail coffee prices in the US using monthly price data from 2006 to 2014. It finds:
1) The price series are cointegrated, indicating a long-term relationship.
2) There is one-way Granger causality from world market prices to US retail prices.
3) The vector error correction model shows US retail prices adjust to correct deviations from the long-run equilibrium, while world market prices do not.
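The one-way causality in finding 2 can be illustrated with a toy one-lag Granger-style check (synthetic data, not the paper's coffee series): x "Granger-causes" y if adding lagged x to a regression of y on its own lag significantly reduces the residual sum of squares.

```python
# Toy Granger-causality check with numpy: compare restricted vs. full regressions.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                       # stand-in for world market prices
y = np.zeros(n)                              # stand-in for retail prices
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def ssr(X, target):
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return float(resid @ resid)

Y = y[1:]
ones = np.ones(n - 1)
ssr_restricted = ssr(np.column_stack([ones, y[:-1]]), Y)          # y on own lag only
ssr_full = ssr(np.column_stack([ones, y[:-1], x[:-1]]), Y)        # plus lagged x
F = (ssr_restricted - ssr_full) / (ssr_full / (n - 1 - 3))        # F-stat, 1 restriction
```

A large F rejects the null of no causality from x to y; running the same test with the roles swapped would show the one-way direction reported above.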
Improving the Unification of Software Clones Using Tree and Graph Matching Al... - Nikolaos Tsantalis
This document presents an approach to improve the refactoring of software clones using tree and graph matching algorithms. The motivation is the limitations of current refactoring tools in optimally mapping clones with minimum differences and scaling to large code bases. The approach uses control and program dependence graphs to divide the matching problem into subproblems and find mappings that maximize matched statements while minimizing differences. An evaluation on 2342 clone groups from 7 projects found the approach could refactor 82% more groups than the state-of-the-art CeDAR tool, and 28% of groups involving clones in different files. Future work involves further empirical studies and handling more complex clone types.
This document provides an introduction to option management concepts including European and American call and put options. It discusses pricing models, arbitrage opportunities, and put-call parity relations. Several exercises are provided to practice these concepts, such as calculating option prices in a binomial model, describing arbitrage strategies, and proving properties of option prices.
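Two of the exercises mentioned, pricing in a binomial model and verifying put-call parity, can be sketched together with a one-step binomial tree (the numbers below are illustrative, not from the exercises). Parity requires C - P = S0 - K*exp(-r*T), which holds exactly in this model.

```python
# One-step binomial option pricing and a put-call parity check.
from math import exp

def binomial_one_step(S0, K, u, d, r, T, payoff):
    p = (exp(r * T) - d) / (u - d)          # risk-neutral up-probability
    return exp(-r * T) * (p * payoff(S0 * u, K) + (1 - p) * payoff(S0 * d, K))

call = lambda S, K: max(S - K, 0.0)
put = lambda S, K: max(K - S, 0.0)

S0, K, u, d, r, T = 100.0, 100.0, 1.2, 0.8, 0.05, 1.0
C = binomial_one_step(S0, K, u, d, r, T, call)
P = binomial_one_step(S0, K, u, d, r, T, put)
parity_gap = C - P - (S0 - K * exp(-r * T))   # should be zero
```

A nonzero parity gap in market prices would signal an arbitrage opportunity of the kind the exercises ask students to describe.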
This document presents the object of study of a thesis on Milan Kundera's novel La Insoportable Levedad del Ser (The Unbearable Lightness of Being). The objective is to analyze how love determines weight and lightness in the lives of the two protagonists, Teresa and Sabina, revealing intimately related clarities and obscurities. Love is a central philosophical and symbolic theme of the work, maintaining a constant bond between the characters in which weight and lightness become metaphors that run through the narrative.
This document evaluates the key elements of communication. It identifies that context enables comprehension, that the referent is located within the message, that communication is a process involving several elements such as the sender, receiver, and message, and that in the given statement the agents of communication are the sender and the receiver.
Summing up about growing and non growing perpetuities wacc levered and tax sa... - Futurum2
In this note we reconsider in detail the proper discount rate for cash flows in perpetuity, the present value of tax savings, and the calculation of terminal value. The note clarifies the use of real discount rates and concludes with a formulation that is inflation-neutral for a given assumption on the discount rate for the tax savings. We find that the only discount rate for tax savings that makes the value of the perpetuity inflation-neutral is Kd, the cost of debt. We also reconsider the intuitive approach of calculating the cost of capital for perpetuities from the nominal rates that compose that cost of capital and then converting it into a real cost of capital using the Fisher relationship.
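The Fisher relationship used for that conversion is (1 + nominal) = (1 + real) * (1 + inflation); the rates below are illustrative, not the note's figures.

```python
# Converting a nominal discount rate to a real rate via the Fisher relationship.
def real_rate(nominal, inflation):
    return (1 + nominal) / (1 + inflation) - 1

r = real_rate(0.12, 0.05)   # 12% nominal with 5% inflation -> about 6.67% real
```

Note that the common approximation real = nominal - inflation (here 7%) overstates the exact real rate.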
Salsa is a Hispanic musical genre resulting from the fusion of Cuban son with jazz and other American rhythms; it is lively and cheerful. Electronic music uses electronic instruments and technology for its production and performance, and can be distinguished from electromechanical sound; it is interesting and entertaining, making its listeners happy.
Haiku Deck is a presentation tool that allows users to create Haiku style slideshows. The tool encourages users to get started making their own Haiku Deck presentations which can be shared on SlideShare. In just a few sentences, it pitches the idea of using Haiku Deck to easily create visual presentations.
The document summarizes key concepts related to the organization of tourism events. It explains the concept of tourism, its historical evolution, and the typologies and classes of tourism. It also describes the tourism market, supply and demand, socioeconomic effects, and the superstructures and structures that make up the tourism industry.
Akkord Steel Construction Company LLC includes "Khirdalan steel structures plant" in Khirdalan city,
"Balakhani Constructional Ironworks" in Balakhani settlement and "Mingachevir Mechanical
Engineering Plant" in Mingachevir city.
The above plants are equipped with modern technical equipment: 3- and 4-axis shafts; plasma cutters; cutting, boring, and planing machines operating along the X, Y, and Z axes; saws; pipe-, angle-, and channel-bending machines; powder and gas welding machines; lathes and milling machines; drilling, grinding, slotting, and clamping machines; and sanding-painting and absorbed-painting lines; manufactured mainly in Finland, Germany, Turkey, the USA, and Russia.
Due to the full compliance of the manufactured products with the requirements of the ISO 9001-2010 international standard, the State Committee of Metrology and Patents of the Republic of Azerbaijan has issued a certificate of conformity to our plants.
The document contains an image of the text "Internet Images" and music titled "You are my sunshine" by Trini Lopez. It also includes the name "Adrian a" at the bottom.
I•systems - THE IMPLEMENTATION OF BUSINESS PROCESS AUTOMATION IN YOUR BUS... - Daniel Gola
This document discusses business process automation (BPA) and its implementation. BPA involves integrating applications, restructuring labor, and using software to automate processes to reduce costs. The document provides an example of how automating contract management eliminated scanning, illegible faxes, and manual reports. Key challenges to BPA include system integration, database centralization, complex processes, and company structure. Proper implementation involves analyzing processes to identify bottlenecks, specifying solutions, using agile development methods like Scrum for management, and ensuring customer participation to create a new tailored system that delivers benefits like reduced costs, increased productivity and competitiveness.
Transferring Software Testing Tools to Practice - Tao Xie
ACM SIGSOFT Webinar co-presented by Nikolai Tillmann (Microsoft), Judith Bishop (Microsoft Research), Pratap Lakshman (Microsoft), Tao Xie (University of Illinois at Urbana-Champaign) http://www.sigsoft.org/resources/webinars.html
Decision-making is the process of selecting a course of action from alternatives to achieve desired goals. It involves identifying problems, gathering information, developing alternatives, evaluating alternatives based on criteria, and implementing a solution. Effective decision-making requires accuracy, considering the environment, timely decisions, communicating decisions, participative decision-making, and implementing decisions. It plays an important role in management. Challenges can include incomplete information, an unsupportive environment, lack of acceptance by others, ineffective communication, and incorrect timing.
The document provides 8 tips for coming up with a great business idea:
1. Look at problems that drive you mad and find solutions. The author's business came from a lack of support for entrepreneurs.
2. Take massive action to turn ideas into reality through consistent effort and learning from failures.
3. Believe in yourself and ignore naysayers, as the author did in pursuing their passion despite criticism.
4. Let your subconscious mind wander to spark ideas, like when relaxing in the bath.
5. Constantly search for better ways of doing things to find opportunities for innovation.
6. Think big about how to help many people in order to make a large impact.
7.
This document discusses various multi-criteria decision making (MCDM) methods. It describes the objectives and steps in MCDM methodology. Three MCDM methods are explained in detail: Compromise Programming (CP), Preference Ranking Organisation METHod of Enrichment Evaluation (PROMETHEE), and the Weighted Average Method. An example is provided to illustrate the application of each method.
This document proposes a new approach called Local Active Pixels (LAPP) for face recognition that reduces computational resources compared to Local Binary Patterns (LBP). LAPP identifies "active pixels" that contain essential image information using Brody Transform thresholds. Only active pixels are used for feature extraction and recognition, reducing features without sacrificing accuracy. The document describes LBP and Brody Transform, then presents the LAPP approach and experimental evaluation using several face datasets to demonstrate its performance.
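The LBP baseline that LAPP builds on can be sketched for a single 3x3 neighbourhood: each of the eight neighbours contributes one bit depending on whether it is at least as large as the centre pixel. The bit ordering and sample values below are illustrative conventions, not taken from the paper.

```python
# Basic 3x3 Local Binary Pattern (LBP) code for the centre pixel of a patch.
def lbp_code(patch):
    """patch: 3x3 list of lists of intensities; returns the 8-bit LBP code."""
    c = patch[1][1]
    # neighbours taken clockwise starting at the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbours):
        if v >= c:                 # threshold against the centre
            code |= 1 << bit
    return code

code = lbp_code([[6, 5, 2],
                 [7, 6, 1],
                 [9, 8, 7]])
```

LAPP's idea, per the summary, is to compute descriptors only at "active" pixels selected by a threshold rather than at every pixel as plain LBP does.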
Multiple regression, moderation, mediation, and path analysis techniques were discussed. Multiple regression allows predicting a criterion variable from multiple predictor variables. Moderation analyses how a third variable influences the relationship between two others. Mediation examines how a predictor affects an outcome through an intervening variable. Path analysis extends regression by examining relationships between multiple predictor and outcome variables. Fit indices assess how well a hypothesized model fits the data.
11.application of supply chain tools in powerplant a case of rayalaseema ther... - Alexander Decker
The document summarizes research applying supply chain tools to optimize inventory management at Rayalaseema Thermal Power Plant in India. The researchers used a genetic algorithm to find the optimized ordering quantity and reorder point by considering plant data. They generated a solution demand matrix and population of chromosomes representing solutions. The genetic algorithm evaluated fitness and performed crossover, mutation and selection to generate optimized reorder points and quantities, reducing total costs compared to the current system. The approach was implemented in MATLAB and shows potential to improve inventory control in manufacturing.
The Evaluation of Topsis and Fuzzy-Topsis Method for Decision Making System i... - IRJET Journal
This document discusses using fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) as an analytical tool for decision making in data mining. Fuzzy TOPSIS extends the traditional TOPSIS method to handle uncertainties by using fuzzy set theory. It involves defining ratings and weights as linguistic variables represented by fuzzy numbers. The key steps are normalizing the fuzzy decision matrix, determining fuzzy positive and negative ideal solutions, calculating distances from the ideal solutions, and determining a closeness coefficient to rank the alternatives. The literature review discusses previous research applying fuzzy set concepts to TOPSIS to address limitations of crisp data in modeling real-world decision problems.
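The steps listed above can be sketched in their crisp form, which the fuzzy variant generalises with fuzzy numbers in place of scalars: normalise the decision matrix, weight it, find the positive and negative ideal solutions, and rank alternatives by the closeness coefficient. The decision matrix, weights, and criterion directions below are illustrative.

```python
# Crisp TOPSIS: rows are alternatives, columns are criteria.
import numpy as np

def topsis(matrix, weights, benefit):
    X = np.asarray(matrix, dtype=float)
    V = weights * X / np.linalg.norm(X, axis=0)            # normalise, then weight
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0))  # positive ideal solution
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0))  # negative ideal solution
    d_plus = np.linalg.norm(V - pis, axis=1)
    d_minus = np.linalg.norm(V - nis, axis=1)
    return d_minus / (d_plus + d_minus)                    # closeness coefficient

cc = topsis([[7, 9, 9], [8, 7, 8], [9, 8, 9]],
            np.array([0.5, 0.3, 0.2]),
            np.array([True, True, True]))                  # all benefit criteria
best = int(cc.argmax())
```

The closeness coefficient lies in [0, 1]; an alternative scoring 1 coincides with the positive ideal solution.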
This document discusses the Analytic Hierarchy Process (AHP), a multi-criteria decision making technique. AHP allows decisions to be made by structuring multiple criteria into a hierarchy. It involves pairwise comparisons of criteria and alternatives to obtain their relative priorities. Judgments are made using a fundamental scale and priorities are derived from the principal eigenvector of the comparison matrix. Consistency of judgments is ensured by calculating a consistency ratio. The AHP provides a systematic process to integrate both subjective and objective evaluations to help decision makers select the best alternative.
20060411 Analytic Hierarchy Process (AHP) - Will Shen
The Analytic Hierarchy Process (AHP) is a decision-making tool that breaks down complex decisions into a series of pairwise comparisons. It allows decision makers to incorporate both qualitative and quantitative factors. The AHP works by:
1) Computing weights for each decision criterion through pairwise comparisons.
2) Scoring alternatives based on each criterion.
3) Multiplying the weights and scores to obtain overall scores for each alternative.
4) Ranking the alternatives based on their overall scores. Consistency is also checked to ensure reliable results. An example is provided to illustrate the AHP process.
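Step 1 and the consistency check can be sketched for a single criteria level. The 3x3 judgment matrix below is illustrative; the priority vector is obtained here with the geometric-mean approximation of the principal eigenvector, and Saaty's consistency ratio is computed from the estimated principal eigenvalue.

```python
# AHP criterion weights and consistency ratio from a pairwise comparison matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

gm = A.prod(axis=1) ** (1 / A.shape[0])   # geometric mean of each row
w = gm / gm.sum()                         # normalised priority vector

lam_max = float((A @ w / w).mean())       # estimate of the principal eigenvalue
n = A.shape[0]
CI = (lam_max - n) / (n - 1)              # consistency index
RI = 0.58                                 # Saaty's random index for n = 3
CR = CI / RI                              # consistency ratio; below 0.1 is acceptable
```

With consistent judgments lam_max equals n exactly, so CR measures how far the pairwise comparisons deviate from perfect transitivity.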
Advance mathematics mid term presentation rev01 - Nirmal Joshi
The document describes finding regression coefficients to predict radiation levels using different techniques:
1. Linear regression using statistical methods found coefficients that predicted radiation well, with R2 = 0.9994.
2. A genetic algorithm was also used; minimizing the error (E2) yielded a better fit than maximizing R2, with coefficients that predicted radiation levels accurately.
3. Both linear regression and the genetic algorithm found coefficients that predicted radiation at the sixth location accurately according to ANOVA tables, making the models acceptable at the 95% confidence level. Care must be taken in choosing the right data analysis technique and fitness function for a given problem.
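The least-squares fit and the R2 statistic quoted above can be illustrated on synthetic near-linear data (not the presentation's radiation measurements):

```python
# Fit regression coefficients by least squares and compute R^2.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 + 1.5 * x + 0.1 * rng.normal(size=x.size)   # true slope 1.5, small noise

X = np.column_stack([np.ones_like(x), x])           # design matrix: intercept + x
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

R2 near 1 indicates the fitted line explains almost all of the variance, which is the sense in which the presentation's R2 = 0.9994 model "predicted radiation well".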
The document describes the Analytic Hierarchy Process (AHP), a technique for quantifying subjective judgments on relative weights of criteria for decision making. AHP involves listing alternatives, criteria and goals in a hierarchy. Pairwise comparison matrices are made to rate alternatives based on each criterion. Priorities are determined by normalizing the matrices and taking the average of each row to get priority vectors. Consistency is checked and overall priorities are obtained by weighting criteria priorities by the overall matrix. The example illustrates the AHP process to select the best toothbrush manufacturer based on cost, reliability and delivery time.
This document summarizes the analysis of data from a pharmaceutical company to model and predict the output variable (titer) of a biochemical drug production process from its input variables. Several statistical models were evaluated, including linear regression, random forest, and MARS. The analysis involved developing black-box models using only controlled input variables, snapshot models using all input variables at each time point, and history models incorporating changes in input variables over time to predict titer values. Model performance was compared using cross-validation.
Parameter Optimisation for Automated Feature Point Detection - Dario Panada
Parameter optimization for an automated feature point detection model was explored. Increasing the number of random displacements up to 20 improved performance but additional increases did not. Larger patch sizes consistently improved performance. Increasing the number of decision trees did not affect performance for this single-stage model, unlike previous findings for a two-stage model. Overall, some parameter tuning was found to enhance the model's accuracy but not all parameters significantly impacted results.
Mixture Regression Model for Incomplete Data - Loc Nguyen
The Regression Expectation Maximization (REM) algorithm, a variant of the Expectation Maximization (EM) algorithm, uses a long regression model in parallel with many short regression models to handle incomplete data. The long regression model is the entire regression function, which is the resulting model; the short regression models are partial regression functions that are inverses of the entire regression function. I proposed REM in earlier research, in which the entire regression function is built in parallel with many partial inverse regression functions, and missing values are then filled in by expectations derived from both the entire regression function and the inverse regression functions. Experimental results demonstrated REM's resistance to incomplete data: its accuracy decreases only insignificantly even when the data sample is made sparse with loss ratios of up to 80%.
Like traditional regression analysis methods, REM can lose accuracy when the data varies in complicated ways with many trends. In this research, I propose a Mixture Regression Expectation Maximization (MREM) algorithm. MREM is a full combination of REM and the mixture model, running two EM processes in the same loop: the first EM process, for the exponential family of probability distributions, estimates missing values as REM does, and the second EM process then estimates parameters as the mixture model method does. The purpose of MREM is to combine the advantages of REM and the mixture model. Unexpectedly, experimental results show that MREM is less accurate than REM. I tried weighting the partial models of MREM by the product of component probabilities and conditional probabilities, and selecting the most appropriate partial model, in order to improve estimation accuracy, but the final results were not as good as expected. However, MREM remains of interest because a different approach to the mixture model can be obtained by fusing the linear equations of MREM into a unique curve equation, as proposed in other research.
This document summarizes a research paper that proposes two approaches called LLORMA for constructing local low-rank matrix approximations. The approaches approximate an observed matrix M as a weighted sum of several low-rank matrices, where each low-rank matrix is accurate in a local region of M. This relaxes the assumption that M has global low-rank structure. The paper analyzes the accuracy of the local low-rank modeling approaches and shows they improve prediction accuracy over classical low-rank approximation methods on recommendation tasks.
This document provides an overview of conjoint analysis, a research method used to determine how consumers value different attributes or features of a product. It discusses key aspects of conjoint analysis including: defining factors and values to study, creating profiles for respondents to rate, estimating utility values for each attribute from the ratings, and aggregating results across respondents. The example provided examines preferences for different attributes of dishwashing products like scent, packaging, and design.
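The utility-estimation step mentioned above is, at its core, a least-squares fit of part-worth utilities from profile ratings. Below is a minimal sketch using an invented two-attribute dishwashing-product example (all profiles and ratings are hypothetical):

```python
import numpy as np

# Hypothetical full-profile conjoint study: 2 attributes
# (scent: floral/citrus; packaging: bottle/pouch), 4 profiles
# rated on a 1-10 scale by one respondent.
profiles = [("floral", "bottle"), ("floral", "pouch"),
            ("citrus", "bottle"), ("citrus", "pouch")]
ratings = np.array([9.0, 6.0, 7.0, 4.0])

# Effects coding: floral=+1/citrus=-1, bottle=+1/pouch=-1;
# rows of X correspond to the profiles above, first column = intercept.
X = np.array([[1,  1,  1],
              [1,  1, -1],
              [1, -1,  1],
              [1, -1, -1]], dtype=float)
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, u_scent, u_pack = coef   # part-worths: floral=+u_scent, etc.
```

With effects coding, each part-worth is the deviation from the mean rating, and the attribute with the larger part-worth range (packaging here) is the more important one to this respondent.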
The differential evolution (DE) algorithm has been applied as a powerful tool to find optimum switching angles for selective harmonic elimination pulse width modulation (SHEPWM) inverters. However, DE's performance is highly dependent on its control parameters. Conventional DE generally uses either a trial-and-error mechanism or a tuning technique to determine appropriate values of the control parameters, and this process is very time consuming. In this paper, an adaptive control parameter is proposed in order to speed up the DE algorithm in optimizing SHEPWM switching angles precisely. The proposed adaptive control parameter is shown to enhance the convergence of the DE algorithm without requiring initial guesses. The results for both negative and positive modulation index (M) also indicate that the proposed adaptive DE is superior to conventional DE in generating SHEPWM switching patterns.
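The underlying mechanics — mutation with a scale factor F, crossover with rate CR, and greedy selection — can be sketched as follows. This is generic DE/rand/1/bin with a simple linearly decaying F as a stand-in adaptation rule; it is not the adaptive scheme proposed in the paper, and it minimizes a toy sphere function rather than the SHEPWM harmonic equations.

```python
import numpy as np

def de_optimize(obj, bounds, pop_size=20, gens=150, cr=0.9, seed=0):
    """DE/rand/1/bin with a generation-wise decaying mutation factor F
    (an illustrative adaptation rule, not the paper's scheme)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([obj(x) for x in pop])
    for g in range(gens):
        F = 0.9 - 0.5 * g / gens           # adapt F: explore early, refine late
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < cr                # binomial crossover
            cross[rng.integers(dim)] = True             # force one gene
            trial = np.where(cross, mutant, pop[i])
            f = obj(trial)
            if f <= fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

For SHEPWM, `obj` would instead return the residual of the harmonic-elimination equations at the candidate switching angles.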
ANALYTICAL FORMULATIONS FOR THE LEVEL BASED WEIGHTED AVERAGE VALUE OF DISCRET...
In fuzzy decision-making processes based on linguistic information, operations on discrete fuzzy numbers are commonly performed. Aggregation and defuzzification operations are among the most frequently used of these. Many aggregation and defuzzification operators produce results independent of the decision maker's strategy. The Weighted Average Based on Levels (WABL) approach, by contrast, can take into account the level weights and the decision maker's "optimism" strategy. This gives the WABL operator flexibility: through machine learning it can be trained toward the decision maker's strategy, producing more satisfactory results for the decision maker. However, determining the WABL value requires calculating some integrals. In this study, the concept of WABL for discrete trapezoidal fuzzy numbers is investigated, and analytical formulas are proven that facilitate the calculation of the WABL value for these fuzzy numbers. Trapezoidal fuzzy numbers and their special form, triangular fuzzy numbers, are the most commonly used fuzzy number types in fuzzy modeling, so such numbers are studied here. Computational examples illustrating the theoretical results are provided.
IJCER (www.ijceronline.com), International Journal of Computational Engineering Research
The document summarizes research on using analytical hierarchy process (AHP) to evaluate environmental factors for determining residential land use suitability. AHP was used to determine weights for various environmental criteria like water availability, flood risk, air pollution, water quality, and distance from waste sites. Spatial data and maps of these factors were analyzed and overlaid based on the AHP weights to produce a final residential land suitability map for the study area of Pimpri-Chinchwad, India. The methodology involved structuring the decision problem hierarchically, making pairwise comparisons to obtain weights, and aggregating weighted criteria maps to determine overall suitability.
This document summarizes a study that used the fuzzy TOPSIS method to select the optimal type of spillway for a dam in northern Greece called Pigi Dam. Five alternative spillway types were evaluated based on nine criteria. The criteria were expressed as triangular fuzzy numbers to account for uncertainty. Weights for the criteria were determined using the AHP method and also expressed linguistically as fuzzy numbers. The fuzzy TOPSIS method was then used to rank the alternatives based on their distances from the ideal and negative-ideal solutions. The alternative with the highest relative closeness to the ideal solution was determined to be the optimal spillway type.
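The distance-to-ideal ranking at the heart of TOPSIS can be sketched in its crisp form (the study itself used triangular fuzzy numbers and fuzzy distances, which this simplified version omits; the decision matrix and weights below are invented):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: returns the relative closeness of each alternative
    to the ideal solution (higher is better).
    benefit[j] is True for benefit criteria, False for cost criteria."""
    M = np.asarray(matrix, dtype=float)
    norm = M / np.linalg.norm(M, axis=0)         # vector normalisation
    V = norm * np.asarray(weights, dtype=float)  # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)     # distance to negative-ideal
    return d_neg / (d_pos + d_neg)
```

An alternative that is best on every criterion coincides with the ideal solution and gets closeness 1; one that is worst on every criterion gets 0, so ranking by closeness reproduces the study's selection step.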
Decision making in manufacturing environment using graph theory and fuzzy multiple attribute decision making methods
Chapter 2
Improved Multiple Attribute Decision Making Methods
The improved multiple attribute decision making methods for decision making in
the manufacturing environment are described in this chapter.
2.1 Improved Analytic Hierarchy Process Method
Analytic hierarchy process (AHP) is one of the most popular analytical techniques
for complex decision making problems [1, 2]. An AHP hierarchy can have as
many levels as needed to fully characterize a particular decision situation.
A number of functional characteristics make AHP a useful methodology: it can handle decision situations involving subjective judgments and multiple decision makers, and it provides measures of the consistency of preferences [3]. Designed to reflect the way people actually think, AHP remains one of the most highly regarded and widely used decision making methods. AHP can deal efficiently with objective as well as subjective attributes. In this method, a pairwise comparison matrix is constructed using a scale of relative importance. The judgments are entered using the fundamental scale of the AHP (Table 2.1). The method determines consistent weights and evaluates the composite performance score of each alternative in order to rank the alternatives: the higher the composite performance score of an alternative, the higher its rank.
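As an illustration of this weighting step, attribute weights and a consistency check can be computed from a pairwise comparison matrix. The sketch below uses the geometric-mean approximation of Saaty's principal-eigenvector method on a hypothetical 3-attribute matrix; the random index RI = 0.58 for n = 3 is taken from Saaty's table (some sources list slightly different RI values).

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three attributes,
# filled in on the 1-9 scale (values are illustrative only).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
n = A.shape[0]

# Geometric-mean approximation of the principal eigenvector -> weights.
gm = np.prod(A, axis=1) ** (1.0 / n)
w = gm / gm.sum()                       # attribute weights, sum to 1

# Consistency check: lambda_max, consistency index CI, consistency ratio CR.
lam_max = float(np.mean(A @ w / w))
ci = (lam_max - n) / (n - 1)
cr = ci / 0.58                          # RI = 0.58 for n = 3 (Saaty)
# CR below about 0.1 indicates acceptably consistent judgments.
```

For this matrix the weights come out near (0.64, 0.26, 0.10) with CR around 0.03, so the judgments would be accepted as consistent.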
In this book, the AHP method is improved by proposing a systematic way of normalizing the values of the attributes and by converting subjective values into objective values. In the original version of AHP, the method requires a pairwise comparison of the various alternatives with respect to each of the attributes, and a pairwise comparison of the attributes themselves. The size and number of the comparison matrices increase rapidly as the number of alternatives and/or attributes increases. The AHP method is improved here by eliminating the comparison matrices required for the alternatives. Also, by normalizing the values of the attributes in a systematic way, the rank reversal problem is removed in the improved AHP method.

Table 2.1 Pairwise comparison scale of attributes

Degree of importance              Definition
1                                 Equal importance (no preference)
2                                 Intermediate between 1 and 3
3                                 Moderately more important
4                                 Intermediate between 3 and 5
5                                 Strongly more important
6                                 Intermediate between 5 and 7
7                                 Very strongly more important
8                                 Intermediate between 7 and 9
9                                 Extremely strongly more important
1/2, 1/3, 1/4, 1/5,               Reciprocals of 2, 3, 4, 5, 6, 7, 8, and 9
1/6, 1/7, 1/8, 1/9

R. V. Rao, Decision Making in the Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods, Springer Series in Advanced Manufacturing, DOI: 10.1007/978-1-4471-4375-8_2, © Springer-Verlag London 2013
The steps of the improved AHP method are explained below:
2.1.1 Formulating the Decision Table
Step 1: Identify the selection attributes for the considered decision making prob-
lem and short-list the alternatives on the basis of the identified attributes satisfying
the requirements. A quantitative or qualitative value or its range may be assigned
to each identified attribute as a limiting value or threshold value for its acceptance
for the considered application. An alternative whose attributes all meet the requirements may be short-listed. The short-listed alternatives may then be
evaluated using the proposed methodology. The values associated with the attri-
butes for different alternatives may be based on the available data or may be the
estimations made by the decision maker [4].
2.1.2 Deciding Weights of the Attributes
Step 2: Find out the relative importance of different attributes with respect to the
objective. To do so, one has to construct a pairwise comparison matrix using a
scale of relative importance. An attribute compared with itself is always assigned the value 1, so the main diagonal entries of the pairwise comparison matrix are all 1.
The numbers 3, 5, 7, and 9 correspond to the verbal judgments ‘moderate
importance’, ‘strong importance’, ‘very strong importance’, and ‘absolute
importance’ (with 2, 4, 6, and 8 for compromise between the previous values).
Table 2.1 presents the relative importance scale used in the AHP method.
Assuming M attributes, the pairwise comparison of attribute i with attribute j yields a square matrix A_{M×M}, where r_ij denotes the comparative importance of attribute i with respect to attribute j. In the matrix, r_ij = 1 when i = j and r_ji = 1/r_ij.

            Attribute   1     2     3    ...   M
                1     [ r11   r12   r13  ...   r1M ]
                2     [ r21   r22   r23  ...   r2M ]
  A_{M×M} =     3     [ r31   r32   r33  ...   r3M ]          (2.1)
               ...    [ ...   ...   ...  ...   ... ]
                M     [ rM1   rM2   rM3  ...   rMM ]

Table 2.2 Random index (RI) values

  n    1  2  3     4     5     6     7     8    9     10    11    12    13    14    15
  RI   0  0  0.52  0.89  1.11  1.25  1.35  1.4  1.45  1.49  1.51  1.54  1.56  1.57  1.59
• Find the relative normalized weight (w_j) of each attribute by (1) calculating the geometric mean of the jth row and (2) normalizing the geometric means of the rows of the comparison matrix. This can be represented as:

  GM_j = [ Π_{k=1}^{M} r_jk ]^{1/M}                            (2.2)

  w_j = GM_j / Σ_{k=1}^{M} GM_k                                (2.3)

  The geometric mean method of AHP is used in the present work to find the relative normalized weights of the attributes because of its simplicity, the ease of finding the maximum eigenvalue, and the reduced inconsistency in judgments.
• Calculate matrices A3 and A4 such that A3 = A1 × A2 and A4 = A3 / A2 (element-wise), where A1 is the pairwise comparison matrix and A2 = [w1, w2, …, wM]^T.
• Find the maximum eigenvalue λmax (i.e. the average of the elements of matrix A4).
• Calculate the consistency index CI = (λmax − M)/(M − 1). The smaller the value of CI, the smaller the deviation from consistency.
• Obtain the random index (RI) for the number of attributes used in the decision making [2]. Table 2.2 presents the RI values for different numbers of attributes.
• Calculate the consistency ratio CR = CI/RI. Usually, a CR of 0.1 or less is considered acceptable, and it reflects an informed judgment attributable to the knowledge of the analyst about the problem under study.
2.1.3 Calculating Composite Performance Scores
Step 3: The next step is to obtain the overall or composite performance score for each alternative by multiplying the relative normalized weight (w_j) of each attribute (obtained in Step 2) by its corresponding normalized attribute value for that alternative (obtained in Step 1) and summing over all the attributes:

  P_i = Σ_{j=1}^{M} w_j (m_ij)_normal                          (2.4)

where (m_ij)_normal represents the normalized value of m_ij, and P_i is the overall or composite score of alternative A_i. The alternative with the highest value of P_i is considered the best alternative.
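As a sketch, the weight derivation (Step 2) and composite scoring (Step 3) might be coded as below. The comparison matrix, the decision data, and the max/min column normalization are illustrative assumptions, not the book's own numbers; the exact normalization scheme used in the book is not reproduced here.

```python
import math

def ahp_weights(pairwise):
    """Relative normalized weights via the geometric mean method (Eqs. 2.2-2.3)."""
    M = len(pairwise)
    gm = [math.prod(row) ** (1.0 / M) for row in pairwise]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(pairwise, weights):
    """CR = CI/RI with CI = (lambda_max - M)/(M - 1); RI from Table 2.2."""
    M = len(pairwise)
    ri = {3: 0.52, 4: 0.89, 5: 1.11, 6: 1.25, 7: 1.35, 8: 1.4, 9: 1.45, 10: 1.49}
    a3 = [sum(r * w for r, w in zip(row, weights)) for row in pairwise]  # A3 = A1 x A2
    a4 = [a / w for a, w in zip(a3, weights)]                            # A4 = A3 / A2
    lam_max = sum(a4) / M            # lambda_max as the average of A4
    ci = (lam_max - M) / (M - 1)
    return ci / ri[M]

def composite_scores(decision, weights, beneficial):
    """P_i = sum_j w_j (m_ij)_normal (Eq. 2.4)."""
    N = len(decision)
    scores = []
    for i in range(N):
        s = 0.0
        for j, w in enumerate(weights):
            col = [decision[k][j] for k in range(N)]
            # Illustrative normalization: value/max for beneficial attributes,
            # min/value for non-beneficial ones.
            norm = decision[i][j] / max(col) if beneficial[j] else min(col) / decision[i][j]
            s += w * norm
        scores.append(s)
    return scores

# Hypothetical problem: 3 attributes, 3 alternatives
A1 = [[1, 3, 5], [1 / 3, 1, 3], [1 / 5, 1 / 3, 1]]
w = ahp_weights(A1)
cr = consistency_ratio(A1, w)       # accept the judgments only if cr <= 0.1
m = [[30, 0.8, 12], [45, 0.6, 9], [25, 0.9, 15]]
P = composite_scores(m, w, beneficial=[True, True, False])
best = max(range(len(P)), key=P.__getitem__)
```

Because the third attribute is non-beneficial, its normalized value rewards smaller raw values; the highest P_i then identifies the best alternative.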
2.2 Improved Technique for Order Preference by Similarity
to Ideal Solution Method
The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS)
method was developed by Hwang and Yoon [5]. This method is based on the
concept that the chosen alternative should have the shortest Euclidean distance
from the ideal solution and the farthest from the negative ideal solution. The ideal
solution is a hypothetical solution for which all attribute values correspond to the
maximum attribute values in the database comprising the satisfying solutions; the
negative ideal solution is the hypothetical solution for which all attribute values
correspond to the minimum attribute values in the database. TOPSIS thus gives a solution that is not only closest to the hypothetically best, but also farthest from the hypothetically worst [4].
The main procedure of the improved TOPSIS method for the selection of the
best alternative from among those available is given below:
2.2.1 Formulating the Decision Table
Step 1: The first step is to determine the objective and to identify the pertinent
evaluation attributes. This step constructs a matrix based on all the information available on the attributes. Each row of this matrix is allocated to one alternative, and each column to one attribute. Therefore, an element mij of the decision table gives
the value of the jth attribute in original real values, that is, non-normalized form
and units, for the ith alternative.
In the case of a subjective attribute (i.e. objective value is not available), a
ranked value judgment on a scale is adopted. Once a subjective attribute is rep-
resented on a scale then the normalized values of the attribute assigned for dif-
ferent alternatives are calculated in the same manner as that for objective
attributes. The normalized decision matrix, R_ij, is obtained by dividing each entry by the Euclidean norm of its attribute column over the N alternatives:

  R_ij = m_ij / [ Σ_{i=1}^{N} m_ij^2 ]^{1/2}                   (2.5)
2.2.2 Deciding Weights of the Attributes
Step 2: A set of weights w_j (for j = 1, 2, …, M) such that Σ_j w_j = 1 is decided. The relative importance weights of the attributes can be assigned arbitrarily by the decision maker based on his/her preference. In this book, the AHP method is suggested for helping the decision maker decide the relative importance weights of the attributes in a systematic manner. The relative importance weights using AHP can be calculated as explained in Sect. 2.1.2.
2.2.3 Calculating Composite Performance Scores
Step 3:
• Obtain the weighted normalized matrix V_ij by multiplying each element of the jth column of the matrix R_ij by its associated weight w_j. Hence, the elements of the weighted normalized matrix are expressed as:

  V_ij = w_j R_ij                                              (2.6)

• Obtain the ideal (best) and negative ideal (worst) solutions. These can be expressed as:

  V+ = { (max_i V_ij | j ∈ J), (min_i V_ij | j ∈ J′) | i = 1, 2, …, N }
     = { V1+, V2+, V3+, …, VM+ }                               (2.7)

  V− = { (min_i V_ij | j ∈ J), (max_i V_ij | j ∈ J′) | i = 1, 2, …, N }
     = { V1−, V2−, V3−, …, VM− }                               (2.8)
where

  J = { j = 1, 2, …, M | j is associated with beneficial attributes } and
  J′ = { j = 1, 2, …, M | j is associated with non-beneficial attributes }.

• V_j+ indicates the ideal (best) value of the considered attribute among its values for the different alternatives. For beneficial attributes (i.e. those whose higher values are desirable for the given application), V_j+ is the highest value of the attribute; for non-beneficial attributes (i.e. those whose lower values are desired), V_j+ is the lowest value of the attribute.
• V_j− indicates the negative ideal (worst) value of the considered attribute among its values for the different alternatives. For beneficial attributes, V_j− is the lowest value of the attribute; for non-beneficial attributes, V_j− is the highest value of the attribute.
• Obtain the separation measures. The separation of each alternative from the ideal and negative ideal solutions is given by the Euclidean distances:

  S_i+ = { Σ_{j=1}^{M} (V_ij − V_j+)^2 }^{0.5},   i = 1, 2, …, N      (2.9)

  S_i− = { Σ_{j=1}^{M} (V_ij − V_j−)^2 }^{0.5},   i = 1, 2, …, N      (2.10)
• The relative closeness of a particular alternative to the ideal solution, P_i, is expressed as:

  P_i = S_i− / (S_i+ + S_i−)                                   (2.11)

• The alternatives are arranged in descending order of P_i, from the most preferred to the least preferred feasible solution. P_i may also be called the overall or composite performance score of alternative A_i.
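The three improved TOPSIS steps can be sketched as a single function. The decision matrix, the weights, and the beneficial/non-beneficial split below are illustrative; in practice the weights would come from the AHP procedure of Sect. 2.1.2.

```python
import math

def topsis(decision, weights, beneficial):
    """Rank alternatives by relative closeness to the ideal solution (Eqs. 2.5-2.11)."""
    N, M = len(decision), len(decision[0])
    # Eq. 2.5: vector normalization of each attribute column, then weighting (Eq. 2.6)
    norms = [math.sqrt(sum(decision[i][j] ** 2 for i in range(N))) for j in range(M)]
    V = [[weights[j] * decision[i][j] / norms[j] for j in range(M)] for i in range(N)]
    # Eqs. 2.7-2.8: ideal and negative-ideal solutions per attribute
    v_plus = [max(V[i][j] for i in range(N)) if beneficial[j]
              else min(V[i][j] for i in range(N)) for j in range(M)]
    v_minus = [min(V[i][j] for i in range(N)) if beneficial[j]
               else max(V[i][j] for i in range(N)) for j in range(M)]
    # Eqs. 2.9-2.11: separation measures and relative closeness
    scores = []
    for i in range(N):
        s_plus = math.sqrt(sum((V[i][j] - v_plus[j]) ** 2 for j in range(M)))
        s_minus = math.sqrt(sum((V[i][j] - v_minus[j]) ** 2 for j in range(M)))
        scores.append(s_minus / (s_plus + s_minus))
    return scores

# Illustrative data: cost (non-beneficial), memory and speed (beneficial)
m = [[250, 16, 12], [200, 16, 8], [300, 32, 16]]
P = topsis(m, weights=[0.5, 0.3, 0.2], beneficial=[False, True, True])
ranking = sorted(range(len(P)), key=P.__getitem__, reverse=True)
```

Sorting the closeness scores in descending order gives the complete preference order of the alternatives.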
2.3 Data Envelopment Analysis Method
Data envelopment analysis (DEA), occasionally called frontier analysis, was first put forward by [6]. It is a performance measurement technique which can be used for evaluating the relative efficiency of alternatives in a given decision making situation. After the initial study [6], DEA has seen rapid growth and widespread acceptance.
Hashimoto [7] addressed a ranked voting system to determine an ordering of
candidates in terms of the aggregate vote by rank for each candidate. Sarkis [8]
carried out the evaluation of environmentally conscious manufacturing programs
using a method combining the analytic network process (ANP) and DEA. Sarkis [9] provided an empirical evaluation of various DEA
ranking approaches and MADM techniques, which include outranking and multi-
attribute utility techniques, using case study information. Tone [10] proposed a
slacks-based measure (SBM) of efficiency based on input excesses and output
shortfalls. Each decision making unit can be improved and become more efficient by
deleting the input excess and augmenting the output shortfalls. Sun [11] reported on
an application of DEA to evaluate computer numerical control (CNC) machines in
terms of system specification and cost. The methodology proposed for the evaluation
of CNC machines is based on the combination of the Banker, Charnes, and Cooper
(BCC) model and cross-efficiency evaluation. Liu [12] developed a fuzzy DEA/AR
method that is able to evaluate the performance of FMS alternatives when the input
and output data are represented as crisp and fuzzy data. Wang et al. [13] proposed a
DEA model with assurance region (AR) for priority derivation in the AHP to
overcome the shortcomings of the DEAHP such as illogical local weights, over
insensitivity to some comparisons, information loss and overestimation of some
local weights, and provide better priority estimate and better decision conclusions
than the DEAHP. Cooper et al. [14] described various DEA models in their book.
DEA is an extreme point method and compares each alternative with only the
‘‘best’’ alternative. A fundamental assumption behind an extreme point method is
that if a given alternative, A, is capable of producing Y(A) units of output with
X(A) inputs, then other alternatives should also be able to do the same if they were
to operate efficiently. Similarly, if alternative B is capable of producing Y(B) units
of output with X(B) inputs, then other alternatives should also be capable of the
same production schedule. Alternatives A, B and others can then be combined to
form a composite or virtual alternative with composite inputs and composite
outputs. The heart of the analysis lies in finding the ‘‘best’’ virtual alternative for
each real alternative. If the virtual alternative is better than the original alternative
then the original alternative is ‘‘inefficient’’.
For a given MADM problem, alternatives (A1, A2, … and AN) and different
attributes affecting the selection of an alternative are identified. Attributes are
divided into two groups: (1) outputs: attributes for which higher values are
desirable or beneficial attributes and (2) inputs: attributes for which lower values
are desirable or non-beneficial attributes.
Suppose s input items and t output items are selected. Let the input and output
data for alternative Aj be (x1j, x2j, …, xsj), and (y1j, y2j, …, ytj), respectively. The
input data matrix X and the output data matrix Y can be prepared as follows,
For non-beneficial attributes,

  X = [ x11  x12  ...  x1N
        x21  x22  ...  x2N
        ...
        xs1  xs2  ...  xsN ]

and for beneficial attributes,

  Y = [ y11  y12  ...  y1N
        y21  y22  ...  y2N
        ...
        yt1  yt2  ...  ytN ]                                   (2.12)

where X is an (s × N) matrix and Y is a (t × N) matrix.
2.3.1 The Basic CCR Model
This is one of the most basic DEA models. Suppose the data are given in the form of matrices X and Y. The efficiency of each alternative is measured once, so N optimizations are needed, one for each alternative Aj, to completely solve the MADM problem. Let the Aj to be evaluated on any trial be designated Ao, where o ranges over 1, 2, …, N. The following fractional programming problem is solved to obtain the values of the input ''weights'' (vi) (i = 1, 2, …, s) and output ''weights'' (ur) (r = 1, 2, …, t) as variables.
  (FP_o)   max  θ = (u_1 y_1o + u_2 y_2o + … + u_t y_to) / (v_1 x_1o + v_2 x_2o + … + v_s x_so)   (2.13)

  subject to  (u_1 y_1j + … + u_t y_tj) / (v_1 x_1j + … + v_s x_sj) ≤ 1   (j = 1, …, N)
              v_1, v_2, …, v_s ≥ 0;   u_1, u_2, …, u_t ≥ 0                                        (2.14)
The constraints mean that the ratio of ''virtual output'' to ''virtual input'' should not exceed 1 for any alternative. The objective is to obtain weights (vi) and (ur) that maximize the ratio for the alternative Ao being evaluated. By virtue of the constraints, the optimal objective value θ* is at most 1. Based on the matrices X and Y, the CCR model can be formulated as an LP problem with a row vector v of input multipliers and a row vector u of output multipliers; these multipliers are treated as variables. The above CCR model can be replaced in matrix form by the following model:
  (LP_o)   max  u y_o                                          (2.15)

  subject to  v x_o = 1
              −vX + uY ≤ 0                                     (2.16)
              v ≥ 0,  u ≥ 0
The dual problem of LP_o is expressed with a real variable θ and a nonnegative vector λ = (λ_1, …, λ_N)^T of variables as follows:
  (DLP_o)  min  θ                                              (2.17)

  subject to  θ x_o − Xλ ≥ 0
              Yλ ≥ y_o                                         (2.18)
              λ ≥ 0
DLP_o has a feasible solution θ = 1, λ_o = 1, λ_j = 0 (j ≠ o). Hence the optimal θ, denoted by θ*, is not greater than 1. On the other hand, due to the nonzero (i.e. semipositive) assumption on the data, λ will be nonzero because y_o ≥ 0 and y_o ≠ 0. Hence, θ* must be greater than zero. Putting this all together, we have 0 < θ* ≤ 1.
2.3.2 Strengths and Limitations of Basic CCR Model
Strengths:
DEA can be a powerful tool and a few of the characteristics that make it
powerful are:
• DEA utilizes techniques such as mathematical programming which can handle
large numbers of variables and relations (constraints). DEA can handle multiple
inputs and multiple outputs.
• It also does not require prescribing the functional forms that are needed in
statistical regression approaches to find efficiency of alternatives.
• Inputs and outputs can have very different units. For example, one attribute
could be in units of lives saved and the other could be in units of dollars without
requiring an a priori tradeoff between the two.
Limitations:
The same characteristics that make DEA a powerful tool can also create
problems. Limitations of DEA are listed below:
• DEA is good at estimating ‘‘relative’’ efficiency of an alternative but it cannot
give absolute efficiency. In other words, it can tell how well an alternative is
performing compared to peers but not compared to a ‘‘theoretical maximum’’ of
that particular alternative.
• Since DEA weights of attributes are decided by the method itself such that the
efficiency of the alternative under consideration is maximized, decision maker’s
opinion is not considered for the final ranking.
• Since a standard formulation of DEA creates a separate linear program (LP) for
each alternative, large problems can be computationally intensive.
2.3.3 Reduced CCR Model
Solving the basic CCR model gives the efficiencies of alternatives which are then
used to rank the alternatives. The maximum efficiency obtained for any alternative
by this model is 1. In many cases two or more alternatives obtain efficiencies equal to 1, and it is then not possible to rank the alternatives completely using the DEA scores. To overcome this, a variation of the CCR model was proposed by Andersen and Petersen [15] that allows the use of DEA efficiency scores for the complete ranking of alternatives. In their model, they simply eliminate the test unit from the constraint set. The new formulation is known as the Reduced CCR (RCCR) model and is represented as follows:
  Maximize  u y_o                                              (2.19)

  subject to  v x_o = 1
              −vX + uY ≤ 0, excluding the oth constraint       (2.20)
              v ≥ 0,  u ≥ 0
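Under the assumption that SciPy is available, the CCR multiplier LP of Eqs. (2.15)–(2.16) and its RCCR variant of Eqs. (2.19)–(2.20) can be sketched as one function; the one-input, one-output data below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o, exclude_self=False):
    """Efficiency of alternative o from the CCR multiplier LP (Eqs. 2.15-2.16).

    Decision variables z = [v, u]: max u.y_o subject to v.x_o = 1 and
    -v.x_j + u.y_j <= 0 for every alternative j.  With exclude_self=True the
    oth ratio constraint is dropped, giving the RCCR model (Eqs. 2.19-2.20),
    so efficient alternatives can score above 1 and be ranked completely.
    """
    s, N = X.shape
    t = Y.shape[0]
    c = np.concatenate([np.zeros(s), -Y[:, o]])              # linprog minimizes
    A_eq = np.concatenate([X[:, o], np.zeros(t)])[None, :]   # v.x_o = 1
    cols = [j for j in range(N) if not (exclude_self and j == o)]
    A_ub = np.array([np.concatenate([-X[:, j], Y[:, j]]) for j in cols])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(cols)), A_eq=A_eq,
                  b_eq=[1.0], bounds=[(0, None)] * (s + t))
    return -res.fun

# Illustrative data: one input, one output, three alternatives
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[2.0, 2.0, 4.0]])
ccr = [ccr_efficiency(X, Y, o) for o in range(3)]                      # at most 1
rccr = [ccr_efficiency(X, Y, o, exclude_self=True) for o in range(3)]  # may exceed 1
```

With these data the first alternative has the best output-to-input ratio, so its CCR score is 1 and its RCCR (super-efficiency) score exceeds 1, which is what makes a complete ranking possible.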
2.3.4 Improved RCCR/Assurance Region Model
It should be noted that the efficiencies in the RCCR model of DEA also depend on the accuracy of the attribute values available for the comparison of alternatives. There are some cases in which the judgment by the RCCR model is not adequate; that is, an alternative is not necessarily judged to be inefficient by the RCCR model even though it is inefficient. So, the model may not represent the true ranking of the alternatives. This is because there is no provision in the RCCR model of DEA to add information about the importance of one attribute over another. The assurance region (AR) approach can be used to provide information about the comparative importance of the attributes.
AR constraints for two input weights vi and vj are initiated by setting lower (LB) and upper (UB) bounds on each weight [8]. These LB and UB may be ranges of preference weights for each of the attributes as defined by the decision makers. The AR constraints relate the weights and their bounds to each other. The generalized AR constraint sets derived from the LB and UB data for non-beneficial attributes are:

  v_i ≥ (LB_i / UB_j) v_j   and   v_i ≤ (UB_i / LB_j) v_j      (2.21)

or

  v_j · LB_i − v_i · UB_j ≤ 0   and   v_i · LB_j − v_j · UB_i ≤ 0    (2.22)
Similar relations can be developed for all other non-beneficial and beneficial
attributes and they can be arranged in form of matrices P and Q for non-beneficial
and beneficial attributes, respectively. The basic CCR model is improved by introducing an AR to incorporate the decision maker's perception in calculating the efficiencies, and is written as given by Eqs. (2.23)–(2.24):

  Maximize  u y_o

  subject to  v x_o = 1                                        (2.23)
              −vX + uY ≤ 0
              vP ≤ 0
              uQ ≤ 0                                           (2.24)
              v ≥ 0,  u ≥ 0.
This model of DEA can be effectively used for multiple attribute decision
making situations.
The RCCR/AR model helps the decision maker to restrict the attribute weights within a range. But here also the weights of the attributes are determined by the DEA method itself, so the DEA efficiency scores obtained in this case are also not fully reliable. In this book, it is suggested to set the same values for the lower and upper bounds in order to assign exact weights to the attributes.
2.4 Improved Preference Ranking Organization Method
for Enrichment Evaluations
PROMETHEE (Preference Ranking Organization Method for Enrichment
Evaluations) was introduced by Brans et al. [16] and belongs to the category of
outranking methods. In this section, the focus is on the PROMETHEE method, which is improved by incorporating a fuzzy conversion scale to convert qualitative attributes into quantitative ones; the AHP method is incorporated for deciding the attributes' weights.
The improved PROMETHEE method involves a pairwise comparison of
alternatives on each single attribute in order to determine partial binary relations
denoting the strength of preference of an alternative ‘a1’ over alternative ‘a2’. In
the evaluation table, the alternatives are evaluated on different attributes. These
evaluations involve mainly quantitative data. The implementation of improved
PROMETHEE requires additional types of information, namely:
• information on the relative importance that is the weights of the attributes
considered,
• information on the decision maker preference function, which he/she uses when
comparing the contribution of the alternatives in terms of each separate
attribute.
It may be added here that the original PROMETHEE method can effectively deal mainly with quantitative attributes. However, some difficulty exists in the case of qualitative attributes. In the case of a qualitative attribute (i.e. when a quantitative value is not available), a ranked value judgment on a fuzzy conversion scale is adopted. Using fuzzy set theory, the values of the attributes can first be decided as linguistic terms, converted into corresponding fuzzy numbers, and then converted to crisp scores. The presented numerical approximation system systematically converts linguistic terms to their corresponding fuzzy numbers. An 11-point fuzzy conversion scale is presented in this book, as shown in Fig. 2.1, to help users assign quantitative values to the qualitative terms [17].
Values corresponding to the conversion scale are represented in Table 2.3 for
better understanding. Appendix I describes the development of the 11-point fuzzy
conversion scale shown in Fig. 2.1. Once a qualitative attribute is represented on a
scale then the alternatives can be compared with each other on this attribute in the
same manner as that for quantitative attributes.
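The 11-point scale of Table 2.3 is simple enough to sketch as a lookup; the crisp scores below are taken from the table, while the helper name is illustrative.

```python
# Crisp scores of the 11-point fuzzy conversion scale (Table 2.3)
FUZZY_SCALE = {
    "exceptionally low": 0.0455, "extremely low": 0.1364, "very low": 0.2273,
    "low": 0.3182, "below average": 0.4091, "average": 0.5000,
    "above average": 0.5909, "high": 0.6818, "very high": 0.7727,
    "extremely high": 0.8636, "exceptionally high": 0.9545,
}

def crisp_score(term):
    """Convert a linguistic judgment into its crisp score on the 11-point scale."""
    return FUZZY_SCALE[term.lower()]
```

Once converted this way, a qualitative attribute column can be processed exactly like a quantitative one in the improved PROMETHEE calculations.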
The improved PROMETHEE methodology for decision making in the manu-
facturing environment is described below:
2.4.1 Formulation of Decision Table
Step 1: This step is similar to step 1 of the improved AHP method.
2.4.2 Deciding Weights of the Attributes
Step 2: In the PROMETHEE method suggested by Brans et al. [16], there is no systematic way to assign the weights of relative importance of the attributes. Hence, in the improved PROMETHEE method, the AHP method is suggested for deciding these weights [18]. The procedure is as explained in step 2 of the improved AHP method.
2.4.3 Improved PROMETHEE Calculations
Step 3: After calculating the weights of the attributes using AHP method, the next
step is to have the information on the decision maker preference function, which
he/she uses when comparing the contribution of the alternatives in terms of each
attribute. The preference function (Pj) translates the difference between the eval-
uations obtained by two alternatives (a1 and a2) in terms of a particular attribute,
into a preference degree ranging from 0 to 1. Let P_{j,a1a2} be the preference function associated with attribute b_j.
Fig. 2.1 Linguistic terms to fuzzy numbers conversion (11-point scale) ([17]; reprinted with permission from © Elsevier 2010)
Table 2.3 Values of selection attributes

  Qualitative measure of selection attribute     Assigned value
  Exceptionally low                              0.0455
  Extremely low                                  0.1364
  Very low                                       0.2273
  Low                                            0.3182
  Below average                                  0.4091
  Average                                        0.5000
  Above average                                  0.5909
  High                                           0.6818
  Very high                                      0.7727
  Extremely high                                 0.8636
  Exceptionally high                             0.9545
 Ã
Pj;a1a2 ¼ Gj bj ða1Þ À bj ða2Þ ð2:25Þ
0 Pj;a1a2 1 ð2:26Þ
where G_j is a non-decreasing function of the observed deviation (d) between two alternatives 'a1' and 'a2' on attribute 'b_j'. In order to facilitate the selection of a specific preference function, six basic types were proposed [16, 19, 20]. The ''usual'' preference function is equal to the simple difference between the values of attribute 'b_j' for alternatives 'a1' and 'a2'. For the other preference functions, no more than two parameters (thresholds q, p, or s) have to be fixed [21]. The indifference threshold 'q' is the largest deviation considered negligible on that attribute; it is a small value with respect to the scale of measurement. The preference threshold 'p' is the smallest deviation considered decisive in the preference of one alternative over another; it is a large value with respect to the scale of measurement. The Gaussian threshold 's' is only used with the Gaussian preference function and is usually fixed as an intermediate value between the indifference and preference thresholds.
Fig. 2.2 Preference indices for a problem consisting of 3 alternatives and 4 attributes ([20]; reprinted with permission from © Taylor and Francis 2012)
If the decision maker specifies a preference function P_j and weight w_j for each attribute 'b_j' (j = 1, 2, …, M) of the problem, then the multiple attribute preference index Π_{a1a2} is defined as the weighted average of the preference functions P_j:

  Π_{a1a2} = Σ_{j=1}^{M} w_j P_{j,a1a2}                        (2.27)
Π_{a1a2} represents the intensity of preference of the decision maker for alternative 'a1' over alternative 'a2' when considering all the attributes simultaneously. Its value ranges from 0 to 1. This preference index determines a valued outranking relation on the set of alternatives. As an example, the schematic calculation of the preference indices for a problem consisting of 3 alternatives and 4 attributes is given in Fig. 2.2.
For the improved PROMETHEE outranking relations, the leaving flow, entering flow, and net flow for an alternative 'a' belonging to a set of alternatives A are defined by the following equations:

  Φ+(a) = Σ_{x∈A} Π_{ax}                                       (2.28)

  Φ−(a) = Σ_{x∈A} Π_{xa}                                       (2.29)

  Φ(a) = Φ+(a) − Φ−(a)                                         (2.30)
Φ+(a) is called the leaving flow, Φ−(a) is called the entering flow, and Φ(a) is called the net flow. Φ+(a) is the measure of the outranking character of 'a' (i.e. the dominance of alternative 'a' over all other alternatives) and Φ−(a) gives the outranked character of 'a' (i.e. the degree to which alternative 'a' is dominated by all other alternatives). The net flow, Φ(a), represents a value function, whereby a higher value reflects a higher attractiveness of alternative 'a'. The net flow values are used to indicate the outranking relationship between the alternatives: alternative 'a1' outranks 'a2' if Φ(a1) > Φ(a2), and 'a1' is said to be indifferent to 'a2' if Φ(a1) = Φ(a2).
For instance, for the problem of Fig. 2.2, the preference index of alternative 3 over alternative 1 is:

  Π_{31} = Σ_{j=1}^{4} w_j P_{j,31}
The proposed decision making framework using the improved PROMETHEE method provides a complete ranking of the alternatives from best to worst using the net flows. A computer program was developed in the present work in the MATLAB environment for the improved PROMETHEE calculations. Any number of alternatives and attributes can be considered, and the computation time required is less than that of the DEA method.
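The calculations above can be sketched in a few lines, using the linear preference function with an indifference zone as one of the six basic types; the decision data, weights, and q/p thresholds are illustrative, both attributes are taken as beneficial, and the weights would normally come from AHP.

```python
def preference(d, q, p):
    """Linear preference: 0 below q, 1 above p, interpolated in between.
    d is the deviation b_j(a1) - b_j(a2) on one attribute (Eq. 2.25)."""
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

def promethee_net_flows(decision, weights, q, p):
    """Net flows Phi(a) = Phi+(a) - Phi-(a) (Eqs. 2.27-2.30)."""
    N, M = len(decision), len(weights)
    # Eq. 2.27: multiple attribute preference indices Pi(a, b)
    pi = [[sum(weights[j] * preference(decision[a][j] - decision[b][j], q[j], p[j])
               for j in range(M)) if a != b else 0.0
           for b in range(N)] for a in range(N)]
    phi_plus = [sum(pi[a]) for a in range(N)]                  # leaving flows
    phi_minus = [sum(pi[x][a] for x in range(N)) for a in range(N)]  # entering flows
    return [fp - fm for fp, fm in zip(phi_plus, phi_minus)]

# Illustrative data: two beneficial attributes for three alternatives
m = [[0.8, 30.0], [0.6, 20.0], [0.9, 35.0]]
net = promethee_net_flows(m, weights=[0.6, 0.4], q=[0.05, 2.0], p=[0.2, 10.0])
ranking = sorted(range(len(net)), key=net.__getitem__, reverse=True)
```

Sorting by the net flow gives the complete ranking; note that the net flows always sum to zero over the set of alternatives.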
2.5 Improved ELimination Et Choix Traduisant
la REalité Method
ELECTRE is one of the widely accepted methods for multiple attributes decision
making in various fields of science and technology. However, only a few appli-
cations are found in the field of manufacturing, such as manufacturing system
selection [22], facility location selection [23], material selection [24, 25], and
vendor selection [13, 26, 27]. Furthermore, researchers have mainly focused on quantitative attributes and have not effectively considered fuzzy and/or linguistic attributes.
The outranking method ELimination Et Choix Traduisant la REalité (ELECTRE)
i.e. ELimination and Choice Expressing the Reality, was developed by Roy [28].
Like all outranking methods, ELECTRE proceeds to a pairwise comparison of
alternatives in each single attribute in order to determine partial binary relations
denoting the strength of preference of one alternative over the other. The ELECTRE
method is a highly efficient multiple attribute decision making method, which takes
into account the uncertainty and vagueness, which are usually inherent in data
produced by predictions and estimations. Three different threshold values are to be
defined for this purpose. The thresholds of preference (p), indifference (q), and veto
(v) have been introduced in the ELECTRE method, so that outranking relations are
not expressed mistakenly due to differences that are less important. These three
thresholds can be defined as follows:
Preference threshold (p): a difference in objective values of an attribute above which the decision maker strongly prefers one alternative over another for the given attribute. Alternative b is strictly preferred to alternative a in terms of attribute i if f_i(b) ≥ f_i(a) + p.

Indifference threshold (q): a difference in attribute values beneath which the decision maker is indifferent between two alternatives for the given attribute. Alternative b is indifferent to alternative a in terms of attribute i if f_i(b) ≤ f_i(a) + q.

Veto threshold (v): blocks the outranking relationship between alternatives for the given attribute. Alternative a cannot outrank alternative b if the performance of b exceeds that of a by an amount greater than the veto threshold, i.e. f_i(b) ≥ f_i(a) + v.
The ELECTRE method in its basic form can successfully deal with quantitative attributes. However, to deal with a qualitative attribute (i.e. when a quantitative value is not available), a ranked value judgment on a fuzzy conversion scale (Table 2.3) is used. Once a qualitative attribute is represented on a scale, the alternatives can be compared with each other on this attribute in the same manner as for quantitative attributes.
The methodology for decision making in the manufacturing environment using
improved ELECTRE method can be described as below:
2.5.1 Construction of the Decision Table
Step 1: Form a decision table using the information available regarding the alternatives and attributes. This step is similar to step 1 of the improved AHP method discussed in Sect. 2.1.1. In this book, the ELECTRE method is improved by incorporating a fuzzy conversion scale, because the basic ELECTRE method cannot deal with qualitative attributes.
2.5.2 Calculating the Weights of the Attributes Using AHP
Step 2: The basic ELECTRE method also lacks a systematic method of deciding the relative importance weights of the attributes. Hence, in this book, the AHP procedure is suggested for deciding the relative importance weights, as explained in Sect. 2.1.2.
2.5.3 Calculations Using ELECTRE for Final Ranking
Step 3: After calculation of weights, the operational implementation of the out-
ranking principles of improved ELECTRE is now described, assuming that all
attributes are to be beneficial (i.e. higher value is desired). If fj(a1) is defined as the
score of alternative ‘‘a1’’ on attribute j and wj represents the weight of attribute j,
the concordance index C(a1,a2) is defined as follows:
\[
C(a_1, a_2) = \frac{1}{W}\sum_{j=1}^{M} w_j\, c_j(a_1, a_2), \quad \text{where } W = \sum_{j=1}^{M} w_j \tag{2.31}
\]
where
\[
c_j(a_1, a_2) =
\begin{cases}
1, & \text{if } f_j(a_1) + q_j \ge f_j(a_2) \\
0, & \text{if } f_j(a_1) + p_j \le f_j(a_2) \\
\dfrac{p_j + f_j(a_1) - f_j(a_2)}{p_j - q_j}, & \text{otherwise}
\end{cases}
\qquad j = 1, 2, \ldots, M \tag{2.32}
\]
The concordance index C(a1, a2) indicates the relative dominance of alternative a1 over alternative a2, based on the relative importance weights of the relevant decision attributes. If any attribute is non-beneficial, the negatives of its objective values can be considered. To calculate discordance, a threshold called the veto threshold is defined. The veto threshold (v_j) allows the possibility of alternative a1 outranking a2 to be refused totally if, for any one attribute j, f_j(a2) ≥ f_j(a1) + v_j. The discordance index for each attribute j, d_j(a1, a2), is calculated as:
\[
d_j(a_1, a_2) =
\begin{cases}
0, & \text{if } f_j(a_1) + p_j \ge f_j(a_2) \\
1, & \text{if } f_j(a_1) + v_j \le f_j(a_2) \\
\dfrac{f_j(a_2) - f_j(a_1) - p_j}{v_j - p_j}, & \text{otherwise}
\end{cases}
\qquad j = 1, 2, \ldots, M \tag{2.33}
\]
The discordance index d_j(a1, a2) measures the degree to which alternative a1 is worse than a2. The essence of the discordance index is that any outranking of a2 by a1 indicated by the concordance index can be overruled if there is any attribute for which alternative a2 outperforms alternative a1 by at least the veto threshold. The final step in the model building phase is to combine these two measures to produce a measure of the degree of outranking; that is, a credibility index which assesses the strength of the assertion that "a1 is at least as good as a2". The credibility degree for each pair (a1, a2) ∈ A × A is defined as:
\[
S(a_1, a_2) =
\begin{cases}
C(a_1, a_2), & \text{if } d_j(a_1, a_2) \le C(a_1, a_2)\ \forall j \\
C(a_1, a_2) \displaystyle\prod_{j \in J(a_1, a_2)} \frac{1 - d_j(a_1, a_2)}{1 - C(a_1, a_2)}, & \text{otherwise}
\end{cases}
\tag{2.34}
\]
where J(a1, a2) is the set of criteria such that d_j(a1, a2) > C(a1, a2).
This concludes the construction of the outranking model. The next step in the outranking approach is to create the hierarchy of the alternative solutions from the elements of the credibility matrix. The rank hierarchy is determined by calculating the superiority ratio for each alternative from the credibility matrix. The numerator represents the total dominance of the specific alternative over the remaining alternatives, and the denominator the dominance of the remaining alternatives over the former. The numerator for each alternative, also known as the concordance credibility, is calculated as follows:
\[
\phi^{+}(a_1) = \sum_{a_2 \in A} S(a_1, a_2) \tag{2.35}
\]
The denominator for each alternative, i.e. the discordance credibility, is calculated as follows:
\[
\phi^{-}(a_1) = \sum_{a_2 \in A} S(a_2, a_1) \tag{2.36}
\]
Finally, the superiority ratio is obtained as:
\[
R(a_1) = \phi^{+}(a_1) / \phi^{-}(a_1) \tag{2.37}
\]
The alternatives are then arranged in descending order of their superiority ratio, as alternatives with higher values of the superiority ratio are preferred over the others.
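As an illustration, the outranking calculations of Eqs. (2.31)–(2.37) can be sketched in a few lines of Python (NumPy assumed available). This is a minimal sketch, not the book's implementation: all attributes are taken as beneficial, and any score matrix, weights, and thresholds passed in are hypothetical.

```python
import numpy as np

def electre_ranking(F, w, p, q, v):
    """Outranking calculations of Eqs. (2.31)-(2.37).

    F       : (N, M) matrix of scores f_j(a_i), all attributes beneficial
    w       : (M,) attribute weights (e.g. from AHP)
    p, q, v : (M,) preference, indifference and veto thresholds, with p > q
    """
    N, _ = F.shape
    W = w.sum()
    S = np.zeros((N, N))                         # credibility matrix
    for a in range(N):
        for b in range(N):
            if a == b:
                continue
            # per-attribute concordance c_j(a, b), Eq. (2.32)
            c = np.clip((p + F[a] - F[b]) / (p - q), 0.0, 1.0)
            C = (w * c).sum() / W                # concordance index, Eq. (2.31)
            # per-attribute discordance d_j(a, b), Eq. (2.33)
            d = np.clip((F[b] - F[a] - p) / (v - p), 0.0, 1.0)
            s = C                                # credibility S(a, b), Eq. (2.34)
            for dj in d:
                if dj > C:                       # only possible when C < 1
                    s *= (1.0 - dj) / (1.0 - C)
            S[a, b] = s
    phi_plus = S.sum(axis=1)                     # concordance credibility, Eq. (2.35)
    phi_minus = S.sum(axis=0)                    # discordance credibility, Eq. (2.36)
    R = phi_plus / phi_minus                     # superiority ratio, Eq. (2.37)
    return R, np.argsort(-R)                     # higher ratio = better rank
```

Non-beneficial attributes can be handled, as noted above, by negating their columns before calling the function.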
2.6 Improved COmplex PRoportional ASsessment Method
This section describes another decision making method known as COPRAS
(COmplex PRoportional ASsessment) method for decision making in the manu-
facturing environment.
COPRAS is one of the MADM methods for decision making in various fields of science and technology. The COPRAS method uses a stepwise procedure for ranking and evaluating the alternatives in terms of significance and utility degree. The success of the methodology is basically due to its simplicity and its particular ease of use. Successful applications of the COPRAS method have been reported in the literature in various fields of decision making, such as construction [29], sustainability evaluation [30], building construction [31, 29], road design [32], and education [33]. However, these researchers mainly focused on quantitative attributes and did not effectively consider qualitative attributes.
The various steps of improved COPRAS method presented in this book for
decision making in the manufacturing environment are described below:
2.6.1 Construction of the Decision Table
Step 1: Similar to step 1 of the improved AHP method discussed in Sect. 2.1.1, prepare a decision table which shows the data of the various available alternatives and the attributes affecting their selection. The COPRAS method is improved by including a fuzzy scale for converting qualitative data into quantitative data.
2.6.2 Calculating the Weights of the Attributes Using AHP
Step 2: This step of weight calculation is also similar to step 2 of the improved
PROMETHEE method discussed in Sect. 2.4.2. The AHP method is suggested in
the improved COPRAS method for systematically deciding the relative importance
weights of the attributes.
2.6.3 COPRAS Calculations for Final Ranking
Step 3: The procedure of the COPRAS method consists of the following steps:
• Preparation of the decision making matrix X:
\[
X = \begin{bmatrix}
m_{11} & m_{12} & \cdots & m_{1M} \\
m_{21} & m_{22} & \cdots & m_{2M} \\
\vdots & \vdots & \ddots & \vdots \\
m_{N1} & m_{N2} & \cdots & m_{NM}
\end{bmatrix} \tag{2.38}
\]
where N is the number of alternatives and M is the number of attributes.
• Normalization of the decision making matrix X. The normalized values of this matrix are calculated using the following formula:
\[
x_{ij} = \frac{m_{ij}}{\sum_{i=1}^{N} m_{ij}}, \quad i = 1, 2, \ldots, N \text{ and } j = 1, 2, \ldots, M \tag{2.39}
\]
where, j refers to the attribute and i to the alternative. After this step, the
normalized decision making matrix can be presented as:
\[
X = \begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1M} \\
x_{21} & x_{22} & \cdots & x_{2M} \\
\vdots & \vdots & \ddots & \vdots \\
x_{N1} & x_{N2} & \cdots & x_{NM}
\end{bmatrix} \tag{2.40}
\]
• Calculation of the weighted normalized decision matrix \(\hat{X}\). The weighted normalized values \(\hat{x}_{ij}\) are calculated as
\[
\hat{x}_{ij} = x_{ij} \cdot w_j, \quad i = 1, 2, \ldots, N \text{ and } j = 1, 2, \ldots, M \tag{2.41}
\]
where w_j is the significance (weight) of the jth attribute. After this step the weighted normalized decision making matrix is formed as:
\[
\hat{X} = \begin{bmatrix}
\hat{x}_{11} & \hat{x}_{12} & \cdots & \hat{x}_{1M} \\
\hat{x}_{21} & \hat{x}_{22} & \cdots & \hat{x}_{2M} \\
\vdots & \vdots & \ddots & \vdots \\
\hat{x}_{N1} & \hat{x}_{N2} & \cdots & \hat{x}_{NM}
\end{bmatrix} \tag{2.42}
\]
• Calculate the sums P_i of the attribute values for which larger values are preferable (i.e. beneficial attributes), for all the alternatives:
\[
P_i = \sum_{j=1}^{k} \hat{x}_{ij} \tag{2.43}
\]
where k is the number of attributes which must be maximized, i.e. beneficial attributes. (It is assumed that in the decision matrix the first columns are of beneficial attributes and the columns for non-beneficial attributes are placed afterwards.)
• Calculate the sums R_i of the attribute values for which smaller values are preferable (i.e. non-beneficial attributes), for all the alternatives:
\[
R_i = \sum_{j=k+1}^{M} \hat{x}_{ij} \tag{2.44}
\]
Hence (M − k) is the number of non-beneficial attributes, which must be minimized.
• Determine the minimal value of R_i:
\[
R_{\min} = \min_{i} R_i, \quad i = 1, 2, \ldots, N \tag{2.45}
\]
• Calculation of the relative weight (significance) of each alternative Q_i:
\[
Q_i = P_i + \frac{R_{\min} \sum_{i=1}^{N} R_i}{R_i \sum_{i=1}^{N} \left( R_{\min} / R_i \right)} \tag{2.46}
\]
Equation (2.46) can be written as follows:
\[
Q_i = P_i + \frac{\sum_{i=1}^{N} R_i}{R_i \sum_{i=1}^{N} (1/R_i)} \tag{2.47}
\]
• Determination of the optimality criterion K:
\[
K = \max_{i} Q_i, \quad i = 1, 2, \ldots, N \tag{2.48}
\]
• Determination of the priority of the alternative: the greater the significance (relative weight) Q_i, the higher the priority (rank) of the alternative. The alternative with Q_max has the highest degree of satisfaction.
• Calculation of the utility degree of each alternative:
\[
N_i = (Q_i / Q_{\max}) \times 100\,\% \tag{2.49}
\]
where Q_i and Q_max are the significances of the alternatives obtained from Eq. (2.47).
The improved COPRAS method presented in this section uses a stepwise ranking and evaluation procedure of the alternatives, using a fuzzy scale for qualitative attributes and calculating the relative importance weights of the attributes with the AHP method for better consistency in judgments. The ranking is determined by examining the utility degrees calculated from Eq. (2.49). A complete ranking can be obtained by arranging the alternatives in descending order of their utility degrees, as higher values of utility degree are preferred over lower ones. The proposed decision making framework using the improved COPRAS method thus provides a complete ranking of the alternatives from the best to the worst.
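The chain of calculations from Eq. (2.38) to Eq. (2.49) can be sketched compactly. The following Python function (NumPy assumed; any decision matrix and weights used with it are hypothetical) computes the utility degrees for a matrix whose first k columns are beneficial attributes:

```python
import numpy as np

def copras_rank(D, w, k):
    """Utility degrees N_i of Eq. (2.49).

    D : (N, M) decision matrix; the first k columns hold the beneficial
        attributes, the remaining columns the non-beneficial ones
    w : (M,) attribute weights (e.g. from AHP), summing to 1
    """
    X = D / D.sum(axis=0)              # Eq. (2.39): column-wise normalization
    Xw = X * w                         # Eq. (2.41): weighted normalized matrix
    P = Xw[:, :k].sum(axis=1)          # Eq. (2.43): sums over beneficial attributes
    R = Xw[:, k:].sum(axis=1)          # Eq. (2.44): sums over non-beneficial attributes
    Q = P + R.sum() / (R * (1.0 / R).sum())    # Eq. (2.47): relative significance
    return 100.0 * Q / Q.max()         # Eq. (2.49): utility degree in percent
```

The best alternative is the one whose utility degree equals 100 %.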
2.7 Improved Gray Relational Analysis Method
Gray relational analysis (GRA) is one of the derived evaluation methods based on
the concept of gray relational space (GRS). The GRA method is widely applied in
various areas, such as economics, marketing, and agriculture [34, 35].
The main procedure of GRA is firstly translating the performance of all alter-
natives into a comparability sequence. This step is called data pre-processing.
According to these sequences, a reference sequence (ideal target sequence) is
defined. Then, the gray relational coefficients between all comparability sequences and the reference sequence are calculated for different values of the distinguishing coefficient (n). Finally, based on these gray relational coefficients, the gray relational grade between the reference sequence and every comparability sequence is calculated. If an alternative gets the highest gray relational grade with the reference sequence, its comparability sequence is most similar to the reference sequence and that alternative would be the best choice [36].
The steps of improved GRA are described below [37].
Step 1: Data pre-processing
For a multiple attribute decision making problem having N alternatives and M
attributes, the general form of decision matrix is as shown in Table 1.1.
It may be mentioned here that the original GRA method can effectively deal
mainly with quantitative attributes. However, there exists some difficulty in the
case of qualitative attributes. In the case of a qualitative attribute (i.e. quantitative
value is not available); a ranked value judgment on a fuzzy conversion scale can be
adopted as explained in Sect. 2.4.
Each alternative Y_i = (m_i1, m_i2, …, m_iM) can be translated into the comparability sequence X_i = (x_i1, x_i2, …, x_ij, …, x_iM), where x_ij is the normalized value of m_ij for attribute j (j = 1, 2, …, M) of alternative i (i = 1, 2, …, N). After normalization, the decision matrix becomes the normalization matrix. The normalized values of m_ij are determined by the use of Eqs. (2.50)–(2.52), which are for beneficial type, non-beneficial type, and target value type attributes, respectively. These are described as follows [36]:
1. If the expectancy is larger-the-better (i.e. beneficial attribute), then it can be expressed by
\[
x_{ij} = \frac{m_{ij} - \min\{m_{ij}\}}{\max\{m_{ij}\} - \min\{m_{ij}\}} \quad \text{for } i = 1, 2, \ldots, N \text{ and } j = 1, 2, \ldots, M \tag{2.50}
\]
2. If the expectancy is smaller-the-better (i.e. non-beneficial attribute), then it can be expressed by
\[
x_{ij} = \frac{\max\{m_{ij}\} - m_{ij}}{\max\{m_{ij}\} - \min\{m_{ij}\}} \quad \text{for } i = 1, 2, \ldots, N \text{ and } j = 1, 2, \ldots, M \tag{2.51}
\]
3. If the expectancy is nominal-the-best (i.e. closer to the desired or target value), then it can be expressed by
\[
x_{ij} = 1 - \frac{|m_{ij} - m_j^{*}|}{\max\{\max\{m_{ij}\} - m_j^{*},\; m_j^{*} - \min\{m_{ij}\}\}} \tag{2.52}
\]
for i = 1, 2, …, N and j = 1, 2, …, M, where m_j^{*} is the desired (target) value of the jth attribute.
Step 2: Reference sequence
In a comparability sequence all performance values are scaled to [0, 1]. For an attribute j of alternative i, if the value x_ij, which has been processed by the data pre-processing procedure, is equal to 1 or nearer to 1 than the value for any other alternative, then the performance of alternative i is considered best for attribute j. The reference sequence X_0 is defined as (x_01, x_02, …, x_0j, …, x_0M) = (1, 1, …, 1, …, 1), where x_0j is the reference value for the jth attribute. The aim is to find the alternative whose comparability sequence is the closest to the reference sequence.
Step 3: Gray relational coefficient
Gray relational coefficient is used for determining how close xij is to x0j. The
larger the gray relational coefficient, the closer xij and x0j are. The gray relational
coefficient can be calculated by Eq. (2.53) [36].
\[
\gamma(x_{0j}, x_{ij}) = \frac{\Delta_{\min} + n \cdot \Delta_{\max}}{\Delta_{ij} + n \cdot \Delta_{\max}} \quad \text{for } i = 1, 2, \ldots, N \text{ and } j = 1, 2, \ldots, M \tag{2.53}
\]
In Eq. (2.53), γ(x_0j, x_ij) is the gray relational coefficient between x_ij and x_0j, and
\[
\Delta_{ij} = |x_{0j} - x_{ij}|,
\]
\[
\Delta_{\min} = \min\{\Delta_{ij};\ i = 1, 2, \ldots, N;\ j = 1, 2, \ldots, M\},
\]
\[
\Delta_{\max} = \max\{\Delta_{ij};\ i = 1, 2, \ldots, N;\ j = 1, 2, \ldots, M\},
\]
and n is the distinguishing coefficient, n ∈ (0, 1].
The distinguishing coefficient (n) is also known as the index of distinguishability: the smaller n is, the higher the distinguishability. It acts as the equation's "contrast control". The purpose of n is to expand or compress the range of the gray relational coefficient. Different values of n may lead to different solution results, so decision makers should try several values of n and analyze their impact on the GRA results [36]. In many situations, n takes the value of 0.5 because this value usually offers moderate distinguishing effects and good stability [38]. In this work, different n values are considered for the analysis.
Step 4: Gray relational grade
The measurement formula for quantification in GRS is called the gray relational
grade. The gray relational grade (gray relational degree) indicates the degree of sim-
ilarity between the comparability sequence and the reference sequence. It is a weighted
sum of the gray relational coefficients and it can be calculated using Eq. (2.54) [36].
\[
\Gamma(X_0, X_i) = \sum_{j=1}^{M} w_j \cdot \gamma(x_{0j}, x_{ij}) \quad \text{for } i = 1, 2, \ldots, N \tag{2.54}
\]
where \(\sum_{j=1}^{M} w_j = 1\).
The original GRA method does not specify any systematic way of deciding the relative importance weights of the attributes. Hence, in this method, the AHP procedure is suggested for deciding the relative importance weights of the attributes.
In Eq. (2.54), C(X0, Xi) is the gray relational grade between the comparability
sequence Xi and reference sequence X0. It represents the level of correlation
between the reference sequence and the comparability sequence. wj is the weight
of attribute j and usually depends on decision makers’ judgment or the structure of
the proposed problem. The gray relational grade indicates the degree of similarity
between the comparability sequence and the reference sequence. If an alternative
gets the highest gray relational grade with the reference sequence, it means that
comparability sequence is most similar to the reference sequence and that alter-
native would be the best choice [39].
In this book, several values of n are considered to find the rankings of the given alternatives, and each n gives its own ranking. To get the final GRA ranking, the "mode principle" is applied, which considers the effect of all n values; the "mode" is the value that occurs most often. Under this principle, the alternative that most often occupies rank 1 across the different n values is given final GRA rank 1. Similarly, the alternative that most often occupies rank 2 is given final GRA rank 2, and so on. Alternatives that have already received a final GRA ranking are not considered further when determining the next final GRA rank.
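Steps 1–4 above can be sketched as follows for a single value of n. This is a minimal Python sketch (NumPy assumed) handling only beneficial and non-beneficial attributes; the target-value normalization of Eq. (2.52) and the mode principle over several n values are omitted for brevity, and the example data are hypothetical.

```python
import numpy as np

def gra_grades(D, w, benef, n=0.5):
    """Gray relational grades of Eq. (2.54) for one distinguishing coefficient n.

    D     : (N, M) decision matrix
    w     : (M,) attribute weights (e.g. from AHP), summing to 1
    benef : (M,) boolean mask, True where the attribute is beneficial
    """
    lo, hi = D.min(axis=0), D.max(axis=0)
    # data pre-processing, Eqs. (2.50)-(2.51); attribute ranges assumed non-degenerate
    X = np.where(benef, (D - lo) / (hi - lo), (hi - D) / (hi - lo))
    delta = np.abs(1.0 - X)             # deviation from the reference sequence (1, ..., 1)
    dmin, dmax = delta.min(), delta.max()
    gamma = (dmin + n * dmax) / (delta + n * dmax)   # Eq. (2.53)
    return (w * gamma).sum(axis=1)      # Eq. (2.54)
```

Running the function for several n values and tallying the resulting rankings would then implement the mode principle described above.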
2.8 Improved Utility Additive Method
The purpose of this method is to assess the additive utility functions which
aggregate multiple criteria in a composite criterion, using the information given by
a subjective ranking on a set of stimuli or actions (weak order comparison judg-
ments) and the multiple criteria evaluations of these actions. It is an ordinal
regression method using LP to estimate the parameters of the utility function.
The model assessed by Utility Additive (UTA) is not a single utility function,
but is a set of utility functions, all of them being models consistent with the
decision maker’s a priori preferences. In order to assess such a set of utility
functions, an ordinal regression method is used. Using LP, it adjusts optimally
additive nonlinear utility functions so that they fit data which consist of multiple
criteria evaluations of some alternatives and a subjective ranking of these alter-
natives given by the decision maker.
The UTA method proposed by Jacquet-Lagreze and Siskos [40] aims at
inferring one or more additive value functions from a given ranking on a reference
set AR. The method uses special LP techniques to assess these functions so that the
ranking obtained through these functions on AR is as consistent as possible with
the given one. This LP model is solved and marginal utility values are obtained.
Then the utility value of each alternative is calculated; the higher the utility value, the better the alternative. The ranking based on the utility value is the ranking without considering the weights of the attributes, i.e. equal weightage is given to all attributes. To consider the weights of the attributes, the alternatives are chosen based on the weighted utility of the alternatives. The procedure of the UTA method is described below.
A set of alternatives, called ‘A’, is considered which is valued by a family of
attributes g = (g1, g2, …, gM). The method uses a classical operational attitude of
assessing a model of overall preference of an individual and leads to the aggre-
gation of all attributes into a unique criterion called a utility function U(g) [40].
\[
U(g) = U(g_1, g_2, \ldots, g_M) \tag{2.55}
\]
Let P be the strict preference relation and I the indifference relation. If g(a) = [g_1(a), g_2(a), …, g_M(a)] is the multiple attribute evaluation of an alternative a, then the following properties generally hold for the utility function U:
\[
U[g(a)] > U[g(b)] \Leftrightarrow aPb \tag{2.56}
\]
\[
U[g(a)] = U[g(b)] \Leftrightarrow aIb \tag{2.57}
\]
and the relation R = P ∪ I is a weak order.
The criteria (i.e. attributes) aggregation model in UTA is assumed to be an unweighted additive value function of the following form:
\[
U(g) = \sum_{j=1}^{M} u_j(g_j) \tag{2.58}
\]
where u_j(g_j) is the marginal utility of the attribute g_j for the given alternative, which is entirely determined by the attribute g_j. Let \(g_j^{*}\) and \(g_{j*}\) be, respectively, the most and least preferred values (grades) of attribute j.
The most common normalization constraints are the following:
\[
\sum_{j=1}^{M} u_j(g_j^{*}) = 1, \qquad u_j(g_{j*}) = 0 \quad \text{for all } j = 1, 2, \ldots, M \tag{2.59}
\]
On the basis of the above additive model and taking into account the preference conditions, the value of each alternative a ∈ A_R may be written as
\[
U'[g(a)] = \sum_{j=1}^{M} u_j[g_j(a)] + \sigma(a) \quad \forall a \in A_R \tag{2.60}
\]
where σ(a) is a potential error relative to the utility
\[
U[g(a)] = \sum_{j=1}^{M} u_j[g_j(a)] \tag{2.61}
\]
In order to estimate the corresponding marginal value functions in a piecewise linear form, Jacquet-Lagreze and Siskos [40] proposed the use of linear interpolation. For each attribute, the interval \([g_{j*}, g_j^{*}]\) is cut into \((\alpha_j - 1)\) equal intervals, and thus the end points \(g_j^{i}\) are given by the formula:
\[
g_j^{i} = g_{j*} + \frac{i - 1}{\alpha_j - 1}\,(g_j^{*} - g_{j*}) \quad \forall i = 1, 2, \ldots, \alpha_j \tag{2.62}
\]
The marginal value of an alternative a is approximated by linear interpolation, and thus, for \(g_j(a) \in [g_j^{i}, g_j^{i+1}]\):
\[
u_j[g_j(a)] = u_j(g_j^{i}) + \frac{g_j(a) - g_j^{i}}{g_j^{i+1} - g_j^{i}}\,\left[u_j(g_j^{i+1}) - u_j(g_j^{i})\right] \tag{2.63}
\]
The set of reference alternatives A_R = {a_1, a_2, …, a_N} is also rearranged in such a way that a_1 is the head of the ranking (best) and a_N its tail (worst). Since the ranking has the form of a weak order R, for each pair of consecutive alternatives (a_k, a_{k+1}) either \(a_k \succ a_{k+1}\) (preference) or \(a_k \sim a_{k+1}\) (indifference) holds. Thus, if
\[
\Delta(a_k, a_{k+1}) = U'[g(a_k)] - U'[g(a_{k+1})] \tag{2.64}
\]
then one of the following holds:
\[
\Delta(a_k, a_{k+1}) \ge \delta \ \text{ if } a_k \succ a_{k+1}, \qquad \Delta(a_k, a_{k+1}) = 0 \ \text{ if } a_k \sim a_{k+1} \tag{2.65}
\]
where δ is a small positive number chosen so as to discriminate significantly between two successive equivalence classes of R.
The marginal value functions are finally estimated by means of the following LP, in which the objective function depends on σ(a) and indicates the amount of total deviation:
\[
\begin{aligned}
\text{LP model:}\quad & \text{Minimize } F = \sum_{a \in A_R} \sigma(a) \\
\text{subject to:}\quad & \Delta(a_k, a_{k+1}) \ge \delta \ \text{ if } a_k \succ a_{k+1} \quad \forall k \\
& \Delta(a_k, a_{k+1}) = 0 \ \text{ if } a_k \sim a_{k+1} \quad \forall k \\
& u_j(g_j^{i+1}) - u_j(g_j^{i}) \ge 0 \quad \forall i \text{ and } j \\
& \sum_{j=1}^{M} u_j(g_j^{*}) = 1, \qquad u_j(g_{j*}) = 0 \quad \forall j \\
& u_j(g_j^{i}) \ge 0, \quad \sigma(a) \ge 0 \quad \forall a \in A_R,\ \forall i \text{ and } j
\end{aligned}
\tag{2.66}
\]
This LP model is solved and the marginal utility values are obtained. Then the utility value U(a) of each alternative is calculated; the higher the U(a) value, the better the alternative. The ranking based on U(a) is the ranking without considering the weights of the attributes. In this method, the improvement is made by incorporating the weights to obtain the ranks of the alternatives: the alternatives are chosen based on the weighted utility given in Eq. (2.67) rather than the plain utility value. An AHP procedure is suggested for deciding the relative importance weights of the attributes.
\[
\text{Weighted } U(a) = \sum_{j=1}^{M} w_j\, u_j[g_j(a)] \tag{2.67}
\]
Steps for solving MADM problems using the improved UTA method are as
follows:
1. Decision matrix: find the decision matrix. The attributes of the decision matrix
can be objective or subjective in nature. The subjective values of attributes are
to be converted into corresponding crisp value. In this method, to take care of
subjectiveness of attributes a seven point fuzzy scale can be used which sys-
tematically converts the subjective values of attributes into corresponding crisp
values as explained in Sect. 2.4.
2. Reference sequence of alternatives: get the reference sequence (A_R) of alternatives; it can be found based on \(\sum_{j=1}^{M} x_{ij}\), where x_ij is the normalized value of the jth attribute for the ith alternative, or it is decided by the decision maker(s).
3. Equations for utility values of the alternatives: divide the range for each attribute
in equal interval of parts and form the utility value equations for each alternative.
4. Mathematical formulation: formulate the mathematical model of the problem as
LP model.
5. Solution: solve the LP model to get the utility value of each alternative and then
get the weighted utility value of each alternative by using Eq. (2.67). If different
weights of attributes are given, then ranking of alternatives is based on the
weighted utility value of alternatives.
The UTA method is improved in this book by proposing a seven point fuzzy scale for the systematic conversion of qualitative attributes into corresponding quantitative values. Another improvement is the incorporation of the weights of the attributes and the determination of the ranks of the alternatives based on their weighted utility values.
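The LP of Eq. (2.66) and the weighted utility of Eq. (2.67) can be sketched under two simplifying assumptions: each attribute uses only two breakpoints (α_j = 2, so the marginal utilities are linear and the monotonicity constraints hold automatically), and the reference ranking is strict (no indifference pairs). SciPy's linprog is assumed available; the function name and any data used with it are illustrative, not the book's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def uta_weighted_utilities(Z, order, w, delta=0.05):
    """Sketch of the LP of Eq. (2.66) with alpha_j = 2, so each marginal
    utility is linear, u_j[g_j(a)] = u_j(g_j*) * z_aj, and only the
    endpoint values u_j(g_j*) are unknown.

    Z     : (N, M) matrix of normalized attribute levels z_aj in [0, 1]
    order : reference ranking A_R, best alternative first (strict preferences only)
    w     : (M,) AHP weights for the weighted utility of Eq. (2.67)
    """
    N, M = Z.shape
    n_var = M + N                       # unknowns: u_j(g_j*), then the errors sigma(a)
    c = np.concatenate([np.zeros(M), np.ones(N)])   # minimize F = sum of sigma(a)
    A_ub, b_ub = [], []
    for k in range(N - 1):              # Delta(a_k, a_{k+1}) >= delta for each pair
        a, b = order[k], order[k + 1]
        row = np.zeros(n_var)
        row[:M] = Z[a] - Z[b]
        row[M + a], row[M + b] = 1.0, -1.0   # + sigma(a_k) - sigma(a_{k+1})
        A_ub.append(-row)
        b_ub.append(-delta)
    A_eq = np.concatenate([np.ones(M), np.zeros(N)]).reshape(1, -1)  # Eq. (2.59)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * n_var)
    u_star = res.x[:M]                  # endpoint marginal utilities u_j(g_j*)
    U = Z @ u_star                      # unweighted utility U(a)
    return U, Z @ (w * u_star)          # and weighted utility, Eq. (2.67)
```

With more breakpoints per attribute, each u_j(g_j^i) becomes a separate LP variable and the monotonicity constraints of Eq. (2.66) must be added explicitly.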
2.9 VIKOR Method
The compromise ranking method, also known as VIKOR (VIšekriterijumsko
KOmpromisno Rangiranje), was introduced as an applicable technique to imple-
ment within MADM. The foundation for compromise solution was established by
Yu [41] and Zeleny [42] and later advocated by Opricovic and Tzeng [43–46]. The
compromise solution is a feasible solution, which is closest to the ideal solution,
and a compromise means an agreement established by mutual concessions.
The VIKOR method is improved in this book by introducing the analytical
hierarchy process (AHP) procedure for deciding the weights of relative importance
of the attributes. Another improvement made is the incorporation of a new seven
point fuzzy scale for the systematic conversion of qualitative attributes into cor-
responding quantitative values.
The multiple attribute merit for compromise ranking was developed from the
Lp-metric used in the compromise programming method [42].
\[
L_{p,i} = \left\{ \sum_{j=1}^{M} \left( w_j\, \frac{(m_{ij})_{\max} - m_{ij}}{(m_{ij})_{\max} - (m_{ij})_{\min}} \right)^{p} \right\}^{1/p}, \quad 1 \le p \le \infty;\ i = 1, 2, \ldots, N \tag{2.68}
\]
where N is the number of alternatives and M is the number of attributes. The value m_ij (for i = 1, 2, …, N; j = 1, 2, …, M) indicates the value of the jth attribute for the ith alternative.
Within the VIKOR method, L_{1,i} and L_{∞,i} are used to formulate the ranking measures E_i and F_i, respectively. The main procedure of the VIKOR method is described below:
Step 1: Decision matrix
The first step is to determine the objective, identify the pertinent evaluation attributes, and prepare a decision matrix. For a multiple attribute decision making problem having N alternatives and M attributes, the general form of the decision matrix is as shown in Table 1.1.
Step 2: Calculate the values of E_i and F_i
The values of the measures E_i and F_i are calculated using the formulae given below.
\[
E_i = \sum_{j=1}^{M} w_j\, \frac{(m_{ij})_{\max} - m_{ij}}{(m_{ij})_{\max} - (m_{ij})_{\min}}, \quad \text{for beneficial attributes} \tag{2.69}
\]
\[
E_i = \sum_{j=1}^{M} w_j\, \frac{m_{ij} - (m_{ij})_{\min}}{(m_{ij})_{\max} - (m_{ij})_{\min}}, \quad \text{for non-beneficial attributes} \tag{2.70}
\]
\[
F_i = \max_{j} \left\{ w_j\, \frac{(m_{ij})_{\max} - m_{ij}}{(m_{ij})_{\max} - (m_{ij})_{\min}} \right\}, \quad \text{for beneficial attributes} \tag{2.71}
\]
\[
F_i = \max_{j} \left\{ w_j\, \frac{m_{ij} - (m_{ij})_{\min}}{(m_{ij})_{\max} - (m_{ij})_{\min}} \right\}, \quad \text{for non-beneficial attributes} \tag{2.72}
\]
Step 3: Calculate the values of P_i
\[
P_i = v \left( \frac{E_i - E_{i,\min}}{E_{i,\max} - E_{i,\min}} \right) + (1 - v) \left( \frac{F_i - F_{i,\min}}{F_{i,\max} - F_{i,\min}} \right) \tag{2.73}
\]
where E_{i,max} and E_{i,min} are the maximum and minimum values of E_i, and F_{i,max} and F_{i,min} are the maximum and minimum values of F_i. Here v is introduced as the weight of the strategy of 'the majority of attributes'. Normally, the value of v is taken as 0.5; however, it can take any value from 0 to 1.
Step 4: Arrange the alternatives in ascending order according to the values of P_i, E_i, and F_i
By arranging the alternatives in ascending order of their P_i, E_i, and F_i values, three ranking lists are obtained. The compromise ranking list for a given v is obtained by ranking with the P_i measure. The best alternative ranked by P_i is the one with the minimum value of P_i.
Step 5: Compromise solution
For the given attribute weights, alternative A_k, which is the best ranked by the measure P, is proposed as a compromise solution if the following two conditions are satisfied [47, 4]:
Condition 1: "Acceptable advantage": P(A_l) − P(A_k) ≥ 1/(N − 1), where A_l is the second-best alternative in the ranking list by P.
Condition 2: "Acceptable stability in decision making": alternative A_k must also be the best ranked by E and/or F. This compromise solution is stable within a decision making process, which could be: "voting by majority rule" (when v > 0.5 is needed), "by consensus" (when v ≈ 0.5), or "with veto" (when v < 0.5).
If one of the conditions is not satisfied, then a set of compromise solutions is proposed, which consists of:
• Alternatives A_k and A_l if only condition 2 is not satisfied.
• Alternatives A_k, A_l, …, A_p if condition 1 is not satisfied, where A_p is determined by the relation P(A_p) − P(A_l) ≈ 1/(N − 1).
VIKOR is a helpful tool in MADM, particularly in situations where the decision maker is not able, or does not know how, to express a preference at the beginning of system design. The obtained compromise solution can be accepted by the decision makers because it provides a maximum "group utility" (represented by E_{i,min}) of the "majority", and a minimum individual regret (represented by F_{i,min}) of the "opponent".
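Steps 1–3 of the VIKOR procedure can be sketched as follows. This minimal Python sketch (NumPy assumed) computes E_i, F_i, and P_i; the compromise-solution checks of step 5 are left out, and any decision matrix, weights, and v supplied are hypothetical.

```python
import numpy as np

def vikor(D, w, benef, v=0.5):
    """E_i, F_i and P_i of Eqs. (2.69)-(2.73).

    D     : (N, M) decision matrix
    w     : (M,) attribute weights
    benef : (M,) boolean mask, True where the attribute is beneficial
    v     : weight of the strategy of 'the majority of attributes'
    """
    lo, hi = D.min(axis=0), D.max(axis=0)
    # normalized regret terms: 0 at the best value of an attribute, 1 at the worst
    reg = np.where(benef, (hi - D) / (hi - lo), (D - lo) / (hi - lo))
    E = (w * reg).sum(axis=1)           # Eqs. (2.69)-(2.70)
    F = (w * reg).max(axis=1)           # Eqs. (2.71)-(2.72)
    P = (v * (E - E.min()) / (E.max() - E.min())
         + (1 - v) * (F - F.min()) / (F.max() - F.min()))   # Eq. (2.73)
    return E, F, P                      # lowest P_i identifies the best-ranked alternative
```

Sorting the alternatives in ascending order of P (and of E and F) then yields the three ranking lists of step 4.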
2.10 Improved Ordered Weighted Averaging Method
Ordered weighted averaging (OWA) method is based on OWA operator. Yager [48]
introduced the OWA operator to provide a method for aggregating multiple inputs
that lie between the maximum and minimum operators. The aggregation of OWA
operator is generally composed of the following three steps [49]: (1) reorder the
input arguments (attribute values) in descending order, (2) determine the weights
associated with the OWA operator by using a proper method, and (3) utilize the
OWA weights to aggregate the reordered arguments (attribute values).
In the OWA method, the paired judgments on the alternatives are the decision maker's holistic judgments. The paired judgments can be obtained by comparing the attribute data for a pair of alternatives. For the given set of paired judgments, the problem is formulated as an LP model which gives the ordered weights of the attributes.
The combined goodness values of the alternatives are obtained using the ordered weights. The alternative with the highest combined goodness measure is considered the most preferred decision. The components of the input vector are to be ordered (in descending order) before multiplying them by the ordered weights. The OWA method is improved in this book to deal with subjective as well as objective attributes.
Steps of OWA method are given below:
Step 1: Decision matrix and its normalization
This step includes getting the measures of attributes for each alternative and the
normalization of the attribute data. The attributes may be objective or subjective.
The subjective attributes are converted into corresponding crisp scores as
explained in Sect. 2.4. The normalization is necessary to keep all the attributes on the same scale. For an MADM problem, if there are N alternatives and M attributes,
the ith alternative can be expressed as Yi = (mi1, mi2, …., mij, …., miM) in decision
matrix form, where mij is the performance value of attribute j (j = 1,2,….,M) for
alternative i (i = 1,2,….,N). The term Yi can be translated into the comparability
sequence Xi = (xi1, xi2,…, xij,…, xiM) using Eqs. (2.74) and (2.75), where xij is the
normalized value of mij for attribute j (j = 1,2,3,….,M) of alternative
i (i = 1,2,3,….,N).
Let x_ij be the normalized value of m_ij for attribute j of alternative i; then
\[
x_{ij} = \frac{m_{ij}}{\max_i (m_{ij})}, \quad \text{if the } j\text{th attribute is beneficial} \tag{2.74}
\]
\[
x_{ij} = \frac{\min_i (m_{ij})}{m_{ij}}, \quad \text{if the } j\text{th attribute is non-beneficial} \tag{2.75}
\]
Step 2: Paired judgments on the alternatives
Paired judgments on the alternatives are the decision maker's holistic judgments between the alternatives. The paired judgments can be obtained by comparing the attribute data for a pair of alternatives. If A = {A_1, A_2, …, A_N} is the set of alternatives, let θ ⊆ A × A denote the set of ordered pairs (i, j) in which i is designated as the preferred alternative.
Step 3: Find OWA weights consistent with the ordered pairs
An OWA operator of dimension M is a mapping f : R^M → R defined as
\[
f(x_1, \ldots, x_M) = \sum_{j=1}^{M} w_j b_j = w_1 b_1 + w_2 b_2 + \cdots + w_M b_M \tag{2.76}
\]
where b_j is the jth largest element in the set of inputs {x_1, x_2, …, x_M}, and {w_1, w_2, …, w_M} are the ordered weights. It is assumed that w_j ≤ 1 for all j and \(\sum_{j=1}^{M} w_j = 1\).
Vector W = {w1, w2, …, wM} is called order weights vector and f is the combined
goodness measure of a decision alternative if the inputs are its evaluations with
respect to M attributes. Any alternative with the highest f value will be considered
the most preferred decision. The components of input vector are to be ordered
(descending order) before multiplying them by the ordered weights [50, 51].
The solution would be consistent with the decision maker’s holistic judgments
between the alternatives if f(Ai) − f(Aj) > 0 for every a priori ordered pair (i, j) ∈ h.
Now, for all (i, j) ∈ h,

Σk=1..M (bik − bjk)·wk > 0 for wk ∈ W   (2.77)
where bik and bjk are the reordered values of the attributes of the alternatives Ai and
Aj respectively. The goal now is to determine the optimum ordered weights for
which the condition

Σk=1..M (bik − bjk)·wk ≥ ε   (2.78)

for every ordered pair (i, j) ∈ h is violated as minimally as possible, where ε is a
small arbitrary positive number that makes the problem tractable by LP. To attain
the objective ‘‘as minimally as possible’’, the auxiliary variable dij is introduced,
as given in Eq. (2.79):

Σk=1..M (bik − bjk)·wk + dij ≥ ε   (2.79)

for every ordered pair (i, j) ∈ h, and the sum of the auxiliary variables is minimized
[50].
The mathematical model is:

Minimize Σ(i,j)∈h dij
subject to Σk=1..M (bik − bjk)·wk + dij ≥ ε for all (i, j) ∈ h   (2.80)
Σk=1..M wk = 1
dij ≥ 0 and wk ≥ 0
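The LP of Eq. (2.80) can be set up directly in standard form; below is a minimal sketch assuming NumPy and SciPy are available (the function name and example data are ours). The decision variables are the M ordered weights followed by one auxiliary variable dij per ordered pair:

```python
import numpy as np
from scipy.optimize import linprog

def owa_weights(pairs, B, eps=1e-3):
    """Solve the LP of Eq. (2.80) for the ordered weights.

    pairs: list of (i, j) with alternative i preferred to alternative j.
    B:     N x M array of reordered (descending) attribute values b_ik.
    Returns (w, d): the ordered weights and the auxiliary violation variables.
    """
    B = np.asarray(B, dtype=float)
    M = B.shape[1]
    P = len(pairs)
    # Variables: w_1..w_M followed by d_ij for each ordered pair.
    c = np.concatenate([np.zeros(M), np.ones(P)])   # minimize sum of d_ij
    A_ub = np.zeros((P, M + P))
    for p, (i, j) in enumerate(pairs):
        # Sum_k (b_ik - b_jk) w_k + d_ij >= eps, rewritten as <= constraint:
        A_ub[p, :M] = -(B[i] - B[j])
        A_ub[p, M + p] = -1.0
    b_ub = -eps * np.ones(P)
    A_eq = np.zeros((1, M + P))
    A_eq[0, :M] = 1.0                               # sum of w_k equals 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (M + P), method="highs")
    return res.x[:M], res.x[M:]

# Example: 3 alternatives, 2 attributes, with a priori preferences A1 > A2 > A3.
# Here every constraint is easily satisfied, so all d_ij come out (near) zero.
w, d = owa_weights([(0, 1), (1, 2)], [[0.9, 0.5], [0.8, 0.4], [0.6, 0.3]])
```

When the judgments are mutually consistent the optimal dij are all zero; positive dij values flag which pairwise judgments cannot be honored by any single weight vector.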
Step 4: Find the combined goodness measures of the alternatives and obtain the
ranking.
The higher the combined goodness measure f(Ai), the higher the preference
given to that alternative; the rank of the ith alternative is based on its f(Ai) value.
The next chapter presents applications of the various improved MADM
methods discussed in this chapter to different decision making situations in the
manufacturing environment.
References
1. Saaty TL (1980) The analytic hierarchy process. McGraw Hill, New York
2. Saaty TL (2000) Fundamentals of decision making and priority theory with AHP. RWS
Publications, Pittsburgh
3. Triantaphyllou E (2000) Multi-criteria decision making methods: a comparative study.
Springer, London
4. Rao RV (2007) Decision making in the manufacturing environment using graph theory and
fuzzy multiple attribute decision making methods. Springer, London
5. Hwang CL, Yoon K (1981) Multiple attribute decision making: methods and applications: a
state-of-the-art survey. Springer, London
6. Charnes A, Cooper WW, Rhodes E (1978) Measuring the efficiency of decision making units.
Eur J Oper Res 2:429–444
7. Hashimoto A (1997) A ranked voting system using a DEA/AR exclusion model: A note. Eur
J Oper Res 97:600–604
8. Sarkis J (1999) A methodological framework for evaluating environmentally conscious
manufacturing programs. Comput Ind Eng 36:793–810
9. Sarkis J (2000) Comparative analysis of DEA as a discrete alternative multiple criteria
decision tool. Eur J Oper Res 123:543–557
10. Tone K (2001) A slacks-based measure of efficiency in data envelopment analysis. Eur J
Oper Res 130:498–509
11. Sun S (2002) Assessing computer numerical control machines using data envelopment
analysis. Int J Prod Res 40(9):2011–2039
12. Liu ST (2008) A fuzzy DEA/AR approach to the selection of flexible manufacturing
systems. Comput Ind Eng 54:66–76
13. Wang YM, Chin KS, Poon GKK (2008) A data envelopment analysis method with assurance
region for weight generation in the analytic hierarchy process. Decis Support Syst
45(4):913–921
14. Cooper W, Seiford L, Tone K (2002) Data envelopment analysis. Kluwer Academic, London
15. Andersen P, Petersen NC (1993) A procedure for ranking efficient units in data envelopment
analysis. Manage Sci 39(10):1261–1264
16. Brans JP, Vincke P, Mareschal B (1986) How to select and how to rank projects: The
PROMETHEE method. Eur J Oper Res 24:228–238
17. Rao RV, Patel BK (2010) A subjective and objective integrated multiple attribute decision
making method for material selection. Mater Des 31(10):4738–4747
18. Rao RV, Patel BK (2010) Decision making in the manufacturing environment using an
improved PROMETHEE method. Int J Prod Res 48(16):4665–4682
19. Brans JP, Mareschal B, Vincke P (1984) PROMETHEE: a new family of outranking methods
in multicriteria analysis. Oper Res 3:477–490
20. Marinoni O (2005) A stochastic spatial decision support system based on PROMETHEE. Int
J Geogr Inf Sci 19(1):51–68
21. Brans JP, Mareschal B (1994) The PROMCALC and GAIA decision support system for
MCDA. Decis Support Syst 12:297–310
22. Rehman A, Subash Babu A (2009) The evaluation of manufacturing systems using
concordance and disconcordance properties. Int J Serv Oper Manage 5(3):326–349
23. Leyva-Lopez JC, Fernandez-Gonzalez E (2003) A new method for group decision support
based on ELECTRE III methodology. Eur J Oper Res 148:14–27
24. Shanian A, Savadogo O (2007) A methodological concept for material selection of highly
sensitive components based on multiple criteria decision analysis. Expert Syst Appl
36:1362–1370
25. Shanian A, Milani AS, Carson C, Abeyaratne RC (2008) A new application of ELECTRE III
and revised Simos’ procedure for group material selection under weighting uncertainty.
Knowl-Based Syst 21(7):709–720
26. Montazer GA, Saremi HQ, Ramezani M (2009) Design a new mixed expert decision aiding
system using fuzzy ELECTRE III method for vendor selection. Expert Syst Appl
36(8):10837–10847
27. Sevkli M (2010) An application of the fuzzy ELECTRE method for supplier selection. Int J
Prod Res 48(12):3393–3405
28. Roy B (1991) The outranking approach and the foundations of ELECTRE methods. Theory
Decis 31:49–73
29. Zavadskas EK, Kaklauskas A, Turskis Z, Tamošaitienė J (2008) Selection of the effective
dwelling house walls by applying attributes values determined at intervals. J Civ Eng Manage
14(2):85–93
30. Viteikiene M, Zavadskas EK (2007) Evaluating the sustainability of Vilnius city residential
areas. J Civ Eng Manage 8(2):149–155
31. Kaklauskas A, Zavadskas EK, Raslanas S, Ginevicius R, Komka A, Malinauskas P (2006)
Selection of low-e windows in retrofit of public buildings by applying multiple criteria method
COPRAS: a Lithuanian case. Energy Build 38:454–462
32. Zavadskas EK, Kaklauskas A, Peldschus F, Turskis Z (2007) Multi-attribute assessment of
road design solution by using the COPRAS method. Baltic J Road Bridge Eng 2(4):195–203
33. Datta S, Beriha GS, Patnaik B, Mahapatra SS (2009) Use of compromise ranking method for
supervisor selection: a multi-criteria decision making (MCDM) approach. Int J Vocat Tech
Educ 1(1):7–13
34. Deng JL (1989) Introduction to grey system theory. J Grey Syst 1(1):1–24
35. Deng JL (2005) The primary methods of grey system theory. Huazhong University of Science
and Technology Press, Wuhan
36. Kuo Y, Yang T, Huang GW (2008) The use of grey relational analysis in solving multiple
attribute decision-making problems. Comput Ind Eng 55:80–93
37. Rao RV, Singh D (2010) An improved grey relational analysis as a decision making method
for manufacturing situations. Int J Decis Sci Risk Manage 2:1–23
38. Chang TC, Lin SJ (1999) Grey relational analysis of carbon dioxide emissions from industrial
production and energy uses in Taiwan. J Environ Manage 56(4):247–257
39. Fung CP (2003) Manufacturing process optimization for wear property of fiber-reinforced
polybutylene terephthalate composites with grey relational analysis. Wear 254:298–306
40. Jacquet-Lagreze E, Siskos Y (1982) Assessing a set of additive utility functions for
multicriteria decision making: The UTA method. Eur J Oper Res 10(2):151–164
41. Yu PL (1973) A class of solutions for group decision problems. Manage Sci 19:936–946
42. Zeleny M (1982) Multiple Criteria Decision Making. McGraw-Hill, New York
43. Opricovic S, Tzeng GH (2002) Multicriteria planning of post-earthquake sustainable
reconstruction. Comput-Aided Civ Infrastruct Eng 17:211–220
44. Opricovic S, Tzeng GH (2003) Fuzzy multicriteria model for post-earthquake land use
planning. Nat Hazards Rev 4:59–64
45. Opricovic S, Tzeng GH (2004) Compromise solution by MCDM methods: a comparative
analysis of VIKOR and TOPSIS. Eur J Oper Res 156:445–455
46. Opricovic S, Tzeng GH (2007) Extended VIKOR method in comparison with outranking
methods. Eur J Oper Res 178:514–529
47. Tzeng GH, Lin CW, Opricovic S (2005) Multi-criteria analysis of alternative-fuel buses for
public transportation. Energy Policy 33:1373–1383
48. Yager RR (1988) On ordered weighted averaging aggregation operators in multi-criteria
decision making. IEEE Trans Syst Man Cybern 18:183–190
49. Xu ZS (2005) An overview of methods for determining OWA weights. Int J Intell Syst
18:843–865
50. Ahn BS (2008) Preference relation approach for obtaining OWA operators weights. Int J
Approx Reason 47:166–178
51. Zarghami M, Szidarovszky F (2008) Fuzzy quantifiers in sensitivity analysis of OWA
operator. Comput Ind Eng 54:1006–1018