The Analytic Network Process (ANP) is a method for decision making and forecasting that accounts for dependence and feedback. It allows for alternatives and criteria to depend on each other in a network structure rather than a hierarchy. Feedback improves the priorities and makes predictions more accurate. The ANP involves comparing elements to obtain their priorities, organizing the criteria in a control hierarchy, and deriving a weighted supermatrix to represent the influence of elements on each other with respect to different criteria. The limiting supermatrix obtained from the weighted supermatrix can then be used to read off the desired priorities and make decisions by combining benefits, costs, opportunities, and risks.
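To make the limiting step concrete, here is a minimal sketch in R, assuming an invented 3x3 weighted supermatrix rather than one taken from the document; the columns must sum to 1 before powers are taken.

```r
# Minimal sketch: limiting priorities from a weighted (column-stochastic)
# supermatrix. The entries below are illustrative only.
W <- matrix(c(0.0, 0.6, 0.3,
              0.5, 0.0, 0.7,
              0.5, 0.4, 0.0),
            nrow = 3, byrow = TRUE)
colSums(W)  # each column should equal 1 before taking powers

# Raise W to successive powers until the entries stabilize; the columns of
# the limit matrix then hold the global priorities.
limit_supermatrix <- function(W, tol = 1e-9, max_iter = 10000) {
  Wk <- W
  for (i in seq_len(max_iter)) {
    Wnext <- Wk %*% W
    if (max(abs(Wnext - Wk)) < tol) return(Wnext)
    Wk <- Wnext
  }
  Wk
}
round(limit_supermatrix(W), 4)
```

For cyclic supermatrices the powers oscillate rather than settle, in which case the average of successive powers is read off instead.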
The Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP) are techniques for multi-criteria decision making. AHP structures decisions as a hierarchy, while ANP structures them as a network to account for interdependencies. Both use pairwise comparisons to measure weights and rank alternatives. The four major steps of ANP are: 1) model construction, 2) pairwise comparison matrices to derive local priorities, 3) supermatrix formation to obtain global priorities, and 4) selection of the best alternative based on overall priorities. The document then provides an example case study of applying ANP to analyze strengths, weaknesses, opportunities, and threats for an insurance company in Iran.
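Step 2 can be illustrated with the classical eigenvector calculation; the judgments in this sketch are invented, not taken from the case study.

```r
# Derive local priorities from a pairwise comparison matrix via the
# principal eigenvector (Saaty's method). Judgments are illustrative.
A <- matrix(c(1,   3,   5,
              1/3, 1,   2,
              1/5, 1/2, 1),
            nrow = 3, byrow = TRUE)
ev <- eigen(A)
w  <- Re(ev$vectors[, 1])  # principal eigenvector
w  <- w / sum(w)           # normalize to a priority vector
round(w, 3)                # local weights of the three compared elements
```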
This document provides an overview of the Analytic Network Process (ANP), which generalizes the Analytic Hierarchy Process (AHP) to problems that involve feedback and interdependence. The ANP uses a network structure rather than hierarchical levels and derives priority scales from matrices representing the influences between different elements. It involves constructing a supermatrix for each control criterion, weighting the supermatrices, and synthesizing the results. The ANP allows for inner and outer dependence between clusters of elements and can represent complex decision problems with feedback loops.
The document provides an overview of the Analytic Network Process (ANP), a general framework for dealing with decisions that does not assume independence between elements. The ANP uses a network structure to represent influences between elements and clusters, capturing feedback. It involves a control hierarchy to derive priorities for benefits, costs, opportunities, and risks, which are then synthesized. The ANP extends the Analytic Hierarchy Process (AHP) by allowing for dependence between elements.
Introduction to principal component analysis (PCA) - Mohammed Musah
This document provides an introduction to principal component analysis (PCA), outlining its purpose for data reduction and structure detection. It defines PCA as a linear combination of weighted observed variables. The procedure section discusses assumptions, such as normality, homoscedasticity, and linearity, that are evaluated prior to PCA. Requirements for performing PCA include variables measured at the metric level (interval or ratio), a sufficient sample size and case-to-variable ratio, and adequate correlations between the variables.
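As a quick illustration, PCA on metric variables takes only a few lines in R; the built-in iris measurements stand in for any suitable dataset.

```r
# PCA with base R: center and standardize, then inspect variance explained,
# loadings (weights of the observed variables), and component scores.
pca <- prcomp(iris[, 1:4], center = TRUE, scale. = TRUE)
summary(pca)   # proportion of variance per principal component
pca$rotation   # loadings defining each linear combination
head(pca$x)    # component scores for the first observations
```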
Canonical correlation analysis was used to detect potential bias in faculty promotion scoring at the American University of Nigeria (AUN). The analysis compared scores from three promotion committees and tested whether any committee showed bias that influenced candidates' promotability. The analysis found:
1) The analysis could discriminate between candidates deemed promotable and those deemed non-promotable, rejecting the hypothesis that it could not do so.
2) There were no significant differences in scoring between committees, rejecting the hypothesis that the analysis could not detect bias.
3) Only the president's committee showed a significant score-weight influence on promotability, rejecting the hypothesis that the analysis could not detect overbearing influences.
The study demonstrated that canonical correlation analysis can be an effective tool for unbiased faculty promotion evaluation; a small sketch of the technique follows.
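This sketch uses simulated score sets, not AUN data, with base R's cancor().

```r
# Canonical correlation between two hypothetical committees' score sets.
set.seed(1)
X <- matrix(rnorm(100 * 3), ncol = 3)    # committee A: 3 scoring items
Y <- X %*% matrix(runif(9), 3, 3) +      # committee B: related scores
     matrix(rnorm(100 * 3), ncol = 3)
cc <- cancor(X, Y)
cc$cor    # canonical correlations between the two score sets
cc$xcoef  # weights applied to committee A's items
```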
The document provides an overview of confirmatory factor analysis (CFA). It defines CFA and explains that CFA requires specifying the number of factors and which variables load on which factors before analysis. The document outlines the 6 stages of CFA: 1) defining constructs, 2) developing the measurement model, 3) designing a study, 4) assessing the measurement model, 5) specifying the structural model, and 6) assessing the structural model. It emphasizes that CFA confirms or rejects preconceived theories about relationships between observed and latent variables.
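A minimal CFA sketch, assuming the lavaan R package (the document does not name software): the factors and their indicators are specified in advance, exactly as CFA requires.

```r
# Two-factor CFA on lavaan's built-in HolzingerSwineford1939 data.
library(lavaan)
model <- '
  visual  =~ x1 + x2 + x3   # factor 1 and its indicators
  textual =~ x4 + x5 + x6   # factor 2 and its indicators
'
fit <- cfa(model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE)  # loadings plus fit indices (CFI, RMSEA)
```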
The document compares different statistical significance tests for evaluating information retrieval systems:
1) Randomization, bootstrap, and Student's t-test produced similar significance values and are recommended.
2) The Wilcoxon and sign tests produced different p-values and can incorrectly predict or fail to detect significant differences between systems.
3) The randomization test is recommended because it can use any evaluation metric and does not assume a specific distribution for the test statistic; a minimal sketch follows.
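The sketch below is a bare-bones paired randomization test; the per-query scores are invented.

```r
# Paired randomization test for two retrieval systems.
set.seed(42)
a <- c(0.31, 0.45, 0.28, 0.60, 0.52, 0.40, 0.33, 0.47)  # system A, per query
b <- c(0.25, 0.40, 0.30, 0.51, 0.45, 0.38, 0.29, 0.44)  # system B, per query
obs  <- mean(a - b)                                      # observed mean difference
perm <- replicate(10000, {
  flip <- sample(c(-1, 1), length(a), replace = TRUE)    # randomly swap systems per query
  mean(flip * (a - b))
})
mean(abs(perm) >= abs(obs))                              # two-sided p-value
```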
Sensitivity analysis is the study of how uncertainty in the inputs of a mathematical model propagates to uncertainty in the model's outputs. It is useful for understanding relationships between inputs and outputs, identifying important inputs, and reducing uncertainty. Sensitivity analysis typically involves running the model many times while varying inputs, and calculating sensitivity measures from the resulting outputs to determine which inputs most influence uncertainty in the outputs. Common methods include variance-based approaches and screening methods.
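A tiny Monte Carlo sketch of the idea: run a toy model many times with varied inputs, then rank the inputs by a simple variance-style measure. The model and ranges are invented for illustration.

```r
# Which input drives output uncertainty most? Here, x1 by construction.
set.seed(7)
n  <- 5000
x1 <- runif(n); x2 <- runif(n); x3 <- runif(n)           # sampled inputs
y  <- 4 * x1 + 2 * x2 + 0.1 * x3 + rnorm(n, sd = 0.1)    # toy model
sapply(list(x1 = x1, x2 = x2, x3 = x3),
       function(x) cor(x, y)^2)                          # crude first-order sensitivity
```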
There are two main statistical techniques for comparing systems: independent sampling and correlated sampling. When comparing two systems, confidence intervals should be used. There are three possible scenarios when computing confidence intervals, depending on whether the sampling is independent or correlated. When comparing several designs, the goal may be to estimate each performance measure, to compare against a present system, or to select the best design. The Bonferroni approach can be used to make statements about multiple alternatives while controlling the overall confidence level. Design-of-experiments tools such as factorial designs, screening, and response surface methods can help in understanding the effect of design alternatives on performance measures.
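A sketch of the Bonferroni idea mentioned above: for k alternatives at overall confidence 95%, build each interval at level 1 - 0.05/k. The performance differences below are simulated.

```r
# Bonferroni-adjusted confidence intervals for k = 3 design alternatives.
set.seed(3)
k <- 3
diffs <- replicate(k, rnorm(20, mean = 1.5, sd = 2), simplify = FALSE)
lapply(diffs, function(d)
  t.test(d, conf.level = 1 - 0.05 / k)$conf.int)  # intervals that exclude 0
                                                  # indicate a real difference
```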
Missing data occurs when no data value is stored for a variable in an observation, usually due to manual errors or incorrect measurements. There are three types of missing data: missing completely at random, missing at random, and missing not at random. Several methods can be used to deal with missing data, including reducing the dataset, treating missing values as a special value, replacing with the mean, replacing with the most common value, and using the closest fit to impute missing values. Proper handling of missing data is important to avoid bias and distortions in analyzing the data.
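Two of the simple treatments, sketched on a toy data frame: mean replacement for a numeric column and most-common-value replacement for a categorical one.

```r
# Simple single imputation in base R.
df <- data.frame(age  = c(23, NA, 31, 27, NA),
                 city = factor(c("A", "B", NA, "B", "B")))
df$age[is.na(df$age)] <- mean(df$age, na.rm = TRUE)       # mean imputation
mode_lvl <- names(which.max(table(df$city)))              # most common level
df$city[is.na(df$city)] <- mode_lvl
df
```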
This document discusses different methods for analyzing data from various types of research. It describes analyzing qualitative research data by identifying and sorting text segments into categories. Descriptive research data is analyzed using descriptive statistics like frequencies, averages and variability. Correlational research examines relationships between quantifiable variables using techniques like correlation. Multivariate research analyzes multiple independent variables simultaneously through methods such as multiple regression, discriminant analysis, and factor analysis. Experimental research data can be analyzed using t-tests to compare means between groups and one-way analysis of variance to examine differences between multiple groups.
A Formal Machine Learning or Multi Objective Decision Making System for Deter... - Editor IJCATR
Decision-making typically needs mechanisms for compromising among opposing norms. When multiple objectives are involved in machine learning, a vital step is to relate the weights of the individual objectives to system-level performance. Determining the weights of multiple objectives is an analysis process in its own right, and it has typically been treated as an optimization problem. However, our preliminary investigation has shown that existing methodologies for managing the weights of multiple objectives have some obvious limitations: the determination of weights is treated as a single optimization problem, the result supported by such an optimization is limited, and it can even be unreliable when knowledge about the multiple objectives is incomplete, for instance because of poor data. The constraints on weights are also discussed. Variable weights are natural in decision-making processes. Here, we develop a systematic methodology for determining variable weights of multiple objectives. The roles of weights in multi-objective decision-making and machine learning are analyzed, and the weights are determined with the help of a standard neural network.
Imputation Techniques For Market Research Datasets With Missing Values - Salford Systems
This document discusses techniques for handling missing data in statistical analysis and modeling. It compares different modeling approaches on three datasets - one on shoe preferences from a stated preference survey, one on diabetes risk factors, and one on homeowner characteristics. It finds that classification and regression tree (CART) and multivariate adaptive regression splines (MARS) techniques are preferred for imputing missing values when the data contains mixed variable types and interactions among variables. CART can sequentially impute missing values for each variable while preserving the multivariate structure of the data.
This document provides an overview of data mining techniques discussed in Chapter 3, including parametric and nonparametric models, statistical perspectives on point estimation and error measurement, Bayes' theorem, decision trees, neural networks, genetic algorithms, and similarity measures. Nonparametric techniques like neural networks, decision trees, and genetic algorithms are particularly suitable for data mining applications involving large, dynamically changing datasets.
There are three main areas of statistics: descriptive statistics, inferential statistics, and regression. Descriptive statistics describes data through measures of central tendency, such as the mean, median, and mode, and measures of dispersion, such as the range, variance, and standard deviation. Inferential statistics makes predictions and comparisons about populations from sample data, using techniques like t-tests and the general linear model. Regression analyzes the relationships between variables using methods such as analysis of variance, nonlinear regression, and rank correlation.
IRJET- Analyzing Voting Results using Influence Matrix - IRJET Journal
This document discusses analyzing voting results using an influence matrix. It proposes modeling voting outcomes as the result of an opinion dynamics process in which opinions evolve according to social influence, and formulates the estimation of the maximum a posteriori opinions and influence matrix from voting data. The influence matrix technique is also described for solving fluid flow problems numerically. It demonstrates vote prediction and dynamic visualization of results based on the estimated influence matrix. Future work could explore modeling stubborn agents' topic-dependent beliefs instead of treating them as independent.
Application of the analytic hierarchy process (AHP) for selection of forecast... - Gurdal Ertek
In this paper, we describe an application of the Analytic Hierarchy Process (AHP) for the ranking and selection of forecasting software. AHP is a multi-criteria decision making (MCDM) approach based on the pair-wise comparison of the elements of a given set with respect to multiple criteria. Even though there are applications of the AHP to software selection problems, we have not encountered a study that involves forecasting software. We started our analysis by filtering among forecasting software found on the Internet by undergraduate students as part of a course project. We then performed a second filtering step, reducing the number of software packages to be examined even further. Finally, we constructed the comparison matrices based upon the evaluations of three “semi-experts” and obtained a ranking of the selected forecasting software using the Expert Choice software. We report our findings and insights, together with the results of a sensitivity analysis.
http://research.sabanciuniv.edu.
The document discusses techniques for imputing missing data (<NA>) in R. It introduces common imputation methods like MICE, missForest, and Hmisc. MICE creates multiple imputations using chained equations to account for uncertainty, while missForest uses random forests to impute missing values. Hmisc offers functions to impute missing values using methods like mean, regression, and predictive mean matching. The goal is to understand missing data, learn imputation methods, and choose the best approach for a given dataset.
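A minimal run of the mice workflow described above, on R's built-in airquality data:

```r
# Five multiple imputations by chained equations, then one completed set.
library(mice)
imp <- mice(airquality, m = 5, method = "pmm", seed = 1, printFlag = FALSE)
completed <- complete(imp, 1)  # first of the five completed datasets
summary(completed$Ozone)       # no remaining NA values
```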
This document discusses classification using decision tree models. It begins with an introduction to classification, describing it as assigning objects to predefined categories. Decision trees are then overviewed as a powerful classifier that uses a hierarchical structure to split a dataset. Important parameters for evaluating model accuracy are covered, such as precision, recall, and AUC. The document also describes an exercise using the Weka tool to build decision trees on a dataset about term deposit subscriptions. It concludes with discussing uses of decision trees for applications like marketing and medical diagnosis.
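The document's exercise uses Weka; an analogous sketch in R with rpart, including the precision and recall computations mentioned, looks like this.

```r
# Classification tree on rpart's built-in kyphosis data.
library(rpart)
fit  <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
              method = "class")
pred <- predict(fit, kyphosis, type = "class")
tab  <- table(predicted = pred, actual = kyphosis$Kyphosis)
tab
precision <- tab["present", "present"] / sum(tab["present", ])  # of predicted positives
recall    <- tab["present", "present"] / sum(tab[, "present"])  # of actual positives
c(precision = precision, recall = recall)
```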
“Generalized Linear Models” is an online course offered at Statistics.com. Statistics.com is the leading provider of online education in statistics and offers over 100 courses in introductory and advanced statistics. Courses are typically taught by leading experts. Some course highlights:
A. Taught by renowned International Faculty (Not self-paced learning)
B. Instructor led and Peer learning
C. Flexible and Convenient schedule
D. Practical Application and Software skills
For more details please contact info@c-elt.com.
Website: www.india.statistics.com
Exploratory factor analysis (EFA) is a statistical technique used to identify the underlying relationships between measured variables. EFA can group variables into a smaller number of factors and reduce complexity in the data. The document discusses EFA methodology, including conducting EFA in SPSS, determining the number of factors, rotating factors, and interpreting results. Assumptions of EFA and different extraction and rotation methods are also covered.
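A compact EFA sketch with base R's factanal(), on data simulated to contain two underlying factors:

```r
# Maximum-likelihood EFA with varimax rotation.
set.seed(5)
f1 <- rnorm(300); f2 <- rnorm(300)                 # two latent factors
dat <- data.frame(v1 = f1 + rnorm(300, sd = 0.5),
                  v2 = f1 + rnorm(300, sd = 0.5),
                  v3 = f1 + rnorm(300, sd = 0.5),
                  v4 = f2 + rnorm(300, sd = 0.5),
                  v5 = f2 + rnorm(300, sd = 0.5),
                  v6 = f2 + rnorm(300, sd = 0.5))
fa <- factanal(dat, factors = 2, rotation = "varimax")
fa$loadings  # v1-v3 should load on one factor, v4-v6 on the other
```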
Final generalized linear modeling by idrees waris iugc - Id'rees Waris
This document discusses generalized linear models (GLM). It begins by introducing the topic and outlines the main points to be covered, including the history of GLM, assumptions for using GLM, and how to run GLM in SPSS. The document then covers the components of GLM, including the random, systematic, and link components. It discusses various distributions and link functions that can be used in GLM. The document concludes by providing an example of how to analyze shipping damage incident data using Poisson GLM in SPSS.
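The document runs its Poisson model in SPSS; an equivalent sketch in R, with simulated incident counts, shows the random and link components at work.

```r
# Poisson GLM: random component Poisson, link component log.
set.seed(9)
months    <- runif(50, 1, 60)                             # service months
incidents <- rpois(50, lambda = exp(0.1 + 0.03 * months)) # simulated counts
fit <- glm(incidents ~ months, family = poisson(link = "log"))
summary(fit)
exp(coef(fit)["months"])  # multiplicative change in expected incidents per month
```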
PCA is a technique used to simplify complex datasets by transforming correlated variables into a set of uncorrelated variables called principal components. It identifies patterns in high-dimensional data and expresses the data in a way that highlights similarities and differences. PCA is useful for analyzing data and reducing dimensionality without much loss of information. It works by rotating the existing axes to capture major variability in the data while ignoring smaller variations.
This document discusses factors that influence the selection of data analysis strategies and provides a classification of statistical techniques. It notes that the previous research steps, known data characteristics, statistical technique properties, and researcher background all impact strategy selection. Statistical techniques can be univariate, analyzing single variables, or multivariate, analyzing relationships between multiple variables simultaneously. Multivariate techniques are further classified as dependence techniques, with identifiable dependent and independent variables, or interdependence techniques examining whole variable sets. The document provides examples of common univariate and multivariate techniques.
The document compares techniques for handling incomplete data when using decision trees. It investigates the robustness and accuracy of seven popular techniques when applied to different proportions, patterns and mechanisms of missing data in 21 datasets. The techniques include listwise deletion, decision tree single imputation, expectation maximization single imputation, mean/mode single imputation, and multiple imputation. The results suggest important differences between the techniques, with multiple imputation and decision tree single imputation generally performing better than the others. The choice of technique depends on factors like the amount and nature of the missing data.
This document provides a tutorial for using the SuperDecisions software to build multi-criteria decision models. It explains how to install the software, build a decision hierarchy, make pairwise comparisons between criteria and alternatives, view results, and perform sensitivity analysis. The tutorial uses a sample model to select the best car out of three alternatives based on criteria like price, miles per gallon, prestige, and comfort. It demonstrates how to construct the hierarchy in SuperDecisions, enter pairwise comparison judgments, and view the resulting supermatrix before and after synthesis.
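The synthesis behind the car example can be sketched numerically; the weights below are illustrative, not the tutorial's actual judgments.

```r
# Weighted additive synthesis: criteria weights times local priorities.
criteria <- c(price = 0.45, mpg = 0.25, prestige = 0.10, comfort = 0.20)
local <- rbind(                       # rows: cars; columns: criteria
  CarA = c(0.60, 0.20, 0.30, 0.25),
  CarB = c(0.25, 0.50, 0.30, 0.35),
  CarC = c(0.15, 0.30, 0.40, 0.40))
overall <- local %*% criteria         # overall priority of each car
overall / sum(overall)                # normalized; the largest wins
```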
An Analytic Network Process Modeling to Assess Technological Innovation Capab... - drboon
To handle swift changes in the global environment, Technological Innovation Capabilities (TICs) are a crucial and unique strategy for increasing firms’ competitiveness. This research proposes a systematic framework for TIC assessment employing the Analytic Network Process (ANP) method to handle the complicated decision-making and to assess the interrelationships among the various evaluation factors; the relative importance weights were provided by industrial experts through pair-wise comparisons. With this TIC assessment model, high-level managers can easily obtain management information to rationalize the decision-making process, based on the criteria that most affect the firms’ competitive advantages and the highest-priority factors that need to be handled. The last section also presents the application of the TIC assessment to three Thai automotive parts firms as a case study.
ANP-GP Approach for Selection of Software Architecture Styles - Waqas Tariq
Abstract: Selection of software architecture for any system is a difficult task, as many different stakeholders are involved in the selection process. Stakeholders' views on quality requirements differ and at times may even conflict. Selecting appropriate styles for the software architecture is also important, as styles affect the characteristics of the software (e.g. reliability, performance). Moreover, styles influence how the software is built, since they determine the architectural elements (e.g. components, connectors) and the rules for integrating these elements into the architecture. Selecting the best style is difficult because multiple factors are involved, such as project risk, corporate goals, and limited availability of resources. This study therefore presents a method, called SSAS, for the selection of software architecture styles. This selection is a multi-criteria decision-making problem in which different goals and objectives must be taken into consideration. In this paper, we suggest an improved selection methodology, which reflects the interdependencies among evaluation criteria and alternatives using the analytic network process (ANP) within a zero-one goal programming (ZOGP) model. Keywords: Software Architecture; Selection of Software Architecture Styles; Multi-Criteria Decision Making; Interdependence; Analytic Network Process (ANP); Zero-One Goal Programming (ZOGP)
The document describes how to create an AHP ratings model. Key steps include:
1) Building a hierarchical model with criteria but not including alternatives.
2) Opening the ratings screen to evaluate alternatives against the criteria.
3) Adding the criteria as column headings, entering categories for criteria, pairwise comparing categories, and then rating each alternative against the criteria categories.
This document describes the Analytic Network Process (ANP) model for complex decision making. The ANP model includes the following key elements:
1. A top-level network with four merit nodes: benefits, opportunities, costs, and risks.
2. Subnetworks below each merit node containing control criteria hierarchies to evaluate each merit.
3. Additional subnetworks for high priority control criteria containing decision alternatives.
4. Pairwise comparisons to obtain weights for criteria, alternatives, and influences between elements. Limit matrices converge the results.
5. Sensitivity analysis identifies the best alternative for different priorities of the merit nodes such as benefits, costs, and risks. The document provides an example ANP model; a sketch of the merit synthesis follows.
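A hedged sketch of combining the four merit scores: both the additive form bB + oO - cC - rR and the multiplicative form BO/CR appear in the ANP literature, and all numbers here are invented.

```r
# Combining BOCR merit scores for two alternatives.
merit_w <- c(b = 0.3, o = 0.2, c = 0.3, r = 0.2)  # priorities of the merits
scores  <- rbind(                                  # per-merit synthesis results
  Alt1 = c(B = 0.60, O = 0.55, C = 0.40, R = 0.35),
  Alt2 = c(B = 0.40, O = 0.45, C = 0.60, R = 0.65))
additive <- scores %*% c(merit_w["b"], merit_w["o"],
                         -merit_w["c"], -merit_w["r"])
multiplicative <- (scores[, "B"] * scores[, "O"]) /
                  (scores[, "C"] * scores[, "R"])
cbind(additive, multiplicative)  # Alt1 should win under both forms
```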
ANP can be used to model market share by creating a network of factors that influence alternatives' market share. The model connects marketing, product, and other factors to shoe brand alternatives like Nike, Reebok, and Adidas. Pairwise comparisons of factors and alternatives produce priorities that estimate each brand's relative market share. Validating ANP results against external data demonstrates the model's ability to incorporate judgment.
This document provides a tutorial for using the SuperDecisions software to build Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP) decision models. The tutorial is divided into two parts:
1. Building AHP hierarchical decision models, which includes instructions on installing the software, creating a model with a goal, criteria and alternatives, making pairwise comparisons, and obtaining results.
2. Building ANP network decision models, which introduces concepts of ANP, demonstrates simple and complex network models using sample templates, and provides guidance on performing comparisons, obtaining results, and conducting sensitivity analysis.
The tutorial uses examples and screenshots to illustrate key software functions and the steps for constructing both hierarchical and network decision models.
The document introduces the Analytic Network Process (ANP), which is an extension of the Analytic Hierarchy Process (AHP) that allows for inner and outer dependence relationships between decision elements. In ANP, criteria are prioritized based on their importance to actual alternatives rather than to an abstract goal. Feedback comparisons are made between alternatives and criteria to establish priority vectors. A network supermatrix incorporates these dependencies to arrive at a final synthesis. The results may differ from AHP because ANP accounts for interdependencies between decision elements that influence priorities.
Here are the two structures:

Structure 1:
  Marketing Strategy (50%)
    West Side (25%)
      Store 1 (12.5%)
      Store 2 (12.5%)
    City Centre (25%)
      Store 3 (25%)
    East Side (25%)
      Store 4 (12.5%)
      Store 5 (12.5%)

Structure 2:
  Marketing Strategy (50%)
    Store 1 (10%)
    Store 2 (10%)
    Store 3 (10%)
    Store 4 (10%)
    Store 5 (10%)
He incorrectly assumes the criteria weights are arbitrary, when in fact they depend on the assumed results and the structure. No wonder he gets different results: AHP is being misapplied. The arithmetic behind the two structures is sketched below.
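The sketch reproduces the quoted percentages: a global weight is the product of the local weights down its branch, so the grouping itself moves weight around.

```r
# Structure 1 (grouped): each side carries 25%, split among its stores.
sides   <- c(West = 0.25, Centre = 0.25, East = 0.25)
grouped <- c(Store1 = sides[["West"]]   * 0.5,  # 12.5%
             Store2 = sides[["West"]]   * 0.5,  # 12.5%
             Store3 = sides[["Centre"]] * 1.0,  # 25%: alone under its side
             Store4 = sides[["East"]]   * 0.5,  # 12.5%
             Store5 = sides[["East"]]   * 0.5)  # 12.5%
# Structure 2 (flat): Marketing Strategy's 50% splits equally five ways.
flat <- rep(0.5 / 5, 5)                         # 10% each
rbind(grouped, flat)  # Store 3 doubles merely by being alone in its group
```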
This document provides a tutorial for using the SuperDecisions software to build decision models using the Analytic Hierarchy Process (AHP) or Analytic Network Process (ANP). It explains the basic concepts of clusters and elements, and how to create a hierarchical model by defining the goal, criteria and alternative clusters, adding elements to each cluster, and connecting the elements. The tutorial also provides an overview of performing pairwise comparisons to obtain priority weights in the decision models. The overall purpose is to demonstrate how to use the SuperDecisions software to structurally model decisions and obtain results using AHP or ANP.
This document provides a tutorial on using the SuperDecisions software for Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP) models. It outlines the basic process of creating clusters, nodes, links between nodes, making pairwise comparisons, and obtaining results. It demonstrates how to build a simple 3-level hierarchy to choose the best car based on criteria like prestige, price, and miles per gallon. It also discusses features like different comparison modes, improving inconsistency, sensitivity analysis, and building a ratings model instead of a relative model.
This document describes a study that uses the analytic hierarchy process (AHP) and analytic network process (ANP) to help solve supplier selection problems for a textile company. The methodology involves using AHP to identify tangible and intangible criteria, and ANP to analyze interdependencies between criteria. Pairwise comparisons are made between goals, criteria, sub-criteria and alternatives. Supermatrix calculations are performed to determine the best alternative, which in this case was Outlet B. The study aims to provide a real-world solution for a jeans manufacturer facing declining sales and market share.
University Cafeteria Operations using Business Process Model and Notation (B... - Moumie Soulemane
A student wants to eat at their university cafeteria. The student identifies themselves to the waiter, either physically or with their ID card. The waiter asks the student what they would like from the menu. After the student selects their food, the waiter serves it to them. The student eats their food, finishes, and cleans up after themselves by arranging furniture and collecting dishes. The waiter then cleans the table. The process is recorded in a database.
This is a simple example to illustrate the difference between customer orientation and Process Experience. It centres around the issue of which processes to manage - your internal processes or the customer processes. Will be continued in separate installments on blog.taraneon.com. Also available as a download at www.taraneon.com.
1) The seven pillars of the Analytic Hierarchy Process (AHP) are: ratio scales, paired comparisons, sensitivity of judgments, homogeneity and clustering, additive synthesis, rank preservation/reversal, and group decision making.
2) Ratio scales are central to the AHP. Ratio scales allow priorities to be multiplied and added, generalizing decision theory to dependent and interdependent decisions. The AHP derives ratio scales from pairwise comparisons using reciprocal judgments and the principal eigenvector.
3) Additive synthesis within a hierarchical structure transforms a multi-dimensional scaling problem into a uni-dimensional ratio scale, allowing alternatives to be compared across different criteria. Both rank preservation and rank reversal can occur, and the AHP accommodates both through its ideal and distributive synthesis modes; a consistency-check sketch follows.
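A sketch of the consistency check that accompanies the eigenvector derivation, using the same kind of invented 3x3 judgment matrix as earlier: CI = (lambda_max - n)/(n - 1), compared against Saaty's random index.

```r
# Consistency ratio for a pairwise comparison matrix.
A <- matrix(c(1,   3,   5,
              1/3, 1,   2,
              1/5, 1/2, 1), nrow = 3, byrow = TRUE)
n          <- nrow(A)
lambda_max <- Re(eigen(A)$values[1])   # principal eigenvalue
CI <- (lambda_max - n) / (n - 1)       # consistency index
RI <- 0.58                             # Saaty's random index for n = 3
CI / RI                                # CR; judgments usually accepted below 0.10
```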
The document provides an overview of factor analysis, including:
- Factor analysis is a statistical technique used to reduce a large number of variables into a smaller number of underlying factors or components according to patterns of correlation between variables.
- The two main types are exploratory factor analysis, which is used when the underlying factors are unknown, and confirmatory factor analysis, which is used to test hypotheses about a predetermined factor structure.
- Key steps in factor analysis include determining the appropriateness of the data, extracting factors using various criteria, rotating factors to improve interpretation, and interpreting the results including factor loadings and communalities.
1) The document discusses characteristics of good community leaders based on a study of 102 respondents. It identifies 4 key factors that describe good leadership: positive characteristics (vision, communication skills, character, personality), spontaneous decision characteristics (spending time with subordinates, fearless attitude), negative characteristics (not being punctual, honest, lacking monitoring skills), and spiritual characteristics (thinking about enlisting help rather than problems, not being aggressive).
2) Factor analysis was used to identify these 4 factors, which together account for about 61% of the variability in responses. Variables were grouped under each factor based on their loadings in the rotated component matrix.
3) The 4 factors provide an overview of the characteristics respondents associated with good community leadership.
This document provides an overview of factor analysis, including exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). EFA is used to discover the underlying constructs influencing a set of measured variables, while CFA tests whether a specified set of constructs is influencing responses as predicted. The document outlines the theoretical basis for factor analysis and provides details on performing EFA, including extracting factors, rotating solutions, and interpreting results. It also covers performing CFA, evaluating model fit, and comparing models. Combining EFA and CFA on separate data sets is recommended to generate and test theories.
Factor analysis is a technique used to reduce a large number of variables to a smaller number of factors. The basic assumption of factor analysis is that for a collection of observed variables there is a set of underlying variables, called factors, smaller in number than the observed variables, that can explain the interrelationships among them.
A Solution To The Random Assignment Problem On The Full Preference Domain - Joe Andelija
This document summarizes a research paper that proposes a new algorithm for solving the random assignment problem when agents' preferences allow for indifference between objects. The algorithm extends the probabilistic serial mechanism to the full preference domain by interpreting it as an iterative algorithm to compute maximum flow in a network. However, the authors also prove that on the full preference domain, it is impossible for any mechanism to find an assignment that is both envy-free and ordinally efficient while also satisfying a weak strategyproofness property.
The document discusses factor analysis as an exploratory and confirmatory multivariate technique. It explains that factor analysis is commonly used for data reduction, scale development, and evaluating the dimensionality of variables. Factor analysis determines underlying factors or dimensions from a set of interrelated variables. It reduces a large number of variables to a smaller number of factors. The key steps in factor analysis include computing a correlation matrix, extracting factors, rotating factors, and making decisions on the number of factors.
The Next Normal - Five Stages of Emergence - Malcolm Ryder
Patterns of activity within a community, market, or organization occur in persistent combinations that we refer to as "normal". When those patterns change enough, and a new set of interactions become more persistent, a new normal emerges. This discussion looks at how to foresee the emergence of a next normal.
This document discusses factor analysis, including:
- Factor analysis is used for data reduction, scale development, and assessing dimensionality. It identifies underlying factors or dimensions from a set of interrelated variables.
- The key steps in factor analysis are computing a correlation matrix, extracting factors using methods like principal component analysis, rotating factors, and determining the optimal number of factors.
- The document provides guidance on interpreting factor analysis results and deciding how many factors best represent the data.
This document provides an overview of exploratory factor analysis (EFA). EFA is used to uncover the underlying structure of a set of variables and identify latent variables that are inferred rather than directly observed. It allows investigation of factor structures and helps understand response patterns. The document discusses EFA assumptions, procedures for extraction and rotation of factors, and methods for determining the appropriate number of factors.
factor analysis (basics) for research .ppt - MsHumaJaved
Factor analysis is a technique used to reduce a large number of variables into fewer underlying factors. It examines whether survey items are correlated and "hang together" to represent an underlying construct. Factor analysis assumes there are a smaller set of underlying factors that can explain the interrelationships among observed variables. It is commonly used for data reduction, scale development, and assessing dimensionality. The process involves computing a correlation matrix, extracting factors, rotating factors, and determining the number of factors. Key decisions include which extraction method to use and how many factors to retain based on eigenvalues and scree plots.
Factor analysis is a statistical technique used to identify underlying factors that explain the pattern of correlations within a set of observed variables. It groups variables that are highly correlated with each other into factors to reduce data dimensionality. The key steps are extracting factors with eigenvalues greater than 1, evaluating factor loadings to interpret the grouping of variables, and rotating factors to maximize interpretability of the results. SPSS output includes correlation coefficients, KMO/Bartlett's tests of sampling adequacy, eigenvalues, communalities, scree plots, and rotated component matrices.
The document discusses exploratory factor analysis (EFA). EFA is used to identify patterns of correlations among observed variables and group them into fewer unobserved variables called factors. The key steps of EFA include data screening, factor extraction to identify factors, factor rotation for interpretability, and interpretation of results. The document also provides examples of important EFA concepts like communalities, eigenvalues, scree plot, factor loadings, and reliability. It summarizes an EFA conducted on variables related to consumer mobile phone purchasing behavior, which identified 4 factors: after sales services, looks and ranges, availability of parts and add-on technology, and brand and features.
The document discusses various methods for scaling and measuring constructs, including unidimensional and multidimensional scaling. It describes exploratory factor analysis and confirmatory factor analysis as two common methods for analyzing multidimensional data. Exploratory factor analysis is used to uncover the underlying factor structure without prior hypotheses, while confirmatory factor analysis tests a hypothesized factor structure.
Multiple discriminant analysis (MDA) is used to classify cases into groups when there are more than two categories. MDA derives multiple discriminant functions to discriminate between groups, with the first function accounting for the most variation between groups. The number of functions derived is usually equal to the number of groups minus one or the number of predictor variables, whichever is smaller. MDA outputs include standardized discriminant function coefficients, structure correlations, group centroids, and a classification matrix assessing prediction accuracy.
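A quick multi-group sketch with MASS::lda as a stand-in: three classes yield two discriminant functions, matching the groups-minus-one rule above.

```r
# Discriminant analysis on iris (3 groups, hence 2 functions).
library(MASS)
fit <- lda(Species ~ ., data = iris)
fit$scaling                      # coefficients of the two discriminant functions
pred <- predict(fit)
table(pred$class, iris$Species)  # classification matrix (prediction accuracy)
```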
The document discusses three key aspects of construct validity: substantive validity, structural validity, and external validity. It provides details on each aspect and how they relate to different phases of test construction and evaluation, including developing item pools, analyzing item properties, validating the measure against other criteria. The overall process involves conceptualizing the construct, creating items, analyzing relationships between items and criteria to provide evidence for the measure.
This study examines customer switching behavior for bank services. It identifies four dimensions of customer satisfaction: personal, financial, environmental, and convenience factors. The study develops and tests hypotheses about the relationship between customer satisfaction and switching likelihood. It finds that satisfaction is negatively related to switching, and that importance of a service mediates this relationship. A survey was administered and factor analysis was conducted. The results show the effect of importance varies across service categories, and that competition levels also influence switching decisions. The findings suggest banks should focus more on personal, atmospheric, and convenience factors to reduce switching.
Decision Support Systems in Clinical Engineering - Asmaa Kamel
This document provides an overview of the Analytic Hierarchy Process (AHP) decision support system and presents a case study on using AHP to make medical equipment scrapping decisions. The key points are:
1) AHP breaks down a complex decision problem into a hierarchy, then uses pairwise comparisons to determine criteria weights and rank alternatives. It was used in this case study to evaluate 9 dialysis machines for potential scrapping.
2) Criteria for the dialysis machine scrapping decision included age, performance, safety record, and costs. Data was incomplete so the study simulated different scenarios to examine the impact.
3) AHP derived local and global priorities to determine each machine's overall priority for scrapping.
The document provides an overview of regression analysis techniques, including linear regression and logistic regression. It explains that regression analysis is used to understand relationships between variables and can be used for prediction. Linear regression finds relationships when the dependent variable is continuous, while logistic regression is used when the dependent variable is binary. The document also discusses selecting the appropriate regression model and highlights important considerations for linear and logistic regression.
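A side-by-side sketch of the two cases, with mtcars as a stand-in dataset: lm() for a continuous outcome and glm() with a binomial family for a binary one.

```r
# Linear regression: continuous dependent variable (miles per gallon).
lin <- lm(mpg ~ wt + hp, data = mtcars)
summary(lin)$coefficients

# Logistic regression: binary dependent variable (transmission type).
logit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
exp(coef(logit))  # odds ratios per unit change in each predictor
```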
Factor analysis is a statistical method used to describe variability among observed correlated variables in terms of a potentially lower number of unobserved variables called factors. It identifies patterns of correlations between observed variables and groups variables that are highly correlated into factors. There are two main types: exploratory factor analysis, which is used to uncover the underlying structure of a relatively large set of variables without making prior assumptions, and confirmatory factor analysis, which tests whether measures of a construct load on factors as expected based on pre-existing theories. Factor analysis involves calculating factor loadings, eigenvalues, rotation methods, and determining the number of factors to extract.
Anp slideshow july_2001
1. The Analytic Network Process (ANP) for Decision Making and Forecasting with Dependence and Feedback
• With feedback the alternatives depend on the criteria as in a hierarchy but may also depend on each other.
• The criteria themselves can depend on the alternatives and on each other as well.
• Feedback improves the priorities derived from judgments and makes prediction much more accurate.
3. Feedback Network with Components Having Inner and Outer Dependence Among Their Elements
[Figure: a feedback network of components C1, C2, C3, C4 connected by arcs, with loops on individual components.]
An arc from component C4 to C2 indicates the outer dependence of the elements in C2 on the elements in C4 with respect to a common property.
A loop in a component indicates inner dependence of the elements in that component with respect to a common property.
4. Inner and Outer Dependence and the Control Hierarchy
In a network, the elements in a component may be people (e.g., individuals in the White House) and those in another component may also be people (e.g., individuals in Congress).
The elements in a component may influence other elements in the same component (inner dependence) and those in other components (outer dependence) with respect to each of several properties. We want to determine the overall influence of all the elements.
In that case we must organize the properties or criteria and prioritize them in the framework of a control hierarchy (or a network), perform comparisons and synthesize to obtain the priorities of these properties. We then derive the influence of elements in the feedback system with respect to each of these properties. Finally, we weight the resulting influences by the importance of the properties and add to obtain the overall influence of each element.
5. Main Operations of the ANP
• Relative measurement: Reciprocal relation
• Judgments: Homogeneity
• Hierarchy or Network: Structure of problem; the control hierarchy
• Priorities, Dominance and Consistency: Eigenvector
• Composition: Additive, to also handle dependence through the supermatrix
• Supermatrix: Dependence
• Neural Firing: Fredholm Kernel and Eigenfunctions
6. Inner and Outer Dependence and the Control Hierarchy (cont.)
Control hierarchies fall into four groups: Benefits, Costs, Opportunities, and Risks.
Benefits and costs measure the positive and negative contributions or importance of the alternatives if they happen — but will they happen? Opportunities and risks measure the likelihood that the alternatives will happen and make positive and negative contributions, respectively.
Each one is a hierarchy (or a network) by itself. The overall priorities of the alternatives with respect to each of these are then combined by forming the ratio
(Benefits x Opportunities) / (Costs x Risks)
to obtain their final overall priorities for a decision.
7. Weighting the Components
In the ANP one often needs to prioritize the influence of the components themselves on each other component to which the elements belong. This influence is assessed through paired comparisons with respect to a control criterion.
The priority of each component is used to weight the priorities of all the elements in that component. The reason for doing this is to enable us to perform feedback multiplication of priorities by other priorities in a cycle, an infinite number of times. The process would not converge unless the resulting matrix of priorities is column stochastic (each of its columns adds to one).
To see that one must compare clusters in real life, we note that if a person is introduced as the president it makes much difference, for example, whether he or she is the President of the United States or the president of a local labor group.
8. Functional - Structural Criteria: Independence - Dependence
1--Criteria completely independent from alternatives: Absolute Measurement, Intensities and Standards.
2--Criteria quasi-dependent on alternatives: Relative Measurement; rescale the weight of a criterion by the number of alternatives and their measurement (normalization).
3--Criteria completely dependent on alternatives: Feedback network; the Supermatrix.
9. Why ANP?
• The power of the Analytic Network Process (ANP) lies in its use of ratio scales to capture all kinds of interactions and make accurate predictions, and, even further, to make better decisions. So far, it has proven itself to be a success when expert knowledge is used with it to predict sports outcomes, economic turns, business, social and political decision outcomes.
• The ANP is a mathematical theory that makes it possible for one to deal systematically with all kinds of dependence and feedback. The reason for its success is the way it elicits judgments and uses measurement to derive ratio scales. Priorities as ratio scales are a fundamental kind of number amenable to performing the basic arithmetic operations of adding within the same scale and multiplying different scales meaningfully as required by the ANP.
10. Mutual Influence Among Several Elements
In order to distinguish among the influences of several homogeneous elements exerted on a single element, the number of influencing elements cannot be more than a few. The reason is that the element that is influenced must be able to distinguish between the various influences and respond to them in relative terms. If their number is large, the relative influence of each would be a small part of the total. On the other hand, if the number of elements is small, the relative influence of each one on any other single element would be large and distinguishable. A small change in the influence of any of these elements would not alter the receiving element's estimation of its overall influence. When the number of elements is large, they need to be put in different clusters.
Unidirectional Influence
A single powerful element may influence numerous other elements that do not influence it in return or influence each other. If many elements influence a single element without feedback, their number can be arbitrarily large.
11. The Questions to Answer About the Dominance of Influence
Two kinds of questions are encountered in the ANP:
1. Given a criterion, which element has greater influence (is more dominant) with respect to that criterion?
Use one of the following two questions throughout an exercise.
2. Given a criterion and given an element X in any cluster, which element in the same cluster or a different cluster has greater influence on X with respect to that criterion?
2'. Given a criterion and given an element X in any cluster, which element in the same or in a different cluster is influenced more by X with respect to that criterion?
12. Example of Control Hierarchy
[Figure: a control hierarchy with the goal "Optimum Function of a System" and the control criteria Environmental, Economic, and Social.]
Influence is too general a concept and must be specified in terms of particular criteria. It is analyzed according to each criterion and then synthesized by weighting with these priorities of the "control" criteria belonging to a hierarchy or to a system.
13. The Supermatrix
Take a control criterion. The priorities of the elements derived from paired comparisons with respect to that control criterion are arranged both vertically and horizontally according to components. The elements in each component are listed for that component in a matrix known as the Supermatrix. Each vector taken from a paired comparison matrix is part of the column of the supermatrix representing the impact with respect to the control criterion of the elements of that component on a single element of the same or another component listed at the top.
The Weighted Supermatrix
All the clusters are pairwise compared according to their influence on a given cluster X with respect to the control criterion. This yields a vector of priorities of the impact of all the clusters on X. Each component of this vector is used to weight all the elements in the block of column priorities of the supermatrix corresponding to the impact of the elements of that cluster on X. The process is repeated for all the clusters, resulting in a weighted supermatrix.
In each block of the supermatrix, a column is either a normalized eigenvector with possibly some zero entries, or all of its elements are equal to zero. In either case it is weighted by the priority of the corresponding cluster on the left. If it is zero, that column of the supermatrix must be normalized after weighting by the cluster's weights. This operation is equivalent to assigning a zero value to the cluster on the left when weighting a column of a block with zero entries and then re-normalizing the weights of the remaining clusters.
14. The Limiting Supermatrix
The weighted supermatrix is now column stochastic, and from it one then derives the limiting supermatrix. There are four major cases to consider in deriving the limiting supermatrix, depending on the simplicity or multiplicity of the principal eigenvalue and on the reducibility or irreducibility of the matrix.
How to Read Off the Answer
The desired priorities of the criteria and alternatives with respect to the corresponding control criterion can be read off the supermatrix as given, or they may be structurally adjusted according to the number of elements in each cluster and appropriately re-weighted.
How to Combine Benefits, Costs, Opportunities, Risks
One must first combine the supermatrices for the benefits, then for the costs, then for the opportunities and then for the risks by using the weights of the control criteria for each. One then takes the ratio
(benefits x opportunities) / (costs x risks)
for the alternatives and selects the alternative with the largest ratio.
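To make the mechanics concrete, here is a minimal sketch in Python (numpy assumed; the function names and the cluster layout are illustrative, not from the slides) of weighting a supermatrix by cluster priorities and taking its limit in the simplest case, where the weighted matrix is primitive and its powers converge:

```python
import numpy as np

def weight_supermatrix(W, cluster_weights, clusters):
    """Weight each block of the unweighted supermatrix W by the priority
    of the row cluster's influence on the column cluster, then renormalize
    each column so the result is column stochastic."""
    Wt = W.astype(float).copy()
    for i, rows in enumerate(clusters):
        for j, cols in enumerate(clusters):
            Wt[np.ix_(rows, cols)] *= cluster_weights[i, j]
    col_sums = Wt.sum(axis=0)
    col_sums[col_sums == 0] = 1.0      # leave all-zero columns untouched
    return Wt / col_sums

def limiting_supermatrix(Wt, power=200):
    """Raise the weighted supermatrix to a large power; for a primitive
    matrix the columns converge to the limiting priorities."""
    return np.linalg.matrix_power(Wt, power)
```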
19. The School Hierarchy as Supermatrix
Supermatrix of the school hierarchy (rows and columns: the Goal; the criteria Learning, Friends, School life, Vocational training, College preparation, Music classes; and the Alternatives A, B, C). Each column holds the priority vector of the elements that depend on it; the alternatives form an identity block at the bottom right.

                   Goal  Learn. Friends School Voc.  Coll.  Music  A    B    C
Goal               0     0      0       0      0     0      0      0    0    0
Learning           0.32  0      0       0      0     0      0      0    0    0
Friends            0.14  0      0       0      0     0      0      0    0    0
School life        0.03  0      0       0      0     0      0      0    0    0
Vocational train.  0.13  0      0       0      0     0      0      0    0    0
College prep.      0.24  0      0       0      0     0      0      0    0    0
Music classes      0.14  0      0       0      0     0      0      0    0    0
Alternative A      0     0.16   0.33    0.45   0.77  0.25   0.69   1    0    0
Alternative B      0     0.59   0.33    0.09   0.06  0.50   0.09   0    1    0
Alternative C      0     0.25   0.34    0.46   0.17  0.25   0.22   0    0    1

Limiting Supermatrix & Hierarchic Composition
In the limit, the Goal column carries the overall priorities of the alternatives; the other nonzero columns are unchanged.

                   Goal    Learn. Friends School Voc.  Coll.  Music  A    B    C
Goal               0       0      0       0      0     0      0      0    0    0
Learning           0       0      0       0      0     0      0      0    0    0
Friends            0       0      0       0      0     0      0      0    0    0
School life        0       0      0       0      0     0      0      0    0    0
Vocational train.  0       0      0       0      0     0      0      0    0    0
College prep.      0       0      0       0      0     0      0      0    0    0
Music classes      0       0      0       0      0     0      0      0    0    0
Alternative A      0.3676  0.16   0.33    0.45   0.77  0.25   0.69   1    0    0
Alternative B      0.3781  0.59   0.33    0.09   0.06  0.50   0.09   0    1    0
Alternative C      0.2543  0.25   0.34    0.46   0.17  0.25   0.22   0    0    1
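As a minimal check (Python with numpy assumed; not part of the slides), powers of this supermatrix reproduce the hierarchic composition 0.3676 / 0.3781 / 0.2543:

```python
import numpy as np

# School hierarchy supermatrix from the slide (columns: Goal, six
# criteria, alternatives A, B, C); each column is a priority vector.
W = np.zeros((10, 10))
W[1:7, 0] = [0.32, 0.14, 0.03, 0.13, 0.24, 0.14]  # criteria under the Goal
W[7:, 1] = [0.16, 0.59, 0.25]                     # alternatives under Learning
W[7:, 2] = [0.33, 0.33, 0.34]                     # ... under Friends
W[7:, 3] = [0.45, 0.09, 0.46]                     # ... under School life
W[7:, 4] = [0.77, 0.06, 0.17]                     # ... under Vocational training
W[7:, 5] = [0.25, 0.50, 0.25]                     # ... under College preparation
W[7:, 6] = [0.69, 0.09, 0.22]                     # ... under Music classes
W[7:, 7:] = np.eye(3)                             # identity block for A, B, C

limit = np.linalg.matrix_power(W, 3)              # already stable at the square
print(limit[7:, 0])                               # -> [0.3676 0.3781 0.2543]
```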
20. Criteria Independent from Alternatives
When the criteria do not depend on the alternatives, the latter are kept out of the supermatrix and are evaluated in the usual hierarchic way by the distributive or ideal modes, to make possible rank preservation or reversal as desired. The priorities of the criteria in terms of which the alternatives are evaluated hierarchically are taken from the limiting supermatrix. Here again benefit, cost, opportunity, and risk evaluations can be made to determine the ranks of the alternatives.
21. Structural Adjustment After & Before the Final Results
After computing the limiting results, if it is desired to group together elements from two or more clusters to determine their relative influence, the priorities of each cluster may be multiplied by the ratio of the number of elements in that cluster to the total number in the set of clusters, and then the entire set is normalized. One may think of doing such structural adjustment in the weighting process of the original supermatrix. There may be occasions where that is what should be done.
22. The Management of a Water Reservoir
Here we are faced with the decision to choose one of the possibilities of maintaining the water level in a dam at Low (L), Medium (M) or High (H), depending on the relative importance of Flood Control (F), Recreation (R) and the generation of Hydroelectric Power (E), respectively, for the three levels. The first set of three matrices gives the prioritization of the alternatives with respect to the criteria, and the second set those of the criteria in terms of the alternatives.
23. A Feedback System with Two Components
[Figure: a feedback network with two components. One component holds the criteria Flood Control, Recreation, and Hydroelectric Power; the other holds the alternatives Low Level, Intermediate Level, and High Level. Each component depends on the other.]
24. Comparing the alternatives with respect to each criterion.

1) Which level is best for flood control?
Flood Control   Low   Med   High  Eigenvector
Low             1     5     7     .722
Medium          1/5   1     4     .205
High            1/7   1/4   1     .073
Consistency Ratio = .107

2) Which level is best for recreation?
Recreation      Low   Med   High  Eigenvector
Low             1     1/7   1/5   .072
Medium          7     1     3     .649
High            5     1/3   1     .279
Consistency Ratio = .056

3) Which level is best for power generation?
Power Generation  Low   Med   High  Eigenvector
Low               1     1/5   1/9   .058
Medium            5     1     1/5   .207
High              9     5     1     .735
Consistency Ratio = .101
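The eigenvector and consistency ratio of each matrix can be derived as in the following sketch (Python with numpy assumed; not part of the slides), shown here reproducing the flood-control numbers above:

```python
import numpy as np

def priorities_and_cr(A, ri=0.58):
    """Principal eigenvector (normalized to sum to 1) and consistency
    ratio of a reciprocal pairwise comparison matrix A; ri is the
    random index (0.58 for 3x3 matrices)."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)
    return w, ci / ri

flood = np.array([[1,   5,   7],
                  [1/5, 1,   4],
                  [1/7, 1/4, 1]])
w, cr = priorities_and_cr(flood)
print(np.round(w, 3), round(cr, 3))   # -> [0.722 0.205 0.073] 0.107
```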
25. Comparing the criteria with respect to each alternative.

1) At Low Level, which attribute is satisfied best?
Low Level Dam        F     R     E     Eigenvector
Flood Control        1     3     5     .637
Recreation           1/3   1     3     .258
Hydroelectric Power  1/5   1/3   1     .105
Consistency Ratio = .033

2) At Intermediate Level, which attribute is satisfied best?
Intermediate Level Dam  F     R     E     Eigenvector
Flood Control           1     1/3   1     .200
Recreation              3     1     3     .600
Hydroelectric Power     1     1/3   1     .200
Consistency Ratio = .000

3) At High Level, which attribute is satisfied best?
High Level Dam       F     R     E     Eigenvector
Flood Control        1     1/5   1/9   .060
Recreation           5     1     1/4   .231
Hydroelectric Power  9     4     1     .709
Consistency Ratio = .061
26. The six eigenvectors were then introduced as columns of the following stochastic supermatrix.

     F     R     E     L     M     H
F    0     0     0     .637  .200  .060
R    0     0     0     .258  .600  .231
E    0     0     0     .105  .200  .709
L    .722  .072  .058  0     0     0
M    .205  .649  .207  0     0     0
H    .073  .279  .735  0     0     0

One must ensure that all columns sum to unity exactly.
27. The final priorities, both for the height of the dam and for the criteria, were obtained from the limiting power of the supermatrix. The components were not weighted here because the matrix is already column stochastic and would give the same limiting result for the ratios even if multiplied by the weighting constants. Its powers stabilize after a few iterations. We have

     F     R     E     L     M     H
F    0     0     0     .241  .241  .241
R    0     0     0     .374  .374  .374
E    0     0     0     .385  .385  .385
L    .223  .223  .223  0     0     0
M    .372  .372  .372  0     0     0
H    .405  .405  .405  0     0     0
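As a check (Python with numpy assumed; not part of the slides): this supermatrix is cyclic — criteria and alternatives alternate — so its even and odd powers differ, but the odd powers converge to the limit shown above:

```python
import numpy as np

A = np.array([[0.722, 0.072, 0.058],   # alternatives L, M, H under F, R, E
              [0.205, 0.649, 0.207],
              [0.073, 0.279, 0.735]])
B = np.array([[0.637, 0.200, 0.060],   # criteria F, R, E under L, M, H
              [0.258, 0.600, 0.231],
              [0.105, 0.200, 0.709]])

W = np.block([[np.zeros((3, 3)), B],
              [A, np.zeros((3, 3))]])  # column-stochastic supermatrix

L = np.linalg.matrix_power(W, 101)     # a large odd power
print(np.round(L[3:, 0], 3))           # alternatives: [0.223 0.372 0.405]
print(np.round(L[:3, 3], 3))           # criteria:     [0.241 0.374 0.385]
```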
28. The columns of each block of this matrix are identical, so that in the top right block we can read off the overall priority of each of the three criteria from any column, and read off the overall priorities of the three alternatives from any column of the bottom left block. It is clear from this analysis that, for the kind of judgments provided, there is preference for a high dam (priority .405), with hydroelectric power generation the most important criterion (priority .385).
29. Choosing a Car: Foreign or Domestic?
The alternatives American (A), European (E), and Japanese (J) are compared with respect to the criteria Cost (C), Repair Cost (R), and Durability (D), and the criteria are in turn compared with respect to each alternative.

Cost         A     E     J     Eigenvector
American     1     5     3     .637
European     1/5   1     1/3   .105
Japanese     1/3   3     1     .258
Consistency Ratio = .033

Repair Cost  A     E     J     Eigenvector
American     1     5     2     .582
European     1/5   1     1/3   .109
Japanese     1/2   3     1     .309
Consistency Ratio = .003

Durability   A     E     J     Eigenvector
American     1     1/5   1/3   .105
European     5     1     3     .637
Japanese     3     1/3   1     .258
Consistency Ratio = .033

American     C     R     D     Eigenvector
Cost         1     3     4     .634
Repair Cost  1/3   1     1     .192
Durability   1/4   1     1     .174
Consistency Ratio = .008

European     C     R     D     Eigenvector
Cost         1     1     1/2   .250
Repair Cost  1     1     1/2   .250
Durability   2     2     1     .500
Consistency Ratio = .008

Japanese     C     R     D     Eigenvector
Cost         1     2     1     .400
Repair Cost  1/2   1     1/2   .200
Durability   1     2     1     .400
Consistency Ratio = .000
31. Date and Strength of Recovery of U.S. Economy
The U.S. Holarchy of Factors for Forecasting Turnaround in Economic Stagnation
Primary factors (adjustment period required for turnaround):
• Conventional adjustment
• Economic restructuring
Subfactors:
• Under conventional adjustment: Consumption (C), Exports (X), Investment (I), Fiscal Policy (F), Monetary Policy (M), Confidence (K)
• Under economic restructuring: Financial Sector (FS), Defense Posture (DP), Global Competition (GC)
Time periods: 3 months, 6 months, 12 months, 24 months
32. Table 1: Matrices for subfactor importance relative to primary factors influencing the Timing of Recovery

Panel A: Which subfactor has the greater potential to influence Conventional Adjustment and how strongly?
                      C     E     I     K     F     M     Vector Weights
Consumption (C)       1     7     5     1/5   1/2   1/5   0.118
Exports (E)           1/7   1     1/5   1/5   1/5   1/7   0.029
Investment (I)        1/5   5     1     1/5   1/3   1/5   0.058
Confidence (K)        5     5     5     1     5     1     0.334
Fiscal Policy (F)     2     5     3     1/5   1     1/5   0.118
Monetary Policy (M)   5     7     5     1     5     1     0.343

Panel B: Which subfactor has the greater potential to influence Economic Restructuring and how strongly?
                          FS    DP    GC    Vector Weights
Financial Sector (FS)     1     3     3     0.584
Defense Posture (DP)      1/3   1     3     0.281
Global Competition (GC)   1/3   1/3   1     0.135
33. Table 2: Matrices for relative influence of subfactors on periods of adjustment (months) (Conventional Adjustment)
For each panel below, which time period is more likely to indicate a turnaround if the relevant factor is the sole driving force?

Panel A: Relative importance of targeted time periods for consumption to drive a turnaround
            3     6     12    24    Vec. Wts.
3 months    1     1/5   1/7   1/7   .043
6 months    5     1     1/5   1/5   .113
12 months   7     5     1     1/3   .310
24 months   7     5     3     1     .534

Panel B: Relative importance of targeted time periods for exports to drive a turnaround
            3     6     12    24    Vec. Wts.
3 months    1     1     1/5   1/5   .083
6 months    1     1     1/5   1/5   .083
12 months   5     5     1     1     .417
24 months   5     5     1     1     .417

Panel C: Relative importance of targeted time periods for investment to drive a turnaround
            3     6     12    24    Vec. Wts.
3 months    1     1     1/5   1/5   .078
6 months    1     1     1/5   1/5   .078
12 months   5     5     1     1/3   .305
24 months   5     5     3     1     .538

Panel D: Relative importance of targeted time periods for fiscal policy to drive a turnaround
            3     6     12    24    Vec. Wts.
3 months    1     1     1/3   1/5   .099
6 months    1     1     1/5   1/5   .087
12 months   3     5     1     1     .382
24 months   5     5     1     1     .432

Panel E: Relative importance of targeted time periods for monetary policy to drive a turnaround
            3     6     12    24    Vec. Wts.
3 months    1     5     7     7     .605
6 months    1/5   1     5     7     .262
12 months   1/7   1/5   1     1/5   .042
24 months   1/7   1/7   5     1     .091

Panel F: Expected time for a change of confidence indicators of consumer and investor activity to support a turnaround in the economy
            3     6     12    24    Vec. Wts.
3 months    1     3     5     5     .517
6 months    1/3   1     5     5     .305
12 months   1/5   1/5   1     5     .124
24 months   1/5   1/5   1/5   1     .054
34. Table 3: Matrices for relative influence of subfactors on periods of adjustment (months) (Economic Restructuring)
For each panel below, which time period is more likely to indicate a turnaround if the relevant factor is the sole driving force?

Panel A: Financial system restructuring time
            3     6     12    24    Vec. Wts.
3 months    1     1/3   1/5   1/7   .049
6 months    3     1     1/5   1/7   .085
12 months   5     5     1     1/5   .236
24 months   7     7     5     1     .630

Panel B: Defense readjustment time
            3     6     12    24    Vec. Wts.
3 months    1     1/3   1/5   1/7   .049
6 months    3     1     1/5   1/7   .085
12 months   5     5     1     1/5   .236
24 months   7     7     5     1     .630

Panel C: Global competition adjustment time
            3     6     12    24    Vec. Wts.
3 months    1     1     1/5   1/5   .078
6 months    1     1     1/5   1/5   .078
12 months   5     5     1     1/3   .305
24 months   5     5     3     1     .538
Table 4: Most likely factor to dominate during a specified time period
Which factor is more likely to produce a turnaround during the specified time period? (CA = Conventional Adjustment, R = Restructuring)

Panel A: 3 Months
     CA    R     Vec. Wts.
CA   1     5     .833
R    1/5   1     .167

Panel B: 6 Months
     CA    R     Vec. Wts.
CA   1     5     .833
R    1/5   1     .167

Panel C: 1 Year
     CA    R     Vec. Wts.
CA   1     1     .500
R    1     1     .500

Panel D: 2 Years
     CA    R     Vec. Wts.
CA   1     1/5   .167
R    5     1     .833
37. Synthesis/Results
When the judgments were made, the AHP framework was used to perform a synthesis which produced the following results. First, a meaningful turnaround in the economy would likely require an additional ten to eleven months, occurring during the fourth quarter of 1992. This forecast is derived from the weights generated in the first column of the limiting matrix in Table 6, coupled with the midpoints of the alternate time periods (so as to provide unbiased estimates):
.224 x 1.5 + .151 x 4.5 + .201 x 9 + .424 x 18 = 10.45 months from late December 1991/early January 1992
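The same expected-value computation as a short check (plain Python; not from the slides):

```python
weights   = [0.224, 0.151, 0.201, 0.424]   # limiting priorities of the periods
midpoints = [1.5, 4.5, 9.0, 18.0]          # months: midpoints of 3, 6, 12, 24

expected = sum(w * m for w, m in zip(weights, midpoints))
print(expected)   # -> 10.4565, the roughly 10.45 months cited above
```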
39. Table 7: Matrices for Primary and Subfactors for Strength of Recovery

Panel A: Which primary factor will be more influential in determining the Strength of Recovery?
                              CA    R     Vector Weights
Conventional Adjustment (CA)  1     1/5   .167
Restructuring (R)             5     1     .833

Panel B: Which subfactor is more important in influencing Conventional Adjustment?
                      C     E     I     K     F     M     Vector Weights
Consumption (C)       1     7     3     1     7     3     0.317
Exports (E)           1/7   1     1/5   1/5   1     1/5   0.037
Investment (I)        1/3   5     1     1/3   1/3   1/5   0.099
Confidence (K)        1     5     3     1     7     3     0.305
Fiscal Policy (F)     1/7   1     3     1/7   1     1/7   0.035
Monetary Policy (M)   1/3   7     5     1/3   7     1     0.207

Panel C: Which subfactor is more important in influencing Economic Restructuring?
                          FS    DP    GC    Vector Weights
Financial Sector (FS)     1     1/5   1/3   0.105
Defense Posture (DP)      5     1     3     0.637
Global Competition (GC)   3     1/3   1     0.258
CI = 0.037
40. Table 8: Matrices for relative influence of subfactors on Strength of Recovery (Conventional Adjustment)
For each panel below, which intensity is more likely to obtain if the designated factor drives the recovery?

Panel A: Relative likelihood of the strength of recovery if consumption drives the expansion
                 V     S     M     W     Vec. Wts.
Very Strong (V)  1     1     5     7     .423
Strong (S)       1     1     5     7     .423
Moderate (M)     1/5   1/5   1     3     .104
Weak (W)         1/7   1/7   1/3   1     .051
CI = 0.028

Panel B: Relative likelihood of the strength of recovery if exports drive the expansion
                 V     S     M     W     Vec. Wts.
Very Strong (V)  1     1     1/3   1/5   .095
Strong (S)       1     1     1/3   1/5   .095
Moderate (M)     3     3     1     1/3   .249
Weak (W)         5     5     3     1     .560
CI = 0.016

Panel C: Relative likelihood of the strength of recovery if investment drives the expansion
                 V     S     M     W     Vec. Wts.
Very Strong (V)  1     1     1/3   2     .182
Strong (S)       1     1     1/3   2     .182
Moderate (M)     3     3     1     6     .545
Weak (W)         1/2   1/2   1/6   1     .091
CI = 0.0

Panel D: Relative likelihood of the strength of recovery if confidence drives the expansion
                 V     S     M     W     Vec. Wts.
Very Strong (V)  1     1     3     5     .376
Strong (S)       1     1     3     5     .376
Moderate (M)     1/3   1/3   1     7     .193
Weak (W)         1/5   1/5   1/7   1     .054
CI = 0.101

Panel E: Relative likelihood of the strength of recovery if fiscal policy drives the expansion
                 V     S     M     W     Vec. Wts.
Very Strong (V)  1     1     1/5   1     .125
Strong (S)       1     1     1/5   1     .125
Moderate (M)     5     5     1     5     .625
Weak (W)         1     1     1/5   1     .125
CI = 0.0

Panel F: Relative likelihood of the strength of recovery if monetary policy drives the expansion
                 V     S     M     W     Vec. Wts.
Very Strong (V)  1     1     1/5   1/3   .084
Strong (S)       1     1     1/5   1/3   .084
Moderate (M)     5     5     1     7     .649
Weak (W)         3     3     1/7   1     .183
CI = 0.101
41. Table 9: Matrices for relative influence of subfactors on Strength of Recovery (Restructuring)
For each panel below, which intensity is more likely to obtain if the designated factor drives the recovery?

Panel A: Relative likelihood of the strength of recovery if the financial sector drives the expansion
                 V     S     M     W     Vec. Wts.
Very Strong (V)  1     1     1/3   1/5   .095
Strong (S)       1     1     1/3   1/5   .095
Moderate (M)     3     3     1     1/3   .249
Weak (W)         5     5     3     1     .560
CI = 0.016

Panel B: Relative likelihood of the strength of recovery if defense posture drives the expansion
                 V     S     M     W     Vec. Wts.
Very Strong (V)  1     1/3   1/5   1/7   .055
Strong (S)       3     1     1/3   1/5   .118
Moderate (M)     5     3     1     1/3   .262
Weak (W)         7     5     3     1     .565
CI = 0.044

Panel C: Relative likelihood of the strength of recovery if global competition drives the expansion
                 V     S     M     W     Vec. Wts.
Very Strong (V)  1     1     1/3   1/5   .101
Strong (S)       1     1     1/3   1/5   .101
Moderate (M)     3     3     1     1     .348
Weak (W)         5     5     1     1     .449
CI = 0.012

Table 10: Overall Results for Strength of Recovery
% GNP Growth:  Very Strong (5.5-6.5)  Strong (4.5-5.5)  Moderate (3-4.5)  Weak (2-3)
Priority:      0.108                  0.141             0.290             0.461
% GNP Recovery Rate*: 3.6
*The % GNP recovery rate is calculated using the relative strength of conventional adjustment and restructuring in Table 5, Panel A, each used to multiply the midpoints of % GNP Growth, and then summed.
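Table 5 is not reproduced in this excerpt, so the footnote's exact weighting cannot be re-run here; but applying the overall priorities of Table 10 directly to the midpoints of the growth ranges already reproduces the 3.6 figure (plain Python; the midpoints are assumed to be the centers of the printed ranges):

```python
priorities = [0.108, 0.141, 0.290, 0.461]   # Very Strong, Strong, Moderate, Weak
midpoints  = [6.0, 5.0, 3.75, 2.5]          # centers of the % GNP growth ranges

rate = sum(p * m for p, m in zip(priorities, midpoints))
print(round(rate, 1))                       # -> 3.6 (% GNP recovery rate)
```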
42. Hamburger Model
Estimating the market share of Wendy's, Burger King and McDonald's with respect to the single economic control criterion.
43. How to Pose the Question to Make Paired Comparisons
• One must answer questions of the following kind: given McDonald's (in the Alternatives cluster), is its economic strength derived more from Creativity or from Frequency (both in the Advertising cluster)? Conversely, given Creativity in the Advertising cluster, which is more dominant, McDonald's or Burger King?
• Then, again, by comparing the dominance impact of the clusters of Advertising and Quality of Food on the economic success of McDonald's, and by weighting and normalizing, we can relate the relative effect of elements in these different clusters.
49. Strategic Planning for the Future of the University of Pittsburgh Medical Center Using the Analytic Network Process (ANP)
50. Evaluate Strategies for the University Health Network to Compete in a Managed Care Environment
[Figure: three control models, each branching into three network submodels.]
• Benefits Control Model: Social Benefits, Economic Benefits, and Political Benefits network submodels.
• Costs Control Model: Social Costs, Economic Costs, and Political Costs network submodels.
• Risks Control Model: Social Risks, Economic Risks, and Political Risks network submodels.
51. List of Clusters and Elements (Not all the clusters appear in all 9 of the sub-models.)
Clients: Businesses- businesses that offer employees health care plans; Consumers- individuals who purchase their own health coverage; Insurers- companies who sell health insurance.
Competition: Competitors- other hospitals in Pittsburgh that compete with UPMC.
Convenience: Time- expended by the customer in scheduling, traveling, and the waiting room; Safety- safety of location.
Internal Stakeholders: Physicians- working for UPMC; Administrators- planners, managers, decision makers of UPMC; Alliances- outside organizations involved: insurers, hospitals, physician networks; Staff- non-physician, non-administrative personnel.
Public Relations: Public Relations- UPMC's public image: TV, newspaper, radio.
Quality: Specialty- quality of non-general health services; Diversity- range of health services offered by UPMC; Care- quality of general health services; Research- quality of research at UPMC.
*Strategies: Improve and Measure Outcomes- measure effectiveness to improve service; Capitation- negotiated insurance contracts with fixed payments; Develop a Primary Network- increase the number of primary care physicians; Internal Cost Reduction- cut facilities, employees, and high cost procedures; Teach Primary Care- shift focus from curative care to preventive care.
Variety of Services: Internal Medicine and Surgery- curative specialty services and hospitalization; Cancer Treatment- cancer treatment and cure; Outpatient Care- preventive care and short term medical treatments.
*Strategies appear in every sub-model as the alternatives of choice.
54. 1996 Playoffs: Pre-start Predictions (early December 1995)
All predictions were correct except for the two games noted below.

Wild Card Games      Benefits  Costs   B/C
Miami                0.701     0.612   1.145
Buffalo              0.745     0.590   1.263
Indianapolis         0.687     0.622   1.105
San Diego            0.660     0.650   1.015
Detroit              0.625     0.636   0.983
Philadelphia         0.695     0.580   1.198
Atlanta              0.590     0.612   0.964
Green Bay            0.785     0.515   1.524

Second Round         Benefits  Costs   B/C
Pittsburgh           0.740     0.581   1.274
Buffalo              0.704     0.605   1.164
Indianapolis         0.695     0.590   1.178
Kansas City          0.750     0.575   1.304
  (The Kansas City kicker missed 3 field goals and ruined the prediction. No way to know his ailments that day.)
Green Bay            0.755     0.590   1.280
San Francisco        0.751     0.585   1.284
  (Too close to determine the winner. Green Bay won.)
Philadelphia         0.732     0.641   1.142
Dallas               0.759     0.576   1.318

Divisional Playoffs  Benefits  Costs   B/C
Dallas               0.742     0.540   1.370
Green Bay            0.756     0.561   1.350
Pittsburgh           0.699     0.555   1.260
Indianapolis         0.741     0.598   1.240

The Super Bowl       Benefits  Costs   B/C
Dallas               0.761     0.728   1.045
Pittsburgh           0.748     0.735   1.018
55. 1997 Playoffs: Pre-start Predictions (early December 1996)
The first predictions were wrong on three games, which then required revision. [The slide also lists the Las Vegas pick for each game; those entries are not recoverable from the source formatting.]

AFC
Wild Cards       Benefits  Costs   B/C    Predicted Winner
Indianapolis     0.588     0.489   1.20
Pittsburgh       0.592     0.477   1.24   Pittsburgh
Jacksonville     0.601     0.501   1.20
Buffalo          0.594     0.487   1.22   Buffalo (wrong prediction)
Pittsburgh       0.609     0.479   1.27   Pittsburgh (wrong prediction)
New England      0.516     0.419   1.23
Buffalo          0.551     0.488   1.13
Denver           0.620     0.447   1.39   Denver (wrong prediction)
Conference Final
Pittsburgh       0.633     0.523   1.21
Denver           0.686     0.5318  1.29   Denver

NFC
Wild Cards       Benefits  Costs   B/C    Predicted Winner
Philadelphia     0.557     0.467   1.19
San Francisco    0.621     0.444   1.40   San Francisco
Minnesota        0.545     0.488   1.12
Dallas           0.571     0.476   1.20   Dallas
San Francisco    0.585     0.500   1.17
Green Bay        0.685     0.460   1.49   Green Bay
Dallas           0.522     0.494   1.06
Carolina         0.510     0.448   1.14   Carolina
Conference Final
Carolina         0.511     0.498   1.03
Green Bay        0.643     0.521   1.23   Green Bay

Super Bowl       Benefits  Costs   B/C    Predicted Winner
Green Bay        0.618     0.457   1.35   Green Bay
Denver           0.556     0.476   1.17
56. Post-start Playoff Predictions (before the Conference Finals)
Again an error in one game.

AFC              Benefits  Costs   B/C    Predicted      Actual
Jacksonville     0.545     0.488   1.12
Denver           0.612     0.447   1.37   Denver         Jacksonville
Conference Final
Jacksonville     0.576     0.515   1.12
New England      0.645     0.519   1.24   New England    New England
Super Bowl
New England      0.627     0.554   1.13
Green Bay        0.653     0.506   1.29   Green Bay
58. Benefits Supermatrix (Dallas vs. Green Bay)
Elements, grouped by cluster, with their local and global priorities:

Cluster    Element              Local    Global
Offense    QB Ability           0.0864   0.0297
Offense    Running              0.9136   0.3140
Emotions   Play Above Ability   0.0309   0.0037
Emotions   Coaching             0.1962   0.0235
Emotions   Emotional State      0.7724   0.0923
Outside    Home Field           0.1055   0.0433
Outside    Road Ahead           0.8945   0.3670
Teams      Dallas               0.9693   0.1227
Teams      Green Bay            0.0308   0.0039

Cluster weights (each column gives the priorities of the row clusters' influence on the column cluster):

           Offense  Emotions  Outside  Teams
Offense    0.0000   0.2449    0.6442   0.7172
Emotions   0.2176   0.0000    0.0852   0.1947
Outside    0.0914   0.0902    0.0000   0.0881
Teams      0.6910   0.6648    0.2706   0.0000

[The element-level supermatrix entries (columns of 0/1 and 0.75/0.25-type vectors) are not recoverable from the source formatting.]
59. Weighted Supermatrix and Limiting Benefits Supermatrix
The weighted supermatrix is obtained by multiplying each block of the benefits supermatrix by the cluster weights of slide 58. [Its individual entries are not recoverable from the source formatting.]
Limiting Benefits Supermatrix: in the limit every column is identical and equal to the global priority vector of the elements:

Element              Limiting priority
QB Ability           0.0297
Running              0.3140
Play Above Ability   0.0037
Coaching             0.0235
Emotional State      0.0923
Home Field           0.0433
Road Ahead           0.3670
Dallas               0.1227
Green Bay            0.0039
60. Costs Supermatrix (Dallas vs. Green Bay)
Elements with their local and global priorities:

Element               Local    Global
Road Ahead            0.4034   0.1529
Immature Players      0.0000   0.0000
Not Full Strength     0.5966   0.2261
Cinderella            0.0041   0.0011
Play Beyond Ability   0.7278   0.2002
Past Failures         0.2683   0.0738
Mental State          0.0673   0.0121
Weather               0.9332   0.1683
Dallas                1.0002   0.1653
Green Bay             0.0000   0.0000

Cluster weights (each column gives the priorities of the row clusters' influence on the column cluster):

           Offense  Emotions  Outside  Teams
Offense    0.0000   0.3614    0.6267   0.7172
Emotions   0.0877   0.0000    0.0936   0.1947
Outside    0.1392   0.0650    0.0000   0.0881
Teams      0.7731   0.5736    0.2797   0.0000

[The element-level supermatrix entries are not recoverable from the source formatting.]
61. Weighted Supermatrix and Limiting Costs Supermatrix
The weighted costs supermatrix is obtained from the costs supermatrix and the cluster weights of slide 60. In the limit every column is identical and equal to the global priority vector of the elements:

Element               Limiting priority
Road Ahead            0.1529
Immature Players      0.0000
Not Full Strength     0.2261
Cinderella            0.0011
Play Beyond Ability   0.2002
Past Failures         0.0738
Mental State          0.0212
Weather               0.1683
Dallas                0.1653
Green Bay             0.0000
62. Benefits Intensity Priorities
(GB and D mark the intensity level at which Green Bay and Dallas were rated.)
Quarterback (0.030): Average (0.091), Good (0.281), High Ability (0.691) GB,D
Running Game (0.314): Average (0.084), Good (0.211) GB, High Ability (0.705) D
Play Above Potential (0.004): Average (0.075), Good (0.229) D, High Play Level (0.696) GB
Coaching Ability to Inspire (0.023): Not A Lot (0.078), Somewhat (0.205) D, Heroic (0.717) GB
Emotional State (0.092): Apathy (0.082), Mediocre (0.236), Excitement (0.682) GB,D
Home Field Advantage (0.043): Neutral (0.105), Some Effect (0.258) GB, Significant Effect (0.637) D
The Road Ahead (0.367): No Effect (0.082), Some Effect (0.236) D, Very Confident (0.682) GB
Dallas' Effect on the Ultimate Outcome (0.123): Low Effect (0.094), Medium (0.280), Greatly Influenced (0.627) GB,D
Green Bay's Effect on the Ultimate Outcome (0.004): Not Much (0.105), Medium (0.258), Greatly Influenced (0.637) GB,D

Costs Intensity Priorities
The Road Ahead (0.153): Low Effect (0.085), Somewhat (0.271) GB,D, High Effect (0.644)
Not at Full Strength (0.226): Few Injuries (0.091), Some Injuries (0.218), Big Injury Problems (0.644) GB,D
Playing Beyond Ability (0.200): Not a Factor (0.094) GB,D, May Falter (0.288), Vulnerable (0.627)
Past Failures (0.074): Good History (0.082) GB, Mixed Past (0.236) D, Can't Get It Done (0.682)
Mental State of Preparedness (0.012): Ready (0.122) GB, May Be Hurt (0.230) D, Unready (0.648)
Cinderella Team (0.001): Not Cinderella (0.082) GB,D, Good Team Lucky (0.236), It's Midnight (0.682)
Weather Sensitivity (0.168): Anything Goes (0.095) D, Small Sensitivity (0.250) GB, High Sensitivity (0.655)
Dallas' Effect (0.165): Small (0.163), Medium (0.297), High (0.540) GB,D
Green Bay's Effect (0.000): Small (0.105) D, Medium (0.258), Big Effect (0.637) GB
Immature Players (0.000): Veterans (0.082) D, Some Experience (0.236) GB, Young Players (0.682)

Each of the two teams obtained a total score from the intensities.
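A sketch of how such totals are formed in the ratings mode (plain Python; the dictionary layout is illustrative, with the level assignments read from the GB/D markers in the costs list above): each team's score is the sum over criteria of the criterion weight times the priority of the intensity level at which the team was rated.

```python
# criterion: (weight, Green Bay's level priority, Dallas' level priority),
# taken from the Costs Intensity Priorities list above
costs = {
    "Road Ahead":             (0.153, 0.271, 0.271),
    "Not at Full Strength":   (0.226, 0.644, 0.644),
    "Playing Beyond Ability": (0.200, 0.094, 0.094),
    "Past Failures":          (0.074, 0.082, 0.236),
    "Mental State":           (0.012, 0.122, 0.230),
    "Cinderella Team":        (0.001, 0.082, 0.082),
    "Weather Sensitivity":    (0.168, 0.250, 0.095),
    "Dallas' Effect":         (0.165, 0.540, 0.540),
    "Green Bay's Effect":     (0.000, 0.637, 0.105),
    "Immature Players":       (0.000, 0.236, 0.082),
}
green_bay = sum(w * g for w, g, d in costs.values())
dallas    = sum(w * d for w, g, d in costs.values())
print(round(green_bay, 3), round(dallas, 3))   # weighted costs totals
```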
63. Illustrative Considerations in the Evaluation of the 1996 Dallas - Green Bay Game
For the Benefits Model:
- With respect to Green Bay, Quarterback is equally to moderately more important than Dallas. Here we are comparing an aspect of the Green Bay team to their opponent, Dallas. Effectively, we are asking ourselves which is more important to Green Bay's success: the fact that they have Brett Favre, or the fact that they are playing Dallas. The judgment was made that while Favre is an outstanding quarterback, the fact that he is facing Dallas may be enough to counteract his abilities.
- With respect to Dallas, the Road Ahead is strongly more important than Home Field Advantage. The Road Ahead refers to future games that the team may have to play if it continues on. Here, the relative ease of the road ahead for Dallas, based on the record of the AFC in the Super Bowl, causes it to be less important than the fact that Dallas is playing Green Bay, possibly its biggest obstacle to winning the Super Bowl, on its home turf.
- With respect to Dallas, Running Game is equally to moderately stronger than Quarterback. This judgment is based on the fact that while Dallas' quarterback is excellent, the team's Running Game is quite often the league's best.
- With respect to Dallas, Quarterback is strongly to very strongly more important than Coaching Inspiration. The basis for this is the fact that Barry Switzer has exhibited no great gift for inspiration; the team simply is full of talent, especially in the quarterback position.
64. For the Costs Model:
- With respect to Green Bay, Mental State is strongly more important than Weather Sensitivity, simply because Green Bay's Mental State could be more easily called into question (they may not be tough enough) than their Weather Sensitivity (they are very insensitive to poor weather conditions).
- With respect to Dallas, Mental State is moderately more important than Weather Sensitivity. While the team is not highly weather sensitive, their arrogant attitude causes us a bit of concern that it may be their undoing.
- With respect to Green Bay, Not at Full Strength is moderately more important than The Road Ahead. The basis for this is that Reggie White, a very important player on the team, is not 100%, and this is likely to have a larger impact than any AFC team that Green Bay might meet in the Super Bowl because, as we stated before, AFC teams do not traditionally pose a threat. Conversely, if we looked at an AFC matchup, the Road Ahead would in most cases have a large impact, due to the fact that the AFC teams are usually unsuccessful against the NFC teams in the Super Bowl.
- With respect to Green Bay, Dallas is strongly more important than Cinderella. This means that any Cinderella story that Green Bay may be enjoying is likely to be overshadowed by the fact that they are playing Dallas. While Green Bay is not widely considered to be a Cinderella, the label would have a larger effect on a team like the Indianapolis Colts when they played Kansas City.
- With respect to Dallas, Not at Full Strength is strongly more important than Immature Players. While Dallas has many veterans, its biggest problem in this comparison could be injuries to key players such as Charles Haley.
- With respect to Dallas, Past Failures are equally important as Play Beyond Ability. Not only is Dallas playing up to its potential, it has a few grave failures of the past to look back on.
- Now that we have looked at several examples of judgments, we can move on to the results of the model. The elements in the model are given weights based on our judgment. We can rate the teams using information that we have collected. For instance, if Green Bay's passing statistics are traditionally low against Dallas, Green Bay's likelihood of success against Dallas is compromised by the fact that the team relies heavily on that kind of play. We determined that passing is important to Green Bay in our judgments, and find that their passing suffers against Dallas in the statistical data that we collected.
Conclusion
It is our hope to use this model to forecast future Super Bowl competitions. Undoubtedly, there will be additional modifications. The basic ideas learned here can be used to forecast the outcome of other competitive games. It appears that the use of intangibles is significantly more important in the forecast than the strict accuracy of the statistics, although one cannot do without the statistics, which tell more about performance than about attitude and environment.
65. Prediction of the 1997 Australian Tennis Open
Two models were used to predict the matches for the top 16 ranked players in the tournament. In the first model, a feedback network modeled past performance. Here, we examined performances of players in previous tournaments. The factors and weights were then included in the second model.
In the second model, a hierarchy was developed to model the intensities that would be used in the ratings module to rate the players. Past Performance from the network model in the first stage was the first criterion added. Another two criteria, Technique and Conditioning, were also included.
Prediction:
7 of the top 8 players were correctly predicted to meet in the final rounds of the tournament, with the final between Sampras and Chang. In reality, the final was a match between Moya and Sampras, with the top seed winning. As Moya was ranked 58th in the world prior to the start of the tournament, he was not even included in the model.
67. Hong Kong Competes with Singapore, and Somewhat Less with Tokyo, as Financial Center in Asia in the 21st Century
Gang Hu (Tianjin), Chia-Shuan Huang (Taiwan), Hong Li (Beijing), Thomas Saaty (Pittsburgh), Torsten Schmidt (Germany), and Yu-Chan Wang (Taiwan).
68. The Purpose
The purpose of this project is to study the potential impact of the takeover of Hong Kong by China in 1997. The analysis focuses on the following questions:
• What set of criteria does an Asian location have to meet in order to be a Financial Center?
• Which city is the best candidate for the Financial Center in the Asia-Pacific region in 1996?
• What is the most likely policy of the Chinese government towards Hong Kong after 1997?
• What impact does the Chinese policy have on Hong Kong as a Financial Center?
• Which city is the most likely candidate to be the Financial Center in the Asia-Pacific region in the year 2000?
69. The Approach
These five questions are studied with the methodology and technique provided by a combination of the AHP and ANP. A dual-model approach was developed. The first model, the "Financial Center model", which is an ANP model, was used to examine the first two of the above questions. The second model, the "Mainland China Policy model", is an AHP model, used to focus on the third question above and to generate a policy package most likely to be adopted by the Chinese government. Based on changes in the political, economic, and social environments incurred by the estimated policy package, the "Financial Center model" was re-evaluated. The fourth and fifth questions above are thus answered.
The two models complement one another because:
1. The Financial Center model provides the relevant factors for a focused examination under the China model, in order to find the relevant factors which may be changed by the Chinese government, and
2. The China model provides a package of feasible (for mainland China) and likely policies to be adopted by the Chinese government after 1997. Based on the package of policies, a second evaluation of the Financial Center model was made in order to estimate the future status of Hong Kong as a Financial Center.
70. Influencing Factors
A).Economic-Benefits:
1. Geographic advantage.
2. Free flow of information.
3. Free flow of people.
4. Free flow of capital.
5. Internationalization.
6. Investment.
7. Educated workforce.
8. Convertible currencies.
9. Assistance from government.
11. Modern infrastructure.
12. Deregulated market.
B).Political-Benefits:
1. Efficient government.
2. Independent legal system.
3. Assistance from government.
4. Free flow of people.
5. Free flow of information.
C).Social-Benefits:
1. Free flow of people.
2. Free flow of information.
3. Educated workforce.
4. Open culture.
5. Internationalized language.
6. Availability of business professionals.
D).Economic-Costs:
1. Labor cost.
2. Corruption.
3. Protection from government.
4. Operating cost.
5. Tax.
E).Political-Costs:
1. Tax.
2. Corruption.
3. Protection from government.
F).Social-Costs:
1. Environment.
2. Corruption.
3. Protection from government.
G).Economic-Opportunities:
1. Investment.
2. Access to potential market.
3. Regional economic growth; membership in international organizations (GATT, WTO).
H).Political- Opportunities:
1. Political credit.
2. Investment.
3. Membership in international organizations (GATT, WTO).
I).Social-Opportunities:
1. Social wealth.
2. Access to potential market.
J).Political-Risks:
1. Political instability.
2. Instability of local government.
3. Political restriction.
K).Economic-Risks:
1. Instability of local financial market.
2. Inflation.
3. Competition from local business.
L).Social-Risks:
1. Industry resistance.
2. Public insecurity.
3. Instability of local society.
71. The Set of Four Control Hierarchies
Benefits Control Model
Opportunities Control Model
Costs Control Model
Risks Control Model
75. Results
There are twelve supermatrices associated with the
complete model. With each of these supermatrices
is associated a cluster priority matrix, a weighted
supermatrix, and a limiting supermatrix from which
the priorities of the three contending centers are
derived. These twelve sets of priorities are
weighted by the priorities of the corresponding
control criteria and summed to obtain the final
ranking.
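As a sketch of this synthesis step, the fragment below weights a few hypothetical sets of limiting priorities by hypothetical control-criterion priorities and sums them. All numbers and names here are illustrative assumptions, not the study's actual values.

```python
import numpy as np

# Hypothetical limiting priorities of the three cities under two of the
# control criteria (the full model has twelve such rows, one per
# criterion). Each row is a priority vector and sums to 1.
city_priorities = np.array([
    [0.41, 0.28, 0.31],   # e.g. under economic benefits
    [0.50, 0.15, 0.35],   # e.g. under political benefits
])

# Hypothetical priorities of the control criteria themselves (sum to 1).
criterion_weights = np.array([0.6, 0.4])

# Weight each priority vector by its control criterion and sum.
final_priorities = criterion_weights @ city_priorities
print(final_priorities)  # combined priorities: Hong Kong, Singapore, Tokyo
```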
76. The output from the first Financial Center model
We assume that the situation will remain the same after 1997; in other words, the mainland
Chinese government will adopt a set of feasible policies toward Hong Kong. The judgments were
made on this assumption. After synthesis, we obtained the results below:
Benefits        Economic   Political  Social
Hong Kong       0.4131     0.5034     0.4416
Singapore       0.2836     0.1503     0.2164
Tokyo           0.3033     0.3465     0.342

Opportunities   Economic   Political  Social
Hong Kong       0.3086     0.4278     0.4387
Singapore       0.2186     0.4387     0.2365
Tokyo           0.4728     0.1335     0.3248

Costs           Economic   Political  Social
Hong Kong       0.4096     0.3813     0.4511
Singapore       0.2874     0.2935     0.2227
Tokyo           0.303      0.3252     0.3261

Risks           Economic   Political  Social
Hong Kong       0.2393     0.1922     0.2519
Singapore       0.2804     0.3625     0.1803
Tokyo           0.4803     0.4453     0.5678

The overall result is listed below:

Alternatives    Rank   (B*O)/(C*R)
Hong Kong       1      1.6498
Singapore       2      0.906
Tokyo           3      0.5338
It is clear that Hong Kong has the highest priority, which means that if the mainland Chinese
government adopts all the policies described above toward Hong Kong, it will remain the
financial center of the Asia-Pacific region.
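As a worked illustration of the (B*O)/(C*R) synthesis, the sketch below combines hypothetical aggregated benefit, opportunity, cost, and risk priorities for each city; the inputs are assumptions for illustration, not the weighted values of the study.

```python
import numpy as np

# Hypothetical aggregated B, O, C, R priorities for the three cities,
# i.e. each city's scores after the economic, political, and social
# results are weighted and summed under each merit.
cities = ["Hong Kong", "Singapore", "Tokyo"]
B = np.array([0.45, 0.22, 0.33])  # benefits
O = np.array([0.40, 0.28, 0.32])  # opportunities
C = np.array([0.41, 0.27, 0.32])  # costs
R = np.array([0.22, 0.28, 0.50])  # risks

# Benefits and opportunities favor an alternative while costs and risks
# count against it, hence the ratio (B*O)/(C*R).
ratio = (B * O) / (C * R)
for city, r in sorted(zip(cities, ratio), key=lambda t: -t[1]):
    print(f"{city}: {r:.4f}")
```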
77. Likely Policies Followed by China
Affecting the Future of Hong Kong
About 50 potential Chinese policies were identified and
ranked in a hierarchy. The most likely policies were
identified and the network sub-models were re-assessed
given this information. The hierarchy and the policies are
shown next.
80. The output from the Chinese policies model
We picked 18 factors (in the Financial Center model) that are highly dependent on the
policies of the mainland Chinese government. Each factor was divided into three situations
(positive +, neutral 0, negative -), denoting the different possible Chinese policies toward it.
These were then put into the China government model (an absolute hierarchy model with four
sub-models: benefits, costs, opportunities, and risks). After synthesis, we obtained an overall
score for each policy. Based on the scores, we drew up the optimal and most likely policy
package (shown below).
Optimal and most likely policies:
1. free flow of information 0
2. free flow of people 0
3. educated workforce +
4. convertible currency +
5. deregulated market 0
6. assistance from government +
7. inflation +
8. independent legal system 0
9. political restrictions 0
10. instability of local society +
11. availability of business professionals +
12. public insecurity +
13. corruption +
14. tax +
15. protectionist barrier 0
16. investment +
17. political credit +
18. access to potential market +
81. The output from the second Financial Center model
Based on the optimal and most likely policy package, we made another set of judgments
for the Financial Center model. This gives, under the assumptions we have made, an estimate of
the location of the financial center in the Asia-Pacific region. The results are listed below:
Benefits        Economic   Political  Social
Hong Kong       0.3814     0.4435     0.4238
Singapore       0.2992     0.2187     0.2306
Tokyo           0.3194     0.3378     0.3456

Opportunities   Economic   Political  Social
Hong Kong       0.4278     0.4278     0.4387
Singapore       0.4387     0.4387     0.2365
Tokyo           0.1335     0.1335     0.3248

Costs           Economic   Political  Social
Hong Kong       0.4096     0.3813     0.4511
Singapore       0.2874     0.2935     0.2227
Tokyo           0.303      0.3252     0.3261

Risks           Economic   Political  Social
Hong Kong       0.2966     0.2814     0.3045
Singapore       0.2561     0.3161     0.1657
Tokyo           0.4475     0.4025     0.5298

The overall result is as below:

Alternatives    Rank   (B*O)/(C*R)
Hong Kong       1      1.1822
Singapore       2      1.1093
Tokyo           3      0.5949
We can see that Hong Kong can still maintain its financial center status after 1997, but
the gap between Hong Kong and the other cities is much smaller. In particular, Singapore
becomes very competitive.
82. Original Economic Benefits Sub-Model Supermatrix
(Truncated to save space)
Economic Benefits (Local):

                              Sing     Toky     Hong     assi     free     conv     mode     good     dere
Singapore                     0        0        0        0.3196   0.1692   0.1396   0.1692   0.3333   0.2081
Tokyo                         0        0        0        0.122    0.4434   0.5278   0.4434   0.3333   0.1311
Hong Kong                     0        0        0        0.5584   0.3874   0.3325   0.3874   0.3333   0.6608
assistance from government    0.0538   0.0459   0.0501   0        0        0        0        0        0
free flow of people           0.044    0.0526   0.0369   0        0        0        0        0        0
convertible currency          0.0379   0.0796   0.0625   0        0        0        0        0        0
modern infrastructure         0.0843   0.189    0.0925   0        0        0        0        0        0
good auditing systems         0.105    0.0801   0.0619   0        0        0        0        0        0
deregulated market            0.0367   0.0296   0.1166   0        0        0        0        0        0
geographic advantages         0.1713   0.1599   0.1593   0        0        0        0        0        0
free flow of information      0.0168   0.0837   0.0594   0        0        0        0        0        0
free flow of capital          0.1389   0.0917   0.0928   0        0        0        0        0        0
educated workforce            0.0602   0.0996   0.0537   0        0        0        0        0        0
internationalized language    0.0961   0.0283   0.0555   0        0        0        0        0        0
investment from outside       0.1551   0.06     0.1588   0        0        0        0        0        0
83. Weighted Economic Benefits Sub-Model Supermatrix
(Truncated to save space)
Economic Benefits (Weighted):

                              Sing     Toky     Hong     assi     free     conv     mode     good     dere
Singapore                     0        0        0        0.3196   0.1692   0.1396   0.1692   0.3333   0.2081
Tokyo                         0        0        0        0.122    0.4434   0.5279   0.4434   0.3333   0.1311
Hong Kong                     0        0        0        0.5584   0.3874   0.3325   0.3874   0.3333   0.6608
assistance from government    0.0538   0.0459   0.0501   0        0        0        0        0        0
free flow of people           0.044    0.0526   0.0369   0        0        0        0        0        0
convertible currency          0.0379   0.0796   0.0625   0        0        0        0        0        0
modern infrastructure         0.0843   0.189    0.0925   0        0        0        0        0        0
good auditing systems         0.105    0.0801   0.0619   0        0        0        0        0        0
deregulated market            0.0367   0.0296   0.1166   0        0        0        0        0        0
geographic advantages         0.1713   0.1599   0.1593   0        0        0        0        0        0
free flow of information      0.0168   0.0837   0.0594   0        0        0        0        0        0
free flow of capital          0.1389   0.0917   0.0928   0        0        0        0        0        0
educated workforce            0.0602   0.0996   0.0537   0        0        0        0        0        0
internationalized language    0.0961   0.0283   0.0555   0        0        0        0        0        0
investment from outside       0.1551   0.06     0.1588   0        0        0        0        0        0
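Between the original and the weighted supermatrix sits the cluster-weighting step described earlier: each column's blocks are scaled by cluster priorities so the matrix becomes column stochastic. The sketch below shows one way this could be done; the matrix, block structure, and weights are made-up assumptions. Note that, as in this sub-model, a column whose nonzero entries lie in a single cluster block is unchanged once the column is renormalized.

```python
import numpy as np

def weight_supermatrix(W, blocks, cluster_weights):
    # blocks: index arrays, one per cluster; cluster_weights[i][j] is the
    # priority of row-cluster i with respect to column-cluster j.
    Ww = W.copy()
    for j, cols in enumerate(blocks):
        for i, rows in enumerate(blocks):
            Ww[np.ix_(rows, cols)] *= cluster_weights[i][j]
    return Ww / Ww.sum(axis=0, keepdims=True)  # columns sum to 1

# Made-up 4x4 supermatrix: cluster {0,1} = alternatives, {2,3} = factors.
W = np.array([
    [0.0, 0.0, 0.6, 0.3],
    [0.0, 0.0, 0.4, 0.7],
    [0.5, 0.2, 0.0, 0.0],
    [0.5, 0.8, 0.0, 0.0],
])
blocks = [np.array([0, 1]), np.array([2, 3])]
weights = [[0.0, 1.0], [1.0, 0.0]]  # only cross-cluster influence here
print(weight_supermatrix(W, blocks, weights))
```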
84. Limiting Economic Benefits Sub-Model Supermatrix
(Truncated to save space)
Economic Benefits (Synthesized: Global): every column of the limiting supermatrix is identical,
so the common column is shown once.

Singapore                     0.2836
Tokyo                         0.3033
Hong Kong                     0.4131
assistance from government    0.0499
free flow of people           0.0437
convertible currency          0.0607
modern infrastructure         0.1194
good auditing systems         0.0796
deregulated market            0.0676
geographic advantages         0.1629
free flow of information      0.0547
free flow of capital          0.1055
educated workforce            0.0695
internationalized language    0.0588
investment from outside       0.1278
85. Normalized by Cluster - Results from Limiting
Economic Benefits Sub-Model Supermatrix
Economic Benefits (Synthesized Local):

Singapore                     0.2836
Tokyo                         0.3033
Hong Kong                     0.4131
assistance from government    0.0499
free flow of people           0.0437
convertible currency          0.0607
modern infrastructure         0.1194
good auditing systems         0.0796
deregulated market            0.0675
geographic advantages         0.1629
free flow of information      0.0547
free flow of capital          0.1055
educated workforce            0.0695
internationalized language    0.0588
investment from outside       0.1278
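As a sketch of how such limiting, cluster-normalized priorities can be computed, the fragment below raises a small made-up weighted supermatrix to a high power and normalizes each cluster block of the resulting common column. The 2-alternative, 2-factor matrix is an illustrative assumption, not the sub-model above.

```python
import numpy as np

# Made-up primitive column-stochastic supermatrix with two clusters:
# rows/cols 0-1 are alternatives, rows/cols 2-3 are factors. Some inner
# dependence (a nonzero diagonal block) keeps the matrix primitive.
W = np.array([
    [0.1, 0.0, 0.6, 0.3],
    [0.0, 0.1, 0.4, 0.7],
    [0.4, 0.2, 0.0, 0.0],
    [0.5, 0.7, 0.0, 0.0],
])

# Raise to a high power to capture all interactions and feedback; every
# column of the limit is then the same.
col = np.linalg.matrix_power(W, 200)[:, 0]

# Normalize within each cluster block to get the local priorities.
alts, factors = col[:2], col[2:]
print(alts / alts.sum())        # alternative priorities
print(factors / factors.sum())  # factor priorities
```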
86. The Result
• The first result from the Financial Center model:
If the Chinese government is able to maintain the current status of Hong Kong, Hong Kong
would still be the Financial Center in the Asia-Pacific region in 2000.
• The first result from the Mainland China Policy model:
In the interests of the mainland Chinese government, no negative policy should be adopted toward
Hong Kong after 1997. A careful and sensitive approach to future Hong Kong policy is
suggested by this result, which is also reinforced by the next result.
• The second result from the Financial Center model:
Although Hong Kong may still be the best choice for a Financial Center, Singapore will become
a strong competitor for the Center in 2000.
87. Conclusions
1) Based on the first output of our Financial Center model, we can see that if all conditions remain the
same, that is, if China adopts positive policies toward Hong Kong and the Chinese government is able to
maintain or even improve the current status of Hong Kong, it is very likely that Hong Kong will remain
one of the important financial centers in the Asia-Pacific region.
2) Among the factors influencing financial center status, many depend directly on the government's
policies. Therefore, Hong Kong's future as a financial center is highly dependent on the political
attitude of the Chinese government.
3) Based on the result of our mainland China policy model, we found that, among the 18 factors, the
Chinese government should adopt positive policies on 12 of them and mixed policies on 6 of them. In
other words, in the interests of China itself (not Hong Kong), China should avoid implementing
negative policies, as defined in this study, toward Hong Kong as a financial center.
4) Based on the second output of the Financial Center model, Hong Kong will maintain its financial
center status after 1997, but at the same time Singapore will become very competitive. Our conclusion
is therefore that if the Chinese government adopts rational policies toward Hong Kong as estimated in
this study, Hong Kong will remain the number one financial center of the Asia-Pacific region, although
its position will be weakened. If any negative policies are implemented, Singapore will become the
number one financial center of the Asia-Pacific region, followed by Hong Kong.
88. Feedback Measurement as the Limiting Power of the Supermatrix
The eigenvectors of the paired comparison matrices are each part of a column of the supermatrix. The supermatrix may not be column stochastic. Its
column blocks would be weighted by the priorities of the clusters to render the matrix stochastic. The supermatrix must now be raised to powers to
capture all the interactions and feedback among its elements. What is desired is its limiting power

$\lim_{k \to \infty} W^k$

The power of a matrix is a function of that matrix. Entire functions (whose series expansion converges for all values) of a matrix can be represented by the formula

$W^k = \sum_{i=1}^{n} \lambda_i^k \prod_{j \neq i} \frac{\lambda_j I - W}{\lambda_j - \lambda_i}$

if the eigenvalues are distinct, or, if they are not, by

$W^k = \sum_{i=1}^{m} \frac{1}{(m_i - 1)!} \, \frac{d^{\,m_i - 1}}{d\lambda^{\,m_i - 1}} \left[ \lambda^k \,(\lambda I - W)^{-1} \, \frac{\prod_{j=1}^{n} (\lambda - \lambda_j)}{\prod_{j = m_i + 1}^{n} (\lambda - \lambda_j)} \right]_{\lambda = \lambda_i}$

One is the largest eigenvalue of a stochastic matrix. This follows from

$\lambda_{\max} \leq \max_j \sum_{i=1}^{n} a_{ij}$

and the sum on the right is equal to one for a column stochastic matrix. It is obvious that the moduli of the remaining eigenvalues of a stochastic
matrix are less than or equal to one.
One is a simple eigenvalue if the matrix is positive. It can be a multiple eigenvalue, or there may be other eigenvalues whose moduli are equal to one,
if there is a sufficient number of zeros in the matrix so that it is reducible. When the supermatrix has some zero entries, it may be that some power of
it is positive; the matrix then remains positive for still larger powers and is called primitive.
One is a simple eigenvalue of a primitive matrix. Whether the matrix is primitive or not, one may be a simple or a multiple eigenvalue with no other
eigenvalues of modulus one, and the powers take on a certain form in the limit in each of these cases. If, on the other hand, there are other roots whose
modulus is one, the powers of the supermatrix cycle with a period equal to the cyclicity, and the limit is given by the same expression in all three
cases, namely when the supermatrix is imprimitive or when one is a simple or a multiple eigenvalue.
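A minimal numeric sketch of this limiting power, assuming a small primitive column-stochastic matrix whose values are made up for illustration:

```python
import numpy as np

# Made-up 3x3 primitive (all entries positive) column-stochastic matrix.
W = np.array([
    [0.2, 0.5, 0.3],
    [0.3, 0.1, 0.4],
    [0.5, 0.4, 0.3],
])

# For a primitive stochastic matrix the powers converge; every column of
# the limit is the principal eigenvector, i.e. the vector of priorities.
W_inf = np.linalg.matrix_power(W, 100)
print(W_inf)  # all columns identical in the limit
```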
89. Irreducible Stochastic (λ = 1 is a simple root)

Case A: no other roots with modulus equal to one (primitive; an irreducible matrix with positive trace is primitive). Raise W to powers: all columns of the limit are the same, and any column can also be obtained as the solution of the eigenvalue problem $Ww = w$.

Case A′: other roots with modulus equal to one (imprimitive with cyclicity $c \geq 2$):

$W^\infty = \frac{1}{c} (I - W^c)(I - W)^{-1} (W^c)^\infty$

where $(I - W^c)(I - W)^{-1}$ is understood as the polynomial $I + W + \cdots + W^{c-1}$.
90. Reducible Stochastic

Case B: $\lambda = 1$ simple, no other roots with modulus equal to one:

$W^\infty = \lim_{\lambda \to 1} \frac{\Delta(\lambda)\,(\lambda I - W)^{-1}}{\Delta'(1)} = \frac{\mathrm{Adjoint}(I - W)}{\Delta'(1)}$

that is, the adjoint of $(I - W)$ normalized by $\Delta'(1)$, where $\Delta(\lambda) = \det(\lambda I - W)$.

Case C: $\lambda = 1$ a multiple root of multiplicity $n_1$:

$W^\infty = n_1 \sum_{k=0}^{n_1} (-1)^k \, \frac{n_1! \, \Delta^{(n_1 - k)}(\lambda)}{(n_1 - k)! \, \Delta^{(n_1)}(\lambda)} \, (\lambda I - W)^{-k-1} \Big|_{\lambda = 1}$

Cases B′ and C′: other roots with modulus equal to one (cyclic with cyclicity $c \geq 2$):

$W^\infty = \frac{1}{c} (I - W^c)(I - W)^{-1} (W^c)^\infty$

The desired outcome for Case C can often be obtained by introducing loops at all sinks and raising the
matrix to limiting powers.
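A small sketch of the cyclic case, assuming a made-up 2-cyclic (imprimitive) stochastic matrix. The factor $(I - W^c)(I - W)^{-1}$ in the expression above is the polynomial $I + W + \cdots + W^{c-1}$, so the code computes that sum directly.

```python
import numpy as np

# Made-up 2-cyclic column-stochastic matrix: its powers oscillate with
# period c = 2 instead of converging.
W = np.array([
    [0.0, 0.0, 0.7, 0.4],
    [0.0, 0.0, 0.3, 0.6],
    [0.5, 0.2, 0.0, 0.0],
    [0.5, 0.8, 0.0, 0.0],
])
c = 2

# (1/c)(I + W + ... + W^(c-1)) (W^c)^infinity: W^c converges under
# powering, and the averaged period sum gives the Cesaro limit of W^k.
period_sum = sum(np.linalg.matrix_power(W, k) for k in range(c))
Wc_inf = np.linalg.matrix_power(np.linalg.matrix_power(W, c), 100)
W_inf = (1.0 / c) * period_sum @ Wc_inf
print(W_inf)
```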
91. Computationally, the foregoing classification may be simplified along the following lines. Define $\lambda_{\max} = \lambda_1 = 1$. We have:

(1) Proper: $\lambda_1 = 1$ a simple root and $|\lambda_i| < 1$ for $i > 1$. A primitive stochastic matrix is proper; it is fully regular when the index $k = 1$ in the diagonal primitive block matrices of the normal form. Then

$W^\infty = \lim_{\lambda \to 1} \frac{\Delta(\lambda)\,(\lambda I - W)^{-1}}{\Delta'(1)} = \frac{\mathrm{Adjoint}(I - W)}{\Delta'(1)}$

Normalize the columns of the adjoint to get $W^\infty$. When W is primitive one can simply raise W to very large powers on a personal computer.

(2) Improper: $|\lambda_i| \leq 1$ for several $i$, with roots of unity of cyclicity $c$. Then

$W^\infty = \frac{1}{c} (I - W^c)(I - W)^{-1} (W^c)^\infty$

(3) $\lambda_1 = 1$ a multiple root of multiplicity $n_1$, which occurs if and only if the matrices $A_1, \ldots, A_k$ in the upper part of the diagonal of the normal form are primitive. Then

$W^\infty = n_1 \sum_{k=0}^{n_1} (-1)^k \, \frac{n_1! \, \Delta^{(n_1 - k)}(\lambda)}{(n_1 - k)! \, \Delta^{(n_1)}(\lambda)} \, (\lambda I - W)^{-k-1} \Big|_{\lambda = 1}$

(4) $\lambda_1 = 1$ multiple with other roots of modulus one of cyclicity $c$. Then

$W^\infty = \frac{1}{c} (I - W^c)(I - W)^{-1} (W^c)^\infty$
A simple practical rule for obtaining the limiting matrix of a given n-by-n nonnegative stochastic
supermatrix W is first to test it for irreducibility with the condition $(I + W)^{n-1} > 0$. If it is irreducible, then
$\lambda_{\max} = 1$ is simple and one of the two corresponding formulas applies. It is then tested for cyclicity and the answer is
obtained using the above.
When alternatives do not feed back into the criteria, it is best not to include them in the supermatrix. The
reason is that if the supermatrix cycles, the average value would first have to be calculated; the
average weights of the criteria are then used to weight the alternatives in a separate hierarchy.
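A minimal sketch of this irreducibility test, assuming numpy; the 3x3 supermatrix is a made-up example:

```python
import numpy as np

def is_irreducible(W):
    # Test irreducibility of a nonnegative n x n matrix W via the
    # condition (I + W)^(n-1) > 0 from the rule above.
    n = W.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + W, n - 1)
    return bool(np.all(M > 0))

# Made-up column-stochastic supermatrix for illustration.
W = np.array([
    [0.0, 0.5, 0.3],
    [0.6, 0.0, 0.7],
    [0.4, 0.5, 0.0],
])
print(is_irreducible(W))  # True: lambda_max = 1 is then a simple eigenvalue
```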
92. ANP PROJECTS
1996 United States Presidential Election
A Day at the Races: Predicting a Harness Race at the Meadows: An Application of the ANP
A Prediction of Opportunities for Job Growth by U.S. Region
Alternative Fuels for Automobiles
An ANP Approach for Commodity Markets Demand/Supply Ratio Model
Analysis of the Market for 32-Bit Operating Systems
Bridge Management Decision
Choosing the Best Location for Permanent Storage of High-Level Nuclear Waste
Commodity Markets
Convocation Center
Corporate Market Value in the Computer Industry
Corporate Restructuring at Chrysler
Corporate Strategies for Competitors
Crime and Punishment
Disney America: Should Disney Build a Theme Park?
Given $10 Million, What Would be the Best Allocation to Each of the Proposed Programs that
Contribute to Decreasing Gang Activity?
Health Insurance Systems
How to Implement Flex Time
Justify the Existence of the Economic Black Market
Lake Levels and Flow Releases
Management Consulting Model
Market Share Predictions for Aqueous Intra-Nasal Steroids
Medical Center: Strategic Planning with the ANP
Mergers and Acquisitions
Mode of Transportation to School
Modeling Reservoir Operations for Managing Ecological Interests
Multi-Objective Decision Making Analysis with Engineering and Business Applications
93. ANP PROJECTS cont'd
NBA Playoffs for 1991
Net Dollar Value for IBM, Apple, Intel and Microsoft
Network Analysis of Illegal Drug Marketing in the United States
Planning Strategies for Incubator Space using the ANP
Predicting the Outcome of Legislative Debate over Superfund Reauthorization
Predicting the Winner of the 1995-1996 NHL Stanley Cup
Predicting the Winner of the 1996 Chase Championship with the ANP
Prediction of 1997 Australian Tennis Open
Prediction of the 1997 Wimbledon Tennis Championships
Prediction of the CPU Market
Prioritizing Flow Alternatives for Social Objectives
Ranking Countries in Telecommunications as a Subset of Locating a Business Problem
Stadium Placement and Optimal Funding
Strategic Staffing – Extra Care Providers
Strategies for Improvement at the Joseph M. Katz Graduate School of Business
Teenage Pregnancy
Telecommunications Network Design and Performance
The Decision to Market Nimbex (new drug) vs. Continuing to Market Tracrium (old drug)
The Emerging Information Technologies of the Future: The “Prize” of Firms and Industries
The Future of East Central Europe
The Future of Major League Baseball in Pittsburgh: Strategic Planning with the ANP
The Future of the University of Pittsburgh’s Medical Center
The Middle East
The Optimal MBA Program Structure
The Teenage Smoking Problem
Transportation to Work
Understanding the Tiananmen Massacre in China
What will be the worth? (Predicting Average Starting Salaries for MBA Graduates)
Where to Invest in Capital Markets