Experimental comparison of ranking techniques

Existing research in the field of MCDA ranking problems has focused mainly on the development of appropriate methodologies for supporting the decision-making process in multicriteria ranking problems. At the practical level, the use of MCDA ranking techniques in real-world ranking problems has demonstrated the capabilities that this approach offers to decision makers.

Nevertheless, the practical implementation of any scientific development is always the last stage of a research effort. Before this stage, experiments need to be performed in a laboratory environment, under controlled data conditions, in order to investigate the basic features of the scientific development under consideration. Such an investigation and the corresponding experimental analysis enable the derivation of useful conclusions on the potential that the proposed research has in practice and on the problems that may be encountered during its practical implementation (Doumpos and Zopounidis, 2002).

Within the field of MCDA, experimental studies are rather limited. Some MCDA researchers have conducted experiments to investigate the features and peculiarities of MCDA ranking and choice methodologies (Stewart, 1993, 1996; Carmone et al., 1997; Zanakis et al., 1998). Comparative studies involving MCDA ranking techniques have been heavily oriented towards the AHP technique (Triantaphyllou, 2000).

The present paper follows this line of research to investigate the ranking performance of the MOEA procedure presented in section X, as opposed to another widely used ranking method, namely the NFR, which is presented in section X.
The investigation is based on an extensive simulation experiment.

The considered methods

Any study investigating the ranking performance of a new methodology relative to other techniques should consider techniques that are well established among researchers and that rely on different underlying assumptions and functionality. On the basis of these remarks, the experimental investigation of the ranking performance of the MOEA procedure considers the ranking method NFR.

The NFR is among the most widely used ranking methods. Despite its shortcomings, it is still almost always used today in the exploitation phase of the PROMETHEE II and ELECTRE III methods. The MOEA procedure has been developed as an alternative to the NFR, following an evolutionary algorithms approach.
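As an illustration, the NFR can be sketched as follows. The sketch assumes a valued outranking relation given as an n-by-n matrix S, where S[i][j] expresses the degree to which alternative i outranks alternative j; the function name and the numerical values are illustrative only.

```python
def net_flow_ranking(S):
    """Rank alternatives by the net flow rule (NFR).

    S is an n x n matrix; S[i][j] is the degree to which
    alternative i outranks alternative j (diagonal ignored).
    Returns the alternative indices ordered from best to worst.
    """
    n = len(S)
    # Net flow of alternative i: outgoing flows minus incoming flows.
    score = [sum(S[i][j] for j in range(n) if j != i)
             - sum(S[j][i] for j in range(n) if j != i)
             for i in range(n)]
    # A higher net flow yields a better rank position.
    return sorted(range(n), key=lambda i: score[i], reverse=True)

# Example with three alternatives: net flows are 1.2, -1.0, and -0.2,
# so the ranking is alternative 0, then 2, then 1.
S = [[0.0, 0.9, 0.7],
     [0.1, 0.0, 0.6],
     [0.3, 0.8, 0.0]]
print(net_flow_ranking(S))
```

The rule thus reduces the pairwise outranking information to a single score per alternative, which is precisely the step taken in the exploitation phase of PROMETHEE II.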
Experimental design

The factors

The comparison of the MOEA procedure to the NFR method is performed through an extensive simulation. The simulation approach provides a framework to conduct the comparison under several data conditions and to derive useful conclusions on the relative performance of the considered methods given the features and properties of the data. The term performance refers solely to the ranking accuracy of the methods.

The experiment presented in this paper is concerned only with the investigation of the ranking accuracy of ranking methods under experimental data conditions. In particular, the conducted experimental study investigates the performance of the methods on the basis of the following two factors:

F1: Ranking procedures
F2: Size of the ranking problems (cardinality of the set of decision alternatives)

Table XXX presents the levels considered for each factor in the simulation experiment.

Factors                                          Levels
F1: Ranking procedures                           1. NFR   2. MOEA procedure
F2: Size of the multicriteria ranking problems   1. 6   2. 8   3. 10   4. 12   5. 18

Table XXX. Factors investigated in the experimental design

The methods defined by factor F1 are compared (in terms of their ranking accuracy) under the different data conditions defined by factor F2. Factor F2 defines the size of the reference set (training sample), i.e. the number of decision alternatives that it includes. The factor has five levels, corresponding to 6, 8, 10, 12, and 18 alternatives. Generally, small training samples contain limited information about the ranking problem being examined, but the corresponding complexity of the problem is also limited. On the other hand, larger samples provide richer information, but they also lead to increased complexity of the problem. Thus, the examination of five levels for this factor enables the investigation of the performance of the ranking procedures under all these cases.
This specification enables the derivation of useful conclusions on the performance of the methods in a wide range of situations that are often met in practice, since many real-world ranking problems involve numbers of decision alternatives in this range.

Data generation procedure
An important aspect of the experimental comparison is the generation of data having the required properties defined by the factors described in the previous subsection. In this study we propose a methodology for the generation of the data; its general outline is presented in Appendix A. The outcome of this methodology is a matrix and a vector consisting of a value outranking relation and the associated ranking of alternatives, which is consistent with the value outranking relation in terms of the test criterion of section 3.

This experiment is repeated 5,000 times for each level of factor F2 (5 levels). Overall, 25,000 reference sets (value outranking relation, ranking) are considered. Each reference set is used to derive a ranking through the methods specified by factor F1 (cf. Table XXX). The derived ranking is then compared to the ranking of the reference set in order to test the generalizing ranking performance of each method.

The simulation was conducted on a PC with an Intel® Core™ 2 Duo processor (2.20 GHz). Computer programs were written in the Visual .NET programming environment in order to generate the simulated value outranking relations and rankings.

Analysis of results

The results obtained from the simulation experiment involve the ranking error rates of the methods in the reference sets. The analysis that follows focuses on the ranking performance of the methods. The error rates obtained using the reference sets provide an estimate of the generalizing performance of the methods, measuring their ability to provide correct recommendations on the ranking of alternatives.

A first important note on the obtained results is that the main effects of factors F1 and F2 are all statistically significant.
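As an illustration of how such error rates can be obtained, the disagreement between a ranking derived by a method and the "correct" ranking of a reference set can be measured, for example, by the fraction of discordant alternative pairs. This is a hypothetical sketch; the exact error measure used in the experiment follows the test criterion of section 3, which is not reproduced here.

```python
from itertools import combinations

def ranking_error(derived, correct):
    """Fraction of alternative pairs ordered differently in the two
    rankings (a normalized Kendall-type distance).

    Both arguments list the alternatives from best to worst.
    """
    # Position of each alternative in each ranking.
    pos_d = {a: i for i, a in enumerate(derived)}
    pos_c = {a: i for i, a in enumerate(correct)}
    pairs = list(combinations(correct, 2))
    # A pair is discordant when the two rankings order it oppositely.
    discordant = sum(
        1 for a, b in pairs
        if (pos_d[a] - pos_d[b]) * (pos_c[a] - pos_c[b]) < 0
    )
    return discordant / len(pairs)

# Example with four alternatives: one discordant pair out of six.
print(ranking_error(["a", "b", "d", "c"], ["a", "b", "c", "d"]))
```

Averaging such a per-ranking error over all simulated reference sets of a given size yields an error rate of the kind analysed below.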
The significance of the main effects shows that both factors have a major impact on the ranking performance of the methods.

With respect to (i) the percentage of times the two approaches (NFR and MOEA) yielded a different indication of the best two and three alternatives, and (ii) the number of times the rankings derived from the NFR and the MOEA differed from the "correct" ranking, the MOEA procedure provides the best results; that is, it achieves significantly lower error rates. When the differences are expressed in terms of ranking discrepancies, i.e. the number of times one method is better than the other, the MOEA procedure again provides considerably better results than the NFR.

The interaction found significant in this experiment for explaining the differences in the performance of the methods involves the size of the reference set. The results of Table XXX
show that increasing the size of the reference set (the number of alternatives) reduces the performance of both methods. This is an expected result, since in this experiment larger reference sets are associated with an increased complexity of the ranking problem. The method most sensitive to the size of the reference set is the NFR. Nevertheless, it should be noted that, irrespective of the reference set size, the considered MOEA procedure always performs better than the NFR method.

Summary of Major Findings

The experiment presented in this paper provides useful results regarding the efficiency of an MCDA ranking method compared to another well-established MCDA ranking method. Additionally, the extensive experiment facilitated the investigation of the relative ranking performance of the two methods for a variety of data sizes.

Overall, the main findings of the experimental analysis presented in this paper can be summarized in the following points:

1. The considered MCDA ranking method, the MOEA procedure, can be considered an efficient alternative to the widely used NFR, at least in cases where the assumptions of these techniques are not met in the data under consideration. Furthermore, the MOEA procedure appears to be quite effective compared to other ranking methods, although in this analysis only the NFR method was considered. Therefore, the obtained results regarding the comparison of the MOEA procedure with other MCDA ranking methods should be further extended by considering a wider range of methods, such as the min in favour rule (Bouyssou) and the extension of the prudence principle (working paper by Dias and Lamboray). The results of Table XXX show that the MOEA procedure outperforms the NFR method in all cases. The high efficiency of the considered MCDA ranking method is also illustrated in the results presented in Table YYY.
The analysis of Table XXX shows that the MOEA procedure provides the lowest error rate in all cases. The results of Tables XXX and YYY lead to the conclusion that the modeling framework of the MOEA procedure is more efficient in addressing ranking problems than that of the NFR.

2. The test criterion proposed for evaluating ranking procedures, together with the procedure proposed for generating value outranking relations and the associated rankings in accordance with this criterion, seems to be well suited to the study of ranking problems. Extending this procedure to the generation of more general value outranking relations would contribute to a more complete analysis of a ranking method, as it would enable the modeling of incomparability and intransitivity among pairs of alternatives. Modeling such cases within an experimental study would be an interesting
further extension of this analysis, as it would provide a better view of the impact of the test criterion on the ranking methods. The experimental analysis presented in this paper did not address this issue; instead, the focal point of interest was the investigation of the ranking performance of the NFR and the MOEA procedure. The obtained results can be considered encouraging for the MOEA procedure, and they provide the basis for further analysis along the lines of the above remarks.