Discrete Optimization

Developing effective meta-heuristics for a probabilistic location model via experimental design

Hari K. Rajagopalan a, F. Elizabeth Vergara a, Cem Saydam a,*, Jing Xiao b

a Business Information Systems and Operations Management Department, The Belk College of Business, The University of North Carolina at Charlotte, Charlotte, NC 28223, United States
b Computer Science Department, College of Information Technology, The University of North Carolina at Charlotte, Charlotte, NC 28223, United States

Received 7 January 2005; accepted 9 November 2005
Available online 26 January 2006

Abstract

This article employs a statistical experimental design to guide and evaluate the development of four meta-heuristics applied to a probabilistic location model. The meta-heuristics evaluated include evolutionary algorithm, tabu search, simulated annealing, and a hybridized hill-climbing algorithm. Comparative results are analyzed using ANOVA. Our findings show that all four implementations produce high quality solutions. In particular, it was found that on average tabu search and simulated annealing find their best solutions in the least amount of time, with relatively small variability. This is especially important for large-size problems when dynamic redeployment is required.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Meta-heuristics; Experimental design; Location

1. Introduction

The goal of emergency medical services (EMS) is to reduce mortality, disability, and suffering in persons [9,21]. EMS administrators and managers often face the difficult task of locating a limited number of ambulances in a way that will yield the best service to a constituent population. A widely used EMS system performance metric is the response time to incidents. Typically, calls originating from a population center are assumed to be covered if they can be reached within a time threshold.
This notion of coverage has been widely accepted and is written into the EMS Act of 1973, which requires that in urban areas 95% of requests be reached in 10 minutes, and in rural areas, calls should be reached in 30 minutes or less [3].

European Journal of Operational Research 177 (2007) 83–101
doi:10.1016/j.ejor.2005.11.007
0377-2217/$ - see front matter © 2005 Elsevier B.V. All rights reserved.
* Corresponding author. Tel.: +1 704 687 2047; fax: +1 704 687 6330.
E-mail addresses: (H.K. Rajagopalan), (F.E. Vergara), (C. Saydam), (J. Xiao).
The literature on ambulance location problems is rich and diverse. Most early models tended to be deterministic in nature and somewhat less realistic, in part due to computational limitations faced by the researchers at that time. Recent advances both in hardware and software, and more importantly in meta-heuristics, as well as the availability of complex and efficient solvers, have enabled researchers to develop more realistic and sophisticated models. A recent review by Brotcorne et al. is an excellent source for readers to trace the developments in this domain [9]. Also, reviews by Owen and Daskin [30], and Schilling et al. [36] examine and classify earlier models.

Probabilistic location models acknowledge the possibility that a given ambulance may not be available, and therefore model this uncertainty either using a queuing framework or via a mathematical programming approach, albeit with some simplifying assumptions. Probabilistic location models built upon a mathematical programming framework have evolved from two classical deterministic location models. The first model is a coverage model, known as the set covering location problem (SCLP), developed by Toregas et al. [37]. The objective of this model is to minimize the number of facilities required to cover a set of demand points within a given response-time standard. In this model, all response units (ambulances) are assumed to be available at all times and all demand points are treated equally. The second model is the maximal covering location problem (MCLP) by Church and ReVelle [11]. The MCLP represents an important advance in that it acknowledges the reality of limited resources available to EMS managers by fixing the number of response units while attempting to maximize the population covered within the response-time standard.
Various extensions of these deterministic models have also been developed [9,11,31,35]. In contrast to previous deterministic models, Daskin's maximum expected coverage location problem (MEXCLP) was among the first to explicitly incorporate the probabilistic dimension of the ambulance location model [14]. Daskin removed the implicit assumption that all units are available at all times by incorporating a system-wide unit busy probability into an otherwise deterministic location model. Given a predetermined number of response units, MEXCLP maximizes the expected coverage subject to a response-time standard or, equivalently, a distance threshold. Daskin also designed an efficient vertex substitution heuristic in order to solve the MEXCLP. This heuristic works well for small to medium sized problems. However, a comparative study conducted by Aytug and Saydam found that for large-scale applications genetic algorithm implementations provided faster and better results, although not sufficiently fast for real-time dynamic redeployment applications [1].

In their 1989 article, ReVelle and Hogan [31] suggest two important probabilistic formulations known as the maximum availability location problems I and II (MALP I & II). Both models distribute a fixed number of response units in order to maximize the population covered with a response unit available within the response-time standard with a predetermined reliability. In MALP I, a system-wide busy probability is computed for all units, which is similar to MEXCLP, while in MALP II the region is divided into neighborhoods and local busy fractions for response units in each neighborhood are computed, assuming that the immediate area of interest is isolated from the rest of the region. Recently, Galvao et al. [16] presented a unified view of MEXCLP and MALP by dropping their simplifying assumptions and using Larson's hypercube model [25] to compute response-unit-specific busy probabilities.
They demonstrated, using a simulated annealing methodology, that both extended models (referred to as EMEXCLP and EMALP) produced better results than the previous models, with some additional computational burden.

One of the ways that EMS managers can improve system performance is by the dynamic redeployment of ambulances in response to demand that fluctuates throughout the week, by day of the week, and even hour by hour within a given day. For example, morning rush hour commutes shift workers from suburban communities towards their workplaces in cities or major employment centers, and return workers home at the end of the workday. Given that operating conditions fluctuate, dynamic redeployment strategies require fast
and effective methods that can be applied in real-time for typically large-scale problem domains. To our knowledge, this problem has been scarcely studied, with the notable exception of a study by Gendreau et al. [18]. In their novel approach, Gendreau et al. build upon their earlier double coverage model [17] and incorporate several practical considerations. They show that their model can be successfully solved using a parallelized version of their tabu search algorithm first developed for a static version of the double coverage model. In a recent review article, Brotcorne et al. suggest that any developments in this field are likely to take place in the area of dynamic redeployment models along with fast and accurate heuristics [9].

A crucial factor for dynamic redeployment models is the size of zones used to represent potential demand areas. The accuracy and realism of these models are increased by decreasing zone sizes, which in turn increases the number of zones. Given that the solution space of locating m response units within n zones is n^m, the need for fast and powerful heuristic algorithms for large-scale applications becomes self-evident.

In this paper we report our work in applying four meta-heuristic approaches to the MEXCLP, the use of statistical experimental design to ensure unbiased and objective analyses [4], and present the results, findings, and comparative analyses, which demonstrate the effectiveness of each meta-heuristic on small- to large-scale problems.

This paper is organized as follows. In Section 2 MEXCLP is reviewed in detail. Section 3 describes the four meta-heuristics, and Section 4 contains the experimental design. Section 5 presents a detailed report on the results and analyses. Section 6 includes concluding remarks and suggestions for future research.
2. A probabilistic location model

MEXCLP, proposed by Daskin [13,14], maximizes the expected coverage of demand subject to a distance r (or time threshold t) by locating m response units on a network of n zones. MEXCLP assumes that all units operate independently and each has an identical system-wide busy probability, p. If a zone is covered by i units (i = 0, 1, 2, ..., m), then the probability that it is covered by at least one unit is 1 - p^i. Let h_j denote the demand at node j; thus the expected coverage of zone j is h_j(1 - p^i). Let a_{kj} = 1 if a response unit in zone k is within distance r of zone j (covers j), 0 otherwise; y_{ij} = 1 if zone j is covered at least i times, 0 otherwise; and let x_j = the number of units located in zone j. Daskin formulates the MEXCLP as an integer linear programming (ILP) model as follows:

Maximize    \sum_{j=1}^{n} \sum_{i=1}^{m} (1 - p) p^{i-1} h_j y_{ij}                    (1)

Subject to  \sum_{i=1}^{m} y_{ij} - \sum_{k=1}^{n} a_{kj} x_k \le 0   \forall j,        (2)

            \sum_{j=1}^{n} x_j \le m,                                                   (3)

            x_j = 0, 1, 2, \ldots, m   \forall j,                                       (4)

            y_{ij} = 0, 1   \forall i, j.                                               (5)

The objective function (1) maximizes the expected number of calls that can be covered; constraint (2) counts the number of times zone j is covered and relates the decision variables y_{ij} to the first set of decision variables, x_j. Constraint (3) specifies the number of response units available, and constraint (4) allows multiple units to be located at any zone.

This formulation can be rewritten using fewer variables with a non-linear objective function [34]. Let y_j denote the number of times zone j is covered, rewrite the expected coverage for zone j as h_j(1 - p^{y_j}), and replace constraint (2) with \sum_{k=1}^{n} a_{kj} x_k = y_j. The non-linear version is formulated as follows:

Maximize    \sum_{j=1}^{n} h_j (1 - p^{y_j})                                            (6)

Subject to  \sum_{k=1}^{n} a_{kj} x_k = y_j   \forall j,                                (7)

            \sum_{k=1}^{n} x_k \le m,                                                   (8)

            x_k = 0, 1, 2, \ldots, m   \forall k,                                       (9)

            y_j = 0, 1, 2, \ldots, m   \forall j.                                       (10)
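Since the non-linear objective (Eq. (6)) is coded directly as the fitness function by all four meta-heuristics, it may help to see it spelled out. The following is a minimal sketch, not the authors' code; the function and parameter names are our own, and zones are indexed from 0 for convenience.

```python
def expected_coverage(solution, demand, covers, p):
    """Eq. (6): sum over zones j of h_j * (1 - p^{y_j}), where y_j is the
    number of located units whose zone covers zone j.

    solution: zone index (0-based here) of each of the m response units
    demand:   demand[j] = h_j, the call (demand) rate of zone j
    covers:   covers[k][j] = True iff a unit in zone k is within distance r
              of zone j (the a_kj coefficients)
    p:        system-wide unit busy probability
    """
    n = len(demand)
    y = [0] * n
    for k in solution:                 # count the coverage y_j of every zone
        for j in range(n):
            if covers[k][j]:
                y[j] += 1
    return sum(h * (1.0 - p ** yj) for h, yj in zip(demand, y))
```

With p = 0.5, a zone covered twice contributes 75% of its demand, a zone covered once contributes 50%, and an uncovered zone contributes nothing, matching the 1 - p^i term of the ILP objective.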
The original MEXCLP and the non-linear formulation above are theoretically identical, since MEXCLP searches the feasible space using binary variables that sum up to the number of times each node is covered, whereas the non-linear version searches the feasible space using positive integers which count the number of times each zone is covered. The MEXCLP model is useful for implementation via software like CPLEX [12]. We use CPLEX to determine the optimal solution for the MEXCLP ILP formulation. The non-linear model lends itself to easier implementation for meta-heuristic search methods such as tabu search and evolutionary algorithms, where the objective function (Eq. (6)) can be directly coded as the fitness function.

In order to solve MEXCLP, we begin by generating a hypothetical city (county, region) spread across 1024 sq. miles and impose a 32 mile by 32 mile grid over it. Each problem size utilizes this grid; however, each problem size imposes a different level of granularity. For example, an 8 × 8 grid would contain 64 zones with each zone being a square of 4 miles by 4 miles, a 16 × 16 grid would contain 256 zones with each zone being a square of 2 miles by 2 miles, and a 32 × 32 grid would contain 1024 zones with each zone being a square of 1 mile by 1 mile.

Furthermore, we randomly generated uniformly and non-uniformly distributed call (demand) rates for each of the 1024 zones. The uniform distribution spreads demand for calls somewhat evenly across the region, whereas non-uniformly distributed demand simulates a more realistic scenario in which there are significant variations across the region, as shown in Fig. 1. The demand rates for the 64 and 256 zone problems are then computed by a straightforward aggregation scheme using the demand rates for the 1024 zone representation.
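The paper does not spell out the aggregation scheme; one natural reading, assumed here, is that each coarse zone's rate is the sum of the rates of the block of fine zones it contains (2 × 2 fine zones per 16 × 16 zone, 4 × 4 per 8 × 8 zone). A sketch under that assumption:

```python
def aggregate(fine, factor):
    """Sum factor x factor blocks of a square demand grid, e.g. a 32 x 32
    grid becomes 16 x 16 with factor=2 or 8 x 8 with factor=4.  Total
    demand is preserved; only the granularity changes."""
    side = len(fine)
    assert side % factor == 0
    coarse_side = side // factor
    coarse = [[0.0] * coarse_side for _ in range(coarse_side)]
    for r in range(side):
        for c in range(side):
            coarse[r // factor][c // factor] += fine[r][c]
    return coarse
```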
Furthermore, for each grid (problem) size twenty demand (call) distributions are generated, 10 having a uniform demand distribution and 10 having a non-uniform distribution.

[Fig. 1. Non-uniformly distributed call (demand) data.]

These two elements, the grid representation and the demand distributions, define the basis for the problem, which can be briefly restated as follows: for a given problem size and call demand distribution, locate m units (ambulances) to maximize the expected coverage.

3. Meta-heuristic search methods

Location/relocation problems are typically NP-complete problems [33]. The size of the solution space of locating m response units in n zones is n^m. Because of the complex combinatorial nature of these problems, there have been various attempts to identify near-optimal solutions through the use of meta-heuristic search methods; most recently, evolutionary algorithms [1,5,7,8,17,18,23,33,34].

Meta-heuristic search methods try to find good solutions by searching the solution space using two kinds of search pressures: (1) exploration, and (2) exploitation. Exploration pressures cause the meta-heuristics to evaluate solutions which are outside the current neighborhood of a given solution, while exploitation pressures help the meta-heuristics use the current position of a solution in a neighborhood to search for better solutions within that neighborhood.

In this paper, we consider four meta-heuristic search methods, evolutionary algorithm (EA), tabu search (TS), simulated annealing (SA), and hybridized hill-climbing (HC), and attempt to adapt each meta-heuristic's method of exploration and exploitation to help identify good solutions in MEXCLP.

EA is a meta-heuristic search method that uses the concept of evolution to generate solutions to complex problems that typically have very large
search spaces, and hence are difficult to solve [22]. EAs "are a class of general purpose (domain independent) search methods" [26]. There are numerous and varied examples of how EAs can be applied to complex problems [2,26,38], including problems in the location domain [1,6].

TS is a meta-heuristic search method developed by Glover [19,20]. A unique feature of TS is its use of a memory, or list. Once a solution is entered into the TS memory, it is then tabu, or disallowed, for some time. The idea is to restrict the algorithm from visiting the same solution more than once in a given time period. The exploration pressure in TS is its ability to accept a worse solution as it progresses through its search. Tabu search has been successfully applied to various problem domains [29], including covering location models [17,18].

SA is a meta-heuristic that accepts a worse solution with some probability [24]. The ability of SA to "go downhill" by accepting a worse solution acts as a mechanism for exploratory pressure and can help SA escape local maxima. SA has been successfully used for location modeling [9,15].

HC is an iterative improvement search method; strictly speaking, therefore, HC is not a meta-heuristic. However, in our implementation HC is hybridized via the inclusion of mutation operators. The algorithm, as originally devised, "is a strategy which exploits the best solution for possible improvement; [however] it neglects exploration of the search space" [26]. The addition of the mutation operators in the hybridized HC greatly improves exploration of the search space, allowing HC to search each neighborhood where it lands in a systematic way before moving to the next randomly determined location in the search space.

3.1. Data representation

The data representation used for all meta-heuristics is a vector. A vector is a one-dimensional array of size m + 1, where m is the number of response units in the system.
Index 0 in each vector contains the vector's fitness value (fv) as determined by the evaluation function. Each of the remaining indices within a vector, indices 1 through m, represents a specific response unit, and the value recorded within each of these indices indicates the zone to which that unit is assigned. The zone numbers range between 1 and n, where n is the number of zones in the grid. Therefore, n equals 64, 256, and 1024 for each problem size, respectively. Fig. 2 illustrates the data representation, i.e., the vector, used for all meta-heuristics.

[Fig. 2. Data representation: a vector whose index 0 holds the fitness value fv and whose indices 1 through m hold zone numbers, e.g., 12, 28, 30, ...]

3.2. Algorithm structure

All four meta-heuristic algorithms have the following common structure:

BEGIN
  Initialization
  Evaluation of data vector(s)
  LOOP:
    Operation(s) to alter data vector(s)
    Evaluation of data vector(s)
  UNTIL termination condition
END

3.2.1. Initialization and evaluation

In the initialization step each response unit is randomly assigned a zone number within the problem grid; this random solution is then evaluated using Eq. (6). The fitness value is then stored in the vector. TS, SA, and HC perform their search using a single vector; in contrast, EA uses a population of vectors. Given that the objective is to maximize expected coverage, higher evaluations, i.e., larger fitness values, represent better solutions.

3.2.2. Operators

Operators are procedures used within meta-heuristics that help define the different levels of search pressures: exploration and exploitation. In this study we utilize a number of operators such as low-order mutation, high-order mutation, and crossover. However, due to the structure of the
different meta-heuristics, not all operators are used in all the meta-heuristics.

Crossover is a genetic operator that needs at least two vectors to create new vectors, called offspring. We use a one-point crossover where two parents create two offspring [26,27]. Crossover is used to promote exploratory pressure, which forces the meta-heuristic to look for a solution that lies between other solutions already identified within the search space.

The idea behind the mutation operator is to provide exploitation pressure in order to search the neighborhood for a better solution. In our implementation, the low-order mutation operator is used to perform a neighborhood search. Neighborhood searches are commonly encountered in meta-heuristics [27]. The definition of a neighborhood emphasizes the difference between the competing search pressures of exploration and exploitation. We define a neighborhood as the adjacent zones immediately surrounding a given response unit's location. For example, in Fig. 3, given a response unit in zone 28 the neighborhood zones are 19, 20, 21, 27, 29, 35, 36 and 37.

When this procedure is called, the algorithm reads the zone within a selected index of a vector. The neighborhood zones are then systematically placed in the selected index within the vector, freezing the values in all other indices, and the fitness value of the resulting vector is calculated. At the end of the neighborhood search, the vector that has the highest fitness value is returned.

The low-order mutation operator promotes exploitation pressure. It is possible to design an algorithm with stronger exploitation pressure by searching the entire neighborhood for all m response units, that is, completing a comprehensive neighborhood search for all indices simultaneously. However, this would require 8^m evaluations, which may not be computationally feasible for large values of m.
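Our reading of the neighborhood search just described can be sketched as follows; this is a hypothetical rendering with our own names, where `evaluate` stands for the Eq. (6) fitness function and zones are numbered row by row as in Fig. 3:

```python
def neighbors(zone, side):
    """Adjacent zones of `zone` on a side x side grid with zones numbered
    1..side*side row by row; e.g. zone 28 on an 8 x 8 grid has neighbors
    19, 20, 21, 27, 29, 35, 36 and 37."""
    r, c = divmod(zone - 1, side)
    return [nr * side + nc + 1
            for nr in range(max(0, r - 1), min(side, r + 2))
            for nc in range(max(0, c - 1), min(side, c + 2))
            if (nr, nc) != (r, c)]

def low_order_mutation(vector, index, side, evaluate):
    """Greedy neighborhood search on one selected index: try each adjacent
    zone for that unit, freezing all other indices, and return the vector
    with the highest fitness (index 0 stores the fitness value)."""
    best = vector[:]
    best[0] = evaluate(best)
    for z in neighbors(vector[index], side):
        cand = vector[:]
        cand[index] = z
        cand[0] = evaluate(cand)
        if cand[0] > best[0]:
            best = cand
    return best
```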
Consequently, for each implementation of the low-order mutation operator we compute, on average, 8 × m evaluations, searching within one selected index while holding all other indices constant.

High-order mutation is another genetic mutation operator, and is applied to one parent vector in order to create one offspring. To begin the high-order mutation, two cut locations are randomly determined. A third number, between 0 and 1, is then randomly generated. If the number is less than 0.5, then the "genetic material" between the cut points remains unaltered and the data outside the cuts is changed: the selected indices are filled in with randomly generated numbers between 1 and n, where n is the number of zones in the problem grid. However, if the third randomly generated number is greater than or equal to 0.5, then the "genetic material" outside the cuts remains unaltered and the data between the cuts is changed by filling in the selected indices with randomly generated numbers between 1 and n. Fig. 4 illustrates the high-order mutation operator.

The high-order mutation operator is desirable because it keeps part of the earlier solution while moving away from the neighborhood. High-order mutation has higher exploratory pressure than low-order mutation.

3.2.3. Termination condition

Preliminary tests indicated that the improvements experienced by all four meta-heuristics were less significant after 100 iterations. For that reason, all meta-heuristics for all problem sizes were programmed to terminate after 100 iterations, unless a significant improvement had occurred in the last 20 iterations, in which case the algorithm was allowed to run longer. An algorithm requiring more than 100 iterations continues to run until no improvements are identified for 20 successive iterations.
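The two genetic operators described above, one-point crossover and high-order mutation, might look as follows; again a sketch under our own naming, not the authors' implementation (the caller is assumed to re-evaluate the fitness slot at index 0):

```python
import random

def high_order_mutation(vector, n):
    """Pick two cut points at random; with probability 0.5 re-randomize the
    indices outside the cuts, otherwise the indices between the cuts,
    drawing new zone numbers from 1..n."""
    m = len(vector) - 1
    a, b = sorted(random.sample(range(1, m + 1), 2))
    child = vector[:]
    mutate_inside = random.random() >= 0.5   # >= 0.5: change between cuts
    for i in range(1, m + 1):
        if (a <= i <= b) == mutate_inside:
            child[i] = random.randint(1, n)
    return child

def one_point_crossover(p1, p2):
    """One-point crossover: two parents create two offspring by swapping
    their tails after a random cut."""
    m = len(p1) - 1
    cut = random.randint(1, m - 1)
    c1 = p1[:cut + 1] + p2[cut + 1:]
    c2 = p2[:cut + 1] + p1[cut + 1:]
    return c1, c2
```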
3.3. Evolutionary algorithm (EA)

[Fig. 3. Grid representation of zones for the 64 zone (8 × 8) problem: zones numbered 1 through 64, row by row, in an 8 × 8 grid.]

EAs create, manipulate, and maintain a population of solutions to problems, where each solution
is called a vector (commonly referred to as a chromosome) representing a possible solution to the problem. Interestingly, one can think of EAs as performing multiple searches in "parallel, because each [chromosome] in the population can be seen as a separate search" [32].

Typically, the larger the population size, the better the final solution; however, computation time increases proportionally with an increase in population size. Preliminary tests were run to determine the population size and the optimal genetic operator probability rates; these parameters are given in Table 2. The initial experiments were run by taking a candidate problem and running it using crossover, high-order, and low-order mutation rates from 0.1 to 0.9 (in increments of 0.1). This was done for population sizes of 20, 50, and 100. Therefore, we collected data from 2187 runs (9 × 9 × 9 × 3 = 2187). From this data, we examined the trade-off between solution quality and the time to reach the solution and decided on the following parameters: population size = 20, crossover rate = 0.5, high-order mutation rate = 0.1, and low-order mutation rate = 0.1.

The EA procedure is as follows:

1. Generate a population of vectors of size 20.
2. Each vector has a probability of being selected for crossover, high-order or low-order mutation as described in Table 2. The vector with the highest evaluation function has a probability of 1.0 of being selected for all three operations.
3. A vector will not have multiple operations performed on it in the same generation, but two copies of the same vector can be selected for two different operations. For example, a result of mutation in one generation will not be used for crossover in the same generation; however, the parent which was used for mutation may be selected for crossover.
4. The operators of crossover, high-order, and low-order mutation are applied to the selected vectors.
5. The old population and the offspring are put together in a pool, and the next generation is created by tournament selection [26,27] performed on two randomly selected vectors from the pool. Of the two vectors, the vector with the higher evaluation function is the winner and is copied into the next generation's population. An elitist strategy is also employed, where the best vector is always carried forward.
6. Once the new population is selected, the process goes back to step 2 until the termination condition is reached.

[Fig. 4. High-order mutation (mutate ends): a parent vector with two randomly selected cut indices produces one offspring; depending on whether a random number is less than 0.5 or not, either the indices outside the cuts or the indices between the cuts are replaced with random zones (example assumes an 8 × 8 grid, i.e., 64 possible zones).]

3.4. Tabu search (TS)

TS uses only one vector to traverse the search space; consequently, the quality of the final solution for TS is dependent on the quality of the initial vector. As with all the meta-heuristics, TS uses the low-order mutation operator to provide exploitation pressure, and the best vector found by using the low-order mutation operator is accepted regardless of whether it is worse than the original solution. By continually accepting
the best solution in a vector's neighborhood, TS is able to systematically search the problem's search space. The best solution found throughout the run of the TS algorithm is always recorded. However, the best solution found might not be the solution contained in the final (last) vector when the algorithm terminates.

To tune the memory parameter, a candidate problem was selected randomly and was run for tabu list sizes from 10 chromosomes to 200 (in increasing increments of 10) for 8, 9, 10, and 11 servers (20 × 4 = 80 total runs). We noticed in our initial runs that there were minor fluctuations close to the values which were multiples of the number of servers. We hypothesized that this occurs because our greedy search operator, the low-order mutation operator, changes only one index in the vector at every iteration, systematically starting from the first index and moving to the last index; since the vector size is m, the number of servers, having m chromosomes in the tabu list stores the zones of all the servers being changed once, and having 3 × m in the tabu list stores the zones of all the servers being changed three times. The reason we did not make just the zones being changed tabu, but included the entire chromosome, is that the problem does not depend upon one location but on the locations of all servers at any point in time. Again, the tabu list size of 3 × m was decided upon by running initial experiments with values from 1 × m to 20 × m for a candidate problem.

3.5. Simulated annealing (SA)

SA, like TS, uses a single vector to search for a good solution to a given problem, and like TS the final solution is also dependent on the quality of the initial vector. SA starts with a randomly generated vector, and then uses the low-order mutation operator to evolve the solution.

After initial trial runs with problem sizes of 64, 256 and 1024 zones, we noticed that SA tends to get stuck at local optima, especially for larger sized problems.
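The tabu search procedure of Section 3.4 can be sketched as below. This is our own simplified reading, not the authors' code: the one-index greedy neighbor move, the cycling through unit indices, and all names are assumptions, and whole solution vectors are stored on a tabu list of size 3 × m as described above.

```python
from collections import deque

def _neighbors(zone, side):
    # Adjacent zones on a side x side grid, zones numbered 1.. row by row.
    r, c = divmod(zone - 1, side)
    return [nr * side + nc + 1
            for nr in range(max(0, r - 1), min(side, r + 2))
            for nc in range(max(0, c - 1), min(side, c + 2))
            if (nr, nc) != (r, c)]

def _best_neighbor(vector, index, side, evaluate):
    # Best vector obtainable by moving one unit to an adjacent zone,
    # returned even if it is worse than the current vector.
    cands = []
    for z in _neighbors(vector[index], side):
        cand = vector[:]
        cand[index] = z
        cand[0] = evaluate(cand)
        cands.append(cand)
    return max(cands, key=lambda v: v[0])

def tabu_search(initial, side, evaluate, iterations=100):
    m = len(initial) - 1
    tabu = deque(maxlen=3 * m)        # whole vectors are tabu, list size 3*m
    current = initial[:]
    current[0] = evaluate(current)
    best = current[:]
    for it in range(iterations):
        cand = _best_neighbor(current, 1 + it % m, side, evaluate)
        if tuple(cand[1:]) in tabu:
            continue                   # revisiting this solution is disallowed
        tabu.append(tuple(current[1:]))
        current = cand
        if current[0] > best[0]:
            best = current[:]          # the best-so-far is always recorded
    return best
```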
The problem of getting stuck at local optima was more pronounced for SA than for any of the other methods. Upon closer investigation, it was determined that the temperature parameter had not been properly tuned. Due to the large number of problems to be solved, it was not practical to tune the temperature for every problem. Therefore, in our implementation we use an approach developed by Chiyoshi et al. [10] to automatically adjust the temperature parameter during runtime through the use of a feedback loop. This method employs the ratio of the number of solutions accepted to the number of solutions generated as feedback from the system. When this ratio drops below 40% the exploration pressure is not strong enough and the temperature is raised; when the ratio goes above 60% the exploration pressure is too high and the temperature is lowered. After some experimentation with initial temperature values of 20, 40 and 60, an initial temperature of 60 was selected.

3.6. Hill-climbing (HC)

HC also maintains a single vector throughout the run of the algorithm, which is altered at each generation by searching the neighborhood for better solutions. In initial trial runs with small problem sizes (64 zones with 8 response units), HC often got stuck at local optima when using the low-order mutation operator alone. The random-restart operator provides enormous exploratory pressure and forces the meta-heuristic to begin searching in a different region within the search space, thus escaping local optima; however, this alone was not sufficient for HC to effectively compete with the other three meta-heuristics. Therefore, in an effort to hybridize HC, the high-order mutation operator was introduced as an intermediate step between the low-order mutation operator and the random-restart procedure to alleviate this problem.

HC begins with a randomly generated vector, and then employs the low-order mutation operator. Only if the new vector is better than the old vector will the new vector be passed to the next iteration.
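The feedback-controlled simulated annealing of Section 3.5 might be sketched as follows. The 40%/60% thresholds and the initial temperature of 60 come from the text; the 10% adjustment step, the window of 50 candidates, and all names are our own assumptions, and Chiyoshi et al.'s exact schedule is not reproduced here.

```python
import math
import random

def simulated_annealing(initial, evaluate, perturb, iterations=2000,
                        temp=60.0, window=50):
    """SA sketch: worse candidates are accepted with probability
    exp(delta / temp); the accepted/generated ratio over a window of
    candidates drives the temperature up (ratio < 40%) or down (> 60%)."""
    current = initial[:]
    current[0] = evaluate(current)
    best = current[:]
    accepted = generated = 0
    for _ in range(iterations):
        cand = perturb(current)
        cand[0] = evaluate(cand)
        generated += 1
        delta = cand[0] - current[0]
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = cand
            accepted += 1
            if current[0] > best[0]:
                best = current[:]
        if generated == window:              # feedback loop
            ratio = accepted / generated
            if ratio < 0.40:
                temp *= 1.10                 # too little exploration
            elif ratio > 0.60:
                temp *= 0.90                 # too much exploration
            accepted = generated = 0
    return best
```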
The low-order mutation operator is used until m iterations have passed without any new solution being accepted, where m is the number of servers. We use m iterations because of the way the low-order mutation operator is designed. The low-order mutation operator, regardless of
whether employed in HC, SA, TS, or EA, always takes the first index of the vector, which represents the first server, and mutates it with a greedy neighborhood search. This constitutes one iteration for HC. The new vector will only be accepted if it is better than the existing vector. In the second iteration, the second server is mutated, and so on. Therefore, after m iterations, all m server positions have been changed. If, in HC, m iterations have passed and no new solution has been accepted, then an attempt has been made to change all the server positions individually and the current position is a local optimum.

Once no more improvements can be found through the use of the low-order mutation operator, the algorithm performs high-order mutation on the vector. After the vector undergoes the high-order mutation, the algorithm begins searching the new neighborhood using the low-order mutation operator again. After the high-order mutation is applied m times without an improvement, it is determined that the neighborhood has been exploited. The random-restart procedure is then used to start the search in a new neighborhood.

The "best" result from the multiple restarts is deemed to be the "best" solution achieved by the HC algorithm. This procedure is referred to as random-restart hill-climbing [32].

4. Experimental design

To test the meta-heuristics we use a statistical experimental design approach advocated by Barr et al. [4], which enables objective analyses and provides statistically significant conclusions, including interactions among independent variables. Table 1 summarizes the elements unique to each of the four meta-heuristics. Table 2 summarizes the unique parameters used by each meta-heuristic, and Table 3 summarizes the approximate number of evaluation functions required for each meta-heuristic.

Table 4 shows the different independent variables and dependent variables.
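One possible rendering of the hybridized hill-climbing procedure of Section 3.6 — greedy low-order search to a local optimum, then high-order kicks, then random restart — is sketched below. The schedule is simplified (a fixed number of kicks per restart rather than m kicks without improvement), and all names are ours, not the authors'.

```python
import random

def _neighbors(zone, side):
    r, c = divmod(zone - 1, side)
    return [nr * side + nc + 1
            for nr in range(max(0, r - 1), min(side, r + 2))
            for nc in range(max(0, c - 1), min(side, c + 2))
            if (nr, nc) != (r, c)]

def _greedy_step(vector, index, side, evaluate):
    # Accept a neighbor move on one index only if it improves the fitness.
    best = vector
    for z in _neighbors(vector[index], side):
        cand = vector[:]
        cand[index] = z
        cand[0] = evaluate(cand)
        if cand[0] > best[0]:
            best = cand
    return best

def _high_order(vector, n):
    # Randomize a random contiguous segment of unit positions.
    m = len(vector) - 1
    a, b = sorted(random.sample(range(1, m + 1), 2)) if m > 1 else (1, 1)
    child = vector[:]
    for i in range(a, b + 1):
        child[i] = random.randint(1, n)
    return child

def hill_climb(m, n, side, evaluate, restarts=5):
    overall = None
    for _ in range(restarts):                    # random-restart loop
        v = [0.0] + [random.randint(1, n) for _ in range(m)]
        v[0] = evaluate(v)
        for _ in range(m):                       # high-order kicks
            improved = True
            while improved:                      # low-order to a local optimum
                improved = False
                for i in range(1, m + 1):
                    nxt = _greedy_step(v, i, side, evaluate)
                    if nxt[0] > v[0]:
                        v, improved = nxt, True
            kick = _high_order(v, n)
            kick[0] = evaluate(kick)
            if kick[0] > v[0]:
                v = kick
        if overall is None or v[0] > overall[0]:
            overall = v                          # best over all restarts
    return overall
```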
Table 1
Summary of all meta-heuristics

                                      EA                         HC    SA           TS
Low-order mutation                    Yes                        Yes   Yes          Yes
High-order mutation                   Yes                        Yes   No           No
Crossover operator                    Yes                        No    No           No
Generational selection                Yes                        No    No           No
Common data representation            Yes                        Yes   Yes          Yes
Randomly generated initial vector(s)  Yes                        Yes   Yes          Yes
Uses single vector                    No                         Yes   Yes          Yes
Common fitness evaluation function    Yes                        Yes   Yes          Yes
Other unique parameters               Prob(crossover);           None  Temperature  Memory (list) size
                                      Prob(mutation), low-order
                                      and high-order

There are three independent variables, (1) distribution (D), (2) size (S), and (3) heuristic search method (HS), and two
dependent variables, (1) quality of solution (QOS) and (2) time to reach the best solution (TBS).

Problems in which demand is distributed uniformly across the grid are easier for all heuristic methods to solve than problems in which demand is distributed non-uniformly. With a uniform demand distribution, randomly assigned response-unit locations show less variation in solution quality, and solutions that adequately cover demand are identified faster. With a non-uniform demand distribution, random assignment gives no assurance that solution quality will be consistent: the initial randomly generated chromosomes can have vastly different fitness values. We therefore hypothesize that (H1) problems with uniformly distributed demand will be solved with a better QOS and with smaller TBS values (faster) than problems with non-uniform demand distributions.

In our implementations of the different meta-heuristic methods, the exploitation pressure is the same: all of the methods use the low-order mutation operator for exploitation. Each meta-heuristic, however, exerts a different degree of exploration pressure. HC uses high-order mutation and a random-restart procedure, SA and TS can accept worse solutions, and EA uses random generation of an initial population, a crossover operator, and a high-order mutation operator. We therefore hypothesize that (H2) the four heuristic search methods will solve problems with different values of TBS and QOS.

As can be seen from Table 4, all of the independent variables are categorical and the dependent variables are continuous. Since there is more than one independent variable, it is necessary to set up the experiment to consider the effect of each independent variable on each dependent variable individually. This process also takes into consideration interactions between the independent variables.
Therefore, we use a randomized factorial design and ANOVA [4,28] to test for the direct effects of the independent variables and their interactions.

Table 2
Summary of unique parameters used for each meta-heuristic

Meta-heuristic  Prob. of low-order mutation  Prob. of high-order mutation  Prob. of crossover  Population size  Temperature  Memory (list) size
EA              0.1                          0.1                           0.5                 20               N/A          N/A
HC              N/A                          N/A                           N/A                 1                N/A          N/A
SA              N/A                          N/A                           N/A                 1                see below    N/A
TS              N/A                          N/A                           N/A                 1                N/A          3*m, where m = the number of response units in the system

For SA, the temperature is adjusted by a feedback loop. Let Probability = (number of solutions accepted)/(number of solutions generated). Then:
  if Probability < 0.4, Temperature = Temperature * (1 + (0.4 - Probability)/0.4);
  else if Probability >= 0.6, Temperature = Temperature * 1/(1 + (Probability - 0.6)/0.4).

A problem configuration is defined as the problem's grid size and the type of demand distribution
for all four values of m (8, 9, 10 and 11), where m is the number of response units in the system. There are six problem configurations (three sizes and two distributions). Each problem configuration has 10 demand distributions, and each demand distribution is run 10 times for each of the four values of m, for a total of 400 test problems. This is done for each of the four meta-heuristics; testing across the four meta-heuristics thus yields a total of 1600 (4 x 400) data points.

The solutions from each of the four meta-heuristics were compared with the optimal solutions identified through integer linear programming using CPLEX. All runs were done on a Dell PC with an Intel Pentium 4 2.4 GHz processor and 512 MB of RAM.

When evaluating the results, two metrics are used: (1) the average quality of solution (QOS), that is, how close the solution given by each meta-heuristic is to the optimal solution, and (2) the average time at which the best solution (TBS) was found by each meta-heuristic. When comparing the computing effort of the different algorithms, the number of iterations is a misleading metric: a meta-heuristic might require fewer iterations yet perform more evaluation functions, and thereby take more time to complete, than another meta-heuristic. The number of evaluations performed by each meta-heuristic during an iteration can also vary substantially, so the number of iterations needed to reach the "best" solution is not strictly comparable. For these reasons, we use the time (in seconds) needed to reach the "best" solution for each meta-heuristic as a proxy for computing effort.

5. Results and discussion

We solved 400 problems and computed the average and standard deviation of solution quality (QOS) and time to best solution (TBS) for each of the six problem configuration and heuristic combinations. The total number of problems solved was 9600. Therefore, the sample size for the experiments is 400.
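Measuring TBS in this way amounts to recording the elapsed wall-clock time whenever the search produces a new incumbent solution. A minimal sketch (the stream of candidate solutions and the fitness function are stand-ins):

```python
import time

def track_time_to_best(candidates, fitness):
    """Return the best solution seen and, for each new incumbent, the
    elapsed wall-clock time at which it appeared; the last recorded time
    is the time to best solution (TBS)."""
    start = time.perf_counter()
    best, incumbents = None, []
    for x in candidates:
        if best is None or fitness(x) > fitness(best):
            best = x
            incumbents.append((time.perf_counter() - start, x))
    return best, incumbents
```

Logging incumbents rather than iteration counts is what makes the comparison fair across meta-heuristics whose per-iteration cost differs.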
Table 5 displays the mean and standard deviation for solution quality (QOS) and time (TBS), respectively.

Overall, the results show that our implementations of all four meta-heuristics produce very good results, particularly with respect to solution quality. The average solution quality is nearly 99%, with a standard deviation of about 1%, across all problem configurations and all meta-heuristics (9600 problems). The data were tested using ANOVA, and post hoc Tamhane's analysis was completed to find specific differences.

Table 3
The approximate number of evaluation functions required for each meta-heuristic

Meta-heuristic  Initial (random generation)  Crossover  High-order mutation  Low-order mutation  Random restart  Total per 100 iterations
EA              20                           8 * 100    2 * 100              16 * 100            N/A             2620
HC              1                            N/A        3 * 1                8 * 95              2 * 1           766
SA              1                            N/A        N/A                  8 * 100             N/A             801
TS              1                            N/A        N/A                  8 * 100             N/A             801

Table 4
Experimental setup

Independent variables:
(1) Distribution (D): a. Uniform (U); b. Non-uniform (NU)
(2) Size (S): a. 64 Zone (64); b. 256 Zone (256); c. 1024 Zone (1024)
(3) Heuristic search method (HS): a. Hill-climbing (HC); b. Tabu search (TS); c. Simulated annealing (SA); d. Evolutionary algorithm (EA)

Dependent variables:
(1) Quality of solution (QOS), measured as the ratio of the given solution to the optimum solution
(2) Time to reach the best solution (TBS)

Tamhane's analysis was chosen over Bonferroni's or Tukey's
because we could not assume equal variances among the different categories of the independent variables. Tables 6-8 show the ANOVA and the post hoc Tamhane's analyses with respect to QOS. An alpha of 0.05 was set for statistical significance.

Table 5
Performance of the different meta-heuristic search methods with respect to the quality of solution (QOS) and the time (in seconds) needed to reach the best solution (TBS); mean (std. dev.), N = 9600

Distribution  Size  Metric  CPLEX           HC             TS             SA             EA
Uniform       64    QOS     1.0000 (0.00)   0.9924 (0.76)  0.9882 (1.09)  0.9884 (0.93)  0.9927 (0.62)
                    TBS     0.19 (0.15)     0.36 (0.17)    0.20 (0.09)    0.19 (0.08)    2.39 (1.95)
              256   QOS     1.0000 (0.00)   0.9906 (0.94)  0.9893 (1.05)  0.9899 (0.97)  0.9936 (0.58)
                    TBS     3.92 (28.83)    2.35 (0.79)    1.88 (0.68)    1.86 (0.68)    9.82 (4.69)
              1024  QOS     1.0000 (0.00)   0.9873 (1.18)  0.9885 (1.05)  0.9873 (1.26)  0.9918 (0.76)
                    TBS     137.16 (61.42)  37.67 (13.98)  37.96 (14.84)  38.11 (13.41)  143.47 (45.73)
Non-uniform   64    QOS     1.0000 (0.00)   0.9926 (0.84)  0.9886 (1.11)  0.9885 (1.11)  0.9940 (0.62)
                    TBS     0.16 (0.11)     0.36 (0.18)    0.20 (0.09)    0.19 (0.09)    2.56 (2.30)
              256   QOS     1.0000 (0.00)   0.9903 (0.89)  0.9890 (1.02)  0.9890 (1.01)  0.9937 (0.61)
                    TBS     2.66 (2.00)     2.35 (0.79)    1.98 (0.73)    1.84 (0.68)    10.30 (4.84)
              1024  QOS     1.0000 (0.00)   0.9861 (1.20)  0.9852 (1.38)  0.9857 (1.15)  0.9918 (0.71)
                    TBS     146.33 (78.79)  37.07 (14.67)  35.75 (13.60)  37.78 (13.84)  147.47 (48.63)

Table 6
ANOVA results: dependent variable quality of solution

Source            Sum of squares  Degrees of freedom  Mean square  F               Sig.   Observed power (a)
Corrected model   0.0617 (b)      23                  0.0027       28.148          0.000  1.000
Intercept         9404.6136       1                   9404.6136    98,661,395.587  0.000  1.000
Size (S)          0.0157          2                   0.0079       82.425          0.000  1.000
Distribution (D)  0.0005          1                   0.0005       4.814           0.028  0.593
Heuristics (HS)   0.0368          3                   0.0123       128.628         0.000  1.000
S * D             0.0017          2                   0.0009       8.973           0.000  0.974
S * HS            0.0056          6                   0.0009       9.814           0.000  1.000
D * HS            0.0008          3                   0.0003       2.922           0.033  0.698
S * D * HS        0.0006          6                   0.0001       1.045           0.393  0.421
Error             0.9128          9576                0.0001
Total             9405.5881       9600
Corrected total   0.9745          9599

(a) Computed using alpha = .05.
(b) R-squared = .063 (adjusted R-squared = .061).

The ANOVA results in Table 6 show that the variables S (size) and HS (heuristic search) significantly (alpha < 0.01) affect QOS, and that the variable D (distribution) also significantly affects QOS (alpha < 0.05). We can therefore infer from Table 6 that the quality of solution depends on the problem size, the type of meta-heuristic, and the kind of demand distribution.

Table 6 also shows significant two-way interactions between S and D (alpha < 0.01), between S and HS (alpha < 0.01), and between D and HS (alpha < 0.05). The three-way interaction between S, D and HS is insignificant (alpha = 0.39). Closer examination of the interaction between S and HS shows that for 64-zone problems HC performs significantly better than TS and SA, but for 256-zone and 1024-zone problems there is no significant difference among these three meta-heuristics. Investigation of the interaction between D and HS shows that while HC, SA and TS do significantly better with uniform demand distributions than with non-uniform distributions, EA
does better with non-uniform demand distributions than with uniform demand distributions. Finally, examination of the interaction between D and S shows that for problems with uniform distributions the QOS does not deteriorate when S increases from 64 zones to 256 zones, but does deteriorate for 1024-zone problems; for non-uniform distributions, the QOS does deteriorate when S increases from 64-zone to 256-zone problems.

The post hoc Tamhane's tests in Table 7 show that for all problem configurations EA tends to provide slightly better solution quality than the other three algorithms. The slightly better performance of EA might be explained by the fact that EA performs, on average, more than three times as many evaluation functions as the other three algorithms, as illustrated in Table 3. Although statistically significant, these differences are (a) practically negligible, and (b) achieved at great cost in terms of time (Table 5).

Table 7 also shows that HC performs better than SA and TS. This result can be explained by taking into account the significant interaction between the variables HS and S: it is only in 64-zone problems that HC solutions are better than TS and SA solutions, but this pulls up HC's overall average, so the post hoc analysis shows HC performing better than SA and TS.

Table 7
Post hoc analysis (Tamhane's test) for the factor heuristic search method and the dependent variable quality of solution

Heuristic (I)  Heuristic (J)  Mean difference (I - J)  Std. error  Sig.   95% CI lower  95% CI upper
HC             TS              0.001743*               0.0003095   0.000   0.000929      0.002558
HC             SA              0.001769*               0.0003028   0.000   0.000972      0.002566
HC             EA             -0.003035*               0.0002463   0.000  -0.003684     -0.002387
TS             HC             -0.001743*               0.0003095   0.000  -0.002558     -0.000929
TS             SA              0.000026                0.0003197   1.000  -0.000816      0.000867
TS             EA             -0.004779*               0.0002668   0.000  -0.005481     -0.004076
SA             HC             -0.001769*               0.0003028   0.000  -0.002566     -0.000972
SA             TS             -0.000026                0.0003197   1.000  -0.000867      0.000816
SA             EA             -0.004805*               0.0002590   0.000  -0.005486     -0.004123
EA             HC              0.003035*               0.0002463   0.000   0.002387      0.003684
EA             TS              0.004779*               0.0002668   0.000   0.004076      0.005481
EA             SA              0.004805*               0.0002590   0.000   0.004123      0.005486

* The mean difference is significant at the .05 level.

Table 8
Post hoc analysis (Tamhane's test) for the factor size and the dependent variable quality of solution

Size (I)   Size (J)   Mean difference (I - J)  Std. error  Sig.   95% CI lower  95% CI upper
64 Zone    256 Zone   -.000005                 .0002313    1.000  -.000557       .000547
64 Zone    1024 Zone   .002711*                .0002596    .000    .002092       .003331
256 Zone   64 Zone     .000005                 .0002313    1.000  -.000547       .000557
256 Zone   1024 Zone   .002717*                .0002578    .000    .002101       .003332
1024 Zone  64 Zone    -.002711*                .0002596    .000   -.003331      -.002092
1024 Zone  256 Zone   -.002717*                .0002578    .000   -.003332      -.002101

* The mean difference is significant at the .05 level.

Removing the 64-zone problems shows that HC
does not perform significantly better than SA or TS. This result for small problems might be explained by considering the exploratory pressure used by HC: the random-restart procedure works very well in small problems, where the algorithm can jump randomly around the solution space trolling for the best solution. In larger problems, however, random jumping is less effective because the solution space is bigger, which negates any advantage it has over the more systematic exploratory pressures of SA and TS. In general, there is no statistically significant difference between the quality of the solutions found by SA and TS.

Table 8 breaks down the effect of the variable S on QOS. As the Tamhane's test in Table 8 shows, there is no significant difference in QOS when the problem size increases from 64 to 256 zones. For 1024-zone problems, however, QOS deteriorates by a very small (0.0027) but statistically significant amount (alpha < 0.001). Tables 9 and 10 show the ANOVA, and Tables 11 and 12 the post hoc Tamhane's analyses, with respect to TBS.

Table 9 shows the ANOVA results for the different meta-heuristic search methods without CPLEX as a category. All three independent variables S, D and HS significantly (alpha < 0.01) affect the time to best solution, and all two-way interactions, S and D, S and HS, and D and HS (alpha < 0.01), are significant. The three-way interaction between S, D and HS is not significant (alpha = 0.080, power = 0.707). Investigation of the interaction between S and D shows that problems with 64 zones and uniform demand distributions were solved faster than problems with 64 zones and non-uniform demand. For 256 zones and 1024 zones, however, the TBS values were not statistically different for uniform and non-uniform demand distributions.
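The effects reported in Tables 6, 9 and 10 are read off ANOVA F-tests. As a minimal illustration of how such a statistic is formed, the one-way F ratio (between-group mean square over within-group mean square) can be computed directly; the data below are toy numbers, not the study's:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: variance explained by group membership
    relative to the variance within groups."""
    k = len(groups)                          # number of factor levels
    n = sum(len(g) for g in groups)          # total observations
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy QOS samples for two hypothetical heuristics:
f_stat = one_way_anova_f([[0.991, 0.992, 0.990], [0.985, 0.986, 0.984]])
```

The factorial design used in the paper partitions the sums of squares further (into S, D, HS and their interactions), but every F value in the ANOVA tables is formed the same way: a mean square for the effect divided by the error mean square.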
Similarly, the interaction between S and HS reveals that even though EA was slower than the other three meta-heuristics at all three problem sizes (64, 256 and 1024), it is for 1024-zone problems that this difference becomes dramatic. Finally, the interaction between D and HS reveals that while the TBS values for TS and SA differ between uniform and non-uniform demand distributions, the differences are more noticeable for EA.

To compare CPLEX with the meta-heuristics, the CPLEX times were included as another category of the variable HS; the ANOVA results then change (Table 10). The variable D is no longer significant (alpha = 0.146), and all two-way interactions involving D become insignificant (S and D, alpha = 0.078; D and HS, alpha = 0.052). This can be explained by the fact that the TBS values for CPLEX are not affected significantly by the demand distribution, while the meta-heuristics' TBS values are.

Table 9
ANOVA results: dependent variable time to best solution (HS not including CPLEX solutions)

Source           Sum of squares  Degrees of freedom  Mean square  F        Sig.   Observed power (a)
Corrected model  1.5E+13 (b)     23                  6.6E+11      2749.84  0.000  1.000
Intercept        5.3E+12         1                   5.3E+12      22432    0.000  1.000
S                8E+12           2                   4E+12        16747    0.000  1.000
D                3.2E+09         1                   3.2E+09      13.29    0.000  0.954
HS               2.9E+12         3                   9.8E+11      4105.67  0.000  1.000
S * D            5E+09           2                   2.5E+09      10.53    0.000  0.989
S * HS           4.1E+12         6                   6.9E+11      2893.74  0.000  1.000
D * HS           6.5E+09         3                   2.2E+09      9.09     0.000  0.996
S * D * HS       2.7E+09         6                   4.5E+08      1.88     0.080  0.707
Error            2.3E+12         9576                2.4E+08
Total            2.3E+13         9600
Corrected total  1.7E+13         9599

(a) Computed using alpha = .05.
(b) R-squared = .869 (adjusted R-squared = .868).

The interaction between S and HS can be explained from the data in Table 5. The real difference in time between CPLEX and
EA, on the one hand, and HC, TS, and SA, on the other, occurs for 1024-zone problems; at smaller problem sizes the difference is much smaller. Also, the average and standard deviation of the TBS values for CPLEX deteriorate much more quickly as the problem size grows.

Table 10
ANOVA results: dependent variable time to best solution (HS includes CPLEX solutions)

Source           Sum of squares  Degrees of freedom  Mean square  F         Sig.   Observed power (a)
Corrected model  2.70E+07 (b)    29                  9.30E+05     1803.87   .000   1.000
Intercept        9.50E+06        1                   9.50E+06     18427.11  .000   1.000
S                1.61E+07        2                   8.03E+06     15571.53  .000   1.000
D                1.09E+03        1                   1.09E+03     2.12      .146   .307
HS (c)           4.04E+06        4                   1.01E+06     1956.92   .000   1.000
S * D            2.63E+03        2                   1.32E+03     2.55      .078   .512
S * HS (c)       6.86E+06        8                   8.57E+05     1662.47   .000   1.000
D * HS (c)       4.83E+03        4                   1.21E+03     2.34      .052   .684
S * D * HS (c)   1.30E+04        8                   1.62E+03     3.15      .001   .970
Error            6.17E+06        11,970              5.16E+02
Total            4.26E+07        12,000
Corrected total  3.31E+07        11,999

(a) Computed using alpha = .05.
(b) R-squared = .814 (adjusted R-squared = .813).
(c) HS includes CPLEX as the fifth category.

Table 11
Post hoc analysis (Tamhane's test) for the factor heuristic search method and the dependent variable time to best solution

Heuristic (I)  Heuristic (J)  Mean difference (I - J)  Std. error  Sig.   95% CI lower  95% CI upper
HC             TS              .36433                  .544182     .999    -1.15988      1.88854
HC             SA              .09662                  .547622     1.000   -1.43722      1.63046
HC             EA             -39.31011*               1.503689    .000   -43.52328    -35.09694
HC             CPLEX          -35.04816*               1.626450    .000   -39.60536    -30.49096
TS             HC             -.36433                  .544182     .999    -1.88854      1.15988
TS             SA             -.26771                  .545975     1.000   -1.79694      1.26152
TS             EA             -39.67444*               1.503090    .000   -43.88593    -35.46294
TS             CPLEX          -35.41248*               1.625896    .000   -39.96814    -30.85683
SA             HC             -.09662                  .547622     1.000   -1.63046      1.43722
SA             TS              .26771                  .545975     1.000   -1.26152      1.79694
SA             EA             -39.40673*               1.504339    .000   -43.62171    -35.19175
SA             CPLEX          -35.14478*               1.627050    .000   -39.70366    -30.58590
EA             HC              39.31011*               1.503689    .000    35.09694     43.52328
EA             TS              39.67444*               1.503090    .000    35.46294     43.88593
EA             SA              39.40673*               1.504339    .000    35.19175     43.62171
EA             CPLEX           4.26195                 2.146738    .383    -1.75090     10.27480
CPLEX          HC              35.04816*               1.626450    .000    30.49096     39.60536
CPLEX          TS              35.41248*               1.625896    .000    30.85683     39.96814
CPLEX          SA              35.14478*               1.627050    .000    30.58590     39.70366
CPLEX          EA             -4.26195                 2.146738    .383   -10.27480     1.75090

* The mean difference is significant at the .05 level.

The post hoc Tamhane's analysis in Table 11 shows that there is a significant difference between
the population-based meta-heuristic, EA, and the other three single-chromosome-based meta-heuristics. As expected, EA is much slower than HC, TS, and SA. Even though TS is the fastest meta-heuristic, it is not significantly faster than the other two single-chromosome meta-heuristics (SA and HC). The single-chromosome meta-heuristics significantly outperform CPLEX, whereas the difference between EA and CPLEX is not statistically significant. Finally, Table 12 shows that as the problem size increases, TBS increases significantly (alpha < 0.001).

The hypothesis that the four heuristic search methods solve problems with different values of TBS and QOS (H2) can be partially accepted. EA is clearly statistically different from the other three meta-heuristics in both TBS (worse) and QOS (better); the other three meta-heuristics are statistically different only under certain conditions (e.g., problem size).

The hypothesis that problems with uniformly distributed demand will be solved with a better QOS and with smaller TBS values than problems with non-uniform demand distributions (H1) can also be accepted. The demand distribution does make a difference in the performance of the meta-heuristic methods, but does not affect CPLEX.

The process of developing these different implementations of meta-heuristics gave the researchers some insight into the problem domain and certain operator settings. It appears that the neighborhood size and the quality of the initial (starting) solution (vector) have a significant effect on the performance of the algorithms.

The effectiveness of the low-order mutation operator designed for the four meta-heuristics depends heavily on the size of the neighborhood. For this research the neighborhood was defined as the zones immediately adjacent to a given zone, as illustrated in Fig. 3.
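For the square grids used here (64, 256, and 1024 zones, i.e. 8 x 8, 16 x 16, and 32 x 32), an adjacent-zone neighborhood can be generated as in the sketch below. Eight-connected adjacency is an assumption on our part; the precise definition is the one illustrated in the paper's Fig. 3.

```python
def adjacent_zones(zone, side):
    """Zones immediately adjacent to `zone` on a side x side grid whose
    zones are numbered row by row starting from 0 (8-connectivity assumed)."""
    row, col = divmod(zone, side)
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue                         # skip the zone itself
            r, c = row + dr, col + dc
            if 0 <= r < side and 0 <= c < side:  # stay inside the grid
                out.append(r * side + c)
    return out
```

An interior zone then has eight neighbors, an edge zone five, and a corner zone three, so the neighborhood searched by the low-order mutation operator stays small and independent of the total number of zones n.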
In preliminary tests, the neighborhood size was instead set equal to n, the number of zones in the problem grid, and an entire set of tests for all meta-heuristics and all problem sizes was run. The quality of the solutions improved noticeably for all meta-heuristics, with the average solution quality very close to 100% of optimum and the standard deviation reduced to less than 0.2% in the worst case. However, the time needed to run the algorithms became prohibitive, especially for 1024-zone problems. Future research might attempt to estimate the optimal neighborhood size for this problem domain.

While running our initial experiments we noticed that the quality of the initial vector makes a tremendous difference in the final solution quality for TS and SA. It is also important for HC, where the impact is felt every time HC does a random restart. The average fitness value of a randomly generated vector is approximately 80% of the optimum solution. An effective way to promote good results for TS, SA, and HC might therefore be to generate, say, 50 vectors randomly and take the best one as the starting vector, increasing the expected value of the initial vector to approximately 90%. This method could also serve HC as a modified random-restart procedure.

Overall, TS and SA on average find their best solutions in the least amount of time and with relatively small variability, as indicated by the standard deviations of the computing times (Table 5). With the exception of EA, the meta-heuristics significantly outperform CPLEX as the problem size grows, in both average times and standard deviations. Indeed, the relatively large standard deviations of the CPLEX times reflect the fact that these are NP-hard problems: for any given problem instance it is possible for ILP solvers such as CPLEX to take an extraordinary amount of time [1], so ILP solvers are not viable alternatives for time-critical applications. This is particularly important for large problems (1024 zones), where dynamic redeployment practices demand fast and accurate solutions. Under the most realistic scenario, 1024 zones with non-uniformly distributed demand, TS performs best, with an average time of 35.75 seconds and a standard deviation of 13.60, which implies that a limit of a 1-minute delay to find the best redeployment strategy can be met (solved) accurately with a probability of 96.25%.

Table 12
Post hoc analysis (Tamhane's test) for the factor size and the dependent variable time to best solution

Size (I)   Size (J)   Mean difference (I - J)  Std. error  Sig.  95% CI lower  95% CI upper
64 Zone    256 Zone    -3.21646*               .066322     .000   -3.37487      -3.05804
64 Zone    1024 Zone  -79.15633*               1.031603    .000  -81.62056     -76.69210
256 Zone   64 Zone      3.21646*               .066322     .000    3.05804       3.37487
256 Zone   1024 Zone  -75.93987*               1.033314    .000  -78.40818     -73.47156
1024 Zone  64 Zone     79.15633*               1.031603    .000   76.69210      81.62056
1024 Zone  256 Zone    75.93987*               1.033314    .000   73.47156      78.40818

* The mean difference is significant at the .05 level.

6. Conclusions

The contribution of this paper is threefold. First, to our knowledge this is the first time that multiple meta-heuristics have been systematically developed and applied to the maximum expected covering location problem (MEXCLP). This allowed us to objectively evaluate the performance of the various meta-heuristics on a given problem. There was a concerted effort to code each meta-heuristic in a like manner and to utilize like procedures so that the results would be meaningful and generalizable. Specifically, the meta-heuristics used a common data representation, a common evaluation function, and a common mutation operator. In addition, the initial vector(s) of all meta-heuristics were randomly generated.
Second, we confirm and strengthen the earlier findings reported in [1] that an EA can produce high-quality solutions, but also show that non-EA approaches such as TS can solve large-scale instances of MEXCLP with a high degree of confidence in QOS, in less time. Finally, this research is unique in that it is an experimentally designed comparative study. The results of the four meta-heuristics are statistically compared: the mean and standard deviation are reported for each heuristic for six problem configurations (i.e., 2 distribution types x 3 problem sizes), and the significance of the statistical tests, using ANOVA, between the meta-heuristics and their interactions with other variables is also reported. This allows a clear, unbiased assessment of the performance of each meta-heuristic for this class of problems.

The results suggest that any one of the four meta-heuristics could be used in this problem domain, although each has its pros and cons. For the successful implementation of EA, SA, and TS, however, it is important that the parameters be tuned properly. For HC the only parameter to be tuned is the neighborhood size, which makes HC very simple to use. Overall, TS and SA give very good results with minimal computational effort (time), particularly for large problems.

As our implementation of the four meta-heuristic methods shows, common operators can be developed that are usable across various applications. In addition, as we demonstrated with SA, parameters need not be static: they can evolve dynamically as the algorithm runs, which eliminates the need to run a plethora of preliminary tests to identify optimal parameter values. Trying to identify optimal static parameter values can be problematic, since there is no guarantee that those values will remain valid for different implementations of a problem or for different problem sizes.
Therefore, we support the process of dynamically adapting parameter values, based on the conditions of the algorithm, through the use of feedback loops.

Another variation that might warrant investigation is the use of the crossover operator within HC: applying the crossover operator to the chromosomes recorded as the best solutions found in different runs, in lieu of the random-restart operator, would create new search areas. It is also possible to merge the different strengths of these meta-heuristics by having them cooperate with each other. One idea would be to run EA for a short time in order to obtain a population of good solutions, and then randomly select chromosomes from that population as the initial chromosomes for HC, SA, or TS.
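The feedback loop used for the SA temperature (Table 2) is a concrete instance of this approach. A minimal sketch; how often the adjustment is applied (here, once per window of generated solutions) is an assumption:

```python
def adapt_temperature(temperature, accepted, generated):
    """Adjust the SA temperature from the observed acceptance probability,
    as in Table 2: heat up when fewer than 40% of the solutions generated
    in the last window were accepted, cool down at 60% or more."""
    p = accepted / generated                       # acceptance probability
    if p < 0.4:
        temperature *= 1 + (0.4 - p) / 0.4         # too cold: raise it
    elif p >= 0.6:
        temperature *= 1 / (1 + (p - 0.6) / 0.4)   # too hot: lower it
    return temperature
```

For example, an acceptance probability of 0.2 raises a temperature of 10.0 to 15.0, while a probability in [0.4, 0.6) leaves the temperature unchanged.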
Finally, this experiment focused on the different exploration pressures unique to each meta-heuristic while keeping the exploitation pressure constant. Future research might explore the impact of different exploitation pressures with respect to local search methods, such as vertex substitution [15], and compare those results with the greedy neighborhood search algorithm, i.e., the low-order mutation operator, used here.

References

[1] H. Aytug, C. Saydam, Solving large-scale maximum expected covering location problems by genetic algorithms: A comparative study, European Journal of Operational Research 141 (2002) 480-494.
[2] H. Aytug, M. Khouja, F.E. Vergara, A review of the use of genetic algorithms to solve production and operations management problems, International Journal of Production Research 41 (17) (2003) 3955-4009.
[3] M. Ball, L. Lin, A reliability model applied to emergency service vehicle location, Operations Research 41 (1993) 18-36.
[4] R.S. Barr, B.L. Golden, J.P. Kelly, M.G.C. Resende, W.R. Stewart, Designing and reporting on computational experiments with heuristic methods, Journal of Heuristics 1 (1) (1995) 9-32.
[5] J. Beasley, Lagrangean heuristics for location problems, European Journal of Operational Research 65 (1993) 383-399.
[6] J.E. Beasley, P.C. Chu, A genetic algorithm for the set covering problem, European Journal of Operational Research 94 (1996) 392-404.
[7] S. Benati, G. Laporte, Tabu search algorithms for the (r|Xp)-medianoid and (r|p)-centroid problems, Location Science 2 (1994) 193-204.
[8] L. Brotcorne, G. Laporte, F. Semet, Fast heuristics for large scale covering location problems, Computers and Operations Research 29 (2002) 651-665.
[9] L. Brotcorne, G. Laporte, F. Semet, Ambulance location and relocation models, European Journal of Operational Research 147 (2003) 451-463.
[10] F.Y. Chiyoshi, R.D.
Galvao, A statistical analysis of simulated annealing applied to the p-median problem, Annals of Operations Research 96 (2000) 61-74.
[11] R. Church, C. ReVelle, The maximal covering location problem, Papers of the Regional Science Association 32 (1974) 101-118.
[12] CPLEX, Using the CPLEX Callable Library, CPLEX Optimization, Inc., Incline Village, NV, 1995.
[13] M.S. Daskin, Application of an expected covering model to emergency medical service system design, Decision Sciences 13 (1982) 416-439.
[14] M.S. Daskin, A maximum expected covering location model: Formulation, properties and heuristic solution, Transportation Science 17 (1983) 48-69.
[15] M.S. Daskin, Network and Discrete Location, John Wiley & Sons, New York, 1995.
[16] R.D. Galvao, F.Y. Chiyoshi, R. Morabito, Towards unified formulations and extensions of two classical probabilistic location models, Computers and Operations Research 32 (1) (2005) 15-33.
[17] M. Gendreau, G. Laporte, F. Semet, Solving an ambulance location model by tabu search, Location Science 5 (2) (1997) 75-88.
[18] M. Gendreau, G. Laporte, F. Semet, A dynamic model and parallel tabu search heuristic for real-time ambulance relocation, Parallel Computing 27 (2001) 1641-1653.
[19] F. Glover, Future paths for integer programming and links to artificial intelligence, Computers and Operations Research 13 (1986) 533-549.
[20] F. Glover, M. Laguna, Tabu Search, Kluwer, Boston, MA, 1997.
[21] J.B. Goldberg, Operations research models for the deployment of emergency services vehicles, EMS Management Journal 1 (1) (2004) 20-39.
[22] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.
[23] J. Jaramillo, J. Bhadury, R. Batta, On the use of genetic algorithms to solve location problems, Computers and Operations Research 29 (2002) 761-779.
[24] S. Kirkpatrick, C.D. Gelatt Jr., M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671-680.
[25] R.C. Larson, A hypercube queuing model for facility location and redistricting in urban emergency services, Computers and Operations Research 1 (1974) 67-95.
[26] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, third ed., Springer, New York, 1999.
[27] Z. Michalewicz, D.B. Fogel, How to Solve It: Modern Heuristics, Springer-Verlag, 2000.
[28] M.J. Norusis, SPSS 10.0 Guide to Data Analysis, Prentice-Hall, Upper Saddle River, NJ, 2000.
[29] I.H. Osman, G. Laporte, Metaheuristics: A bibliography, Annals of Operations Research 63 (1996) 513-628.
[30] S.H. Owen, M.S. Daskin, Strategic facility location: A review, European Journal of Operational Research 111 (1998) 423-447.
[31] C. ReVelle, K. Hogan, The maximum availability location problem, Transportation Science 23 (1989) 192-200.
[32] S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Prentice-Hall, New Jersey, 1995.
[33] C. Saydam, J. Repede, T. Burwell, Accurate estimation of expected coverage: A comparative study, Socio-Economic Planning Sciences 28 (2) (1994) 113-120.
[34] C. Saydam, H. Aytug, Accurate estimation of expected coverage: Revisited, Socio-Economic Planning Sciences 37 (2003) 69-80.
[35] D. Schilling, D. Elzinga, J. Cohon, R. Church, C. ReVelle, The TEAM/FLEET models for simultaneous facility and equipment siting, Transportation Science 13 (1979) 163-175.
[36] D.A. Schilling, V. Jayaraman, R. Barkhi, A review of covering problems in facility location, Location Science 1 (1) (1993) 25-55.
[37] C. Toregas, R. Swain, C. ReVelle, L. Bergman, The location of emergency service facilities, Operations Research 19 (1971) 1363-1373.
[38] F.E. Vergara, M. Khouja, Z. Michalewicz, An evolutionary algorithm for optimizing material flow in supply chains, Computers and Industrial Engineering 43 (2002) 407-421.