An Integer Programming Representation for Data Center Power-Aware Management - Report

Report for CANO (Communication Networks Optimization) project.

An Integer Linear Programming Representation for Data Center Power-Aware Management
ILP Implementation and Heuristic Design

Ioanna Tsalouchidou, Arinto Murdopo, Josep Subirats
Communication Networks Optimization, 2012
Table of Contents

1. Introduction
2. ILP Problem Formulation
   2.1. Problem Statement
   2.2. Scheduling Approach
   2.3. Revenue, Power Cost, Migration Cost and Quality of Service as Factors
   2.4. The Final Model
3. Metaheuristic Design
   3.1. GRASP Overview
   3.2. GRASP Heuristic Implementation
   3.3. Random Job Choosing Improvement
4. Dataset Generation
   4.1. Node Capacity Array Generation
   4.2. Min/Max Job CPU Requirement Array Generation
5. Experiment Results
   5.1. Experiment Platform and Configuration
   5.2. Execution Time
   5.3. GRASP Heuristic Alpha Value
   5.4. GRASP Heuristic Solution Quality Rate
   5.5. Maximum Benefit Comparison - CPLEX and GRASP Heuristic
6. Conclusions
7. Future Work
   7.1. Implementation of the Local Search Phase of GRASP
   7.2. Simulations with Multiple Problem Instances
8. References
Table of Figures

Figure 1: Scheduling problem
Figure 2: Job scheduling
Figure 3: Simple scheduling approach
Figure 4: Final scheduling ILP model
Figure 5: General GRASP algorithm
Figure 6: Jobs benefit evaluation
Figure 7: Execution time of the CPLEX solution
Figure 8: Execution time for 100H200J (R)
Figure 9: NR GRASP Heuristic results
Figure 10: R GRASP Heuristic results
Figure 11: Normalized benefit over time for NR GRASP Heuristic
Figure 12: Normalized benefit over time for R GRASP Heuristic
Figure 13: CPLEX and R/NR GRASP Heuristic results
Figure 14: Possible local search phase approach
1. Introduction

This project is based on the paper "An Integer Linear Programming Representation for DataCenter Power-Aware Management" [1]. That paper describes the problem of placing jobs in a data centre so as to maximize its benefit, taking into account the revenue obtained for executing the jobs, SLA penalization costs, migration costs and energy consumption costs.

In this project, the integer linear problem described in the aforementioned paper has been implemented using IBM ILOG CPLEX, in order to obtain the optimal solution of different problem instances. The main contribution of this project is the implementation of a GRASP metaheuristic (with two variants) which tries to find a good solution to the same problem in much less time than the original ILP. The metaheuristic has been implemented in Java, and has been tested and compared against the results provided by the CPLEX ILP implementation.

The structure of this document is as follows. Section 2 summarizes the problem statement as described in [1]. Section 3 provides an overview of the GRASP heuristic, along with an explanation of the implemented metaheuristic. Closely related to Section 3, Section 4 explains the dataset generation procedure used to run the simulations. Performance and quality results of the metaheuristic are presented in Section 5. Finally, conclusions on the work done are presented in Section 6, while Section 7 outlines possible improvements.
2. ILP Problem Formulation

2.1. Problem Statement

In the proposed paper, a data centre is modelled as a set of physical machines, processors and jobs. For each element, a set of constraints has to be satisfied in order to be coherent with the model.

The solution to the problem consists in finding an optimal way to assign resources (in the paper, only the number of required CPUs is considered) to jobs, keeping in mind the electrical power consumption of the resources, the migration costs (in case a job has to be moved to another physical machine), the quality of service penalizations and, finally, the revenue obtained by running these jobs. In the end, a positive benefit has to be obtained.

In this direction, the problem is defined as a function to be maximized (the obtained benefit) through the optimal balance of these four parameters, together with a set of conditions and restrictions that place the jobs on the resources while keeping the solution viable and realistic.

2.2. Scheduling Approach

The proposed model exploits virtualization technology so that data centres are able to run several jobs on one or multiple physical machines, to share resources and, above all, to migrate jobs from one machine to another. This technology enables the dynamic allocation of jobs along with consolidation, the scheduling policy that reduces the number of used resources by allocating the largest number of jobs on the smallest number of machines without trading off QoS or performance.

Figure 1: Scheduling problem
As shown in Figure 2, the problem that needs to be solved at each scheduling round is deciding which resources will be assigned to each job in order to maximize the overall benefit, taking into consideration the job revenue, the migration costs, the power costs and the tolerable QoS loss.

Figure 2: Job scheduling

For the scheduling process, each job is assumed to have determined resource requirements, such as CPU quota or memory space, in order to run properly. Figure 3 shows the constraints of the problem as they were primarily constructed.

Figure 3: Simple scheduling approach

The scheduling problem is therefore described by a matrix (schedule[Hosts,Jobs]) stating whether a particular job is allocated to a particular host (schedule[host,job]=1) or not (schedule[host,job]=0). The "Unique" constraint in this case is restrictive, given that it imposes that all the jobs must be allocated in order for the solution to be feasible. The "Capacity" constraint imposes that the capacity of each one of the hosts must not be exceeded. A feasibility check for this simple model is sketched below.
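The following Java sketch illustrates these two constraints on a toy instance. It is not part of the CPLEX model; the array names (schedule, hostCapacity, jobCpu) and the fixed per-job CPU demand are assumptions made purely for illustration.

```java
// Illustrative sketch (not the CPLEX model): checks the "Unique" and "Capacity"
// constraints of the simple scheduling matrix on a small example.
public class SimpleScheduleCheck {

    /** schedule[h][j] == 1 iff job j is placed on host h. */
    static boolean isFeasible(int[][] schedule, int[] hostCapacity, int[] jobCpu) {
        int hosts = schedule.length;
        int jobs = jobCpu.length;

        // "Unique": every job must be allocated to exactly one host.
        for (int j = 0; j < jobs; j++) {
            int placements = 0;
            for (int h = 0; h < hosts; h++) placements += schedule[h][j];
            if (placements != 1) return false;
        }

        // "Capacity": the CPUs demanded on a host must not exceed its capacity.
        for (int h = 0; h < hosts; h++) {
            int used = 0;
            for (int j = 0; j < jobs; j++) used += schedule[h][j] * jobCpu[j];
            if (used > hostCapacity[h]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] hostCapacity = {4, 2};          // CPUs per host
        int[] jobCpu = {2, 2, 1};             // CPUs required per job
        int[][] schedule = {{1, 1, 0},        // jobs 1 and 2 on host 1
                            {0, 0, 1}};       // job 3 on host 2
        System.out.println(isFeasible(schedule, hostCapacity, jobCpu)); // true
    }
}
```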
2.3. Revenue, Power Cost, Migration Cost and Quality of Service as Factors

The next step is the introduction of further constraints that model the maximization of the economic benefit. Jobs are translated into money according to the data-centre pricing policy defined in the Service Level Agreement (SLA). The benefit equation is defined as: Benefit = Revenue - Costs.

The next step towards the final model focuses on the power cost and consumption and on how it can be minimized. The power curve of a given host grows logarithmically, which means that two machines with many processors each but only one active processor consume much more than a single working machine with two active processors.

In the adopted solution, the number of CPUs in a host is a natural value, so the scheduling considers the processors of a given host separately. Moreover, the number of active CPUs in a host must not surpass the maximum number of CPUs that the host has available.

Another drawback introduced by consolidation is the migration cost, defined as the cost representing the time wasted in moving a job from one host to another. During this time, no revenue is obtained from the execution of the job, so the migration cost must be subtracted from the overall benefit.

The final step is to define the Quality of Service as an input to the scheduling problem. The system allows some degradation in the provided QoS, as specified in the SLA. In order to improve consolidation and reduce power consumption, QoS can be relaxed, introducing at the same time a penalization. This penalization is specified in the Service Level Agreement, so the schedule can be altered while taking these economic consequences into consideration.

In order to measure the level of accomplishment of the job goals and the SLA, the authors define the Health function, which in its implementation is a linear function that varies from cpumin to cpumax. In other words, a CPU assignment of cpumin (which depends on each job) means a health of 0 (maximum QoS penalization), which corresponds to running the job under the minimal conditions, whereas a CPU assignment of cpumax means a health of 1 (no QoS penalization), which corresponds to the optimal execution of the job.
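Under the linear definition above, the Health function can be written as health(cpu) = (cpu - cpumin) / (cpumax - cpumin) for cpumin <= cpu <= cpumax. The small Java helper below is a sketch of this mapping under that assumption; it is not taken from [1], and the method name is ours.

```java
// Sketch of the linear Health function described above, assuming
// health(cpumin) = 0 and health(cpumax) = 1 with linear interpolation in between.
public final class Health {

    /** Returns the health of a job given its assigned CPUs and its min/max requirement. */
    static double health(int assignedCpu, int cpuMin, int cpuMax) {
        if (cpuMax == cpuMin) return 1.0;                 // nothing to degrade
        double h = (double) (assignedCpu - cpuMin) / (cpuMax - cpuMin);
        return Math.max(0.0, Math.min(1.0, h));           // clamp to [0, 1]
    }

    public static void main(String[] args) {
        System.out.println(health(2, 2, 4)); // 0.0 -> maximum QoS penalization
        System.out.println(health(3, 2, 4)); // 0.5
        System.out.println(health(4, 2, 4)); // 1.0 -> no QoS penalization
    }
}
```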
2.4. The Final Model

As a result of all the above steps, the final model shown in Figure 4 is obtained. This is also the version of the model that has been implemented in CPLEX.

Figure 4: Final scheduling ILP model

The presented ILP is based on the primary model, but incorporates the aspects described in Section 2.3. Further details about the complete ILP model, the constraints, the variables and the parameters used in the above formulas are explained in depth in [1].

It is worth mentioning that, when the problem is formulated in the paper, a multiplication of variables appears. This would turn the problem into a non-linear one, which is harder and slower to solve. To avoid this, the author decomposes the constraint that took care of not exceeding the maximum CPUs of each physical machine into several constraints ("MaxCPU", "Capacity", and "QosAux1-4"). This decomposition also introduces the "quota" variable. Refer to [1] for more details of this decomposition; a generic sketch of this kind of linearization is given at the end of this section.

It is also interesting to observe that the author relaxed the "Unique" constraint in the final problem model. In this case, it is not mandatory to schedule all the jobs in order for the solution to be feasible; therefore, some jobs might not be scheduled in a scheduling round.
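As background, the standard way to linearize a product z = x * y between a binary variable x and a bounded integer variable y (0 <= y <= Y) is to replace it with a set of auxiliary linear constraints. The LaTeX block below shows this generic textbook pattern only to illustrate the idea behind the decomposition; it is not necessarily the exact set of constraints ("MaxCPU", "Capacity", "QosAux1-4") used in [1].

```latex
% Generic linearization of z = x \cdot y, with x \in \{0,1\} and 0 \le y \le Y.
% This is the standard textbook pattern, not the exact constraints of [1].
\begin{align}
  z &\le Y \, x          \\ % z is forced to 0 when x = 0
  z &\le y               \\ % z never exceeds y
  z &\ge y - Y\,(1 - x)  \\ % z equals y when x = 1
  z &\ge 0
\end{align}
```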
3. Metaheuristic Design

3.1. GRASP Overview

The metaheuristic chosen for this project is GRASP (Greedy Randomized Adaptive Search Procedure). This metaheuristic consists in building as many feasible solutions as specified by a maximum number of iterations (construction phase) and evaluating their benefit using the objective function. On each of these solutions, a local search (local search phase) is applied in order to find better solutions starting from a feasible one. From all the evaluated solutions, the one with maximum (when maximizing) or minimum (when minimizing) objective value is selected. This algorithm can be observed in Figure 5.

Figure 5: General GRASP algorithm

This metaheuristic introduces randomization in the construction phase. This phase starts with an empty Candidate List (CL) of possible placements for a given job (following the example of our problem). Then, using a greedy function which evaluates the individual benefit incurred by each particular placement, the CL is filled. From this list, a Restricted Candidate List (RCL) is built containing the placements whose benefit satisfies Equation 1:

benefit(c) >= benefit_max - Alpha * (benefit_max - benefit_min)    (Equation 1)

where benefit_max and benefit_min are the largest and smallest individual benefits in the CL. From the RCL, an element is selected randomly.

This randomness ensures that different solutions are built each time, although the "amount of randomness" depends on Alpha, since the RCL size depends on Alpha. If Alpha=0, the behaviour is purely deterministic, as the best placement is always selected (only one element in the RCL). On the other hand, Alpha=1 yields a completely random behaviour (all the combinations are present in the RCL), given that any combination, regardless of its benefit, can be selected.
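The following Java fragment sketches the RCL construction and random selection described above, using the Equation 1 threshold as the cut-off. Class and method names (Candidate, buildRcl, pickRandom) are illustrative and do not come from the project code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the RCL construction and random pick described above.
class Candidate {
    final int host;
    final int assignedCpu;
    final double benefit;
    Candidate(int host, int assignedCpu, double benefit) {
        this.host = host; this.assignedCpu = assignedCpu; this.benefit = benefit;
    }
}

class RclBuilder {
    static List<Candidate> buildRcl(List<Candidate> cl, double alpha) {
        double max = Double.NEGATIVE_INFINITY, min = Double.POSITIVE_INFINITY;
        for (Candidate c : cl) {                       // find benefit_max and benefit_min
            max = Math.max(max, c.benefit);
            min = Math.min(min, c.benefit);
        }
        double threshold = max - alpha * (max - min);  // Equation 1
        List<Candidate> rcl = new ArrayList<>();
        for (Candidate c : cl) {
            if (c.benefit >= threshold) rcl.add(c);
        }
        return rcl;
    }

    static Candidate pickRandom(List<Candidate> rcl, Random rng) {
        return rcl.get(rng.nextInt(rcl.size()));       // uniform random pick from the RCL
    }
}
```

With Alpha=0 the threshold equals the maximum benefit, so only the best placement survives (deterministic behaviour); with Alpha=1 the threshold equals the minimum benefit, so every candidate can be picked.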
3.2. GRASP Heuristic Implementation

In this project, the general GRASP design has been followed. Although the local search phase has not been implemented, a possible way to do it has been envisioned and is explained in the Future Work section.

The interesting part of the GRASP design is the Greedy Function. In our case, the algorithm evaluates the individual benefit of placing each one of the jobs on each of the hosts of the infrastructure. The individual benefit takes into account the revenue obtained by running a given job, the power consumption costs incurred by the scheduled job, the QoS penalties when the job is not given the maximum amount of CPU it requires, and the migration costs in case it is moved from its host in the initial schedule to another one in the new schedule.

As an initial approach, all the jobs are evaluated in order (first job number 1, then job number 2 and so on). Given that each job has minimum and maximum CPU requirements, each placement on a particular host is evaluated with all the possible CPU assignments. Note that each of these CPU assignments provides a different benefit, given that less power is used but the incurred SLA penalty is also different. In case a particular placement is not possible because the destination host does not have enough free CPUs, an individual benefit of negative infinity is assigned. Such a combination is not included in the Candidate List, because it would greatly distort the threshold which is later used to build the Restricted Candidate List.

Figure 6: Jobs benefit evaluation

When all the possible placements of a particular job have been evaluated, the CL is reduced into the RCL as explained in Section 3.1. Then, one element is picked randomly, and it becomes the chosen combination. As in the general GRASP design, the randomness is introduced when selecting the adopted combination from the RCL. This combination specifies the destination host for the job, as well as the assigned CPU (which lies in the range between its minimum and maximum required CPU).

When all the jobs have been evaluated and the scheduling matrix has been built, it is returned to the general GRASP algorithm along with the global benefit of this particular placement. If the obtained placement achieves a better benefit than previous placements, it is stored along with its benefit. A sketch of this construction phase is shown below.
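The sketch below puts the pieces together for one construction phase: for each job, all (host, CPU) combinations are evaluated, the RCL is built with the Equation 1 threshold, and one combination is picked at random. It is a simplified illustration, not the project code: the revenue, power and migration terms are placeholder constants, and all names are ours.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Simplified sketch of one GRASP construction phase for the job-placement problem.
// The benefit terms are placeholders; the real heuristic uses the cost model of [1].
public class GraspConstructionSketch {

    static int[] construct(int[] hostCapacity, int[] cpuMin, int[] cpuMax,
                           int[] oldHost, double alpha, Random rng) {
        int hosts = hostCapacity.length, jobs = cpuMin.length;
        int[] freeCpu = hostCapacity.clone();
        int[] assignedHost = new int[jobs];          // -1 means "not scheduled"
        java.util.Arrays.fill(assignedHost, -1);

        for (int j = 0; j < jobs; j++) {
            // Candidate List: every feasible (host, cpu) combination with its benefit.
            List<double[]> cl = new ArrayList<>();   // each entry: {host, cpu, benefit}
            for (int h = 0; h < hosts; h++) {
                for (int cpu = cpuMin[j]; cpu <= cpuMax[j]; cpu++) {
                    if (cpu > freeCpu[h]) continue;  // infeasible placement: skipped
                    double health = cpuMax[j] == cpuMin[j] ? 1.0
                            : (double) (cpu - cpuMin[j]) / (cpuMax[j] - cpuMin[j]);
                    double revenue = 10.0 * health;              // placeholder revenue/SLA term
                    double powerCost = 1.0 * cpu;                // placeholder power term
                    double migrationCost = (oldHost[j] >= 0 && oldHost[j] != h) ? 2.0 : 0.0;
                    cl.add(new double[]{h, cpu, revenue - powerCost - migrationCost});
                }
            }
            if (cl.isEmpty()) continue;              // relaxed "Unique": job stays unscheduled

            // Restricted Candidate List, using the Equation 1 threshold.
            double max = Double.NEGATIVE_INFINITY, min = Double.POSITIVE_INFINITY;
            for (double[] c : cl) { max = Math.max(max, c[2]); min = Math.min(min, c[2]); }
            double threshold = max - alpha * (max - min);
            List<double[]> rcl = new ArrayList<>();
            for (double[] c : cl) if (c[2] >= threshold) rcl.add(c);

            // Random pick from the RCL and commitment of the placement.
            double[] pick = rcl.get(rng.nextInt(rcl.size()));
            int host = (int) pick[0], cpu = (int) pick[1];
            assignedHost[j] = host;
            freeCpu[host] -= cpu;
        }
        return assignedHost;
    }

    public static void main(String[] args) {
        int[] hostCapacity = {4, 8};
        int[] cpuMin = {1, 2, 3};
        int[] cpuMax = {2, 3, 4};
        int[] oldHost = {-1, -1, -1};                // no previous schedule
        System.out.println(java.util.Arrays.toString(
                construct(hostCapacity, cpuMin, cpuMax, oldHost, 0.2, new Random(42))));
    }
}
```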
3.3. Random Job Choosing Improvement

The first approach considered in this project took each of the jobs to be placed from a list in which the jobs were always kept in the same order. However, the last jobs in the list have fewer possible hosts to be placed on and fewer possible CPU assignments, as some of the hosts may already be full with other jobs. Therefore, the first jobs in the queue have an advantage over the last ones. We refer to this approach as the NR GRASP heuristic in subsequent sections.

Taking this into account, a possible way to fix this bias is to pick the jobs to be placed from the job list in random order. Then, even if a job was at a disadvantage in one iteration of the GRASP heuristic because it was evaluated last, it might be evaluated first in another iteration. We refer to this second approach as the R GRASP heuristic throughout this document. A minimal sketch of this change is shown below.
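A minimal sketch of the R variant's job-ordering step, assuming jobs are identified by their indices; the NR variant simply keeps the natural order. The method name is ours.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// R GRASP: shuffle the job evaluation order at the start of every iteration.
// NR GRASP keeps the list in its natural order.
public class JobOrder {
    static List<Integer> jobOrder(int jobs, boolean randomize, Random rng) {
        List<Integer> order = new ArrayList<>();
        for (int j = 0; j < jobs; j++) order.add(j);
        if (randomize) Collections.shuffle(order, rng);   // R variant only
        return order;
    }

    public static void main(String[] args) {
        System.out.println(jobOrder(5, false, new Random())); // NR: [0, 1, 2, 3, 4]
        System.out.println(jobOrder(5, true, new Random()));  // R: a random permutation
    }
}
```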
4. Dataset Generation

4.1. Node Capacity Array Generation

In order to generate large datasets and run large simulations, an automatic way to generate the parameters has been implemented. One of these parameters is the capacity of each host in terms of number of CPUs. Although the model exposed in the paper only considered nodes with up to 4 CPUs, we have extended the power model to cope with nodes of up to 8 CPUs in order to obtain richer results.

The capacity of each node is expressed as an array where position "i" represents the CPU capacity of host "i". Each host can have a capacity of 1, 2, 4 or 8 CPUs, assigned randomly.

4.2. Min/Max Job CPU Requirement Array Generation

As in the CPU capacity array generation, the minimum CPU requirement of each job is assigned a random value from 1 to 10, so each position of the "consMin" array represents the minimum CPU requirement of job "j". Even though 9 and 10 CPUs are not available in any node, it was considered that having jobs which cannot be run in the infrastructure and have to be refused leads to a more realistic characterization.

Once the minimum CPU requirement array has been computed, the maximum CPU requirement array is generated. Each job's maximum CPU requirement is calculated as its minimum CPU requirement plus an extra 1 or 2 CPUs, chosen randomly. The generator is sketched below.
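A sketch of this generator, assuming uniform random choices as described above; array and method names are ours, not the project code.

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of the dataset generator described in Section 4, assuming uniform random choices.
public class DatasetGenerator {
    static final int[] NODE_SIZES = {1, 2, 4, 8};      // allowed CPU capacities per host

    static int[] nodeCapacities(int hosts, Random rng) {
        int[] capacity = new int[hosts];
        for (int h = 0; h < hosts; h++) capacity[h] = NODE_SIZES[rng.nextInt(NODE_SIZES.length)];
        return capacity;
    }

    static int[] consMin(int jobs, Random rng) {
        int[] min = new int[jobs];
        for (int j = 0; j < jobs; j++) min[j] = 1 + rng.nextInt(10);   // uniform in 1..10
        return min;
    }

    static int[] consMax(int[] consMin, Random rng) {
        int[] max = new int[consMin.length];
        for (int j = 0; j < consMin.length; j++) max[j] = consMin[j] + 1 + rng.nextInt(2); // +1 or +2
        return max;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        int[] capacity = nodeCapacities(5, rng);
        int[] min = consMin(10, rng);
        int[] max = consMax(min, rng);
        System.out.println(Arrays.toString(capacity));
        System.out.println(Arrays.toString(min));
        System.out.println(Arrays.toString(max));
    }
}
```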
5. Experiment Results

5.1. Experiment Platform and Configuration

The same computer was used to execute both the CPLEX model and the Java heuristic code: a Dell Latitude E6410 with an Intel i7 M640 CPU @ 2.8 GHz, 8 GB of RAM and a 64-bit Windows 7 operating system. We used IBM ILOG CPLEX Optimization Studio 12.4, and our GRASP Java heuristic code was executed on JRE 1.6.0_24-b07 from Oracle.

We simulated the following problem sizes: 5H10J (5 hosts, 10 jobs), 15H30J, 20H40J, 30H60J, 40H80J and 100H200J. For each problem size, we experimented with Alpha values ranging from 0 to 1 in steps of 0.1 (0, 0.1, 0.2, ..., 1).

For the heuristic executions, we used the following iteration configurations: 10, 100, 1000, 10000 and 100000 iterations. We measured different performance aspects of both the R and NR heuristic variants. The obtained results are discussed in the following subsections.

5.2. Execution Time

First of all, we measured and compared the execution time of CPLEX for different problem sizes. As expected, CPLEX takes a very long time to complete when the problem size is large. In our case, CPLEX was able to complete a 20H40J instance in 228 seconds. When the problem size was increased to 30H60J, CPLEX did not finish its execution, even though we let it run for 66 hours. We used the available CPLEX results to measure the quality of the GRASP heuristic solutions. Figure 7 shows the execution time of our CPLEX solution.

Figure 7: Execution time of the CPLEX solution
Next, we measured the execution time of the GRASP heuristic. GRASP is able to run for larger problem sizes, and its execution time strongly depends on the number of iterations and on the configured problem size. We measured the execution time for all of the aforementioned problem sizes and iteration configurations. Figure 8 shows the execution time for the largest problem size, 100H200J.

Figure 8: Execution time for 100H200J (R)

The x-axis is displayed in logarithmic scale; in linear scale, the heuristic execution time grows linearly with the number of iterations. It is interesting to observe that the GRASP heuristic performs much better than CPLEX in terms of execution time. For example, the execution time of the GRASP heuristic with 100H200J and 100000 iterations is 80 seconds shorter than that of CPLEX with 20H40J. However, the superiority of the GRASP heuristic cannot be evaluated in terms of execution time alone; the quality of the obtained solution also has to be taken into account. Therefore, we also need to measure the performance of our GRASP heuristic in terms of solution quality. This aspect is discussed in Section 5.5.

5.3. GRASP Heuristic Alpha Value

In this section, we look for the optimal Alpha value for the GRASP heuristic. For every problem size and every iteration configuration, we obtained a diagram depicting the relation between Alpha, benefit, number of iterations and type of GRASP heuristic. NR refers to the non-random job selection GRASP (NR GRASP heuristic), and R refers to the random job selection GRASP (R GRASP heuristic).
Figure 9: NR GRASP Heuristic results

Figure 9 shows the results for the NR GRASP heuristic for four problem sizes. Based on these results, the maximum benefit is achieved with low Alpha values (0.1, 0.2, 0.3). We can also see that the higher the number of iterations, the higher the obtained benefit. As explained in Section 3.1, a value of Alpha=0 leads to a completely deterministic behaviour, given that only the combination with the best benefit is kept when building the Restricted Candidate List, and it is therefore always the one selected in the end. Note that no randomness is introduced in the job choosing either, as jobs are always selected in order. This behaviour can be clearly observed in the figure: for Alpha=0, regardless of the number of iterations, the obtained benefit is exactly the same.
Figure 10: R GRASP Heuristic results

Figure 10 shows the results for the R GRASP heuristic with the same four problem sizes used for the NR GRASP heuristic. They show the same trend as the NR GRASP heuristic, whereby low Alpha values (0, 0.1, 0.2, 0.3) result in a higher benefit than other Alpha values. The number of iterations also determines the quality of the benefit: the more iterations, the higher the obtained benefit. It is also interesting to note that for large problem sizes (30H60J, 40H80J, 100H200J), the highest value is obtained when Alpha is 0. This means that the randomization in the order in which jobs are picked for evaluation contributes more to producing better results than the randomization in the selection of candidates from the Restricted Candidate List (whose size depends on the value of Alpha).

Both the NR and R GRASP heuristics therefore obtain better benefits when the Alpha value is small, typically between 0 and 0.3, both included.
5.4. GRASP Heuristic Solution Quality Rate

In this section, we analyze how fast our GRASP heuristic reaches a significant percentage of the best solution it provides. We define "significant percentage" as more than 90% of the best solution that can be found using GRASP (which sometimes is the optimal solution).

To perform this analysis, we used the largest problem size configuration (100H200J). For the NR GRASP heuristic, we used an Alpha value of 0.1 and 100000 iterations, while for the R GRASP heuristic we used an Alpha value of 0 and 100000 iterations.

Figure 11: Normalized benefit over time for NR GRASP Heuristic

Figure 11 shows the normalized benefit over time for the NR GRASP heuristic. The y-axis is the normalized benefit, i.e., the percentage of the maximum benefit that can be obtained in this configuration. Note that since we were not able to use CPLEX to obtain the best benefit for 100H200J, we used the maximum benefit obtained by the NR GRASP heuristic to calculate the normalized benefit. The x-axis denotes the time in milliseconds. From the figure, we observe that within 12.377 seconds our NR GRASP heuristic is able to reach around 99.95% of its maximum benefit.
Figure 12: Normalized benefit over time for R GRASP Heuristic

Figure 12 shows the results for the R GRASP heuristic. As in the NR GRASP heuristic results, we used the maximum benefit obtained by the R GRASP heuristic to calculate the normalized benefit. The R GRASP heuristic is able to reach 93% of the maximum benefit within 0.617 seconds and 97.3% of the maximum benefit within 8.813 seconds.

Both the NR and R GRASP heuristics thus perform very well in terms of the speed with which they reach more than 90% of their maximum benefit.

5.5. Maximum Benefit Comparison - CPLEX and GRASP Heuristic

CPLEX is guaranteed to obtain the optimal result for our problem statements. However, its drawback is execution time: CPLEX executions are very long and not affordable for large problem sizes. On the other hand, heuristics in general have smaller execution times, but their results should be compared against those of CPLEX in order to evaluate their quality. Figure 13 shows the comparison between CPLEX and GRASP heuristic results for multiple problem sizes. It also presents two different instances of the R GRASP heuristic, one with 10000 iterations and the other with 100000 iterations.
Figure 13: CPLEX and R/NR GRASP Heuristic results

For the smallest problem size (5H10J), all of them reach the same maximum benefit. However, at 10H20J we already observe that NR GRASP with 100000 iterations is not able to reach the optimal solution obtained by CPLEX, whereas R GRASP with both 10000 and 100000 iterations manages to reproduce CPLEX's optimal benefit values. The same trend is observed for the 15H30J problem size, but things start to get interesting at 20H40J, where R GRASP with 10000 iterations is not able to reach the maximum value produced by CPLEX. Here we can observe that the number of iterations also affects the maximum obtained benefit. We can also claim that our R GRASP heuristic produces good results: with 100000 iterations, its maximum obtained benefit equals CPLEX's optimal benefit for all the simulations where CPLEX data was available.

For bigger problem sizes (30H60J, 40H80J, 100H200J) we were not able to obtain CPLEX results due to its extremely long execution times. We observe that the R GRASP heuristic with 100000 iterations produces the highest benefit among the GRASP heuristic configurations. The execution time of the R GRASP heuristic with 100000 iterations is around 5 minutes, which is still reasonable. We could not confirm whether the resulting value is the optimal benefit, but based on the good comparison results with CPLEX for smaller problem sizes, we are confident that the resulting value is also close to optimal. In this case, we can improve the quality of the solution by increasing the number of iterations, if we have enough time and resources to do so.
6. Conclusions

In this project, we have implemented an ILP problem for data-centre job scheduling and management, using IBM ILOG CPLEX to solve the ILP model. We have also implemented two variants of a GRASP heuristic to solve the scheduling problem: Non-Random Job Selection GRASP (NR GRASP heuristic) and Random Job Selection GRASP (R GRASP heuristic). During the implementation of the GRASP heuristic methods, we found that complex ILP restrictions or constraints can be translated into relatively simple heuristic Java code.

The R GRASP heuristic performs better than the NR GRASP heuristic. For the four small problem sizes for which CPLEX is able to produce the maximum benefit, we found that the R GRASP heuristic with 100000 iterations also manages to obtain CPLEX's optimal benefit. However, the NR GRASP heuristic is only able to produce around 83.5% of the maximum benefit for the 15H30J case and 88.11% of the maximum benefit for the 10H20J case.

In both GRASP implementations, lower Alpha values produce better results. Interestingly, for the R GRASP heuristic the best result is obtained when Alpha is 0. We also observe that more iterations yield better results, at the cost of a longer execution time.

Regarding the scalability of the solution, our experiments show that CPLEX does not scale well, as its execution time grows exponentially. When we increased the problem size, CPLEX was not able to finish its optimization process even though we let it run for 66 hours. On the other hand, the GRASP heuristic scales well: we were able to use it to solve larger problem sizes with acceptable execution times and solution qualities. For the biggest problem size that CPLEX can run, the R GRASP heuristic is still able to find the same benefit as CPLEX, which is the optimal one.
7. Future Work

7.1. Implementation of the Local Search Phase of GRASP

We could further improve our implementation of the GRASP heuristic by including the local search phase. One way to implement it is to compare the benefit of the solution obtained by the Greedy Function against modifications of this same solution in which some jobs are not migrated between nodes (see Figure 14). After the Greedy Function obtains a New Schedule, we create a local search candidate (a neighbour of the obtained solution) where, for example, Job 1 and Job 5 are not migrated with respect to the Old Schedule. We can repeat this process by producing several local search candidates (neighbours of the original solution) and comparing their benefits with that of the original New Schedule obtained by the Greedy Function. A sketch of this neighbourhood generation is shown below.

Figure 14: Possible local search phase approach
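A possible sketch of this neighbourhood generation, assuming a schedule is represented as an array mapping each job to its host (-1 meaning unscheduled); the jobs whose migration is undone are chosen at random here, and a full implementation would also re-check host capacities and recompute the benefit of each candidate. All names are ours.

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of a possible local-search neighbour: take the New Schedule produced by
// the construction phase and undo the migration of a few randomly chosen jobs,
// putting them back on their host in the Old Schedule.
public class LocalSearchSketch {

    static int[] undoSomeMigrations(int[] oldSchedule, int[] newSchedule,
                                    int jobsToRevert, Random rng) {
        int[] neighbour = newSchedule.clone();
        for (int k = 0; k < jobsToRevert; k++) {
            int j = rng.nextInt(neighbour.length);
            // Only revert jobs that actually migrated between the two schedules.
            if (oldSchedule[j] >= 0 && oldSchedule[j] != neighbour[j]) {
                neighbour[j] = oldSchedule[j];
            }
        }
        return neighbour;
    }

    public static void main(String[] args) {
        int[] oldSchedule = {0, 1, 1, 2, 0};     // host of each job before scheduling
        int[] newSchedule = {1, 1, 2, 2, 1};     // host of each job after the greedy phase
        int[] candidate = undoSomeMigrations(oldSchedule, newSchedule, 2, new Random(7));
        System.out.println(Arrays.toString(candidate));
    }
}
```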
7.2. Simulations with Multiple Problem Instances

When evaluating the performance and quality of the solutions obtained by the GRASP heuristic, only one problem instance per problem size was executed. This means that only one set of parameters (CPUs per host, minimum and maximum CPU requirements per job) was generated to perform the simulations for each problem size. In order to have a more realistic view of the GRASP performance, multiple sets of parameters could be generated for each problem size, and the performance of the heuristic could be evaluated on different inputs of the same size.
8. References

[1] Josep Ll. Berral, Ricard Gavaldà, Jordi Torres. "An Integer Linear Programming Representation for DataCenter Power-Aware Management." Research Report UPC-LSI-10-21-R, November 2010.
