Max-Min Fair Scheduling Algorithm in Grid Scheduling with Load Balancing

Abstract: This paper shows the importance of fair scheduling in a grid environment, in which all tasks get an equal share of time for their execution so that no task is driven to starvation. Load balancing of the available resources in the computational grid is another important factor; this paper considers a uniform load to be given to the resources, and to achieve this, load balancing is applied after scheduling the jobs. The paper also considers the execution cost and bandwidth cost of the algorithms used here, because in a grid environment the resources are geographically distributed. With the implementation of this approach, the proposed algorithm reaches an optimal solution and minimizes the makespan as well as the execution cost and bandwidth cost.



International Journal of Research in Computer Science, eISSN 2249-8265, Volume 2 Issue 3 (2012), pp. 41-49. © White Globe Publications. www.ijorcs.org

R. Gogulan¹, A. Kavitha², U. Karthick Kumar³
¹ PhD Research Scholar, School of Computer Sciences, Bharath University, Selaiyur, Chennai. Email: r.gogul@lycos.com
² Research Scholar, School of Computer Sciences, Bharath University, Selaiyur, Chennai, Tamil Nadu. Email: gogulan_kavitha@yahoo.com
³ Assistant Professor, Department of MCA & Software Systems, VLB Janakiammal College of Arts and Science, Coimbatore, Tamil Nadu. Email: u.karthickkumar@gmail.com

Keywords: Grid Scheduling, QoS, Load Balancing, Fair Scheduling, Execution Cost, Communication Cost.

I. INTRODUCTION

Grid computing has been increasingly considered a promising next-generation computing platform that supports wide-area parallel and distributed computing since its advent in the mid-1990s [1]. It couples a wide variety of geographically distributed computational resources such as PCs, workstations, and clusters, storage systems, data sources, databases, computational kernels, and special-purpose scientific instruments, and presents them as a unified, integrated resource [2]. A complete grid definition built from all its main characteristics and uses may be considered important for several reasons [6]. Grids address issues such as security, uniform access, dynamic discovery, dynamic aggregation, and quality of service [7].

In computational grids, heterogeneous resources with different systems in different places are dynamically available and geographically distributed. Users' resource requirements in the grid vary depending on their goals, time constraints, priorities, and budgets. Allocating tasks to the appropriate resources so that performance requirements are satisfied at acceptable cost is an extraordinarily complicated problem. Allocating the resources to the proper users so that resource utilization and the profits generated are maximized is also extremely complex. From a computational perspective, it is impractical to build a centralized resource-allocation mechanism in such a large-scale distributed environment.

In a grid scheduler, mapping grid resources to independent jobs in an optimized manner is hard. A combination of uninformed and informed search can provide a good, near-optimal mapping of resources to jobs, yielding minimal turnaround time with minimal cost, and minimizing the average waiting time of jobs in the queue. A heuristic algorithm is one that ignores whether its solution can be proven correct but usually produces a good solution; heuristics are typically used when there is no way to find an optimal solution, or when it is desirable to give up optimality for an improvement in run time. A grid scheduler, often called a resource broker, acts as an interface between the user and the distributed resources, and it hides the complexities of the computational grid from the user. The scheduler does not have full control over the grid, and it cannot assume a global view of the grid. Similarly, for resource suppliers it is hard to evaluate the profit of putting a resource into a grid without such a measurement; for both users and suppliers, joining a grid incurs more security and maintenance cost than executing their own tasks on only their own computational resources.

The remainder of this paper is organized as follows. Section II explains the related work. Section III presents notation and the problem formulation. Section IV explains the existing method, Section V details the proposed method, Section VI describes the comparison tables and charts, and Section VII presents conclusions and future work.
II. RELATED WORK

In fair-share scheduling [4], the Simple Fair Task Order (SFTO), Adjusted Fair Task Order (AFTO), and Max-Min Fair Share (MMFS) scheduling algorithms are developed and tested against existing scheduling algorithms. K. Somasundaram and S. Radhakrishnan compare the Swift Scheduler with First Come First Serve, Shortest Job First, and Simple Fair Task Order on the basis of processing-time analysis, cost analysis, and resource utilization [5]. Thamarai Selvi describes the advantages of standard algorithms such as shortest processing time, longest processing time, and earliest deadline first. Pal Nilsson and Michal Pioro have discussed max-min fair allocation for a routing problem in a communication network [8]. Hans Jorgen Bang, Torbjorn Ekman, and David Gesbert have proposed proportional fair scheduling, which addresses the problem of multiuser diversity scheduling together with channel prediction [9]. Daphne Lopez and S. V. Kasmir Raja have described and compared a fair scheduling algorithm with First Come First Serve and Round Robin schemes [10]. Load balancing is one of the big issues in grid computing [11], [12]. B. Yagoubi describes a framework consisting of a distributed dynamic load-balancing algorithm that aims to minimize the average response time of applications submitted to grid computing. Grosu and Chronopoulos [13] and Penmatsa and Chronopoulos [14] considered static load balancing in a system with servers and computers, where servers balance load among all computers in round-robin fashion. Qin Zheng, Chen-Khong Tham, and Bharadwaj Veeravalli address the problem of determining which group an arriving job should be allocated to and how its load can be distributed among the computers in the group to optimize performance, and they propose algorithms that guarantee finding a load distribution over the computers in a group that leads to the minimum response time or computational cost [12].

III. NOTATION AND PROBLEM FORMULATION

The overall algorithm proceeds through the following steps:

• Initialization of algorithm: the number of tasks and the number of resources are initialized at the beginning of the algorithm.
• Calculate total processor capacity and demand rate: the demand rate is calculated from the workload and the difference between the deadline and the grid access delay.
• Evaluate fair rates: using the max-min fair share approach, the fair rates are calculated from the number of processors and the processor capacities.
• Non-adjusted and adjusted FCT: from the fair rates, the non-adjusted and adjusted fair completion times are calculated, as used by SFTO and AFTO.
• SFTO and AFTO rules: the non-adjusted fair completion time is used in SFTO, and the adjusted fair completion time in AFTO, to order the tasks in increasing order.
• MMFS rule: MMFS is applied to compensate for overflow and underflow processors.
• LB rule: after the MMFS rule, the LB rule is applied only to overflow processors, to reduce their overall completion time.

(The accompanying figure is a flowchart of these steps: after the LB rule, Step is incremented and the loop repeats while Step ≤ N, after which the best solution is returned; diagram omitted.)

Let N be the number of tasks that have to be scheduled, and let the workload wi of task Ti, i = 1, 2, …, N, be the duration of the task when executed on a processor of unit computation capacity. Let M be the number of processors, and let the computation capacity of processor j be cj units of capacity. The total computation capacity C of the grid is defined [4] as

    C = Σj=1..M cj    (1)
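As a concrete illustration of the steps listed above, here is a toy end-to-end sketch in Python. The simplifications are ours, not the paper's: a single uniform access delay per task, a one-pass capped equal share instead of the full max-min iteration of Section IV, and a greedy earliest-finish processor assignment in place of the MMFS and LB rules.

```python
def schedule(workloads, deadlines, capacities, delay=0.0):
    """Toy pipeline: demand rates -> naive fair rates -> SFTO-style order
    -> greedy assignment; returns the resulting makespan."""
    N, M = len(workloads), len(capacities)
    C = sum(capacities)                          # eq. (1): total grid capacity
    # demanded rates, eq. (4): Xi = wi / (Di - delta_i), with a uniform delay
    X = [w / (d - delay) for w, d in zip(workloads, deadlines)]
    r = [min(x, C / N) for x in X]               # capped equal share (naive stand-in)
    # SFTO-style queue order, eq. (11): ti = delta_i + wi / ri
    order = sorted(range(N), key=lambda i: delay + workloads[i] / r[i])
    free_at = [0.0] * M                          # next free time of each processor
    for i in order:                              # greedy earliest-finish assignment
        j = min(range(M), key=lambda j: free_at[j] + workloads[i] / capacities[j])
        free_at[j] += workloads[i] / capacities[j]
    return max(free_at)                          # makespan

print(schedule([4.0, 2.0, 6.0], [10.0, 5.0, 12.0], [1.0, 2.0]))  # 6.0
```

With two processors of capacities 1 and 2, the three tasks are ordered by their fair completion times and packed greedily, giving a makespan of 6.0 time units under these assumptions.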
Let dij be the communication delay between user i and processor j. More precisely, dij is the time that elapses between the time a decision is made by the resource manager to assign task Ti to processor j and the arrival at processor j of all files necessary to run task Ti. Each task Ti is characterized by a deadline Di that defines the time by which it is desirable for the task to complete execution. Let γj be the estimated completion time of the tasks that are already running on or already scheduled on processor j. γj is equal to zero when no task has been allocated to processor j at the time a task assignment is about to be made; otherwise, γj corresponds to the remaining time until the completion of the tasks already allocated to processor j. We define the earliest starting time of task Ti on processor j [4] as

    δij = max{dij, γj}    (2)

δij is the earliest time at which it is feasible for task Ti to start execution on processor j. We define the average of the earliest starting times of task Ti over all M available processors [4] as

    δi = (Σj=1..M δij cj) / (Σj=1..M cj)    (3)

where δi is the grid access delay for task Ti. In the fair scheduling algorithm, the demanded computation rate Xi of a task Ti plays an important role and is defined [4] as

    Xi = wi / (Di − δi)    (4)

Here, Xi can be viewed as the computation capacity that the grid should allocate to task Ti for it to finish just before its requested deadline Di, if the allocated computation capacity could be accessed at the mean access delay δi.

IV. EXISTING METHOD

The classical scheduling algorithms do not adequately address congestion, and they do not take fairness considerations into account. For example, with the ECT rule, tasks that have long execution times have a higher probability of missing their deadline even if they have a late deadline. Also, with the EDF rule, a task with a late deadline is given low priority until its deadline approaches, giving no incentive to the users. To overcome these difficulties, this section provides an alternative approach, in which the tasks requesting service are queued for scheduling according to their fair completion times. The fair completion time of a task is found by first estimating its fair task rate using a max-min fair sharing algorithm.

A. Estimation of the Task Fair Rates

In the max-min fair sharing scheme, small demanded computation rates Xi get all the computation power they require, whereas larger rates share the leftovers. The max-min fair sharing algorithm is described as follows. The demanded computation rates Xi, i = 1, 2, …, N, of the tasks are sorted in ascending order, say X1 < X2 < … < XN. Initially, we assign capacity C/N to the task T1 with the smallest demand X1, where C is the total grid computation capacity. If the fair share C/N is more than the demanded rate X1 of task T1, the unused excess capacity C/N − X1 is equally shared among the remaining N − 1 tasks, so that each of them gets an additional capacity (C/N − X1) / (N − 1). This may be larger than task T2 needs, in which case the excess capacity is again equally shared among the remaining N − 2 tasks, and this process continues until there is no computation capacity left to distribute or until all tasks have been assigned capacity equal to their demanded computation rates. When the process terminates, each task has been assigned no more capacity than it needs and, if its demand was not satisfied, no less capacity than any other task with a greater demand. We denote by ri(n) the non-adjusted fair computation rate of task Ti at the nth iteration of the algorithm. Then ri(n) is given [4] by

    ri(n) = Xi,             if Xi < Σk=0..n O(k)
          = Σk=0..n O(k),   if Xi ≥ Σk=0..n O(k),    n ≥ 0    (5)

where

    O(n) = (C − Σi=1..N ri(n − 1)) / card{N(n)},    n ≥ 1    (6)

with

    O(0) = C / N    (7)

and where N(n) is the set of tasks whose assigned fair rates are smaller than their demanded computation rates at the beginning of the nth iteration, that is,

    N(n) = {Ti : Xi > ri(n − 1)}, with N(0) containing all N tasks    (8)
whereas the function card(·) returns the cardinality of a set. The process terminates at the first iteration n0 at which either O(n0) = 0 or card{N(n0)} = 0. The former case indicates congestion, whereas the latter indicates that the total grid computation capacity can satisfy all the demanded task rates [4], that is,

    Σi=1..N Xi < C    (9)

The non-adjusted fair computation rate ri of task Ti is obtained at the end of the process as

    ri = ri(n0)    (10)

B. Fair Task Queue Order Estimation

A scheduling algorithm makes two important decisions. First, it has to choose the order in which the tasks are considered for assignment to a processor (the queue-ordering problem). Second, for the task located at the front of the queue, the scheduler has to decide the processor to which the task is assigned (the processor-assignment problem). To solve the queue-ordering problem in fair scheduling, SFTO and AFTO are discussed.

C. Simple Fair Task Order

In SFTO, the tasks are ordered in the queue in increasing order of their non-adjusted fair completion times ti. The non-adjusted fair completion time ti of task Ti is defined [4] as

    ti = δi + wi / ri    (11)

where ti can be thought of as the time at which the task would be completed if it could obtain a constant computation rate equal to its fair computation rate ri, starting at time δi.

D. Adjusted Fair Task Order

In the AFTO scheme, the tasks are ordered in the queue in increasing order of their adjusted fair completion times tia. The AFTO scheme results in schedules that are fairer than those produced by the SFTO rule; however, it is more difficult to implement and more computationally demanding than the SFTO scheme, since the adjusted fair completion times tia are more difficult to obtain than the non-adjusted fair completion times ti.

i. Adjusted Fair Completion Times Estimation

To compute the adjusted fair completion times tia, the fair rate of the active tasks at each time instant must be estimated. This can be done in two ways. In the first approach, each time an unused processor capacity becomes available, it is equally divided among all active tasks. In the second approach, the rates of all active tasks are recalculated using the max-min fair sharing algorithm, based on their respective demanded rates.

The estimated fair rate of each task is a function of time, denoted by ri(t). Here, we introduce a variable called the round number, which defines the number of rounds of service that have been completed at a given time; a non-integer round number represents a partial round of service. The round number depends on the number and the rates of the active tasks at a given time. In particular, the round number increases at a rate inversely proportional to the sum of the rates of all active tasks, equal to 1 / Σi ri(t). Thus, the rate at which the round number increases changes, and has to be recalculated, each time a new arrival or task completion takes place. Based on the round number, we define the finish number Fi(t) of task Ti at time t as in [4]:

    Fi(t) = R(τ) + wi / ri(t)    (12)

where τ is the last time a change in the number of active tasks occurred, and R(τ) is the round number at time τ. Fi(t) is recalculated each time new arrivals or task completions take place. Note that Fi(t) is not the time at which task Ti will complete its execution; it is only a service tag used to determine the order in which the tasks are assigned to processors.

The adjusted fair completion time tia can be computed as the time at which the round number reaches the estimated finish number of the respective task. Thus, in [4],

    tia : R(tia) = Fi(tia)    (13)

where the task adjusted fair completion times determine the order in which the tasks are considered for assignment to processors in the AFTO scheme: the task with the earliest adjusted fair completion time is assigned first, followed by the second earliest, and so on.

E. Max-Min Fair Scheduling

In MMFS, the tasks are non-preemptable, so the sum of the rates of the tasks assigned for execution to a processor may be smaller than the processor capacity, and some processors may not be fully utilized. A processor with unused capacity will be called an underflow processor. In an optimal solution, tasks assigned to underflow processors have schedulable rates that are equal to their respective fair rates, ris = ri. The overflow Oj of processor j is defined [4] as

    Oj = max{0, Σi∈Pj ri − cj}    (14)
The underflow Uk of processor k is defined as

    Uk = max{0, ck − Σi∈Pk ri}    (15)

Processors for which Oj > 0 will be referred to as overflow processors, whereas underflow processors are those for which Uk > 0. In an optimal solution, we have

    Σi∈Pj ris = cj for all j for which Oj > 0    (16)

i. Processor Assignment

This algorithm combines processors with capacity overflow and processors with capacity underflow to obtain a better exploitation of the overall processor capacity. More specifically, given an assignment of tasks to processors, we consider the rearrangement in which a task of rate rl assigned to an overflow processor is exchanged with a task of rate rm assigned to an underflow processor. After the task rearrangement, the overflow (underflow) capacity of the processors is updated as follows [4]:

    Rj = Oj − ε
    Rk = Uk − ε    (17)

where

    ε = rm − rl    (18)

expresses the task-rate difference between the two selected tasks, and Rj and Rk are the updated processor residuals. If Rj > 0, processor j remains in the overflow state after the task rearrangement, whereas if Rj < 0, processor j turns to the underflow state. A reduction is accomplished only if the task-rate difference ε satisfies [4]

    O′j + O′k < Oj + Uk    (19)

where O′j = max(0, Rj) and O′k = max(0, Rk); this satisfies the processor requirements.

V. PROPOSED METHOD

A. Load Balancing

The existing method achieves good fair completion times, but the load is not balanced: sometimes one processor's task allocation is excessive compared to the others', so it may take more time to complete the whole job. To address this difficulty, we propose a new algorithm, called the Load Balance Algorithm, to give a uniform load to the resources. The overflows On and Op of processors n and p are defined as

    On = max{0, Σi∈Pn ri − cn}    (20)
    Op = max{0, Σi∈Pp ri − cp}    (21)

where On > 0 and Op > 0, so that n and p are overflow processors. If On > Op, processor n takes more time to complete its jobs; if Op > On, processor p takes more time. Either way, the time to complete the full job increases. To overcome this, the Load Balance Algorithm rearranges the fair rates of the affected processors, which reduces the overall completion time. The proposed algorithm combines the processor with the larger overflow with the processor with the smaller overflow, pairwise, to obtain a better exploitation of the overall processor capacity. More specifically, given an assignment of tasks to processors, if On > Op we consider the rearrangement in which a task of rate rx assigned to overflow processor n is exchanged with a task of rate ry assigned to overflow processor p. After the task rearrangement, the overflow capacities of the processors are updated as follows:

    Rn = On − ε
    Rp = Op − ε    (22)

where ε = rx − ry expresses the task-rate difference between the two selected tasks, and Rn and Rp are the updated processor residuals. If Rn > Rp, processor n still has the larger completion time, so the procedure continues from step (1) until Rn is approximately equal to Rp.

B. Execution Cost

We also implement an execution cost for every algorithm used here. The execution cost Cexe(Pj) of the jth processor is defined as

    Cexe(Pj) = P(tia)j × costj    (23)

where P(tia)j is the fair completion time of processor j.

C. Communication Cost

We also implement a communication cost, defined as

    Cb(Pj) = Cexe(Pj) + F(Pj)    (24)

where Cexe(Pj) is the execution cost of processor j and F(Pj) is the fitness of processor j.

VI. RESULTS

This paper proposes load balancing on top of MMFS to obtain a better load balance. A cost rate in the range 5-10 units is randomly chosen and assigned according to the speed of each processor. Processor speeds in the range 0-1 MIPS are randomly assigned to the M processors. The proposed method is compared with the existing ones for different numbers of processors and tasks: the numbers of processors taken are 8, 16, 32, and 64, with task sizes of 256, 512, 1024, and 2048 MI.
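One step of the proposed LB rearrangement (eqs. (20)-(22)) can be sketched as follows. The pair-selection heuristic here, which aims the rate difference ε = rx − ry at half the overflow gap, is our illustrative choice; the paper simply repeats the rearrangement until Rn is approximately equal to Rp.

```python
def overflow(rates, capacity):
    # eqs. (20)-(21): O = max(0, sum of assigned fair rates - capacity)
    return max(0.0, sum(rates) - capacity)

def rebalance_step(rates_n, c_n, rates_p, c_p):
    """One LB rearrangement between two overflow processors: swap the task
    pair whose rate difference best evens out the two overflows."""
    O_n, O_p = overflow(rates_n, c_n), overflow(rates_p, c_p)
    if O_n <= O_p:                      # n must be the more overloaded one
        return rates_n, rates_p
    target = (O_n - O_p) / 2.0          # ideal rate difference e = r_x - r_y
    x, y = min(((rx, ry) for rx in rates_n for ry in rates_p),
               key=lambda pair: abs((pair[0] - pair[1]) - target))
    if x - y <= 0:                      # no pair reduces the imbalance
        return rates_n, rates_p
    new_n = list(rates_n); new_n.remove(x); new_n.append(y)
    new_p = list(rates_p); new_p.remove(y); new_p.append(x)
    return new_n, new_p

n, p = rebalance_step([7.0, 6.0], 10.0, [5.0, 4.0], 8.0)
print(sorted(n), sorted(p))   # [5.0, 7.0] [4.0, 6.0]
```

In the example, processor n starts with overflow 3 and processor p with overflow 1; exchanging the 6-rate task for the 5-rate task leaves both processors with overflow 2.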
The tables below show the comparison results of load balancing in MMFS against the existing algorithms EDF, SFTO, AFTO, and MMFS for 8, 16, 32, and 64 processors. For makespan, the proposed work gives approximately 45%-25% less than EDF, 7%-5% less than SFTO and AFTO, and 5%-2% less than MMFS. For execution cost and bandwidth cost, MMFS + LB shows approximately 30%-25% less than EDF, 7%-6% less than SFTO and AFTO, and 2%-1% less than MMFS. The results show better performance for the larger matrices as well. The following are the comparison results of the existing and proposed methods.

Table 1: Performance comparison of the proposed MMFS + LB with the existing algorithms EDF, SFTO, AFTO, and MMFS for 8 processors

Resource Matrix | Scheduling Algorithm | Makespan | Execution Cost | Communication Cost
256 x 8   | EDF       | 917.82  | 5506.91  | 6424.73
256 x 8   | SFTO      | 447.74  | 4477.44  | 4925.19
256 x 8   | AFTO      | 444.39  | 4468.54  | 4912.94
256 x 8   | MMFS      | 439.61  | 4446.77  | 4886.39
256 x 8   | MMFS + LB | 418.13  | 4181.27  | 4599.40
512 x 8   | EDF       | 1121.32 | 7849.21  | 8970.53
512 x 8   | SFTO      | 1022.36 | 5111.80  | 6134.16
512 x 8   | AFTO      | 1010.09 | 5050.45  | 6060.53
512 x 8   | MMFS      | 858.54  | 4292.71  | 5151.26
512 x 8   | MMFS + LB | 836.72  | 4183.58  | 5020.30
1024 x 8  | EDF       | 1825.33 | 10951.97 | 12777.30
1024 x 8  | SFTO      | 1651.45 | 13211.63 | 14863.08
1024 x 8  | AFTO      | 1686.17 | 11803.21 | 13489.38
1024 x 8  | MMFS      | 1643.32 | 13180.96 | 14824.28
1024 x 8  | MMFS + LB | 1599.82 | 12798.55 | 14398.36
2048 x 8  | EDF       | 3596.42 | 25174.94 | 28771.36
2048 x 8  | SFTO      | 3280.39 | 26243.11 | 29523.50
2048 x 8  | AFTO      | 3247.63 | 25981.06 | 29228.69
2048 x 8  | MMFS      | 3137.59 | 25100.75 | 28238.34
2048 x 8  | MMFS + LB | 3095.82 | 24766.55 | 27862.37

Fig 1: Performance comparison of the proposed MMFS + LB with the existing algorithms EDF, SFTO, AFTO, and MMFS for makespan (8 processors; bar chart of makespan vs. task size, omitted)
Fig 2: Performance comparison of the proposed MMFS + LB with the existing algorithms for execution cost (8 processors; chart omitted)
Fig 3: Performance comparison of the proposed MMFS + LB with the existing algorithms for bandwidth cost (8 processors; chart omitted)

Table 2: Performance comparison of the proposed MMFS + LB with the existing algorithms EDF, SFTO, AFTO, and MMFS for 16 processors
Resource Matrix | Scheduling Algorithm | Makespan | Execution Cost | Communication Cost
256 x 16  | EDF       | 1466.72 | 7332.11  | 8798.53
256 x 16  | SFTO      | 304.00  | 1520.00  | 1824.00
256 x 16  | AFTO      | 300.65  | 1511.10  | 1811.75
256 x 16  | MMFS      | 295.87  | 1489.33  | 1785.20
256 x 16  | MMFS + LB | 209.00  | 1045.00  | 1254.00
512 x 16  | EDF       | 1366.48 | 13664.81 | 15031.29
512 x 16  | SFTO      | 553.89  | 5538.91  | 6092.80
512 x 16  | AFTO      | 555.37  | 5553.75  | 6109.12
512 x 16  | MMFS      | 545.76  | 5508.24  | 6054.00
512 x 16  | MMFS + LB | 483.57  | 4835.69  | 5319.26
1024 x 16 | EDF       | 1540.27 | 9241.60  | 10781.86
1024 x 16 | SFTO      | 1309.94 | 6549.72  | 7859.66
1024 x 16 | AFTO      | 1296.35 | 6481.77  | 7778.13
1024 x 16 | MMFS      | 1301.81 | 6519.05  | 7820.86
1024 x 16 | MMFS + LB | 1231.43 | 6157.14  | 7388.57
2048 x 16 | EDF       | 3352.67 | 23468.72 | 26821.39
2048 x 16 | SFTO      | 2742.53 | 24682.76 | 27425.29
2048 x 16 | AFTO      | 2761.98 | 27619.81 | 30381.79
2048 x 16 | MMFS      | 2734.40 | 24652.09 | 27386.49
2048 x 16 | MMFS + LB | 2641.04 | 23769.35 | 26410.39

Fig 4: Performance comparison of the proposed MMFS + LB with the existing algorithms EDF, SFTO, AFTO, and MMFS for makespan (16 processors; chart omitted)
Fig 5: Performance comparison of the proposed MMFS + LB with the existing algorithms for execution cost (16 processors; chart omitted)
Fig 6: Performance comparison of the proposed MMFS + LB with the existing algorithms for communication cost (16 processors; chart omitted)

Table 3: Performance comparison of the proposed MMFS + LB with the existing algorithms EDF, SFTO, AFTO, and MMFS for 32 processors

Resource Matrix | Scheduling Algorithm | Makespan | Execution Cost | Communication Cost
256 x 32  | EDF       | 206.05  | 1648.40  | 1854.45
256 x 32  | SFTO      | 183.43  | 917.15   | 1100.58
256 x 32  | AFTO      | 180.08  | 908.25   | 1088.33
256 x 32  | MMFS      | 175.30  | 886.48   | 1061.78
256 x 32  | MMFS + LB | 114.64  | 573.22   | 687.86
512 x 32  | EDF       | 744.80  | 5958.36  | 6703.16
512 x 32  | SFTO      | 580.60  | 5225.43  | 5806.04
512 x 32  | AFTO      | 577.25  | 5216.53  | 5793.79
512 x 32  | MMFS      | 574.27  | 3445.64  | 4019.91
512 x 32  | MMFS + LB | 464.54  | 2787.23  | 3251.77
1024 x 32 | EDF       | 966.47  | 7731.78  | 8698.26
1024 x 32 | SFTO      | 912.96  | 4564.78  | 5477.73
1024 x 32 | AFTO      | 937.07  | 7512.53  | 8451.59
1024 x 32 | MMFS      | 904.83  | 4534.11  | 5438.93
1024 x 32 | MMFS + LB | 863.54  | 4317.72  | 5181.26
2048 x 32 | EDF       | 2675.05 | 18725.38 | 21400.44
2048 x 32 | SFTO      | 2427.97 | 21851.76 | 24279.74
2048 x 32 | AFTO      | 2375.23 | 21377.09 | 23752.32
2048 x 32 | MMFS      | 2370.11 | 21330.99 | 23701.10
2048 x 32 | MMFS + LB | 2262.00 | 20359.85 | 22622.06
Fig 7: Performance comparison of the proposed MMFS + LB with the existing algorithms EDF, SFTO, AFTO, and MMFS for makespan (32 processors; chart omitted)
Fig 8: Performance comparison of the proposed MMFS + LB with the existing algorithms for execution cost (32 processors; chart omitted)
Fig 9: Performance comparison of the proposed MMFS + LB with the existing algorithms for communication cost (32 processors; chart omitted)

Table 4: Performance comparison of the proposed MMFS + LB with the existing algorithms EDF, SFTO, AFTO, and MMFS for 64 processors

Resource Matrix | Scheduling Algorithm | Makespan | Execution Cost | Communication Cost
256 x 64  | EDF       | 305.35  | 2748.13  | 3053.47
256 x 64  | SFTO      | 281.93  | 1691.60  | 1973.53
256 x 64  | AFTO      | 278.58  | 1682.70  | 1961.28
256 x 64  | MMFS      | 273.80  | 1660.93  | 1934.73
256 x 64  | MMFS + LB | 211.45  | 1268.70  | 1480.15
512 x 64  | EDF       | 966.67  | 5800.02  | 6766.69
512 x 64  | SFTO      | 600.00  | 3000.00  | 3600.00
512 x 64  | AFTO      | 596.65  | 2991.10  | 3587.75
512 x 64  | MMFS      | 591.87  | 2969.33  | 3561.20
512 x 64  | MMFS + LB | 450.00  | 2700.00  | 3150.00
1024 x 64 | EDF       | 968.49  | 5810.95  | 6779.44
1024 x 64 | SFTO      | 978.87  | 9788.75  | 10767.62
1024 x 64 | AFTO      | 975.52  | 9779.85  | 10755.37
1024 x 64 | MMFS      | 970.74  | 9758.08  | 10728.82
1024 x 64 | MMFS + LB | 795.34  | 7953.36  | 8748.69
2048 x 64 | EDF       | 2984.98 | 23879.85 | 26864.83
2048 x 64 | SFTO      | 2630.08 | 26300.85 | 28930.93
2048 x 64 | AFTO      | 2626.73 | 26291.95 | 28918.68
2048 x 64 | MMFS      | 2621.95 | 26270.18 | 28892.13
2048 x 64 | MMFS + LB | 2330.86 | 23308.63 | 25639.50

Fig 10: Performance comparison of the proposed MMFS + LB with the existing algorithms EDF, SFTO, AFTO, and MMFS for makespan (64 processors; chart omitted)
Fig 11: Performance comparison of the proposed MMFS + LB with the existing algorithms for execution cost (64 processors; chart omitted)
Fig 12: Performance comparison of the proposed MMFS + LB with the existing algorithms EDF, SFTO, AFTO, and MMFS for communication cost (64 processors; chart omitted)

VII. CONCLUSION

In this paper, the load balancing algorithm is compared with a standard scheduling algorithm, Earliest Deadline First, and with the fair scheduling algorithms SFTO, AFTO, and MMFS. The proposed algorithm also shows better results for execution cost and bandwidth cost. The results show that load balancing combined with scheduling produces a smaller makespan than the other methods. Future work will focus on how fair scheduling can be applied to optimization techniques, and on QoS constraints such as reliability as performance measures.

VIII. REFERENCES

[1] Rajkumar Buyya, David Abramson, and Jonathan Giddy. A Case for Economy Grid Architecture for Service Oriented Grid Computing.
[2] Foster, I., Kesselman, C. (1999). The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers, USA.
[3] Wolski, R., Brevik, J., Plank, J., and Bryan, T. (2003). Grid Resource Allocation and Control Using Computational Economies. In Grid Computing: Making the Global Infrastructure a Reality, Berman, F., Fox, G., and Hey, T. (eds.), Wiley and Sons, pp. 747-772.
[4] Doulamis, N.D., Doulamis, A.D., Varvarigos, E.A., Varvarigou, T.A. (2007). Fair Scheduling Algorithms in Grids. IEEE Transactions on Parallel and Distributed Systems, 18(11):1630-1648.
[5] K. Somasundaram, S. Radhakrishnan (2009). Task Resource Allocation in Grid using Swift Scheduler. International Journal of Computers, Communications & Control, ISSN 1841-9836, E-ISSN 1841-9844, Vol. IV.
[6] Miguel L. Bote-Lorenzo, Yannis A. Dimitriadis, and Eduardo Gomez-Sanchez (2004). Grid Characteristics and Uses: a Grid Definition. Springer-Verlag LNCS 2970, pp. 291-298.
[7] Parvin Asadzadeh, Rajkumar Buyya, Chun Ling Kei, Deepa Nayar, and Srikumar Venugopal. Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies.
[8] Pal Nilsson and Michał Pióro. Unsplittable Max-Min Demand Allocation – a Routing Problem.
[9] Hans Jorgen Bang, Torbjorn Ekman, and David Gesbert. A Channel Predictive Proportional Fair Scheduling Algorithm.
[10] Daphne Lopez, S. V. Kasmir Raja (2009). A Dynamic Error Based Fair Scheduling Algorithm for a Computational Grid. Journal of Theoretical and Applied Information Technology (JATIT).
[11] Qin Zheng, Chen-Khong Tham, Bharadwaj Veeravalli (2008). Dynamic Load Balancing and Pricing in Grid Computing with Communication Delay. Journal of Grid Computing.
[12] Stefan Schamberger (2005). A Shape Optimizing Load Distribution Heuristic for Parallel Adaptive FEM Computations. Springer-Verlag Berlin Heidelberg.
[13] Grosu, D., Chronopoulos, A.T. (2005). Noncooperative load balancing in distributed systems. Journal of Parallel and Distributed Computing, 65(9):1022-1034.
[14] Penmatsa, S., Chronopoulos, A.T. (2005). Job allocation schemes in computational Grids based on cost optimization. In: Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium, Denver.
