Load Balancing in Distributed Computing
Richa Singh

This document discusses load balancing in distributed systems. It defines static and dynamic load balancing, compares the two approaches, and describes several dynamic load balancing algorithms. Static load balancing assigns tasks at compile time without migration, while dynamic approaches migrate tasks at runtime based on the current system state. Dynamic approaches incur migration overhead but utilize resources better. The dynamic algorithms discussed include nearest neighbor, random, adaptive contracting with neighbor, and centralized information approaches.
ABSTRACT
A number of load balancing algorithms have been developed to improve the execution of distributed applications on any kind of distributed architecture. Load balancing involves assigning tasks to processors so as to minimize the execution time of the program. In practice, it is even possible to execute applications on any machine of a worldwide distributed system; the distributed system became popular and attractive with the introduction of the web, and this results in a significant performance improvement for users. This paper describes the principal, newly developed concepts behind several load balancing techniques in a distributed computing environment. It also covers various types of load balancing strategies, their merits and demerits, and a comparison based on certain parameters. Distributed computing is a promising technology that involves the coordination of shared resources to carry out complex computational problems. One of the major issues in distributed systems is the design of an efficient dynamic load balancing algorithm that improves the overall performance of the system. Scheduling and resource management play a decisive role in achieving high utilization of resources in grid computing environments. Load balancing is the process of distributing load among the various nodes of a distributed system to improve both job response time and resource utilization, while avoiding the situation where some nodes are heavily loaded while others are idle or lightly loaded.
INTRODUCTION
Distributed systems offer the potential for sharing and aggregating different resources such as computers, storage systems and other specialized devices. These resources are distributed and possibly owned by different agents or organizations. The users of a distributed system have different goals, objectives and strategies, and their behavior is difficult to characterize. In such systems the management of resources and applications is a very complex task. A distributed system can be viewed as a collection of computing and communication resources shared by active users.

When the demand for computing power increases, the load balancing problem becomes important. The purpose of load balancing is to improve the performance of a distributed system through an appropriate distribution of the application load. A general formulation of this problem is as follows: given a large number of jobs, find the allocation of jobs to computers that optimizes a given objective function (e.g. total execution time).
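This formulation can be made concrete with a small sketch. The following greedy heuristic (a longest-processing-time-first rule; the job costs and machine count are hypothetical, not from the paper) assigns each job to the machine with the least accumulated work, so the maximum per-machine load, and hence the total execution time, stays low:

```python
import heapq

def allocate(jobs, n_machines):
    """Greedy allocation: assign each job (longest first) to the
    machine that currently has the least accumulated work.
    The objective value is max of the returned per-machine loads."""
    # Min-heap of (current_load, machine_id)
    heap = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(heap)
    assignment = {}
    for job_id, cost in sorted(jobs.items(), key=lambda kv: -kv[1]):
        load, m = heapq.heappop(heap)
        assignment[job_id] = m
        heapq.heappush(heap, (load + cost, m))
    loads = [0.0] * n_machines
    for job_id, m in assignment.items():
        loads[m] += jobs[job_id]
    return assignment, loads

# Hypothetical job costs (execution times) on 2 machines.
jobs = {"j1": 4.0, "j2": 3.0, "j3": 2.0, "j4": 2.0, "j5": 1.0}
assignment, loads = allocate(jobs, 2)
```

This is only an approximation: the exact allocation problem is NP-hard in general, which is why practical systems rely on heuristics of this kind.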
High processing speed is always desired of a system. From the beginning of computer development the focus has been on system performance, that is, how to improve the speed of an existing system, and this pursuit led to the era of the supercomputer. Business organizations, defense sectors and scientific groups in particular need high-performance systems for their constantly changing day-to-day needs. So, from serial computers to supercomputers, parallel computing and parallel distributed computing have been developed. Massively parallel computers (MPCs) are available in the market today; in an MPC, a group of processors is linked to the memory modules through a network such as a mesh, hypercube or torus. Supercomputers are very expensive, so a new alternative concept has emerged (although it existed earlier) called parallel distributed computing, in which thousands of processors can be connected either by a wide area network or across a large number of cheap, easily available autonomous systems such as workstations or PCs. It is therefore becoming extremely popular for large computing tasks such as scientific calculations, as compared to MPCs. Recently, distributed systems with several hundred powerful processors have been developed. Distributed computing systems provide a high-performance environment able to deliver huge processing power. A multicomputer system can be used efficiently through effective task partitioning and proper balancing of the tasks (loads) among the nodes.

A distributed network is mainly heterogeneous in nature, in the sense that the processing nodes, network topology, communication medium, operating system and so on may differ across networks that are widely distributed over the globe. Presently, several hundred computers are connected to build a distributed computing system. To get the maximum efficiency out of such a system, the overall workload has to be distributed among the nodes over the network. The issue of load balancing thus became popular with the advent of distributed-memory multiprocessor computing systems.

The distribution of loads to the processing elements is simply called the load balancing problem. In a system with multiple nodes there is a very high chance that some nodes will be idle while others will be overloaded. The goal of a load balancing algorithm is to maintain the load on each processing element such that no processing element becomes either overloaded or idle; that is, each processing element ideally has an equal load at every moment during execution, yielding the maximum performance (minimum execution time) of the system. The proper design of a load balancing algorithm may therefore significantly improve the performance of the system.

In the network there will be some fast computing nodes and some slow computing nodes. If we do not account for processing speed and communication speed (bandwidth), the performance of the overall system will be restricted by the slowest running node in the network. Load balancing strategies therefore balance the load across the nodes by preventing some nodes from sitting idle while others are overwhelmed; furthermore, they remove the idleness of any node at run time.
LOAD BALANCING
Load balancing is the distribution of load units (jobs or tasks) across a set of processors
connected to a network, which may be spread across the globe. Excess, not-yet-executed load
is migrated from an overloaded processor to processors whose load is below the threshold
load [9]. The threshold load is the amount of load up to which a processor can still accept
further work. In a system with multiple nodes there is a very high chance that some nodes
will be idle while others are overloaded. The processors in a system can therefore be
classified by their present load as heavily loaded processors (many jobs waiting for
execution), lightly loaded processors (few jobs waiting), and idle processors (no job to
execute). A load balancing strategy makes it possible to keep every processor equally busy
and to finish the work at approximately the same time.
A load balancing operation is defined by three rules: the location rule, the distribution rule,
and the selection rule. The selection rule works either in a preemptive or a non-preemptive
fashion. The non-preemptive rule always picks a newly generated process, while the
preemptive rule may pick a running process. Preemptive transfer is costlier than
non-preemptive transfer, so non-preemptive transfer is generally preferable; in some
instances, however, preemptive transfer performs better.
Benefits of Load balancing
a) Load balancing improves the performance of each node and hence the overall system
performance.
b) Load balancing reduces the job idle time
c) Small jobs do not suffer from long starvation
d) Maximum utilization of resources
e) Response time becomes shorter
f) Higher throughput
g) Higher reliability
h) Low cost but high gain
i) Extensibility and incremental growth
Static Load Balancing
In a static algorithm the processes are assigned to the processors at compile time according
to the performance of the nodes. Once the processes are assigned, no change or reassignment
is possible at run time; the number of jobs on each node is fixed. Static algorithms do not
collect any information about the state of the nodes. Jobs are assigned to the processing
nodes on the basis of the following factors: arrival time, extent of resources needed, mean
execution time, and inter-process communication.
Since these factors must be estimated before the assignment, static load balancing is also
called a probabilistic approach. As no job migrates at run time, little or no runtime
overhead occurs.
Because the load is balanced prior to execution, static load balancing has several
fundamental flaws even when a mathematical solution exists: it is very difficult to estimate
accurately the execution times of the various parts of a program without actually executing
them; communication delays vary under different circumstances; and some problems require an
indeterminate number of steps to reach their solution.
In static load balancing it is observed that the more tasks there are relative to the number
of processors, the better the load balancing.
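As a sketch of the idea, a static scheme might divide a fixed batch of jobs among nodes in proportion to their measured speeds, once, before execution begins; the node names and speed figures below are invented, and real schemes would also weigh the other factors listed above:

```python
def static_assign(jobs, node_speeds):
    """Split jobs among nodes in proportion to node speed, once, before
    execution; static load balancing performs no reassignment afterwards."""
    total_speed = sum(node_speeds.values())
    assignment = {node: [] for node in node_speeds}
    nodes = sorted(node_speeds, key=node_speeds.get, reverse=True)

    def deficit(node):
        # How far the node's current share is below its speed-proportional target.
        target = node_speeds[node] / total_speed * len(jobs)
        return len(assignment[node]) - target

    for job in jobs:
        # Always give the next job to the node furthest below its target share.
        assignment[min(nodes, key=deficit)].append(job)
    return assignment

plan = static_assign(list(range(6)), {"fast": 2, "slow": 1})
```

With the figures above, the node twice as fast receives twice as many jobs, and the plan never changes at run time.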
The figure shows a schematic diagram of static load balancing in which local tasks arrive at
the assignment queue. From the assignment queue a job can either be transferred to a remote
node or assigned to the threshold queue; a job from a remote node can likewise be assigned to
the threshold queue. Once a job is assigned to a threshold queue, it cannot be migrated to
any other node. A job arriving at any node is either processed by that node or transferred to
another node for remote processing through the communication network. Static load balancing
algorithms can be divided into two subclasses: optimal static load balancing and suboptimal
static load balancing.
Model of Processing Node
Dynamic Load Balancing
Static load balancing requires that a great deal of information about the system and the jobs
be known before execution, and this information may not be available in advance; a thorough
advance study of the system state and the jobs is quite a tedious approach. Dynamic load
balancing (DLB) algorithms therefore came into existence. Here the assignment of jobs is done
at run time: jobs are reassigned depending on the current situation, so load is transferred
from heavily loaded nodes to lightly loaded ones. In this case communication overheads occur
and grow as the number of processors increases.
In dynamic load balancing no decision is taken until the process begins execution. The
strategy collects information about the system state and about the jobs; the more information
an algorithm can collect in a short time, the better the decision it can potentially make
[10]. Dynamic load balancing is mostly considered in heterogeneous systems, which consist of
nodes with different speeds, different communication link speeds, different memory sizes, and
variable external loads caused by multiple users. A number of load balancing strategies have
been developed and classified for obtaining high system performance.
The figure shows a simple dynamic load balancing scheme transferring jobs from heavily loaded
to lightly loaded nodes.
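A minimal sketch of one such runtime step, assuming each node can report an integer job count and migrate one job at a time (all figures invented):

```python
def rebalance_step(loads, threshold):
    """One dynamic balancing step: every node above the threshold migrates one
    job to the currently least loaded node.  Called repeatedly at run time."""
    loads = dict(loads)  # leave the caller's snapshot untouched
    for node in sorted(loads):
        if loads[node] > threshold:
            target = min(loads, key=loads.get)
            if target != node and loads[target] + 1 < loads[node]:
                loads[node] -= 1    # migrate one job ...
                loads[target] += 1  # ... to the lightest node
    return loads

after = rebalance_step({"A": 8, "B": 1, "C": 2}, threshold=4)
```

Repeating the step drains the overloaded node while the total amount of work is conserved; a real system would also charge a migration cost for each moved job.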
COMPARISON BETWEEN SLB and DLB ALGORITHM
Some qualitative parameters for comparative study have been listed below.
1. Nature
Whether the applied algorithm is static or dynamic is determined by this factor.
2. Overhead Involved
In a static load balancing algorithm redistribution of tasks is not possible, so there is no
overhead at run time, though a little overhead may occur due to inter-process communication.
In a dynamic load balancing algorithm redistribution of tasks is done at run time, so
considerable overhead may be involved. Hence it is clear that SLB involves less overhead
than DLB.
3. Utilization of Resource
Though the response time is minimal in SLB, it has poor resource utilization: it is
impractical to have all submitted jobs complete on their assigned processors at the same
time, so there is a great chance that some processors will sit idle after completing their
assigned jobs while others remain busy, because no reassignment policy exists. In a dynamic
algorithm a reassignment policy exists at run time, so it is possible to complete all jobs at
approximately the same time. Better resource utilization therefore occurs in DLB.
4. Thrashing or Process Dumping
A processor is said to be thrashing if it spends more time migrating jobs than doing useful
work [1]. The less migration there is, the less processor thrashing occurs. SLB is free of
thrashing, but DLB incurs considerable thrashing due to process migration at run time.
5. State Woggling
This corresponds to processors frequently changing status between low and high load. It is a
performance-degrading factor.
6. Predictability
Predictability refers to whether the behavior of an algorithm can be predicted. The behavior
of an SLB algorithm is predictable, as everything is known before compilation. A DLB
algorithm's behavior is unpredictable, as everything is decided at run time.
7. Adaptability
Adaptability determines whether an algorithm adjusts itself to changes in the system state.
SLB has no ability to adapt to a changing environment, but DLB has that ability.
8. Reliability
Reliability is concerned with whether the system continues to work without error when a node
fails. SLB is not very reliable, as it has no ability to adapt to changes in the system's
state. DLB has that adaptive power, so DLB is more reliable.
9. Response Time
Response time measures how long a system applying a particular load balancing algorithm
takes to respond to a job. An SLB algorithm has a shorter response time because processors
are fully occupied with processing, there being no job transfers. A DLB algorithm has a
larger response time because processors cannot be fully occupied with processing while the
job-transfer policy is active.
10. Stability
SLB is more stable, since everything is known before compilation and no workload transfer
occurs afterwards. DLB is not as stable as SLB because it involves both the runtime
assignment of jobs and the redistribution of the workload as needed.
11. Complexity Involved
SLB algorithms are easy to construct, while DLB algorithms are harder to develop because
nothing is known in advance. Although dynamic load balancing is a complex phenomenon, its
benefits far outweigh its complexity [10].
COMPARISON of SOME DYNAMIC LOAD BALANCING
ALGORITHMS
Now some dynamic load balancing algorithms are studied below.
1. Nearest Neighbor Algorithm
With the nearest neighbor algorithm, each processor considers only its immediate neighbor
processors when performing load balancing operations. A processor makes its balancing
decision based on its own load and the load information of its immediate neighbors. By
successively exchanging load with neighboring nodes, the system attains a globally balanced
load state. Nearest neighbor algorithms fall mainly into two categories: the diffusion
method and the dimension exchange method.
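The diffusion method can be sketched as a synchronous averaging step in which every node moves a fraction (here α = 0.25, an arbitrary choice) of each load difference toward its neighbors; the two-node topology below is hypothetical:

```python
def diffusion_step(loads, neighbors, alpha=0.25):
    """One diffusion iteration: each node adjusts its load by a fraction alpha
    of the difference with every immediate neighbor.  Total load is conserved
    when the neighbor relation is symmetric and alpha is uniform."""
    return {
        node: loads[node]
        + sum(alpha * (loads[m] - loads[node]) for m in neighbors[node])
        for node in loads
    }

# Two connected nodes with unequal load (hypothetical topology).
after = diffusion_step({"A": 8.0, "B": 0.0}, {"A": ["B"], "B": ["A"]})
```

Iterating the step makes the loads converge toward the global average, which is exactly the globally balanced state described above.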
2. Random (RAND) Algorithm
As soon as a workload greater than the threshold load is generated on a processor, it is
migrated to a randomly selected neighbor. The algorithm does not check the state information
of any node: it neither maintains local load information nor sends load information to other
processors. It is therefore simple to design and easy to implement, but it causes
considerable communication overhead because the receiving neighbor is chosen at random rather
than by how lightly loaded it is.
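A sketch of the RAND rule, assuming each node knows only its own load and its neighbor list (names and figures invented):

```python
import random

def rand_migrate(own_load, threshold, neighbors, rng=random):
    """RAND: if the freshly generated load exceeds the threshold, ship it to a
    uniformly random neighbor; no remote state information is consulted."""
    if own_load <= threshold:
        return None              # keep the work locally
    return rng.choice(neighbors) # blind random pick, may hit a busy node
```

The random pick is what keeps the algorithm stateless, and also what makes it liable to dump work onto an already overloaded neighbor.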
3. Adaptive Contracting with Neighbor (ACWN)
As soon as a workload is newly generated, it is migrated to the least loaded nearest
neighbor. The accepting processor keeps the load in its local heap. If the load in its heap
is below its threshold load, nothing further happens; otherwise it sends load on to a
neighbor whose load is below the threshold. ACWN thus requires maintaining both local load
information and the neighbors' load information in order to exchange load periodically. Hence
RAND differs from ACWN in that ACWN always targets the least loaded of its neighbors.
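In contrast to RAND, the ACWN target choice needs the neighbors' load figures; a sketch with hypothetical node names:

```python
def acwn_target(loads, neighbors):
    """ACWN: route newly generated work to the least loaded immediate
    neighbor, which requires tracking the neighbors' current loads."""
    return min(neighbors, key=lambda n: loads[n])

# Node A, with neighbors B and C, generates new work:
target = acwn_target({"B": 4, "C": 1}, ["B", "C"])
```

The one-line `min` over the neighbor loads is the whole difference from RAND, and the cost of it is the periodic load-information exchange the text describes.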
4. Prioritized Random (PRAND) Algorithm
In both RAND and ACWN the workload is assumed to be uniform in its computational
requirements. RAND and ACWN are modified for non-uniform workloads to obtain prioritized
RAND (PRAND) and prioritized ACWN (PACWN), respectively. In these algorithms the workloads
are assigned index numbers on the basis of the weights of their heaps. PRAND is similar to
RAND except that it selects the second largest weighted load from the heap and transfers it
to a randomly selected neighbor; PACWN selects the second largest weighted workload and
transfers it to the least loaded neighbor.
4.1 Averaging Dimension Exchange (ADE)
The ADE algorithm follows the concept of a local load-averaging operation and balances the
workload with one of its neighbors at a time.
4.2 Averaging Diffusion (ADF) Algorithm
It averages the local load distribution as ADE does, but exchanges load with all of its
neighbors.
5. CYCLIC Algorithm
This is a slight modification of the RAND algorithm. The workload is assigned to remote
systems in a cyclic fashion: the algorithm always remembers the last system to which a
process was sent.
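A sketch of the cyclic selection, keeping the "last system used" in a closure (node names invented):

```python
def make_cyclic_selector(remote_nodes):
    """CYCLIC: remember the last remote system used and hand out the nodes in
    a fixed rotating order, one per migrated workload."""
    state = {"last": -1}

    def next_target():
        state["last"] = (state["last"] + 1) % len(remote_nodes)
        return remote_nodes[state["last"]]

    return next_target

pick = make_cyclic_selector(["B", "C", "D"])
```

Unlike RAND, successive migrations never hit the same node twice in a row, at the cost of one remembered index per sender.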
6. PROBABILISTIC
Each node keeps a load vector containing the load of a subset of nodes. The first half of the
load vector, which also holds the local load, is sent periodically to a randomly selected
node. Information is thereby refreshed and spread through the network without broadcasting.
However, the quality of this algorithm is not ideal: its extensibility is poor and the
insertion of information is delayed.
7. THRESHOLD and LEAST
Both use partial knowledge obtained through message exchanges. In THRESHOLD a node is
randomly selected to accept a migrated load. If that node's load is below the threshold, the
load is accepted there; otherwise polling is repeated with another node until an appropriate
recipient is found. If no suitable recipient has been found after a maximum number of
attempts, the process is executed locally. LEAST is an instance of THRESHOLD in which, after
polling, the least loaded machine is chosen to receive the migrated load. THRESHOLD and LEAST
perform well and are simple in nature; moreover, they use up-to-date load values.
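The THRESHOLD polling loop might look like the following sketch; the probe limit and load figures are invented:

```python
import random

def threshold_poll(loads, candidates, threshold, max_probes=3, rng=random):
    """THRESHOLD: poll randomly chosen nodes until one below the threshold is
    found; after max_probes failed attempts the job runs locally (None)."""
    pool = list(candidates)
    for _ in range(min(max_probes, len(pool))):
        node = rng.choice(pool)
        pool.remove(node)        # never poll the same node twice
        if loads[node] < threshold:
            return node          # first suitable recipient wins
    return None                  # execute locally

# LEAST would instead poll the candidates and take the least loaded one.
```

The early return on the first suitable node is what distinguishes THRESHOLD from LEAST, which keeps polling and then picks the minimum.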
8. RECEPTION
In this algorithm, nodes whose load is below the threshold find an overloaded node by random
polling and migrate load from it.
9. Centralized Information and Centralized Decision
In this class of algorithms the information about the system is stored in a single node, and
the decision is also taken by that node. CENTRAL is a subclass of this approach. When a
heavily loaded node wants to migrate a job, it asks a server for a lightly loaded node; every
node in the system informs the server of whether it is lightly loaded. CENTRAL affords very
efficient performance, but it suffers from a serious problem: if the server crashes, the
whole facility fails.
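A sketch of the CENTRAL exchange, with the server modeled as a single object; the method names, threshold, and load figures are invented:

```python
class CentralServer:
    """CENTRAL sketch: every node reports its load to one server; an
    overloaded node asks that server for a lightly loaded partner."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.loads = {}

    def report(self, node, load):
        """Nodes keep the server informed about their current load."""
        self.loads[node] = load

    def find_light_node(self, requester):
        """Return the lightest known node below the threshold, or None."""
        light = [n for n, l in self.loads.items()
                 if l < self.threshold and n != requester]
        return min(light, key=lambda n: self.loads[n]) if light else None

server = CentralServer(threshold=5)
for node, load in [("A", 9), ("B", 2), ("C", 4)]:
    server.report(node, load)
```

The single `server` object is also the single point of failure the text warns about: if it is lost, no node can locate a migration partner.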
10. Centralized Information and Distributed Decision
In GLOBAL, collection of information is centralized while decision making is distributed.
The load situation of the nodes is broadcast by the server. Using this information, an
overloaded processor finds a lightly loaded node from its own load vector without going
through the server. The algorithm is efficient because few messages are needed, and it is
robust because the system remains alive even while the server is recovering. GLOBAL gathers a
large amount of information, but that information is not up to date, so greater overheads
occur in the system.
11. Distributed information and Distributed Decision
Each node in OFFER broadcasts its load situation periodically and each node keeps a global
load vector. Performance of this algorithm is poor.
12. RADIO
In RADIO, both the information and the decision are distributed, and broadcasting occurs only
when necessary. The algorithm maintains a distributed list of lightly loaded nodes in which
each machine knows its successor and predecessor; in addition, each node knows the head of
this available list, which is called the manager. A process migrates from a heavily loaded
node to a lightly loaded node either directly or indirectly through the manager. Broadcasting
occurs only when the manager crashes or a node joins the available list.
13. The Shortest Expected Delay (SED) Strategy
This strategy attempts to minimize the expected delay of each job's completion, so the
destination node is selected such that this delay is minimal. It is a greedy approach: each
job acts in its own best interest and joins the queue that minimizes its expected completion
delay. The approach minimizes the average delay of a given batch of jobs with no further
arrivals; SED does not minimize the average delay for an ongoing arrival process. To find the
destination node, the source node must obtain state information from the other nodes for its
location policy.
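A sketch of the SED destination choice, estimating a queue's expected delay as (jobs ahead plus this job) divided by the node's service rate; the node names and figures are invented:

```python
def sed_target(queue_lengths, service_rates, job_size=1.0):
    """SED: greedily join the queue with the smallest expected completion
    delay, estimated from queue length and per-node service rate."""
    def expected_delay(node):
        return (queue_lengths[node] + job_size) / service_rates[node]
    return min(queue_lengths, key=expected_delay)

# A fast node with a longer queue can still beat a slow, nearly empty one:
best = sed_target({"A": 4, "B": 1}, {"A": 2.0, "B": 0.5})
```

Here node A wins despite its longer queue (expected delay 2.5 versus 4.0), illustrating why SED needs state information rather than raw queue lengths alone.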
14. The Never Queue (NQ) Strategy
NQ is a separable strategy in which the sending server estimates the cost of sending a job to
each final destination (or a subset of them) and places the job on the server with minimal
cost. The algorithm always places a job on the fastest available server, minimizing the extra
delay imposed on subsequently arriving jobs and hence the overall delay. Furthermore, a
server does not transfer an incoming job to other servers unless a faster server is
available.
15. Greedy Throughput (GT) Strategy
This strategy differs from the SED and NQ strategies. GT aims at the throughput of the
system: it maximizes the number of jobs completed per unit time before the arrival of the
next job, rather than maximizing only the throughput rate at the instant of balancing. This
is why it is called the Greedy Throughput (GT) policy [11, 16].
COMPARISON of DLB
The dynamic load balancing algorithms discussed above are compared below:

S.N.  Algorithm      State information check   Performance
1     RAND           No                        Excellent
2     PRAND          Partial                   Excellent
3     ACWN           Yes                       Good
4     PACWN          Yes                       Good
5     CYCLIC         Partial                   Slightly better than RAND
6     PROBABILISTIC  Partial                   Good
7     THRESHOLD      Partial                   Better
8     LEAST          Partial                   Better
9     RECEPTION      Partial                   Not so good
10    CENTRAL        Yes                       Excellent
11    GLOBAL         Yes                       Good
12    OFFER          Yes                       Poor
13    RADIO          Yes                       Good
14    SED            Yes                       Good
15    NQ             Yes                       Good
CONCLUSION
In this paper we have studied load balancing strategies in detail. Load balancing in
distributed systems is one of the most active research areas today, given the demand for
heterogeneous computing driven by the wide use of the Internet. The more efficient the load
balancing algorithm, the better the performance of the computing system. We compared SLB and
DLB using several parameters, enumerated the benefits provided by load balancing algorithms,
and finally studied some important dynamic load balancing algorithms and compared them to
highlight their importance in different situations. No absolutely perfect balancing algorithm
exists, but one can be chosen depending on the need.
The comparative study not only provides an inside view of the load balancing algorithms, but
also offers practical guidelines to researchers designing efficient load balancing algorithms
for distributed computing systems.
REFERENCES
[1] Md. Firoj Ali and Rafiqul Zaman Khan. "The Study On Load Balancing Strategies In
Distributed Computing System". International Journal of Computer Science & Engineering
Survey (IJCSES), Vol. 3, No. 2, April 2012.
[2] Ahmad I., Ghafoor A. and Mehrotra K. "Performance Prediction of Distributed Load
Balancing on Multicomputer Systems". ACM, 830-839, 1991.
[3] Antonis K., Garofalakis J., Mourtos I. and Spirakis P. "A Hierarchical Adaptive
Distributed Algorithm for Load Balancing". Journal of Parallel and Distributed Computing,
Elsevier Inc., 2003.
[4] E. Altman, T. Basar, T. Jimenez, and N. Shimkin. "Routing in Two Parallel Links:
Game-Theoretic Distributed Algorithms". Journal of Parallel and Distributed Computing,
61(9):1367-1381, September 2001.