Introduction to Server Load Balancing<br />Server Load Balancing (SLB) is a process and technology that distributes site traffic among several servers using a network-based device. This device intercepts traffic destined for a site and redirects it to various servers. <br />Figure 1-1. SLB simplified<br />5/8/2011 12:50:57 AM<br />3<br />
A load balancer performs the following functions:<br /><ul><li>Intercepts network-based traffic (such as web traffic) destined for a site.</li><li>Splits the traffic into individual requests and decides which servers receive them.</li><li>Monitors the available servers, ensuring that they respond to traffic; servers that stop responding are taken out of rotation.</li></ul>
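The three functions above can be sketched in a few lines. This is a hypothetical illustration, not a real product's API: requests are dispensed round-robin, and a server that fails its health check is removed from rotation.

```python
# Minimal sketch of a load balancer's core duties: pick a server
# for each request, and drop non-responding servers from rotation.
# Server addresses are illustrative.
from itertools import cycle

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)        # servers currently in rotation
        self._rotation = cycle(self.servers)

    def mark_down(self, server):
        """Take a non-responding server out of rotation."""
        if server in self.servers:
            self.servers.remove(server)
            self._rotation = cycle(self.servers)

    def pick_server(self):
        """Decide which real server receives the next request."""
        if not self.servers:
            raise RuntimeError("no healthy servers in rotation")
        return next(self._rotation)

lb = LoadBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
lb.mark_down("10.0.0.2")                    # health check failed
print([lb.pick_server() for _ in range(4)])
# → ['10.0.0.1', '10.0.0.3', '10.0.0.1', '10.0.0.3']
```

Real load balancers use more sophisticated scheduling (least connections, weighted round robin), but the intercept/dispatch/health-check loop is the same.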
Concepts of Server Load Balancing<br /><ul><li>OSI Layer Model</li></ul>OSI stands for the Open Systems Interconnection model.<br />OSI layers are often mentioned when referring to load balancers. OSI was developed as a framework for developing protocols and applications that could interact seamlessly, and it closely resembles the Internet IP world in which load balancers exist today.<br />
Components of SLB devices<br /><ul><li> VIPs </li></ul>A Virtual IP (VIP) is the load-balancing instance to which the world points its browsers to reach a site. A VIP has an IP address, which must be publicly available to be usable. Usually a TCP or UDP port number is associated with the VIP, such as TCP port 80 for web traffic. A VIP has at least one real server assigned to it, to which it dispenses traffic.<br />
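A VIP can be pictured as a small piece of configuration: one public address-and-port pair fronting a pool of real servers. The addresses below are hypothetical examples, not from the text.

```python
# Sketch of a VIP: a public IP + TCP port, backed by one or more
# private real servers to which traffic is dispensed.
vip = {
    "address": "203.0.113.10",   # publicly reachable VIP address
    "port": 80,                  # TCP port 80 for web traffic
    "real_servers": [            # private servers behind the VIP
        ("192.168.0.10", 8080),
        ("192.168.0.11", 8080),
    ],
}

def dispense(vip, _counter=[0]):
    """Hand the next connection to a real server (round robin)."""
    server = vip["real_servers"][_counter[0] % len(vip["real_servers"])]
    _counter[0] += 1
    return server

print(dispense(vip))  # → ('192.168.0.10', 8080)
print(dispense(vip))  # → ('192.168.0.11', 8080)
```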
<ul><li> Servers </li></ul>A server is a device running a service that shares the load with other servers. A server typically refers to an HTTP server, although other services, or even multiple services, are also relevant. A server has an IP address and usually an associated TCP/UDP port, and it does not have to be publicly addressable.<br /><ul><li> Redundancy </li></ul>Redundancy as a concept is simple: if one device fails, another takes its place and function, with little or no impact on operations as a whole.<br />
Anatomy of a Server Load Balancer<br />SLB works by manipulating a packet before and after it reaches an actual server. This is typically done by manipulating the source or destination IP addresses of an IP packet in a process known as Network Address Translation (NAT).<br />In Figure , you see an IP packet sent from a source address of 22.214.171.124 destined for 192.168.0.200. This IP header is like the "To" and "From" portions of a letter sent through the post office. Routers use that information to forward the packets along on their journeys through the various networks.<br />Figure . An IP packet header<br />Mohit<br />
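The NAT step can be sketched as two address rewrites, using the client and VIP addresses from the text; the real-server address is a hypothetical addition. Inbound, the load balancer rewrites the destination from the VIP to a chosen real server; outbound, it rewrites the source back so the client only ever sees the VIP.

```python
# Sketch of destination NAT on a load balancer. Packets are
# modeled as simple dicts holding the IP header's src/dst fields.
VIP = "192.168.0.200"          # address the client sends to
REAL_SERVER = "192.168.0.100"  # hypothetical real server

def nat_inbound(packet):
    """Client -> VIP: rewrite the destination to a real server."""
    if packet["dst"] == VIP:
        packet = dict(packet, dst=REAL_SERVER)
    return packet

def nat_outbound(packet):
    """Real server -> client: rewrite the source back to the VIP."""
    if packet["src"] == REAL_SERVER:
        packet = dict(packet, src=VIP)
    return packet

request = {"src": "22.214.171.124", "dst": VIP}
reply = {"src": REAL_SERVER, "dst": "22.214.171.124"}

print(nat_inbound(request))   # dst rewritten to 192.168.0.100
print(nat_outbound(reply))    # src rewritten back to 192.168.0.200
```

In practice this rewriting happens in the device's forwarding path (e.g. a kernel NAT table), not in application code, but the header manipulation is the same idea.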
Methods of load balancing<br />Hardware Load Balancing<br />Hardware load balancers can route TCP/IP packets to various servers in a cluster. These load balancers often provide a robust topology with high availability, but come at a much higher cost.<br />Pros: Uses a circuit-level network gateway to route traffic.<br />Cons: Higher costs compared to software versions.<br />Software Load Balancing<br />The most commonly used load balancers are software based, and often come as an integrated component of expensive web server and application server software packages.<br />Pros: Cheaper than hardware load balancers. More configurable based on requirements. Can incorporate intelligent routing based on multiple input parameters.<br />Cons: Additional hardware must be provided to isolate the load balancer.<br />
Reduce latency and maximize uptime<br />Server load balancing increases the efficiency of your server farm, keeping applications running if servers go down and forwarding computing requests to the most appropriate server.<br /><ul><li>Maximize performance</li></ul>Increase the performance of your server farm by running distributed applications, where the load balancer forwards end-user requests to application servers based on pre-defined rules and policies.<br /><ul><li>Increase scalability</li></ul>Load balancers allow new virtual and/or physical servers to be added transparently, maximizing flexibility and allowing server applications to scale without disruption.<br /><ul><li>Maintain persistence</li></ul>Persistence is needed by certain server applications that manage client state on the server side.<br />
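One common way to provide persistence is source-IP hashing: the same client is always mapped to the same real server, so server-side session state survives across requests. This is a minimal sketch with hypothetical addresses, not a specific product's algorithm.

```python
# Sketch of sticky sessions via source-IP hashing: a client's IP
# deterministically selects one server from the pool.
import zlib

SERVERS = ["192.168.0.10", "192.168.0.11", "192.168.0.12"]

def sticky_server(client_ip):
    """Map a client deterministically onto one real server."""
    # crc32 is stable across runs, unlike Python's built-in hash()
    return SERVERS[zlib.crc32(client_ip.encode()) % len(SERVERS)]

first = sticky_server("198.51.100.7")
# Repeated requests from the same client land on the same server:
assert all(sticky_server("198.51.100.7") == first for _ in range(5))
print(first)
```

Other persistence mechanisms (cookie insertion, SSL session ID tracking) achieve the same goal at higher layers of the stack.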