Load balancing has traditionally been used to share a workload among a set of available resources. In a web server farm, load balancing distributes user requests among the web servers in the farm.
Content Aware Request Distribution is a load balancing technique that routes client requests based on the content of each request, in addition to information about the load on the server nodes (back-end nodes).
Content Aware Request Distribution has several advantages over the low-level layer switching techniques used in state-of-the-art commercial products [IBM00]. It can improve locality in the back-end servers' main-memory caches, increase secondary storage scalability by partitioning the server's database, and make it possible to employ back-end server nodes that are specialized for certain types of requests (e.g., audio or video).
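The idea can be illustrated with a minimal sketch (illustrative only; the pool names, suffix list, and load counters below are assumptions, not part of any product described here): the dispatcher consults both the request's content (its URL path) and the current load on the back-end nodes.

```python
# Hypothetical content-aware dispatcher: media requests go to specialized
# nodes, everything else to general-purpose nodes; ties are broken by load.
BACKENDS = {
    "media": [{"name": "media-1", "load": 0}, {"name": "media-2", "load": 0}],
    "general": [{"name": "web-1", "load": 0}, {"name": "web-2", "load": 0}],
}

MEDIA_SUFFIXES = (".mp3", ".mp4", ".avi", ".wav")

def dispatch(url_path):
    """Pick a back-end node using request content plus server load."""
    pool = "media" if url_path.endswith(MEDIA_SUFFIXES) else "general"
    backend = min(BACKENDS[pool], key=lambda b: b["load"])
    backend["load"] += 1          # count the request as outstanding load
    return backend["name"]
```

A purely layer-4 switch could not make the media/general distinction, since it never sees the URL.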
The Intel PA100 is a network processor designed to run network applications at wire speed. It differs from general-purpose processors in that its hardware is specifically designed to handle packets efficiently. We chose the Intel PA100 because it provides a programming framework used by current and future implementations of Intel's network processors.
To our knowledge, no prior study has designed and implemented multiple load balancing systems on the Intel PA100 network processor, nor compared the advantages of a content-based switching system over traditional load balancing mechanisms. Our purpose is to use the PA100 as a front-end device that directs each incoming request to one server in a farm of back-end servers, using different load balancing mechanisms.
In this thesis, we also implement and evaluate the impact that different load balancing algorithms have on the PA100 network processor architecture. The algorithms analyzed are Locality Aware Request Distribution (LARD) and Weighted Round Robin (WRR). According to [Pai98], LARD achieves high cache hit rates and good load balancing in a cluster server; [Zhang] further confirms that focusing on locality can lead to significant improvements in cluster throughput. WRR is attractive because of its simplicity and speed.
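The two algorithms can be sketched as follows. The WRR helper simply repeats each server in proportion to its weight; the LARD class follows the basic strategy of [Pai98], routing each target to its assigned node unless that node is overloaded while a lightly loaded node exists. The threshold values and class names here are illustrative assumptions, not the values used in our implementation.

```python
import itertools

def wrr_cycle(weights):
    """Weighted Round Robin sketch: weights maps server -> integer weight."""
    ring = [s for s, w in weights.items() for _ in range(w)]
    return itertools.cycle(ring)

# Basic LARD per [Pai98]: T_LOW / T_HIGH are tunable load thresholds
# (values below are placeholders).
T_LOW, T_HIGH = 25, 65

class Lard:
    def __init__(self, nodes):
        self.load = {n: 0 for n in nodes}   # outstanding requests per node
        self.server = {}                    # target (URL) -> assigned node

    def pick(self, target):
        n = self.server.get(target)
        least = min(self.load, key=self.load.get)
        if n is None:
            n = least                       # first time: least-loaded node
        elif (self.load[n] > T_HIGH and self.load[least] < T_LOW) \
                or self.load[n] >= 2 * T_HIGH:
            n = least                       # reassign an overloaded target
        self.server[target] = n
        self.load[n] += 1
        return n
```

Because LARD sends repeat requests for the same target to the same node, that node's main-memory cache is likely to already hold the requested content, which is the source of the throughput gains reported in [Pai98].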
We also implement a TCP handoff protocol proposed in [Hunt97], which allows the front end, after inspecting the content of a request, to hand the connection off to a back-end node in a manner transparent to the client.
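The control flow of the handoff can be sketched as a toy simulation (no real networking; the class and field names are illustrative, not taken from [Hunt97]): the front end completes the handshake and reads the request, then the chosen back-end node adopts the connection's TCP state and answers the client directly.

```python
class Connection:
    """Minimal stand-in for per-connection TCP state held by the front end."""
    def __init__(self, client, seq, ack):
        self.client, self.seq, self.ack = client, seq, ack
        self.owner = "frontend"        # who currently holds the connection

def hand_off(conn, request, choose_backend):
    """Transfer ownership of the connection after a content-aware choice."""
    backend = choose_backend(request)  # inspect content, pick a node
    conn.owner = backend               # back end adopts seq/ack state as-is
    return backend

# The client never sees the transfer: its sequence numbers are unchanged.
conn = Connection(client="10.0.0.7", seq=1000, ack=2000)
backend = hand_off(conn, "GET /video.mp4",
                   lambda r: "media-node" if r.endswith(".mp4") else "web-node")
```

The key property being simulated is transparency: the TCP state (sequence and acknowledgment numbers) moves to the back end unmodified, so the client continues the conversation unaware that a different machine is now replying.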