Client-side Load Balancer using Cloud

Sewook Wee
Accenture Technology Labs
50 W. San Fernando Street, Suite 1200
San Jose, California, USA
firstname.lastname@example.org

Huan Liu
Accenture Technology Labs
50 W. San Fernando Street, Suite 1200
San Jose, California, USA
email@example.com

ABSTRACT
Web applications' traffic demand fluctuates widely and unpredictably. The common practice of provisioning a fixed capacity would either result in unsatisfied customers (under-provisioning) or waste valuable capital investment (over-provisioning). By leveraging an infrastructure cloud's on-demand, pay-per-use capabilities, we can finally match the capacity with the demand in real time. This paper investigates how we can build a large-scale web server farm in the cloud. Our performance study shows that using existing cloud components and optimization techniques, we cannot achieve high scalability. Instead, we propose a client-side load balancing architecture, which can scale and handle failure on a millisecond time scale. We experimentally show that our architecture achieves high throughput in a cloud environment while meeting QoS requirements.

1. INTRODUCTION
An infrastructure cloud, such as Amazon's EC2/S3 services, promises to fundamentally change the economics of computing. First, it provides practically unlimited infrastructure capacity (e.g., computing servers, storage, or network) on demand. Instead of grossly over-provisioning upfront due to uncertain demand, users can elastically provision infrastructure resources from the provider's pool only when needed. Second, the pay-per-use model allows users to pay for the actual consumption instead of for the peak capacity. Third, a cloud infrastructure is much larger than most enterprise data centers. The economy of scale, both in hardware procurement and in infrastructure management and maintenance, helps to drive down the infrastructure cost further.

These characteristics make the cloud an attractive infrastructure solution especially for web applications, due to their variable loads. Because web applications can see a dramatic difference between their peak load (such as during a flash crowd) and their normal load, a traditional infrastructure is ill-suited for them. We either grossly over-provision for the potential peak, thus wasting valuable capital, or provision for the normal load but cannot handle the peak when it does materialize. Using the elastic provisioning capability of a cloud, a web application can ideally provision its infrastructure to track the load in real time and pay only for the capacity needed to serve the real application demand.

Due to the large infrastructure capacity a cloud provides, there is a common myth that an application can scale up unlimitedly and automatically when application demand increases. In reality, our study shows that scaling an application in a cloud is more difficult, because a cloud is very different from a traditional enterprise infrastructure in at least several respects.

First, in enterprises, application owners can choose an optimal infrastructure for their applications among various options from various vendors. In comparison, a cloud infrastructure is owned and maintained by the cloud provider. Because of their commodity business model, cloud providers only offer a limited set of infrastructure components. For example, Amazon EC2 offers only 5 types of virtual servers, and application owners cannot customize their specifications.

Second, again due to its commodity business model, a cloud typically provides only commodity Virtual Machines (VMs). Their computation power and network bandwidth are typically lower than those of high-end servers. For example, all Amazon VMs can transmit at most at roughly 800 Mbps, whereas commercial web servers routinely have several Network Interface Cards (NICs), each capable of at least 1 Gbps. The commodity VMs require us to use horizontal scaling to increase the system capacity.

Third, unlike in an enterprise, application owners have little or no control over the underlying cloud infrastructure. For example, for security reasons, Amazon EC2 disables many networking-layer features, such as ARP, promiscuous mode, IP spoofing, and IP multicast, and application owners have no ability to change these infrastructure features. Many performance optimization techniques rely on the choice of, and control over, the infrastructure. For example, to scale a web application, application owners would either ask for a hardware load balancer or ask for the ability to assign the same IP address to all web servers to achieve load balancing. Unfortunately, neither option is available in Amazon EC2.

Last, commodity machines are likely to fail more frequently. Any architecture designed for the cloud must handle machine failures quickly, ideally in a few milliseconds or faster, in order not to disrupt service frequently.

Because of these characteristics, cloud-hosted web applications tend to run on a cluster with many standard commodity servers.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SAC'10 March 22-26, 2010, Sierre, Switzerland.
Copyright 2010 ACM 978-1-60558-638-0/10/03 ...$10.00.
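To make the economic argument concrete, the gap between provisioning for peak and paying per use can be sketched with a toy calculation. The hourly rate and the 24-hour demand profile below are hypothetical numbers chosen only for illustration, not figures from this paper:

```python
# Illustrative cost comparison (hypothetical numbers): fixed provisioning
# for the peak vs. elastic pay-per-use provisioning that tracks demand.

HOURLY_RATE = 0.10  # assumed price per server-hour (hypothetical)

# Assumed 24-hour demand profile (servers needed per hour): low overnight
# load, moderate daytime load, and a short two-hour "flash crowd" peak.
demand = [2] * 8 + [5] * 8 + [40] * 2 + [5] * 6

peak = max(demand)
fixed_cost = peak * len(demand) * HOURLY_RATE  # provision for peak, all 24 h
elastic_cost = sum(demand) * HOURLY_RATE       # pay only for servers used

print(f"fixed:   ${fixed_cost:.2f}")    # capacity sized for the flash crowd
print(f"elastic: ${elastic_cost:.2f}")  # capacity tracks the real demand
```

Even in this small example, fixed provisioning for the two-hour peak costs several times more than elastic provisioning; the sharper the flash crowd, the larger the gap.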
2.3 Layer 2 Optimization
In an enterprise, where one can fully control the infrastructure, we can apply layer 2 optimization techniques to build a scalable web server farm that avoids the drawbacks discussed above: expensive hardware, a single performance bottleneck, and lack of adaptiveness.

There are several variations of layer 2 optimization. One way, referred to as direct web server return, is to have a set of web servers that all have the same IP address but different layer 2 (MAC) addresses. A browser request may first hit one web server, which may in turn load-balance the request to other web servers. However, when replying to a browser request, any web server can reply directly. By removing the constraint that all replies have to go through the same server, we can achieve higher scalability. This technique requires the ability to dynamically change the mapping between an IP address and a layer 2 address at the router level.

Another variation, TCP handoff, works in a slightly different way. A browser first establishes a TCP connection with a front-end dispatcher. Before any data transfer occurs, the dispatcher transfers the TCP state to one of the back-end servers, which takes over the communication with the client. This technique again requires the ability for the back-end servers to masquerade as the dispatcher's IP address.

Unfortunately, this ability could open the door to security exploits. For example, one can intercept all packets targeting a host by launching another host with the same IP address. Because of these security concerns, Amazon EC2 disables all layer 2 capabilities, so any layer 2 technique to scale an application will not work in the Amazon cloud.

2.4 Client Load Balancing
The concept of client-side load balancing is not new. One existing approach, used in an earlier version of NetScape, requires modification of the browser. Given the diversity of web browsers available today, it is difficult to make sure that the visitors to a web site have the required modification.

Smart Client, developed as part of the WebOS project, requires Java Applets to perform load balancing at the client. Unfortunately, it has several drawbacks. First, Java Applets require the Java Virtual Machine, which is not available by default in most browsers; this is especially true in the mobile environment. Second, if the user accidentally agrees, a Java Applet could gain full access to the client machine, leaving open a big security vulnerability. Third, many organizations only allow administrators to install software, so users cannot view applets by default. Fourth, a Java Applet is an application; the HTML page navigation structure is lost when navigating within the applet. Last, Smart Client still relies on a central server to download the Java Applet and the server list, which still presents a single point of failure and a scalability bottleneck.

3. NEW ARCHITECTURE
Since traditional techniques for designing a scalable web server farm do not work in a cloud environment, we need to devise new techniques which leverage scalable cloud components while getting around their limitations.

3.1 Cloud Components' Performance Characteristics
We have run an extensive set of performance tests on a number of cloud components to understand their performance limits. The results are reported in a technical report and are not reproduced here due to space limitations (we do not cite it to facilitate double-blind review). A few key observations from the performance study motivated our design. They are as follows:

• When used as a web server, a cloud virtual machine has a much smaller capacity than a dedicated web server. For example, in our test, an m1.small Amazon instance is only able to handle roughly 1,000 SPECweb2005 sessions. In contrast, as reported on the SPECweb2005 website, a dedicated web server can handle up to 11,000 sessions.

• When used as a software load balancer, a cloud virtual machine has an even more limited capability. For example, because of traffic relaying, an Amazon instance can only handle 400 Mbps of client traffic, half of what the network interface can support (800 Mbps). Similarly, we found that Google App Engine cannot support high traffic when used as a load balancer. Thus, a different mechanism is needed in order to scale higher.

• Amazon S3 is designed for static content distribution, with a high degree of scalability built in. In our tests, it can sustain at least 16 Gbps of throughput. However, S3 can only host static files; it has no ability to run any server-side processing such as JSP, CGI, or PHP.

3.2 Overview and Originality
We present a new web server farm architecture: client-side load balancing. With this architecture, a browser decides which web server it will communicate with among the available back-end web servers in the farm. The decision is based on information that lists the web servers' IP addresses and their individual loads.

The new architecture gets around the limitations posed by cloud components and achieves a high degree of scalability with infrastructure components currently available from cloud providers. Compared to a software load balancer, our architecture has no single point of scalability bottleneck: the browser chooses the back-end web server from the list of servers, and communication flows directly between the browser and the chosen back-end web server. Compared to the DNS aliasing technique, our architecture has finer load balancing granularity and adaptiveness. Because the client's browser makes the decision, we can distribute traffic at session granularity. Moreover, changes in the web server list due to auto-scaling or failures can be quickly propagated to browsers (in milliseconds), so that clients do not experience extended congestion or outages. This is because the server information is not cached for days; rather, it is updated whenever a session is created. We achieve high scalability without requiring the sophisticated infrastructure control that layer 2 optimization needs; instead, the IP addresses of the web servers and their individual load information are sufficient.
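The client-side decision step described above can be sketched in a few lines. The paper does not give code for this step, so the function name, the load-aware weighting scheme, and the example IP addresses below are our own illustration of one plausible policy: the client downloads the server list (IP address plus reported load) once per session and picks a back-end weighted toward lightly loaded servers.

```python
import random

def choose_server(servers, rng=random):
    """Pick one back-end web server IP from a per-session server list.

    servers: list of (ip, load) pairs, with load a utilization in [0, 1).
    A server's selection weight is its spare capacity (1 - load), so a
    lightly loaded server is proportionally more likely to be chosen.
    This weighting is an illustrative assumption, not the paper's policy.
    """
    weights = [max(1.0 - load, 0.0) for _, load in servers]
    if sum(weights) == 0:  # all servers saturated: fall back to uniform choice
        return rng.choice(servers)[0]
    ips = [ip for ip, _ in servers]
    return rng.choices(ips, weights=weights, k=1)[0]

# Hypothetical server list a client might download when a session starts.
server_list = [("10.0.0.1", 0.9), ("10.0.0.2", 0.2), ("10.0.0.3", 0.5)]
print(choose_server(server_list))
```

Because the list is fetched per session rather than cached for days, removing a failed server's entry takes effect for new sessions immediately, which is the millisecond-scale failure handling the architecture aims for.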