Why should I care?
• Load balancing as a service (LBaaS) is expected from cloud services targeting critical applications.
• Load balancers are a crucial part of:
– Availability
– Scalability
– Manageability
Radware Involvement in OpenStack
• Radware joined OpenStack in Dec 2011
• Planning of LBaaS for Grizzly and Havana
• Contributor to the Networking/LBaaS project
Agenda
• LBaaS History
• LBaaS in Grizzly
• Focus Areas for Havana
– Multi-vendor Support
– Tenant API
– Network Topologies
Notes
• One HAProxy process per VIP
• VIP / pool members on the same network / subnet
• NAT only
• The model is actionable on the device/instance only when it is completely defined
• Does not support multiple network nodes
• Does not support HA for the service
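The "actionable only when completely defined" note above can be sketched as a completeness check run before any configuration is pushed to an HAProxy instance. All names and the dict layout here are hypothetical illustrations, not actual Quantum/LBaaS code:

```python
# Hypothetical sketch: a VIP's configuration is deployed only once every
# referenced object (pool, members, health monitor) is fully defined.

def is_deployable(vip: dict) -> bool:
    """Return True when the logical model behind a VIP is complete."""
    pool = vip.get("pool")
    if not pool:
        return False
    # NAT-only model: VIP and members must share the same subnet.
    if any(m["subnet_id"] != vip["subnet_id"] for m in pool.get("members", [])):
        return False
    return bool(pool.get("members")) and bool(pool.get("health_monitors"))

vip = {
    "subnet_id": "subnet-1",
    "pool": {
        "members": [{"address": "10.0.0.10", "subnet_id": "subnet-1"}],
        "health_monitors": ["hm-1"],
    },
}
print(is_deployable(vip))  # True for this fully defined model
```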
OpenStack/Networking/LBaaS – Highlights for Havana
• Multiple load balancing technologies and vendors can be used in parallel
• Service types as a way to specify the required service (e.g. Platinum, Gold, Silver)
• The solution can be used out of the box with a default open source load balancer driver
Multi-vendor Support
• Vendor/driver selection should be done in the LBaaS plug-in running inside Quantum
– Based on service type
– Based on how service insertion is handled
• Device provisioning and selection (AKA scheduling) is the responsibility of the driver
– Shared libraries could assist but should not be mandatory (e.g. a scheduling library)
• Should allow different service models
– NS based
– Service VM based
– HW appliance based
– Other
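The service-type based selection described above can be sketched as a plugin that maps each tenant-visible service type to a vendor driver, with the default open source driver alongside a vendor one. Class and method names are illustrative assumptions, not the actual plugin API:

```python
# Hypothetical sketch of service-type based driver dispatch inside the
# LBaaS plug-in; all names here are illustrative only.

class LoadBalancerDriver:
    def create_vip(self, context, vip):
        raise NotImplementedError

class HaproxyDriver(LoadBalancerDriver):
    # Stands in for the default open source driver.
    def create_vip(self, context, vip):
        return f"haproxy: created VIP {vip['id']}"

class VendorDriver(LoadBalancerDriver):
    # Stands in for a vendor appliance driver used in parallel.
    def create_vip(self, context, vip):
        return f"vendor appliance: created VIP {vip['id']}"

class LbaasPlugin:
    """Maps a tenant-visible service type to a vendor driver."""
    def __init__(self):
        self.drivers = {"Silver": HaproxyDriver(), "Platinum": VendorDriver()}

    def create_vip(self, context, vip):
        driver = self.drivers[vip["service_type"]]
        return driver.create_vip(context, vip)

plugin = LbaasPlugin()
print(plugin.create_vip(None, {"id": "vip-1", "service_type": "Silver"}))
```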
LBaaS Driver
• The Driver API is similar to the LBaaS plugin API; the plugin delegates handling of the message to the driver and passes itself as a parameter.
• HA is complex and should be managed by each vendor according to its needs:
– Allocating QPorts and managing IP address allocation must be done in the LBaaS plugin/driver and not on an agent, since some of these capabilities exist only when embedded in the Quantum plug-in
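The "plugin passes itself as a parameter" pattern above can be sketched as follows: the driver receives the plugin reference so it can use plugin-only capabilities such as port/IP allocation. The `allocate_port` helper and the dict shapes are hypothetical, introduced only for this sketch:

```python
# Hypothetical sketch: the plugin delegates to the driver and hands
# itself over, so the driver can use plugin-only capabilities.

class Driver:
    def create_vip(self, plugin, context, vip):
        # Port/IP allocation must happen via the plugin, not on an agent.
        port = plugin.allocate_port(vip["subnet_id"])
        return {"vip": vip["id"], "port": port}

class Plugin:
    def __init__(self):
        self.driver = Driver()
        self._next_port = 0

    def allocate_port(self, subnet_id):
        # Stand-in for real Quantum port/IP allocation.
        self._next_port += 1
        return f"{subnet_id}/port-{self._next_port}"

    def create_vip(self, context, vip):
        # Mirrors the driver call signature; delegates and passes itself.
        return self.driver.create_vip(self, context, vip)

p = Plugin()
print(p.create_vip(None, {"id": "vip-1", "subnet_id": "subnet-a"}))
```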
LBaaS Driver
• Handling async operations
– Message queues between driver and agent
– Callback threads with an ITC queue
• Connecting physical appliances to the Quantum network still lacks API capabilities that would allow, for example, connecting a VLAN-based appliance to Quantum via an L2/L3 network gateway.
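The "callback threads with an ITC queue" option above can be sketched with standard in-process queues: the driver enqueues work, a worker thread standing in for a vendor agent consumes it, and completion is reported back on a second queue. This is a minimal illustration, not the actual driver/agent transport:

```python
# Hypothetical sketch of async driver<->agent handling via queues:
# the driver enqueues work; a thread reports completion back.

import queue
import threading

tasks: "queue.Queue" = queue.Queue()
results: "queue.Queue" = queue.Queue()

def agent_worker():
    # Stands in for a vendor agent consuming driver messages.
    while True:
        task = tasks.get()
        if task is None:
            break  # shutdown sentinel
        results.put(f"done: {task['op']} {task['vip']}")
        tasks.task_done()

t = threading.Thread(target=agent_worker, daemon=True)
t.start()

tasks.put({"op": "create_vip", "vip": "vip-1"})
tasks.put(None)  # shut the worker down
t.join()
print(results.get())  # done: create_vip vip-1
```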
Tenant API
• Support multiple vendors at the same time
• How to expose LBaaS vendors’ unique capabilities
• Validate/update the Grizzly tenant API
Remarks on the current model
• Health monitor as a global entity
– The model was derived from vendors who can reuse a health monitor within the boundary of a device
– Managing a health monitor over multiple instances is error prone, since updates should be done “atomically”
– Options:
• Use the health monitor definition globally, but make a copy when attaching it to a pool
• Manage the health monitor on the pool rather than globally
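The first option above ("copy when attaching to a pool") can be sketched with a deep copy: once a global health monitor definition is attached, the pool owns its own copy, so later edits to the global template cannot leave pools half-updated. The names and dict layout are hypothetical:

```python
# Hypothetical sketch of the copy-on-attach option for health monitors.

import copy

health_monitor_templates = {
    "hm-http": {"type": "HTTP", "delay": 5, "timeout": 3},
}

def attach_monitor(pool: dict, template_id: str) -> None:
    # Deep copy: the pool owns its monitor from here on, so a later
    # template update never partially propagates to attached pools.
    pool.setdefault("health_monitors", []).append(
        copy.deepcopy(health_monitor_templates[template_id]))

pool = {"id": "pool-1"}
attach_monitor(pool, "hm-http")
health_monitor_templates["hm-http"]["delay"] = 30  # edit the template...
print(pool["health_monitors"][0]["delay"])  # ...the pool still sees 5
```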
Remarks on the current model
• Since the model is actionable only when fully defined, does it make sense to keep managing it as a “flat” model, or should it be hierarchical under the VIP?
Network Topologies
• LB between two networks: the case where the VIP and pool are assigned to different subnets
• Adding SNAT and DSR on top of the current NAT implementation (extension to the L3 agent?)
• Can the LB replace the L3 gateway?