White Paper

10 Gigabit Ethernet Virtual Data Center Architectures

Introduction

Consolidation of data center resources offers an opportunity for architectural transformation based on the use of scalable, high density, high availability technology solutions, such as high port-density 10 GbE switch/routers, cluster and grid computing, blade or rack servers, and network attached storage. Consolidation also opens doors for virtualization of applications, servers, storage, and networks. This suite of highly complementary technologies has now matured to the point where mainstream adoption in large data centers has been occurring for some time. According to a recent Yankee Group survey of both large and smaller enterprises, 62% of respondents already have a server virtualization solution at least partially in place, while another 21% plan to deploy the technology over the next 12 months.

A consolidated and virtualized 10 GbE data center offers numerous benefits:

• Lower OPEX/CAPEX and TCO through reduced complexity, reductions in the number of physical servers and switches, improved lifecycle management, and better human and capital resource utilization
• Increased adaptability of the network to meet changing business requirements
• Reduced requirements for space, power, cooling, and cabling. For example, in power/cooling (P/C) alone, the following savings are possible:
  – Server consolidation via virtualization: up to 50–60% of server P/C
  – Server consolidation via blade or rack servers: up to an additional 20–30% of server P/C
  – Switch consolidation with high density switching: up to 50% of switch P/C
• Improved business continuance and compliance with regulatory security standards

The virtualized 10 GbE data center also provides the foundation for a service oriented architecture (SOA). From an application perspective, SOA is a virtual application architecture where the application is comprised of a set of component services (e.g., implemented with web services) that may be distributed throughout the data center or across multiple data centers. SOA's emphasis on application modularity and re-use of application component modules enables enterprises to readily create high level application services that encapsulate existing business processes and functions, or address new business requirements. From an infrastructure perspective, SOA is a resource architecture where applications and services draw on a shared pool of resources rather than having physical resources rigidly dedicated to specific applications.

The application and infrastructure aspects of SOA are highly complementary. In terms of applications, SOA offers a methodology to dramatically increase productivity in application creation/modification, while the SOA-enabled infrastructure, embodied by the 10 GbE virtual data center, dramatically improves the flexibility, productivity, and manageability of delivering application results to end users by drawing on a shared pool of virtualized computing, storage, and networking resources.
This document provides guidance in designing consolidated, virtualized, and SOA-enabled data centers based on the ultra high port-density 10 GbE switch/router products of Force10 Networks in conjunction with other specialized hardware and software components provided by Force10 technology partners, including those offering:

• Server virtualization and server management software
• iSCSI storage area networks
• GbE and 10 GbE server NICs featuring I/O virtualization and protocol acceleration
• Application delivery switching, load balancers, and firewalls
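To see how the power/cooling savings listed in the introduction above compound, the short Python calculation below works through a hypothetical example. The 55% and 25% factors are simply midpoints of the quoted ranges, and the baseline kW values are invented for illustration; none of these numbers are measured results.

    # Illustrative power/cooling (P/C) savings estimate using midpoints of the
    # ranges cited above; actual results depend on the specific environment.
    server_pc_kw = 100.0   # hypothetical baseline server P/C load (kW)
    switch_pc_kw = 20.0    # hypothetical baseline switch P/C load (kW)

    after_virtualization = server_pc_kw * (1 - 0.55)        # ~50-60% server P/C savings
    after_blades = after_virtualization * (1 - 0.25)        # additional ~20-30% savings
    after_switch_consolidation = switch_pc_kw * (1 - 0.50)  # ~50% switch P/C savings

    total_before = server_pc_kw + switch_pc_kw
    total_after = after_blades + after_switch_consolidation
    print(f"P/C load: {total_before:.0f} kW -> {total_after:.0f} kW "
          f"({100 * (1 - total_after / total_before):.0f}% reduction)")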
The Foundation for a Service Oriented Architecture

Over the last several years data center managers have had to deal with the problem of server sprawl to meet the demand for application capacity. As a result, the prevalent legacy enterprise data center architecture has evolved as a multi-tier structure patterned after high volume websites. Servers are organized into three separate tiers of the data center network comprised of web or front-end servers, application servers, and database/back-end servers, as shown in Figure 1. This architecture has been widely adapted to enterprise applications such as ERP and CRM that support web-based user access.

Figure 1. Legacy three-tier data center architecture

Multiple tiers of physically segregated servers as shown in Figure 1 are frequently employed because a single tier of aggregation and access switches may lack the scalability to provide the connectivity and aggregate performance needed to support large numbers of servers. The ladder structure of the network shown in Figure 1 also minimizes the traffic load on the data center core switches because it isolates intra-tier traffic, web-to-application traffic, and application-to-database traffic from the data center core.

While this legacy architecture has performed fairly well, it has some significant drawbacks. The physical segregation of the tiers requires a large number of devices, including three sets of Layer 2 access switches, three sets of Layer 2/Layer 3 aggregation switches, and three sets of appliances such as load balancers, firewalls, IDS/IPS devices, and SSL offload devices that are not shown in the figure. The proliferation of devices is further exacerbated by dedicating a separate data center module similar to that shown in Figure 1 to each enterprise application, with each server running a single application or application component. This physical application/server segregation typically results in servers that are, on average, only 20% utilized, wasting 80% of server capital investment and support costs. The inefficiency of dedicated physical resources per application is the driving force behind ongoing efforts to virtualize the data center.

The overall complexity of the legacy design has a number of undesirable side-effects:

• The infrastructure is difficult to manage, especially when additional applications or application capacity is required
• Optimizing performance requires fairly complex traffic engineering to ensure that traffic flows follow predictable paths
• When load balancers, firewalls, and other appliances are integrated within the aggregation switch/router to reduce box count, it may be necessary to use active-passive redundancy configurations rather than the more efficient active-active redundancy more readily achieved with stand-alone appliances. Designs calling for active-passive redundancy for appliances and switches in the aggregation layer require twice as much throughput capacity as active-active redundancy designs
• The total cost of ownership (TCO) is high due to low resource utilization levels combined with the impact of complexity on downtime and on the requirements for power, cooling, space, and management time
Design Principles for the Next Generation Virtual Data Centers

The Force10 Networks approach to next generation data center designs is to build on the legacy architecture's concept of modularity, but to greatly simplify the network while significantly improving its efficiency, scalability, reliability, and flexibility, resulting in a much lower total cost of ownership. This is accomplished by consolidating and virtualizing the network, computing, and storage resources, resulting in an SOA-enabled data center infrastructure.

Following are the key principles of data center consolidation and virtualization upon which the Virtual Data Center Architecture is based:

POD Modularity: A POD (point of delivery) is a group of compute, storage, network, and application software components that work together to deliver a service or application. The POD is a repeatable construct, and its components must be consolidated and virtualized to maximize the modularity, scalability, and manageability of data centers. Depending on the architectural model for applications, a POD may deliver a high level application service or it may provide a single component of an SOA application, such as a web front end or database service. Although the POD modules share a common architecture, they can be customized to support a tiered services model. For example, the security, resiliency/availability, and QoS capabilities of an individual POD can be adjusted to meet the service level requirements of the specific application or service that it delivers. Thus, an eCommerce POD would be adapted to deliver the higher levels of security/availability/QoS required vs. those suitable for lower tier applications, such as email.

Server Consolidation and Virtualization: Server virtualization based on virtual machine (VM) technology, such as VMware ESX Server, allows numerous virtual servers to run on a single physical server, as shown in Figure 2. Virtualization provides the stability of running a single application per (virtual) server, while greatly reducing the number of physical servers required and improving utilization of server resources. VM technology also greatly facilitates the mobility of applications among virtual servers and the provisioning of additional server resources to satisfy fluctuations in demand for critical applications.

Figure 2. Simplified view of virtual machine technology

Server virtualization and cluster computing are highly complementary technologies for fully exploiting emerging multi-core CPU microprocessors. VM technology provides robustness in running multiple applications per core plus facilitating mobility of applications across VMs and cores. Cluster computing middleware allows multiple VMs or multiple cores to collaborate in the execution of a single application. For example, VMware Virtual SMP™ enables a single virtual machine to span multiple physical cores, virtualizing processor-intensive enterprise applications such as ERP and CRM. The VMware Virtual Machine File System (VMFS) is a high-performance cluster file system that allows clustering of virtual machines spanning multiple physical servers. By 2010, the number of cores per server CPU is projected to be in the range of 16–64, with network I/O requirements in the 100 Gbps range. Since most near-term growth in chip-based CPU performance will come from higher core count rather than increased clock rate, data centers requiring higher application performance will need to place increasing emphasis on technologies such as cluster computing and Virtual SMP.

NIC Virtualization: With numerous VMs per physical server, network virtualization has to be extended to the server and its network interface. Each VM is configured with a virtual NIC that shares the resources of the server's array of real NICs. This level of virtualization, together with a virtual switch capability providing inter-VM switching on a physical server, is provided by VMware Infrastructure software. Higher performance I/O virtualization is possible using intelligent NICs that provide hardware support for I/O virtualization, off-loading the processing supporting protocol stacks, virtual NICs, and virtual switching from the server CPUs. NICs that support I/O virtualization as well as protocol offload (e.g., TCP/IP, RDMA, iSCSI) are available from Force10 technology partners including NetXen, Neterion, Chelsio, NetEffect, and various server vendors. Benchmark results have shown that protocol offload NICs can dramatically improve network throughput and latency for both data applications (e.g., HPC, clustered databases, and web servers) and network storage access (NAS and iSCSI SANs).
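As a rough, back-of-the-envelope illustration of the server consolidation argument above, the sketch below estimates how many virtualized hosts a pool of lightly utilized legacy servers might collapse onto. The 20% legacy utilization figure echoes the earlier discussion of the legacy architecture; the per-host capacity ratio and 65% target utilization are invented planning assumptions, not VMware sizing guidance.

    import math

    # Hypothetical consolidation estimate: legacy servers averaging ~20% utilization
    # are repackaged as VMs on virtualized hosts run at a higher target utilization.
    legacy_servers = 120
    legacy_utilization = 0.20     # from the legacy data center discussion
    host_capacity_ratio = 4.0     # assume a new host has ~4x the capacity of a legacy server
    target_utilization = 0.65     # illustrative planning target, not a VMware figure

    work_units = legacy_servers * legacy_utilization        # aggregate demand to be hosted
    per_host_budget = host_capacity_ratio * target_utilization
    hosts_needed = math.ceil(work_units / per_host_budget)
    print(f"{legacy_servers} legacy servers -> ~{hosts_needed} virtualized hosts "
          f"({legacy_servers / hosts_needed:.0f}:1 consolidation)")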
Network Consolidation and Virtualization: Highly scalable and resilient 10 Gigabit Ethernet switch/routers, exemplified by the Force10 E-Series, provide the opportunity to greatly simplify the network design of the POD module, as well as the data center core. Leveraging VLAN technology together with the E-Series scalability and resiliency allows the distinct aggregation and access layers of the legacy data center design to be collapsed into a single aggregation/access layer of switch/routing, as shown in Figure 3. The integrated aggregation/access switch becomes the basic network switching element upon which a POD is built.

The benefits of a single layer of switch/routing within the POD include reduced switch count, simplified traffic flow patterns, elimination of Layer 2 loops and STP scalability issues, and improved overall reliability. The ultra high density, reliability, and performance of the E-Series switch/router maximizes the scalability of the design model both within PODs and across the data center core. The scalability of the E-Series often enables network consolidations with a >3:1 reduction in the number of data center switches. This high reduction factor is due to the combination of the following factors:

• Elimination of the access switching layer
• More servers per POD aggregation switch, resulting in fewer aggregation switches
• More POD aggregation switches per core switch, resulting in fewer core switches

Figure 3. Consolidation of data center aggregation and access layers

Storage Resource Consolidation and Virtualization: Storage resources accessible over the Ethernet/IP data network further simplify the data center LAN by minimizing the number of separate switching fabrics that must be deployed and managed. 10 GbE switching in the POD provides ample bandwidth for accessing unified NAS/iSCSI IP storage devices, especially when compared to the bandwidth available for Fibre Channel SANs. Consolidated, shared, and virtualized storage also facilitates VM-based application provisioning and mobility since each physical server has shared access to the necessary virtual machine images and required application data. The VMFS provides multiple VMware ESX Servers with concurrent read-write access to the same virtual machine storage. The cluster file system thus enables live migration of running virtual machines from one physical server to another, automatic restart of failed virtual machines on a different physical server, and the clustering of virtual machines.

Global Virtualization: Virtualization should not be constrained to the confines of the POD, but should be capable of being extended to support a pool of shared resources spanning not only a single POD, but also multiple PODs, the entire data center, or even multiple data centers. Virtualization of the infrastructure allows the PODs to be readily adapted to an SOA application model where the resource pool is called upon to respond rapidly to changes in demand for services and to new services being installed on the network.
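The >3:1 switch reduction cited under Network Consolidation and Virtualization above can be illustrated with a simple port-count model. In the sketch below, the server count, per-switch port densities, and fan-out figures are hypothetical round numbers chosen purely for illustration; they are not E-Series specifications.

    import math

    servers = 1200                 # hypothetical server count
    legacy_access_ports = 48       # ports per legacy access switch
    legacy_agg_fanout = 8          # access switches handled per legacy aggregation switch
    collapsed_ports = 300          # server-facing ports per high-density switch/router

    # Legacy: separate access and aggregation layers, each deployed redundantly
    # (dual-homed servers and redundant aggregation pairs), hence the factors of 2.
    access = math.ceil(servers / legacy_access_ports) * 2
    aggregation = math.ceil(access / legacy_agg_fanout) * 2
    legacy_total = access + aggregation

    # Collapsed design: a single aggregation/access layer of high-density switch/routers.
    collapsed_total = math.ceil(servers / collapsed_ports) * 2

    print(f"legacy switches: {legacy_total}, collapsed: {collapsed_total}, "
          f"reduction ~{legacy_total / collapsed_total:.1f}:1")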
Ultra Resiliency/Reliability: As data centers are consolidated and virtualized, resiliency and reliability become even more critical aspects of the network design. This is because the impact of a failed physical resource is now more likely to extend to multiple applications and larger numbers of user flows. Therefore, the virtual data center requires the combination of ultra high resiliency devices, such as the E-Series switch/routers, and an end-to-end network design that takes maximum advantage of active-active redundancy configurations, with rapid failover to standby resources.

Security: Consolidation and virtualization also place increased emphasis on data center network security. With virtualization, application or administrative domains may share a pool of common resources, creating the requirement that the logical segregation among virtual resources be even stronger than the physical segregation featured in the legacy data center architecture. This level of segregation is achieved by having multiple levels of security at the logical boundaries of the resources being protected within the PODs and throughout the data center. In the virtual data center, security is provided by:

• Full virtual machine isolation to prevent ill-behaved or compromised applications from impacting any other virtual machine/application in the environment
• Application and control VLANs to provide traffic segregation
• Wire-rate switch/router ACLs applied to intra-POD and inter-POD traffic
• Stateful virtual firewall capability that can be customized to specific application requirements within the POD
• Security-aware appliances for load balancing and other traffic management and acceleration functions
• IDS/IPS appliance functionality at full wire-rate for real-time protection of critical POD resources from both known intrusion methodologies and day-one attacks
• AAA for controlled user access to the network and network devices to enforce policies defining user authentication and authorization profiles

Overall data center scalability is addressed by configuring multiple PODs connected to a common set of data center core switches to meet application/service capacity, organizational, and policy requirements. In addition to server connectivity, the basic network design of the POD can be utilized to provide other services on the network, such as ISP connectivity, WAN access, etc.

Within an application POD, multiple servers running the same application are placed in the same application VLAN, with appropriate load balancing and security services provided by the appliances. Enterprise applications, such as ERP, that are based on distinct, segregated sets of web, application, and database servers can be implemented within a single tier of scalable L2/L3 switching using server clustering and distinct VLANs for segregation of web servers, application servers, and database servers. Alternatively, where greater scalability is required, the application could be distributed across a web server POD, an application server POD, and a database POD.

Further simplification of the design is achieved using IP/Ethernet storage attachment technologies, such as NAS and iSCSI, with each application's storage resources incorporated within the application-specific VLAN.

Figure 4 provides an overview of the architecture of a consolidated data center based on 10 Gigabit Ethernet switch/routers providing an integrated layer of aggregation and access switching, with Layer 4–Layer 7 services provided by stand-alone appliances. The consolidation of the data center network simplifies deployment of the virtualization technologies that are described in more detail in subsequent sections of this document.

Figure 4. Reference design for the virtual data center
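The wire-rate ACLs listed above can be thought of as ordered permit/deny rules evaluated per flow at an application VLAN boundary. The fragment below is a minimal model of that first-match evaluation; the subnets, VLAN roles, and rules are invented for the example and are not part of the reference design.

    import ipaddress

    # Minimal model of an ACL applied to inter-POD traffic at an application VLAN
    # boundary. Rules are evaluated in order; the first match wins, default deny.
    acl = [
        ("permit", "10.1.10.0/24", "10.2.20.0/24", 1433),  # app VLAN -> database VLAN
        ("permit", "10.1.10.0/24", "10.2.20.0/24", 443),
        ("deny",   "0.0.0.0/0",    "0.0.0.0/0",    None),  # implicit deny, made explicit
    ]

    def acl_permits(src_ip: str, dst_ip: str, dst_port: int) -> bool:
        for action, src_net, dst_net, port in acl:
            if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)
                    and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(dst_net)
                    and (port is None or port == dst_port)):
                return action == "permit"
        return False

    print(acl_permits("10.1.10.5", "10.2.20.9", 1433))  # True: permitted database flow
    print(acl_permits("10.1.10.5", "10.2.20.9", 25))    # False: falls through to deny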
This section of the document focuses on the various design aspects of the consolidated and virtualized data center POD module.

Network Interface Controller (NIC) Teaming

As noted earlier, physical and virtual servers dedicated to a specific application are placed in a VLAN reserved for that application. This simplifies the logical design of the network and satisfies the requirement of many clustered applications for Layer 2 adjacency among nodes participating in the cluster.

In order to avoid single points of failure (SPOF) in the access portion of the network, NIC teaming is recommended to allow each physical server to be connected to two different aggregation/access switches. For example, a server with two teamed NICs, sharing a common IP address and MAC address, can be connected to both POD switches as shown in Figure 5. The primary NIC is in the active state, and the secondary NIC is in standby mode, ready to be activated in the event of a failure in the primary path to the POD. NIC teaming can also be used for bonding several GbE NICs to form a higher speed link aggregation group (LAG) connected to one of the POD switches. As 10 GbE interfaces continue to ride the volume/cost curve, GbE NIC teaming will become a relatively less cost-effective means of increasing bandwidth per server.

Figure 5. NIC teaming for data center servers

NIC Virtualization

When server virtualization is deployed, a number of VMs generally share a physical NIC. Where the VMs are spread across multiple applications, the physical NIC needs to support traffic for multiple VLANs. An elegant solution for multiple VMs and VLANs sharing a physical NIC is provided by VMware ESX Server Virtual Switch Tagging (VST). As shown in Figure 6, each VM's virtual NICs are attached to a port group on the ESX Server Virtual Switch that corresponds to the VLAN associated with the VM's application. The virtual switch then adds 802.1Q VLAN tags to all outbound frames, extending 802.1Q trunking to the server and allowing multiple VMs to share a single physical NIC.

Figure 6. VMware virtual switch tagging with NIC teaming

The overall benefits of NIC teaming and I/O virtualization can be combined with VMware Infrastructure's ESX Server V3.0 by configuring multiple virtual NICs per VM and multiple real NICs per physical server. ESX Server V3.0 NIC teaming supports a variety of fault tolerant and load sharing operational modes in addition to the simple primary/secondary teaming model described at the beginning of this section. Figure 7 shows how VST, together with simple primary/secondary NIC teaming, supports red and green VLANs while eliminating SPOFs in a POD employing server virtualization.
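The combination of primary/secondary NIC teaming and Virtual Switch Tagging described in this section reduces to two simple behaviors: pick the surviving uplink, and tag each VM's outbound frames with the VLAN of its port group. The sketch below models only that logic; the port-group names, VLAN IDs, and data structures are illustrative and do not correspond to the ESX Server API.

    # Toy model of primary/secondary NIC teaming plus VLAN tagging per port group.
    # Names and VLAN IDs are invented for illustration; this is not the ESX API.
    port_group_vlan = {"web": 10, "app": 20, "db": 30}
    team = {"primary": {"nic": "nic0", "up": True},
            "secondary": {"nic": "nic1", "up": True}}

    def active_uplink() -> str:
        """Return the primary NIC while its link is up, else fail over to the secondary."""
        if team["primary"]["up"]:
            return team["primary"]["nic"]
        return team["secondary"]["nic"]

    def send_frame(vm_port_group: str, payload: bytes) -> tuple[str, int, bytes]:
        """Tag the outbound frame with the port group's VLAN and pick an uplink."""
        vlan_id = port_group_vlan[vm_port_group]    # VST: 802.1Q tag added by the vSwitch
        return (active_uplink(), vlan_id, payload)

    print(send_frame("web", b"GET /"))     # ('nic0', 10, b'GET /')
    team["primary"]["up"] = False          # primary path to the POD switch fails
    print(send_frame("web", b"GET /"))     # ('nic1', 10, b'GET /') after failover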
Figure 7. VMware virtual switch tagging with NIC teaming

As noted earlier, for improved server and I/O performance, the virtualization of NICs, virtual switching, and VLAN tagging can be offloaded to intelligent Ethernet adapters that provide hardware support for protocol processing, virtual networking, and virtual I/O.

Layer 2 Aggregation/Access Switching

With a collapsed aggregation/access layer of switching, the Layer 2 topology of the POD is extremely simple, with servers in each application VLAN evenly distributed across the two POD switches. This distributes the traffic across the POD switches, which form an active-active redundant pair. The Layer 2 topology is free from loops for intra-POD traffic. Nevertheless, for extra robustness, it is recommended that application VLANs be protected from loops that could be formed by configuration errors or other faults, using standard practices for MSTP/RSTP.

The simplicity of the Layer 2 network makes it feasible for the POD to support large numbers of real and virtual servers, and also makes it feasible to extend application VLANs through the data center core switch/router to other PODs in the data center or even to PODs in other data centers. When VLANs are extended beyond the POD, per-VLAN MSTP/RSTP is required to deal with possible loops in the core of the network.

In addition, it may also be desirable to allocate applications to PODs in a manner that minimizes data flows between distinct application VLANs within the POD. This preserves the POD's horizontal bandwidth for intra-VLAN communications between clustered servers in the POD and for Ethernet/IP-based storage access.

Layer 3 Aggregation/Access Switching

Figure 8 shows the logical flow of application traffic through a POD. For web traffic from the Internet, traffic is routed in the following way:

1. Internet flows are routed with OSPF from the core to a VLAN/security zone for untrusted traffic based on public, virtual IP addresses (VIPs).
2. Load balancers (LBs) route the traffic to another untrusted VLAN, balancing the traffic based on the private, real IP addresses of the servers. Redundant load balancers are configured with VRRP for gateway redundancy. For load balanced applications, the LBs function as the default virtual gateway.
3. Finally, traffic is routed by firewalls (FWs) to the trusted application VLANs on which the servers reside. The firewalls also use VRRP for gateway redundancy. For applications requiring stateful inspection of flows but no load balancing, the firewalls function as the default virtual gateway.

Intranet traffic would be routed through a somewhat different set of VLAN security zones based on whether load balancing is needed and the degree of trust placed in the source/destination for that particular application flow. In many cases, Intranet traffic would bypass untrusted security zones, with switch/router ACLs providing ample security to allow Intranet traffic to be routed through the data center core directly from one application VLAN to another without traversing load balancing or firewall appliances.

Figure 8. Logical topology for Internet flows
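The numbered Internet flow above amounts to a fixed sequence of next hops, with Intranet flows optionally skipping the untrusted zones. The sketch below encodes just that sequencing as a reading aid; the zone labels and the bypass policy are illustrative stand-ins for the VLAN/security-zone design of Figure 8, not a routing configuration.

    # Illustrative model of the per-flow path through the POD's security zones.
    # Zone names and the bypass rule are examples, not the actual reference design.
    INTERNET_PATH = [
        "core (OSPF route to untrusted VIP zone)",
        "load balancer pair (VRRP virtual gateway, VIP -> real server IP)",
        "firewall pair (VRRP virtual gateway, stateful inspection)",
        "trusted application VLAN",
    ]

    def pod_path(source: str, needs_load_balancing: bool, trusted_source: bool) -> list:
        if source == "internet":
            return INTERNET_PATH
        # Intranet traffic: trusted flows may bypass LB/FW, filtered only by ACLs.
        if trusted_source and not needs_load_balancing:
            return ["data center core (ACL-filtered)", "trusted application VLAN"]
        hops = ["data center core (ACL-filtered)"]
        if needs_load_balancing:
            hops.append("load balancer pair (VRRP virtual gateway)")
        hops.append("firewall pair (stateful inspection)")
        hops.append("trusted application VLAN")
        return hops

    print(pod_path("internet", True, False))
    print(pod_path("intranet", False, True))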
In addition to application VLANs, control VLANs are configured to isolate control traffic among the network devices from application traffic. For example, control VLANs carry routing updates among switch/routers. In addition, a redundant pair of load balancers or stateful firewalls would share a control VLAN to permit traffic flows to fail over from the primary to the secondary appliance without loss of state or session continuity. In a typical network design, trunk links carry a combination of traffic for application VLANs and link-specific control VLANs.

From the campus core switches through the data center core switches, there are at least two equal cost routes to the server subnets. This permits the core switches to load balance Layer 3 traffic to each POD switch using OSPF ECMP routing. Where application VLANs are extended beyond the POD, the trunks to and among the data center core switches will carry a combination of Layer 2 and Layer 3 traffic.

Layer 4-7 Aggregation/Access Switching

Because the POD design is based on stand-alone appliances for Layer 4-7 services (including server load balancing, SSL termination/acceleration, VPN termination, and firewalls), data center designers are free to deploy devices with best-in-class functionality and performance that meet the particular application requirements within each POD. For example, Layer 4-7 devices may support a number of advanced features, including:

• Integrated functionality: For example, load balancing, SSL acceleration, and packet filtering functionality may be integrated within a single device, reducing box count while improving the reliability and manageability of the POD
• Device Virtualization: Load balancers and firewalls that support virtualization allow physical device resources to be partitioned into multiple virtual devices, each with its own configuration. Device virtualization within the POD allows virtual appliances to be devoted to each application, with the configuration corresponding to the optimum device behavior for that application type and its domain of administration
• Active/Active Redundancy: Virtual appliances also facilitate high availability configurations where pairs of physical devices provide active-active redundancy. For example, a pair of physical firewalls can be configured with one set of virtual firewalls customized to each of the red VLANs and a second set customized for each of the green VLANs. The physical firewall attached to a POD switch would have the red firewalls in an active state and its green firewalls in a standby state. The second physical firewall (connected to the second POD switch) would have the complementary configuration. In the event of an appliance or link failure, all of the active virtual firewalls on the failed device would fail over to the standby virtual firewalls on the remaining device.

Resource Virtualization Within and Across PODs

One of the keys to server virtualization within and across PODs is a server management environment for virtual servers that automates operational procedures and optimizes availability and efficiency in utilization of the resource pool.

The VMware Virtual Center provides the server management function for VMware Infrastructure, including ESX Server, VMFS, and Virtual SMP. With Virtual Center, virtual machines can be provisioned, configured, started, stopped, deleted, relocated, and remotely accessed. In addition, Virtual Center supports high availability by allowing a virtual machine to automatically fail over to another physical server in the event of host failure. All of these operations are simplified because virtual machines are completely encapsulated in virtual disk files stored centrally using shared NAS or iSCSI SAN storage. The Virtual Machine File System allows a server resource pool to concurrently access the same files to boot and run virtual machines, effectively virtualizing VM storage.

Virtual Center also supports the organization of ESX Servers and their virtual machines into clusters, allowing multiple servers and virtual machines to be managed as a single entity. Virtual machines can be provisioned to a cluster rather than linked to a specific physical host, adding another layer of virtualization to the pool of computing resources.

VMware VMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, complete transaction integrity, and continuity of network connectivity via the appropriate application VLAN. Live migration of virtual machines enables hardware maintenance without scheduling downtime and the resulting disruption of business operations. VMotion also allows virtual machines to be continuously and automatically optimized within resource pools for maximum hardware utilization, flexibility, and availability.
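Returning to the Active/Active Redundancy bullet above, the red/green virtual firewall arrangement can be modeled as a table of virtual contexts, each active on one physical firewall and standby on the other; when a device fails, every context active on that device flips to its standby peer. The sketch below is only a schematic of that behavior, with invented device and context names.

    # Schematic of active/active redundancy with virtual firewall contexts.
    # "fw1"/"fw2" and the VLAN colors are illustrative placeholders.
    contexts = {
        "red-vlan-10":   {"active": "fw1", "standby": "fw2"},
        "red-vlan-11":   {"active": "fw1", "standby": "fw2"},
        "green-vlan-20": {"active": "fw2", "standby": "fw1"},
        "green-vlan-21": {"active": "fw2", "standby": "fw1"},
    }

    def fail_over(failed_device: str) -> None:
        """Move every context active on the failed device to its standby peer."""
        for name, ctx in contexts.items():
            if ctx["active"] == failed_device:
                ctx["active"], ctx["standby"] = ctx["standby"], ctx["active"]
                print(f"{name}: now active on {ctx['active']}")

    fail_over("fw1")   # red contexts fail over to fw2; green contexts are unaffected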
VMware Distributed Resource Scheduler (DRS) works with VMware Infrastructure to continuously automate the balancing of virtual machine workloads across a cluster in the virtual infrastructure. When guaranteed resource allocation cannot be met on a physical server, DRS will use VMotion to migrate the virtual machine to another host in the cluster that has the needed resources.

Figure 9 shows an example of server resource re-allocation within a POD. In this scenario, a group of virtual and/or physical servers currently participating in cluster A is re-allocated to a second cluster B running another application. Virtual Center and VMotion are used to de-install the cluster A software images from the servers being transferred and then install the required cluster B image, including application, middleware, operating system, and network configuration. As part of the process, the VLAN membership of the transferred servers is changed from VLAN A to VLAN B.

Figure 9. Re-allocation of server resources within the POD

Virtualization of server resources, including VMotion-enabled automated VM failovers and resource re-allocation as described above, can readily be extended across PODs simply by extending the application VLANs across the data center core trunks using 802.1Q VLAN trunking. Therefore, the two clusters shown in Figure 9 could just as well be located in distinct physical PODs. With VLAN extension, a virtual POD can be defined that spans multiple physical PODs. Without this form of POD virtualization, it would be necessary to use patch cabling between physical PODs in order to extend the computing resources available to a given application. Patch cabling among physical PODs is an awkward solution for ad hoc connectivity, especially when the physical PODs are on separate floors of the data center facility.

As noted earlier, the simplicity of the POD Layer 2 network makes this VLAN extension feasible without running the risk of STP-related instabilities. With application VLANs and cluster membership extended throughout the data center, the data center trunks carry a combination of Layer 3 and Layer 2 traffic, potentially with multiple VLANs per trunk, as shown in Figure 10. The 10 GbE links between the PODs provide ample bandwidth to support VM clustering, VMotion transfers and failovers, as well as access to shared storage resources.

Figure 10. Multiple VLANs per trunk
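The DRS behavior described above reduces to a simple rule: when a host can no longer meet a virtual machine's guaranteed allocation, choose another host in the cluster with spare capacity and migrate the VM there with VMotion. The loop below is a bare-bones illustration of that rule; the host names, capacity units, and first-fit selection are invented and are far simpler than the actual scheduler.

    # Bare-bones illustration of DRS-style rebalancing within a cluster.
    # Capacities and demands are arbitrary units; this is not the real algorithm.
    hosts = {"esx1": {"capacity": 10, "vms": {"vm-a": 4, "vm-b": 5}},
             "esx2": {"capacity": 10, "vms": {"vm-c": 2}}}

    def used(host: str) -> int:
        return sum(hosts[host]["vms"].values())

    def rebalance(vm: str, demand: int, current: str) -> None:
        """If the current host cannot guarantee the VM's demand, 'VMotion' it elsewhere."""
        hosts[current]["vms"][vm] = demand
        if used(current) <= hosts[current]["capacity"]:
            return
        for candidate, info in hosts.items():
            if candidate != current and used(candidate) + demand <= info["capacity"]:
                del hosts[current]["vms"][vm]       # migrate the VM to the candidate host
                info["vms"][vm] = demand
                print(f"{vm} migrated {current} -> {candidate}")
                return
        print(f"no host can satisfy {vm}'s guaranteed allocation")

    rebalance("vm-b", 8, "esx1")   # esx1 would be oversubscribed, so vm-b moves to esx2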
Resource Virtualization Across Data Centers

Resource virtualization can also be leveraged among data centers sharing the same virtual architecture. As a result, Virtual Center management of VMotion-based backup and restore operations can provide redundancy and disaster recovery capabilities among enterprise data center sites. This form of global virtualization is based on an N x 10 GbE Inter-Data Center backbone, which carries a combination of Layer 2 and Layer 3 traffic resulting from extending application and control VLANs from the data center cores across the 10 GbE MAN/WAN network, as shown in Figure 11.

Figure 11. Global virtualization

In this scenario, policy routing and other techniques would be employed to keep traffic as local as possible, using remote resources only when local alternatives are not appropriate or not currently available. Redundant Virtual Center server management operations centers ensure the availability and efficient operation of the globally virtualized resource pool even if entire data centers are disrupted by catastrophic events.

Migration from Legacy Data Center Architectures

The best general approach to migrating from a legacy 3-tier data center architecture to a virtual data center architecture is to start at the server level and follow a step-by-step procedure replacing access switches, distribution/aggregation switches, and finally data center core switches. One possible blueprint for such a migration is as follows:

1. Select an application for migration. Upgrade and virtualize the application's servers with VMware ESX Server software and NICs as required to support the desired NIC teaming functionality and/or NIC virtualization. Install VMware Virtual Center in the NOC.
2. Replace existing access switches specific to the chosen application with E-Series switch/routers. Establish a VLAN for the application if necessary and configure the E-Series switch to conform to the existing access networking model.
3. Migrate any remaining applications supported by the set of legacy distribution switches in question to E-Series access switches.
4. Transition load balancing and firewall VLAN connectivity to the E-Series along with OSPF routing among the application VLANs. Existing distribution switches still provide connectivity to the data center core.
5. Introduce new E-Series data center core switch/routers with OSPF and 10 GbE, keeping the existing core routers in place. If necessary, configure OSPF in old core switches and re-distribute routes from OSPF to the legacy routing protocol and vice versa.
6. Remove the set of legacy distribution switches and use the E-Series switches for all aggregation/access functions. At this point, a single virtualized POD has been created.
7. Now the process can be repeated until all applications and servers in the data center have been migrated to integrated PODs. The legacy data center core switches can be removed either before or after full POD migration.
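The policy-routing preference for local resources described under Resource Virtualization Across Data Centers above boils down to an ordered selection: use a healthy resource pool in the local site when one exists, and fall back to a remote site otherwise. The helper below sketches only that preference; the site names and availability flags are invented and do not represent any specific policy-routing feature.

    # Illustrative site-preference selection for a globally virtualized resource pool.
    # Site names and availability flags are invented for the example.
    resource_pools = [
        {"site": "dc-west", "local": True,  "available": False},  # local pool is down
        {"site": "dc-east", "local": False, "available": True},
        {"site": "dc-eu",   "local": False, "available": True},
    ]

    def select_pool(pools):
        """Prefer an available local pool; otherwise use the first available remote pool."""
        for pool in sorted(pools, key=lambda p: not p["local"]):   # local pools sort first
            if pool["available"]:
                return pool
        return None

    print(select_pool(resource_pools))   # falls back to dc-east since dc-west is down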
Summary

As enterprise data centers move through consolidation phases toward next generation architectures that increasingly leverage virtualization technologies, the importance of very high performance Ethernet switch/routers will continue to grow. Switch/routers with ultra high capacity coupled with ultra high reliability/resiliency contribute significantly to the simplicity and attractive TCO of the virtual data center. In particular, the E-Series offers a number of advantages for this emerging architecture:

• Smallest footprint per GbE port or per 10 GbE port due to highest port densities
• Ultra-high power efficiency requiring only 4.7 watts per GbE port, simplifying high density configurations and minimizing the growing costs of power and cooling
• Ample aggregate bandwidth to support unification of aggregation and access layers of the data center network plus unification of data and storage fabrics
• System architecture providing a future-proof migration path to the next generation of Ethernet consolidation/virtualization/unification at 100 Gbps
• Unparalleled system reliability and resiliency featuring:
  – multi-processor control plane
  – control plane and switching fabric redundancy
  – modular switch/router operating system (OS) supporting hitless software updates and restarts

A high performance 10 GbE switched data center infrastructure provides the ideal complement for local and global resource virtualization. The combination of these fundamental technologies as described in this guide provides the basic SOA-enabled modular infrastructure needed to fully support the next wave of SOA application development where an application's component services may be transparently distributed throughout the enterprise data center or even among data centers.

References:

General discussion of Data Center Consolidation and Virtualization: www.force10networks.com/products/pdf/wp_datacenter_convirt.pdf
E-Series Reliability and Resiliency: www.force10networks.com/products/highavail.asp
Next Generation Terabit Switch/Routers: www.force10networks.com/products/nextgenterabit.asp
High Performance Network Security (IPS): www.force10networks.com/products/hp_network_security.asp
iSCSI over 10 GbE: www.force10networks.com/products/iSCSI_10GE.asp
VMware Infrastructure 3 Documentation: www.vmware.com/support/pubs/vi_pubs.html
