QLogic NetSlice™ Technology Addresses I/O Virtualization Needs
Flexible, Multi-Protocol Support Over 10GbE Maximizes Cost-Efficiency and Utilization

Introduction

Server virtualization based on virtual machine (VM) software is becoming a popular solution for consolidation of data center servers. With VM software, a single physical machine can support a number of guest operating systems (OSes), each of which runs on its own complete virtual instance of the underlying physical machine, as shown in Figure 1. The guest OSes can be instances of a single version of one OS, different releases of the same OS, or completely different OSes (for example, Linux®, Windows®, Mac OS X®, or Solaris®).

A thin software layer called a virtual machine monitor (VMM) or hypervisor creates and controls the virtual machines and other virtual subsystems. The VMM also takes complete control of the physical machine and provides resource guarantees for CPU, memory, storage space, and I/O bandwidth for each guest OS. The VMM can also provide a management interface that allows server resources to be dynamically allocated to match temporal changes in user demand for different applications. VMM software is available from independent software vendors (ISVs) such as VMware®, Citrix™ (XenSource), and Virtual Iron® (now part of Oracle®). Microsoft®, a late entrant to this market, has been gaining acceptance for its Hyper-V VMM because it offers the convenience of being integrated into its server OSes, starting with Windows Server® 2008.

[Figure 1. Simplified View of Virtual Machine Technology — applications and guest OSes run inside virtual machines on top of a virtual machine monitor/hypervisor, which in turn runs on the physical machine.]
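To make the resource-guarantee idea concrete, the following is a minimal Python sketch of the kind of per-VM record a hypervisor's admission logic might consult before starting another guest. All names and fields here are illustrative assumptions, not any particular VMM's interface.

```python
from dataclasses import dataclass

@dataclass
class VmGuarantee:
    """Hypothetical per-VM resource guarantee tracked by a VMM."""
    vcpus: int        # logical CPUs reserved for the guest
    memory_mb: int    # guaranteed RAM
    storage_gb: int   # guaranteed storage space
    io_mbps: int      # guaranteed network I/O bandwidth

def can_admit(free: VmGuarantee, vm: VmGuarantee) -> bool:
    """Admit a new VM only if every guarantee can still be honored."""
    return (vm.vcpus <= free.vcpus and vm.memory_mb <= free.memory_mb
            and vm.storage_gb <= free.storage_gb and vm.io_mbps <= free.io_mbps)

free = VmGuarantee(vcpus=8, memory_mb=32768, storage_gb=500, io_mbps=10000)
print(can_admit(free, VmGuarantee(vcpus=2, memory_mb=4096, storage_gb=50, io_mbps=1000)))
```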
Virtualization Benefits

There are numerous benefits that can be derived from server virtualization:

• Server Consolidation. A single physical server can easily support 4 to 16 VMs, allowing numerous applications that normally require dedicated servers to share a single physical server. This configuration allows the number of servers in the data center to be reduced while increasing average utilization of physical servers from as low as 5–15 percent to as high as 50–60 percent.

• Flexible Server/Application Provisioning. Virtual machines can be brought up on a physical machine in almost no time at all because there is no context or personality tying a particular application to specific physical resources. This configuration allows a VM to react quickly to changing workloads by requesting and being allocated additional resources (CPU, memory, etc.) to dynamically respond to peak workloads.

• Reliability. The lack of ties to particular physical machines also allows a running VM (together with its application) to be migrated over the network to a different physical server connected to the same storage area network (SAN) without service interruption. This enables workload management and optimization across the virtual infrastructure as well as zero-downtime maintenance. VMs also help streamline provisioning of new applications and backup/restore operations.

• Lower Total Cost of Ownership (TCO). Server virtualization allows significant savings in both CAPEX (costs of server hardware, SAN host bus adapters, switch ports, and Ethernet adapters) and OPEX (server management labor expense plus facility costs such as power, cooling, and floor space).

These benefits have led to efforts to further optimize the performance and effectiveness of server virtualization by adding hardware support for virtualization in both the CPU and I/O subsystems. The remainder of this white paper focuses on the QLogic architectural approach to hardware-based I/O virtualization and the role it can play in a virtualized, agile data center.

A Case for Intelligent Ethernet Adapters

Figure 2 shows how networking I/O is typically virtualized by VMM software. The VMs within a virtualized server share a conventional physical Ethernet adapter to connect to a data center LAN. The VMM provides each VM with a virtual NIC (vNIC) instance complete with MAC and IP addresses and creates a virtual switched network to provide the connectivity between the vNICs and the physical Ethernet adapter. The virtual switched network performs the I/O transfers using shared memory and asynchronous buffer descriptors, similar to a shared memory Ethernet switch. With this software-based I/O virtualization, the VMM is directly in the data path between each VM and the shared NIC.

[Figure 2. Current Software-Based Virtual NIC — each VM's virtual NIC connects through a virtual switched network inside the VMM to the NIC driver and the shared physical NIC.]
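The shared-memory transfer path can be pictured as a ring of buffer descriptors that the guest posts and the VMM drains, with the VMM copying every frame either to another vNIC or out to the physical adapter. The Python sketch below is a deliberately simplified model of that software data path; all class and function names are hypothetical.

```python
from collections import deque

class DescriptorRing:
    """Asynchronous buffer descriptors shared between a guest vNIC and the VMM."""
    def __init__(self, size=256):
        self.ring = deque(maxlen=size)

    def post(self, frame: bytes):
        self.ring.append(frame)            # guest posts a frame for transmit

    def drain(self):
        while self.ring:
            yield self.ring.popleft()      # VMM pulls the posted frames

class SoftwareVirtualSwitch:
    """VMM-resident switch: every frame is copied and forwarded in software."""
    def __init__(self):
        self.vnics = {}                    # destination MAC -> receive DescriptorRing

    def attach(self, mac: str) -> DescriptorRing:
        self.vnics[mac] = DescriptorRing()
        return self.vnics[mac]

    def forward(self, tx_ring: DescriptorRing, uplink: list):
        for frame in tx_ring.drain():
            dst = frame[:6].hex(":")                 # destination MAC from the Ethernet header
            if dst in self.vnics:
                self.vnics[dst].post(bytes(frame))   # extra copy for local VM-to-VM traffic
            else:
                uplink.append(bytes(frame))          # extra copy out to the physical NIC
```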
There are some challenges that arise out of software-based I/O virtualization:

• Server CPU utilization increases in proportion to the number of VMs and their aggregate bandwidth. This is because the CPU has to support the VMM-based virtual switched networking for multiple VMs, the network I/O through the NIC, the computational demands of VM applications, and the general overhead of virtualization. With a software-based approach, the overhead of virtualization is increased because the VMM must emulate a NIC for each VM so that each VM appears to have its own dedicated NIC, with a fair share of the network bandwidth provided by the NIC. There may be extra copies of the data at the VM-VMM interface.

• With traditional NICs (where the host CPU performs TCP/IP processing and control of data transfers), the aggregate network I/O of multiple VMs can easily overwhelm the server CPU.

• Traditional NICs do not have any hardware support for I/O virtualization. This forces the virtualization infrastructure to implement software-based virtualization, adding to the overhead.

As a result, software-based I/O virtualization, especially with traditional NICs, poses a serious constraint on the number of VMs per server and thus hinders consolidation strategies based on virtualization of high-performance servers, such as modern multi-socket, multi-core server platforms.

The solution to these challenges is a hardware-based Intelligent Ethernet adapter that can offload I/O virtualization processing and upper layer protocol processing from the host CPU or CPUs. The Intelligent Ethernet adapter minimizes VMM-related host CPU utilization by removing the hypervisor from the I/O data path, and greatly reduces TCP/IP-related host CPU utilization by offloading TCP/IP and upper layer protocol processing.
Intelligent Ethernet adapters with the following characteristics will play a key role in driving the performance and scalability of virtualized servers to meet the demands of next-generation data centers:

• 10GbE Support. A single 10GbE physical connection is more flexible and cost-effective than multiple 1GbE connections and has the aggregate bandwidth needed for multiple VMs. By matching I/O bandwidth to aggregate VM network requirements, rather than to the number of servers, I/O cost-efficiency can be improved due to fewer Ethernet adapters, cables, and switch ports. Scalability, in terms of the number of VMs, is also improved (a short sizing sketch follows this list).

• TCP/IP Offload. TCP/IP offload delivers the full potential bandwidth and server scalability afforded by 10GbE because it enables end-to-end wire-speed throughput with extremely low CPU utilization. The CPU savings allow the server to scale in terms of both performance per VM and number of VMs per server, further driving the economic momentum for server consolidation.

• Multi-Protocol Offload Support. The combination of Intelligent Ethernet adapters and 10GbE will drive Ethernet as the "unified fabric" in the data center. For example, 10GbE Intelligent Ethernet adapters allow the data center's backbone to serve as the switch fabric for both the general purpose LAN and the high-performance storage area network (SAN) via iSCSI or Fibre Channel over Ethernet (FCoE). As the industry moves to a 10GbE unified fabric, applications will require offload for multiple protocols simultaneously. These include transport protocols such as TCP/IP, RDMA/iWARP, iSCSI, and iSCSI Extensions for RDMA (iSER).

• Hardware Support for I/O Virtualization. Efficient virtualized NIC solutions will:
  – Support removal of VMM software from the I/O data path, while preserving VMM control functions
  – Support hardware virtualization features being added to Intel® and AMD® CPUs
  – Leverage emerging virtualization features of PCI Express® as ratified by the PCI-SIG® I/O Virtualization Workgroup
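To put numbers on the first bullet's sizing argument: the physical port only has to cover the aggregate VM demand, not a per-server allotment. The figures below are illustrative assumptions, not measurements.

```python
# Hypothetical consolidation scenario: 12 VMs averaging 0.8 Gb/s each.
vms = 12
per_vm_gbps = 0.8
aggregate_gbps = round(vms * per_vm_gbps, 1)   # 9.6 Gb/s of aggregate VM traffic

ports_10gbe = int(-(-aggregate_gbps // 10))    # ceiling division -> one 10GbE port suffices
ports_1gbe = vms                               # versus 12 ports if each VM had its own 1GbE link
print(aggregate_gbps, ports_10gbe, ports_1gbe) # 9.6 1 12
```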
Networking Virtualized Servers with Hardware-Based I/O Virtualization

One way to circumvent some of the limitations of software-based I/O virtualization is through direct assignment of physical NICs to VMs. This can be accomplished by installing a NIC driver in the VM partition and allowing the VM to pass data directly to the NIC without involving the VMM in the data path. Bypassing the VMM also requires a DMA "remapping" function in hardware. DMA remapping intercepts the I/O device's attempts to access system memory and uses I/O page tables to (re)map the access to the portion of system memory that belongs to the target VM. The VMM retains control of the data flow by ensuring that the DMA requests of the VMs are isolated from one another.
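The remapping step can be pictured as a per-VM I/O page table: the address a device presents for DMA is treated as an I/O virtual address, translated page by page, and rejected outright if the owning VM has no mapping for it. The Python sketch below is a simplified model; the 4 KiB page size and table layout are illustrative assumptions.

```python
PAGE_SHIFT = 12                        # 4 KiB pages
PAGE_MASK = (1 << PAGE_SHIFT) - 1

class IommuDomain:
    """Per-VM I/O page table: device (I/O virtual) page -> host physical page."""
    def __init__(self):
        self.io_page_table = {}

    def map_page(self, iova_page: int, host_page: int):
        self.io_page_table[iova_page] = host_page

    def translate(self, iova: int) -> int:
        page, offset = iova >> PAGE_SHIFT, iova & PAGE_MASK
        if page not in self.io_page_table:
            # DMA outside the VM's mapped memory is blocked, preserving isolation.
            raise PermissionError(f"blocked DMA to unmapped IOVA {iova:#x}")
        return (self.io_page_table[page] << PAGE_SHIFT) | offset

# Each VM gets its own domain, so one VM's DMA can never land in another VM's pages.
vm1 = IommuDomain()
vm1.map_page(iova_page=0x10, host_page=0x80000)
print(hex(vm1.translate(0x10123)))     # 0x80000123
```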
While direct assignment would reduce the overhead of virtual networking in the VMM software layer, the requirement for a dedicated physical Ethernet adapter for each VM is undesirable because it would detract from the flexibility of server virtualization (Figure 3). For example, the ability to move a VM from one physical machine to another would be adversely affected.

[Figure 3. Direct Assignment of NICs to VMs — each VM carries its own NIC driver and is bound to a dedicated physical NIC through DMA remapping.]

The shortcomings of direct assignment are overcome with a new category of Intelligent Ethernet adapters that support multiple virtual NICs over a shared physical architecture, as shown in Figure 4.

[Figure 4. Near Future — Virtual NIC with Hardware-Based DMA Remapping: the VMs' virtual NIC drivers share a single Intelligent NIC that performs DMA directly to each VM's memory.]

This new type of adapter provides hardware support for I/O virtualization, allowing a dedicated virtual NIC to be assigned to each VM. One prerequisite for this is a DMA remapping function. Some platforms have started shipping with DMA remapping (IOMMU) support in the chipset. However, this addition implies additional latency in the data path. To reduce or eliminate the latency, the Intelligent Ethernet adapter implements a cache of the IOMMU translations and is able to bypass the IOMMU remapping in the chipset for most of the frequently used mappings. Another key feature of this adapter is the ability to natively support generic interfaces as defined by the VMM. This feature ensures that there are no hardware-specific components in the VM. With this implementation, advanced virtualization features such as VM migration are preserved.

The virtual NIC must also perform the virtual networking functions previously performed by the VMM in software-based I/O virtualization (Figure 2). These functions include:

• Ensuring that the virtual NICs are fully independent, each with its own parameter settings, such as MTU size, TCP segmentation offload (TSO) on/off, and interrupt parameters.

• Implementing policies for sharing bandwidth and other NIC resources across the VMs.

• Virtual switching and traffic steering based on Layer 2/Layer 3 header information. Virtual switching supports IP-based VM-to-VM communication, which is required for a virtual DMZ where firewall, IDS, and web servers run on separate VMs within a single physical server.

• Providing flexibility so that the VMM can control and configure the virtual NIC interfaces.

• Providing each virtual NIC its own virtual function to interface to the host.

I/O Virtualization — A Look Ahead

As the market acceptance of server virtualization gains momentum, vendors of CPUs, operating systems, VMMs, and intelligent I/O devices are working to improve hardware support for virtualization. This means that server virtualization technology is expected to evolve rapidly over the next several years, with a number of developments contributing to improved computational performance, better I/O performance, and enhanced flexibility in system configuration.

Processor Support for I/O Virtualization

Both of the major vendors of x86 architecture CPUs (Intel and AMD) introduced processors in the latter half of 2006 that offer hardware assistance for CPU and memory virtualization. These enhancements (Intel's Virtualization Technology and AMD's Pacifica virtualization) are similar and share the following major features:

• New privileged instructions to speed context switches between the VMM and VMs, supporting multiple logical CPUs

• VM interrupt handling assistance

• Improved support for I/O virtualization via an integrated I/O memory management unit (IOMMU), including DMA remapping

These enhancements improve VMM robustness and security, improve the performance of guest operating systems, and improve I/O performance.

With these enhancements (and corresponding enhancements in the VMM software), the virtualization architecture changes in both VMMs and I/O devices. Based on these improvements, the architecture of virtual I/O in the next generation of virtualized servers will evolve to resemble the one shown in Figure 5. At this stage in the evolution, the VMM will be able to manage the assignment of virtual I/O devices to the VMs. In addition, processor hardware will support one or more logical CPUs dedicated to each VM. The processor will use the IOMMU to ensure discrete I/O memory space for each VM's I/O.

[Figure 5. Next Generation Hardware-Based Virtual NIC (with CPU and Chipset Support for Virtualization) — the physical CPU's memory management unit and an I/O memory management unit sit between the VMs, the VMM, and the NIC.]

With the migration of DMA remapping to the CPU chipset, the virtual NIC will still maintain a DMA I/O translation look-aside buffer (IOTLB) to serve as a cache for recent address translations and to allow prefetching of translated addresses. With these enhancements, the virtual NIC will be able to support hardware-based network transfers directly to application memory locations, bypassing both the VMM and the guest OS and thereby further reducing host CPU overhead.
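The IOTLB behaves like a small cache in front of the chipset IOMMU: a hit avoids the remapping latency, while a miss walks the I/O page tables and can prefetch neighbouring translations. A minimal Python sketch of that behaviour follows; the capacity, prefetch depth, and FIFO eviction are illustrative assumptions.

```python
class IoTlb:
    """Adapter-resident cache of recent IOMMU translations (an IOTLB)."""
    def __init__(self, walk_page_table, capacity=64, prefetch=4):
        self.walk = walk_page_table        # slow path: chipset IOMMU / I/O page-table walk
        self.capacity = capacity
        self.prefetch = prefetch
        self.cache = {}                    # IOVA page -> host physical page

    def translate_page(self, iova_page: int) -> int:
        if iova_page in self.cache:        # hit: bypass the chipset remapping latency
            return self.cache[iova_page]
        for page in range(iova_page, iova_page + self.prefetch):
            self.cache[page] = self.walk(page)        # miss: walk, and prefetch neighbours
        while len(self.cache) > self.capacity:
            self.cache.pop(next(iter(self.cache)))    # evict in FIFO (insertion) order
        return self.cache[iova_page]

# Example: a toy page-table walk standing in for the real chipset IOMMU.
tlb = IoTlb(walk_page_table=lambda page: page + 0x1000)
print(hex(tlb.translate_page(0x20)))   # 0x1020 via the slow path; cached afterwards
```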
PCI Support for I/O Virtualization

The PCI-SIG I/O Virtualization (IOV) Workgroup has released specifications that allow virtualized systems based on PCI Express (PCIe®) to leverage shared IOV devices. The IOV Workgroup focused on the following three areas to support interoperable solutions:

• DMA remapping

• Single Root (SR) I/O virtualization and sharing

• Multi-Root (MR) I/O virtualization and sharing
DMA Remapping

The ATS 1.0 specification enables PCIe I/O endpoints to interoperate with platform-specific DMA remapping functions, which are termed Address Translation Services (ATS) by the PCI-SIG. The specification is compatible with DMA remapping performed either within the I/O device (Figure 4) or within the CPU (Figure 5).

Single Root (SR) I/O Virtualization and Sharing

The IOV-SR 1.0 specification focuses on allowing multiple VMs (or "system images") in a single root complex (host CPU chipset, including memory or shared memory) to share a PCIe IOV endpoint without sacrificing performance. The specification covers configuration, resource allocation, error/event/interrupt handling, and so on. The basic SR topology is shown in Figure 6.

[Figure 6. Single Root PCIe IOV Topology — the VMs and VMM sit above a single PCI root; a PCIe switch connects the root to the shared PCIe IOV endpoints.]
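As a concrete illustration of what SR-IOV looks like to system software: on a modern Linux host, the virtual functions of an SR-IOV capable endpoint are instantiated by writing to the device's sysfs attributes, after which each VF appears as its own PCIe function that the VMM can hand to a VM. The sketch below uses that Linux interface; the PCI address is a placeholder and the snippet is not specific to any QLogic device.

```python
from pathlib import Path

# Placeholder PCI address of an SR-IOV capable adapter; adjust for the actual system.
dev = Path("/sys/bus/pci/devices/0000:03:00.0")

def enable_vfs(device: Path, count: int) -> None:
    """Ask the physical function's driver to instantiate `count` virtual functions."""
    total = int((device / "sriov_totalvfs").read_text())
    if count > total:
        raise ValueError(f"device supports at most {total} VFs")
    (device / "sriov_numvfs").write_text(str(count))

# enable_vfs(dev, 8)   # requires root and SR-IOV hardware; each VF can then back one VM
```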
Multi-Root (MR) I/O Virtualization and Sharing

The MR IOV specification focuses on allowing multiple root complexes (such as multiple processor blades in a blade server) to share PCIe IOV endpoints. The basic MR topology is shown in Figure 7. To support the multi-root topology, the PCIe switches and IOV devices need to be MR "aware" (support MR capabilities). MR awareness within the adapter supports enhanced PCI Express routing and separate register sets for storing a number of independent PCI hierarchies associated with the various root complexes.

[Figure 7. Multi-Root PCIe IOV Topology — two physical machines, each with its own PCI root, share PCIe IOV endpoints through a multi-root aware PCIe switch.]

QLogic NetSlice Architecture for I/O Virtualization

As described in this white paper, hardware-based I/O virtualization requires a new type of Intelligent Ethernet adapter that is itself a virtualized device optimized for sharing I/O among multiple VMs. The QLogic NetSlice architecture has been engineered to satisfy the following goals:

• Maximum Cost-Effectiveness. High-performance virtualized servers will require 10GbE physical interfaces for multiple data center interconnects, including general purpose LAN connectivity, cluster interconnects, and storage interconnects. The QLogic NetSlice architecture, as implemented in its Intelligent Ethernet adapter, supports multiple 10GbE interfaces on a single PCIe card.

• Maximum Flexibility. End-user requirements for protocol support by Ethernet adapters can vary significantly. In addition, upper level protocols (iWARP/RDMA, iSCSI, and FCoE) are continuing to evolve. Furthermore, the system architecture for virtualized servers is also evolving, as described in the previous sections. The only way to protect investments in data center I/O virtualization is with an Intelligent Ethernet adapter based on a programmable architecture rather than "hard-wired" ASIC implementations.

QLogic's programmable NetSlice architecture supports the evolution of hardware-based I/O virtualization from its inception (Figure 4) through several stages of evolution, spanning the development of I/O virtualization support in CPUs and PCI Express (Figures 5 through 7), to its final, fully offloaded form (Figure 9).

The NetSlice architecture creates a platform that exports a number of independent virtual interfaces, with each virtual interface utilizing a share of the existing hardware resources (Figure 8). Each virtual interface is serviced in a fair manner. Optionally, each interface can be assigned a priority that determines how large a share of the underlying resources is allocated to that interface. Strong isolation is built around each interface to ensure that one interface does not adversely affect another.

[Figure 8. Virtualized Multi-Channel NIC — independent virtual interfaces, each with its own buffers/FIFOs and DMA resources, share the physical MAC/PHY.]
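The per-interface shares and isolation just described amount to a weighted scheduling problem: each virtual interface is serviced in proportion to its assigned priority, and an interface that floods its own queue consumes only its own share. The deficit-round-robin style sketch below is one way to picture this; the weights, quantum, and queue model are illustrative assumptions, not the NetSlice implementation.

```python
from collections import deque

class VirtualInterface:
    def __init__(self, name: str, weight: int):
        self.name = name
        self.weight = weight       # relative share of the underlying hardware resources
        self.queue = deque()       # pending transmit work, as frame sizes in bytes
        self.deficit = 0

def service_round(interfaces, quantum=1500):
    """One deficit-round-robin pass: capacity is divided by weight, not by demand."""
    sent = []
    for vif in interfaces:
        vif.deficit += quantum * vif.weight
        while vif.queue and vif.queue[0] <= vif.deficit:
            size = vif.queue.popleft()
            vif.deficit -= size
            sent.append((vif.name, size))
        if not vif.queue:
            vif.deficit = 0        # an idle interface cannot hoard credit across rounds
    return sent

a = VirtualInterface("vnic-a", weight=3)   # higher-priority share
b = VirtualInterface("vnic-b", weight=1)
a.queue.extend([1500] * 4)
b.queue.extend([1500] * 4)
print(service_round([a, b]))   # vnic-a drains three frames for every one from vnic-b
```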
A major benefit of the virtual channel abstraction is its synergy with the programmability of the underlying physical resources, maximizing the architectural flexibility to accommodate technology evolution.

Key features of the QLogic NetSlice architecture and Intelligent Ethernet adapter implementation include:

• The Intelligent Ethernet adapter currently operates in PCIe base mode, which means that the device appears as a standard PCIe device to a single root complex system. As the platform evolves, new firmware loads will enable the appropriate functionality to keep pace with new features.

• Virtual NICs are managed on a per-VM basis for up to 1,024 VMs. Each VM can be assigned a priority for access to I/O resources. For example, traffic metering and rate control per VM ensure fairness of access across VMs and implementation of service level agreements.

• Multiple DMA engines ensure optimum use of the PCIe bus bandwidth, irrespective of the type of traffic generated or received by the individual VMs.

• PCIe interrupts issued via the extended message signaled interrupt (MSI-X) scheme are DMA-remapped to be received only by the target VM. QLogic currently supports up to 1,024 VMs with MSI-X.

• The QLogic Intelligent Ethernet adapter can be reprogrammed to perform caching of DMA translations via its IOTLB, which reduces latency in the data path. Support for the IOTLB will become necessary as DMA remapping becomes widely supported by server CPUs and chipsets.

• Interrupt moderation allows each virtual NIC to have its interrupt behavior adjusted to minimize disruption of traffic flows. With support for independent interrupt moderation, the Intelligent Ethernet adapter can implement a different moderation scheme for each virtual channel, optimized for the type of traffic flow (sensitivity to latency, jitter, or throughput).

• The Intelligent Ethernet adapter supports multiple unicast and multicast MAC addresses that can be mapped to the virtual NICs of a given VM, furthering the appearance of a given VM having a dedicated NIC.

• Virtual NICs can be configured with multiple transmit and receive queues dedicated to each VM. The logical separation of I/O queues minimizes contention among VMs for I/O bandwidth, while multiple transmit and receive queues support advanced functions such as traffic steering, traffic classification, and priority processing of different traffic classes (in accordance with IEEE 802.1p or DiffServ markings). The separation of queues on a per-VM basis also avoids data commingling, increasing the security and robustness of the virtualized I/O path.

• The Intelligent Ethernet adapter hardware supports virtual switching and traffic steering based on Layer 2 and Layer 3 packet header information, including MAC address, 802.1p priority, VLAN ID, and IP address. This support allows flexible, QoS-aware switching within and among VLANs for VM-to-VM IP communications on the same physical platform. Consequently, a VM can communicate with another VM on the same physical machine over a low-latency path, because local traffic never leaves the physical system (see the sketch after this list).

As a result of these features, the CPU utilization on the host is significantly reduced. This enables more VMs to run on the same physical machine, thus magnifying the virtualization benefits to the end user.
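The virtual switching and multi-queue features above combine naturally: the adapter first decides which vNIC a frame belongs to from its Layer 2/Layer 3 headers, then hashes the flow onto one of that vNIC's receive queues so a single flow stays in order on one queue. The condensed Python sketch below illustrates the idea; the table layout and hash choice are assumptions for illustration, not the NetSlice implementation.

```python
import zlib

class VirtualSwitch:
    """Steer frames to vNICs by (VLAN, destination MAC), then hash flows onto RX queues."""
    def __init__(self, uplink="physical-port"):
        self.fdb = {}                  # (vlan_id, dst_mac) -> (vnic name, RX queue count)
        self.uplink = uplink

    def attach(self, vnic: str, vlan_id: int, mac: str, rx_queues: int = 4):
        self.fdb[(vlan_id, mac)] = (vnic, rx_queues)

    def steer(self, vlan_id, dst_mac, src_ip, dst_ip, src_port, dst_port):
        entry = self.fdb.get((vlan_id, dst_mac))
        if entry is None:
            return (self.uplink, 0)    # unknown destination: hand the frame to the LAN
        vnic, queues = entry
        flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
        return (vnic, zlib.crc32(flow) % queues)   # RSS-style: one flow, one queue

vs = VirtualSwitch()
vs.attach("vnic-web", vlan_id=10, mac="00:c0:dd:00:00:01")
print(vs.steer(10, "00:c0:dd:00:00:01", "10.0.0.5", "10.0.0.9", 40522, 80))
```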
The End Game

In its final form, the ultimate expression of a scalable, intelligent virtual NIC is one in which:

• Virtualization overhead is offloaded.

• Multiple (up to 1,024) virtual NICs can be presented and managed independently.

• Network protocol processing overhead is completely offloaded.

• Multiple network protocols are supported simultaneously for each VM.

QLogic's Intelligent Ethernet adapter is capable of doing all of these things today using a unique programmable solution that supports full offload of transport protocols, such as TCP/IP and iWARP (RDMA), in standalone as well as virtualized environments. The ultimate solution is shown in Figure 9.

[Figure 9. End Game — Protocol and Virtualization Offload: each VM's socket layer connects through a virtual NIC driver to per-VM TCP/IP offload engines, MACs, and a virtual switch port inside the Intelligent NIC, which shares the physical NIC attached to the LAN.]
Figure 9 shows how the layers of the networking stack within each guest OS interface to vNICs within the Intelligent Ethernet adapter. The I/O path for socket connections bypasses the guest OS protocol stack via a direct connection to the offload engine in the vNIC. Virtualization overhead in the VMM is also bypassed to maximize performance and scalability. At the same time, the traditional I/O path taken through the guest OS stack, for connections that cannot be offloaded, is still present. The vSwitch switching function, also shown as part of the vNIC protocol stack, is accessed by the protocol offload engine or directly by the VM guest OS stack.

As described in this white paper, offloading protocol processing to the Intelligent Ethernet adapter increases the number of VMs that can be instantiated on the physical server because of the substantial reduction in host CPU utilization required to process network traffic. QLogic's architecture is already capable of achieving this next level of virtualized I/O optimization.

Summary and Conclusion

Server virtualization is one of a number of virtualization technologies that promise to transform the data center by improving its ability to adapt to changing user requirements while, at the same time, dramatically reducing TCO. However, virtualization technology continues to evolve at a fairly rapid pace to accommodate demands for higher I/O performance, better integration of VMMs with single-core and multi-core CPUs, and better integration with peripheral interconnect buses. In this dynamic environment, data center managers need flexible solutions that can evolve along with the technology to protect current investments in virtualization and server consolidation.

The QLogic NetSlice architecture addresses these requirements by incorporating a unique degree of programmable support for virtualization and protocol offload. Programmability allows the QLogic Intelligent Ethernet adapter to be deployed today to meet current requirements for server virtualization without running the risk of obsolescence. The QLogic Intelligent Ethernet adapters deployed today can also meet tomorrow's requirements by adapting to evolving virtualization technology and/or changing user requirements through simple firmware and driver updates.
Disclaimer

Reasonable efforts have been made to ensure the validity and accuracy of these performance tests. QLogic Corporation is not liable for any error in this published white paper or the results thereof. Variation in results may be a result of change in configuration or in the environment. QLogic specifically disclaims any warranty, expressed or implied, relating to the test results and their accuracy, analysis, completeness, or quality.

Corporate Headquarters — QLogic Corporation, 26650 Aliso Viejo Parkway, Aliso Viejo, CA 92656, 949.389.6000, www.qlogic.com

Europe Headquarters — QLogic (UK) LTD., Quatro House, Lyon Way, Frimley, Camberley, Surrey, GU16 7ER UK, +44 (0) 1276 804 670

© 2007–2009 QLogic Corporation. Specifications are subject to change without notice. All rights reserved worldwide. QLogic, the QLogic logo, and NetSlice are trademarks or registered trademarks of QLogic Corporation. Microsoft, Windows, and Windows Server are registered trademarks of Microsoft Corporation. IBM is a registered trademark of International Business Machines Corporation. Oracle is a registered trademark of Oracle Corporation. Linux is a registered trademark of Linus Torvalds. Mac OS X is a registered trademark of Apple, Inc. Solaris is a registered trademark of Sun Microsystems, Inc. VMware is a registered trademark of VMware, Inc. Citrix is a trademark of Citrix Inc. Virtual Iron is a registered trademark of Virtual Iron, Inc. PCIe, PCI Express, and PCI-SIG are registered trademarks of PCI-SIG. AMD is a registered trademark of Advanced Micro Devices, Inc. Intel is a registered trademark of Intel Corporation. All other brand and product names are trademarks or registered trademarks of their respective owners. Information supplied by QLogic Corporation is believed to be accurate and reliable. QLogic Corporation assumes no responsibility for any errors in this brochure. QLogic Corporation reserves the right, without notice, to make changes in product design or specifications.