Document # INDUSTRY2013002, July 2013 
Page 2 
Most medium- and large-sized IT organizations have deployed several generations of virtualized servers, becoming more comfortable with performance and reliability with each deployment. As IT organizations increased VM density, they hit the limits of Hyper-V software and of server memory, CPU, and I/O. 
A new VM engine is now available, and this document describes how it can help IT organizations maximize the use of their servers running Hyper-V in Windows Server 2012. 
Best Practices in Hyper-V
Power Needed for More VMs
A New VM Engine
A New VM Chassis
Virtualized I/O
Racing Exhaust Systems: 16Gb FC, 10GbE & Converged Networking
New Hyper-V Performance
Performance with 16Gb Fibre Channel
Live VM Migration
Live Storage Migration
Lowering the Cost of VM I/O
Low-Latency Connectivity
Scalability with 16Gb Fibre Channel
Accelerating App Performance
The Bottom Line
Resources
Harnessing the Power 
In a survey conducted by IT Brand Pulse, IT professionals said the average number of VMs per server would almost double in the next 24 months. 
VMs Per Server
Best Practices in Hyper-V 
No hardware resource is more important to overall performance than memory. Plan to ensure each VM has the memory it needs, but without wasting memory in the process. 
Memory 
Start with Planning 
When planning a Hyper-V installation, it is important to take into account the new capabilities of Hyper-V in Windows Server 2012. Windows Server 2012 has added significantly to the scalability of Hyper-V. For datacenters virtualizing Tier-1 applications, the critical scalability enhancement is the ability to have up to 1TB of memory and 64 virtual CPU cores per VM. This ensures that almost all Tier-1 applications will perform well in a Microsoft Hyper-V environment. 
However, these new capabilities bring new complexities, and with them the need to plan new datacenter architectures. This not only includes planning the deployment for today’s needs, but also thoroughly investigating evolution strategies for applications before bolting down racks and filling them with servers. 
• Planning which applications are going to run on your virtualized servers is the first step in understanding your needs. 
• From there, it is critical to define server integration points with existing resources (likely core switching and storage resources), and how these will be affected by the evolution of existing resources. 
• After that, planning your approach to Live Migration and capacity growth over the lifetime of your new infrastructure will help you scope internal I/O requirements appropriately. 
Finally, determining whether to utilize converged networks or not, and what I/O performance you need, will enable you to intelligently discuss your I/O and networking options with your SAN/LAN equipment providers. These steps will help you ensure success when virtualizing your Tier-1 applications. 
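The planning steps above ultimately reduce to sizing arithmetic. The sketch below shows one way to roll per-VM I/O profiles up into a per-host requirement; the workload names and numbers are hypothetical placeholders, not recommendations or benchmarks.

```python
# Hypothetical per-VM I/O profiles (IOPS, MB/s) -- placeholders only.
vm_profiles = {
    "oltp_db": {"count": 4,  "iops": 8000, "mbps": 120},
    "email":   {"count": 6,  "iops": 2000, "mbps": 40},
    "web":     {"count": 20, "iops": 500,  "mbps": 10},
}

def host_io_requirements(profiles, migration_headroom=0.25):
    """Aggregate per-VM I/O and add headroom for Live Migration bursts."""
    iops = sum(p["count"] * p["iops"] for p in profiles.values())
    mbps = sum(p["count"] * p["mbps"] for p in profiles.values())
    return iops * (1 + migration_headroom), mbps * (1 + migration_headroom)

iops, mbps = host_io_requirements(vm_profiles)
print(f"Plan for ~{iops:,.0f} IOPS and ~{mbps:,.0f} MB/s per host")
```

The 25% headroom factor is an illustrative assumption; your own failover and Live Migration plans should drive that number.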
To fully optimize virtualized data centers, servers need maximum I/O capacity to support high input/output operation rates (IOPS) and high-bandwidth applications. Increased bandwidth is also needed for server virtualization, which aggregates I/O from multiple virtual machines (VMs) onto the host’s data path. This next-generation combination takes full advantage of new features that are described in detail in this planning guide. Read on to discover how QLogic can increase your infrastructure ROI and overall competitiveness.
The Great VM Migration 
With many server admins working on their 3rd and 4th generations of virtualized servers, the focus has shifted from interoperability and learning the behavior of Hyper-V to increasing VM density (VMs per physical host server). The availability of servers based on Intel’s E5 processors (multi-core, 768GB of RAM, PCI Express Gen3), combined with the new features within Hyper-V, has introduced a game-changing compute platform. This new platform allows for new levels of VM density, and for the first time Tier-1 applications that previously required dedicated server hardware can run on virtual servers, achieving improved performance, scalability, and efficiency. 
While Hyper-V and E5-based servers are seeing significant deployments in many enterprise datacenters, the I/O and network infrastructure to support these new technologies lags far behind. In a survey conducted by IT Brand Pulse, IT professionals said the average number of VMs per server would almost double in the next 24 months. Approximately 25% of IT professionals surveyed also said what they need most to increase the VM density is more I/O bandwidth. The purpose of this industry brief is to provide a planning guide to help enterprises deploy Tier-1 applications with adequate bandwidth in a dense Microsoft Hyper-V Server 2012 virtualization environment. 
Power Needed for More VMs 
Approximately 25% of IT professionals surveyed said what they need most to increase the density of VMs per server is more I/O bandwidth. 
VM Density 
The average number of VMs per server in my environment: 
What I need most to increase the density of VMs per physical server is more: 
(Source: IT Brand Pulse)
A New VM Engine 
2 more cores, 8MB more cache, 6 more DIMMs of faster DDR3-1600 memory (increasing to 768GB), double the I/O bandwidth with PCIe 3.0, and more Intel QuickPath links between processors. 
Xeon E5 
Xeon E5 offers double the I/O bandwidth to 10GbE NICs, 10GbE/16Gb FC CNAs and 16Gb FC HBAs. 
The Intel Xeon E5 Platform 
The introduction of the Intel® Xeon® E5 family of processors responds to the call for more virtual server resources with 2 more cores, 8MB more cache, and 6 more DIMMs of faster DDR3-1600 memory, increasing the totals to 8 cores and 768GB of RAM while doubling the I/O bandwidth with PCIe 3.0. 
Intel’s new Xeon E5 promises a significant increase in server I/O by enabling full-bandwidth, four-port 10GbE server adapters as well as dual-port 16Gb FC server adapter support, addressing the VM density issue with the substantial increase in I/O bandwidth that host servers require. 
Intel Xeon E5 Platform
Windows Server 2012 Hyper-V 
The newest release of Windows Server 2012 Hyper-V delivers a high-performance VM chassis harnessing the new I/O capabilities of 16Gb Fibre Channel (FC) Storage Networking and 10GbE Data Networking. Several new features of Windows Server 2012 Hyper-V are highlighted below: 
• vCPU—Virtual machines can now have up to 64 virtual CPUs (vCPUs) and 1TB of virtual RAM (vRAM), allowing Tier-1 applications to be virtualized and new levels of VM density to be reached. 
• Virtual Fibre Channel (Virtual FC)—Hyper-V now enables VM workloads to access FC SANs by provisioning virtual FC ports with a standard Worldwide Name (WWN) within the guest OS. 
• Live Migration—Virtual FC also enables Live Migration of VMs across Hyper-V hosts while maintaining FC connectivity. Two WWNs are configured and maintained for each virtual FC adapter. 
• Live Storage Migration—A VM’s Virtual Hard Disk (VHDX) storage can now be migrated without shutting down the VM. The operation copies data from the source storage device to a target via an FC or similar interconnect. 
• Multipath I/O (MPIO)—Hyper-V now extends MPIO capability to VMs, ensuring fault-tolerant FC connectivity for delivering high availability and resiliency to virtualized workloads. 
• 16Gb Fibre Channel—To help maximize the efficiency of Live Migration and Live Storage Migration, Hyper-V includes support for 16Gb Fibre Channel, the fastest storage interconnect available today. 
• 10GbE and SR-IOV—Allows a 10GbE NIC to appear as multiple virtual devices that can optimize I/O performance by providing direct I/O for individual virtual machines. 
From a storage-planning perspective, when comparing Windows Server 2012 to the previous version, two specifications stand out: the amount of memory per VM (1TB) and the number of active VMs per machine (1,024). 
At today’s storage usage rates, a petabyte of storage could be needed to support 1,024 VMs. While that scenario is unlikely for at least a few years, running 100 VMs with 512GB of virtual memory each on a single server (which would require roughly 52TB of storage for the memory contents alone) is very foreseeable. 
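The arithmetic behind those figures is straightforward; a minimal sketch (using decimal units, so 100 VMs comes out at 51.2TB, in line with the roughly 52TB cited above):

```python
def storage_for_vm_memory(vm_count, mem_gb_per_vm):
    """Storage (TB, decimal) needed just to back VM memory contents."""
    return vm_count * mem_gb_per_vm / 1000

# 100 VMs with 512GB of virtual memory each:
print(storage_for_vm_memory(100, 512))    # 51.2 TB
# 1,024 VMs at the 1TB-per-VM maximum:
print(storage_for_vm_memory(1024, 1024))  # 1048.576 TB, i.e. about a petabyte
```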
The ability to provide high-performance storage is critical for a high-density or Tier-1 virtualization strategy. The new storage tools in Hyper-V that we will cover later in this paper (virtual Fibre Channel, offloaded data transfer, and the new virtual hard disk format) can positively impact performance in these environments. 
A New VM Chassis 
In Hyper-V, Virtual Fibre Channel provides SAN connectivity on a per-VM basis. 
Virtual Fibre Channel
Picking the Right I/O Pieces, and Making Them Work Together 
Tier-1 applications are uniquely demanding in many dimensions. Their needs with respect to CPU power, memory footprint, high availability/failover, resiliency, and responsiveness to outside stimuli are typically unmatched within the enterprise. Moreover, Tier-1 applications also tend to be tightly integrated with other applications and resources within the enterprise. Because of this, virtualizing a Tier-1 application requires rigorous planning of the I/O strategy. There are five steps to this: 
• Identify the I/O fabrics that the Tier-1 applications will use (it may very well be “all of them”). 
• Quantify the data flows for each fabric when the application was operating on a standalone system. 
• Estimate Live Migration I/O needs for failovers and evolution. Note that most Live Migration traffic will be storage I/O; if the data stays within one external array during the Live Migration, Microsoft’s ODX capability can significantly reduce the I/O traffic. 
• Determine your primary and secondary I/O paths for multi-pathing on all of your networks. 
• Determine QoS levels for the Tier-1 apps. 
One simplifying option is to utilize converged network adapters that can function on both FC and Ethernet/FCoE networks. The QLogic QLE2672 is an example of such an adapter; it can be reconfigured in the field to operate on 16Gb FC or 10Gb FCoE/Ethernet networks. 
Virtualized I/O 
Business Critical Applications such as ERP, CRM, eCommerce and Email need high-performance and high availability I/O infrastructure to meet business SLAs. 
Tier-1 Apps 
Networking Considerations When Virtualizing Tier-1 Applications
QLogic Server Adapters 
Windows Server 2012 Hyper-V and Xeon E5 streamline the process of moving VMs and their associated storage with Live Migration and Live Storage Migration, and support low-latency networking traffic with SR-IOV. Moving terabytes of data between virtual machines and migrating virtual servers requires low-latency, high-performance I/O adapters. QLogic offers a family of server adapters for 16Gb Fibre Channel, 10GbE, or converged network connectivity that provide the bandwidth for increased virtual machine (VM) scalability and the power for Tier-1 application workloads. 
Racing Exhaust Systems 
The latest generation of CNAs from QLogic support Ethernet LANs, NAS, iSCSI SANs, and FCoE SANs, as well as native Fibre Channel SANs. 
CNA 
QLogic Server Adapters 
2600 Series—Fibre Channel HBA. Speed: 16Gbps. Protocols: FC. Use in virtualized server: highest-performance Fibre Channel SAN connectivity for storage-intensive applications. 
3200 Series—Intelligent Ethernet Adapter. Speed: 10Gbps. Protocols: TCP/IP LAN and NAS, iSCSI SAN. Use in virtualized server: consolidate multiple 1GbE server connections to LAN and NAS on one high-speed Ethernet wire. 
8300 Series—Converged Network Adapter. Speed: 10Gbps. Protocols: TCP/IP LAN and NAS, iSCSI SAN, FCoE SAN. Use in virtualized server: consolidate server connections to LAN and SAN on one Ethernet wire.
A dynamically expanding VHD is only as large as the data written to it. As more data is written, the file grows up to its maximum size. A differencing VHD is similar to a dynamically expanding VHD, but it contains only the modified disk blocks of its associated parent VHD. Dynamically expanding VHDs are useful for test environments because there is less impact if you have to rebuild the VHD. For example, some of the tests performed for this report used multiple dynamically expanding VHDs, each with a different Windows image. Fixed VHDs are recommended for production. 
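To make the distinction concrete, here is a toy model of how the on-disk file size behaves for each format. This is illustrative only; real VHD/VHDX allocation works in blocks with metadata overhead.

```python
class FixedVHD:
    """File size is fully allocated up front at the maximum size."""
    def __init__(self, max_gb):
        self.max_gb = max_gb
        self.file_size_gb = max_gb  # allocated at creation

    def write(self, gb):
        pass  # writes never change the file size

class DynamicVHD:
    """File grows with written data, up to the maximum size."""
    def __init__(self, max_gb):
        self.max_gb = max_gb
        self.file_size_gb = 0

    def write(self, gb):
        self.file_size_gb = min(self.file_size_gb + gb, self.max_gb)

fixed, dynamic = FixedVHD(100), DynamicVHD(100)
dynamic.write(10)
print(fixed.file_size_gb, dynamic.file_size_gb)  # 100 10
```

The model also hints at why fixed VHDs suit production: their on-disk footprint, and therefore fragmentation and growth behavior, is known in advance.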
New Hyper-V Performance 
VHDX 
A new Hyper-V virtual hard disk (VHD) format introduced with Windows Server 2012, which increases maximum storage capacity from 2TB to 64TB. 
25% More Throughput with Windows Server 2012 Hyper-V 
One of the most important features in Microsoft Windows Server 2012 Hyper-V for I/O performance is the VHDX virtual hard disk format, which provides storage for the guest OS. Testing by Microsoft shows that VHDX delivers nearly 25% better write throughput than VHD for both dynamically expanding and differencing disks. 
VHDX Performance —1MB Sequential Writes
Testing with Iometer showed that the performance of the QLE2672 and competitive products was identical at real-world 4K, 8K, and 16K block sizes. However, the QLogic QLE2672 used up to 23% less CPU processing power to do the same work. 
Performance with 16Gb FC 
QLE2672 
Testing by QLogic shows the QLE2672 16Gb Fibre Channel adapter delivers high IOPS with Microsoft Windows Server 2012 with significantly less CPU utilization than competitive products. 
Real World Performance and CPU Efficiency 
When used with a high-efficiency 16Gb Fibre Channel adapter, Hyper-V with VHDX can provide even larger performance advantages. In testing performed by QLogic, the QLogic QLE2672 16Gb FC adapter delivered the same dual-port IOPS performance as the nearest competitor at real-world 4KB and 8KB block sizes, with 23% less CPU utilization. This frees CPU cycles for virtual machines and their workloads, which is critical for Tier-1 applications and dense VM environments. 
CPU % - Dual Port, 100% Reads 
IOPS - Dual Port, 100% Reads 
IOPS Performance and CPU Utilization with 16Gb FC Adapters
ODX Data Copy Model Offloads Server 
Microsoft’s Live Migration offers the ability to move a live virtual machine from one physical server to another. Included are some interesting storage-oriented capabilities that provide added value if you use SANs (especially Fibre Channel SANs). The first is the ODX Data Copy Model. On compatible external storage arrays, ODX provides the ability to move data between LUNs without involving the server. For large data movements, this results in a huge performance improvement. 
Improved Live Migration with vFC 
Hyper-V’s Virtual Fibre Channel (vFC) is a new capability that augments Live Migration for those end users with Fibre Channel SANs. By creating virtual FC HBAs natively within Hyper-V, Microsoft simplifies migrations by moving the adapter with the virtual machine. This eliminates the need to reconfigure network switches after a Live Migration. 
Microsoft Hyper-V’s powerful migration capabilities can provide even more utility if likely failover and evolution paths in the private cloud are planned into the framework. This is especially true for migrations to resolve hardware failures, which tend to be done under considerable stress. Planning failover migrations decreases the likelihood of negative performance impacts that may ultimately have to be undone later. 
Live VM Migration 
Windows Offloaded Data Transfer (ODX) in Windows Server 2012 directly transfers data within or between compatible storage devices, bypassing the host computer. 
ODX 
This diagram shows a Live Migration utilizing vFC. The 2-port virtual HBA ping-pongs from the first port to the second port, avoiding traffic disruption during the live migration. 
Traditional Data Copy Model 
ODX Data Copy Model 
1. A user copies or moves a file, or the copy occurs as part of a virtual machine migration. 
2. Windows Server 2012 translates the transfer request into an ODX operation and creates a token representing the data. 
3. The token is copied between the source server and the destination server. 
4. The token is delivered to the storage array. 
5. The storage array internally performs the copy or move and provides status information to the user.
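The five steps above can be modeled as a simple host/array interaction. This is a conceptual sketch only, not the actual Windows ODX API (which works through filesystem control calls); the class and method names are invented for illustration.

```python
import secrets

class StorageArray:
    """Models an ODX-capable array that copies data internally."""
    def __init__(self):
        self.luns = {}    # lun_name -> data
        self.tokens = {}  # token -> data it represents

    def create_token(self, lun):
        # Step 2: the transfer request becomes a small token
        # representing the data, not the data itself.
        token = secrets.token_hex(8)
        self.tokens[token] = self.luns[lun]
        return token

    def copy_with_token(self, token, dest_lun):
        # Steps 4-5: the array performs the copy internally; the bulk
        # data never crosses the host's data path.
        self.luns[dest_lun] = self.tokens[token]
        return "success"

array = StorageArray()
array.luns["src"] = b"vm-disk-contents"
token = array.create_token("src")             # steps 1-2 (host side)
status = array.copy_with_token(token, "dst")  # steps 3-5
print(status, array.luns["dst"] == array.luns["src"])  # success True
```

The key property the sketch captures is that only the token traverses the servers, which is why ODX yields such a large improvement for big data movements.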
16Gb Fibre Channel Helps Close the Storage Migration Window 
A process known as Live Storage Migration allows for non-disruptive migration of a running VM’s disk files between two different physical storage devices. The virtual machine remains running; there is no need to take its workload offline to move the VM’s files to a different physical storage device. Additional use cases for Live Storage Migration include migration of data to new storage arrays or to larger-capacity, better-performing LUNs. NPIV zoning and LUN masking must be properly configured to ensure the VM and host server continue to have access to the storage after the migration is completed. Live Storage Migration across a 16Gb Fibre Channel link can finish in half the time it takes over an 8Gb Fibre Channel link. All paths related to Live Storage Migration should be supported by high-performance networks in order to reduce the time it takes to safely evacuate storage to a new destination and resume normal operations. Additionally, 10GbE links can replace 1GbE links to ensure proper bandwidth exists for Live Storage Migration in Ethernet environments. 
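The “half the time” claim follows directly from line rate. A rough back-of-the-envelope estimate, using nominal link speeds and a flat efficiency factor (both simplifying assumptions; real Fibre Channel line rates and array throughput will shift the absolute numbers, not the ratio):

```python
def migration_minutes(data_tb, link_gbps, efficiency=0.9):
    """Rough transfer-time estimate; ignores protocol detail beyond a
    flat efficiency factor and assumes the storage arrays keep up."""
    gigabits = data_tb * 8000  # 1 TB = 8,000 gigabits (decimal units)
    return gigabits / (link_gbps * efficiency) / 60

for gbps in (8, 16):
    print(f"{gbps}Gb FC: {migration_minutes(2, gbps):.1f} min for a 2TB VHDX")
```

Whatever the assumed efficiency, doubling the link rate halves the estimated migration window, which is exactly the benefit claimed for 16Gb FC.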
Live Storage Migration 
A single port QLE2670 16Gb Fibre Channel HBA doubles throughput for a live storage migration. 
Storage Live Migration at 16Gbps 
This bandwidth intensive operation now enables virtual machines and associated VHDX files to be migrated between clusters that do not have a common set of storage. 
Live Storage Migration
QLE2670 16Gb Fibre Channel HBAs 
QLE3240 10Gb Intelligent Ethernet Adapters
8300 Series CNAs Enable Convergence at the VM edge 
QLogic Converged Network Adapter solutions leverage core technologies and expertise, including the most established and proven driver stack in the industry. These adapters are designed for next-generation, virtualized, and unified data centers built on powerful multiprocessor, multicore servers. They are optimized to handle large numbers of virtual machines, with support for VM-aware network services and for concurrent NIC, FCoE, and iSCSI traffic. 
One 8300 Series CNA can be configured for connectivity to an Ethernet network while simultaneously delivering storage networking via Fibre Channel over Ethernet. Powerful iSCSI and FCoE hardware offloads improve system performance, and advanced virtualization technologies are supported through secure SR-IOV or switch- and OS-agnostic NIC Partitioning (NPAR). Combined with QLogic’s Quality of Service (QoS) capability, this delivers consistent, guaranteed, application-aware performance in dense VM environments. 
Lowering the Cost of VM I/O 
The Fibre Channel over Ethernet (FCoE) protocol allows Fibre Channel traffic to run over Data Center Ethernet (DCE) for LAN and SAN convergence on one wire. 
FCoE 
For organizations maintaining a parallel LAN and SAN architecture all the way to the server adapter, QLogic offers the QLE8300 Series of adapters supporting 10GbE LAN, NAS and iSCSI SAN traffic, as well as Fibre Channel traffic. 
Network Convergence at the VM Server 
Adapter & Fabric Convergence: both ports carry LAN, NAS, and SAN traffic over Ethernet to an Ethernet ToR switch. 
Adapter Convergence, Separate Fabrics: one port is used as an FCoE CNA connected to an FCoE ToR switch, and one port is used as an Ethernet NIC connected to an Ethernet ToR switch.
8300 Series CNAs Offload the VM Kernel from Switching Virtual NICs 
Single Root I/O Virtualization (SR-IOV) is a standard that allows one PCI Express (PCIe) adapter to be presented as multiple separate logical devices to virtual machines, partitioning adapter bandwidth. The hypervisor manages the Physical Function (PF) while the Virtual Functions (VFs) are exposed to the virtual machines. In the hypervisor, SR-IOV-capable network devices offer the benefits of direct I/O, including reduced latency and reduced host CPU utilization. With SR-IOV, pass-through functionality can be provided from a single adapter to multiple virtual machines through Virtual Functions. To deploy SR-IOV today, an organization needs to ensure a minimum level of infrastructure (server hardware and OS) support for SR-IOV. In contrast, QLogic NPAR technology can be used today without SR-IOV’s minimum dependencies. 
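Conceptually, the PF/VF relationship looks like the following schematic model. It is not a driver interface; the class and method names are invented for illustration.

```python
class SRIOVAdapter:
    """One physical function (PF) exposing a pool of virtual functions (VFs)."""
    def __init__(self, total_vfs):
        self.free_vfs = list(range(total_vfs))
        self.assignments = {}  # vm_name -> VF index

    def attach_vf(self, vm_name):
        # The hypervisor hands a VF directly to the VM; that VM's
        # traffic then bypasses the virtual switch (direct I/O).
        vf = self.free_vfs.pop(0)
        self.assignments[vm_name] = vf
        return vf

adapter = SRIOVAdapter(total_vfs=64)
print(adapter.attach_vf("sql-vm"))  # 0
print(adapter.attach_vf("web-vm"))  # 1
```

Each VM holds its own VF, while the hypervisor retains the single PF: that separation is what delivers the reduced latency and lower host CPU utilization described above.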
Low-Latency Connectivity 
Latency is the time between the start and completion of one action, measured in microseconds (μs). 
Latency 
With SR-IOV enabled on a 10GbE NIC, pass-through functionality can be provided from a single adapter to multiple virtual machines through Virtual Functions (VFs). 
Implementing Pass-Through Functions with SR-IOV 
8300 Series CNAs
In Transaction Intensive and Bandwidth Intensive Environments 
For virtualized environments, the most critical measure of performance is the ability to scale as the number of VMs and application workloads increases. In testing conducted by QLogic, the QLogic QLE2672 delivered three times the transactions and double the bandwidth of 8Gb Fibre Channel Adapters. The QLE2672 also demonstrated a 50% advantage over competitive products for read-only performance and 25% better mixed read/write performance. The superior performance of QLogic 16Gb Fibre Channel Adapters translates into support for higher VM density and for more demanding Tier-1 applications. 
QLogic achieves superior performance by leveraging the advanced 16Gb Fibre Channel and PCIe® Gen3 specifications, while maintaining backwards compatibility with existing Fibre Channel networks. The unique port-isolation architecture of the QLogic FlexSuite adapters ensures data integrity, security, and deterministic, scalable performance to drive storage traffic at line rate across all ports. QoS enables IT teams to control and prioritize traffic. Paired with adapter-partitioning technology, the QLE2672 can deliver the capacity-on-demand and multi-tenant features required by highly virtualized environments. 
More Virtual CPUs for scaling Tier-1 Apps 
For those concerned about hosting Tier-1 apps on VMs, the argument about virtualizing them is over. Even flagship enterprise applications such as Microsoft SQL Server 2012 and Exchange 2010 have adopted server virtualization as a best practice. In fact, the CPU, memory, storage, and networking requirements are well documented by Microsoft. 
In the example on the right, a mission-critical OLTP workload running on a single SQL Server 2012 VM demonstrates linearly increasing transactional performance and reduced transaction response times as the number of virtual CPUs assigned to the workload is increased to the maximum of 64 now supported on Hyper-V. 
Scalability with 16Gb FC 
Throughput 
Bandwidth refers to the maximum potential data volume; throughput is the actual volume. Both are measured as the amount of data transferred in a given time, such as megabytes per second (MBps). 
The number of transactions processed per second and the average response time were monitored as virtual CPUs were increased from 4 to 64. The OLTP workload and concurrent user counts remained constant. 
Hyper-V Virtual CPU Scalability 
With OLTP Workloads 
(Source: Microsoft )
10000 Series FabricCache Adapters Cache Hot VM Data 
The 10000 Series is the industry's first caching SAN adapter. This new class of server-based PCIe SSD/Fibre Channel HBAs uses the Fibre Channel network to cache and share SAN metadata. Adding large caches to servers places the cache closest to the application, in a position where it is insensitive to congestion. 
An advantage of this approach is that PCIe flash-based caching can be shared and replicated across servers for high availability and for cache coherency across migrating servers in a virtual machine cluster. With the FabricCache architecture, the new generation of PCIe SSDs provides redundancy and failover for a new level of enterprise-class availability. 
Accelerating App Performance 
A QLogic architecture for sharing and replicating cache on a PCIe SSD adapter in a SAN. 
FabricCache 
The lightning fast SSD SLC flash from the 10000 Series FabricCache adapters is used to cache hot data stored on a FC SAN array. For high availability, the cache LUNs from a FabricCache adapter in one server can fail-over to a FabricCache adapter in another server and can also be used for cache coherency across migrating servers in a virtual machine cluster. 
Shared PCIe SSD
The Bottom Line 
The improvement factor for Memory per VM for Windows Server 2012 Hyper-V — addressing the biggest issue in scaling VMs. 
16X 
More VMs with Hyper-V, Xeon E5 and QLogic Server Adapters 
Fabric-based networks are a fundamental requirement for supporting highly virtualized data centers, and Fibre Channel SANs are the nucleus of the next-generation Windows Server 2012 data center. If your goal is to increase VM density, Windows Server 2012 Hyper-V, combined with the latest generation of servers based on Intel Xeon E5 processors and QLogic server adapters, allows you to more than double the number of VMs per server while enjoying the same level of performance. Virtualization features like Microsoft vFC and Fibre Channel QoS from QLogic combine to deliver the reliability, performance, and flexibility necessary to manage the complexity and risks associated with virtualization projects. Choosing to virtualize Tier-1 data center applications or increase virtualization densities with QLogic and Hyper-V will enable your business to leverage the built-in architecture of both products to increase availability, improve agility, and overcome scalability and performance concerns. 
Hyper-V delivers improvements on all key virtualization metrics—making I/O performance critical. 
Windows Server: 2008 R2 Hyper-V vs. 2012 Hyper-V 
Host 
• HW Logical Processors: 64 LPs → 320 LPs (5x) 
• Physical Memory: 1 TB → 4 TB (4x) 
• Virtual CPUs per Host: 512 → 2,048 (4x) 
VM 
• Virtual CPUs per VM: 4 → 64 (16x) 
• Memory per VM: 64GB → 1TB (16x) 
• Active VMs per Host: 384 → 1,024 (2.7x) 
• Guest NUMA: No → Yes 
Cluster 
• Max Nodes: 16 → 64 (4x) 
• Max VMs: 1,000 → 8,000 (8x)
Related Links 
What’s New in Hyper-V—Platform 
What’s New in Hyper-V—Networking 
What’s New in Hyper-V— Virtual Fibre Channel Storage 
What’s new in Hyper-V—Storage Migration 
QLogic Fibre Channel Adapters 
QLogic Converged Network Adapters 
Acceleration for Microsoft SQL Servers 
About the Authors 
Rahul Shah, Director, IT Brand Pulse Labs Rahul Shah has over 20 years of experience in senior engineering and product management positions with semiconductor, storage networking and IP networking manufacturers including QLogic and Lantronics. At IT Brand Pulse, Rahul is responsible for managing the delivery of technical services ranging from hands-on testing to product launch collateral. You can contact Rahul at rahul.shah@itbrandpulse.com. 
Tim Lustig, Director of Corporate Marketing, QLogic Corporation 
With over 18 years of experience in the storage networking industry, Lustig has authored numerous papers and articles on all aspects of IT storage and has been a featured speaker at many industry conferences globally. As Director of Corporate Marketing at QLogic, Lustig is responsible for corporate communications, third-party testing/validation, outbound marketing activities, and strategic product marketing directives. His responsibilities include customer research, evaluation of market conditions, press and media relations, social media, and technical writing. 
Resources

Harnessing the Power of Hyper-V Engine

  • 2.
    Document # INDUSTRY2013002,July 2013 Page 2 Most medium and large sized IT organizations have deployed several generation of virtualized servers, becoming more comfortable with the performance and reliability with each deployment. As IT organizations started to increase VM density, they hit the limits of Hyper-V software and server memory, CPU, and I/O. A new VM Engine is now available and this documents describes how it can help IT organizations maximize use of their servers running Hyper-V in Windows Server 2012. Best Practices in Hyper-V…………...……………..…………………………………………………...………..…3 Power Needed for More VMs………………………………………………..…………………………………….4 A New VM Engine…………………………………………………...……………………..……………..…………...5 A New VM Chassis ………………………………………..…………………………………………………………....6 Virtualized I/O………………………………....……………………………………….………………...……………..7 Racing Exhaust Systems: 16Gb FC, 10GbE & Converged Networking………..………….…..…..8 New Hyper-V Performance …………………………….………………………..………………………..……….9 Performance with 16Gb Fibre Channel………………..………………………...………….……………...10 Live VM Migration…..……………………………….…………………………………….…………..…………….11 Live Storage Migration………………….…………..…………….………………..…………….………………..12 Lowering the Cost of VM I/O………………………...………………………..…………………………………13 Low-Latency Connectivity……………………………………………………………………………….……...…14 Scalability with 16Gb Fibre Channel……………………………………………….………………………….15 Accelerating App Performance………………………………..……………………………………………...…16 The Bottom Line……….…………………………………………..……………………………………………….….17 Resources……….…………………………………………..……………………………………………….……………18 Harnessing the Power In a survey conducted by IT Brand, IT professionals said the average number of VMs per server would almost double in the next 24 months. VMs Per Server
  • 3.
    Document # INDUSTRY2013002,July 2013 Page 3 Best Practices in Hyper-V No hardware resource is more important to overall performance than memory. Plan to ensure each VM has the memory it needs, but without wasting memory in the process. Memory Start with Planning When planning a Hyper-V installation, it is important to take into account the new capabilities of Hyper-V in Windows Server 2012. Windows Server 2012 has added significantly to the scalability of Hyper-V. For datacenters virtualizing Tier-1 applications, the critical scalability enhancement is the ability to have up to 1TB of memory and 64 virtual CPU cores per VM. This will ensure almost all Tier-1 applications should perform well in a Microsoft Hyper-V environment. However, these new capabilities bring new complexities, and with them the need to plan new datacenter architectures. This not only includes planning the deployment for today’s needs, but also thoroughly investigating evolution strategies for applications before bolting down racks and filling them with servers.  Planning which applications are going to run on your virtualized servers is the first step in understanding your needs.  From there, it is critical to define server integration points with existing resources (likely core switching and storage resources), and how these will be affected by the evolution of existing resources.  After that, planning your approach to Live Migration and capacity growth over the lifetime of your new infrastructure will help you scope internal I/O requirements appropriately. Finally, determining whether to utilize converged networks or not, and what I/O performance you need, will enable you to intelligently discuss your I/O and networking options with your SAN/LAN equipment providers. These steps will help you ensure success when virtualizing your Tier-1 applications. 
To fully optimize virtualized data centers, servers need maximum I/O capacity to support high I/O operation rates and high-bandwidth applications. Increased bandwidth is also needed for server virtualization, which aggregates I/O from multiple virtual machines (VMs) onto the host's data path. This next-generation combination takes full advantage of new features that are described in detail in this planning guide. Read on to discover how QLogic can increase your infrastructure ROI and overall competitiveness.
Power Needed for More VMs

The Great VM Migration

With many server admins working on their third and fourth generations of virtualized servers, the focus has shifted from interoperability and learning the behavior of Hyper-V to increasing VM density (VMs per physical host server). The availability of servers based on Intel's E5 processors (multi-core, 768GB of RAM, PCI Express Gen3), combined with new features within Hyper-V, has produced a game-changing compute platform. This new platform allows for new levels of VM density, and for the first time Tier-1 applications that previously required dedicated server hardware can run on virtual servers while achieving improved performance, scalability and efficiency.

While Hyper-V and E5-based servers are seeing significant deployments in many enterprise datacenters, the I/O and network infrastructure to support these new technologies lags far behind. In a survey conducted by IT Brand Pulse, IT professionals said the average number of VMs per server would almost double in the next 24 months. Approximately 25% of those surveyed also said what they need most to increase VM density is more I/O bandwidth.

The purpose of this industry brief is to provide a planning guide to help enterprises deploy Tier-1 applications with adequate bandwidth in a dense Microsoft Hyper-V Server 2012 virtualization environment.

(Charts: "The average number of VMs per server in my environment" and "What I need most to increase the density of VMs per physical server is more:" Source: IT Brand Pulse)
A New VM Engine

The Intel Xeon E5 Platform

The introduction of the Intel® Xeon® E5 family of processors responds to the call for more virtual server resources with two more cores, 8MB more cache, and six more DIMMs of faster DDR3-1600 memory, increasing the totals to 8 cores and 768GB of RAM, while doubling I/O bandwidth with PCIe 3.0 and adding more Intel QuickPath links between processors. Xeon E5 offers double the I/O bandwidth to 10GbE NICs, 10GbE/16Gb FC CNAs and 16Gb FC HBAs. Intel's new Xeon E5 promises a significant increase in server I/O by enabling full-bandwidth, four-port 10GbE server adapters as well as dual-port 16Gb FC server adapter support, addressing the VM density issue with the substantial increase in I/O bandwidth that host servers require.
A New VM Chassis

Windows Server 2012 Hyper-V

The newest release of Windows Server 2012 Hyper-V delivers a high-performance VM chassis harnessing the new I/O capabilities of 16Gb Fibre Channel (FC) storage networking and 10GbE data networking. Several new features of Windows Server 2012 Hyper-V are highlighted below:

• vCPU: Virtual machines can now have up to 64 virtual CPUs (vCPUs) and 1TB of virtual RAM (vRAM), allowing Tier-1 applications to be virtualized and new levels of VM density to be reached.
• Virtual Fibre Channel (Virtual FC): Hyper-V now enables VM workloads to access FC SANs by provisioning virtual FC ports with a standard Worldwide Name (WWN) within the guest OS.
• Live Migration: Virtual FC also enables Live Migration of VMs across Hyper-V hosts while maintaining FC connectivity. Two WWNs are configured and maintained for each virtual FC adapter.
• Live Storage Migration: A VM's virtual hard disk (VHDX) storage can now be migrated without shutting down the VM. The operation copies data from a source storage device to a target via FC or a similar interconnect.
• Multipath I/O (MPIO): Hyper-V now extends MPIO capability to VMs, ensuring fault-tolerant FC connectivity that delivers high availability and resiliency to virtualized workloads.
• 16Gb Fibre Channel: To help maximize the efficiency of Live Migration and Live Storage Migration, Hyper-V includes support for 16Gb Fibre Channel, the fastest storage interconnect available today.
• 10GbE and SR-IOV: Allows a 10GbE NIC to appear as multiple virtual devices that can optimize I/O performance by providing direct I/O for individual virtual machines.

From a storage planning perspective, when comparing Windows Server 2012 to previous versions, two specifications stand out: the amount of memory per VM (1TB) and the number of active VMs per host (1,024). At today's storage usage, a petabyte of storage could be needed to support 1,024 VMs.
While this scenario is unlikely for at least a few years, running 100 VMs with 512GB of virtual memory each on a single server (which would require roughly 52TB of storage for the memory contents alone) is very foreseeable. The ability to provide high-performance storage is critical for a high-density or Tier-1 virtualization strategy. The new storage tools in Hyper-V that we will cover later in this paper (Virtual Fibre Channel, Offloaded Data Transfer, and the new virtual hard disk format) can positively impact performance in these environments.

Virtual Fibre Channel: In Hyper-V, Virtual Fibre Channel provides SAN connectivity on a per-VM basis.
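The sizing figures above follow from simple arithmetic. A back-of-envelope check (using decimal units, 1TB = 1000GB, for simplicity):

```python
# Hedged back-of-envelope check of the storage sizing figures cited above.

def storage_for_vm_memory(num_vms, mem_per_vm_gb):
    """Storage (in TB) needed just to back every VM's memory contents."""
    return num_vms * mem_per_vm_gb / 1000  # GB -> TB, decimal units

print(storage_for_vm_memory(100, 512))    # 51.2 TB, close to the ~52TB cited
print(storage_for_vm_memory(1024, 1000))  # 1024 TB, about a petabyte
```

Any real deployment would add further capacity for the VHDX files themselves, snapshots, and growth.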
Virtualized I/O

Picking the Right I/O Pieces, and Making Them Work Together

Tier-1 applications are uniquely demanding in many dimensions. Their needs with respect to CPU power, memory footprint, high availability/failover, resiliency and responsiveness to outside stimuli are typically unmatched within the enterprise. Moreover, Tier-1 applications also tend to be tightly integrated with other applications and resources within the enterprise. Because of this, virtualizing a Tier-1 application requires rigorous planning of the I/O strategy. There are five steps to this:

• Identify the I/O fabrics that the Tier-1 applications will use (it may very well be "all of them").
• Quantify the data flows for each fabric when the application was operating on a standalone system.
• Estimate Live Migration I/O needs for failovers and evolution. Note that most Live Migration traffic will be storage I/O; if the data stays within one external array during the Live Migration, Microsoft's ODX capability can significantly reduce the I/O traffic.
• Determine your primary and secondary I/O paths for multi-pathing on all of your networks.
• Determine QoS levels for the Tier-1 apps.

One simplifying option is to utilize converged network adapters that can function on both FC and Ethernet/FCoE networks. The QLogic QLE2672 is an example of such an adapter; it can be reconfigured in the field to operate on 16Gb FC or 10Gb FCoE/Ethernet networks.

Tier-1 Apps: Business-critical applications such as ERP, CRM, eCommerce and email need a high-performance, high-availability I/O infrastructure to meet business SLAs.

(Figure: Networking Considerations When Virtualizing Tier-1 Applications)
Racing Exhaust Systems: 16Gb FC, 10GbE & Converged Networking

QLogic Server Adapters

Windows Server 2012 Hyper-V and Xeon E5 streamline the process of moving VMs and their associated storage with Live Migration and Live Storage Migration, and support low-latency network traffic with SR-IOV. Moving terabytes of data between virtual machines and migrating virtual servers requires low-latency, high-performance I/O adapters. QLogic offers a family of server adapters for 16Gb Fibre Channel, 10GbE and converged network connectivity that provides the bandwidth for increased Virtual Machine (VM) scalability and for powering Tier-1 application workloads. The latest generation of CNAs from QLogic supports Ethernet LANs, NAS, iSCSI SANs and FCoE SANs, as well as native Fibre Channel SANs.

2600 Series (Fibre Channel HBA)
  Speed: 16Gbps
  Protocols: FC
  Use in a virtualized server: highest-performance Fibre Channel SAN connectivity for storage-intensive applications

3200 Series (Intelligent Ethernet Adapter)
  Speed: 10Gbps
  Protocols: TCP/IP LAN and NAS, iSCSI SAN
  Use in a virtualized server: consolidate multiple 1GbE server connections to LAN and NAS on one high-speed Ethernet wire

8300 Series (Converged Network Adapter)
  Speed: 10Gbps
  Protocols: TCP/IP LAN and NAS, iSCSI SAN, FCoE SAN
  Use in a virtualized server: consolidate server connections to LAN and SAN on one Ethernet wire
New Hyper-V Performance

VHDX: A new Hyper-V virtual hard disk format introduced with Windows Server 2012 that increases maximum storage capacity from 2TB to 64TB.

25% More Throughput with Windows Server 2012 Hyper-V

One of the most important features in Microsoft Windows Server 2012 Hyper-V for I/O performance is the VHDX virtual hard disk format, which provides storage for the guest OS. Testing by Microsoft shows that VHDX delivers nearly 25% better write throughput than VHD for both dynamically expanding and differencing disks.

The size of a dynamically expanding VHD is only as large as the data written to it; as more data is written, the file grows toward a maximum size. A differencing VHD is similar to a dynamically expanding VHD, but it contains only the modified disk blocks of its associated parent VHD. Dynamically expanding VHDs are useful for test environments because there is less impact if you have to rebuild the VHD. For example, some of the tests performed for this report used multiple dynamically expanding VHDs, each with a different Windows image. Fixed VHDs are recommended for production.

(Figure: VHDX Performance, 1MB Sequential Writes)
Performance with 16Gb FC

Real-World Performance and CPU Efficiency

When used with a high-efficiency 16Gb Fibre Channel adapter, Hyper-V with VHDX can provide even larger performance advantages. Testing with Iometer showed that the performance of competitive products was identical at real-world 4KB, 8KB and 16KB block sizes. However, in testing performed by QLogic with Microsoft Windows Server 2012, the dual-port QLogic QLE2672 16Gb FC adapter delivered the same IOPS as the nearest competitor while using up to 23% less CPU processing power to do the same work. This frees CPU cycles for virtual machines and their workloads, which is critical for Tier-1 applications and dense VM environments.

(Figures: CPU % and IOPS, dual port, 100% reads: IOPS performance and CPU utilization with 16Gb FC adapters)
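Equal IOPS at lower CPU utilization is really a claim about efficiency: IOPS delivered per percentage point of CPU consumed. A minimal sketch, with illustrative numbers rather than figures from the report:

```python
# Hedged sketch of the efficiency metric implied above: IOPS per percentage
# point of CPU consumed. Absolute values are illustrative assumptions.

def iops_per_cpu_point(iops, cpu_percent):
    return iops / cpu_percent

baseline = iops_per_cpu_point(1_000_000, 10.0)         # competitor at 10% CPU
efficient = iops_per_cpu_point(1_000_000, 10.0 * 0.77) # same IOPS, 23% less CPU
print(round(efficient / baseline, 2))  # 1.3: ~30% more IOPS per unit of CPU
```

The freed CPU cycles are what get returned to the VMs themselves.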
Live VM Migration

ODX Data Copy Model Offloads the Server

Microsoft's Live Migration offers the ability to move a live virtual machine from one physical server to another. Included are some interesting storage-oriented capabilities that provide added value if you use SANs (especially Fibre Channel SANs). The first is the ODX data copy model. Windows Offloaded Data Transfer (ODX) in Windows Server 2012 directly transfers data within or between compatible storage devices, bypassing the host computer. On compatible external storage arrays, ODX provides the ability to move data between LUNs without involving the server, which for large data movements results in a huge performance improvement.

In contrast to the traditional data copy model, the ODX data copy model works as follows:
1. A user copies or moves a file, or the copy occurs as part of a virtual machine migration.
2. Windows Server 2012 translates the transfer request into an ODX operation and creates a token representing the data.
3. The token is copied between the source server and destination server.
4. The token is delivered to the storage array.
5. The storage array internally performs the copy or move and provides status information to the user.

Improved Live Migration with vFC

Hyper-V's Virtual Fibre Channel (vFC) is a new capability that augments Live Migration for end users with Fibre Channel SANs. By creating virtual FC HBAs natively within Hyper-V, Microsoft simplifies migrations by moving the adapter with the virtual machine. This eliminates the need to reconfigure network switches after a Live Migration. During a Live Migration using vFC, the two-port virtual HBA ping-pongs from the first port to the second port, avoiding traffic disruption.

Microsoft Hyper-V's powerful migration capabilities provide even more utility if likely failover and evolution paths in the private cloud are planned into the framework. This is especially true for migrations that resolve hardware failures, which tend to be performed under considerable stress. Planning failover migrations decreases the likelihood of negative performance impacts that may ultimately have to be undone later.
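The essence of the five ODX steps is that the host exchanges only a small token while the array does the actual copying. A toy model of that flow (purely illustrative; the `StorageArray` class and its methods are invented for this sketch, not any real API):

```python
# Hedged toy model of the ODX token flow: the host handles only an opaque
# token; the storage array performs the copy internally.

class StorageArray:
    def __init__(self):
        self.luns = {}      # lun_name -> payload held inside the array
        self.tokens = {}    # token -> reference to the data to be copied

    def create_token(self, lun):
        token = f"tok-{len(self.tokens)}"   # opaque handle, not the data itself
        self.tokens[token] = self.luns[lun]
        return token

    def redeem_token(self, token, dest_lun):
        # The copy happens inside the array; the host never moves the payload.
        self.luns[dest_lun] = self.tokens.pop(token)
        return "success"

array = StorageArray()
array.luns["src"] = b"x" * 10**6           # 1MB of VM data on the source LUN
token = array.create_token("src")          # steps 1-2: request becomes a token
status = array.redeem_token(token, "dst")  # steps 3-5: array performs the move
print(status, array.luns["dst"] == array.luns["src"])  # success True
```

The point of the model: only the short token crosses the servers' data paths, which is why large migrations speed up so dramatically.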
Live Storage Migration

16Gb Fibre Channel Helps Close the Storage Migration Window

A process known as Live Storage Migration allows for the non-disruptive migration of a running VM's disk files between two different physical storage devices. The virtual machine remains running, with no need to take its workload offline while the VM's files are moved. This bandwidth-intensive operation also enables virtual machines and their associated VHDX files to be migrated between clusters that do not share a common set of storage. Additional use cases include migrating data to new storage arrays or to larger-capacity, better-performing LUNs. NPIV zoning and LUN masking must be properly configured to ensure the VM and host server continue to have access to the storage after the migration is completed.

Live Storage Migration across a 16Gb Fibre Channel link can finish in half the time it takes over an 8Gb Fibre Channel link; a single-port QLE2670 16Gb Fibre Channel HBA doubles throughput for a live storage migration. All paths involved in Live Storage Migration should be supported by high-performance networks in order to reduce the time it takes to safely evacuate storage to a new destination and resume normal operations. Similarly, in Ethernet environments, 10GbE links (for example, QLE3240 10Gb Intelligent Ethernet Adapters) can replace 1GbE links to ensure adequate bandwidth for Live Storage Migration.

(Figure: Storage Live Migration at 16Gbps)
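The "half the time" claim follows directly from transfer time being data size divided by link rate. A minimal sketch, assuming the link runs at its nominal line rate (real migrations add protocol and array overhead):

```python
# Hedged sketch: nominal transfer time for a storage migration at a given
# Fibre Channel link rate. Assumes ideal line-rate transfer.

def migration_hours(data_tb, link_gbps):
    bits = data_tb * 8 * 1e12          # decimal TB -> bits
    return bits / (link_gbps * 1e9) / 3600

print(round(migration_hours(10, 8), 2))   # 2.78 hours for 10TB over 8Gb FC
print(round(migration_hours(10, 16), 2))  # 1.39 hours over 16Gb FC
```

Halving the migration window matters most when evacuating storage under time pressure, such as ahead of array maintenance.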
Lowering the Cost of VM I/O

8300 Series CNAs Enable Convergence at the VM Edge

FCoE: The Fibre Channel over Ethernet (FCoE) protocol allows Fibre Channel traffic to run over Data Center Ethernet (DCE), converging LAN and SAN on one wire.

QLogic Converged Network Adapter solutions leverage core technologies and expertise, including the most established and proven driver stack in the industry. These adapters are designed for next-generation, virtualized, and unified data centers built on powerful multiprocessor, multicore servers, and are optimized to handle large numbers of virtual machines, with support for VM-aware network services and concurrent NIC, FCoE, and iSCSI traffic. A single 8300 Series CNA can be configured for connectivity to an Ethernet network while simultaneously delivering storage networking via Fibre Channel over Ethernet. Powerful iSCSI and FCoE hardware offloads improve system performance, and advanced virtualization technologies are supported through secure SR-IOV or switch- and OS-agnostic NIC Partitioning (NPAR). Combined with QLogic's Quality of Service (QoS) capability, this provides consistent, guaranteed, application-aware performance in dense VM environments.

For organizations maintaining parallel LAN and SAN architectures all the way to the server adapter, QLogic offers the QLE8300 Series of adapters supporting 10GbE LAN, NAS and iSCSI SAN traffic, as well as Fibre Channel traffic.

(Figures: Network convergence at the VM server. Adapter & fabric convergence: both ports carry LAN, NAS and SAN traffic over Ethernet to an Ethernet ToR switch. Adapter convergence with separate fabrics: one port used as an FCoE CNA to an FCoE ToR switch, one port used as an Ethernet NIC to an Ethernet ToR switch.)
Low-Latency Connectivity

8300 Series CNAs Offload the VM Kernel from Switching Virtual NICs

Latency: the time between the start and completion of one action, measured in microseconds (μs).

Single Root I/O Virtualization (SR-IOV) is a standard that allows one PCI Express (PCIe) adapter to be presented as multiple separate logical devices to virtual machines, partitioning adapter bandwidth. The hypervisor manages the Physical Function (PF) while the Virtual Functions (VFs) are exposed to the virtual machines. SR-IOV-capable network devices offer the benefits of direct I/O, including reduced latency and reduced host CPU utilization. With SR-IOV enabled on a 10GbE NIC, pass-through functionality can be provided from a single adapter to multiple virtual machines through VFs.

To deploy SR-IOV today, an organization needs to ensure a minimum level of infrastructure (server hardware and OS) support for SR-IOV. In contrast, QLogic NPAR technology can be used today without SR-IOV's minimum dependency requirements.

(Figure: Implementing pass-through functions with SR-IOV on 8300 Series CNAs)
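The latency definition in the sidebar, time from start to completion of one action, is straightforward to apply in practice. A minimal sketch, using an arbitrary in-memory workload as a stand-in for an I/O operation:

```python
# Hedged sketch: measuring one operation's latency in microseconds,
# per the definition above. The workload is an arbitrary stand-in.
import time

def latency_us(op):
    """Return how long op() takes, in microseconds."""
    start = time.perf_counter()
    op()
    return (time.perf_counter() - start) * 1e6

elapsed = latency_us(lambda: sum(range(100_000)))
print(f"{elapsed:.1f} microseconds")
```

In an SR-IOV deployment, the same measurement taken inside a guest would show the direct-I/O path shaving the hypervisor's software-switching time off each operation.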
Scalability with 16Gb FC

In Transaction-Intensive and Bandwidth-Intensive Environments

Throughput: Bandwidth refers to the maximum potential data rate; throughput is the actual rate achieved. Both are measured as the amount of data transferred in a given time, such as megabytes per second (MBps).

For virtualized environments, the most critical measure of performance is the ability to scale as the number of VMs and application workloads increases. In testing conducted by QLogic, the QLogic QLE2672 delivered three times the transactions and double the bandwidth of 8Gb Fibre Channel adapters. The QLE2672 also demonstrated a 50% advantage over competitive products in read-only performance and 25% better mixed read/write performance. The superior performance of QLogic 16Gb Fibre Channel adapters translates to support for higher VM density and for more demanding Tier-1 applications.

QLogic achieves this superior performance by leveraging the advanced 16Gb Fibre Channel and PCIe® Gen3 specifications while maintaining backwards compatibility with existing Fibre Channel networks. The unique port-isolation architecture of the QLogic FlexSuite adapters ensures data integrity, security and deterministic, scalable performance to drive storage traffic at line rate across all ports. QoS enables IT teams to control and prioritize traffic, and paired with adapter partitioning technology, the QLE2672 can deliver the capacity-on-demand and multi-tenant features required by highly virtualized environments.

More Virtual CPUs for Scaling Tier-1 Apps

If you're concerned about hosting Tier-1 apps on VMs, rest assured: the argument about virtualizing Tier-1 apps is over. Even flagship enterprise applications such as Microsoft SQL Server 2012 and Exchange 2010 have adopted server virtualization as a best practice, and their CPU, memory, storage and networking requirements are well documented by Microsoft.
In the example on the right, a mission-critical OLTP workload running on a single SQL Server 2012 VM demonstrates linearly increasing transactional performance and reduced transaction response times as the number of virtual CPUs assigned to the workload is increased to the maximum of 64 now supported on Hyper-V. The number of transactions processed per second and the average response time were monitored as virtual CPUs were increased from 4 to 64; the OLTP workload and concurrent user counts remained constant.

(Figure: Hyper-V Virtual CPU Scalability With OLTP Workloads. Source: Microsoft)
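With the user count held constant, the observed drop in response time follows from Little's law (N = X · R): if throughput X scales linearly with vCPUs, response time R = N / X must fall proportionally. A minimal sketch with illustrative numbers:

```python
# Hedged sketch: Little's law applied to the OLTP scaling test above.
# Concurrency N is fixed, so response time R = N / X falls as throughput
# X rises. Throughput values are illustrative, not Microsoft's data.

def response_time_s(concurrent_users, throughput_tps):
    return concurrent_users / throughput_tps

N = 200  # constant concurrent user count, as in the test setup
for vcpus, tps in [(4, 1000), (16, 4000), (64, 16000)]:  # linear scaling
    print(vcpus, "vCPUs:", response_time_s(N, tps), "s per transaction")
```

This is also why stalled throughput scaling shows up immediately as flat or rising response times in such tests.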
Accelerating App Performance

10000 Series FabricCache Adapters Cache Hot VM Data

FabricCache: a QLogic architecture for sharing and replicating cache on a PCIe SSD adapter across a SAN.

The 10000 Series is the industry's first caching SAN adapter. This new class of server-based PCIe SSD/Fibre Channel HBAs uses the Fibre Channel network to cache and share SAN metadata. Its fast SLC flash SSD caches hot data stored on an FC SAN array; adding large caches to servers places the cache closest to the application, where it is insensitive to congestion. An advantage of this approach is that PCIe flash-based caching can be shared and replicated across servers for high availability and for cache coherency across migrating servers in a virtual machine cluster. For high availability, the cache LUNs from a FabricCache adapter in one server can fail over to a FabricCache adapter in another server. With the FabricCache architecture, this new generation of PCIe SSDs provides redundancy and failover for a new level of enterprise-class availability.
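The benefit of caching hot data at the server reduces to a simple expected-latency calculation. A minimal sketch; the latency figures are illustrative assumptions, not QLogic measurements:

```python
# Hedged sketch: average access latency with a server-side flash cache.
# cache_us and san_us are illustrative latencies, not measured values.

def effective_latency_us(hit_rate, cache_us=50.0, san_us=500.0):
    """Expected latency given the fraction of I/Os served from cache."""
    return hit_rate * cache_us + (1.0 - hit_rate) * san_us

for hit_rate in (0.0, 0.5, 0.9):
    print(hit_rate, round(effective_latency_us(hit_rate), 1), "us")
```

The higher the fraction of "hot" working-set I/O the cache captures, the closer average latency gets to flash speed rather than array round-trip speed.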
The Bottom Line

More VMs with Hyper-V, Xeon E5 and QLogic Server Adapters

16X: the improvement factor in memory per VM for Windows Server 2012 Hyper-V, addressing the biggest issue in scaling VMs.

Fabric-based networks are a fundamental requirement for supporting highly virtualized data centers, and Fibre Channel SANs are the nucleus of the next-generation Windows Server 2012 data center. If your goal is to increase VM density, Windows Server 2012 Hyper-V, combined with the latest generation of servers based on Intel Xeon E5 processors and QLogic server adapters, allows you to more than double the number of VMs per server while enjoying the same level of performance. Virtualization features like Microsoft vFC and Fibre Channel QoS from QLogic combine to deliver the reliability, performance, and flexibility necessary to manage the complexity and risks associated with virtualization projects. Choosing to virtualize Tier-1 data center applications or increase virtualization density with QLogic and Hyper-V will enable your business to leverage the built-in architecture of both products to increase availability, improve agility, and overcome scalability and performance concerns.

Hyper-V delivers improvements on all key virtualization metrics, making I/O performance critical:

                                    2008 R2 Hyper-V   2012 Hyper-V   Factor
  Host     Logical Processors       64 LPs            320 LPs        5x
           Physical Memory          1 TB              4 TB           4x
           Virtual CPUs per Host    512               2,048          4x
  VM       Virtual CPUs per VM      4                 64             16x
           Memory per VM            64GB              1TB            16x
           Active VMs per Host      384               1,024          2.7x
           Guest NUMA               No                Yes            -
  Cluster  Max Nodes                16                64             4x
           Max VMs                  1,000             8,000          8x
Resources

Related Links
• What's New in Hyper-V—Platform
• What's New in Hyper-V—Networking
• What's New in Hyper-V—Virtual Fibre Channel Storage
• What's New in Hyper-V—Storage Migration
• QLogic Fibre Channel Adapters
• QLogic Converged Network Adapters
• Acceleration for Microsoft SQL Servers

About the Authors

Rahul Shah, Director, IT Brand Pulse Labs
Rahul Shah has over 20 years of experience in senior engineering and product management positions with semiconductor, storage networking and IP networking manufacturers, including QLogic and Lantronics. At IT Brand Pulse, Rahul is responsible for managing the delivery of technical services ranging from hands-on testing to product launch collateral. You can contact Rahul at rahul.shah@itbrandpulse.com.

Tim Lustig, Director of Corporate Marketing, QLogic Corporation
With over 18 years of experience in the storage networking industry, Lustig has authored numerous papers and articles on all aspects of IT storage and has been a featured speaker at many industry conferences globally. As Director of Corporate Marketing at QLogic, Lustig is responsible for corporate communications, third-party testing and validation, outbound marketing activities and strategic product marketing directives. His responsibilities include customer research, evaluation of market conditions, press and media relations, social media and technical writing.