SlideShare a Scribd company logo
Microsoft
Virtual
Academy
Microsoft
Virtual
Academy
Part 1 | Windows Server 2012 Hyper-V
&. VMware vSphere 5.1
Part 2 | System Center 2012 SP1 &
VMware’s Private Cloud
(01) Introduction & Scalability
(05) Introduction & Overview of
System Center 2012
(02) Storage & Resource Management (06) Application Management
(03) Security, Multi-tenancy & Flexibility (07) Cross-Platform Management
(04) High-Availability & Resiliency (08) Foundation, Hybrid Clouds & Costs
** MEAL BREAK **

Live migration maintaining
Fibre Channel connectivity
SCALE AND
PERFORMANCE
Hyper-V host 1 Hyper-V host 2
Worldwide
Name Set B
Worldwide
Name Set A
Worldwide
Name Set B
• Unmediated access to a storage area
network (SAN)
• Hardware-based I/O path to virtual hard
disk stack
• N_Port ID Virtualization (NPIV) support
• Single Hyper-V host connected to different
SANs
• Up to four Virtual Fibre Channel adapters
on a virtual machine
• Multipath I/O (MPIO) functionality
• Live migration
Access Fibre Channel SAN
data from a virtual machine
Worldwide
Name Set A
Capabilities
• Improved performance of virtual hard disks
on 512e disks
• Ability to host virtual hard disks on native
4 KB disks
Benefits
• Reduces impact of 512e disks on virtual
hard disk stack
• Workloads complete more quickly
Virtual hard disk 4 KB block (blue) not
aligned with physical 4 KB boundary
9
SCALE AND
PERFORMANCE
Physical sector 0 Physical sector 1
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Logical sector
Sector bitmap
Features
• Storage capacity up to 64 TBs
• Corruption protection during power failures
• Optimal structure alignment for large-sector
disks
Benefits
• Increases storage capacity
• Protects data
• Helps to ensure quality performance on
large-sector disks
VHDX
SCALE AND
PERFORMANCE
Large allocations
and 1 MB aligned
Header region
Data region (large allocations and 1 MB aligned)
Metadata region (small allocations and unaligned)
Intent log
Block Allocation
Table (BAT)
Metadata table
User data blocks
Sector bitmap blocks
User metadata
File metadata
Header
• Virtualization of storage with
Storage Pools and Storage
Spaces
• Storage resilience and
availability with commodity
hardware
• Resiliency and data
redundancy through
n-way mirroring (clustered or
unclustered) or parity mode
(unclustered)
• Utilization optimized through
thin and trim provisioning
and enclosure awareness
• Integration with other
Windows Server 2012
capabilities
• Serial Attached SCSI (SAS)
and Serial AT Attachment
(SATA) interconnects
Windows
Virtualized
Storage
Windows Application Server or File Server
Physical or
virtualized
deployments
Physical
Storage
(Shared) SAS or SATA
Integrated
with other
Windows
Server 2012
capabilities
Storage PoolStorage Pool
File Server
Administration Console
Hyper-V
Cluster Shared
Volume
Failover Clustering
SMB Multichannel
NFS Windows Storage Mgmt.
NTFS SMB Direct
1
Storage Space Storage Space Storage Space
ENTERPRISE-CLASS
FEATURES ON LESS
EXPENSIVE HARDWARE
1
External Intelligent Storage Array
Virtual Disk Virtual Disk
Token
IMPROVED
PERFORMANCE,
MORE CHOICE
Benefits
• Rapid virtual machine provisioning and
migration
• Faster transfers on large files
• Minimized latency
• Maximized array throughput
• Less CPU and network use
• Performance not limited by network
throughput or server use
• Improved datacenter capacity and scale
Offloaded Data Transfer
(ODX)
Token-based data transfer between
intelligent storage arrays
0
20
40
60
80
100
120
140
160
180
200
Average Desktop ODX
Creation of a 10 GB Fixed Disk
Time (seconds)
<1 Second!
~3 Minutes
Capability
Hyper-V
(2012)
vSphere
Hypervisor
vSphere 5.1
Enterprise Plus
Virtual Fiber Channel Yes Yes Yes
3rd Party Multipathing (MPIO) Yes No Yes (VAMP)1
Native 4-KB Disk Support Yes No No
Maximum Virtual Disk Size 64TB VHDX 2TB VMDK 2TB VMDK
Maximum Pass Through Disk Size 256TB+2 64TB 64TB
Offloaded Data Transfer Yes No Yes (VAAI)3
Boot from USB Disk Yes4 Yes Yes
Storage Pooling Yes No No
1 vStorage API for Multipathing (VAMP) is only available in Enterprise & Enterprise Plus editions of vSphere 5.1
2 The maximum size of a physical disk attached to a virtual machine is determined by the guest operating system and
the chosen file system within the guest. More recent Windows Server operating systems support disks in excess of
256TB in size
3 vStorage API for Array Integration (VAAI) is only available in Enterprise & Enterprise Plus editions of vSphere 5.1
4 Hyper-V Server 2012 Only
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf and
http://www.vmware.com/products/vsphere/buy/editions_comparison.html
VM1
Maximum
memory
Improvements for Hyper-V
CONTINUOUS
SERVICES
Hyper-V
Physical
memory
pool
Minimum
memory
Maximum
memory Memory in use
Physical
memory
pool
Memory in use
Physical
memory
pool
Administrator can
increase maximum
memory without a
restart
Dynamic Memory
• Introduced in Windows Server 2008 R2 SP1
• Reallocates memory automatically among
running virtual machines
Windows Server 2012
improvements
• Minimum memory
• Hyper-V smart paging
• Memory ballooning
• Runtime configuration
Hyper-V
VM1
Maximum
memory
Improvements for Hyper-V
Virtual machine starting with
Hyper-V smart paging
CONTINUOUS
SERVICES
Minimum
memory
VMn
Minimum
memory
Maximum
memory
VM2
Maximum
memory
Minimum
memory
Physical
memory
pool
Physical
memory
pool
Physical
memory
pool
Startup increases
memory in use
Paging file provides
additional memory
for startup
Benefits
• Higher consolidation numbers
• Improved reliability of Hyper-V operations
• Ability to increase maximum memory
configuration with minimal downtime
Removing paged memory after
virtual machine restart
Memory reclaimed
after startup
Physical
memory
pool
Memory in use
after startup
Features
• Uses resource pools
• Compatible with all Hyper-V
operations
• Unaffected by virtual machine
movement
• Uses Network Metering Port
ACLs
Benefits of
Resource
Metering
• Easier to track virtual machine
use
• Can be used to aggregate
data for multiple virtual
machines
• Can be used to build accurate
lookback and chargeback
solutions
• Easier to obtain resource use
data
Metrics
Average CPU use
Average memory use
Minimum memory use
Maximum memory use
Maximum disk allocation
Incoming network traffic
Outgoing network traffic
ISOLATION AND
MULTITENANCY
A two-tenant environment built with
Hyper-V in Windows Server 2012
Virtual Machine
Resource Metering
20 10
Resource Pool Internet
Resource Metering10 5
3045 2540
Resource Pool
Internet
Customer 2
30 55
Resource Pool
Internet
Customer 1
0
0 0
0
1015202530354045505152025
Features and mechanisms
Relative minimum
bandwidth
Strict minimum
bandwidth
Bandwidth
oversubscription
CONTINUOUS
SERVICES
Features
• Establishes a bandwidth floor
• Assigns specified bandwidth for each type
of traffic
• Helps to ensure fair sharing during
congestion
• Can exceed quota with no congestion
Two mechanisms
• Enhanced packet scheduler (software)
• Network adapter with DCB support
(hardware)
Normal
priority
High
priority
Critical
Hyper-V Extensible Switch
W=1 W=2 W=5
Bronze
tenant
Silver
tenant
Gold
tenant
Hyper-V Extensible Switch
100 MB 200 MB 500 MB
1 Gbps
Gold
tenant
Gold
tenant
Gold
tenant
Hyper-V Extensible Switch
500 MB 500 MB 500 MB
1 Gbps
NIC Teaming
1 Gbps
Capability
Hyper-V
(2012)
vSphere
Hypervisor
vSphere 5.1
Enterprise Plus
Dynamic Memory Yes Yes Yes
Resource Metering Yes Yes1 Yes
Quality of Service Yes No Yes2
Data Center Bridging (DCB) Yes Yes Yes
1 Without vCenter, Resource Metering in the vSphere Hypervisor is only available on an individual host by host basis.
2 Quality of Service (QoS) is only available in the Enterprise Plus edition of vSphere 5.1
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf and
http://www.vmware.com/products/vsphere/buy/editions_comparison.html
©2013 Microsoft Corporation. All rights reserved. Microsoft, Windows, Office, Azure, System Center, Dynamics and other product names are or may be registered trademarks and/or trademarks in the
U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft
must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after
the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.

More Related Content

What's hot

12 christian ferber xen_server_advanced
12 christian ferber xen_server_advanced12 christian ferber xen_server_advanced
12 christian ferber xen_server_advancedDigicomp Academy AG
 
VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract...
VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract...VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract...
VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract...VMworld
 
Introducing Xen Server
Introducing Xen ServerIntroducing Xen Server
Introducing Xen ServerStephenRice86
 
VMware Virtual SAN Presentation
VMware Virtual SAN PresentationVMware Virtual SAN Presentation
VMware Virtual SAN Presentationvirtualsouthwest
 
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущееxen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущееDenis Gundarev
 
VMworld 2013: Virtualizing Highly Available SQL Servers
VMworld 2013: Virtualizing Highly Available SQL Servers VMworld 2013: Virtualizing Highly Available SQL Servers
VMworld 2013: Virtualizing Highly Available SQL Servers VMworld
 
EVO-RAIL 2.0 Overview Deck
EVO-RAIL 2.0 Overview DeckEVO-RAIL 2.0 Overview Deck
EVO-RAIL 2.0 Overview DeckErik Bussink
 
VMware vSAN - Novosco, June 2017
VMware vSAN - Novosco, June 2017VMware vSAN - Novosco, June 2017
VMware vSAN - Novosco, June 2017Novosco
 
Christian ferver xen server_6.1_overview
Christian ferver xen server_6.1_overviewChristian ferver xen server_6.1_overview
Christian ferver xen server_6.1_overviewDigicomp Academy AG
 
Xen server 6.1 technical sales presentation
Xen server 6.1 technical sales presentationXen server 6.1 technical sales presentation
Xen server 6.1 technical sales presentationNuno Alves
 
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best PracticesVMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best PracticesVMworld
 
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use CasesVMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use CasesVMworld
 
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...VMworld
 
VMworld 2013: What's New in VMware vSphere?
VMworld 2013: What's New in VMware vSphere? VMworld 2013: What's New in VMware vSphere?
VMworld 2013: What's New in VMware vSphere? VMworld
 
Nashville VMUG Keynote April 8 2015 - vSphere 6
Nashville VMUG Keynote April 8 2015 - vSphere 6Nashville VMUG Keynote April 8 2015 - vSphere 6
Nashville VMUG Keynote April 8 2015 - vSphere 6Adam Eckerle
 
VMworld Europe 2014: Top 10 Do’s / Don’ts of Data Protection For VMware vSphere
VMworld Europe 2014: Top 10 Do’s / Don’ts of Data Protection For VMware vSphereVMworld Europe 2014: Top 10 Do’s / Don’ts of Data Protection For VMware vSphere
VMworld Europe 2014: Top 10 Do’s / Don’ts of Data Protection For VMware vSphereVMworld
 
VMworld Europe 2014: Storage DRS - Deep Dive and Best Practices
VMworld Europe 2014: Storage DRS - Deep Dive and Best PracticesVMworld Europe 2014: Storage DRS - Deep Dive and Best Practices
VMworld Europe 2014: Storage DRS - Deep Dive and Best PracticesVMworld
 
VMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep DiveVMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep DiveVMworld
 

What's hot (20)

12 christian ferber xen_server_advanced
12 christian ferber xen_server_advanced12 christian ferber xen_server_advanced
12 christian ferber xen_server_advanced
 
VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract...
VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract...VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract...
VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract...
 
Introducing Xen Server
Introducing Xen ServerIntroducing Xen Server
Introducing Xen Server
 
VMware Virtual SAN Presentation
VMware Virtual SAN PresentationVMware Virtual SAN Presentation
VMware Virtual SAN Presentation
 
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущееxen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
 
VMworld 2013: Virtualizing Highly Available SQL Servers
VMworld 2013: Virtualizing Highly Available SQL Servers VMworld 2013: Virtualizing Highly Available SQL Servers
VMworld 2013: Virtualizing Highly Available SQL Servers
 
EVO-RAIL 2.0 Overview Deck
EVO-RAIL 2.0 Overview DeckEVO-RAIL 2.0 Overview Deck
EVO-RAIL 2.0 Overview Deck
 
VMware vSAN - Novosco, June 2017
VMware vSAN - Novosco, June 2017VMware vSAN - Novosco, June 2017
VMware vSAN - Novosco, June 2017
 
Christian ferver xen server_6.1_overview
Christian ferver xen server_6.1_overviewChristian ferver xen server_6.1_overview
Christian ferver xen server_6.1_overview
 
Xen server 6.1 technical sales presentation
Xen server 6.1 technical sales presentationXen server 6.1 technical sales presentation
Xen server 6.1 technical sales presentation
 
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best PracticesVMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
 
ebk EVO-RAIL v104
ebk EVO-RAIL v104ebk EVO-RAIL v104
ebk EVO-RAIL v104
 
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use CasesVMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
 
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
VMworld 2013: Lowering TCO for Virtual Desktops with VMware View and VMware V...
 
VMworld 2013: What's New in VMware vSphere?
VMworld 2013: What's New in VMware vSphere? VMworld 2013: What's New in VMware vSphere?
VMworld 2013: What's New in VMware vSphere?
 
3 Hyper V
3 Hyper V3 Hyper V
3 Hyper V
 
Nashville VMUG Keynote April 8 2015 - vSphere 6
Nashville VMUG Keynote April 8 2015 - vSphere 6Nashville VMUG Keynote April 8 2015 - vSphere 6
Nashville VMUG Keynote April 8 2015 - vSphere 6
 
VMworld Europe 2014: Top 10 Do’s / Don’ts of Data Protection For VMware vSphere
VMworld Europe 2014: Top 10 Do’s / Don’ts of Data Protection For VMware vSphereVMworld Europe 2014: Top 10 Do’s / Don’ts of Data Protection For VMware vSphere
VMworld Europe 2014: Top 10 Do’s / Don’ts of Data Protection For VMware vSphere
 
VMworld Europe 2014: Storage DRS - Deep Dive and Best Practices
VMworld Europe 2014: Storage DRS - Deep Dive and Best PracticesVMworld Europe 2014: Storage DRS - Deep Dive and Best Practices
VMworld Europe 2014: Storage DRS - Deep Dive and Best Practices
 
VMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep DiveVMworld 2014: Virtual SAN Architecture Deep Dive
VMworld 2014: Virtual SAN Architecture Deep Dive
 

Viewers also liked

VMWARE Professionals - App Management
VMWARE Professionals - App ManagementVMWARE Professionals - App Management
VMWARE Professionals - App ManagementPaulo Freitas
 
VMWARE Professionals - Foundation Hybrid Clouds and Costs
VMWARE Professionals -  Foundation Hybrid Clouds and CostsVMWARE Professionals -  Foundation Hybrid Clouds and Costs
VMWARE Professionals - Foundation Hybrid Clouds and CostsPaulo Freitas
 
VMWARE Professionals - Availability and Resiliency
VMWARE Professionals -  Availability and ResiliencyVMWARE Professionals -  Availability and Resiliency
VMWARE Professionals - Availability and ResiliencyPaulo Freitas
 
VMWARE Professionals - Cross-Plattform Mangement
VMWARE Professionals -  Cross-Plattform MangementVMWARE Professionals -  Cross-Plattform Mangement
VMWARE Professionals - Cross-Plattform MangementPaulo Freitas
 
Office Comunnications Server 2007 R2 Poster
Office Comunnications Server 2007 R2 PosterOffice Comunnications Server 2007 R2 Poster
Office Comunnications Server 2007 R2 PosterPaulo Freitas
 
VMWARE Professionals - Intro to System Center 2012 SP1
VMWARE Professionals -  Intro to System Center 2012 SP1VMWARE Professionals -  Intro to System Center 2012 SP1
VMWARE Professionals - Intro to System Center 2012 SP1Paulo Freitas
 

Viewers also liked (6)

VMWARE Professionals - App Management
VMWARE Professionals - App ManagementVMWARE Professionals - App Management
VMWARE Professionals - App Management
 
VMWARE Professionals - Foundation Hybrid Clouds and Costs
VMWARE Professionals -  Foundation Hybrid Clouds and CostsVMWARE Professionals -  Foundation Hybrid Clouds and Costs
VMWARE Professionals - Foundation Hybrid Clouds and Costs
 
VMWARE Professionals - Availability and Resiliency
VMWARE Professionals -  Availability and ResiliencyVMWARE Professionals -  Availability and Resiliency
VMWARE Professionals - Availability and Resiliency
 
VMWARE Professionals - Cross-Plattform Mangement
VMWARE Professionals -  Cross-Plattform MangementVMWARE Professionals -  Cross-Plattform Mangement
VMWARE Professionals - Cross-Plattform Mangement
 
Office Comunnications Server 2007 R2 Poster
Office Comunnications Server 2007 R2 PosterOffice Comunnications Server 2007 R2 Poster
Office Comunnications Server 2007 R2 Poster
 
VMWARE Professionals - Intro to System Center 2012 SP1
VMWARE Professionals -  Intro to System Center 2012 SP1VMWARE Professionals -  Intro to System Center 2012 SP1
VMWARE Professionals - Intro to System Center 2012 SP1
 

Similar to VMWARE Professionals - Storage and Resources

Windows Server 2012 Deep-Dive - EPC Group
Windows Server 2012 Deep-Dive - EPC GroupWindows Server 2012 Deep-Dive - EPC Group
Windows Server 2012 Deep-Dive - EPC GroupEPC Group
 
Virtualizacao de Servidores - Windows
Virtualizacao de Servidores - WindowsVirtualizacao de Servidores - Windows
Virtualizacao de Servidores - WindowsSergio Maia
 
Windows Server 2012 Deep-Dive - EPC Group
Windows Server 2012 Deep-Dive - EPC GroupWindows Server 2012 Deep-Dive - EPC Group
Windows Server 2012 Deep-Dive - EPC GroupEPC Group
 
V sphere 5.1 what's new presentation, customer
V sphere 5.1   what's new presentation, customerV sphere 5.1   what's new presentation, customer
V sphere 5.1 what's new presentation, customersolarisyourep
 
Storage Enhancements in Windows 2012 R2
Storage Enhancements in Windows 2012 R2Storage Enhancements in Windows 2012 R2
Storage Enhancements in Windows 2012 R2Michael Rüefli
 
Advantages of HyperV over vSphere 5.1
Advantages of HyperV over vSphere 5.1Advantages of HyperV over vSphere 5.1
Advantages of HyperV over vSphere 5.1uNIX Jim
 
Dell EMC: Virtualizace není jen hypervisor!
Dell EMC: Virtualizace není jen hypervisor!Dell EMC: Virtualizace není jen hypervisor!
Dell EMC: Virtualizace není jen hypervisor!MarketingArrowECS_CZ
 
TechWiseTV Workshop: Cisco HyperFlex Systems
TechWiseTV Workshop: Cisco HyperFlex SystemsTechWiseTV Workshop: Cisco HyperFlex Systems
TechWiseTV Workshop: Cisco HyperFlex SystemsRobb Boyd
 
Server Virtualization using Hyper-V
Server Virtualization using Hyper-VServer Virtualization using Hyper-V
Server Virtualization using Hyper-VMd Yousup Faruqu
 
Net app virtualization preso
Net app virtualization presoNet app virtualization preso
Net app virtualization presoAccenture
 
Hyper-V’s Virtualization Enhancements - EPC Group
Hyper-V’s Virtualization Enhancements - EPC GroupHyper-V’s Virtualization Enhancements - EPC Group
Hyper-V’s Virtualization Enhancements - EPC GroupEPC Group
 
Секреты виртуализации - Windows Server 2012 Hyper-V
Секреты виртуализации - Windows Server 2012 Hyper-VСекреты виртуализации - Windows Server 2012 Hyper-V
Секреты виртуализации - Windows Server 2012 Hyper-VВиталий Стародубцев
 
Microsoft Server Virtualization and Private Cloud
Microsoft Server Virtualization and Private CloudMicrosoft Server Virtualization and Private Cloud
Microsoft Server Virtualization and Private CloudMd Yousup Faruqu
 
70-412 Objectives
70-412 Objectives70-412 Objectives
70-412 Objectivespupeadra
 
VMware@night - Was ist neu in VMware Horizon View 5.3 und Mirage 4.3
VMware@night - Was ist neu in VMware Horizon View 5.3 und Mirage 4.3VMware@night - Was ist neu in VMware Horizon View 5.3 und Mirage 4.3
VMware@night - Was ist neu in VMware Horizon View 5.3 und Mirage 4.3Digicomp Academy AG
 
Comparing MS Cloud with VMware Cloud
Comparing MS Cloud with VMware CloudComparing MS Cloud with VMware Cloud
Comparing MS Cloud with VMware CloudTudor Damian
 
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyTrack 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyEMC Forum India
 
VMWARE Professionals - Security, Multitenancy and Flexibility
VMWARE Professionals -  Security, Multitenancy and FlexibilityVMWARE Professionals -  Security, Multitenancy and Flexibility
VMWARE Professionals - Security, Multitenancy and FlexibilityPaulo Freitas
 
Keynote talk on Windows 8 - Jeff Stokes
Keynote talk on Windows 8 - Jeff StokesKeynote talk on Windows 8 - Jeff Stokes
Keynote talk on Windows 8 - Jeff StokesJeff Stokes
 

Similar to VMWARE Professionals - Storage and Resources (20)

Windows Server 2012 Deep-Dive - EPC Group
Windows Server 2012 Deep-Dive - EPC GroupWindows Server 2012 Deep-Dive - EPC Group
Windows Server 2012 Deep-Dive - EPC Group
 
Virtualizacao de Servidores - Windows
Virtualizacao de Servidores - WindowsVirtualizacao de Servidores - Windows
Virtualizacao de Servidores - Windows
 
Windows Server 2012 Deep-Dive - EPC Group
Windows Server 2012 Deep-Dive - EPC GroupWindows Server 2012 Deep-Dive - EPC Group
Windows Server 2012 Deep-Dive - EPC Group
 
V sphere 5.1 what's new presentation, customer
V sphere 5.1   what's new presentation, customerV sphere 5.1   what's new presentation, customer
V sphere 5.1 what's new presentation, customer
 
Storage Enhancements in Windows 2012 R2
Storage Enhancements in Windows 2012 R2Storage Enhancements in Windows 2012 R2
Storage Enhancements in Windows 2012 R2
 
Advantages of HyperV over vSphere 5.1
Advantages of HyperV over vSphere 5.1Advantages of HyperV over vSphere 5.1
Advantages of HyperV over vSphere 5.1
 
Dell EMC: Virtualizace není jen hypervisor!
Dell EMC: Virtualizace není jen hypervisor!Dell EMC: Virtualizace není jen hypervisor!
Dell EMC: Virtualizace není jen hypervisor!
 
TechWiseTV Workshop: Cisco HyperFlex Systems
TechWiseTV Workshop: Cisco HyperFlex SystemsTechWiseTV Workshop: Cisco HyperFlex Systems
TechWiseTV Workshop: Cisco HyperFlex Systems
 
vSphere
vSpherevSphere
vSphere
 
Server Virtualization using Hyper-V
Server Virtualization using Hyper-VServer Virtualization using Hyper-V
Server Virtualization using Hyper-V
 
Net app virtualization preso
Net app virtualization presoNet app virtualization preso
Net app virtualization preso
 
Hyper-V’s Virtualization Enhancements - EPC Group
Hyper-V’s Virtualization Enhancements - EPC GroupHyper-V’s Virtualization Enhancements - EPC Group
Hyper-V’s Virtualization Enhancements - EPC Group
 
Секреты виртуализации - Windows Server 2012 Hyper-V
Секреты виртуализации - Windows Server 2012 Hyper-VСекреты виртуализации - Windows Server 2012 Hyper-V
Секреты виртуализации - Windows Server 2012 Hyper-V
 
Microsoft Server Virtualization and Private Cloud
Microsoft Server Virtualization and Private CloudMicrosoft Server Virtualization and Private Cloud
Microsoft Server Virtualization and Private Cloud
 
70-412 Objectives
70-412 Objectives70-412 Objectives
70-412 Objectives
 
VMware@night - Was ist neu in VMware Horizon View 5.3 und Mirage 4.3
VMware@night - Was ist neu in VMware Horizon View 5.3 und Mirage 4.3VMware@night - Was ist neu in VMware Horizon View 5.3 und Mirage 4.3
VMware@night - Was ist neu in VMware Horizon View 5.3 und Mirage 4.3
 
Comparing MS Cloud with VMware Cloud
Comparing MS Cloud with VMware CloudComparing MS Cloud with VMware Cloud
Comparing MS Cloud with VMware Cloud
 
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyTrack 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
 
VMWARE Professionals - Security, Multitenancy and Flexibility
VMWARE Professionals -  Security, Multitenancy and FlexibilityVMWARE Professionals -  Security, Multitenancy and Flexibility
VMWARE Professionals - Security, Multitenancy and Flexibility
 
Keynote talk on Windows 8 - Jeff Stokes
Keynote talk on Windows 8 - Jeff StokesKeynote talk on Windows 8 - Jeff Stokes
Keynote talk on Windows 8 - Jeff Stokes
 

More from Paulo Freitas

VMWARE Professionals - Intro and Scale
VMWARE Professionals -  Intro and ScaleVMWARE Professionals -  Intro and Scale
VMWARE Professionals - Intro and ScalePaulo Freitas
 
Hyper-V Integration with other System Center 2012 Components
Hyper-V Integration with other System Center 2012 ComponentsHyper-V Integration with other System Center 2012 Components
Hyper-V Integration with other System Center 2012 ComponentsPaulo Freitas
 
Hyper-V Integration with System Center 2012 Virtual Machine Manager
Hyper-V Integration with System Center 2012 Virtual Machine ManagerHyper-V Integration with System Center 2012 Virtual Machine Manager
Hyper-V Integration with System Center 2012 Virtual Machine ManagerPaulo Freitas
 
Hyper-V High Availability and Live Migration
Hyper-V High Availability and Live MigrationHyper-V High Availability and Live Migration
Hyper-V High Availability and Live MigrationPaulo Freitas
 
Hyper-V Infrastructure
Hyper-V InfrastructureHyper-V Infrastructure
Hyper-V InfrastructurePaulo Freitas
 
Windows Server 2012 R2 Jump Start - WEB
Windows Server 2012 R2 Jump Start - WEBWindows Server 2012 R2 Jump Start - WEB
Windows Server 2012 R2 Jump Start - WEBPaulo Freitas
 
Windows Server 2012 R2 Jump Start - AIP
Windows Server 2012 R2 Jump Start - AIPWindows Server 2012 R2 Jump Start - AIP
Windows Server 2012 R2 Jump Start - AIPPaulo Freitas
 
Windows Server 2012 R2 Jump Start - Intro
Windows Server 2012 R2 Jump Start - IntroWindows Server 2012 R2 Jump Start - Intro
Windows Server 2012 R2 Jump Start - IntroPaulo Freitas
 
Windows Server 2012 R2 Jump Start - Intro
Windows Server 2012 R2 Jump Start - IntroWindows Server 2012 R2 Jump Start - Intro
Windows Server 2012 R2 Jump Start - IntroPaulo Freitas
 
Multi site Clustering with Windows Server 2008 Enterprise
Multi site Clustering with Windows Server 2008 EnterpriseMulti site Clustering with Windows Server 2008 Enterprise
Multi site Clustering with Windows Server 2008 EnterprisePaulo Freitas
 
SQL Server 2008 R2 System Views Map
SQL Server 2008 R2 System Views MapSQL Server 2008 R2 System Views Map
SQL Server 2008 R2 System Views MapPaulo Freitas
 
Share Point 2010 Developer Platform Poster
Share Point 2010 Developer Platform PosterShare Point 2010 Developer Platform Poster
Share Point 2010 Developer Platform PosterPaulo Freitas
 
Remote Desktop Services Component Architecture
Remote Desktop Services Component ArchitectureRemote Desktop Services Component Architecture
Remote Desktop Services Component ArchitecturePaulo Freitas
 
Hyper-V Virtual Machine Mobility
Hyper-V Virtual Machine MobilityHyper-V Virtual Machine Mobility
Hyper-V Virtual Machine MobilityPaulo Freitas
 

More from Paulo Freitas (20)

VMWARE Professionals - Intro and Scale
VMWARE Professionals -  Intro and ScaleVMWARE Professionals -  Intro and Scale
VMWARE Professionals - Intro and Scale
 
Hyper-V Integration with other System Center 2012 Components
Hyper-V Integration with other System Center 2012 ComponentsHyper-V Integration with other System Center 2012 Components
Hyper-V Integration with other System Center 2012 Components
 
Hyper-V Integration with System Center 2012 Virtual Machine Manager
Hyper-V Integration with System Center 2012 Virtual Machine ManagerHyper-V Integration with System Center 2012 Virtual Machine Manager
Hyper-V Integration with System Center 2012 Virtual Machine Manager
 
Hyper-V High Availability and Live Migration
Hyper-V High Availability and Live MigrationHyper-V High Availability and Live Migration
Hyper-V High Availability and Live Migration
 
Hyper-V Management
Hyper-V ManagementHyper-V Management
Hyper-V Management
 
Hyper-V Storage
Hyper-V StorageHyper-V Storage
Hyper-V Storage
 
Hyper-V Networking
Hyper-V NetworkingHyper-V Networking
Hyper-V Networking
 
Hyper-V Infrastructure
Hyper-V InfrastructureHyper-V Infrastructure
Hyper-V Infrastructure
 
Windows Server 2012 R2 Jump Start - WEB
Windows Server 2012 R2 Jump Start - WEBWindows Server 2012 R2 Jump Start - WEB
Windows Server 2012 R2 Jump Start - WEB
 
Windows Server 2012 R2 Jump Start - AIP
Windows Server 2012 R2 Jump Start - AIPWindows Server 2012 R2 Jump Start - AIP
Windows Server 2012 R2 Jump Start - AIP
 
Windows Server 2012 R2 Jump Start - Intro
Windows Server 2012 R2 Jump Start - IntroWindows Server 2012 R2 Jump Start - Intro
Windows Server 2012 R2 Jump Start - Intro
 
Windows Server 2012 R2 Jump Start - Intro
Windows Server 2012 R2 Jump Start - IntroWindows Server 2012 R2 Jump Start - Intro
Windows Server 2012 R2 Jump Start - Intro
 
Multi site Clustering with Windows Server 2008 Enterprise
Multi site Clustering with Windows Server 2008 EnterpriseMulti site Clustering with Windows Server 2008 Enterprise
Multi site Clustering with Windows Server 2008 Enterprise
 
SQL Server 2008 R2 System Views Map
SQL Server 2008 R2 System Views MapSQL Server 2008 R2 System Views Map
SQL Server 2008 R2 System Views Map
 
Share Point 2010 Developer Platform Poster
Share Point 2010 Developer Platform PosterShare Point 2010 Developer Platform Poster
Share Point 2010 Developer Platform Poster
 
Remote Desktop Services Component Architecture
Remote Desktop Services Component ArchitectureRemote Desktop Services Component Architecture
Remote Desktop Services Component Architecture
 
Hyper-V Virtual Machine Mobility
Hyper-V Virtual Machine MobilityHyper-V Virtual Machine Mobility
Hyper-V Virtual Machine Mobility
 
Hyper-v Storage
Hyper-v StorageHyper-v Storage
Hyper-v Storage
 
Hyper-v Replica
Hyper-v ReplicaHyper-v Replica
Hyper-v Replica
 
Hyper-V Networking
Hyper-V NetworkingHyper-V Networking
Hyper-V Networking
 

Recently uploaded

UiPath Test Automation using UiPath Test Suite series, part 1
UiPath Test Automation using UiPath Test Suite series, part 1UiPath Test Automation using UiPath Test Suite series, part 1
UiPath Test Automation using UiPath Test Suite series, part 1DianaGray10
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...Product School
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...Product School
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backElena Simperl
 
In-Depth Performance Testing Guide for IT Professionals
In-Depth Performance Testing Guide for IT ProfessionalsIn-Depth Performance Testing Guide for IT Professionals
In-Depth Performance Testing Guide for IT ProfessionalsExpeed Software
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...Product School
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaRTTS
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Alison B. Lowndes
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Product School
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsPaul Groth
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...Product School
 
Optimizing NoSQL Performance Through Observability
Optimizing NoSQL Performance Through ObservabilityOptimizing NoSQL Performance Through Observability
Optimizing NoSQL Performance Through ObservabilityScyllaDB
 
Free and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
Free and Effective: Making Flows Publicly Accessible, Yumi IbrahimzadeFree and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
Free and Effective: Making Flows Publicly Accessible, Yumi IbrahimzadeCzechDreamin
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
 
IESVE for Early Stage Design and Planning
IESVE for Early Stage Design and PlanningIESVE for Early Stage Design and Planning
IESVE for Early Stage Design and PlanningIES VE
 
Demystifying gRPC in .Net by John Staveley
Demystifying gRPC in .Net by John StaveleyDemystifying gRPC in .Net by John Staveley
Demystifying gRPC in .Net by John StaveleyJohn Staveley
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesThousandEyes
 
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)Julian Hyde
 

Recently uploaded (20)

UiPath Test Automation using UiPath Test Suite series, part 1
UiPath Test Automation using UiPath Test Suite series, part 1UiPath Test Automation using UiPath Test Suite series, part 1
UiPath Test Automation using UiPath Test Suite series, part 1
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 
In-Depth Performance Testing Guide for IT Professionals
In-Depth Performance Testing Guide for IT ProfessionalsIn-Depth Performance Testing Guide for IT Professionals
In-Depth Performance Testing Guide for IT Professionals
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 
Optimizing NoSQL Performance Through Observability
Optimizing NoSQL Performance Through ObservabilityOptimizing NoSQL Performance Through Observability
Optimizing NoSQL Performance Through Observability
 
Free and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
Free and Effective: Making Flows Publicly Accessible, Yumi IbrahimzadeFree and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
Free and Effective: Making Flows Publicly Accessible, Yumi Ibrahimzade
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
 
IESVE for Early Stage Design and Planning
IESVE for Early Stage Design and PlanningIESVE for Early Stage Design and Planning
IESVE for Early Stage Design and Planning
 
Demystifying gRPC in .Net by John Staveley
Demystifying gRPC in .Net by John StaveleyDemystifying gRPC in .Net by John Staveley
Demystifying gRPC in .Net by John Staveley
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
Measures in SQL (a talk at SF Distributed Systems meetup, 2024-05-22)
 

VMWARE Professionals - Storage and Resources

  • 2. Microsoft Virtual Academy Part 1 | Windows Server 2012 Hyper-V &. VMware vSphere 5.1 Part 2 | System Center 2012 SP1 & VMware’s Private Cloud (01) Introduction & Scalability (05) Introduction & Overview of System Center 2012 (02) Storage & Resource Management (06) Application Management (03) Security, Multi-tenancy & Flexibility (07) Cross-Platform Management (04) High-Availability & Resiliency (08) Foundation, Hybrid Clouds & Costs ** MEAL BREAK **
  • 3.
  • 4.
  • 5.
  • 6.
  • 7. Live migration maintaining Fibre Channel connectivity SCALE AND PERFORMANCE Hyper-V host 1 Hyper-V host 2 Worldwide Name Set B Worldwide Name Set A Worldwide Name Set B • Unmediated access to a storage area network (SAN) • Hardware-based I/O path to virtual hard disk stack • N_Port ID Virtualization (NPIV) support • Single Hyper-V host connected to different SANs • Up to four Virtual Fibre Channel adapters on a virtual machine • Multipath I/O (MPIO) functionality • Live migration Access Fibre Channel SAN data from a virtual machine Worldwide Name Set A
  • 8.
  • 9. Capabilities • Improved performance of virtual hard disks on 512e disks • Ability to host virtual hard disks on native 4 KB disks Benefits • Reduces impact of 512e disks on virtual hard disk stack • Workloads complete more quickly Virtual hard disk 4 KB block (blue) not aligned with physical 4 KB boundary 9 SCALE AND PERFORMANCE Physical sector 0 Physical sector 1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Logical sector Sector bitmap
  • 10.
  • 11. Features • Storage capacity up to 64 TBs • Corruption protection during power failures • Optimal structure alignment for large-sector disks Benefits • Increases storage capacity • Protects data • Helps to ensure quality performance on large-sector disks VHDX SCALE AND PERFORMANCE Large allocations and 1 MB aligned Header region Data region (large allocations and 1 MB aligned) Metadata region (small allocations and unaligned) Intent log Block Allocation Table (BAT) Metadata table User data blocks Sector bitmap blocks User metadata File metadata Header
  • 12.
  • 13. • Virtualization of storage with Storage Pools and Storage Spaces • Storage resilience and availability with commodity hardware • Resiliency and data redundancy through n-way mirroring (clustered or unclustered) or parity mode (unclustered) • Utilization optimized through thin and trim provisioning and enclosure awareness • Integration with other Windows Server 2012 capabilities • Serial Attached SCSI (SAS) and Serial AT Attachment (SATA) interconnects Windows Virtualized Storage Windows Application Server or File Server Physical or virtualized deployments Physical Storage (Shared) SAS or SATA Integrated with other Windows Server 2012 capabilities Storage PoolStorage Pool File Server Administration Console Hyper-V Cluster Shared Volume Failover Clustering SMB Multichannel NFS Windows Storage Mgmt. NTFS SMB Direct 1 Storage Space Storage Space Storage Space ENTERPRISE-CLASS FEATURES ON LESS EXPENSIVE HARDWARE
  • 14.
  • 15. 1 External Intelligent Storage Array Virtual Disk Virtual Disk Token IMPROVED PERFORMANCE, MORE CHOICE Benefits • Rapid virtual machine provisioning and migration • Faster transfers on large files • Minimized latency • Maximized array throughput • Less CPU and network use • Performance not limited by network throughput or server use • Improved datacenter capacity and scale Offloaded Data Transfer (ODX) Token-based data transfer between intelligent storage arrays
  • 16. 0 20 40 60 80 100 120 140 160 180 200 Average Desktop ODX Creation of a 10 GB Fixed Disk Time (seconds) <1 Second! ~3 Minutes
  • 17.
  • 18. Capability Hyper-V (2012) vSphere Hypervisor vSphere 5.1 Enterprise Plus Virtual Fiber Channel Yes Yes Yes 3rd Party Multipathing (MPIO) Yes No Yes (VAMP)1 Native 4-KB Disk Support Yes No No Maximum Virtual Disk Size 64TB VHDX 2TB VMDK 2TB VMDK Maximum Pass Through Disk Size 256TB+2 64TB 64TB Offloaded Data Transfer Yes No Yes (VAAI)3 Boot from USB Disk Yes4 Yes Yes Storage Pooling Yes No No 1 vStorage API for Multipathing (VAMP) is only available in Enterprise & Enterprise Plus editions of vSphere 5.1 2 The maximum size of a physical disk attached to a virtual machine is determined by the guest operating system and the chosen file system within the guest. More recent Windows Server operating systems support disks in excess of 256TB in size 3 vStorage API for Array Integration (VAAI) is only available in Enterprise & Enterprise Plus editions of vSphere 5.1 4 Hyper-V Server 2012 Only vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf and http://www.vmware.com/products/vsphere/buy/editions_comparison.html
  • 19.
  • 20.
  • 21. VM1 Maximum memory Improvements for Hyper-V CONTINUOUS SERVICES Hyper-V Physical memory pool Minimum memory Maximum memory Memory in use Physical memory pool Memory in use Physical memory pool Administrator can increase maximum memory without a restart Dynamic Memory • Introduced in Windows Server 2008 R2 SP1 • Reallocates memory automatically among running virtual machines Windows Server 2012 improvements • Minimum memory • Hyper-V smart paging • Memory ballooning • Runtime configuration
  • 22. Hyper-V VM1 Maximum memory Improvements for Hyper-V Virtual machine starting with Hyper-V smart paging CONTINUOUS SERVICES Minimum memory VMn Minimum memory Maximum memory VM2 Maximum memory Minimum memory Physical memory pool Physical memory pool Physical memory pool Startup increases memory in use Paging file provides additional memory for startup Benefits • Higher consolidation numbers • Improved reliability of Hyper-V operations • Ability to increase maximum memory configuration with minimal downtime Removing paged memory after virtual machine restart Memory reclaimed after startup Physical memory pool Memory in use after startup
  • 23.
  • 24. Features • Uses resource pools • Compatible with all Hyper-V operations • Unaffected by virtual machine movement • Uses Network Metering Port ACLs Benefits of Resource Metering • Easier to track virtual machine use • Can be used to aggregate data for multiple virtual machines • Can be used to build accurate lookback and chargeback solutions • Easier to obtain resource use data Metrics Average CPU use Average memory use Minimum memory use Maximum memory use Maximum disk allocation Incoming network traffic Outgoing network traffic ISOLATION AND MULTITENANCY A two-tenant environment built with Hyper-V in Windows Server 2012 Virtual Machine Resource Metering 20 10 Resource Pool Internet Resource Metering10 5 3045 2540 Resource Pool Internet Customer 2 30 55 Resource Pool Internet Customer 1 0 0 0 0 1015202530354045505152025
  • 25.
  • 26. Features and mechanisms Relative minimum bandwidth Strict minimum bandwidth Bandwidth oversubscription CONTINUOUS SERVICES Features • Establishes a bandwidth floor • Assigns specified bandwidth for each type of traffic • Helps to ensure fair sharing during congestion • Can exceed quota with no congestion Two mechanisms • Enhanced packet scheduler (software) • Network adapter with DCB support (hardware) Normal priority High priority Critical Hyper-V Extensible Switch W=1 W=2 W=5 Bronze tenant Silver tenant Gold tenant Hyper-V Extensible Switch 100 MB 200 MB 500 MB 1 Gbps Gold tenant Gold tenant Gold tenant Hyper-V Extensible Switch 500 MB 500 MB 500 MB 1 Gbps NIC Teaming 1 Gbps
  • 27.
  • 28. Capability Hyper-V (2012) vSphere Hypervisor vSphere 5.1 Enterprise Plus Dynamic Memory Yes Yes Yes Resource Metering Yes Yes1 Yes Quality of Service Yes No Yes2 Data Center Bridging (DCB) Yes Yes Yes 1 Without vCenter, Resource Metering in the vSphere Hypervisor is only available on an individual host by host basis. 2 Quality of Service (QoS) is only available in the Enterprise Plus edition of vSphere 5.1 vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf and http://www.vmware.com/products/vsphere/buy/editions_comparison.html
  • 29.
  • 30. ©2013 Microsoft Corporation. All rights reserved. Microsoft, Windows, Office, Azure, System Center, Dynamics and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.

Editor's Notes

  1. Both Windows Server 2012 Hyper-V &amp; Hyper-V Server 2012 also introduce a number of enhanced storage capabilities to support the most intensive, mission-critical of workloads. These capabilities include:Virtual Fibre Channel – Enables virtual machines to integrate directly into Fiber Channel Storage Area Networks (SAN), unlocking scenarios such as fiber channel-based Hyper-V Guest Clusters.Support for 4-KB Disk Sectors in Hyper‑V Virtual Disks. Support for 4,000-byte (4‑KB) disk sectors lets customers take advantage of the emerging innovation in storage hardware that provides increased capacity and reliability.New Virtual Hard Disk Format. This new format, called VHDX, is designed to better handle current and future workloads and addresses the technological demands of an enterprise’s evolving needs by increasing storage capacity, protecting data, improving quality performance on 4-KB disks, and providing additional operation-enhancing features. The maximum size of a VHDX file is 64TB.
  2. Note: This slide has 2 Clicks for animation to describe how live migration works when you use Virtual Fibre Channel in the VM.Current situationYou need your virtualized workloads to connect to your existing storage arrays with as little trouble as possible. Many enterprises have already invested in Fibre Channel SANs, deploying them in their data centers to address their growing storage requirements. These customers often want the ability to use this storage from within their virtual machines instead of having it only accessible from and used by the Hyper‑V host.Virtual Fibre Channel for Hyper‑V, a new feature of Windows Server 2012, provides Fibre Channel ports within the guest operating system, which lets you connect to Fibre Channel directly from within virtual machines.With Windows Server 2012Virtual Fibre Channel support includes the following:Unmediated access to a SAN. Virtual Fibre Channel for Hyper‑V provides the guest operating system with unmediated access to a SAN by using a standard World Wide Name (WWN) associated with a virtual machine. Hyper‑V lets you use Fibre Channel SANs to virtualize workloads that require direct access to SAN logical unit numbers (LUNs). Fibre Channel SANs also allow you to operate in new scenarios, such as running the Windows Failover Cluster Management feature inside the guest operating system of a virtual machine connected to shared Fibre Channel storage.A hardware-based I/O path to the Windows software virtual hard disk stack. Mid-range and high-end storage arrays include advanced storage functionality that helps offload certain management tasks from the hosts to the SANs. Virtual Fibre Channel presents an alternative, hardware-based I/O path to the Windows software virtual hard disk stack. This path lets you use the advanced functionality of your SANs directly from Hyper‑V virtual machines. For example, Hyper‑V users can offload storage functionality (such as taking a snapshot of a LUN) to the SAN hardware simply by using a hardware Volume Shadow Copy Service (VSS) provider from within a Hyper‑V virtual machine.N_Port ID Virtualization (NPIV). NPIV is a Fibre Channel facility that allows multiple N_Port IDs to share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in SAN design, especially where virtual SANs are called for. Virtual Fibre Channel for Hyper‑V guests uses NPIV (T11 standard) to create multiple NPIV ports on top of the host’s physical Fibre Channel ports. A new NPIV port is created on the host each time a virtual host bus adapter (HBA) is created inside a virtual machine. When the virtual machine stops running on the host, the NPIV port is removed.A single Hyper‑V host connected to different SANs with multiple Fibre Channel ports. Hyper‑V allows you to define virtual SANs on the host to accommodate scenarios where a single Hyper‑V host is connected to different SANs via multiple Fibre Channel ports. A virtual SAN defines a named group of physical Fibre Channel ports that are connected to the same physical SAN. For example, assume a Hyper‑V host is connected to two SANs—a production SAN and a test SAN. The host is connected to each SAN through two physical Fibre Channel ports. In this example, you might configure two virtual SANs—one named “Production SAN” that has two physical Fibre Channel ports connected to the production SAN and one named “Test SAN” that has two physical Fibre Channel ports connected to the test SAN. You can use the same technique to name two separate paths to a single storage target.Up to four virtual Fibre Channel adapters on a virtual machine. You can configure as many as four virtual Fibre Channel adapters on a virtual machine and associate each one with a virtual SAN. Each virtual Fibre Channel adapter is associated with one WWN address, or two WWN addresses to support live migration. Each WWN address can be set automatically or manually.MPIO functionality. Hyper‑V in Windows Server 2012 can use the multipath I/O (MPIO) functionality to help ensure optimal connectivity to Fibre Channel storage from within a virtual machine. You can use MPIO functionality with Fibre Channel in the following ways:Virtualize workloads that use MPIO. Install multiple Fibre Channel ports in a virtual machine, and use MPIO to provide highly available connectivity to the LUNs accessible by the host.Configure multiple virtual Fibre Channel adapters inside a virtual machine, and use a separate copy of MPIO within the guest operating system of the virtual machine to connect to the LUNs the virtual machine can access. This configuration can coexist with a host MPIO setup.Use different device-specific modules (DSMs) for the host or each virtual machine. This approach allows live migration of the virtual machine configuration, including the configuration of DSM and connectivity between hosts and compatibility with existing server configurations and DSMs.Live migration support with virtual Fibre Channel in Hyper‑V: To support live migration of virtual machines across hosts running Hyper‑V while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper‑V automatically alternates between the Set A and Set B WWN addresses during a live migration. This helps to ensure that all LUNs are available on the destination host before the migration and minimal downtime occurs during the migration.Requirements for Virtual Fibre Channel in Hyper‑V:One or more installations of Windows Server 2012 with the Hyper‑V role installed. Hyper‑V requires a computer with processor support for hardware virtualization.A computer with one or more Fibre Channel HBAs, each with an updated HBA driver that supports Virtual Fibre Channel. Updated HBA drivers are included with the in-box HBA drivers for some models. Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 as the guest operating system.Connection only to data LUNs. Storage accessed through a Virtual Fibre Channel connected to a LUN can’t be used as boot media.
  3. Both Windows Server 2012 Hyper-V &amp; Hyper-V Server 2012 also introduce a number of enhanced storage capabilities to support the most intensive, mission-critical of workloads. These capabilities include:Virtual Fibre Channel – Enables virtual machines to integrate directly into Fiber Channel Storage Area Networks (SAN), unlocking scenarios such as fiber channel-based Hyper-V Guest Clusters.Support for 4-KB Disk Sectors in Hyper‑V Virtual Disks. Support for 4,000-byte (4‑KB) disk sectors lets customers take advantage of the emerging innovation in storage hardware that provides increased capacity and reliability.New Virtual Hard Disk Format. This new format, called VHDX, is designed to better handle current and future workloads and addresses the technological demands of an enterprise’s evolving needs by increasing storage capacity, protecting data, improving quality performance on 4-KB disks, and providing additional operation-enhancing features. The maximum size of a VHDX file is 64TB.
  4. Current situationIncreases in storage density and reliability (among other factors) are driving the data storage industry to transition the physical format of hard disk drives from 512-byte sectors to 4,096-byte sectors (also known as 4-KB sectors). However, most of the software industry depends on 512-byte disk sectors of. A change in sector size introduces major compatibility issues in many applications. To minimize the impact on the ecosystem, hard-drive vendors are introducing transitional “512-byte emulation drives,” also known as “512e.” These drives offer some of the advantages of 4-KB native drives, such as improved format efficiency and an improved scheme for error correction codes, but with fewer compatibility issues than by exposing a 4-KB sector size at the disk interface.With Windows Server 2012In Windows Server 2012, Hyper‑V supports 4-KB disk sectors.Support for improved performance of virtual hard disks on 512e disksA 512e disk can perform a write only in terms of a physical sector—that is, it can’t directly write a 512‑byte sector write issued to it. The internal process in the disk that makes this write possible follows these steps:The disk reads the 4-KB physical sector into its internal cache, which contains the 512-byte logical sector referred to in the write.Data in the 4-KB buffer is modified to include the updated 512-byte sector.The disk performs a write of the updated 4-KB buffer back to its physical sector on the disk.This process, called an RMW, causes performance degradation in virtual hard disks for the following reasons:Dynamic and differencing virtual hard disks have a 512-byte sector bitmap in front of their data payload. In addition, footer/header/parent locators all align to a 512-byte sector. It’s common for the virtual hard disk driver to issue 512-byte writes to update these structures, resulting in the RMW behavior just described.Applications commonly issue reads and writes in multiples of 4-KB sizes (the default cluster size of NTFS). Because there’s a 512-byte sector bitmap in front of the data payload block of dynamic and differencing virtual hard disks, the 4-KB blocks aren’t aligned to the physical 4-KB boundary, as shown in the figure.Support for hosting virtual hard disks on native 4-KB disksHyper‑V in Windows Server 2012 makes it possible to store virtual hard disks on 4-KB disks by implementing a software RMW algorithm in the virtual hard disk layer. This algorithm converts 512-byte access-and-update requests to corresponding 4-KB accesses and updates.BenefitsThe storage industry is introducing 4-KB physical format drives to provide increased capacity and reliability. Hyper‑V in Windows Server 2012 lets you take advantage of this emerging innovation in storage hardware with support for improved performance of virtual hard disks on 512e disks and support for hosting virtual hard disks on native 4-KB disks. Hyper‑V 4-KB disk sector support in Windows Server 2012 reduces the performance impact of 512e disks on the virtual hard disk stack, which lets workloads complete more quickly.Requirements for 4-KB disk sector support:Windows Server 2012Physical disk drives that use:512e formatNative 4-KB format
  5. Both Windows Server 2012 Hyper-V &amp; Hyper-V Server 2012 also introduce a number of enhanced storage capabilities to support the most intensive, mission-critical of workloads. These capabilities include:Virtual Fibre Channel – Enables virtual machines to integrate directly into Fiber Channel Storage Area Networks (SAN), unlocking scenarios such as fiber channel-based Hyper-V Guest Clusters.Support for 4-KB Disk Sectors in Hyper‑V Virtual Disks. Support for 4,000-byte (4‑KB) disk sectors lets customers take advantage of the emerging innovation in storage hardware that provides increased capacity and reliability.New Virtual Hard Disk Format. This new format, called VHDX, is designed to better handle current and future workloads and addresses the technological demands of an enterprise’s evolving needs by increasing storage capacity, protecting data, improving quality performance on 4-KB disks, and providing additional operation-enhancing features. The maximum size of a VHDX file is 64TB.
  6. With the evolution of storage systems, and the ever-increasing reliance on virtualized enterprise workloads, the VHD format of Windows Server needed to also evolve. The new format is better suited to address the current and future requirements for running enterprise-class workloads, specifically:Where the size of the VHD is larger then 2,040 GB. To reliably protect against issues for dynamic and differencing disks during power failures. To prevent performance degradation issues on the new, large-sector physical disks.Hyper‑V in Windows Server 2012 contains an update to the VHD format, called VHDX, that has much larger capacity and additional resiliency. VHDX supports up to 64 terabytes of storage. It also provides additional protection from corruption from power failures by logging updates to the VHDX metadata structures and prevents performance degradation on large-sector physical disks by optimizing structure alignment.Technical descriptionThe VHDX format’s principal new features are:Support for virtual hard disk storage capacity of up to 64 terabytes.Protection against corruption during power failures by logging updates to the VHDX metadata structures. The format contains an internal log that is used to capture updates to the metadata of the virtual hard disk file before being written to its final location. In case of a power failure, if the write to the final destination is corrupted, then it is played back from the log to promote consistency of the virtual hard disk file.Optimal structure alignment of the virtual hard disk format to suit large sector disks. If unaligned I/Os are issued to these disks, an associated performance penalty is caused by the Read-Modify-Write cycles that are required to satisfy these I/Os. The structures in the format are aligned to help ensure that are no unaligned I/Os exist.The VHDX format also provides the following features:Larger block sizes for dynamic and differential disks, which lets these disks attune to the needs of the workload.A 4-KB logical sector virtual disk that results in increased performance when applications and workloads that are designed for 4-KB sectors use it.The ability to store custom metadata about the file that you might want to record, such as operating system version or patches applied.Efficiency (called trim) in representing data, which results in smaller files and lets the underlying physical storage device reclaim unused space. (Trim requires pass-through or SCSI disks and trim-compatible hardware.)The figure illustrates the VHDX hard disk format.As you can see in the preceding figure, most of the structures are large allocations and are MB aligned. This alleviates the alignment issue that is associated with virtual hard disks. The different regions of the VHDX format are as follows:Header region. The header region is the first region of the file and identifies the location of the other structures, including the log, block allocation table (BAT), and metadata region. The header region contains two headers, only one of which is active at a time, to increase resiliency to corruptions.Intent log. The intent log is a circular ring buffer. Changes to the VHDX metastructures are written to the log before they are written to the final location. If corruption occurs during a power failure while an update is being written to the actual location, on the subsequent open, the change is applied again from the log, and the VHDX file is brought back to a consistent state. The log does not track changes to the payload blocks, so it does not protect data contained within them.Data region. The BAT contains entries that point to both the user data blocks and sector bitmap block locations within the VHDX file. This is an important difference from the VHD format because sector bitmaps are aggregated into their own blocks instead of being appended in front of each payload block.Metadata region. The metadata region contains a table that points to both user-defined metadata and virtual hard disk file metadata such as block size, physical sector size, and logical sector size.Hyper‑V in Windows Server 2012 also introduces support that lets VHDX files be more efficient when they represent that data within it. Because the VHDX files can be large, based on the workload they are supporting, the space they consume can grow quickly. Currently, when applications delete content within a virtual hard disk, the Windows storage stack in both the guest operating system and the Hyper‑V host have limitations that prevent this information from being communicated to the virtual hard disk and the physical storage device. This contains the Hyper‑V storage stack from optimizing the space used and prevents the underlying storage device from reclaiming the space previously occupied by the deleted data.In Windows Server 2012, Hyper‑V now supports unmap notifications, which lets VHDX files be more efficient in representing that data within it. This results in smaller files size, which lets the underlying physical storage device reclaim unused space.BenefitsVHDX, which is designed to handle current and future workloads, has a much larger storage capacity than the earlier formats and addresses the technological demands of evolving enterprises. The VDHX performance-enhancing features make it easier to handle large workloads, protect data better during power outages, and optimize structure alignments of dynamic and differential disks to prevent performance degradation on new, large-sector physical disks.RequirementsTo take advantage of the new version of the new VHDX format, you need the following:Windows Server 2012 or Windows 8The Hyper‑V server roleTo take advantage of the trim feature, you need the following:VHDX-based virtual disks connected as virtual SCSI devices or as directly attached physical disks (sometimes referred to as pass-through disks). This optimization also is supported for natively attached VHDX-based virtual disks.Trim-capable hardware.
  7. Storage Spaces – Storage Spaces transforms SAS &amp; SATA disks into storage pools, from which logical disks, or storage spaces, can then be provisioned. These spaces can be given different levels of resiliency and performance, can be thinly or fully provisioned, and support advanced features such as trim provisioning. Storage Spaces enable you to deliver a new category of highly capable storage solutions to all Windows customer segments at a dramatically lower price point. At the same time, you can maximize your operations by leveraging commodity storage to supply high-performance and feature-rich storage to servers, clusters, and applications alike.Offloaded Data Transfer (ODX). With offloaded data transfer support, the Hyper-V host can concentrate on the processing needs of the application and offload any storage-related tasks to the SAN, increasing performanceBoot from USB Disk: - Finally, with it’s reduced footprint, Microsoft Hyper-V Server 2012 supports installation to USB media, providing more deployment flexibility, especially in scenarios such as diskless servers. This is specific to Hyper-V Server 2012.
  8. We have had virtualization at Hyper-V layer over the last couple of releases of Windows server. With Windows Server 2012, we give you the ability to virtualize your storage solution. Storage spaces gives you the ability to consolidate all your SAS and SATA connected disks – no matter whether they are SSDs or traditional HDDs and consolidate them together as storage Pools. &lt;Click&gt;You can then assign these pools to different departments within your enterprise or customers so that your data is isolated and administration is easy. Once you have created these pools, you can then create logical disks from them, called Storage Space. &lt;Click&gt;These logical disks, for the most part looks and acts like regular disks. But they can be configured for different resiliency schemes – mirroring or parity depending the performance and space requirements.When you create a Storage space, you can either choose thin or fixed provisioning. This lets you have the ability to increase your storage investments only when you need. You can create a logical disk or Space that is bigger than your pool and add disks only when there is an actual need.&lt;Click&gt;Lets assume that your Hyper-V VMs are stored in logical disks created using Storage Spaces. With trim provisioning, when a large file gets deleted from one of the VMs, the VMs communicate this to the host and the host down to storage spaces and spaces will automatically reclaim this storage and assign it to other disks within the same pool or other pools. So you are optimizing storage utilization with on-demand provisioning and automated capacity reclamation.&lt;Click&gt;Storage Spaces is compatible with other Windows Server 2012 storage features, like SMB Direct and SMB Failover Clustering, so you can use simple inexpensive storage devices to create powerful and resilient storage infrastructures on a limited budget. &lt;Click&gt;Storage Spaces enable you to deliver a new category of highly capable storage solutions to all Windows customer segments at a dramatically lower price point. At the same time, you can maximize your operations by leveraging commodity storage to supply high-performance and feature-rich storage to servers, clusters, and applications alike.
  9. Storage Spaces – Storage Spaces transforms SAS &amp; SATA disks into storage pools, from which logical disks, or storage spaces, can then be provisioned. These spaces can be given different levels of resiliency and performance, can be thinly or fully provisioned, and support advanced features such as trim provisioning. Storage Spaces enable you to deliver a new category of highly capable storage solutions to all Windows customer segments at a dramatically lower price point. At the same time, you can maximize your operations by leveraging commodity storage to supply high-performance and feature-rich storage to servers, clusters, and applications alike.Offloaded Data Transfer (ODX). With offloaded data transfer support, the Hyper-V host can concentrate on the processing needs of the application and offload any storage-related tasks to the SAN, increasing performanceBoot from USB Disk: - Finally, with it’s reduced footprint, Microsoft Hyper-V Server 2012 supports installation to USB media, providing more deployment flexibility, especially in scenarios such as diskless servers. This is specific to Hyper-V Server 2012.
  10. Offloaded data transfer (ODX) in Windows Server 2012 enables you to accomplish more with your existing external storage arrays by letting you quickly move large files and virtual machines directly between storage arrays, which reduces host CPU and network resource consumption. Offloaded Data Transfer (ODX) support is a feature of the storage stack of Hyper‑V in Windows Server 2012. ODX, when used with offload-capable SAN storage hardware, lets a storage device perform a file copy operation without the main processor of the Hyper‑V host actually reading the content from one storage place and writing it to another.ODX uses a token-based mechanism for reading and writing data within or between intelligent storage arrays. Instead of routing the data through the host, a small token is copied between the source and destination. The token simply serves as a point-in-time representation of the data. As an example, when you copy a file or migrate a virtual machine between storage locations (either within or between storage arrays), a token that represents the virtual machine file is copied, which removes the need to copy the underlying data through the servers. In a token-based copy operation, the steps are as follows (see the following figure):&lt;Click&gt;A user initiates a file copy or move in Windows Explorer, a command-line interface, or a virtual machine migration.&lt;Click&gt;Windows Server automatically translates this transfer request into an ODX (if supported by the storage array) and receives a token representation of the data.&lt;Click&gt;The token is copied between the source and destination systems.&lt;Click&gt;The token is delivered to the storage array.&lt;Click&gt;The storage array performs the copy internally and returns progress status.ODX is especially significant in the cloud space when you must provision new virtual machines from virtual machine template libraries or when virtual hard disk operations are triggered and require large blocks of data to be copied, as in virtual hard disk merges, storage migration, and live migration. These copy operations are then handled by the storage device that must be able to perform offloads (such as an offload-capable iSCSI, Fibre Channel SAN, or a file server based in Windows Server 2012) and frees up the Hyper‑V host processors to carry more virtual machine workloads.As you can imagine having an ODX compliant array provides a wide range of benefits:ODX frees up the main processor to handle virtual machine workloads and lets you achieve native-like performance when your virtual machines read from and write to storage.ODX greatly reduces time to copy large amounts of data.With ODX, copy operations don’t use processor time.Virtualized workload now operates as efficiently as it would in a non-virtualized environment.
  11. Storage Spaces – Storage Spaces transforms SAS &amp; SATA disks into storage pools, from which logical disks, or storage spaces, can then be provisioned. These spaces can be given different levels of resiliency and performance, can be thinly or fully provisioned, and support advanced features such as trim provisioning. Storage Spaces enable you to deliver a new category of highly capable storage solutions to all Windows customer segments at a dramatically lower price point. At the same time, you can maximize your operations by leveraging commodity storage to supply high-performance and feature-rich storage to servers, clusters, and applications alike.Offloaded Data Transfer (ODX). With offloaded data transfer support, the Hyper-V host can concentrate on the processing needs of the application and offload any storage-related tasks to the SAN, increasing performanceBoot from USB Disk: - Finally, with it’s reduced footprint, Microsoft Hyper-V Server 2012 supports installation to USB media, providing more deployment flexibility, especially in scenarios such as diskless servers. This is specific to Hyper-V Server 2012.
  12. As shown in the table, both Windows Server 2012 Hyper-V and the free Hyper-V Server 2012 provide a significant number of advantages over both the vSphere Hypervisor and vSphere 5.1 Enterprise Plus. Customers building virtualized infrastructures today require the highest levels of availability and performance, and wish to maximize the investment in their chosen technologies to help drive their business forward. With Microsoft, the ability to utilize Device Specific Modules, also known as DSMs, produced by storage vendors, in conjunction with the Multipath I/O framework within Windows Server and Hyper-V Server, ensures that customers run their workloads on an optimized configuration from the start, as the storage vendor intended, providing the highest levels of performance and availability. This framework is built into the Windows Server platform, at no cost. Unfortunately, the vSphere Hypervisor doesn’t provide the ability to utilize these storage vendor specific optimizations, and in fact, only the Enterprise and Enterprise Plus editions of vSphere 5.1, through a feature known as ‘vStorage APIs for Multipathing’, provide this capability, meaning customers have to upgrade to higher, more costly editions in order to unlock the best performance from their storage investments.When implementing a virtualized infrastructure, customers today look to the future to understand new technology trends and innovations that are coming down the line. One of those innovations is the rapidly emerging Advanced Format Disks, that have a 4KB physical sector size. These disks bring an increase in performance, and are natively supported by Windows Server 2012 Hyper-V and Hyper-V Server 2012, but unfortunately, are not supported with the vSphere Hypervisor and vSphere 5.1, restricting future hardware upgrades.As customers introduce larger, more powerful workloads into their virtual environments, the amount of data associated with these workloads, over time, will grow. Fortunately, Windows Server 2012 Hyper-V and Hyper-V Server 2012 support the creation of virtual disks, quickly and efficiently, of up to 64 Terabytes (TB) in size, allowing huge databases, file repositories or document archives to be stored within individual disks. Whilst VMware’s proprietary file system, VMFS5, supports datastore sizes of 64TB, the Virtual Machine Disk Format (VMDK), is restricted to 2TB, meaning customers have to utilize the less flexible, less portable Raw Device Mappings (RDMs). If customers do choose to implement RDMs, 64TB is the maximum supported size, however with Microsoft, Hyper-V places no specific maximum on the size of a pass through disk. The maximum size of a physical disk attached to a virtual machine is ultimately determined by what the guest operating system supports. This ensures that the largest data-driven workloads can be virtualized on Hyper-V with ease.We mentioned earlier a capability known as 3rd Party Multipathing, and how this enables customers to optimize their Host-to-SAN integration and connectivity, maximizing their investment in both of these key elements of the virtualized infrastructure, and providing the highest levels of performance and availability for their critical workloads. Offloaded Data Transfer (ODX), a key capability of Windows Server 2012 Hyper-V, is another of those features that enables organizations to maximize their investment in their current technologies. By integrating Windows Server 2012 Hyper-V and Hyper-V Server 2012 with an ODX-capable storage array, many of the storage-related tasks that would normally use valuable CPU and network resources on the Hyper-V hosts, are offloaded to the array itself, executing much faster, increasing performance significantly, and unlocking extra resources on the hosts themselves. VMware offer a similar capability, known as vStorage APIs for Array Integration, VAAI, but unfortunately, this capability is only available in the Enterprise and Enterprise Plus editions of vSphere 5.1, meaning customers, again, have to upgrade to higher editions to achieve higher performance from their hardware investments
  13. Windows Server 2012 Hyper-V and Hyper-V Server 2012 also introduce a number of enhanced resource management capabilities that include:Dynamic Memory Improvements - These improvements dramatically increase virtual machine consolidation ratios and improve reliability for restart operations that can lead to lower costs, especially in environments, such as VDI, that have many idle or low-load virtual machines.Resource Metering - Resource Metering provides the ability to track and report the amount of data that is transferred per IP address or virtual machine to help ensure accurate chargebacks.Quality of Service - QoS provides the ability to programmatically adhere to a service level agreement (SLA) by specifying the minimum bandwidth that is available to a virtual machine or a port. It prevents latency issues by allocating maximum bandwidth use for a virtual machine or port.Data Center Bridging (DCB) - DCB takes advantage of the latest innovations and reduces the cost and difficulty to maintain separate network, management, live migration and storage traffic by using a modern, converged 10-gigabit local area network (LAN).
  14. Note: This slide is animated and has 1 clickDynamic Memory was introduced with Windows Server 2008 R2 SP1 and is used to reallocate memory between virtual machines that are running on a Hyper-V host. Improvements made within Windows Server 2012 Hyper-V includeMinimum memory setting – being able to set a minimum value for the memory assigned to a virtual machine that is lower than the startup memory settingHyper-V smart paging – which is paging that is used to enable a virtual machine to reboot while the Hyper-V host is under extreme memory pressureMemory ballooning – the technique used to reclaim unused memory from a virtual machine to be given to another virtual machine that has memory needsRuntime configuration – the ability to adjust the minimum memory setting and the maximum memory configuration setting on the fly while the virtual machine is running without requiring a reboot.Because a memory upgrade requires shutting down the virtual machine, a common challenge for administrators is upgrading the maximum amount of memory for a virtual machine as demand increases. For example, consider a virtual machine running SQL Server and configured with a maximum of 8 GB of RAM. Because of an increase in the size of the databases, the virtual machine now requires more memory. In Windows Server 2008 R2 with SP1, you must shut down the virtual machine to perform the upgrade, which requires planning for downtime and decreasing business productivity. With Windows Server 2012, you can apply that change while the virtual machine is running.[Click]As memory pressure on the virtual machine increases, an administrator can change the maximum memory value of the virtual machine, while it is running and without any downtime to the VM. Then, the Hot-Add memory process of the VM will ask for more memory and that memory is now available for the virtual machine to use.
  15. Note: This slide is animated and has 2 clicksHyper-V Smart Paging is a memory management technique that uses disk resources as additional, temporary memory when more memory is required to restart a virtual machine. This approach has both advantages and drawbacks. It provides a reliable way to keep the virtual machines running when no physical memory is available. However, it can degrade virtual machine performance because disk access speeds are much slower than memory access speeds.To minimize the performance impact of Smart Paging, Hyper-V uses itonly when all of the following occur:The virtual machine is being restarted.No physical memory is available.No memory can be reclaimed from other virtual machines that are running on the host.Hyper-V Smart Paging is not used when:A virtual machine is being started from an off state (instead of a restart).Oversubscribing memory for a running virtual machine would result.A virtual machine is failing over in Hyper-V clusters.Hyper-V continues to rely on internal guest paging when host memory is oversubscribed because it is more effective than Hyper-V Smart Paging. With internal guest paging, the paging operation inside virtual machines is performed by Windows Memory Manager. Windows Memory Manager has more information than does the Hyper-V host about memory use within the virtual machine, which means it can provide Hyper-V with better information to use when it chooses the memory to be paged. Because of this, internal guest paging incurs less overhead to the system than Hyper-V Smart Paging.In this example, we have multiple VMs running, and we are restarting the last virtual machine. Normally, that VM would be using some amount of memory between the Minimum and Maximum values. In this case, the Hyper-V host is running fairly loaded and there isn’t enough memory available to give the virtual machine all of the startup value needed to boot.[Click]When this occurs, a Hyper-V Smart Paging file is created for the VM to give it enough RAM to be able to start.[Click]After some time, the Hyper-V host will use the Dynamic Memory techniques like ballooning to pull the RAM away from this or other virtual machines to free up enough RAM to bring all of the Smart Paging contents back off of the disk.
  16. Windows Server 2012 Hyper-V and Hyper-V Server 2012 also introduce a number of enhanced resource management capabilities that include:Dynamic Memory Improvements - These improvements dramatically increase virtual machine consolidation ratios and improve reliability for restart operations that can lead to lower costs, especially in environments, such as VDI, that have many idle or low-load virtual machines.Resource Metering - Resource Metering provides the ability to track and report the amount of data that is transferred per IP address or virtual machine to help ensure accurate chargebacks.Quality of Service - QoS provides the ability to programmatically adhere to a service level agreement (SLA) by specifying the minimum bandwidth that is available to a virtual machine or a port. It prevents latency issues by allocating maximum bandwidth use for a virtual machine or port.Data Center Bridging (DCB) - DCB takes advantage of the latest innovations and reduces the cost and difficulty to maintain separate network, management, live migration and storage traffic by using a modern, converged 10-gigabit local area network (LAN).
  17. Note: This slide is animated with 2 clicksYour computing resources are limited. You need to know how different workloads draw upon these resources—even when they are virtualized. In Windows Server 2012, Hyper‑V introduces Resource Metering, a technology that helps you track historical data of the use of virtual machines. With Resource Metering, you can gain insight into the resource use of specific servers. You can use this data to perform capacity planning, to monitor consumption by different business units or customers, or to capture data needed to help redistribute the costs of running a workload. You could also use the information that this feature provides to help build a billing solution, so that customers of your hosting services can be charged appropriately for resource usage.Hyper‑V in Windows Server 2012 lets providers build a multitenant environment, in which virtual machines can be served to multiple clients in a more isolated and secure way, as shown in the figure. Because a single client may have many virtual machines, aggregation of resource usage data can be a challenging task. However, Windows Server 2012 simplifies this task by using resource pools, a feature available in Hyper‑V. Resource pools are logical containers that collect resources of the virtual machines that belong to one client, permitting single-point querying of the client’s overall resource use.Hyper‑V Resource Metering has the following features:Uses resource pools, logical containers that collect resources of the virtual machines that belong to one client and allow single-point querying of the client’s overall resource use.Works with all Hyper‑V operations. Helps ensure that movement of virtual machines between Hyper‑V hosts (such as through live, offline, or storage migration) doesn’t affect the collected data.Uses Network Metering Port ACLs to differentiate between Internet and intranet traffic, so providers can measure incoming and outgoing network traffic for a given IP address range.Resource Metering can measure the following:Average CPU use. Average CPU, in megahertz, used by a virtual machine over a period of time.Average memory use. Average physical memory, in megabytes, used by a virtual machine over a period of time.Minimum memory use. Lowest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time.Maximum memory use. Highest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time. Maximum disk allocation. Highest amount of disk space capacity, in megabytes, allocated to a virtual machine over a period of time.Incoming network traffic. Total incoming network traffic, in megabytes, for a virtual network adapter over a period of time. Outgoing network traffic. Total outgoing network traffic, in megabytes, for a virtual network adapter over a period of time.
  18. Windows Server 2012 Hyper-V and Hyper-V Server 2012 also introduce a number of enhanced resource management capabilities that include:Dynamic Memory Improvements - These improvements dramatically increase virtual machine consolidation ratios and improve reliability for restart operations that can lead to lower costs, especially in environments, such as VDI, that have many idle or low-load virtual machines.Resource Metering - Resource Metering provides the ability to track and report the amount of data that is transferred per IP address or virtual machine to help ensure accurate chargebacks.Quality of Service - QoS provides the ability to programmatically adhere to a service level agreement (SLA) by specifying the minimum bandwidth that is available to a virtual machine or a port. It prevents latency issues by allocating maximum bandwidth use for a virtual machine or port.Data Center Bridging (DCB) - DCB takes advantage of the latest innovations and reduces the cost and difficulty to maintain separate network, management, live migration and storage traffic by using a modern, converged 10-gigabit local area network (LAN).
  19. Current situationPublic cloud hosting providers and large enterprises must often run multiple application servers on servers running Hyper‑V. Hosting providers that host customers on a server running Hyper‑V must deliver performance that’s based on service level agreements (SLAs). Enterprises want to run multiple application servers on a server running Hyper‑V with the confidence that each one will perform predictably. Most hosting providers and enterprises use a dedicated network adapter and a dedicated network for a specific type of workload, such as storage or live migration, to help achieve network performance isolation on a server running Hyper‑V. This strategy works for 1-gigabit Ethernet (GbE) network adapters, but becomes impractical for those using or planning to use 10 GigE network adapters. For most deployments, one or two 10 GigE network adapters provide enough bandwidth for all the workloads on a server running Hyper‑V. However, 10-GbE network adapters and switches are considerably more expensive than their 1-GbE counterparts. To optimize the 10 GigE hardware, a server running Hyper‑V requires new capabilities to manage bandwidth.Windows Server 2008 R2In Windows Server 2008 R2, QoS supports the enforcement of maximum bandwidth. This is known as rate limiting. Consider a typical server running Hyper‑V in which the following four types of network traffic share a single 10 GigE network adapter:Traffic between virtual machines and resources on other servers.Traffic to and from storage.Traffic for live migration of virtual machines between servers running Hyper‑V.Traffic to and from a CSV (intercommunication between nodes in a cluster).If virtual machine data is rate limited to 3 gigabits per second (Gbps), the sum of the virtual machine data throughputs can’t exceed 3 Gbps at any time, even if the other network traffic types don’t use the remaining 7 Gbps of bandwidth. However, this also means the other types of traffic can reduce the actual amount of bandwidth available for virtual machine data to unacceptable levels, depending on how their maximum bandwidths are defined. Windows Server 2012 solution Windows Server 2012 introduces new a QoS bandwidth management feature, minimum bandwidth, that enables hosting providers and enterprises to provide services with predictable network performance to virtual machines on a server running Hyper‑V. Features of minimum bandwidthUnlike maximum bandwidth, which is a bandwidth cap, minimum bandwidth is a bandwidth floor. It assigns a certain amount of bandwidth to a given type of traffic. In the event of congestion, when the desired network bandwidth exceeds the available bandwidth, minimum bandwidth is designed to help ensure that each type of network traffic receives at least its assigned bandwidth. For this reason, minimum bandwidth is also known as fair sharing. This characteristic is essential to converge multiple types of network traffic on a single network adapter. If there’s no congestion—that is, when there’s sufficient bandwidth to accommodate all network traffic—each type of network traffic can exceed its quota and consume as much bandwidth as is available. This characteristic makes minimum bandwidth superior to maximum bandwidth in using available bandwidth. If the importance of workloads in virtual machines is relative, you can use relative minimum bandwidth, where you assign a weight to each virtual machine, giving the more important ones a higher weight. You determine the bandwidth fraction that you assign to a virtual machine by dividing the virtual machine’s weight by the sum of all the weights of virtual machines that are attached to the Hyper‑V Extensible Switch. The following figure illustrates relative minimum bandwidthIf you want to provide an exact bandwidth, you should use strict minimum bandwidth where you assign an exact bandwidth quota to each virtual machine that is attached to the Hyper‑V Extensible Switch.Bandwidth oversubscription: The maximum amount of bandwidth that can be assigned to virtual machines is the bandwidth of a member network adapter in the network adapter team. The figure shows an invalid, oversubscribed configuration.Two mechanismsWindows Server 2012 offers two different mechanisms to enforce minimum bandwidth:The software solution: The newly enhanced packet scheduler. The hardware solution: Network adapters that support Data Center Bridging. In both cases, network traffic needs to first be classified:The server either classifies a packet itself or gives instructions to a network adapter to classify it. The result of classification is a number of traffic flows in Windows, and a given packet can only belong to one of them. For example, a traffic flow could be a live migration connection, a file transfer between a server and a client, or a remote desktop connection. Based on how the bandwidth policies are configured, either the packet scheduler in Windows Server 2012 or the network adapter will dispatch the packets at a rate equal to or higher than the minimum bandwidth configured for the traffic flow. Each of the two mechanisms has its own advantages and disadvantages:Packet scheduler. The software solution, which is built on the new packet scheduler in Windows Server 2012, provides a fine granularity of classification. It’s the only viable choice if there are many traffic flows that require minimum bandwidth enforcement. A typical example is a server running Hyper‑V hosting many virtual machines, where each virtual machine is classified as a traffic flow. Network adapter with DCB support. The hardware solution, which depends on DCB support on the network adapter, supports far fewer traffic flows but is able to classify network traffic that doesn’t originate from the networking stack. A typical scenario involves a Converged Network Adapter that supports iSCSI offload, in which iSCSI traffic bypasses the networking stack and is framed and transmitted directly by the Converged Network Adapter. Because the packet scheduler in the networking stack doesn’t process this offloaded traffic, DCB is the only viable choice to enforce minimum bandwidth.Both mechanisms can be employed on the same server:For example, a server running Hyper‑V has two physical network adapters: one binds to a virtual switch and serves virtual machine data, and the other serves the rest of the traffic of the host server. You can enable the software-based minimum bandwidth in Hyper‑V to help ensure bandwidth fair sharing among virtual machines and enable the hardware-based minimum bandwidth on the second network adapter to help ensure bandwidth fair sharing among various types of network traffic from the host server.Microsoft doesn’t recommend that you enable both mechanisms at the same time for a given type of network traffic:Using the previous example, live migration and storage traffic are configured to use the second network adapter on the server running Hyper‑V. If you’ve already configured the network adapter to allocate bandwidth for live migration and storage traffic, you shouldn’t also configure the packet scheduler in Windows Server 2012 to do the same, and vice versa. Enabling both mechanisms at the same time for the same types of network traffic compromises the intended results.
  20. Windows Server 2012 Hyper-V and Hyper-V Server 2012 also introduce a number of enhanced resource management capabilities that include:Dynamic Memory Improvements - These improvements dramatically increase virtual machine consolidation ratios and improve reliability for restart operations that can lead to lower costs, especially in environments, such as VDI, that have many idle or low-load virtual machines.Resource Metering - Resource Metering provides the ability to track and report the amount of data that is transferred per IP address or virtual machine to help ensure accurate chargebacks.Quality of Service - QoS provides the ability to programmatically adhere to a service level agreement (SLA) by specifying the minimum bandwidth that is available to a virtual machine or a port. It prevents latency issues by allocating maximum bandwidth use for a virtual machine or port.Data Center Bridging (DCB) - DCB takes advantage of the latest innovations and reduces the cost and difficulty to maintain separate network, management, live migration and storage traffic by using a modern, converged 10-gigabit local area network (LAN).
  21. But how does VMware compare? Well, as shown in the table, when it comes to memory management, Windows Server 2012 Hyper-V and Hyper-V Server 2012, along with VMware vSphere Hypervisor and vSphere 5.1, all provide techniques to better utilize virtual machine memory, increase density and maximize return on investment, however Microsoft’s approach to memory management is different to that of VMware. VMware claim, that through their 4 memory management techniques; Memory Ballooning, Transparent Page Sharing, Compression and Swapping, they can provide a virtual machine density greater than that of Hyper-V, yet in reality, this is false. All 4 of these memory management techniques only operate when the host is under memory pressure, heavily laden, as a reactive measure. With technologies such as Transparent Page Sharing (TPS), with the majority of hardware platforms now supporting higher performance 2MB Large Page Tables by default (LPT), TPS is unable to deduplicate memory pages as easily as it would, prior to LPT, thus the capability becomes significantly less useful. Under memory pressure, the ESXi host will break down large memory pages into smaller, 4KB pages, which it can then deduplicate, freeing up memory, but unfortunately, this process doesn’t occur without a cost to already limited, host performance. With compression and swapping, whilst both help to keep virtual machines operable, it’s too little too late, with performance of key workloads at this point, becoming severely degraded.With Dynamic Memory, Hyper-V works intuitively with the guest operating system, delivering, and reclaiming memory from the virtual machine in a way that is optimal for the guest operating system, ensuring resources are provided appropriately, and a consistent level of performance is achieved for key workloads, ultimately providing the highest levels of density, and the greatest return on investment.As we move towards more cloud-oriented infrastructures, especially in multi-tenanted environments, hosting providers and enterprises must be able to measure the amount of data center resources (compute, network, and storage) that are consumed by each workload. These can be used to charge external customers (known as chargeback), or for internal accounting (known as showback) for cross-departmental budget management scenarios of an enterprise.  Resource Metering, which is a standard feature ofWindows Server 2012 Hyper-V and Hyper-V Server 2012, when combined with new performance counters, exposes a wealth of information from which chargeback and showback models can be built.  While the vSphere Hypervisor, and vSphere 5.1 both enable the capturing of information within vCenter, organizations must purchase, at additional cost to vSphere 5.1, vCenter Chargeback Manager in order utilize the information in a meaningful manner.Whilst chargeback and showback are two important elements for a private cloud, ensuring service levels are met is equally important, whether the primary business is that of a hosting provider, serving external customers, or an enterprise organization, serving internal business units with chargeable resources.  Either way, ensuring the highest levels of performance is imperative, and with Windows Server 2012 Hyper-V and Hyper-V Server 2012, Quality of Service (QoS) is a standard feature, enabling organizations to ensure that Service Level Agreements (SLAs) for key workloads are met, and at the same time, intensive virtual machines don’t consume more than their allocated allowance.  With VMware however, QoS is only available in the Enterprise Plus edition of vSphere 5.1, so for those customers who wish to implement stringent SLAs, customers must upgrade, at additional cost, to VMware’s highest edition.