Storage Virtualization Seminar
Stephen Foskett, Director of Data Practice, Contoural
Part 1: Breaking the Connections
Storage virtualization is here, breaking the connection between the physical storage infrastructure and the logical way we use it
Agenda
What is storage virtualization?
Volume management
Advanced file systems
Virtualizing the SAN
Virtual NAS
Poll: Who Is Already Using Storage Virtualization?
…but your storage is already virtualized!
Disk drives map blocks
RAID is as old as storage (conceived 1978-1988)
Modern OSes include volume management and path management
Network-attached storage (NAS) redirectors and DFS
Storage arrays are highly virtualized (clustering, LUN carving, relocation, tiering, etc…)
According to ESG, 52% have already implemented storage virtualization and 48% plan to! (ESG 2008)
What and Why?
Virtualization removes the hard connection between storage hardware and users
Address space is mapped to logical rather than physical locations
The virtualizing service consistently maintains this metadata
I/O can be redirected to a new physical location
We gain by virtualizing: efficiency, flexibility, and scalability; stability, availability, and recoverability
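To make the mapping concrete, here is a minimal Python sketch (class and method names such as VirtualVolume and migrate_extent are invented for illustration, not any vendor's API) of a table that maps logical block addresses to physical locations and lets I/O be redirected transparently:

    # Minimal sketch: a virtualization layer maps logical block addresses (LBAs)
    # to (physical device, physical block) pairs. Rewriting the map redirects I/O
    # to a new location without changing the address the host uses.

    class VirtualVolume:
        def __init__(self, extent_blocks=1024):
            self.extent_blocks = extent_blocks      # blocks per mapped extent
            self.map = {}                           # extent number -> (device, physical extent)

        def provision(self, extent, device, physical_extent):
            self.map[extent] = (device, physical_extent)

        def resolve(self, lba):
            extent, offset = divmod(lba, self.extent_blocks)
            device, physical_extent = self.map[extent]     # the metadata lookup
            return device, physical_extent * self.extent_blocks + offset

        def migrate_extent(self, extent, new_device, new_physical_extent):
            # The data copy would happen here; one metadata update then
            # redirects all future I/O to the new location.
            self.map[extent] = (new_device, new_physical_extent)

    vol = VirtualVolume()
    vol.provision(0, "array_A", 17)
    print(vol.resolve(100))             # ('array_A', 17508)
    vol.migrate_extent(0, "array_B", 3)
    print(vol.resolve(100))             # ('array_B', 3172) -- same logical address, new location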
The Non-Revolution: Storage Virtualization Software
Virtualization exists for both block and file storage networks
Can be located in server-based software, on network-based appliances, in SAN switches, or integrated with the storage array
Introducing Volume Management
Volume managers abstract block storage (LUNs, disks, partitions) into virtual “volumes” (see the sketch after this list)
Very common – all* modern OSes have volume managers built in
Windows Logical Disk Manager, Linux LVM/EVMS, AIX LVM, HP-UX LVM, Solaris Solstice, Veritas Volume Manager
Mostly used for flexibility
Resize volumes
Protect data (RAID)
Add capacity (concatenate or expand stripe or RAID)
Mirror, snapshot, replicate
Migrate data
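As a rough illustration of what a volume manager does (a hypothetical sketch, not the on-disk format or API of any real LVM), the fragment below concatenates LUNs into one virtual volume and translates a volume block number into a (disk, offset) pair; growing the volume is just appending another LUN:

    # Sketch of volume concatenation: several block devices are glued into one
    # logical volume whose size is the sum of its members.

    class ConcatVolume:
        def __init__(self):
            self.members = []                 # list of (device_name, size_in_blocks)

        def add_lun(self, device, size_blocks):
            self.members.append((device, size_blocks))   # online capacity expansion

        @property
        def size(self):
            return sum(size for _, size in self.members)

        def resolve(self, block):
            # Walk the members until the requested block falls inside one of them.
            for device, size in self.members:
                if block < size:
                    return device, block
                block -= size
            raise ValueError("block beyond end of volume")

    vol = ConcatVolume()
    vol.add_lun("lun0", 1000)
    vol.add_lun("lun1", 2000)
    print(vol.size)             # 3000 blocks
    print(vol.resolve(1500))    # ('lun1', 500)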
ZFS: Super File System!
Open source (CDDL) project managed by Sun
Will probably replace UFS (Sun), HFS+ (Apple OS X Snow Leopard Server)
ZFS creates a truly flexible, extensible, and full-featured pool of storage across systems and disks
Filesystems contained in “zpools” on “vdevs” with striping and optional RAID-Z/Z2
128-bit addresses mean near-infinite capacity (in theory)
Blocks are “copy-on-write” with checksums for snapshots, clones, authentication (see the sketch after this list)
…but there are some limitations
Adding (and especially removing) vdevs is hard/impossible
Stacked RAID is impossible
There is no clustering (until Sun adds Lustre)
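The copy-on-write idea behind ZFS snapshots can be sketched in a few lines (purely illustrative – real ZFS uses trees of checksummed block pointers, not a Python dict): writes never overwrite live blocks, so a snapshot is just a frozen copy of the pointer map, and checksums verify every read:

    import hashlib

    # Toy copy-on-write store: the "filesystem" is a map from block number to a
    # (location, checksum) pointer. Writes allocate new locations; snapshots copy
    # only the pointer map, so unchanged blocks are shared.

    class CowStore:
        def __init__(self):
            self.blocks = {}        # location -> bytes (simulated disk)
            self.pointers = {}      # block number -> (location, checksum)
            self.next_loc = 0

        def write(self, blockno, data):
            loc, self.next_loc = self.next_loc, self.next_loc + 1   # never overwrite in place
            self.blocks[loc] = data
            self.pointers[blockno] = (loc, hashlib.sha256(data).hexdigest())

        def read(self, blockno, pointers=None):
            loc, checksum = (pointers or self.pointers)[blockno]
            data = self.blocks[loc]
            assert hashlib.sha256(data).hexdigest() == checksum, "checksum mismatch"
            return data

        def snapshot(self):
            return dict(self.pointers)      # metadata-only copy, no data moved

    fs = CowStore()
    fs.write(0, b"version 1")
    snap = fs.snapshot()
    fs.write(0, b"version 2")               # the old block stays for the snapshot
    print(fs.read(0))                       # b'version 2'
    print(fs.read(0, pointers=snap))        # b'version 1'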
Virtualizing the SAN
Can require less reconfiguration and server work
Works with all servers and storage (potentially)
Resides on appliance or switch placed in the storage network
Some are in the data path (“in-band”), others are less so (“out-of-band”) – see the sketch after this list
Brocade and Cisco switches have application blades
Some use dedicated storage services modules (SSMs)
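A rough sketch of the in-band versus out-of-band distinction (hypothetical classes, not any product's architecture): an in-band appliance sits in the data path and forwards every I/O itself, while an out-of-band or split-path design only answers mapping lookups and lets the host move the data directly:

    # Sketch of the two SAN virtualization styles: the mapping table is the same;
    # what differs is whether the appliance sits in the data path.

    MAPPING = {"vol1": ("array_A", 7)}          # virtual volume -> (array, backing LUN)

    class FakeArray:
        def read(self, lun, lba):
            return f"data from lun {lun}, block {lba}"

    class InBandAppliance:
        """In the data path: every read and write flows through the appliance."""
        def __init__(self, backends):
            self.backends = backends
        def read(self, volume, lba):
            array, lun = MAPPING[volume]
            return self.backends[array].read(lun, lba)   # appliance moves the data

    class OutOfBandController:
        """Out of the data path: hands out mappings, hosts talk to arrays directly."""
        def locate(self, volume):
            return MAPPING[volume]                       # metadata only

    backends = {"array_A": FakeArray()}
    print(InBandAppliance(backends).read("vol1", 42))    # data flows through the appliance

    array, lun = OutOfBandController().locate("vol1")    # split path: lookup first...
    print(backends[array].read(lun, 42))                 # ...then direct I/O from the host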
SAN Virtualization Products
Virtual NAS
IP network connectivity and host processing possibilities
Multitude of file servers? Virtualize!
Global namespace across all NAS and servers (see the sketch after this list)
Share excess capacity
Transparently migrate data (easier than redirecting users!)
Tier files on large “shares” with variety of data
Create multiple virtual file servers
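A global namespace is essentially a referral table in front of the file servers. The sketch below uses invented names (GlobalNamespace, publish, migrate) rather than any real DFS or NAS-virtualization API, but shows how one entry update can transparently redirect clients after a migration:

    # Sketch: one virtual namespace, many physical file servers.

    class GlobalNamespace:
        def __init__(self):
            self.referrals = {}      # virtual prefix -> (server, export path)

        def publish(self, virtual_prefix, server, export):
            self.referrals[virtual_prefix] = (server, export)

        def resolve(self, path):
            # Longest-prefix match, like a DFS referral lookup.
            for prefix in sorted(self.referrals, key=len, reverse=True):
                if path.startswith(prefix):
                    server, export = self.referrals[prefix]
                    return server, path.replace(prefix, export, 1)
            raise FileNotFoundError(path)

        def migrate(self, virtual_prefix, new_server, new_export):
            # After the data is copied, one update redirects every client transparently.
            self.referrals[virtual_prefix] = (new_server, new_export)

    ns = GlobalNamespace()
    ns.publish("/corp/eng", "filer1", "/vol/eng")
    print(ns.resolve("/corp/eng/specs/design.doc"))   # ('filer1', '/vol/eng/specs/design.doc')
    ns.migrate("/corp/eng", "filer2", "/vol/eng_new")
    print(ns.resolve("/corp/eng/specs/design.doc"))   # ('filer2', '/vol/eng_new/specs/design.doc')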
Transformed Storage Systems
Arrays create large RAID sets and “carve out” virtual LUNs for use by servers (see the LUN-carving sketch after this list)
Controller clusters (and grids) redirect activity based on workload and availability
Snapshots/mirrors and replication are common features
A new generation of arrays with virtualization features is appearing, with tiered storage, thin provisioning, migration, and de-duplication
Sub-disk RAID = the end of RAID as we know it?
Most arrays support multiple drive types
“Bulk” SATA or SAS drives are common (500 GB - 1 TB)
Solid-state drives are the latest innovation

Some arrays can dynamically load balance
A few can “hide” other arrays behind them
SAN: HDS USP-V and similar from Sun, HP
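Carving virtual LUNs out of a large RAID set (as described a few bullets above) is, at heart, bookkeeping over a pool of protected capacity. A toy allocator (illustrative only, not how any array firmware actually works) might look like this:

    # Sketch: a RAID set exposes usable capacity; virtual LUNs are carved from it.

    class RaidPool:
        def __init__(self, disks, disk_gb, parity_disks=1):
            self.usable_gb = (disks - parity_disks) * disk_gb   # e.g. single-parity overhead
            self.allocated_gb = 0
            self.luns = {}

        def carve_lun(self, name, size_gb):
            if self.allocated_gb + size_gb > self.usable_gb:
                raise RuntimeError("pool exhausted")
            self.luns[name] = size_gb
            self.allocated_gb += size_gb
            return name

    pool = RaidPool(disks=8, disk_gb=500)        # 3.5 TB usable from 4 TB raw
    pool.carve_lun("oracle_data", 1200)
    pool.carve_lun("exchange_db", 800)
    print(pool.usable_gb - pool.allocated_gb)    # 1500 GB still free for new LUNs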
Some arrays can “thinly” provision just the capacity that actually contains data (see the sketch after this list)
A 500 GB request comes in for a new project, but only 2 GB of initial data is written – the array allocates just 2 GB and expands as data is written
Oops – we provisioned a petabyte and ran out of storage
Chunk sizes and formatting conflicts
Can it thin unprovision?
Can it replicate to and from thin-provisioned volumes?
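The 500 GB/2 GB example above can be modeled in a few lines (a sketch with invented names, not an array's real allocation logic): the volume advertises its full logical size but consumes physical chunks only as they are first written:

    # Sketch of thin provisioning: capacity is allocated in chunks on first write.

    CHUNK_MB = 256

    class ThinVolume:
        def __init__(self, logical_gb):
            self.logical_gb = logical_gb      # what the host sees
            self.chunks = set()               # chunk indexes that are actually backed

        def write(self, offset_mb, length_mb):
            first, last = offset_mb // CHUNK_MB, (offset_mb + length_mb - 1) // CHUNK_MB
            self.chunks.update(range(first, last + 1))    # allocate only touched chunks

        @property
        def allocated_gb(self):
            return len(self.chunks) * CHUNK_MB / 1024

    vol = ThinVolume(logical_gb=500)          # the 500 GB the project asked for
    vol.write(0, 2048)                        # ...but only 2 GB of data is written
    print(vol.logical_gb, vol.allocated_gb)   # 500 2.0 -- the array backs just 2 GB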
De-duplication is more appropriate to some applications than others
Software or appliance (and now array!) analyzes files or blocks, saving duplicates just once (see the sketch after this list)
Block-based approaches reduce capacity needs more by looking inside files
Once common only for archives, now available for production data
Serious implications for performance and capacity utilization
In-line devices process all data before it is written
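Block-level de-duplication boils down to indexing blocks by a strong hash and storing each unique block once. A compact sketch follows (illustrative; production systems add collision handling, compression, and reference counting so blocks can be freed):

    import hashlib

    # Sketch of in-line block de-duplication: identical blocks are stored once and
    # every logical block keeps only a reference (the hash) to the stored copy.

    class DedupStore:
        def __init__(self):
            self.blocks = {}      # sha256 digest -> block data (stored once)
            self.refs = {}        # (volume, block number) -> digest

        def write(self, volume, blockno, data):
            digest = hashlib.sha256(data).hexdigest()
            self.blocks.setdefault(digest, data)          # store new data only if unseen
            self.refs[(volume, blockno)] = digest

        def read(self, volume, blockno):
            return self.blocks[self.refs[(volume, blockno)]]

    store = DedupStore()
    golden_image = b"\x00" * 4096                 # e.g. identical blocks in cloned VMDKs
    for vm in range(100):
        store.write(f"vm{vm}", 0, golden_image)
    print(len(store.blocks))                      # 1 -- a hundred writes, one stored block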
The Next-Generation Data Center
Virtualization of servers and storage will transform the data center
Clusters of capability host virtual servers
Cradle-to-grave integrated management
SAN/network convergence is next
InfiniBand offers converged virtual connectivity today
iSCSI and FCoE become Data Center Ethernet (DCE) with converged network adapters (CNAs)
Part 2: Storage in the Virtual World
Responding to the demands of server, application, and business users with new, flexible technologies
Agenda
Why virtual storage for virtual servers?
The real-world impact and benefits
Best practices for implementation
Poll: Who Is Using VMware?
Poll: Does Server Virtualization Improve Storage Utilization?
Why Use Virtual Storage for Virtual Servers?
Mobility of virtual machines between physical servers for load balancing
Improved disaster recovery
Higher availability
Enabling physical server upgrades
Operational recovery of virtual machine images
VMware is the #1 driver of SAN adoption today!
60% of virtual server storage is on SAN or NAS (ESG 2008)
86% have implemented some server virtualization (ESG 2008)
Patchwork of support, few standards
“VMware mode” on storage arrays
Virtual HBA/N_Port ID Virtualization (NPIV)
Everyone is qualifying everyone and jockeying for position
Can be “detrimental” to storage utilization
VMware Storage Options: Shared Storage on NFS
Shared storage on NFS – skip VMFS and use NAS; NFS is the datastore
Wow!
Simple – no SAN
Multiple queues
Flexible (on-the-fly changes)
Simple snap and replicate*
Enables full VMotion
Use fixed LACP for trunking
But…
Less familiar (3.0+)
CPU load questions
Default limited to 8 NFS datastores
Will multi-VMDK snaps be consistent?
(Diagram: VM host and guest OS with a VMDK on NFS storage)
VMware Storage Options: Raw Device Mapping (RDM)
Raw device mapping (RDM) – guest VMs access storage directly over iSCSI or FC
VMs can even boot from raw devices
Hyper-V pass-through LUN is similar
Great!
Per-server queues for performance
Easier measurement
The only method for clustering
But…
Tricky VMotion and DRS
No Storage VMotion
More management overhead
Limited to 256 LUNs per data center
(Diagram: VM host and guest OS reaching SAN storage through an RDM mapping file)
Physical vs. Virtual RDM
Virtual compatibility mode:
Appears the same as a VMDK on VMFS
Retains file locking for clustering
Allows VM snapshots, clones, VMotion
Retains the same characteristics if storage is moved
Physical compatibility mode:
Appears as a LUN on a “hard” host
Allows V-to-P clustering
No VM snapshots, VCB, or VMotion
All characteristics and SCSI commands (except “Report LUN”) are passed through – required for some SAN management software
Poll: Which VMware Storage Method Performs Best?
(Charts: mixed random I/O and CPU cost per I/O for VMFS, RDM (physical), and RDM (virtual))
Source: “Performance Characterization of VMFS and RDM Using a SAN,” VMware Inc., 2008
Which Storage Protocol Is for You?
FC, iSCSI, and NFS all work well
Most production VM data is on FC
Either/or? – 50% use a combination (ESG 2008)
Leverage what you have and are familiar with
For IP storage:
Use TOE cards/iSCSI HBAs
Use a separate network or VLAN
Is your switch backplane fast?
No VM cluster support with iSCSI*
For FC storage:
4 Gb FC is awesome for VMs
Get NPIV (if you can)
Poll: Which Storage Protocol Performs Best?
(Charts: throughput by I/O size and CPU cost per I/O for Fibre Channel, NFS, iSCSI (software), and iSCSI (TOE))
Source: “Comparison of Storage Protocol Performance,” VMware Inc., 2008
Storage Configuration Best Practices
Separate operating system and application data
OS volumes (C: or /) on a different VMFS or LUN from applications (D:, etc.)
Heavy apps get their own VMFS or raw LUN(s)
Optimize storage by application
Consider different tiers or RAID levels for OS, data, and transaction logs – automated tiering can help
No more than one VMFS per LUN
Fewer than 16 production ESX VMDKs per VMFS
Get thin
Deduplication can have a huge impact on VMDKs created from a template!
Thin provisioning can be very useful – thin disk is in Server, not ESX!?!
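These guidelines are easy to turn into an automated check; the sketch below assumes a hypothetical inventory dictionary (not a vSphere API) and flags datastores that break the one-VMFS-per-LUN and sub-16-VMDK rules:

    # Sketch: sanity-check a VMFS layout against the best practices above.

    MAX_VMDKS_PER_VMFS = 16        # "fewer than 16 production ESX VMDKs per VMFS"

    def check_layout(datastores):
        """datastores: name -> {"luns": [...], "vmdks": [...]} (hypothetical inventory)."""
        findings = []
        for name, ds in datastores.items():
            if len(ds["luns"]) > 1:
                findings.append(f"{name}: VMFS spans {len(ds['luns'])} LUNs (keep one VMFS per LUN)")
            if len(ds["vmdks"]) >= MAX_VMDKS_PER_VMFS:
                findings.append(f"{name}: {len(ds['vmdks'])} VMDKs (keep production VMFS under {MAX_VMDKS_PER_VMFS})")
        return findings

    inventory = {
        "vmfs_prod1": {"luns": ["naa.600a0b80"], "vmdks": [f"vm{i}.vmdk" for i in range(20)]},
        "vmfs_prod2": {"luns": ["naa.600a0b81", "naa.600a0b82"], "vmdks": ["db.vmdk"]},
    }
    for finding in check_layout(inventory):
        print(finding)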
Why NPIV Matters
N_Port ID Virtualization (NPIV) gives each server a unique WWN
Easier to move and clone* virtual servers
Better handling of fabric login
Virtual servers can have their own LUNs, QoS, and zoning – just like a real server!
When looking at NPIV, consider:
How many virtual WWNs does it support? The T11 spec says “up to 256”
OS, virtualization software, HBA, FC switch, and array support and licensing
Can’t upgrade some old hardware for NPIV, especially HBAs
Without NPIV: virtual servers share the physical port’s WWN (e.g. 21:00:00:e0:8b:05:05:04)
With NPIV: each virtual server presents its own WWN (…05:05:05, …05:05:06, …05:05:07)
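The WWN example on this slide (…05:05:05, …06, …07 derived from a physical port at 21:00:00:e0:8b:05:05:04) can be mimicked with a short helper – purely illustrative, since real NPIV identities are negotiated between the HBA and the fabric, not generated like this:

    # Sketch: derive per-VM virtual WWPNs from a physical port WWN, as NPIV lets a
    # single N_Port present multiple fabric identities.

    def next_wwns(physical_wwn, count):
        value = int(physical_wwn.replace(":", ""), 16)
        for i in range(1, count + 1):
            candidate = value + i
            yield ":".join(f"{(candidate >> shift) & 0xff:02x}"
                           for shift in range(56, -8, -8))

    physical = "21:00:00:e0:8b:05:05:04"
    vms = ["web01", "db01", "mail01"]
    for vm, wwn in zip(vms, next_wwns(physical, len(vms))):
        print(vm, wwn)     # web01 21:00:00:e0:8b:05:05:05, db01 ...:06, mail01 ...:07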
Virtualization-Enabled Disaster Recovery
DR is a prime beneficiary of server and storage virtualization
Fewer remote machines idling
No need for identical equipment
Quicker recovery (RTO) through preparation and automation
Who’s doing it?
26% are replicating server images, and an additional 39% plan to (ESG 2008)
Half have never used replication before (ESG 2008)
News: VMware Site Recovery Manager (SRM) integrates storage replication with DR
Enhancing Virtual Servers with Storage Virtualization
Mobility of server and storage images enhances load balancing, availability, and maintenance
SAN and NAS arrays can snap and replicate server images
VMotion moves the server; Storage VMotion (new in 3.5) moves the storage between shared storage locations
Virtualization-optimized storage
Pillar and HDS claim to tweak allocation per VM
Many vendors are announcing compatibility with VMware SRM
Most new arrays are NPIV-capable
Virtual storage appliances
LeftHand VSA – a virtual virtualized storage array
FalconStor CDP – a virtual CDP system
Enabling Virtual Backup
Virtual servers cause havoc for traditional client/server backups
I/O crunch as schedules kick off – load is consolidated instead of balanced
Difficult to manage and administer (or even comprehend!)
Storage virtualization can help
Add disk to handle the load (VTL)
Switch to alternative mechanisms (snapshots, CDP)
Consider VMware Consolidated Backup (VCB)
Snapshot-based backup of shared VMware storage
Block-based backup of all VMDKs on a physical server
Part 3: Should You Virtualize?
A look at the practical benefits of virtualized storage
Stability, availability, and recoverability
The right amount of storage for the application
The right type (tiered storage)
Quickly add and remove on demand
Move storage from one device to another
Performance
A battle royale between in-band and out-of-band!
In-band virtualization can improve performance with caching
Out-of-band stays out of the way, relying on caching at the device level
Split-path adds scalability to in-band
Large arrays perform better (usually) than lots of tiny RAIDs or disks
First rule of performance: spindles
Second rule of performance: cache
Third rule of performance: I/O bottlenecks
Solid State Drives (and Myths)
The new (old) buzz
RAM vs. NAND flash vs. disk
EMC added flash drives to the DMX (CX?) as “tier-0”; CEO Joe Tucci claims flash will displace high-end disk after 2010
Sun, HP adding flash to the server as a cache
Gear6 caches NAS with RAM
But…
Are they reliable?
Do they really perform that well?
Will you be able to use them?
Is the 10x-30x cost justified?
Do they really save power?
Notes: 1 – no one writes this fast 24x7; 2 – manufacturers claim 2x to 10x better endurance
Stability, Availability, and Recoverability
Replication creates copies of storage in other locations
Local replicas (mirrors and snapshots) are usually frequent and focused on restoring data in daily use
Remote replicas are used to recover from disasters
Virtualization can ease replication
Single point of configuration and monitoring
Can support different hardware at each location
We Love It!
Efficiency, scalability, performance, availability, recoverability, etc.
Without virtualization, none of this can happen!
  • 133.
  • 134. Downtime and performance affect more systems
  • 135. Harder to back out if unsatisfied
  • 136. Additional complexity and interoperability concerns
  • 137.
Cost Benefit Analysis
Benefits:
Improved utilization
Tiering lowers per-GB cost
Reduced need for proprietary technologies
Potential reduction of administrative/staffing costs
Flexibility boosts IT response time
Performance boosts operational efficiency
Costs:
Additional hardware and software cost
Added complexity and vendors
Training and daily management
Reporting and incomprehensibility
Possible negative performance impact
Stability and reliability concerns
Where Will You Virtualize?
Closing Thought: What Is Virtualization Good For?
Virtualization is a technology, not a product
What will you get from using it?
Better DR?
Improved service levels and availability?
Better performance?
Shortened provisioning time?
The cost must be justified based on business benefit, not cool technology
Audience Response / Questions?
Stephen Foskett, Contoural, Inc.
sfoskett@contoural.com
http://blog.fosketts.net

Editor's Notes

1. Taneja, “Next-Generation FC Arrays”: clustered controller design; sub-disk virtualization; self-configuring and self-tuning storage; automated storage tiering; thin technologies
2. Up to 256 FC or iSCSI LUNs; ESX multipathing: load balancing, failover, failover between FC and iSCSI*; beware of block sizes greater than 256 KB! If you want virtual disks greater than 256 GB, you must use a VMFS block size larger than 1 MB; align your virtual disk starting offset to your array (by booting the VM and using diskpart, Windows PE, or UNIX fdisk)*
3. Link Aggregation Control Protocol (LACP) for trunking/EtherChannel – use “fixed” path policy, not LRU; up to 8 (or 32) NFS mount points; turn off access time updates; thin provisioning? Turn on AutoSize and watch out
  4. www.netapp.com/library/tr/3428.pdf