Storage for Virtual Environments 2011 R2

The Storage for Virtual Environments seminar focuses on the challenges of backup and recovery in a virtual infrastructure, the various solutions that users are now using to solve those challenges, and a roadmap for making the most of all an organization’s virtualization initiatives.

This slide deck was used by Stephen Foskett for his Storage for Virtual Environments seminar.

Slide notes
  • Mirror Mode paper: http://www.usenix.org/events/atc11/tech/final_files/Mashtizadeh.pdf and http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-2-storage-vmotion.html
  • http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-1-vmfs-5.html
  • Up to 256 FC or iSCSI LUNs. ESX multipathing: load balancing, failover, and failover between FC and iSCSI.* Beware of block sizes greater than 256 KB! If you want virtual disks greater than 256 GB, you must use a VMFS block size larger than 1 MB. Align your virtual disk starting offset to your array (by booting the VM and using diskpart, Windows PE, or UNIX fdisk).*
  • Link Aggregation Control Protocol (LACP) for trunking/EtherChannel. Use the “fixed” path policy, not LRU. Up to 8 (or 32) NFS mount points. Turn off access time updates. Thin provisioning? Turn on AutoSize and watch out.
  • http://www.techrepublic.com/blog/datacenter/stretch-your-storage-dollars-with-vsphere-thin-provisioning/2655 and http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf
  • http://virtualgeek.typepad.com/virtual_geek/2011/07/vstorage-apis-for-array-integration-vaai-vsphere-5-edition.html and http://blogs.vmware.com/vsphere/2011/07/new-enhanced-vsphere-50-storage-features-part-3-vaai.html
  • http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_SIOC.pdf – Recommended latency thresholds: FC 20–30 ms, SAS 20–30 ms, SATA 30–50 ms, SSD 15–20 ms. See also http://www.yellow-bricks.com/2010/10/19/storage-io-control-best-practices/
  • http://www.slideshare.net/esloof/vsphere-5-whats-new-storage-drs and http://blogs.vmware.com/vsphere/2011/07/vsphere-50-storage-features-part-5-storage-drs-initial-placement.html
  • http://www.ntpro.nl/blog/archives/1804-vSphere-5-Whats-New-Storage-Appliance-VSA.html
  • http://jpaul.me/?p=2072
  • Transcript

    • 1. Storage for Virtual Environments
      Stephen Foskett
      Foskett Services and Gestalt IT
      Live Footnotes:
      • @Sfoskett
      • #VirtualStorage
    • 2. This is Not a Rah-Rah Session
    • 3. Agenda
    • 4. Introducing the Virtual Data Center
    • 5. This Hour’s Focus: What Virtualization Does
      Introducing storage and server virtualization
      The future of virtualization
      The virtual datacenter
      Virtualization confounds storage
      Three pillars of performance
      Other issues
      Storage features for virtualization
      What’s new in VMware
    • 6. Virtualization of Storage, Server and Network
      Storage has been stuck in the Stone Age since the Stone Age!
      Fake disks, fake file systems, fixed allocation
      Little integration and no communication
      Virtualization is a bridge to the future
      Maintains functionality for existing apps
      Improves flexibility and efficiency
    • 7. A Look at the Future
    • 8. Server Virtualization is On the Rise
      Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
    • 9. Server Virtualization is a Pile of Lies!
      What the OS thinks it’s running on…
      What the OS is actually running on…
      Physical Hardware
      VMkernel
      Binary Translation, Paravirtualization, Hardware Assist
      Guest OS
      VM
      Guest OS
      VM
      Scheduler and Memory Allocator
      vNIC
      vSwitch
      NIC Driver
      vSCSI/PV
      VMDK
      VMFS
      I/O Driver
    • 10. And It Gets Worse Outside the Server!
    • 11. The Virtual Data Center of Tomorrow
      Management
      Applications
      The Cloud™
      Applications
      Legacy
      Applications
      Applications
      Applications
      CPU
      Network
      Backup
      Storage
    • 12. The Real Future of IT Infrastructure
      Orchestration Software
    • 13. Three Pillars of VM Performance
    • 14. Confounding Storage Presentation
      Storage virtualization is nothing new…
      RAID and NAS virtualized disks
      Caching arrays and SANs masked volumes
      New tricks: Thin provisioning, automated tiering, array virtualization
      But, we wrongly assume this is where it ends
      Volume managers and file systems
      Databases
      Now we have hypervisors virtualizing storage
      VMFS/VMDK = storage array?
      Virtual storage appliances (VSAs)
    • 15. Begging for Converged I/O
      4G FC Storage
      1 GbE Network
      1 GbE Cluster
      How many I/O ports and cables does a server need?
      Typical server has 4 ports, 2 used
      Application servers have 4-8 ports used!
      Do FC and InfiniBand make sense with 10/40/100 GbE?
      When does commoditization hit I/O?
      Ethernet momentum is unbeatable
      Blades and hypervisors demand greater I/O integration and flexibility
      Other side of the coin – need to virtualize I/O
    • 16. Driving Storage Virtualization
      Server virtualization demands storage features
      Data protection with snapshots and replication
      Allocation efficiency with thin provisioning+
      Performance and cost tweaking with automated sub-LUN tiering
      Improved locking and resource sharing
      Flexibility is the big one
      Must be able to create, use, modify and destroy storage on demand
      Must move storage logically and physically
      Must allow OS to move too
    • 17. “The I/O Blender” Demands New Architectures
      Shared storage is challenging to implement
      Storage arrays “guess” what’s coming next based on allocation (LUN), taking advantage of sequential performance
      Server virtualization throws I/O into a blender – All I/O is now random I/O!
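      The blender effect is easy to model with a short sketch (a conceptual illustration in Python, not ESX code; the block numbers are made up): each guest issues perfectly sequential block addresses, but once the hypervisor interleaves their queues, the stream the array sees is effectively random.
      # Illustrative model of the "I/O blender": sequential per-VM streams
      # interleave into a near-random stream at the shared datastore.
      import random

      def vm_stream(start_lba, count, io_size_blocks=8):
          """One guest issuing a purely sequential run of I/Os."""
          return [start_lba + i * io_size_blocks for i in range(count)]

      # Three VMs, each sequential within its own region of the datastore
      streams = [vm_stream(0, 5), vm_stream(100_000, 5), vm_stream(200_000, 5)]

      # The hypervisor services all guests concurrently, so requests interleave
      blended = [io for batch in zip(*streams) for io in batch]
      random.shuffle(blended)  # scheduling jitter makes the interleave uneven

      print("What each VM thinks it sent:", streams)
      print("What the array actually sees:", blended)
      # Adjacent requests now jump across the LUN, defeating sequential prefetch.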
    • 18. Server Virtualization Requires SAN and NAS
      Server virtualization has transformed the data center and storage requirements
      VMware is the #1 driver of SAN adoption today!
      60% of virtual server storage is on SAN or NAS
      86% have implemented some server virtualization
      Server virtualization has enabled and demanded centralization and sharing of storage on arrays like never before!
      Source: ESG, 2008
    • 19. Keys to the Future For Storage Folks
      Ye Olde Seminar Content!
    • 20. Primary Production Virtualization Platform
      Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
    • 21. Storage Features for Virtualization
    • 22. Which Features Are People Using?
      Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
    • 23. What’s New in vSphere 4 and 4.1
      VMware vSphere 4 (AKA ESX/ESXi 4) is a major upgrade for storage
      Lots of new features like thin provisioning, PSA, any-to-any Storage VMotion, PVSCSI
      Massive performance upgrade (400k IOPS!)
      vSphere 4.1 is equally huge for storage
      Boot from SAN
      vStorage APIs for Array Integration (VAAI)
      Storage I/O control (SIOC)
    • 24. What’s New in vSphere 5
      VMFS-5 – Scalability and efficiency improvements
      Storage DRS – Datastore clusters and improved load balancing
      Storage I/O Control – Cluster-wide and NFS support
      Profile-Driven Storage – Provisioning, compliance and monitoring
      FCoE Software Initiator
      iSCSI Initiator GUI
      Storage APIs – Storage Awareness (VASA)
      Storage APIs – Array Integration (VAAI 2) – Thin Stun, NFS, T10
      Storage vMotion - Enhanced with mirror mode
      vSphere Storage Appliance (VSA)
      vSphere Replication – New in SRM
    • 25. And Then, There’s VDI…
      Virtual desktop infrastructure (VDI) takes everything we just worried about and amplifies it:
      Massive I/O crunches
      Huge duplication of data
      More wasted capacity
      More user visibility
      More backup trouble
    • 26. What’s next
      Vendor Showcase and Networking Break
    • 27. Technical Considerations - Configuring Storage for VMs
      The mechanics of presenting and using storage in virtualized environments
    • 28. This Hour’s Focus: Hypervisor Storage Features
      Storage vMotion
      VMFS
      Storage presentation: Shared, raw, NFS, etc.
      Thin provisioning
      Multipathing (VMware Pluggable Storage Architecture)
      VAAI and VASA
      Storage I/O control and storage DRS
    • 29. Storage vMotion
      Introduced in ESX 3 as “Upgrade vMotion”
      ESX 3.5 used a snapshot while the datastore was in motion
      vSphere 4 used changed-block tracking (CBT) and recursive passes
      vSphere 5 Mirror Mode mirrors writes to in-progress vMotions and also supports migration of vSphere snapshots and Linked Clones
      Can be offloaded for VAAI-Block (but not NFS)
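      A minimal sketch of the Mirror Mode idea (my own toy model, not VMware’s implementation; the block contents are invented): a single bulk copy pass runs while every guest write issued during the migration lands on both source and destination, so the copies converge without change tracking or recursive passes.
      # Toy model of Mirror Mode Storage vMotion: a single copy pass plus
      # synchronous mirroring of writes that arrive mid-migration.
      def mirror_mode_migrate(source, writes_during_copy):
          dest = {}
          mirrored = set()

          def guest_write(block, data):
              source[block] = data      # writes issued during migration...
              dest[block] = data        # ...are applied to BOTH copies
              mirrored.add(block)

          for block in sorted(source):              # one sequential copy pass
              for w_block, w_data in writes_during_copy.pop(block, []):
                  guest_write(w_block, w_data)
              if block not in mirrored:             # don't clobber newer mirrored data
                  dest[block] = source[block]
          return dest

      src = {b: f"old{b}" for b in range(8)}
      writes = {2: [(5, "new5")], 4: [(1, "new1")]}   # guest writes arriving mid-copy
      dst = mirror_mode_migrate(src, writes)
      assert dst == src                               # source and destination converge
      print(dst)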
    • 30. vSphere 5: What’s New in VMFS 5
      Max VMDK size is still 2 TB – 512 bytes
      Virtual (non-passthru) RDM still limited to 2 TB
      Max LUNs per host is still 256
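      The slide notes near the top point out that on VMFS-3 the datastore block size caps the largest virtual disk you can create; VMFS-5 moves to a unified 1 MB block while keeping the roughly 2 TB ceiling. A quick sketch of the VMFS-3 relationship (the 2, 4 and 8 MB figures are from memory of VMware’s documentation, so treat them as illustrative):
      # VMFS-3 maximum file (VMDK) size scales with the datastore block size;
      # the top end is the same ~2 TB - 512 bytes ceiling that VMFS-5 keeps.
      GIB = 1024**3

      vmfs3_max_vmdk = {            # block size (MB) -> max file size (bytes)
          1: 256 * GIB,
          2: 512 * GIB,
          4: 1024 * GIB,
          8: 2048 * GIB - 512,
      }

      def min_block_size_mb(vmdk_size_bytes):
          """Smallest VMFS-3 block size that can hold a virtual disk of this size."""
          for bs, limit in sorted(vmfs3_max_vmdk.items()):
              if vmdk_size_bytes <= limit:
                  return bs
          raise ValueError("larger than 2 TB: use a passthrough RDM instead")

      print(min_block_size_mb(300 * GIB))   # -> 2; a 300 GB disk needs >1 MB blocks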
    • 31. Hypervisor Storage Options: Shared Storage
      The common/ workstation approach
      VMware: VMDK image in VMFS datastore
      Hyper-V: VHD image in CSV datastore
      Block storage (direct or FC/iSCSI SAN)
      Why?
      Traditional, familiar, common (~90%)
      Prime features (Storage VMotion, etc)
      Multipathing, load balancing, failover*
      But…
      Overhead of two storage stacks (5-8%)
      Harder to leverage storage features
      Often shares storage LUN and queue
      Difficult storage management
      VM
      Host
      Guest
      OS
      VMFS
      VMDK
      DAS or SAN
      Storage
    • 32. Hypervisor Storage Options: Shared Storage on NAS
      Skip VMFS and use NAS
      NFS or SMB is the datastore
      Wow!
      Simple – no SAN
      Multiple queues
      Flexible (on-the-fly changes)
      Simple snap and replicate*
      Enables full Vmotion
      Link aggregation (trunking) is possible
      But…
      Less familiar (ESX 3.0+)
      CPU load questions
      Limited to 8 NFS datastores (ESX default)
      Snapshot consistency for multiple VMDK
      VM
      Host
      Guest
      OS
      NAS
      Storage
      VMDK
    • 33. Hypervisor Storage Options: Guest iSCSI
      Skip VMFS and use iSCSI directly
      Access a LUN just like any physical server
      VMware ESX can even boot from iSCSI!
      Ok…
      Storage folks love it!
      Can be faster than ESX iSCSI
      Very flexible (on-the-fly changes)
      Guest can move and still access storage
      But…
      Less common to VM folks
      CPU load questions
      No Storage VMotion (but doesn’t need it)
      VM
      Host
      Guest
      OS
      iSCSI
      Storage
      LUN
    • 34. Hypervisor Storage Options: Raw Device Mapping (RDM)
      Guest VM’s access storage directly over iSCSI or FC
      VM’s can even boot from raw devices
      Hyper-V pass-through LUN is similar
      Great!
      Per-server queues for performance
      Easier measurement
      The only method for clustering
      Supports LUNs larger than 2 TB (64 TB passthru in vSphere 5!)
      But…
      Tricky VMotion and dynamic resource scheduling (DRS)
      No storage VMotion
      More management overhead
      Limited to 256 LUNs per data center
      VM
      Host
      Guest
      OS
      I/O
      Mapping File
      SAN Storage
    • 35. Hypervisor Storage Options: Direct I/O
      VMware ESX VMDirectPath - Guest VM’s access I/O hardware directly
      Leverages AMD IOMMU or Intel VT-d
      Great!
      Potential for native performance
      Just like RDM but better!
      But…
      No VMotion or Storage VMotion
      No ESX fault tolerance (FT)
      No ESX snapshots or VM suspend
      No device hot-add
      No performance benefit in the real world!
      VM
      Host
      Guest
      OS
      I/O
      Mapping File
      SAN Storage
    • 36. Which VMware Storage Method Performs Best?
      Mixed random I/O
      CPU cost per I/O
      VMFS,
      RDM (p), or RDM (v)
      Source: “Performance Characterization of VMFS and RDM Using a SAN”, VMware Inc., ESX 3.5, 2008
    • 37. vSphere 5: Policy or Profile-Driven Storage
      Allows storage tiers to be defined in vCenter based on SLA, performance, etc.
      Used during provisioning, cloning, Storage vMotion, Storage DRS
      Leverages VASA for metrics and characterization
      All HCL arrays and types (NFS, iSCSI, FC)
      Custom descriptions and tagging for tiers
      Compliance status is a simple binary report
    • 38. Native VMware Thin Provisioning
      VMware ESX 4 allocates storage in 1 MB chunks as capacity is used
      Similar support enabled for virtual disks on NFS in VI 3
      Thin provisioning existed for block, could be enabled on the command line in VI 3
      Present in VMware desktop products
      vSphere 4 fully supports and integrates thin provisioning
      Every version/license includes thin provisioning
      Allows thick-to-thin conversion during Storage VMotion
      In-array thin provisioning also supported (we’ll get to that…)
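      A minimal sketch of the allocate-on-first-write behavior described above (an illustration of the concept only, not ESX code; the class name and sizes are invented for the example): blocks of a thin virtual disk consume datastore space in 1 MB chunks only when the guest first writes to them.
      # Conceptual model of a thin-provisioned VMDK: capacity is drawn from the
      # datastore in 1 MB chunks on first write.
      CHUNK = 1024 * 1024  # 1 MB allocation unit

      class ThinDisk:
          def __init__(self, provisioned_bytes):
              self.provisioned = provisioned_bytes   # what the guest OS sees
              self.allocated_chunks = set()          # what the datastore really holds

          def write(self, offset, length):
              first = offset // CHUNK
              last = (offset + length - 1) // CHUNK
              for chunk in range(first, last + 1):
                  self.allocated_chunks.add(chunk)   # zeroed, then written, on first touch

          def allocated_bytes(self):
              return len(self.allocated_chunks) * CHUNK

      disk = ThinDisk(provisioned_bytes=100 * 1024**3)   # 100 GB promised to the guest
      disk.write(0, 10 * CHUNK)                          # guest writes 10 MB
      print(disk.provisioned, disk.allocated_bytes())    # 100 GB promised, 10 MB consumed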
    • 39. Four Types of VMware ESX Volumes
      Note: FT is not supported
      What will your array do? VAAI helps…
      Friendly to on-array thin provisioning
    • 40. Storage Allocation and Thin Provisioning
      VMware tests show no performance impact from thin provisioning after zeroing
    • 41. Pluggable Storage Architecture: Native Multipathing
      VMware ESX includes multipathing built in
      Basic native multipathing (NMP) provides fail-over and simple round-robin only – it will not intelligently balance I/O across multiple paths or make smarter decisions about which paths to use
      Pluggable Storage Architecture (PSA)
      VMware NMP
      Third-Party MPP
      VMware SATP
      Third-Party SATP
      VMware PSP
      Third-Party PSP
    • 42. Pluggable Storage Architecture: PSP and SATP
      vSphere 4’s Pluggable Storage Architecture allows third-party developers to replace parts of ESX’s storage multipathing stack
      ESX Enterprise+ Only
      There are two classes of third-party plug-ins:
      Path-selection plug-ins (PSPs) optimize the choice of which path to use, ideal for active/passive type arrays
      Storage array type plug-ins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays
      EMC PowerPath/VE for vSphere does everything
    • 43. Storage Array Type Plug-ins (SATP)
      ESX native approaches
      Active/Passive
      Active/Active
      Pseudo Active
      Storage Array Type Plug-Ins
      VMW_SATP_LOCAL – Generic local direct-attached storage
      VMW_SATP_DEFAULT_AA – Generic for active/active arrays
      VMW_SATP_DEFAULT_AP – Generic for active/passive arrays
      VMW_SATP_LSI – LSI/NetApp arrays from Dell, HDS, IBM, Oracle, SGI
      VMW_SATP_SVC – IBM SVC-based systems (SVC, V7000, Actifio)
      VMW_SATP_ALUA – Asymmetric Logical Unit Access-compliant arrays
      VMW_SATP_CX – EMC/Dell CLARiiON and Celerra (also VMW_SATP_ALUA_CX)
      VMW_SATP_SYMM – EMC Symmetrix DMX-3/DMX-4/VMAX, Invista
      VMW_SATP_INV – EMC Invista and VPLEX
      VMW_SATP_EQL – Dell EqualLogic systems
      Also, EMC PowerPath and HDS HDLM and vendor-unique plugins not detailed in the HCL
    • 44. Path Selection Plug-ins (PSP)
      VMW_PSP_MRU – Most-Recently Used (MRU) – Supports hundreds of storage arrays
      VMW_PSP_FIXED – Fixed - Supports hundreds of storage arrays
      VMW_PSP_RR – Round-Robin - Supports dozens of storage arrays
      DELL_PSP_EQL_ROUTED – Dell EqualLogic iSCSI arrays
      Also, EMC PowerPath and other vendor unique
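      The three native policies behave differently enough that a small sketch helps (a simplification of the documented behavior, not VMware source; the path names follow the vmhbaX:C:T:L convention but are otherwise invented): FIXED always returns to a preferred path, MRU stays wherever it last failed over to, and Round-Robin rotates I/O across all live paths.
      # Simplified behavior of the three native path selection plug-ins.
      import itertools

      def psp_fixed(paths, preferred):
          """FIXED: use the preferred path whenever it is up, else any live path."""
          live = [p for p, up in paths.items() if up]
          return preferred if paths.get(preferred) else (live[0] if live else None)

      class PspMostRecentlyUsed:
          """MRU: stay on the current path until it fails; do not fail back."""
          def __init__(self, first_path):
              self.current = first_path
          def select(self, paths):
              if not paths.get(self.current):
                  live = [p for p, up in paths.items() if up]
                  self.current = live[0] if live else None
              return self.current

      def psp_round_robin(paths):
          """RR: rotate each I/O across all live paths."""
          return itertools.cycle(p for p, up in paths.items() if up)

      paths = {"vmhba1:C0:T0:L0": True, "vmhba2:C0:T0:L0": True}
      print(psp_fixed(paths, "vmhba1:C0:T0:L0"))       # always the preferred HBA
      rr = psp_round_robin(paths)
      print([next(rr) for _ in range(4)])              # alternates across both HBAs
      mru = PspMostRecentlyUsed("vmhba1:C0:T0:L0")
      paths["vmhba1:C0:T0:L0"] = False                 # simulate a path failure
      print(mru.select(paths))                         # fails over to vmhba2 and stays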
    • 45. vStorage APIs for Array Integration (VAAI)
      VAAI integrates advanced storage features with VMware
      Basic requirements:
      A capable storage array
      ESX 4.1+
      A software plug-in for ESX
      Not every implementation is equal
      Block zeroing can be very demanding for some arrays
      Zeroing might conflict with full copy
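      Back-of-the-envelope arithmetic shows why the block-zero primitive matters (a rough illustration, not a benchmark; the disk size, I/O size and per-command extent are assumptions): eager-zeroing a disk without VAAI means the host pushes every zero block over the fabric, while the WRITE SAME offload lets it describe whole ranges and leave the work to the array.
      # Rough comparison of zeroing a new eager-zeroed VMDK with and without
      # the VAAI block-zero (WRITE SAME) offload.
      GIB = 1024**3
      vmdk_size = 40 * GIB          # hypothetical 40 GB virtual disk
      io_size = 1024 * 1024         # 1 MB zero-filled writes from the host

      without_vaai_ios = vmdk_size // io_size        # host sends every zero block itself
      without_vaai_bytes = vmdk_size                 # all of it crosses the fabric

      zero_extent = 256 * 1024**2   # assumed extent the array accepts per command
      with_vaai_ios = vmdk_size // zero_extent       # a handful of descriptor commands
      with_vaai_bytes = with_vaai_ios * 512          # roughly a CDB + descriptor each

      print(f"without VAAI: {without_vaai_ios} writes, {without_vaai_bytes / GIB:.0f} GiB on the wire")
      print(f"with VAAI:    {with_vaai_ios} commands, ~{with_vaai_bytes / 1024:.0f} KiB on the wire")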
    • 46. VAAI Support Matrix
    • 47. vSphere 5: VAAI 2
      Block
      (FC/iSCSI)
      T10 compliance is improved - No plug-in needed for many arrays
      File
      (NFS)
      NAS plugins come from vendors, not VMware
    • 48. vSphere 5: vSphere Storage APIs – Storage Awareness (VASA)
      VASA is a communication mechanism for vCenter to detect array capabilities
      RAID level, thin provisioning state, replication state, etc.
      Two locations in vCenter Server:
      “System-Defined Capabilities” – per-datastore descriptors
      Storage views and SMS APIs
    • 49. Storage I/O Control (SIOC)
      Storage I/O Control (SIOC) is all about fairness:
      Prioritization and QoS for VMFS
      Re-distributes unused I/O resources
      Minimizes “noisy neighbor” issues
      ESX can provide quality of service for storage access to virtual machines
      Enabled per-datastore
      When a pre-defined latency level is exceeded on a datastore, ESX begins to throttle VM I/O (default 30 ms)
      Monitors queues on storage arrays and per-VM I/O latency
      But:
      vSphere 4.1 with Enterprise Plus
      Disabled by default but highly recommended!
      Block storage only (FC or iSCSI)
      Whole-LUN only (no extents)
      No RDM
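      A sketch of the fairness mechanism (a simplification for illustration, not the actual SIOC algorithm; the queue depth and share values are invented): while observed datastore latency stays above the congestion threshold, each VM’s slice of the device queue is scaled back in proportion to its shares. The slide notes near the top suggest tuning the threshold to roughly 20–30 ms for FC and SAS, 30–50 ms for SATA, and 15–20 ms for SSD rather than leaving the 30 ms default everywhere.
      # Simplified model of Storage I/O Control: above the congestion threshold,
      # queue slots for a datastore are divided in proportion to VM shares.
      CONGESTION_THRESHOLD_MS = 30      # default; tune lower for SSD, higher for SATA
      TOTAL_QUEUE_DEPTH = 64            # device queue slots available to the datastore

      def sioc_allocate(observed_latency_ms, vm_shares):
          """Return per-VM queue slots for one datastore."""
          if observed_latency_ms <= CONGESTION_THRESHOLD_MS:
              # No congestion: every VM may use the full device queue.
              return {vm: TOTAL_QUEUE_DEPTH for vm in vm_shares}
          total = sum(vm_shares.values())
          # Congestion: throttle each VM in proportion to its shares.
          return {vm: max(1, TOTAL_QUEUE_DEPTH * s // total) for vm, s in vm_shares.items()}

      shares = {"sql-prod": 2000, "web01": 1000, "test-vm": 500}
      print(sioc_allocate(12, shares))   # below threshold: no throttling
      print(sioc_allocate(45, shares))   # above threshold: 36 / 18 / 9 slots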
    • 50. Storage I/O Control in Action
    • 51. Virtual Machine Mobility
      Moving virtual machines is the next big challenge
      Physical servers are difficult to move around and between data centers
      Pent-up desire to move virtual machines from host to host and even to different physical locations
      VMware DRS would move live VMs around the data center
      The “Holy Grail” for server managers
      Requires networked storage (SAN/NAS)
    • 52. vSphere 5: Storage DRS
      Datastore clusters aggregate multiple datastores
      VMs and VMDKs placement metrics:
      Space - Capacity utilization and availability (80% default)
      Performance – I/O latency (15 ms default)
      When thresholds are crossed, vSphere will rebalance all VMs and VMDKs according to Affinity Rules
      Storage DRS works with either VMFS/block or NFS datastores
      Maintenance Mode evacuates a datastore
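      The initial-placement side of this can be sketched in a few lines (my simplification, not the actual Storage DRS algorithm; the datastore names and figures are invented): candidates in the datastore cluster are filtered by the space and latency thresholds above, and the new VMDK goes to the one with the most headroom.
      # Simplified Storage DRS initial placement using the default thresholds:
      # 80% space utilization and 15 ms I/O latency.
      SPACE_THRESHOLD = 0.80
      LATENCY_THRESHOLD_MS = 15.0

      def place_vmdk(vmdk_gb, datastore_cluster):
          candidates = []
          for ds in datastore_cluster:
              used_after = (ds["used_gb"] + vmdk_gb) / ds["capacity_gb"]
              if used_after <= SPACE_THRESHOLD and ds["latency_ms"] <= LATENCY_THRESHOLD_MS:
                  candidates.append(ds)
          if not candidates:
              raise RuntimeError("no datastore satisfies the Storage DRS thresholds")
          return max(candidates, key=lambda ds: ds["capacity_gb"] - ds["used_gb"])

      cluster = [
          {"name": "gold-01", "capacity_gb": 2048, "used_gb": 1800, "latency_ms": 9.0},
          {"name": "gold-02", "capacity_gb": 2048, "used_gb": 1100, "latency_ms": 6.5},
          {"name": "gold-03", "capacity_gb": 2048, "used_gb": 900,  "latency_ms": 22.0},
      ]
      print(place_vmdk(100, cluster)["name"])   # gold-02: gold-01 too full, gold-03 too slow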
    • 53. What’s next
      Lunch
    • 54. Expanding the Conversation
      Converged I/O, storage virtualization and new storage architectures
    • 55. This Hour’s Focus: Non-Hypervisor Storage Features
      Converged networking
      Storage protocols (FC, iSCSI, NFS)
      Enhanced Ethernet (DCB, CNA, FCoE)
      I/O virtualization
      Storage for virtual storage
      Tiered storage and SSD/flash
      Specialized arrays
      Virtual storage appliances (VSA)
    • 56. Introduction: Converging on Convergence
      Data centers rely more on standard ingredients
      What will connect these systems together?
      IP and Ethernet are logical choices
    • 57. Drivers of Convergence
    • 58. Which Storage Protocol to Use?
      Server admins don’t know/care about storage protocols and will want whatever they are familiar with
      Storage admins have preconceived notions about the merits of various options:
      FC is fast, low-latency, low-CPU, expensive
      NFS is slow, high-latency, high-CPU, cheap
      iSCSI is medium, medium, medium, medium
    • 59. vSphere Protocol Performance
    • 60. vSphere CPU Utilization
    • 61. vSphere Latency
    • 62. Microsoft Hyper-V Performance
    • 63. Which Storage Protocols Do People Use?
      Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
    • 64. The Upshot: It Doesn’t Matter
      Use what you have and are familiar with!
      FC, iSCSI, NFS all work well
      Most enterprise production VM data is on FC, many smaller shops using iSCSI or NFS
      Either/or? - 50% use a combination
      For IP storage
      Network hardware and config matter more than protocol (NFS, iSCSI, FC)
      Use a separate network or VLAN
      Use a fast switch and consider jumbo frames
      For FC storage
      8 Gb FC/FCoE is awesome for VMs
      Look into NPIV
      Look for VAAI
    • 65. The Storage Network Roadmap
    • 66. Serious Performance
      10 GbE is faster than most storage interconnects
      iSCSI and FCoE both can perform at wire-rate
    • 67. Latency is Critical Too
      Latency is even more critical in shared storage
      FCoE with 10 GbE can achieve well over 500,000 4K IOPS (if the array and client can handle it!)
    • 68. Benefits Beyond Speed
      10 GbE takes performance off the table (for now…)
      But performance is only half the story:
      Simplified connectivity
      New network architecture
      Virtual machine mobility
      1 GbE Cluster
      4G FC Storage
      1 GbE Network
      10 GbE
      (Plus 6 Gbps extra capacity)
    • 69. Enhanced 10 Gb Ethernet
      • Ethernet and SCSI were not made for each other
      • 70. SCSI expects a lossless transport with guaranteed delivery
      • 71. Ethernet expects higher-level protocols to take care of issues
      • 72. “Data Center Bridging” is a project to create lossless Ethernet
      • 73. AKA Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE)
      • 74. iSCSI and NFS are happy with or without DCB
      • 75. DCB is a work in progress
      • 76. FCoE requires PFC (Qbb or PAUSE), DCBX (Qaz)
      • 77. QCN (Qau) is still not ready
      Priority Flow Control (PFC)
      802.1Qbb
      Congestion Management (QCN)
      802.1Qau
      Bandwidth Management (ETS)
      802.1Qaz
      PAUSE
      802.3x
      Data Center Bridging Exchange Protocol (DCBX)
      Traffic Classes 802.1p/Q
    • 78. FCoE CNAs for VMware ESX
      No Intel (OpenFCoE) or Broadcom support in vSphere 4…
    • 79. vSphere 5: FCoE Software Initiator
      Dramatically expands the FCoE footprint from just a few CNAs
      Based on Intel OpenFCoE? – Shows as “Intel Corporation FCoE Adapter”
    • 80. I/O Virtualization: Virtual I/O
      Extends I/O capabilities beyond physical connections (PCIe slots, etc)
      Increases flexibility and mobility of VMs and blades
      Reduces hardware, cabling, and cost for high-I/O machines
      Increases density of blades and VMs
    • 81. I/O Virtualization: IOMMU (Intel VT-d)
      IOMMU gives devices direct access to system memory
      AMD IOMMU or Intel VT-d
      Similar to AGP GART
      VMware VMDirectPath leverages IOMMU
      Allows VMs to access devices directly
      May not improve real-world performance
      System Memory
      IOMMU
      MMU
      I/O Device
      CPU
    • 82. Does SSD Change the Equation?
      RAM and flash promise high performance…
      But you have to use it right
    • 83. Flash is Not A Disk
      Flash must be carefully engineered and integrated
      Cache and intelligence to offset write penalty
      Automatic block-level data placement to maximize ROI
      IF a system can do this, everything else improves
      Overall system performance
      Utilization of disk capacity
      Space and power efficiency
      Even system cost can improve!
    • 84. The Tiered Storage Cliché
      Cost and Performance
      Optimized for Savings!
    • 85. Tiered Storage Evolves
    • 86. Three Approaches to SSD For VM
      EMC Project Lightning promises to deliver all three!
    • 87. Storage for Virtual Servers (Only!)
      New breed of storage solutions just for virtual servers
      Highly integrated (vCenter, VMkernel drivers, etc.)
      High-performance (SSD cache)
      Mostly from startups (for now)
      Tintri – NFS-based caching array
      Virsto+EvoStor – Hyper-V software, moving to VMware
    • 88. Virtual Storage Appliances (VSA)
      What if the SAN was pulled inside the hypervisor?
      VSA = A virtual storage array as a guest VM
      Great for lab or PoC
      Some are not for production
      Can build a whole data center in a hypervisor, including LAN, SAN, clusters, etc
      Physical Server Resources
      Hypervisor
      VM Guest
      VM Guest
      Virtual Storage Appliance
      Virtual SAN
      Virtual LAN
      CPU
      RAM
    • 89. vSphere 5: vSphere Storage Appliance (VSA)
      Aimed at SMB market
      Two deployment options:
      2x replicates storage 4:2
      3x replicates round-robin 6:3
      Uses local (DAS) storage
      Enables HA and vMotion with no SAN or NAS
      Uses NFS for storage access
      Also manages IP addresses for HA
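      A sketch of the two layouts (my reading of the 4:2 and 6:3 figures above, so treat the datastore names and mapping as illustrative): each node exports one NFS datastore and holds a mirror of the next node’s datastore, so N nodes present N datastores backed by 2N volumes.
      # Illustrative layout of the VSA replication schemes: 2 nodes -> 4 volumes
      # backing 2 datastores, 3 nodes -> 6 volumes backing 3 datastores.
      def vsa_layout(nodes):
          layout = []
          for i, node in enumerate(nodes):
              partner = nodes[(i + 1) % len(nodes)]   # round-robin replica placement
              layout.append({"datastore": f"VSADs-{i}",
                             "primary_on": node,
                             "replica_on": partner})
          return layout

      for entry in vsa_layout(["esx-a", "esx-b"]):            # 2-node case (4:2)
          print(entry)
      for entry in vsa_layout(["esx-a", "esx-b", "esx-c"]):   # 3-node case (6:3)
          print(entry)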
    • 90. Virtual Storage Appliance Options
    • 91. Whew! Let’s Sum Up
      Server virtualization changes everything
      Throw your old assumptions about storage workloads and presentation out the window
      We (storage folks) have some work to do
      New ways of presenting storage to the server
      Converged I/O (Ethernet!)
      New demand for storage virtualization features
      New architectural assumptions
    • 92. Thank You!
      Stephen Foskett
      stephen@fosketts.net
      twitter.com/sfoskett
      +1(508)451-9532
      FoskettServices.com
      blog.fosketts.net
      GestaltIT.com
