How Does Virtualization Change Storage?
May 18, 2009
Stephen Foskett, Director of Consulting, Nirvanix
Abstract
• The virtualization of servers destroys everything that storage folks thought they knew about I/O and throws in a new layer of abstraction to boot.
• Creating storage for virtualization is not the same as creating storage for most other applications, and storing virtual servers on a SAN or NAS is not the same as using internal disk.
• This session walks through what virtualization changes about storage, the storage options with their pros and cons, and what the future looks like with FCoE, UCS, 10 GbE, and the VMware vStorage APIs.
Server Virtualization Recoil
The server virtualization revolution has challenged storage in many ways:
• Dramatically changed I/O
• Impact on storage capacity utilization
• Architecture decisions to be made: DAS, SAN, NAS
• Trouble for traditional backup, replication, and reporting
• Biggest issue: converged technology leads to converged management organizations

Pillars of Virtual Machine Performance
• Processor, I/O (disk/network), and memory
• Virtual machine performance demands a balanced base of processing, I/O subsystem, and memory performance, capability, and capacity.
Virtualization As An I/O Engine
• Server virtualization is the single greatest I/O driver in the modern data center.
• CPU power and memory capacity are easy to ramp up; I/O is not.
• Unbalanced systems will not perform well.
I/O Is Concentrated
• Then: each server had its own storage and LAN ports, and I/O utilization was low.
• Now: all I/O is concentrated on just a few LAN and SAN ports.
I/O Is Randomized
• Then: I/O was mainly sequential, requests were grouped physically on disk, and storage could read ahead and cache data.
• Now: sequential I/O streams are mixed together randomly as disks are virtualized and re-combined (a small sketch below illustrates the effect).
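To make the randomization effect concrete, here is a minimal Python sketch; the guest names and datastore offsets are assumptions for illustration, not taken from the deck. Each guest reads its own virtual disk sequentially, but once the hypervisor interleaves the streams onto one shared datastore, the block addresses the array sees are no longer sequential.

    # Minimal sketch: three guests issue sequential reads, but the shared
    # datastore places each VMDK at a different offset, so the interleaved
    # stream the array receives is effectively random.
    from itertools import zip_longest

    vmdk_base = {"vm1": 0, "vm2": 100_000, "vm3": 200_000}  # assumed offsets (blocks)

    def guest_reads(vm, n_blocks):
        # Blocks as the guest sees them: perfectly sequential.
        return [(vm, block) for block in range(n_blocks)]

    streams = [guest_reads(vm, 4) for vm in vmdk_base]

    # The hypervisor services the guests round-robin, interleaving their requests.
    interleaved = [r for batch in zip_longest(*streams) for r in batch if r]

    # Translate guest block numbers into datastore block numbers.
    print([vmdk_base[vm] + block for vm, block in interleaved])
    # [0, 100000, 200000, 1, 100001, 200001, ...] -- sequential per guest,
    # scattered from the array's point of view.

This is why the read-ahead and caching tricks that worked on dedicated disks lose much of their value once many guests share one datastore.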
I/O Is Accelerated
• Then: channels were under-utilized with little contention for resources, and speeds were low (1 GbE, IDE/ATA).
• Now: I/O is combined, packets arrive quickly, and protocols are quicker (10 GbE, 8 Gb FC).
• In the same amount of time, 1 GbE handles 1 packet from 1 host, 4 Gb FC handles 4 packets from 4 hosts, 8 Gb FC handles 8 packets from 5 hosts, and 10 GbE handles 10 packets from all 6 hosts (see the line-rate arithmetic below).
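For a rough feel for the link speeds named above, the sketch below computes how long moving 1 GB takes at raw line rate; it deliberately ignores protocol overhead and encoding, so real-world throughput is lower.

    # Raw line-rate arithmetic only; FC 8b/10b encoding and protocol
    # overhead are ignored, so these are best-case numbers.
    payload_bits = 1 * 10**9 * 8                    # 1 GB of data
    for name, gbps in [("1 GbE", 1), ("4 Gb FC", 4), ("8 Gb FC", 8), ("10 GbE", 10)]:
        print(f"{name:8s} ~{payload_bits / (gbps * 10**9):.1f} s per GB at line rate")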
Converged Data Center I/O
• Now: all I/O is concentrated on just a few LAN and SAN ports.
• Soon: I/O is converged onto 10 GbE and extended into the server hardware.
Server Virtualization and Storage Utilization
Wasted Space
• Each level of abstraction adds overhead, so overall utilization is low (a worked example follows).
• Layers: raw array capacity → usable array capacity → LUNs presented to the host → configured datastore → per-server virtual disks → capacity actually used by each server.
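The loss at each layer multiplies. The sketch below uses made-up but plausible per-layer fractions (none of these numbers come from the slide) to show how quickly end-to-end utilization of raw capacity erodes.

    # Illustrative per-layer efficiencies; the figures are assumptions, not data.
    layers = [
        ("usable / raw (RAID, spares)",          0.75),
        ("LUNs presented / usable",              0.80),
        ("datastore configured / LUNs",          0.90),
        ("virtual disks allocated / datastore",  0.70),
        ("data written / virtual disks",         0.50),
    ]

    overall = 1.0
    for name, fraction in layers:
        overall *= fraction
        print(f"{name:38s} {fraction:4.0%}   cumulative {overall:5.1%}")
    # With these assumptions, only ~19% of raw capacity holds real data.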
Thin Provisioning
• Thin provisioning allocates storage as needed. Example: a new project requests 500 GB but writes only 2 GB of initial data; the array allocates just 2 GB and expands as data is written (see the sketch after this slide).
• What's not to love?
  • Oops: we provisioned a petabyte and ran out of storage.
  • Chunk sizes and formatting conflicts.
  • Can it thin-unprovision?
  • Can it replicate to and from thin-provisioned volumes?
• VMware is adding thin provisioning to vSphere 4 (standard at all license levels!).
• Some storage arrays do thin provisioning (3PAR, HDS, NetApp).
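Here is a minimal sketch of the allocate-on-write idea behind thin provisioning, using the slide's 500 GB / 2 GB example; the 256 MiB chunk size and the ThinVolume class are assumptions for illustration, not any particular array's implementation.

    # Allocate backing chunks only when data is actually written.
    class ThinVolume:
        CHUNK = 256 * 1024 * 1024            # assumed 256 MiB allocation unit

        def __init__(self, advertised_bytes):
            self.advertised = advertised_bytes    # what the host is promised
            self.backed_chunks = set()            # chunks with physical backing

        def write(self, offset, length):
            first = offset // self.CHUNK
            last = (offset + length - 1) // self.CHUNK
            self.backed_chunks.update(range(first, last + 1))

        @property
        def allocated(self):
            return len(self.backed_chunks) * self.CHUNK

    vol = ThinVolume(advertised_bytes=500 * 1024**3)   # project asks for "500 GB"
    vol.write(offset=0, length=2 * 1024**3)            # only 2 GB written so far
    print(vol.allocated / 1024**3)                     # 2.0 -- grows as data arrives

The open questions on the slide, such as reclaiming freed space and replicating between thin volumes, are exactly the parts this toy model leaves out.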
Server Virtualization Demands SAN and NAS
• Server virtualization has transformed the data center and its storage requirements.
• 86% of organizations have implemented some server virtualization (ESG 2008).
• VMware is the #1 driver of SAN adoption today; 60% of virtual server storage is on SAN or NAS (ESG 2008).
• Server virtualization has enabled, and demanded, centralization and sharing of storage on arrays like never before.
VMware Storage Options: Shared Storage (VMFS)
• Shared storage is the common/workstation approach: VMDK images are stored in VMFS datastores on DAS or an FC/iSCSI SAN. Hyper-V's VHD is similar.
• Why? Traditional, familiar, and common (~90%); enables prime features (Storage VMotion, etc.); multipathing, load balancing, and failover.*
• But: overhead of two storage stacks (5-8%); harder to leverage array features; VMs often share a storage LUN and queue; storage management is difficult.
VMware Storage Options: Shared Storage on NFS
• Shared storage on NFS skips VMFS and uses NAS directly; the NFS export is the datastore.
• Wow: simple (no SAN); multiple queues; flexible (on-the-fly changes); simple snap and replicate*; enables full VMotion; use LACP trunking with the "fixed" path policy.
• But: less familiar (ESX 3.0+); CPU load questions; limited to 8 NFS datastores by default; will multi-VMDK snapshots be consistent?
VMware Storage Options: Raw Device Mapping (RDM)
• With raw device mapping (RDM), guest VMs access storage directly over iSCSI or FC; VMs can even boot from raw devices. Hyper-V's pass-through LUN is similar.
• Great: per-server queues for performance; easier measurement; the only method for clustering.
• But: tricky VMotion and DRS; no Storage VMotion; more management overhead; limited to 256 LUNs per data center.
Which VMware Storage Method Performs Best?
• Charts: mixed random I/O and CPU cost per I/O for VMFS, RDM (physical), and RDM (virtual).
• Source: "Performance Characterization of VMFS and RDM Using a SAN", VMware Inc., 2008.
Which Storage Protocol Performs Best?
• Charts: throughput by I/O size and CPU cost per I/O for Fibre Channel, NFS, iSCSI (software), and iSCSI (TOE).
• Source: "Comparison of Storage Protocol Performance", VMware Inc., 2008.
• And iSCSI is even better in vSphere 4!
How about Hyper-V?
Which Storage Protocol Is For You?
• FC, iSCSI, and NFS all work well; most production VM data is on FC.
• Either/or? 50% use a combination (ESG 2008). Leverage what you have and are familiar with.
• For IP storage: use TOE cards/iSCSI HBAs; use a separate network or VLAN; make sure your switch backplane is fast; no VM cluster support with iSCSI.*
• For FC storage: 4 Gb FC is awesome for VMs; get NPIV if you can.
• FCoE is the future: converged network adapters (CNAs), Cisco UCS.
Presenter notes (for the asterisked items on the VMFS and NFS datastore slides):
• Up to 256 FC or iSCSI LUNs per ESX host. ESX multipathing provides load balancing, failover, and even failover between FC and iSCSI. Beware of block sizes greater than 256 KB! If you want virtual disks greater than 256 GB, you must use a VMFS block size larger than 1 MB. Align your virtual disk starting offset to your array, by booting the VM and using diskpart, Windows PE, or UNIX fdisk (a sketch of the alignment check follows these notes).
• Use Link Aggregation Control Protocol (LACP) for trunking/EtherChannel, with the "fixed" path policy rather than MRU. Up to 8 (or 32) NFS mount points are supported. Turn off access-time updates. Thin provisioning? Turn on AutoSize and watch out.
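A small sketch of the alignment check behind the diskpart/fdisk advice in the first note; the 64 KB array chunk size and the example offsets are illustrative assumptions, not recommendations from the deck.

    # A partition is aligned when its starting offset is a whole multiple of
    # the array's chunk/stripe element; otherwise guest I/Os straddle chunks.
    def is_aligned(partition_start_bytes, array_chunk_bytes=64 * 1024):
        return partition_start_bytes % array_chunk_bytes == 0

    print(is_aligned(63 * 512))      # False: classic MS-DOS start at sector 63
    print(is_aligned(1024 * 1024))   # True: 1 MiB offset, e.g. set via diskpart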
Storage in VMware vSphere 4
• Thin provisioning is standard at all license levels.
• Dynamic expansion of VMFS volumes.
• Any-to-any Storage VMotion.
• High-performance I/O: paravirtualized SCSI, enhanced iSCSI stack, jumbo frames.
• Data Protection APIs (A).
• Pluggable storage multipathing (E+).

The Organizational Challenge
• How will server, storage, and networking teams deal with integration?
• Each discipline has its own best practices, and each has its own prejudices.
• They can be forced together, but will it work?

Who Is Nirvanix
• The premier "cloud storage" service provider for the enterprise.
• Backed by Intel Capital, Mission Ventures, Valhalla Partners, Windward Ventures, and European Founders Fund.
• 2007 "Storage Products of the Year", 2008 "Top Startups to Watch", 2008 "Product of the Year".
• Over 500 customers, including leading Fortune 10, Media & Entertainment, and Web 2.0 companies.

Thank You
Nirvanix: We manage your storage, so you can manage your business.
www.nirvanix.com / twitter.com/nirvanix
Stephen Foskett, sfoskett@nirvanix.com
Enterprise Storage Strategies Blog: bit.ly/ESSBlog
Personal Blog: blog.fosketts.net
Enterprise IT Content: gestaltit.com