Virtualization Changes Storage
  • Up to 256 FC or iSCSI LUNs; ESX multipathing with load balancing and failover, including failover between FC and iSCSI.* Beware of block sizes greater than 256 KB! If you want virtual disks greater than 256 GB, you must use a VMFS block size larger than 1 MB (see the sketch below). Align your virtual disk starting offset to your array (by booting the VM and using diskpart, Windows PE, or UNIX fdisk).*
  • Link Aggregation Control Protocol (LACP) for trunking/EtherChannel; use the “fixed” path policy, not LRU. Up to 8 (or 32) NFS mount points. Turn off access-time updates. Thin provisioning? Turn on AutoSize and watch out.
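
The block-size rule in the first note can be made concrete. Here is a minimal Python sketch: only the 1 MB → 256 GB pair appears in the note above; the rest of the mapping is an assumption based on commonly cited VMFS-3 limits of the era.

```python
# Sketch of the VMFS-3 block-size rule from the note above. Only the
# 1 MB -> 256 GB pair is stated on the slide; the remaining pairs
# (2 MB -> 512 GB, 4 MB -> 1 TB, 8 MB -> 2 TB) are an assumption.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block size (MB) -> max VMDK (GB)

def required_block_size_mb(vmdk_gb: int) -> int:
    """Return the smallest VMFS-3 block size (MB) that can hold a vmdk_gb disk."""
    for block_mb, max_gb in sorted(VMFS3_MAX_FILE_GB.items()):
        if vmdk_gb <= max_gb:
            return block_mb
    raise ValueError(f"{vmdk_gb} GB exceeds the VMFS-3 maximum file size")

print(required_block_size_mb(300))  # -> 2: a 300 GB virtual disk needs a 2 MB block size
```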

Virtualization Changes Storage Presentation Transcript

  • 1. How Does Virtualization Change Storage? May 18, 2009
    Stephen Foskett, Director of Consulting, Nirvanix
  • 2. Abstract
    The virtualization of servers destroys everything that storage folks thought they knew about I/O and throws in a new layer of abstraction to boot
    Creating storage for virtualization is not the same as creating storage for most other apps, and storing virtual servers on a SAN or NAS is not the same as using internal disk
    This session will walk through what virtualization changes about storage, the various storage options, pros and cons, and what the future looks like with FCoE, UCS, 10 GbE, and VMware vStorage APIs
  • 3. Server Virtualization Recoil
    • The server virtualization revolution has challenged storage in many ways:
    • 4. Dramatically changed I/O
    • 5. Impact on storage capacity utilization
    • 6. Architecture decisions to be made: DAS, SAN, NAS
    • 7. Trouble for traditional backup, replication, reporting
    • 8. Biggest issue: Converged technology leads to converged management organizations
  • Pillars of Virtual Machine Performance
    The three pillars: Processor, I/O (disk/net), and Memory
    Virtual machine performance demands a balanced base of processing, I/O subsystem, and memory performance, capability, and capacity
  • 9. Virtualization As An I/O Engine
    Server virtualization is the single greatest I/O driver in the modern data center
    CPU power and memory capacity are easy to ramp up; I/O is not
    Unbalanced systems will not perform well
  • 10. I/O Is Concentrated
    Then… each server had its own storage and LAN ports, and I/O utilization was low
    Now… all I/O is concentrated on just a few LAN and SAN ports
  • 11. I/O is Randomized
    Then… I/O was mainly sequential; requests were grouped physically on disk, so storage could read ahead and cache data
    Now… disk is virtualized and re-combined, and sequential I/O is mixed together randomly (see the sketch below)
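
To make the randomization concrete, here is a toy Python sketch (hypothetical block numbers, not from the presentation) of three sequential guest streams interleaving at the array:

```python
# Toy illustration (hypothetical workload): three VMs each read consecutive
# blocks, but the hypervisor interleaves their requests, so the stream the
# array actually sees has no spatial locality left.
vm_streams = {
    "vm1": range(1000, 1008),   # each VM by itself is perfectly sequential
    "vm2": range(5000, 5008),
    "vm3": range(9000, 9008),
}

# Round-robin interleave, roughly what the shared storage port observes.
merged = [blk for trio in zip(*vm_streams.values()) for blk in trio]
print(merged[:9])  # [1000, 5000, 9000, 1001, 5001, 9001, ...] - looks random to the array
```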
  • 12. I/O is Accelerated
    Then… channels were under-utilized with little contention for resources, and speeds were low: 1 GbE, IDE/ATA
    Now… I/O is combined, packets arrive quickly, and protocols are quicker: 10 GbE, 8 Gb FC
    In the same amount of time…
    1 GbE handles 1 packet from 1 host...
    4 Gb FC handles 4 packets from 4 hosts...
    8 Gb FC handles 8 packets from 5 hosts...
    10 GbE handles 10 packets from all 6 hosts... (see the arithmetic below)
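
The packet counts above follow from the nominal line rates. A small Python sketch of that arithmetic (nominal rates only, ignoring encoding overhead and contention):

```python
# The arithmetic behind "in the same amount of time": relative to 1 GbE,
# faster links carry proportionally more packets. Nominal line rates only;
# real throughput is lower (encoding overhead, contention).
line_rate_gbps = {"1 GbE": 1, "4 Gb FC": 4, "8 Gb FC": 8, "10 GbE": 10}

baseline = line_rate_gbps["1 GbE"]
for link, rate in line_rate_gbps.items():
    print(f"{link:8s} carries {rate // baseline:2d} packets while 1 GbE carries 1")
```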
  • 13. Converged Data Center I/O
    Now… all I/O is concentrated on just a few LAN and SAN ports
    Soon… I/O is converged on 10 GbE and extended into server hardware
  • 14. Server Virtualization and Storage Utilization
  • 15. Wasted Space
    Each level of abstraction adds overhead
    Overall utilization is low!
    Layers, bottom to top: raw array capacity → usable array capacity → LUNs presented to host → configured datastore → per-server virtual disks (servers 1–3) → per-server used capacity (see the sketch below)
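
A small Python sketch of how the waterfall compounds; the per-layer efficiencies are hypothetical placeholders, not figures from the slide:

```python
# Hypothetical per-layer efficiencies (none of these figures come from the
# slide) showing how the waterfall of abstractions compounds into low
# end-to-end utilization.
layers = [
    ("usable after RAID and spares",        0.75),
    ("LUNs actually presented to hosts",    0.90),
    ("datastore formatting and reserve",    0.90),
    ("virtual disks sized vs. datastore",   0.80),
    ("data actually written vs. disk size", 0.40),
]

capacity_tb = 100.0  # start with 100 TB raw
for name, factor in layers:
    capacity_tb *= factor
    print(f"{name:38s} -> {capacity_tb:5.1f} TB")
# ~19 TB of real data on 100 TB raw: under 20% overall utilization.
```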
  • 16. Thin Provisioning
    Thin provisioning allocates storage as-needed
    Example: 500 GB request for new project, but only 2 GB of initial data is written – array only allocates 2 GB and expands as data is written
    What’s not to love?
    Oops – we provisioned a petabyte and ran out of storage (see the sketch below)
    Chunk sizes and formatting conflicts
    Can it thin unprovision?
    Can it replicate to and from thin provisioned volumes?
    VMware is adding thin provisioning in vSphere 4 (standard at all license levels!)
    Some storage arrays do thin provisioning natively (3PAR, HDS, NetApp)
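
A toy Python model of the oversubscription risk described above; all sizes, names, and the chunk granularity are hypothetical:

```python
# Toy model of a thin-provisioned pool (all sizes hypothetical): volumes are
# promised their full size up front, but physical chunks are allocated only
# as data is written - which is exactly how a pool gets oversubscribed.
CHUNK_GB = 1      # allocation granularity (the "chunk size" the slide warns about)
POOL_GB = 1000    # physical capacity behind the thin pool

class ThinVolume:
    def __init__(self, promised_gb: int):
        self.promised_gb = promised_gb
        self.allocated_gb = 0  # physical chunks actually consumed

    def write(self, gb: float) -> None:
        # A thin array rounds each write up to whole chunks.
        self.allocated_gb += -int(-gb // CHUNK_GB) * CHUNK_GB

vols = [ThinVolume(500) for _ in range(4)]  # promise 2000 GB against a 1000 GB pool
vols[0].write(2)                            # the slide's example: 2 GB of initial data

promised = sum(v.promised_gb for v in vols)
allocated = sum(v.allocated_gb for v in vols)
print(f"promised {promised} GB, allocated {allocated} GB, "
      f"pool oversubscribed {promised / POOL_GB:.1f}x")  # the "oops" scenario
```

Running it shows 2,000 GB promised against a 1,000 GB pool with only 2 GB allocated so far, which is why the slide's "watch out" on monitoring applies.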
  • 17. Server Virtualization Demands SAN and NAS
    Server virtualization has transformed the data center and storage requirements
    86% have implemented some server virtualization (ESG 2008)
    VMware is the #1 driver of SAN adoption today!
    60% of virtual server storage is on SAN or NAS (ESG 2008)
    Server virtualization has enabled and demanded centralization and sharing of storage on arrays like never before!
  • 18. VMware Storage Options: Shared Storage
    Shared storage – the common/workstation approach
    Stores VMDK image in VMFS datastores
    DAS or FC/iSCSI SAN
    Hyper-V VHD is similar
    Why?
    Traditional, familiar, common (~90%)
    Prime features (Storage VMotion, etc.)
    Multipathing, load balancing, failover*
    But…
    Overhead of two storage stacks (5-8%)
    Harder to leverage storage features
    Often shares storage LUN and queue
    Difficult storage management
    (Diagram: guest OS inside a VM on the host; VMDK file in a VMFS datastore on DAS or SAN storage)
  • 19. VMware Storage Options: Shared Storage on NFS
    Shared storage on NFS – skip VMFS and use NAS
    The NFS export is the datastore
    Wow!
    Simple – no SAN
    Multiple queues
    Flexible (on-the-fly changes)
    Simple snap and replicate*
    Enables full VMotion
    Use fixed LACP for trunking
    But…
    Less familiar (3.0+)
    CPU load questions
    Default limited to 8 NFS datastores
    Will multi-VMDK snaps be consistent?
    (Diagram: guest OS inside a VM on the host; VMDK stored directly on NFS storage)
  • 20. VMware Storage Options: Raw Device Mapping (RDM)
    Raw device mapping (RDM) – guest VMs access storage directly over iSCSI or FC
    VMs can even boot from raw devices
    Hyper-V pass-through LUN is similar
    Great!
    Per-server queues for performance
    Easier measurement
    The only method for clustering
    But…
    Tricky VMotion and DRS
    No storage VMotion
    More management overhead
    Limited to 256 LUNs per data center
    (Diagram: guest OS inside a VM on the host; I/O passes through an RDM mapping file to SAN storage)
  • 21. Which VMware Storage Method Performs Best?
    (Charts: mixed random I/O throughput and CPU cost per I/O, comparing VMFS, RDM (physical), and RDM (virtual))
    Source: “Performance Characterization of VMFS and RDM Using a SAN”, VMware Inc., 2008
  • 22. Which Storage Protocol Performs Best?
    (Charts: throughput by I/O size and CPU cost per I/O, comparing Fibre Channel, NFS, software iSCSI, and TOE iSCSI)
    Source: “Comparison of Storage Protocol Performance”, VMware Inc., 2008
    And iSCSI is even better in vSphere 4!
  • 23. How about Hyper-V?
  • 24. Which Storage Protocol is For You?
    FC, iSCSI, NFS all work well
    Most production VM data is on FC
    Either/or? 50% use a combination (ESG 2008)
    Leverage what you have and are familiar with
    For IP storage
    Use TOE cards/iSCSI HBAs
    Use a separate network or VLAN
    Is your switch backplane fast?
    No VM Cluster support with iSCSI*
    For FC storage
    4 Gb FC is awesome for VMs
    Get NPIV (if you can)
    FCoE is the future
    Converged network adapters (CNAs) for storage and network traffic
    Cisco UCS
  • 25. Storage in VMware vSphere 4
    Thin provisioning is standard for all levels
    Dynamic expansion of VMFS volumes
    Any-to-any Storage VMotion
    High performance I/O
    Paravirtualized SCSI
    Enhanced iSCSI stack
    Jumbo frames
    Data Protection APIs (A)
    Pluggable Storage multipathing (E+)
  • 26. The Organizational Challenge
    How will server, storage, and networking teams deal with integration?
    Each discipline has its own best practices
    Each has its own prejudices
    They can be forced together, but will it work?
  • 27. Who Is Nirvanix?
    The Premier “Cloud Storage” Service Provider for the Enterprise
    Backed by Intel Capital, Mission Ventures,
    Valhalla Partners, Windward Ventures and European Founders Fund
    2007 “Storage Products of the Year”
    2008 “Top Startups to Watch”
    2008 “Product of the Year”
    Over 500 customers including leading Fortune 10, Media & Entertainment and Web 2.0 companies
  • 28. Thank You
    Nirvanix
    We manage your storage,
    so you can manage your business
    www.nirvanix.com
    twitter.com/nirvanix
    Stephen Foskett
    sfoskett@nirvanix.com
    Enterprise Storage Strategies Blog: bit.ly/ESSBlog
    Personal Blog: blog.fosketts.net
    Enterprise IT Content: gestaltit.com