Storage for Virtual Environments 2011 R2
The Storage for Virtual Environments seminar focuses on the challenges of backup and recovery in a virtual infrastructure, the various solutions that users are now using to solve those challenges, and a roadmap for making the most of all an organization’s virtualization initiatives.

This slide deck was used by Stephen Foskett for his Storage for Virtual Environments seminar.

  • Mirror Mode paper: http://www.usenix.org/events/atc11/tech/final_files/Mashtizadeh.pdf and http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-2-storage-vmotion.html
  • http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-1-vmfs-5.html
  • Up to 256 FC or iSCSI LUNs. ESX multipathing: load balancing, failover, and failover between FC and iSCSI*. Beware of block sizes greater than 256 KB! If you want virtual disks greater than 256 GB, you must use a VMFS block size larger than 1 MB. Align your virtual disk starting offset to your array (by booting the VM and using diskpart, Windows PE, or UNIX fdisk)*; see the alignment sketch after these notes.
  • Link Aggregation Control Protocol (LACP) for trunking/EtherChannel. Use the “fixed” path policy, not LRU. Up to 8 (or 32) NFS mount points. Turn off access time updates. Thin provisioning? Turn on AutoSize and watch out.
  • http://www.techrepublic.com/blog/datacenter/stretch-your-storage-dollars-with-vsphere-thin-provisioning/2655 and http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf
  • http://virtualgeek.typepad.com/virtual_geek/2011/07/vstorage-apis-for-array-integration-vaai-vsphere-5-edition.html and http://blogs.vmware.com/vsphere/2011/07/new-enhanced-vsphere-50-storage-features-part-3-vaai.html
  • http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_SIOC.pdf and http://www.yellow-bricks.com/2010/10/19/storage-io-control-best-practices/ – Recommended latency thresholds: FC storage 20–30 ms, SAS storage 20–30 ms, SATA storage 30–50 ms, SSD storage 15–20 ms.
  • http://www.slideshare.net/esloof/vsphere-5-whats-new-storage-drs and http://blogs.vmware.com/vsphere/2011/07/vsphere-50-storage-features-part-5-storage-drs-initial-placement.html
  • http://www.ntpro.nl/blog/archives/1804-vSphere-5-Whats-New-Storage-Appliance-VSA.html
  • http://jpaul.me/?p=2072
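
The alignment note above can be sanity-checked with simple arithmetic: multiply the partition's starting sector (as reported by diskpart or fdisk) by the sector size and confirm the result lands on a boundary of the array's segment size. A minimal Python sketch, assuming 512-byte sectors and a 64 KB array segment (both values are assumptions, not from the deck):

```python
# Hedged sketch: check whether a guest partition's starting offset is
# aligned to an assumed array segment size.
SECTOR_SIZE = 512          # bytes per sector, as reported by fdisk/diskpart
ARRAY_SEGMENT = 64 * 1024  # assumed array stripe/segment size in bytes

def is_aligned(start_sector: int, segment: int = ARRAY_SEGMENT) -> bool:
    """True if the partition's byte offset falls on a segment boundary."""
    return (start_sector * SECTOR_SIZE) % segment == 0

# The classic Windows 2003-era default of sector 63 is misaligned,
# while a 1 MiB offset (sector 2048) is aligned to any common segment size.
for sector in (63, 128, 2048):
    print(f"start sector {sector}: aligned={is_aligned(sector)}")
```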

Storage for Virtual Environments 2011 R2: Presentation Transcript

  • Storage for Virtual Environments
    Stephen Foskett
    Foskett Services and Gestalt IT
    Live Footnotes:
    • @Sfoskett
    • #VirtualStorage
  • This is Not a Rah-Rah Session
  • Agenda
  • Introducing the Virtual Data Center
  • This Hour’s Focus: What Virtualization Does
    Introducing storage and server virtualization
    The future of virtualization
    The virtual datacenter
    Virtualization confounds storage
    Three pillars of performance
    Other issues
    Storage features for virtualization
    What’s new in VMware
  • Virtualization of Storage, Server and Network
    Storage has been stuck in the Stone Age since the Stone Age!
    Fake disks, fake file systems, fixed allocation
    Little integration and no communication
    Virtualization is a bridge to the future
    Maintains functionality for existing apps
    Improves flexibility and efficiency
  • A Look at the Future
  • Server Virtualization is On the Rise
    Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
  • Server Virtualization is a Pile of Lies!
    What the OS thinks it’s running on vs. what the OS is actually running on:
    [Diagram: a guest OS inside a VM sits on the VMkernel’s scheduler and memory allocator (via binary translation, paravirtualization, or hardware assist), with virtual devices (vNIC to vSwitch to NIC driver; vSCSI/PV to VMDK to VMFS to I/O driver) mapped onto the physical hardware]
  • And It Gets Worse Outside the Server!
  • The Virtual Data Center of Tomorrow
    [Diagram: many applications, legacy and in The Cloud™, under common management, running on pooled CPU, network, backup, and storage resources]
  • The Real Future of IT Infrastructure
    Orchestration Software
  • Three Pillars of VM Performance
  • Confounding Storage Presentation
    Storage virtualization is nothing new…
    RAID and NAS virtualized disks
    Caching arrays and SANs masked volumes
    New tricks: Thin provisioning, automated tiering, array virtualization
    But, we wrongly assume this is where it ends
    Volume managers and file systems
    Databases
    Now we have hypervisors virtualizing storage
    VMFS/VMDK = storage array?
    Virtual storage appliances (VSAs)
  • Begging for Converged I/O
    [Diagram: separate 4G FC storage, 1 GbE network, and 1 GbE cluster connections per server]
    How many I/O ports and cables does a server need?
    Typical server has 4 ports, 2 used
    Application servers have 4-8 ports used!
    Do FC and InfiniBand make sense with 10/40/100 GbE?
    When does commoditization hit I/O?
    Ethernet momentum is unbeatable
    Blades and hypervisors demand greater I/O integration and flexibility
    Other side of the coin – need to virtualize I/O
  • Driving Storage Virtualization
    Server virtualization demands storage features
    Data protection with snapshots and replication
    Allocation efficiency with thin provisioning+
    Performance and cost tweaking with automated sub-LUN tiering
    Improved locking and resource sharing
    Flexibility is the big one
    Must be able to create, use, modify and destroy storage on demand
    Must move storage logically and physically
    Must allow OS to move too
  • “The I/O Blender” Demands New Architectures
    Shared storage is challenging to implement
    Storage arrays “guess” what’s coming next based on allocation (LUN), taking advantage of sequential performance
    Server virtualization throws I/O into a blender – All I/O is now random I/O!
  • Server Virtualization Requires SAN and NAS
    Server virtualization has transformed the data center and storage requirements
    VMware is the #1 driver of SAN adoption today!
    60% of virtual server storage is on SAN or NAS
    86% have implemented some server virtualization
    Server virtualization has enabled and demanded centralization and sharing of storage on arrays like never before!
    Source: ESG, 2008
  • Keys to the Future For Storage Folks
    Ye Olde Seminar Content!
  • Primary Production Virtualization Platform
    Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
  • Storage Features for Virtualization
  • Which Features Are People Using?
    Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
  • What’s New in vSphere 4 and 4.1
    VMware vSphere 4 (AKA ESX/ESXi 4) is a major upgrade for storage
    Lots of new features like thin provisioning, PSA, any-to-any Storage VMotion, PVSCSI
    Massive performance upgrade (400k IOPS!)
    vSphere 4.1 is equally huge for storage
    Boot from SAN
    vStorage APIs for Array Integration (VAAI)
    Storage I/O control (SIOC)
  • What’s New in vSphere 5
    VMFS-5 – Scalability and efficiency improvements
    Storage DRS – Datastore clusters and improved load balancing
    Storage I/O Control – Cluster-wide and NFS support
    Profile-Driven Storage – Provisioning, compliance and monitoring
    FCoE Software Initiator
    iSCSI Initiator GUI
    Storage APIs – Storage Awareness (VASA)
    Storage APIs – Array Integration (VAAI 2) – Thin Stun, NFS, T10
    Storage vMotion - Enhanced with mirror mode
    vSphere Storage Appliance (VSA)
    vSphere Replication – New in SRM
  • And Then, There’s VDI…
    Virtual desktop infrastructure (VDI) takes everything we just worried about and amplifies it:
    Massive I/O crunches
    Huge duplication of data
    More wasted capacity
    More user visibility
    More backup trouble
  • What’s next
    Vendor Showcase and Networking Break
  • Technical Considerations - Configuring Storage for VMs
    The mechanics of presenting and using storage in virtualized environments
  • This Hour’s Focus: Hypervisor Storage Features
    Storage vMotion
    VMFS
    Storage presentation: Shared, raw, NFS, etc.
    Thin provisioning
    Multipathing (VMware Pluggable Storage Architecture)
    VAAI and VASA
    Storage I/O control and storage DRS
  • Storage vMotion
    Introduced in ESX 3 as “Upgrade vMotion”
    ESX 3.5 used a snapshot while the datastore was in motion
    vSphere 4 used changed-block tracking (CBT) and recursive passes
    vSphere 5 Mirror Mode mirrors writes to in-progress vMotions and also supports migration of vSphere snapshots and Linked Clones
    Can be offloaded for VAAI-Block (but not NFS)
  • vSphere 5: What’s New in VMFS 5
    Max VMDK size is still 2 TB – 512 bytes
    Virtual (non-passthru) RDM still limited to 2 TB
    Max LUNs per host is still 256
  • Hypervisor Storage Options: Shared Storage
    The common/workstation approach
    VMware: VMDK image in VMFS datastore
    Hyper-V: VHD image in CSV datastore
    Block storage (direct or FC/iSCSI SAN)
    Why?
    Traditional, familiar, common (~90%)
    Prime features (Storage VMotion, etc)
    Multipathing, load balancing, failover*
    But…
    Overhead of two storage stacks (5-8%)
    Harder to leverage storage features
    Often shares storage LUN and queue
    Difficult storage management
    [Diagram: guest OS inside a VM on the host; VMDK in a VMFS datastore on DAS or SAN storage]
  • Hypervisor Storage Options: Shared Storage on NAS
    Skip VMFS and use NAS
    NFS or SMB is the datastore
    Wow!
    Simple – no SAN
    Multiple queues
    Flexible (on-the-fly changes)
    Simple snap and replicate*
    Enables full vMotion
    Link aggregation (trunking) is possible
    But…
    Less familiar (ESX 3.0+)
    CPU load questions
    Limited to 8 NFS datastores (ESX default)
    Snapshot consistency for multiple VMDK
    [Diagram: guest OS inside a VM on the host; VMDK stored directly on NAS storage]
  • Hypervisor Storage Options: Guest iSCSI
    Skip VMFS and use iSCSI directly
    Access a LUN just like any physical server
    VMware ESX can even boot from iSCSI!
    Ok…
    Storage folks love it!
    Can be faster than ESX iSCSI
    Very flexible (on-the-fly changes)
    Guest can move and still access storage
    But…
    Less common to VM folks
    CPU load questions
    No Storage VMotion (but doesn’t need it)
    [Diagram: guest OS inside a VM accessing an iSCSI LUN on the storage array directly]
  • Hypervisor Storage Options: Raw Device Mapping (RDM)
    Guest VMs access storage directly over iSCSI or FC
    VMs can even boot from raw devices
    Hyper-V pass-through LUN is similar
    Great!
    Per-server queues for performance
    Easier measurement
    The only method for clustering
    Supports LUNs larger than 2 TB (60 TB passthru in vSphere 5!)
    But…
    Tricky VMotion and dynamic resource scheduling (DRS)
    No storage VMotion
    More management overhead
    Limited to 256 LUNs per data center
    [Diagram: guest OS I/O passed through an RDM mapping file to SAN storage]
  • Hypervisor Storage Options: Direct I/O
    VMware ESX VMDirectPath - Guest VMs access I/O hardware directly
    Leverages AMD IOMMU or Intel VT-d
    Great!
    Potential for native performance
    Just like RDM but better!
    But…
    No VMotion or Storage VMotion
    No ESX fault tolerance (FT)
    No ESX snapshots or VM suspend
    No device hot-add
    No performance benefit in the real world!
    [Diagram: guest OS inside a VM accessing I/O hardware and SAN storage directly]
  • Which VMware Storage Method Performs Best?
    [Charts: mixed random I/O and CPU cost per I/O compared for VMFS, RDM (physical), and RDM (virtual)]
    Source: “Performance Characterization of VMFS and RDM Using a SAN”, VMware Inc., ESX 3.5, 2008
  • vSphere 5: Policy or Profile-Driven Storage
    Allows storage tiers to be defined in vCenter based on SLA, performance, etc.
    Used during provisioning, cloning, Storage vMotion, Storage DRS
    Leverages VASA for metrics and characterization
    All HCL arrays and types (NFS, iSCSI, FC)
    Custom descriptions and tagging for tiers
    Compliance status is a simple binary report
  • Native VMware Thin Provisioning
    VMware ESX 4 allocates storage in 1 MB chunks as capacity is used
    Similar support enabled for virtual disks on NFS in VI 3
    Thin provisioning existed for block, could be enabled on the command line in VI 3
    Present in VMware desktop products
    vSphere 4 fully supports and integrates thin provisioning
    Every version/license includes thin provisioning
    Allows thick-to-thin conversion during Storage VMotion
    In-array thin provisioning also supported (we’ll get to that…)
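
One way to see how much of this is in play in an existing environment is to ask the vSphere API which virtual disks are thin. The sketch below is not from the deck: it assumes the pyVmomi Python bindings, a reachable vCenter or ESXi host, and placeholder connection details.

```python
# Hedged sketch: list each VM's virtual disks and whether they are thin.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only: skip cert validation
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in vms.view:
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            thin = getattr(dev.backing, "thinProvisioned", None)
            size_gb = dev.capacityInKB // (1024 * 1024)
            print(f"{vm.name}: {dev.deviceInfo.label} {size_gb} GB thin={thin}")

vms.DestroyView()
Disconnect(si)
```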
  • Four Types of VMware ESX Volumes
    [Table comparing the four ESX volume types. Notes: FT is not supported; what will your array do? VAAI helps…; friendly to on-array thin provisioning]
  • Storage Allocation and Thin Provisioning
    VMware tests show no performance impact from thin provisioning after zeroing
  • Pluggable Storage Architecture: Native Multipathing
    VMware ESX includes multipathing built in
    Basic native multipathing (NMP) is round-robin fail-over only – it will not load balance I/O across multiple paths or make more intelligent decisions about which paths to use
    [Diagram: the Pluggable Storage Architecture (PSA) hosts either the VMware NMP or a third-party MPP; under the NMP sit VMware or third-party SATPs and PSPs]
  • Pluggable Storage Architecture: PSP and SATP
    vSphere 4 Pluggable Storage Architecture allows third-party developers to replace ESX’s storage I/O stack
    ESX Enterprise+ Only
    There are two classes of third-party plug-ins:
    Path-selection plug-ins (PSPs) optimize the choice of which path to use, ideal for active/passive type arrays
    Storage array type plug-ins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays
    EMC PowerPath/VE for vSphere does everything
  • Storage Array Type Plug-ins (SATP)
    ESX native approaches cover active/passive, active/active, and pseudo-active arrays
    Storage Array Type Plug-ins:
    VMW_SATP_LOCAL – Generic local direct-attached storage
    VMW_SATP_DEFAULT_AA – Generic for active/active arrays
    VMW_SATP_DEFAULT_AP – Generic for active/passive arrays
    VMW_SATP_LSI – LSI/NetApp arrays from Dell, HDS, IBM, Oracle, SGI
    VMW_SATP_SVC – IBM SVC-based systems (SVC, V7000, Actifio)
    VMW_SATP_ALUA – Asymmetric Logical Unit Access-compliant arrays
    VMW_SATP_CX – EMC/Dell CLARiiON and Celerra (also VMW_SATP_ALUA_CX)
    VMW_SATP_SYMM – EMC Symmetrix DMX-3/DMX-4/VMAX, Invista
    VMW_SATP_INV – EMC Invista and VPLEX
    VMW_SATP_EQL – Dell EqualLogic systems
    Also: EMC PowerPath, HDS HDLM, and other vendor-unique plug-ins not detailed in the HCL
  • Path Selection Plug-ins (PSP)
    VMW_PSP_MRU – Most-Recently Used (MRU) – Supports hundreds of storage arrays
    VMW_PSP_FIXED – Fixed - Supports hundreds of storage arrays
    VMW_PSP_RR – Round-Robin - Supports dozens of storage arrays
    DELL_PSP_EQL_ROUTED – Dell EqualLogic iSCSI arrays
    Also, EMC PowerPath and other vendor-unique plug-ins
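
The PSP actually in effect for each device can be read back through the vSphere API as part of the host's multipath configuration. A hedged pyVmomi sketch (connection details are placeholders; the policy strings it prints are typically the VMW_PSP_* names listed above):

```python
# Hedged sketch: show the path selection policy and path count per LUN.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in hosts.view:
    multipath = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
    for lun in multipath.lun:
        # lun.policy.policy is usually one of the PSP names above
        print(f"{host.name}: {lun.id} policy={lun.policy.policy} "
              f"paths={len(lun.path)}")

hosts.DestroyView()
Disconnect(si)
```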
  • vStorage APIs for Array Integration (VAAI)
    VAAI integrates advanced storage features with VMware
    Basic requirements:
    A capable storage array
    ESX 4.1+
    A software plug-in for ESX
    Not every implementation is equal
    Block zeroing can be very demanding for some arrays
    Zeroing might conflict with full copy
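
Whether a particular device is getting hardware acceleration at all is exposed per SCSI LUN in the vSphere API as a vStorageSupport status. A minimal pyVmomi sketch under the same placeholder-connection assumptions as the earlier examples:

```python
# Hedged sketch: report VAAI (hardware acceleration) status per SCSI device.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in hosts.view:
    for lun in host.config.storageDevice.scsiLun:
        # One of: vStorageSupported, vStorageUnsupported, vStorageUnknown
        status = getattr(lun, "vStorageSupport", "n/a")
        print(f"{host.name}: {lun.displayName} -> {status}")

hosts.DestroyView()
Disconnect(si)
```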
  • VAAI Support Matrix
  • vSphere 5: VAAI 2
    Block (FC/iSCSI): T10 compliance is improved; no plug-in needed for many arrays
    File (NFS): NAS plug-ins come from vendors, not VMware
  • vSphere 5: vSphere Storage APIs – Storage Awareness (VASA)
    VASA is a communication mechanism for vCenter to detect array capabilities
    RAID level, thin provisioning state, replication state, etc.
    Two locations in vCenter Server:
    “System-Defined Capabilities” – per-datastore descriptors
    Storage views and SMS APIs
  • Storage I/O Control (SIOC)
    Storage I/O Control (SIOC) is all about fairness:
    Prioritization and QoS for VMFS
    Re-distributes unused I/O resources
    Minimizes “noisy neighbor” issues
    ESX can provide quality of service for storage access to virtual machines
    Enabled per-datastore
    When a pre-defined latency threshold is exceeded on the datastore (default 30 ms), it begins to throttle VM I/O
    Monitors queues on storage arrays and per-VM I/O latency
    But:
    vSphere 4.1 with Enterprise Plus
    Disabled by default but highly recommended!
    Block storage only (FC or iSCSI)
    Whole-LUN only (no extents)
    No RDM
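
The fairness knobs SIOC arbitrates with are the per-disk shares and optional IOPS limit on each virtual disk. A hedged pyVmomi sketch that just reads them (placeholder connection details again; storageIOAllocation requires vSphere 4.1 or later):

```python
# Hedged sketch: print each virtual disk's I/O shares and IOPS limit.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in vms.view:
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk) and dev.storageIOAllocation:
            alloc = dev.storageIOAllocation
            print(f"{vm.name}: {dev.deviceInfo.label} "
                  f"shares={alloc.shares.level} ({alloc.shares.shares}) "
                  f"limit={alloc.limit}")  # -1 means unlimited IOPS

vms.DestroyView()
Disconnect(si)
```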
  • Storage I/O Control in Action
  • Virtual Machine Mobility
    Moving virtual machines is the next big challenge
    Physical servers are difficult to move around and between data centers
    Pent-up desire to move virtual machines from host to host and even to different physical locations
    VMware DRS would move live VMs around the data center
    The “Holy Grail” for server managers
    Requires networked storage (SAN/NAS)
  • vSphere 5: Storage DRS
    Datastore clusters aggregate multiple datastores
    VM and VMDK placement metrics:
    Space - Capacity utilization and availability (80% default)
    Performance – I/O latency (15 ms default)
    When thresholds are crossed, vSphere will rebalance all VMs and VMDKs according to Affinity Rules
    Storage DRS works with either VMFS/block or NFS datastores
    Maintenance Mode evacuates a datastore
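
The 80% space threshold is easy to approximate outside of Storage DRS, since capacity and free space are part of every datastore's summary. The sketch below only illustrates that space check with the same placeholder connection details; it is not Storage DRS's actual placement logic.

```python
# Hedged sketch: flag datastores above the 80% space utilization default.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

SPACE_THRESHOLD = 0.80  # Storage DRS default noted above

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
datastores = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in datastores.view:
    cap, free = ds.summary.capacity, ds.summary.freeSpace
    used = (cap - free) / cap if cap else 0.0
    flag = "over threshold" if used > SPACE_THRESHOLD else "ok"
    print(f"{ds.name}: {used:.0%} used, {free / 2**30:.1f} GiB free ({flag})")

datastores.DestroyView()
Disconnect(si)
```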
  • What’s next
    Lunch
  • Expanding the Conversation
    Converged I/O, storage virtualization and new storage architectures
  • This Hour’s Focus: Non-Hypervisor Storage Features
    Converged networking
    Storage protocols (FC, iSCSI, NFS)
    Enhanced Ethernet (DCB, CNA, FCoE)
    I/O virtualization
    Storage for virtual storage
    Tiered storage and SSD/flash
    Specialized arrays
    Virtual storage appliances (VSA)
  • Introduction: Converging on Convergence
    Data centers rely more on standard ingredients
    What will connect these systems together?
    IP and Ethernet are logical choices
  • Drivers of Convergence
  • Which Storage Protocol to Use?
    Server admins don’t know/care about storage protocols and will want whatever they are familiar with
    Storage admins have preconceived notions about the merits of various options:
    FC is fast, low-latency, low-CPU, expensive
    NFS is slow, high-latency, high-CPU, cheap
    iSCSI is medium, medium, medium, medium
  • vSphere Protocol Performance
  • vSphere CPU Utilization
  • vSphere Latency
  • Microsoft Hyper-V Performance
  • Which Storage Protocols Do People Use?
    Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers
  • The Upshot: It Doesn’t Matter
    Use what you have and are familiar with!
    FC, iSCSI, NFS all work well
    Most enterprise production VM data is on FC, many smaller shops using iSCSI or NFS
    Either/or? - 50% use a combination
    For IP storage
    Network hardware and config matter more than protocol (NFS, iSCSI, FC)
    Use a separate network or VLAN
    Use a fast switch and consider jumbo frames
    For FC storage
    8 Gb FC/FCoE is awesome for VMs
    Look into NPIV
    Look for VAAI
  • The Storage Network Roadmap
  • Serious Performance
    10 GbE is faster than most storage interconnects
    iSCSI and FCoE both can perform at wire-rate
  • Latency is Critical Too
    Latency is even more critical in shared storage
    FCoE with 10 GbE can achieve well over 500,000 4K IOPS (if the array and client can handle it!)
  • Benefits Beyond Speed
    10 GbE takes performance off the table (for now…)
    But performance is only half the story:
    Simplified connectivity
    New network architecture
    Virtual machine mobility
    [Diagram: 1 GbE cluster, 4G FC storage, and 1 GbE network traffic consolidated onto a single 10 GbE link, plus 6 Gbps extra capacity]
  • Enhanced 10 Gb Ethernet
    • Ethernet and SCSI were not made for each other
    • SCSI expects a lossless transport with guaranteed delivery
    • Ethernet expects higher-level protocols to take care of issues
    • “Data Center Bridging” is a project to create lossless Ethernet
    • AKA Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE)
    • iSCSI and NFS are happy with or without DCB
    • DCB is a work in progress
    • FCoE requires PFC (Qbb or PAUSE), DCBX (Qaz)
    • QCN (Qau) is still not ready
    DCB components: Priority Flow Control (PFC, 802.1Qbb); Congestion Management (QCN, 802.1Qau); Bandwidth Management (ETS, 802.1Qaz); PAUSE (802.3x); Data Center Bridging Exchange protocol (DCBX); Traffic Classes (802.1p/Q)
  • FCoE CNAs for VMware ESX
    No Intel (OpenFCoE) or Broadcom support in vSphere 4…
  • vSphere 5: FCoE Software Initiator
    Dramatically expands the FCoE footprint from just a few CNAs
    Based on Intel OpenFCoE? – Shows as “Intel Corporation FCoE Adapter”
  • I/O Virtualization: Virtual I/O
    Extends I/O capabilities beyond physical connections (PCIe slots, etc)
    Increases flexibility and mobility of VMs and blades
    Reduces hardware, cabling, and cost for high-I/O machines
    Increases density of blades and VMs
  • I/O Virtualization: IOMMU (Intel VT-d)
    IOMMU gives devices direct access to system memory
    AMD IOMMU or Intel VT-d
    Similar to AGP GART
    VMware VMDirectPath leverages IOMMU
    Allows VMs to access devices directly
    May not improve real-world performance
    [Diagram: the IOMMU sits between I/O devices and system memory, just as the MMU sits between the CPU and memory]
  • Does SSD Change the Equation?
    RAM and flash promise high performance…
    But you have to use it right
  • Flash is Not A Disk
    Flash must be carefully engineered and integrated
    Cache and intelligence to offset write penalty
    Automatic block-level data placement to maximize ROI
    If a system can do this, everything else improves
    Overall system performance
    Utilization of disk capacity
    Space and power efficiency
    Even system cost can improve!
  • The Tiered Storage Cliché
    [Diagram: the classic tiered storage picture, trading off cost and performance, “optimized for savings!”]
  • Tiered Storage Evolves
  • Three Approaches to SSD For VM
    EMC Project Lightning promises to deliver all three!
  • Storage for Virtual Servers (Only!)
    New breed of storage solutions just for virtual servers
    Highly integrated (vCenter, VMkernel drivers, etc.)
    High-performance (SSD cache)
    Mostly from startups (for now)
    Tintri – NFS-based caching array
    Virsto+EvoStor – Hyper-V software, moving to VMware
  • Virtual Storage Appliances (VSA)
    What if the SAN was pulled inside the hypervisor?
    VSA = A virtual storage array as a guest VM
    Great for lab or PoC
    Some are not for production
    Can build a whole data center in a hypervisor, including LAN, SAN, clusters, etc
    [Diagram: a virtual storage appliance runs as a guest VM alongside other guest VMs on the hypervisor, presenting a virtual SAN over a virtual LAN out of the physical server’s CPU, RAM, and storage]
  • vSphere 5: vSphere Storage Appliance (VSA)
    Aimed at SMB market
    Two deployment options:
    2x replicates storage 4:2
    3x replicates round-robin 6:3
    Uses local (DAS) storage
    Enables HA and vMotion with no SAN or NAS
    Uses NFS for storage access
    Also manages IP addresses for HA
  • Virtual Storage Appliance Options
  • Whew! Let’s Sum Up
    Server virtualization changes everything
    Throw your old assumptions about storage workloads and presentation out the window
    We (storage folks) have some work to do
    New ways of presenting storage to the server
    Converged I/O (Ethernet!)
    New demand for storage virtualization features
    New architectural assumptions
  • Thank You!
    Stephen Foskett
    stephen@fosketts.net
    twitter.com/sfoskett
    +1(508)451-9532
    FoskettServices.com
    blog.fosketts.net
    GestaltIT.com