»Storage Virtualization Seminar
     Presentation Download
     The management of storage devices is often tedious and tim...
Storage Virtualization
School
Presented By:
  •   Marc Staimer, President & CDS
  •   Dragon Slayer Consulting
  •   marcs...
Dragon Slayer Consulting Intro

   Marc Staimer - President & CDS
     •   11+ years
              Storage, SANS, Software...
The Socratic Test of Three




Seminar Agenda


   Part 1
     •   What, where, who, when, how storage virtualization

   Part 2
     •   Virtual storage...
What I assume you know

     •   SAN versus File Storage
     •   Storage versus IP Networking
     •   Scalability issues...
Part 1
                         Part 1   Part 2   Part 3




What, Where, Who, When, Why, How
Storage virtualization
Old Man & The Toad




Part 1 Agenda
              What

              Where

              Who

              When

              Why

         ...
What is Storage Virtualization?

   Abstract the storage image
     •   From the storage
     •   Different kinds of stora...
SNIA Storage Virtualization Definition
•    The act of abstracting, hiding, or isolating the internal
     function of a s...
Virtualized Storage Image Abstraction
   Abstracting the image means
     •   Masking storage services from applications
 ...
Polls: Storage Virtualization Market


   Virtualization is neither new nor strange
     •   Per ESG based on 2008 polls
  ...
IDG 2008 Virtualization Poll
  (Collected Q4 2007)



              Who Took the Survey (464 respondents)



             ...
IDG: Where Are You Investing Now?

                Current Virtualization Investments
                  Currently Investin...
IDG: Where Investing Thru 2010

                Future Virtualization Investments
                    Planning to Invest  ...
Dragon Slayer Consulting Poll
  (265 respondents)


                     Current or Planned Virtualization
Have (or will) ...
Where Storage Virtualization Occurs
   Everywhere
     •   Odds are your storage is already virtualized to a degree
    ...
Volume Management
   Server-based storage virtualization
     •   Abstracts block storage (LUNs, HDD) into virtual “volume...
Logical Volume Managers (LVM)
   Platform          Volume Manager                                      Notes

AIX         ...
ZFS: Sun’s Super File System
   a.k.a. “ZB file system”
     •   Combined file system, LVM, disk/partition manager
     • ...
ZFS Limitations

   Adding or removing vdevs is hard/impossible
     •   Especially removing

   Stacked RAID is currently...
IO Path Management Software
  Virtualizing the SAN Pathing App                                              App

         ...
SAN Storage Virtualization

   An abstraction layer
     •   Between hosts & physical storage
     •   That provides a sin...
What does SAN Storage Virtualization
  Do?

   Aggregate storage assets into 1 image
     •   Manages, provisions, protect...
SAN Tends to Be a Popular
  Virtualization Location
   Usually requires less configuration & mgmt
     •   As compared to ...
Shared vs. Split Path
      Shared path intercepts traffic                            Split path redirects traffic

      ...
Pros & Cons of Shared Path



   Pros                  Cons
  Simpler                Scalability limitations
   •    Imple...
Pros & Cons of Split Path



    Pros                               Cons
   Scalability                       Complexity
 ...
Split Path Notes



   Split Path Combinations
     •   Switch – SPAID
              a.k.a. split path architecture indepe...
SPAID Notes




   Switch Centric
     •   Metadata & LUN map on switch
     •   Software & processing done on switch
    ...
Server Centric LVM Notes




   LVM
     •   Virtualization SW, pathing SW, and/or LVM on server
     •   Mapping & proces...
Split Path Hybrids




   Split Path Combinations
     •   Proprietary virtualization SW
              A.k.a. agent and/or...
Advanced Hybrid



   Fabric Centric Storage
     •   Leverages std LVMs
              Symantec Storage Foundation
       ...
SAN Virtualization Products
       Product            Architecture            Location           Multi-vendor Repl.       ...
SAN Virtualization Issues



 Side effects
   •   Spreading a storage pool across more RAID sets, &/or systems
           ...
Ways to Mitigate Increased Data
  Failure Probabilities

Each intelligent storage element
is self-healing, reducing the   ...
Virtual Network Attached Storage (NAS)




   NAS lends itself to virtualization
     •   IP network connectivity and host...
NAS Virtualization Products
        Product           Architecture          Location                                    No...
File Virtualization Issues
(a.k.a. Global Name Space – GNS or Global File System - GFS)



                              V...
Embedded Virtualization has
  Transformed Storage Systems




   Common in storage array controllers
     •   Arrays creat...
Newer Gen of Arrays – Usually Clustered




   Include virtualization derived features
     •   Automated ILM
     •   Thi...
Virtual Storage Appliances (VSAs)

   2 types
     •   Distributed
     •   Proxy




Distributed VSAs


   Converts VM DAS into SAN
     •   All or some VM DAS
              In a virtual storage SAN pool
   ...
Proxy Virtual SANs
   Aggregates VM DAS & SAN into storage pool
     •   All or some
     •   Runs as a VM guest
         ...
VSA Products



                                       Server Virtualization
        Product        Architecture          ...
VSA Issues
   Distributed VSAs
     •   Requires a bit more memory & CPU cycles / server
   Proxy VSAs
     •   Requires q...
Virtualized IO                                             TCP/IP
                                                        ...
Virtualized IO – 10 to 40G IBA
              Standard IO                    IBA Virtualized IO




Infiniband IO Virtualization Definitions
   HCA – Host Channel Adapter
     •   Server adapter card
   TCA – Target Channe...
Virtualized IO – 10G CEE
              Standard IO                    CEE Virtualized IO




Ethernet IO Virtualization Definitions
   FCoE – Fibre Channel over Ethernet
     •   FC frames encapsulated in Ethernet p...
How IBA Compares to 10GbE
                          InfiniBand                           10GbE          Notes
    Max pt B...
Multi-Root IO Virtualization
   Moving IO outside the box




Simpler
                                     MRIOV – Value Prop
         •   Fewer storage & network pts
         •   Fewe...
How MRIOV Compares




Traditional Inefficiencies                          MRIOV Advantages
 Significant unused IO capacit...
Comparison w/Other IOV Solutions
          Today’s Solution         MRIOV Solution                     InfiniBand     FCoE...
IO Virtualization Products
Vendor        Technology              Product types                                         Not...
IO Virtualization Issues
   IBA
     •   Is primarily utilized for HPCC
     •   Not a large install base in the Enterpris...
Yes, Even SSDs are Virtualized




   Virtualizes single or multi-cell flash
     •   Algorithms manage writes
           ...
SSD Tier-0 Storage Types
   Enterprise storage
     •   Storage system optimized
              Looks like a HDD & fits in ...
Where Storage Virtualization Occurs
   Everywhere
     •   Operating systems
     •   Applications
     •   Volume manager...
Audience Response

Questions?
Break sponsored
by
Part 2
                              Part 1   Part 2   Part 3




Virtual storage in a virtual server world
Top 10 Signs You’re a Storage Admin
1.   ~90% of peers & bosses don't have a clue about what you really do
2.   Being sick i...
Part 2 Agenda
              Level setting
              •   For virtual servers

              Virtual server issues
     ...
Benefits of the Hypervisor Revolution

          Increased app availability
          Reduced server hardware w/consolidat...
Virtualized Servers
   Adv. features require networked storage
    • SAN or NAS
   Virtualized server advanced functionali...
Source Gartner: 2008 Enterprise
  Virtual Server Market Share

                     Virtual Iron, 3%    SUN, 1%
     Micro...
Distributed Resource Optimization
 Distributed Resource Scheduler
  • Dynamic resource pool balancing allocating on pre-de...
HOT! – VMotion
   Online – Increasing Data Availability
     •   No scheduled downtime
     •   Continuous service availab...
Virtual Desktop Infrastructure
   Increased
     •   Desktop availability, flexibility (not tied to desktop hardware), & s...
Site Recovery Manager
   Faster more automated DR

   Integrated with storage

   Utilize lower cost DR storage & drives

...
ESG Poll: Does Server Virtualization
Improve Storage Utilization?
              Since being implemented, what impact has ser...
ESG Server & Storage Virtualization Poll

              Has your org deployed a storage virtualization solution
          ...
Why Use Virtual Storage For Virtual Servers?


   Reasons Most Often Cited: Improved
     •   Mobility of virtual machines...
Server Virtualization Market Trends / ESG


    Server virtualization driving storage system re-evaluation
      •     66%...
What About Server Virtualization Based DR?

   DR is a prime beneficiary of server virtualization
     •   Fewer remote ma...
Server Virtualization = SAN and/or NAS

        SAN                                                       NAS




   Serve...
Types of Networked Storage

   NAS – Network Attached Storage
     •   A.K.A. File based storage
              NFS – Netwo...
NAS & Server Virtualization
   NAS works very well w / Hypervisors & adv features
     •   NFS – VMware, XenServer, KVM, V...
Why Hypervisor Vendors Don’t Usually
Recommend NAS
   Performance, performance, performance
      •   The exception being...
Virtual Servers & NAS
   Many virtual apps are fine w / NAS performance
   Virtual guests can boot from NFS (VMware VMDKs)...
Virtual Server Issues with NAS
  Most NAS Systems don’t scale well
    •   Capacity, file system size, max files, & especi...
SANs & Server Virtualization
   SANs work very well w / Hypervisors & adv features
   Mixed bag on Complexity
     •   FC...
Hypervisor Vendors Recommendations
   Recommended in this order
     •   iSCSI, FC, IBA
   iSCSI Rationale
     •   iSCSI ...
iSCSI SAN & Server Virtualization
   Should have dedicated fabric
     •   Not shared with other IP traffic
   Performance...
FC SAN & Server Virtualization
   FC SANs require NPIV
     •   N_Port ID Virtualization
              Otherwise there is ...
FC SAN & Server Virtualization
      FC SANs are manually intensive
        •     Implementation, ops, change mgmt, mgmt
 ...
Server Virtualization has Storage
Ramifications


   Dramatically increased I/O (storage) demands
   Patchwork of support, ...
Virtualized Server Storage Issues

   Boils down to 4 things to manage
     •   Performance
     •   Complexity
     •   T...
VMware Storage Option-Shared Block Storage
   Shared storage - common/ workstation approach
     •   Stores VMDK image in ...
VMware Storage Option-Shared NFS Storage
   Shared storage on NFS – skip VMFS & use NAS
      •   NFS is the datastore
   ...
VMware Storage Options-Raw Device Mapping
(RDM)
   Guest VMs access storage directly over iSCSI or FC
     •   VMs can eve...
Physical vs. Virtual RDM
     Virtual Compatibility Mode                            Physical Compatibility Mode
 •    Appe...
Which VMware Storage Method
Performs Best?
        Mixed Random I/O                                       CPU Cost Per I/O...
Server Virtualization Storage Protocol
  Breakout per IDC: 2007

                           7%
                    22%

  ...
Which Storage Protocol Performs Best?

   Throughput by I/O Size                             CPU Cost Per I/O




        ...
Perplexing Server Virtualization Storage
Performance Problems
   App performance drop-off
     •   When moving from physic...
The Issue is often…
   Too Much Oversubscription




   Generally, oversubscription is a very good thing


Where Oversubscription Occurs

   Within the:
     •   Hypervisor
     •   LUN
     •   Disk Drives
     •   SAN fabric
  ...
Hypervisor Oversubscription
   Hypervisors are designed for oversubscription
     •   But too much of a good thing…
      ...
LUN Oversubscription
  Combines disks into storage pools
   •    Each storage pool is carved up by the Hypervisor
        ...
HDD Oversubscription – Especially SATA

   Slower SATA drives don’t handle contention well
    •    Nominal buffers or que...
SAN Fabric Oversubscription
   SAN storage is typically oversubscribed
     •   8:1 (server initiators – target storage po...
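The 8:1 fan-in figure above is easier to reason about with a little arithmetic. The Python below is purely illustrative (the port speed and ratios are assumptions, not from the slides): it shows how the worst-case per-server share of a target port shrinks as the oversubscription ratio grows, which is exactly the squeeze created when many VMs on many hosts get busy at once.

```python
# Illustrative arithmetic only: what a fan-in (oversubscription) ratio means
# if every initiator behind a storage target port gets busy at once, which is
# far more likely when each physical server now carries many VMs.

def worst_case_share(port_gbps, initiators):
    return port_gbps / initiators

for ratio in (4, 8, 16):
    print(f"{ratio}:1 fan-in on an 8G FC target port -> "
          f"{worst_case_share(8, ratio):.1f} Gb/s per busy server")
# 4:1 -> 2.0 Gb/s, 8:1 -> 1.0 Gb/s, 16:1 -> 0.5 Gb/s
```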
Failure to Adjust for Virtual Server
Oversubscription Can be Disastrous
   SAN or storage target ports block IO
     •   ...
Too Much Oversubscription App Pain Points
              Operationally                                             Economic...
Too Much Oversubscription Work-Arounds

   Assign 1:1 physical LUNs to virtual LUNs
     •   Easiest with iSCSI Storage
  ...
Issues w/Work-Arounds

   RDM means ltd advanced features
     •   Discouraged by Hypervisor vendors
   Reduced or elimina...
Better Alternative Can be Virtualized Storage
   Virtualized SAN &/or NAS (GNS or GFS) Storage

   Can mitigate or elimina...
Of Course, There is the > Probability
  of Data Failure Issue (Previously Discussed)
   Probability of 1 system going down...
Server Virtualization Lack of End-to-end
  Visibility Pain
   Can’t pierce firewall of server virtualization layer
     • ...
No Perfect Solutions
   But some pretty good all around ones
     •   Akorri – Balance Point
     •   EMC – IT Management ...
Server Virtualization DP Pain & Issues

   Local

   Wide Area

   Granularity




Level Setting Definitions
   HA protects against local hardware failures
   DR protects against site failures
   Business ...
HA Requires Redundant Systems

   100% redundancy can be a tad expensive
     •   Upfront & ongoing
     •   For just prot...
Virtual Server DR Tends to Work Better w /
SAN or Virtual SAN Storage
   If hardware hosting VMs fails
     •   VMs can ea...
DR with Shared Storage on SAN
   Virtual guest images live on the SAN Storage

   Each VM guest is then pointed @
     •  ...
High Cost of Networked Storage HA
   Requires duplicated Network Storage for HA
     •   2x Network Storage hardware costs...
Virtualized Storage Can Mitigate Costs
   Virtualized storage (SAN or NAS)
     •   Fewer system images to manage
     •  ...
Wide Area DR
              Primary      FC over WAN                 FC over WAN   Recovery
                Site         Ga...
Storage Virtualization (SV) Can Mitigate Costs
   Not all SV can WAN replicate; ones that do provide
     •   Centralized con...
Server Virtualization WAN DR Issues
   FC over IP Gateways are expensive
    • Cisco & Brocade (QLogic less so)
    • Effe...
Reducing Wide Area Issues
   Replication using native TCP/IP or iSCSI
     •   Allows TCP optimizers to be utilized
     •...
Wide Area DR Technology
   Hypervisor, Storage, OS, or Application based
    •    Mirroring – Sync and/or Async
    •    S...
Typical HA-DR VM Storage & Issues

   Mirroring

   Hypervisor snapshot replication

   CDP typically does not work well o...
Mirroring – Sync, Async, Semi-Sync
 Sync replicates on write
   •   Requires remote acknowledgement before local write is ...
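A minimal sketch of the sync-versus-async distinction the bullets above introduce; it is not any vendor's replication engine, and the function names are made up for illustration. Synchronous mirroring acknowledges the host only after the remote copy commits; asynchronous acknowledges immediately and ships the write later, trading recovery point for write latency.

```python
replication_log = []                     # writes not yet shipped to the remote site

def sync_write(local, remote, block):
    local.append(block)
    remote.append(block)                 # host waits out the WAN round trip here
    return "ack after remote commit"     # RPO ~ zero, but every write pays the latency

def async_write(local, remote, block):
    local.append(block)
    replication_log.append((remote, block))   # shipped later by a background task
    return "ack immediately"                  # low latency; exposure = the unsent log

def drain_replication_log():
    # Background replication: apply queued writes to the remote copy.
    while replication_log:
        remote, block = replication_log.pop(0)
        remote.append(block)
```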
Mirroring Shortcomings
 Sync
  •    Cannot prevent the rolling disaster – disasters are synchronized
  •    Expensive & pe...
Snapshot – Simpler HA-DR Alternative?
   No agents on servers or applications
    • Simple to use
    • Medium to fine gra...
Storage Virtualization Can Again Help
   SV that is integrated with BU software provides
    • Centralized control, fewer ...
Snapshot Imperfections
   Snaps are typically not crash consistent for structured data
     •   Requires either VSS integration fo...
Issues w/Multi-Vendor App Aware Approach
   Multiple products to manage…separately

   BU SW not aware of replicated snaps...
1st: The Insidious Problem w/Agents
   Agents are software w/admin privileges
     •   A.k.a. plug-ins, lite agents, clien...
Why Admins Despise Agents
  E.G. Operational Headaches

              Agents compromise security

              Agents are...
Agents Compromise Security
   A firewall port must be opened/agent

   Agents have admin privileges
     •   Creates a bac...
Agents Very Difficult to Admin & Manage
     Installing agents can be maddeningly frustrating
        •   Requires an app ...
Continued
  Infrastructure complexity = increased failures
    •   More agent software parts = > failure probability
  Mul...
Agents Misappropriate Server Assets
   Agent software steals server resources
     •   Each agent utilizes 2% or more of a...
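To see why the "2% or more per agent" point bites harder on virtualized hosts, here is some back-of-the-envelope arithmetic in Python. The VM counts, agents per VM, and the 2% figure are assumptions used only to show how per-agent overhead multiplies across the VMs consolidated onto one physical server.

```python
# Illustrative arithmetic: agent overhead that is negligible on one physical
# server is multiplied by every VM packed onto the same host.

def host_overhead(vms_per_host, agents_per_vm, cpu_share_per_agent=0.02):
    return vms_per_host * agents_per_vm * cpu_share_per_agent

for vms in (5, 15, 30):
    pct = host_overhead(vms, agents_per_vm=3) * 100
    print(f"{vms} VMs x 3 agents -> ~{pct:.0f}% of the physical host's CPU")
# 5 VMs -> ~30%, 15 VMs -> ~90%, 30 VMs -> ~180% (i.e. fewer VMs actually fit)
```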
Agents Escalate CapEx & OpEx
   >2% of server CapEx & OpEx allocated to agents
     •   More when agents are required for ...
Agent Issues Exacerbated on VMs
                          Ap                                        p.
                   ...
Ultimately it Reduces Virtualization Value

   Agents limit VMs / physical server
     •   Reduces effective consolidation...
Traditional or Legacy Backup & Restore
   Backup to Tape, VTL, or Disk

   RPO & RTO range from coarse to fine grain
     ...
Typical Backup & Restore Failures
   All of the Agent issues in spades
     •   Multiple agents for different functions & ...
CDP
   Typically copies on writes
   It differs from mirroring in 4 ways
     •   Time stamps every write
     •   Can be ...
CDP Fails to Measure Up

    Primarily agent based as well w/agent problems

    Most CDP is not really designed for ROBOs...
VMware’s Agentless Solutions

   VMware Consolidated Backup – VCB

   VMware Data Recovery – VDR




VCB
   Requires no VM or VMware agents
     •   Utilizes VMware VMDK Snapshots
              RPO & RTO is coarse grain
   ...
Where VCB Comes up a bit Short
   32 max concurrent VMs/proxy, w/5 as best practice
     •   Means more proxy servers or v...
VDR
   Part of vSphere4
     •   Requires no VM or VMware agents
     •   Utilizes VMware VMDK Snapshots
              RPO...
Where VDR Comes up a bit Short
   100 max concurrent VMs
     •   Aimed at smaller VMware environments
   Not file system ...
Other Ongoing HA-DR Virtualization Issues
   Serious backup scalability limitations

   No integration w/Online-Cloud back...
Serious VM DP Scalability Issues
          DP vaults rarely scale w/some exceptions
              •   Meaning more backup ...
Private-Public Cloud Integration a.k.a.
  Hybrid Cloud Integration

   Local backup or archive vaults don’t replicate
    ...
Another More Complete Agentless
  Backup Solution

   Asigra – Hybrid Cloud Backup
     •   No agents
     •   Physical or...
Asigra Agentless VMware Backups
   Agentless Hybrid Cloud Backup
     •   ESX 3i/3.5/3.0 & vSphere 4 compatible
     •   Ph...
Asigra Agentless Hybrid Cloud BU Limitations
   No native backup to tape
   Requires replacement of current backup softwar...
VM Agentless Recommendations

   For smaller environments (< 100 VMs)
     •   VDR
     •   Products from Veeam, Vizioncor...
VM Agentless Recommendations

   For medium to larger environments
     •   SME to Enterprise
              Asigra Hybrid ...
Some Storage Configuration Best Practices



   Separate OS & app data
    • OS volumes (C: or /) on different VMFS or LUN...
Conclusions
   Numerous Virtual Server storage Issues

   There are ways to deal with them
     •   Some better than other...
Audience Response

Questions?
Break sponsored
by
Part 3
                           Part 1   Part 2   Part 3




Storage as a dynamic online “on-demand”
resource
Storage Virtualization Seminar Presentation Download

  1. 1. »Storage Virtualization Seminar Presentation Download The management of storage devices is often tedious and time consuming. And as the fragile economy continues to impact everyone this year, we are all going to be required to do more with less. Storage virtualization promises to ease the headaches of increasingly complex storage systems and, ideally, will allow us to still effectively and efficiently do our jobs as IT pros with fewer resources. This presentation highlights how virtualization has progressed from vaporware to an actual concept storage managers are using to minimize the amount of machines needed to manage, centralize data and change the economics of storage. Marc Staimer will also look at how storage virtualization makes heterogeneous storage compatible, eases data migration and enables consolidation. Sponsored By:
  2. 2. Storage Virtualization School Presented By: • Marc Staimer, President & CDS • Dragon Slayer Consulting • marcstaimer@comcast.net • 503-579-3763
  3. 3. Dragon Slayer Consulting Intro Marc Staimer - President & CDS • 11+ years Storage, SANS, Software, Networking, Servers Consults vendors (> 100) Consults end users (> 400) Analysis at trade shows Articles for Tech Target websites & magazines Blog • 29+ years industry experience August 2009 Storage Virtualization School 2
  4. 4. The Socratic Test of Three August 2009 Storage Virtualization School 3
  5. 5. Seminar Agenda Part 1 • What, where, who, when, how storage virtualization Part 2 • Virtual storage in a virtual server world Part 3 • Storage as a dynamic online “on-demand” resource August 2009 Storage Virtualization School 4
  6. 6. What I assume you know • SAN versus File Storage • Storage versus IP Networking • Scalability issues • Storage service issues • Storage management issues August 2009 Storage Virtualization School 5
  7. 7. Part 1 Part 1 Part 2 Part 3 What, Where, Who, When, Why, How Storage virtualization
  8. 8. Old Man & The Toad August 2009 Storage Virtualization School 7
  9. 9. Part 1 Agenda What Where Who When Why How August 2009 Storage Virtualization School 8
  10. 10. What is Storage Virtualization? Abstract the storage image • From the storage • Different kinds of storage SAN, NAS, Unified • Different kinds of storage image abstraction Virtualize, Cluster - Grid, Cloud, & Variations August 2009 Storage Virtualization School 9
  11. 11. SNIA Storage Virtualization Definition • The act of abstracting, hiding, or isolating the internal function of a storage (sub)system or service from applications, compute servers or general network resources for the purpose of enabling application and network independent management of storage or data. • The application of virtualization to storage services or devices for the purpose of aggregating, hiding complexity or adding new capabilities to lower level storage resources. Storage can be virtualized simultaneously in multiple layers of a system, for instance to create HSM like systems. August 2009 Storage Virtualization School 1
  12. 12. Virtualized Storage Image Abstraction Abstracting the image means • Masking storage services from applications Provisioning Increasing storage Additions Filer mounting Data migration • Between storage targets or storage tiers Data protection Change management August 2009 Storage Virtualization School 11
  13. 13. Polls: Storage Virtualization Market Virtualization is neither new nor strange • Per ESG based on 2008 polls 52% have already implemented storage virtualization 48% plan to implement August 2009 Storage Virtualization School 12
  14. 14. IDG 2008 Virtualization Poll (Collected Q4 2007) Who Took the Survey (464 respondents) IT Decision Maker IT Architect Other Developer August 2009 Storage Virtualization School 13
  15. 15. IDG: Where Are You Investing Now? Current Virtualization Investments Currently Investing 96% Server Virtualization 86% Desktop Virtualization 47% Storage Virtualization 43% Enterprise Data Center Virtualization 25% Application Virtualization 23% File Virtualization 15% Application Grids 11% IO Virtualization 9% No current virtualization investment 4% 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% August 2009 Storage Virtualization School 14
  16. 16. IDG: Where Investing Thru 2010 Future Virtualization Investments Planning to Invest 97% Server Virtualization 81% Desktop Virtualization 62% Storage Virtualization 53% Enterprise Data Center Virtualization 42% Application Virtualization 38% File Virtualization 31% IO Virtualization 18% Application Grids 15% No planned virtualization investments 0.4% Don't Know 3% 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% August 2009 Storage Virtualization School 15
  17. 17. Dragon Slayer Consulting Poll (265 respondents) Current or Planned Virtualization Have (or will) implemented virtualization 92% Server Virtualization 78% Desktop Virtualization 56% Storage Virtualization 48% Enterprise Data Center Virtualization 45% Application Virtualization 30% File Virtualization 22% IO Virtualization 14% Application Grids 12% No Current virtualization 11% No planned virtualization 8% 0% 10% 20% 30% 40% 50% 60% 70% 80% 90%100% August 2009 Storage Virtualization School 16
  18. 18. Where Storage Virtualization Occurs Everywhere • Odds are your storage is already virtualized to a degree Operating systems Applications Volume managers Hypervisors Storage arrays – RAID NAS Appliances Switches Even SSDs August 2009 Storage Virtualization School 17
  19. 19. Volume Management Server-based storage virtualization • Abstracts block storage (LUNs, HDD) into virtual “volumes” • Common to modern OS – built in Windows Logical Disk Manager, Linux LVM/EVMS, AIX LVM, HP-UX LVM, Solaris Solstice, Veritas Storage Foundation Mostly used for flexibility • Resize volumes • Protect data (RAID) • Add capacity (concatenate or expand stripe or RAID) • Mirror, snapshot, replicate • Migrate data August 2009 Storage Virtualization School 18
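As a rough illustration of the abstraction this slide describes, the Python sketch below shows how a volume manager might present one virtual "volume" built from extents on several physical LUNs and translate a logical block address to a physical one. The class and LUN names are hypothetical; real volume managers add striping, mirroring, and RAID on top of this mapping.

```python
# Hypothetical sketch: remapping a virtual volume's logical block address (LBA)
# onto extents carved from physical LUNs. Names and sizes are illustrative only.

from dataclasses import dataclass

@dataclass
class Extent:
    lun: str        # backing physical LUN
    start_lba: int  # first block of the extent on that LUN
    blocks: int     # extent length in blocks

class ConcatenatedVolume:
    """A virtual volume built by concatenating extents from several LUNs."""
    def __init__(self, extents):
        self.extents = extents

    def map(self, virtual_lba):
        """Translate a virtual LBA into (physical LUN, physical LBA)."""
        offset = virtual_lba
        for ext in self.extents:
            if offset < ext.blocks:
                return ext.lun, ext.start_lba + offset
            offset -= ext.blocks
        raise ValueError("LBA beyond end of volume")

vol = ConcatenatedVolume([
    Extent("array-A:LUN7", 0, 1_000_000),
    Extent("array-B:LUN2", 0, 2_000_000),
])
print(vol.map(1_500_000))   # ('array-B:LUN2', 500000)
```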
  20. 20. Logical Volume Managers (LVM)
      Platform | Volume Manager | Notes
      AIX | Logical Volume Manager | OSF LVM, no RAID 5, no copy-on-write snapshots
      HP-UX 9.0+ | HP Logical Volume Manager | OSF LVM, no RAID 5
      FreeBSD | Vinum Volume Manager | No copy-on-write snapshots
      Linux 2.2+ | Logical Volume Manager and Enterprise Volume Management System | Based on OSF LVM, no RAID 5
      Solaris | Solaris Volume Manager (was Solstice DiskSuite) | Limited allocation options, no copy-on-write snapshots
      AIX, HP-UX, Linux, Solaris, Windows | Symantec Veritas Volume Manager (VxVM), Storage Foundation | Full-featured multi-platform volume manager
      Windows 2000+ | Logical Disk Manager | Co-developed with Veritas, limited allocation options, copy-on-write snapshots introduced in Server 2003
      Solaris, BSD, Mac OS X 10.6+ | ZFS | Combined file system and volume manager
      August 2009 Storage Virtualization School 19
  21. 21. ZFS: Sun’s Super File System a.k.a. “ZB file system” • Combined file system, LVM, disk/partition manager • Open source (CDDL) project managed by Sun • Replaces UFS (Sun), HFS+ (Apple OSX Snow Leopard Server) • Extensible full featured storage pools Across systems, disks, &optimized for SSDs • File systems contained in “zpools” on “vdevs” W / striping & optional RAID-Z/Z2 • 128-bit addresses mean theoretical near-infinite capacity • “copy-on-write” w / checksums for snapshots, clones, authentication August 2009 Storage Virtualization School 20
  22. 22. ZFS Limitations Adding or removing vdevs is hard/impossible • Especially removing Stacked RAID is currently not possible There is no clustering • Until Sun adds Lustre August 2009 Storage Virtualization School 21
  23. 23. IO Path Management Software – Virtualizing the SAN Pathing Virtualizes the server – storage connection • Failover • Load balancing strategies Numerous choices • Veritas DMP (cross-platform, w / Storage Foundation) • EMC PowerPath (supports EMC, HDS, IBM, HP) • IBM SDD (free for IBM) • HDS (HDLM) • Microsoft MPIO (Windows, supports iSCSI & most FC) • VMware Failover Paths August 2009 Storage Virtualization School 22
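A minimal Python sketch (not modeled on DMP, PowerPath, or MPIO specifically) of the two behaviors IO path managers combine: round-robin load balancing across healthy paths, and failover that skips a path once it is marked dead. Path names are made up.

```python
# Hypothetical multipathing sketch: round-robin across active paths, with
# failover when a path (HBA -> fabric -> storage port) is marked dead.

import itertools

class MultipathDevice:
    def __init__(self, paths):
        self.state = {p: "active" for p in paths}
        self._rr = itertools.cycle(paths)

    def fail_path(self, path):
        self.state[path] = "dead"          # e.g. after an IO timeout

    def next_path(self):
        # Round-robin, skipping dead paths (failover behaviour).
        for _ in range(len(self.state)):
            p = next(self._rr)
            if self.state[p] == "active":
                return p
        raise RuntimeError("all paths to the LUN are down")

dev = MultipathDevice(["hba0->SP-A", "hba1->SP-B"])
print(dev.next_path())        # hba0->SP-A
dev.fail_path("hba0->SP-A")
print(dev.next_path())        # hba1->SP-B (IO fails over)
```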
  24. 24. SAN Storage Virtualization An abstraction layer • Between hosts & physical storage • That provides a single mgmt point For multiple block-level storage devices in a SAN And presents a set of virtual volumes for hosts to use August 2009 Storage Virtualization School 23
  25. 25. What does SAN Storage Virtualization Do? Aggregate storage assets into 1 image • Manages, provisions, protects, etc. Transforms “n” systems into slice & dice monolith Homogeneously or heterogeneously Virtual LUNs • Mapped to physical LUNs • Can be larger than physical Up to a Exabyte August 2009 Storage Virtualization School 24
  26. 26. SAN Tends to Be a Popular Virtualization Location Usually requires less configuration & mgmt • As compared to server based • And it potentially works with all servers & storage Resides in the storage fabric • Appliance, storage controller, switch, & hybrid • Control & data path combined or split August 2009 Storage Virtualization School 25
  27. 27. Shared vs. Split Path Shared path intercepts traffic (control path and data path both run through the virtualization layer); split path redirects traffic (the control path is separate from the data path, which goes directly to the storage). August 2009 Storage Virtualization School 26
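The following Python pseudocode is an illustrative contrast of the two architectures, with made-up names: in the shared (in-band) path the virtualization engine both resolves the virtual-to-physical mapping and carries the data, while in the split (out-of-band) path the host resolves the mapping via the control path once and then sends IO directly to the physical target.

```python
# Hypothetical sketch of shared-path vs. split-path SAN virtualization.

MAPPING = {"vLUN1": "array-B:LUN7"}        # virtual LUN -> physical location

def read_from(physical_lun, lba):          # stand-in for a real block read
    return f"block {lba} from {physical_lun}"

def shared_path_read(vlun, lba):
    phys = MAPPING[vlun]                   # appliance resolves the mapping...
    return read_from(phys, lba)            # ...and also carries the data itself

class SplitPathInitiator:
    def __init__(self):
        self.cache = {}                    # host-side map pushed by the control path

    def read(self, vlun, lba):
        if vlun not in self.cache:         # one metadata lookup (control path)
            self.cache[vlun] = MAPPING[vlun]
        return read_from(self.cache[vlun], lba)   # data path goes direct

print(shared_path_read("vLUN1", 42))
print(SplitPathInitiator().read("vLUN1", 42))
```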
  28. 28. Pros & Cons of Shared Path Pros Cons Simpler Scalability limitations • Implementation • Units/nodes clustered • Operations • Performance / unit or node • Management • Capacity / unit or node • Ease of use Performance hits • Additional latency August 2009 Storage Virtualization School 27
  29. 29. Pros & Cons of Split Path Pros Cons Scalability Complexity • Limited only by fabric • Install, ops, mgmt Performance SPAID • Limited only by fabric • Limited by intelligent switch BW Flexibility • Adds latency Similar to shared path appliance August 2009 Storage Virtualization School 28
  30. 30. Split Path Notes Split Path Combinations • Switch – SPAID a.k.a. split path architecture independent data streams Requires processing blades • Server Software Mgmt on appliance in fabric Virtualization agent/driver on server/virtual server August 2009 Storage Virtualization School 29
  31. 31. SPAID Notes Switch Centric • Metadata & LUN map on switch • Software & processing done on switch • Each session is directed by switch to destination Scalability constrained by switch processing & latency August 2009 Storage Virtualization School 30
  32. 32. Server Centric LVM Notes LVM • Virtualization SW, pathing SW, and/or LVM on server • Mapping & processing performed on each server • Each session is server controlled • Overall management is difficult Each server has its own management August 2009 Storage Virtualization School 31
  33. 33. Split Path Hybrids Split Path Combinations • Proprietary virtualization SW A.k.a. agent and/or pathing SW • Meta data appliance or intelligent appliance with SW Meta data mapped from appliance Management of software from appliance Replication, data protection, etc. controlled from appliance August 2009 Storage Virtualization School 32
  34. 34. Advanced Hybrid Fabric Centric Storage • Leverages std LVMs Symantec Storage Foundation OS LVMs, etc. • Leverages Filer (NAS heads) storage virtualizers • Provides proprietary virtualization SW for devices w/o software • Puts mgmt & advanced services on appliance Snapshots, replication, mirroring, etc. August 2009 Storage Virtualization School 33
  35. 35. SAN Virtualization Products Product Architecture Location Multi-vendor Repl. Notes BlueArc Titan/Mercury Shared Path Controller Yes Yes Clusters up to 8 in GNS (HDS OEMs) DataCore Supports FC storage, runs as virtual Shared Path Generic x86 appliance Yes Yes SANSymphony/Melody appliance on VMware Bycast Storage Grid Shared Path Grid of x86 appliances Yes Yes Geographically distributed cloud tech. Primarily utilized for online data EMC Invista Split Path x86 appliance + SPAID Yes No migration (CSCO& BRCD I-Switches) Up to 8 nodecluster, built-in file dedupe, EMC Unified NX-NS Shared Path Controller No No auto tiering, thin provisioning Optional post processing dedupe engine FalconStor NSS Shared Path Generic x86 appliance Yes Yes & runs as a VMware virtual appliance HDS USP V / VM Combination Combination Enterprise controller & Controller Yes Yes Tagmastore Shared/Split Path virtualization engine Combination OEM'ed Hitachi with additional HP HP XP 2xxxx Controller Yes Yes Shared/Split Path software & services x86 Purpose-built Supports most FC storage; large IBM SVC Shared Path Yes Yes Appliance caches; IBM hardware Incipient iNSP Split Path FC switch – SPAID Yes No No caching; supports Cisco FC blades x86 appliance + host No caching; split-path FC with low cost LSI StoreAge SVM Split Path Combo Yes Yes SW or intelligent switch N_Port switch, resold by HP as SVSP Active-active cluster, built-in file NetApp vFiler Shared Path Controller Yes Yes dedupe, also sold by IBM X86 purpose-built NAS/iSCSI that virtualizes internal SAS RELDATA 9240I Shared Path storage Yes Yes & external FC & SAS storage controller/appliance VMware virtual Works w/internal & DAS storage Seanodes Exanodes Shared Path Yes No appliance converts into iSCSI SAN Works only w/ISE (Intelligent Storage XIOtech Emprise 7000 Shared Path Controller No Yes Element) Requires XIOtech server virtualization XIOtech ISE Age Split Path Hybrid Generic x86 appliance Yes Yes agents or Symantec Storage Foundation
  36. 36. SAN Virtualization Issues Side effects • Spreading a storage pool across more RAID sets, &/or systems Increases performance, reduces storage management It also increases probability of data loss • Probability of 1 system going down is low • Probability any 1 of many will fail increases rapidly: 1 – (1 – P1)(1 – P2)(1 – P3) P = probability of a specific RAID group failing August 2009 Storage Virtualization School 35
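A short worked example of the failure-probability point above, using the exact union formula 1 - (1 - P1)(1 - P2)...(1 - Pn). The 2% per-RAID-group figure is an assumption chosen only to show how quickly pool-wide risk grows as more groups are added.

```python
# Worked example: the chance that at least one of N independent RAID groups
# fails is 1 - product(1 - Pi), which grows quickly as a storage pool is
# spread across more groups.

def p_any_failure(probabilities):
    survive = 1.0
    for p in probabilities:
        survive *= (1.0 - p)
    return 1.0 - survive

p = 0.02                                   # assumed 2% failure odds per RAID group
for n in (1, 3, 10):
    print(n, round(p_any_failure([p] * n), 4))
# 1 -> 0.02, 3 -> 0.0588, 10 -> 0.1829
```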
  37. 37. Ways to Mitigate Increased Data Failure Probabilities Each intelligent storage element is self-healing, reducing the Putting the intelligent storage probability of an actual disk elements into a RAIR reduces failure or RAID set failure to an that probability even further. extremely rare event 1. Self-healing storage elements 2. Redundant array of intelligent RAID (RAIR) • RAIDing the RAID 3. More scalable individual storage systems within pool August 2009 Storage Virtualization School 36
  38. 38. Virtual Network Attached Storage (NAS) NAS lends itself to virtualization • IP network connectivity and host processing possibilities Lots of file servers? Virtualize • Global namespace across all NAS & servers • Share excess capacity • Transparently migrate data (easier than redirecting users) • Reduce number of mount points • Tier files on large “shares” with variety of data • Create multiple virtual file servers August 2009 Storage Virtualization School 37
  39. 39. NAS Virtualization Products
      Product | Architecture | Location | Notes
      AutoVirt Move/Clone/Map | Split Path | Windows servers | Server 2003 R2, .NET. Primarily a mapping, data migration, & clone tool
      BlueArc (also sold as HDS HNAS) | Shared Path | Clustered NAS | Clustered integrated NAS with global namespace
      EMC Rainfinity | Shared Path | Appliance or host SW | DFS mgmt. Primarily data migration tool.
      Exanet ExaStore | Shared Path | Clustered NAS | Clustered integrated NAS with global namespace
      F5 Acopia | Shared Path | Switch - Appliance | Split-path architecture, non-DFS
      Microsoft DFS | Split Path | Host SW | Windows/SMB only; Server 2008, 2003 R2+ enhanced management
      NetApp vFiler | Shared Path | Clustered NAS | Active-Active Clustered NAS "head" with global namespace
      ONStor (LSI) GNS | Shared Path | Clustered NAS & DFS | Combines clustered NAS with DFS into a single global namespace
      August 2009 Storage Virtualization School 38
  40. 40. File Virtualization Issues (a.k.a. Global Name Space – GNS or Global File System - GFS) Vs. Appliances (x86 or switch) market success fits & starts • Requires a commitment to the appliance versus the NAS • For data protection (Snapshots, replication, etc.) • Another system to manage – often perceived as a point solution • Utilized primarily for non-disruptive data migration GNS migrating into NAS systems as a feature • BlueArc, NetApp, OnStor (LSI), etc. • NAS GNS only works w/that same vendors NAS August 2009 Storage Virtualization School 39
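An illustrative Python sketch of the global namespace idea behind file virtualization: clients address one logical tree, a mapping layer resolves each logical path to whichever filer share currently holds it, and data can be migrated by updating the map rather than remapping users. Paths, share names, and functions are hypothetical.

```python
# Hypothetical global namespace (GNS) sketch: one logical tree, many filers.

GNS_MAP = {
    "/corp/engineering": r"\\filer01\eng_share",
    "/corp/finance":     r"\\filer02\fin_share",
}

def resolve(logical_path):
    # Longest-prefix match so nested mappings win over their parents.
    for prefix, physical in sorted(GNS_MAP.items(), key=lambda kv: len(kv[0]), reverse=True):
        if logical_path.startswith(prefix):
            return physical + logical_path[len(prefix):]
    raise FileNotFoundError(logical_path)

def migrate(prefix, new_share):
    """Move a subtree to another filer; clients keep the same logical path."""
    GNS_MAP[prefix] = new_share

print(resolve("/corp/finance/q3.xlsx"))        # \\filer02\fin_share/q3.xlsx
migrate("/corp/finance", r"\\filer03\fin_share")
print(resolve("/corp/finance/q3.xlsx"))        # \\filer03\fin_share/q3.xlsx
```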
  41. 41. Embedded Virtualization has Transformed Storage Systems Common in storage array controllers • Arrays create large RAID sets Carve out virtual LUNs for use by servers • Controller clusters (and grids) redirect activity Based on workload &availability • Snapshots/mirrors & replication are common features August 2009 Storage Virtualization School 40
  42. 42. Newer Gen of Arrays – Usually Clustered Include virtualization derived features • Automated ILM • Thin provisioning • Data migration • De-duplication • Self configuring and/or self tuning storage August 2009 Storage Virtualization School 41
  43. 43. Virtual Storage Appliances (VSAs) 2 types • Distributed • Proxy August 2009 Storage Virtualization School 42
  44. 44. Distributed VSAs Converts VM DAS into SAN • All or some VM DAS In a virtual storage SAN pool • Data available even if node(s) fail • Runs as a VM guest Can also be a target for VMs • On other physical machines • Low cost vs. NAS or SAN August 2009 Storage Virtualization School 43
  45. 45. Proxy Virtual SANs Aggregates VM DAS & SAN into storage pool • All or some • Runs as a VM guest iSCSI target for internal & external VMs • Plus other physical machines • Low cost vs. NAS or SAN August 2009 Storage Virtualization School 44
  46. 46. VSA Products
      Product | Architecture | Server Virtualization Supported | Notes
      DataCore Software SANSymphony | Proxy | VMware ESX & Microsoft HyperV | Same as std SANSymphony that runs on an appliance, ltd to 2TB
      FalconStor NSS VSA | Proxy | VMware ESX | Same as IPStor that runs on an x86 appliance
      HP LeftHand Networks VSA | Distributed | VMware ESX | Clusters up to ~100 nodes. Designed for ROBO
      Seanodes Exanodes VMware edition | Distributed | VMware ESX | Protects data up to 16 nodes or volume failures, no replication.
      StorMagic SvSAN | Proxy | VMware ESX (Microsoft HyperV coming) | 2TB license is free
      August 2009 Storage Virtualization School 45
  47. 47. VSA Issues Distributed VSAs • Requires a bit more memory & CPU cycles / server Proxy VSAs • Requires quite a bit more memory & CPU cycles On the target proxy Virtual SAN servers • Should have limited add’l VM guests License limits • Some are limited to 2TB Where to use • Primarily small environments and/or ROBOs Simplicity • VSAs are just about as easy as NAS • Utilize standard Ethernet technologies August 2009 Storage Virtualization School 46
  48. 48. Virtualized IO TCP/IP Ethernet Virtualized Pipe FC or iSCSI SAN Takes a very high BW pipe (10G or more) • Makes it appear as multiple unit & protocol types FC SAN, TCP/IP Network, iSCSI SAN • Breaks it out at the switch to different networks & targets Problem it solves – Fabric sprawl 3 types • Infiniband (IBA), Converged Enhanced Ethernet (CEE), MRIOV August 2009 Storage Virtualization School 47
  49. 49. Virtualized IO – 10 to 40G IBA Standard IO IBA Virtualized IO August 2009 Storage Virtualization School 48
  50. 50. Infiniband IO Virtualization Definitions HCA – Host Channel Adapter • Server adapter card TCA – Target Channel Adapter • Storage or Gateway adapter card IBA Shared IO Gateway – Same as IO Virtualization • IBA to IP, FC, iSCSI gateway RDMA – Remote Direct Memory Access • Lowest latency memory to memory transfers iSCSI – IP SCSI • SCSI mapped to TCP/IP HPCC – High Performance Compute Clusters • Large nodal clusters IBA Director – Large port count five 9s switch • 288 to 864 port switches August 2009 Storage Virtualization School 49
  51. 51. Virtualized IO – 10G CEE Standard IO CEE Virtualized IO August 2009 Storage Virtualization School 50
  52. 52. Ethernet IO Virtualization Definitions FCoE – Fibre Channel over Ethernet • FC frames encapsulated in Ethernet packets – lightweight frame maps iSCSI – IP SCSI • SCSI mapped to TCP/IP iWARP – RDMA on Ethernet • Required for HPC clusters 10GbE CNA – Converged Net Adapters • Concurrent FCoE, iSCSI, iWARP, & TCP/IP on 10GbE NIC 10G TOE – 10G TCP offload engine • Provides TCP offload for 10G adapter Split stack & full stack offloads CEE – Converged Enhanced Ethernet • IEEE standards work for lossless low latency Ethernet DCE – Data Center Ethernet • Cisco's brand name for CEE August 2009 Storage Virtualization School 51
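A rough structural sketch of the FCoE encapsulation defined above: a complete Fibre Channel frame rides as the payload of an Ethernet frame (the FCoE EtherType), which is what lets FC traffic share a lossless 10GbE link. The field layout is simplified for illustration and is not a wire-accurate codec.

```python
# Simplified FCoE encapsulation sketch: an FC frame carried inside Ethernet.

from dataclasses import dataclass

@dataclass
class FCFrame:
    source_id: str       # FC S_ID
    dest_id: str         # FC D_ID
    payload: bytes       # SCSI command/data carried by FC

@dataclass
class EthernetFrame:
    src_mac: str
    dst_mac: str
    ethertype: int
    payload: FCFrame

def encapsulate(fc_frame, src_mac, dst_mac):
    # FCoE: the FC frame rides unmodified inside the Ethernet payload.
    return EthernetFrame(src_mac, dst_mac, ethertype=0x8906, payload=fc_frame)
```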
  53. 53. How IBA Compares to 10GbE
      Metric | InfiniBand | 10GbE | Notes
      Max pt Bandwidth | 120Gb/s | 10Gb/s | Faster is better
      E2E latency | 1 to 1.2us | 10 to 50us | Lower is better
      Switch Latency | 50 to 150ns | 500ns to 10us | Lower is better
      RDMA | Built in | Voltaire only | Important for clustering
      Multipath | Yes | Voltaire only |
      Lossless Fabric | Yes | Voltaire only | Important for Storage
      Power/Port | 5W | 15-135W | Lower is better
      Largest Enterprise Switch | 288 x 20Gbps & 864 x 40Gbps | 288 x 10Gbps | More is better
      Price/Gbps | $30 to 50 | $150-700 | Lower is better
      August 2009 Storage Virtualization School 52
  54. 54. Multi-Root IO Virtualization Moving IO outside the box August 2009 Storage Virtualization School 53
  55. 55. Simpler MRIOV – Value Prop • Fewer storage & network pts • Fewer storage & network switches Shared IO – Higher Utilization • 75% IO cost reduction MR OV MRIOV • 60% IO Power reduction • 35% Rackspace reduction Scalable IO Bandwidth • On demand 1-40Gbps • No add’l cost • Reduced IO adapters & cables Enhanced Functionality • Shared memory • IPC & PCIe speeds up to 40Gbps Reduced OpEx • Simplified mgmt • Server, OS, app, network, & switch transparent • HA • Changes w/o physical touch Reduced CapEx • Smaller, denser servers • Fewer components • Fewer failures • Fewer opportunities for human error Storage Virtualization School August 54
  56. 56. How MRIOV Compares Traditional Inefficiencies MRIOV Advantages Significant unused IO capacity Flexible, IO capacity on demand Inflexible, rigid server adapters Highest IO utilization, lowest TCO Wasted space, cooling, power Standardized, open technologies August 2009 Storage Virtualization School 55
  57. 57. Comparison w/Other IOV Solutions Today’s Solution MRIOV Solution InfiniBand FCoE Solutions (No IO Virtualization) (PCI Express) Solutions IO components IO components IO IO = 270 = 58 components = components:1 160 60 Config • 2 Racks IO Cost = IO Cost = IO Cost = IO Cost = • 32 Servers $196K $37K $156K $180K • Ethernet & FC • DAS 2 Racks 1 Rack 2 Racks 2 Racks IO power = IO power = IO power = IO power = 3000 W 700 W 2000 W 2200 W Utilization Very Low ~15% Very High ~80%+ Low OK Reliability Neutral High Best Neutral IO Perf 10Gb 80Gb 20Gb 10Gb TCO High Low High High Mgmt Poor Best Best OK IO Cost High Low High High August 2009 Storage Virtualization School 56
  58. 58. IO Virtualization Products Vendor Technology Product types Notes Aprius MRIOV Rack switch, blade switch, silicon PCIe switching w/server software Strong FC focus & install base, acquisition of Brocade CEE CNA & CEE Top of rack switch Foundry provides equivalent Ethernet expertise HCA, IBA directors , switches, Ethernet leader w/strong products in FC, & IBA as Cisco IBA & CEE gateways, CEE top of rack switch well. Invented CEE. 1 of the 2 FC HBA leaders making a strong CNA play Emulex CEE CNA (same driver interface). HCA, TCA, IBA directors , switches, Dominant IBA silicon leader attempting to leverage Mellanox IBA & CEE gateways, CNA position into CEE w/CNAs. FC HBA leader w/strong positions in IBA silicon,HCA, HCA, TCA, IBA directors , switches, QLogic IBA & CEE TCA, directors. switches, & GWs. Invented "Shared gateways, CNA IO". Virtensys MRIOV Rack switch, blade switch, silicon PCIe switching w/o server software HCA, IBA directors , switches, IBA leader in switches, directors, gateways, Voltaire IBA gateways software, & HCAs. Positions as a pure "Virtualized IO". Doesn't Xsigo IBA HCA, gateways mention IBA technology. August 2009 Storage Virtualization School 57
  59. 59. IO Virtualization Issues IBA • Is primarily utilized for HPCC • Not a large install base in the Enterprise today • Few storage systems with native IBA interfaces: LSI (SUN, SGI, IBM) & DDN • However, it is proven & it works CEE • Technology is early & somewhat immature • A bit pricey (aimed at early adopters) • Requires new NICs & switches to be effective MRIOV • In rack and Blade System only • Primarily OEM tech (i.e., must be supplied by server vendor) August 2009 Storage Virtualization School 58
  60. 60. Yes, Even SSDs are Virtualized Virtualizes single or multi-cell flash • Algorithms manage writes Load balances writes across the cell or cells Makes sure SSDs have similar MTBFs as HDDs Reduces probability of flash cell write failure August 2009 Storage Virtualization School 59
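A toy Python model of the flash virtualization described above: the controller keeps a logical-to-physical page map and steers each write to the least-worn cell (wear leveling), so no single cell reaches its erase limit early. Cell counts and the write pattern are illustrative assumptions, not any vendor's algorithm.

```python
# Toy wear-leveling sketch: remap logical pages to the least-worn physical cell.

class WearLevelingController:
    def __init__(self, num_cells):
        self.erase_counts = [0] * num_cells    # wear per physical cell
        self.l2p = {}                          # logical page -> physical cell

    def write(self, logical_page, data):
        cell = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
        self.erase_counts[cell] += 1           # program/erase consumes cell life
        self.l2p[logical_page] = cell          # remap happens invisibly to the host
        return cell

ssd = WearLevelingController(num_cells=4)
for i in range(8):
    ssd.write(logical_page=0, data=b"x")       # same logical page, rotating cells
print(ssd.erase_counts)                        # [2, 2, 2, 2] -> evenly worn
```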
  61. 61. SSD Tier-0 Storage Types Enterprise storage • Storage system optimized Looks like a HDD & fits in rack Simple technology Easy to implement – low risk Performance constrained by storage system back end Memory appliance or server adapter • Application acceleration focused Connects via PCIe (highest perf) (IBA, FC, 10GbE soon) August 2009 Storage Virtualization School 60
  62. 62. Where Storage Virtualization Occurs Everywhere • Operating systems • Applications • Volume managers • Hypervisors • Storage arrays – RAID • NAS • Appliances • Switches • Even SSDs August 2009 Storage Virtualization School 61
  63. 63. Audience Response Questions?
  64. 64. Break sponsored by
  65. 65. Part 2 Part 1 Part 2 Part 3 Virtual storage in a virtual server world
  66. 66. Top 10 Signs You’re a Storage Admin 1. ~90%of peers & boss, don’t have a clue about what you really do 2. Being sick is defined as can't walk or you're in the hospital 3. Your relatives & family describe your job as "computer geek" 4. All real work gets started after 5pm, weekends, or holidays 5. Vacation is something you always do…next year 6. You sit in a cubicle smaller than your bedroom closet 7. Your resume is on a USB drive around your neck 8. You’re so risk averse, you wear a belt, suspenders, & coveralls 9. It's dark when you go to or leave work regardless of the time of yr. 10. You've sat at same desk for 4 yrs & worked for 3 different companies August 2009 Storage Virtualization School 65
  67. 67. Part 2 Agenda Level setting • For virtual servers Virtual server issues • Real world problems Best practices • To solve August 2009 Storage Virtualization School 66
  68. 68. Benefits of the Hypervisor Revolution Increased app availability Reduced server hardware w/consolidation Reduced infrastructure • Storage network • IP network • Power • Cooling • Battery backup Simplified DR August 2009 Storage Virtualization School 67
  69. 69. Virtualized Servers Adv. features require networked storage • SAN or NAS Virtualized server advanced functionality • VMware DRS, VMotion, Storage VMotion, VDI, SRM, SW-FT, VDR, storage API, Thin Provisioning • Microsoft Hyper-V Live Migration • Virtual Iron Live (Migrate, Capacity, Maint, Recovery, Convert, Snap) • Citrix XenServer XenMotion, Global Resource Pooling August 2009 Storage Virtualization School 68
  70. 70. Source Gartner: 2008 Enterprise Virtual Server Market Share Virtual Iron, 3% SUN, 1% Microsoft, 3% Oracle, 1% Citrix, 5% VMware Citrix Microsoft VMware, Virtual Iron 87% SUN Oracle August 2009 Storage Virtualization School 69
  71. 71. Distributed Resource Optimization Distributed Resource Scheduler • Dynamic resource pool balancing allocating on pre-defined rules Value • Aligns IT resources w / bus priorities • Operationally simple • Increases sys admin productivity • Add hardware dynamically • Avoids over-provisioning to peak load • Automates hardware maintenance Dynamic and intelligent allocation of hardware resources to ensure optimal alignment between business and IT August 2009 Storage Virtualization School 70
  72. 72. HOT! – VMotion Online – Increasing Data Availability • No scheduled downtime • Continuous service availability • Complete transaction integrity • Storage Network Support iSCSI SAN, FC w / NPIV, & NAS (Specifically NFS) August 2009 Storage Virtualization School 71
  73. 73. Virtual Desktop Infrastructure Increased • Desktop availability, flexibility (not tied to desktop hardware), & security Decreased • Management & costs August 2009 Storage Virtualization School 72
  74. 74. Site Recovery Manager Faster more automated DR Integrated with storage Utilize lower cost DR storage & drives FCP iSCSI FC Storage SATA Storage Protected Primary Site Recovery / DR Site August 2009 Storage Virtualization School 73
  75. 75. ESG Poll: Does Server Virtualization Improve Storage Utilization? Since being implemented, what impact has server virtualization had on your organization's overall volume of storage capacity? (Bar chart: response options ranged from net decreases to net increases of overall storage capacity in 1% - 10%, 11% - 20%, and 20%+ bands, plus no change; the reported shares were 39%, 21%, 18%, 15%, 4%, 2%, and 1%.) August 2009 Storage Virtualization School 74
  76. 76. ESG Server & Storage Virtualization Poll Has your org deployed a storage virtualization solution in conjunction with its virtual server environment? (Pie chart responses: Yes; No, but plan to implement within next 12 mos; No, but plan to implement within next 24 mos; No, and no plans to implement; Don't Know 7%; the remaining shares were 36%, 24%, 18%, and 15%.) August 2009 Storage Virtualization School 75
  77. 77. Why Use Virtual Storage For Virtual Servers? Reasons Most Often Cited: Improved • Mobility of virtual machines Load balancing between physical servers • DR & BC • Availability • Physical server upgradability w/o app disruptions • Operational recovery of virtual machine images August 2009 Storage Virtualization School 76
  78. 78. Server Virtualization Market Trends / ESG Server virtualization driving storage system re-evaluation • 66% of enterprises (>1000 employees) and 81% of small to mid sized businesses (<1000 employees) expect to purchase a new storage system for their virtualized servers in the next 24 months.* IP SAN emerged as preferred server virtualization storage • 52% of organizations deploying virtualization plan to use iSCSI (NAS 36%, FC 27%).* • Clustered storage architecture advantages with server virtualization Efficient storage utilization Optimized performance Simpler management and flexibility More Cost effective HA/DR * ESG, “The Impact of Server Virtualization on Storage” August 2009 Storage Virtualization School 77
  79. 79. What About Server Virtualization Based DR? DR is a prime beneficiary of server virtualization • Fewer remote machines idling • No need for identical equipment • Quicker recovery (RTO) through preparation &automation Who’s doing it? • 26% are replicating server images, an additional 39% plan to (ESG 2008) • Half have never used replication before (ESG 2008) Based on DSC Polling • 67% say app availability is why they implement server virtualization Justification is based on server consolidation & app up time August 2009 Storage Virtualization School 78
  80. 80. Server Virtualization = SAN and/or NAS SAN NAS Server virtualization transformed the data center • And storage requirements VMware #1 driver of SAN adoption today! 60% of virtual server storage is on SAN or NAS (ESG 2008) 86% have implemented some server virtualization (ESG 2008) Enabled & demanded centralization • And sharing of storage on arrays like never before! August 2009 Storage Virtualization School 79
  81. 81. Types of Networked Storage NAS – Network Attached Storage • A.K.A. File based storage NFS – Network File System CIFS – Common Internet File System • To a lesser extent AFP – Apple File Protocol SAN – Storage Area Network • A.K.A. Block based storage Fibre Channel (FC) iSCSI Infiniband (IBA) August 2009 Storage Virtualization School 80
  82. 82. NAS & Server Virtualization NAS works very well w / Hypervisors & adv features • NFS – VMware, XenServer, KVM, Virtual Iron • CIFS – Microsoft Hyper-V • Common file system visible for all virtual guests Incredibly simple • Turn it on, mount it, & you’re done App performance is generally modest • Typically less than SAN storage There are exceptions where it is close to equivalent • BlueArc & to a much lesser extent NetApp & Exanet August 2009 Storage Virtualization School 81
  83. 83. Why Hypervisor Vendors Don't Usually Recommend NAS Performance, performance, performance • The exception being once again BlueArc And to a lesser extent NetApp & Exanet (Chart: SAN vs. typical NAS performance) August 2009 Storage Virtualization School 82
  84. 84. Virtual Servers & NAS Many virtual apps are fine w / NAS performance Virtual guests can boot from NFS (VMware VMDKs) • NFS built-in to ESX hypervisor August 2009 Storage Virtualization School 83
  85. 85. Virtual Server Issues with NAS Most NAS Systems don’t scale well • Capacity, file system size, max files, & especially performance • Scaling typically means more systems More systems increases complexity exponentially • Eliminates NAS simplicity advantage Exceptions include BlueArc, Exanet, & NetApp (GNS or GFS) • NAS currently does not work w/storage vMotion & SRM Good news! – NFS will soon work with vMotion & SRM (EOY) August 2009 Storage Virtualization School 84
  86. 86. SANs & Server Virtualization SANs work very well w / Hypervisors & adv features Mixed bag on Complexity • FC is very complex requiring special knowledge & skills • IBA is also complex & requires special knowledge & skills • iSCSI uses Ethernet like NAS & is almost as easy App performance is very fast • iSCSI is fast • FC is a bit faster • IBA is fastest (albeit few choices) Regardless of SAN Type • SANs do not overcome storage system scalability limits August 2009 Storage Virtualization School 85
  87. 87. Hypervisor Vendors Recommendations Recommended in this order • iSCSI, FC, IBA iSCSI Rationale • iSCSI is almost as fast as FC Uses std Ethernet NICs, switches, cables & TCP/IP • iSCSI is almost as easy as NAS • iSCSI is far less expensive than FC or IBA Even less expensive than most brand name NAS August 2009 Storage Virtualization School 86
  88. 88. iSCSI SAN & Server Virtualization Should have dedicated fabric • Not shared with other IP traffic Performance mgmt • VLANs helps • QoS prioritization helps • 1 & 10G proper utilization helps • 1G does not require any hardware offload • 10G may depending on performance expectations 700MB/s no offload 1GB/s with offload 10G excellent for aggregation of VMs August 2009 Storage Virtualization School 87
89. 89. FC SAN & Server Virtualization FC SANs require NPIV • N_Port ID Virtualization Otherwise there is an HBA per guest, or all guests share the same WWN All physical servers must be in the same FC zone • Enables guests to still see their storage when they move • Critical for live migrations & business continuity FCoE will have similar rules to FC • Difference is that it runs on 10GbE • Not a routed protocol, still layer 2 switching • Requires “Smart” (a.k.a. expensive) 10G FCoE switch August 2009 Storage Virtualization School 88
  90. 90. FC SAN & Server Virtualization FC SANs are manually intensive • Implementation, ops, change mgmt, mgmt Software to ease burden • Akorri • NetApp-Onaro • SAN Pulse • TekTools • Virtual Instruments FC SANs generally require dual fabrics • 2x the cost • Necessity for change management & HA FC 8G is 4G & 2G backwards compatible • Same interfaces as 10GbE & IBA August 2009 Storage Virtualization School 89
91. 91. Server Virtualization has Storage Ramifications Dramatically increased I/O (storage) demands Patchwork of support, few standards • “VMware mode” on storage arrays • Virtual HBA/N_Port ID Virtualization (NPIV) • Everyone is qualifying everyone and jockeying for position Can be “detrimental” to storage utilization Problematic to traditional BU, replication, reporting August 2009 Storage Virtualization School 90
  92. 92. Virtualized Server Storage Issues Boils down to 4 things to manage • Performance • Complexity • Troubleshooting • & of course “Cost” August 2009 Storage Virtualization School 91
93. 93. VMware Storage Option-Shared Block Storage Shared storage - common/workstation approach • Stores VMDK image in VMFS datastores • DAS or FC/iSCSI SAN • Hyper-V VHD is similar Why? • Traditional, familiar, common (~90%) • Prime features (Storage VMotion, etc) • Multipathing, load balancing, failover* But… • Overhead of two storage stacks (5-8%) • Harder to leverage storage features • Often shares storage LUN and queue • Difficult storage management [Diagram: VMDKs on VMFS on DAS or SAN] August 2009 Storage Virtualization School 92
94. 94. VMware Storage Option-Shared NFS Storage Shared storage on NFS – skip VMFS & use NAS • NFS is the datastore Simple – no SAN • Multiple queues • Flexible (on-the-fly changes) • Simple snap and replicate* • Enables full VMotion • Use fixed LACP for trunking But… • Doesn’t work w/SRM & storage VMotion • CPU load questions • Default limited to 8 NFS datastores • NAS File limitations • Multi-VMDK snap consistency [Diagram: VMDKs on NFS NAS] August 2009 Storage Virtualization School 93
95. 95. VMware Storage Options-Raw Device Mapping (RDM) Guest VMs access storage directly over iSCSI or FC • VMs can even boot from raw devices • Hyper-V pass-through LUN is similar Great • Per-server queues for performance • Easier measurement • The only method for clustering But… • Tricky VMotion and DRS • No storage VMotion • More management overhead • Limited to 256 LUNs per data center [Diagram: guest I/O via RDM mapping file to SAN] August 2009 Storage Virtualization School 94
96. 96. Physical vs. Virtual RDM Virtual Compatibility Mode • Appears same as VMDK on VMFS • Retains file locking for clustering • Allows VM snapshots, clones, VMotion • Retains same characteristics if storage is moved Physical Compatibility Mode • Appears as LUN on a “hard” host • Allows V-to-P clustering, VMware locking • No VM snapshots, VCB, VMotion • All characteristics & SCSI commands (except “Report LUN”) are passed through – required for some SAN management software August 2009 Storage Virtualization School 95
97. 97. Which VMware Storage Method Performs Best? [Charts: Mixed Random I/O; CPU Cost Per I/O] Source: “Performance Characterization of VMFS and RDM Using a SAN”, VMware Inc., 2008 August 2009 Storage Virtualization School 96
98. 98. Server Virtualization Storage Protocol Breakout per IDC: 2007 • FC SAN 47% • DAS 24% • iSCSI SAN 22% • NAS 7% August 2009 Storage Virtualization School 97
99. 99. Which Storage Protocol Performs Best? [Charts: Throughput by I/O Size; CPU Cost Per I/O] Source: “Comparison of Storage Protocol Performance”, VMware Inc., 2008 August 2009 Storage Virtualization School 98
  100. 100. Perplexing Server Virtualization Storage Performance Problems App performance drop-off • When moving from physical to virtual servers • Often causing fruitless guest migrations • Lots of admin frustration looking for root cause August 2009 Storage Virtualization School 99
  101. 101. The Issue is often… Too Much Oversubscription Generally, oversubscription is a very good thing August 2009 Storage Virtualization School 100
102. 102. Where Oversubscription Occurs Within the: • Hypervisor • LUN • Disk Drives • SAN fabric • Target Storage ports Too much creates a positive feedback loop • Problems feed on themselves August 2009 Storage Virtualization School 101
103. 103. Hypervisor Oversubscription Hypervisors are designed for oversubscription • But too much of a good thing… Means IO & resource bottlenecks • Figuring out the problem root cause is difficult at best [Diagram: multiple App/OS guests on a hypervisor over x86 architecture] August 2009 Storage Virtualization School 102
104. 104. LUN Oversubscription Combines disks into storage pools • Each storage pool is carved up by the Hypervisor Into virtual storage pools Then assigned to the individual VM guests Each VM guest contends for the same storage pool • Storage systems can’t distinguish between guests Contention decreases traditional storage performance [Diagram: traditional SAN storage vs. virtual storage] August 2009 Storage Virtualization School 103
105. 105. HDD Oversubscription – Especially SATA Slower SATA drives don’t handle contention well • Nominal buffers or queues = higher response times SATA (7,200 RPM): queue depth of 0 to 32, usually 0 FC/SAS (15,000, 10,000, & 7,200 RPM): queue depth of 256 to 512 August 2009 Storage Virtualization School 104
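To see why the queue-depth gap matters under contention, a small illustrative calculation follows; the queue depths are the slide's figures, while the guest count and the even per-guest split are simplifying assumptions.

```python
# Illustrative: outstanding I/Os available per guest when many VM guests
# share one spindle. Queue depths are the slide's figures; the even split
# across guests is a simplifying assumption.

guests = 20
for drive, queue_depth in [("SATA (7,200 RPM)", 32), ("FC/SAS (15K RPM)", 256)]:
    per_guest = queue_depth / guests
    print(f"{drive}: ~{per_guest:.1f} queued I/Os per guest")
```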
106. 106. SAN Fabric Oversubscription SAN storage is typically oversubscribed • 8:1 (server initiators – target storage ports) or more Network blocking can dramatically reduce performance Full storage buffer queues also reduce performance August 2009 Storage Virtualization School 105
107. 107. Failure to Adjust for Virtual Server Oversubscription Can be Disastrous SAN or storage target ports block IO • Causing SCSI timeouts SCSI drivers are notoriously impatient Apps crash Physical oversubscription 8:1 Virtual oversubscription 160:1 • Based on avg 20 guests per physical server August 2009 Storage Virtualization School 106
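A back-of-the-envelope check of the ratios above; the 8:1 fan-in and ~20 guests per physical server are the slide's figures, and nothing else is assumed.

```python
# Rough sketch: how server virtualization multiplies SAN oversubscription.
# The 8:1 physical fan-in and ~20 guests per host are the slide's figures.

physical_fan_in = 8        # server initiators per target storage port
guests_per_host = 20       # average VM guests per physical server

virtual_fan_in = physical_fan_in * guests_per_host
print(f"Effective virtual oversubscription: {virtual_fan_in}:1")   # 160:1
```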
108. 108. Too Much Oversubscription App Pain Points Operationally • App SCSI timeouts – lots of unscheduled downtime • Too much admin time chasing tail • Scheduled & unscheduled downtime • Slow app performance – reducing productivity • Difficult to diagnose causality Economically • Lost revenue & productivity • Increased downtime • Increased user frustration • Increased lost productivity August 2009 Storage Virtualization School 107
  109. 109. Too Much Oversubscription Work-Arounds Assign 1:1 physical LUNs to virtual LUNs • Easiest with iSCSI Storage Run Hypervisor RDM storage • Manually assign storage LUNs to each guest Limit or Eliminate use of SATA Reduce SAN oversubscription ratios • Upgrade SAN 8G FC w/NPIV 10G iSCSI Use NAS • Eliminates Storage oversubscription August 2009 Storage Virtualization School 108
110. 110. Issues w/Work-Arounds RDM means ltd advanced features • Discouraged by Hypervisor vendors Reducing or eliminating SATA drives • Increases costs Although fat SAS drives are a cost-effective alternative NAS may cause some app performance issues • Oversubscription gains potentially wiped out by performance The key is to look at ecosystem holistically • Limit overall oversubscription on the whole August 2009 Storage Virtualization School 109
  111. 111. Better Alternative Can be Virtualized Storage Virtualized SAN &/or NAS (GNS or GFS) Storage Can mitigate or eliminate oversubscription issues by • Spreading volumes and files Across multiple systems, spindles, RAID groups • Increasing IO & throughput By aggregating & virtualizing more HDDs, systems, ports, BW August 2009 Storage Virtualization School 110
  112. 112. Of Course, There is the > Probability of Data Failure Issue (Previously Discussed) Probability of 1 system going down is low • Probability any 1 of many will fail increases rapidly (P1+ P2+ P3) – (P1* P2* P3) • P = probability of a specific RAID group failing Ways to mitigate increased data failure probabilities • Self-healing storage elements • RAIR • More scalable individual storage systems (SAN or NAS) August 2009 Storage Virtualization School 111
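For concreteness, a small sketch of the "any one of many fails" probability described above. The slide's (P1 + P2 + P3) – (P1 × P2 × P3) expression is a shorthand; the exact value for independent RAID groups is 1 – Π(1 – Pi). The per-group probabilities below are made-up illustrative numbers.

```python
from math import prod

# Hypothetical annual failure probability of each RAID group (illustrative values).
p = [0.02, 0.02, 0.02]

# Exact probability that at least one of the independent groups fails.
p_any = 1 - prod(1 - pi for pi in p)

# The slide's shorthand expression.
p_approx = sum(p) - prod(p)

print(f"exact:     {p_any:.4f}")     # ~0.0588 - rises quickly as groups are added
print(f"shorthand: {p_approx:.4f}")  # ~0.0600
```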
  113. 113. Server Virtualization Lack of End-to-end Visibility Pain Can’t pierce firewall of server virtualization layer • Networked storage mgmt only see the storage side • Virtual server mgmt only see the guest side • And they do not correlate automatically Difficult to pierce firewall of virtualized storage too August 2009 Storage Virtualization School 112
  114. 114. No Perfect Solutions But some pretty good all around ones • Akorri – Balance Point • EMC – IT Management Solutions SMARTS® ADM & Family, IT Compliance Mgr, Control Center® • TekTools – Profiler for VM • Virtual Instruments – Virtual Wisdom Some focused ones • VMware – Veeam Monitor SAN Optimization – Data Migration Tools • NetApp – Onaro • SAN Pulse – SANlogics August 2009 Storage Virtualization School 113
  115. 115. Server Virtualization DP Pain & Issues Local Wide Area Granularity August 2009 Storage Virtualization School 114
116. 116. Level Setting Definitions HA protects against local hardware failures DR protects against site failures Business Continuity means • No business interruptions for data failures or disasters Data protection software protects against • Software failures • Human error • Malware Granularity determines • Amount of data that can be lost - RPO • Amount of time it takes to recover - RTO August 2009 Storage Virtualization School 115
  117. 117. HA Requires Redundant Systems 100% redundancy can be a tad expensive • Upfront & ongoing • For just protecting against hardware faults August 2009 Storage Virtualization School 116
118. 118. Virtual Server DR Tends to Work Better w/ SAN or Virtual SAN Storage If hardware hosting VMs fails • VMs can easily be restarted Boot from SAN • On a different physical server August 2009 Storage Virtualization School 117
  119. 119. DR with Shared Storage on SAN Virtual guest images live on the SAN Storage Each VM guest is then pointed @ • Appropriate storage image & restarted Essentially RTO is zero • Or instantaneously All guests & data are protected • Available through the SAN August 2009 Storage Virtualization School 118
  120. 120. High Cost of Networked Storage HA Requires duplicated Network Storage for HA • 2x Network Storage hardware costs • 2x Network Storage software costs • More than 2x operational costs More HA systems means much more costs August 2009 Storage Virtualization School 119
121. 121. Virtualized Storage Can Mitigate Costs Virtualized storage (SAN or NAS) • Fewer system images to manage • Fewer software licenses • Even Capacity based licenses are less costly Higher scalability means lower costs August 2009 Storage Virtualization School 120
122. 122. Wide Area DR [Diagram: primary site – FC over WAN gateway – WAN – FC over WAN gateway – recovery site, or native TCP/IP] Requires VM Storage to mirror over WAN to remote recovery site August 2009 Storage Virtualization School 121
123. 123. Storage Virtualization (SV) Can Mitigate Costs Not all SV can WAN replicate, ones that do mean • Centralized control, fewer points of contact for WAN replication • Less admin, less bandwidth contention • Better performance • Lower software license costs [Diagram: primary site replicating to recovery site] August 2009 Storage Virtualization School 122
  124. 124. Server Virtualization WAN DR Issues FC over IP Gateways are expensive • Cisco & Brocade (QLogic less so) • Effective Data Throughput Greatly reduced by packet loss & distance Limited packet loss mitigation & WAN opt. has little impact NAS NFS usually has performance issues • Native TCP/IP replication effective data throughput reduced By packet loss & latency • Duplicate storage, infrastructure, licenses, maint, etc. August 2009 Storage Virtualization School 123
125. 125. Reducing Wide Area Issues Replication using native TCP/IP or iSCSI • Allows TCP optimizers to be utilized • Vendors who offer this type of storage replication include BlueArc, Compellent, DELL/EQL, EMC, Exanet, Fujitsu, HDS (HNAS), LHN, NetApp, RELDATA • Some Network Storage have TCP optimizers built-in (e.g. Fujitsu Eternus 8000) [Diagram: TCP optimizers at each end of the TCP/IP WAN link] August 2009 Storage Virtualization School 124
  126. 126. Wide Area DR Technology Hypervisor, Storage, OS, or Application based • Mirroring – Sync and/or Async • Snapshot Replication – Async • CDP August 2009 Storage Virtualization School 125
  127. 127. Typical HA-DR VM Storage & Issues Mirroring Hypervisor snapshot replication CDP typically does not work well over the WAN Traditional Backup August 2009 Storage Virtualization School 126
128. 128. Mirroring – Sync, Async, Semi-Sync Sync replicates on write • Requires remote acknowledgement before local write is released • RPO and RTO are fine grain Async releases local writes before remote acknowledged • RPO and RTO are medium to fine grain Semi-sync replicates snaps or incremental snaps async • RPO and RTO are medium to fine grain [Diagram: primary site – FC over WAN gateways or native TCP/IP – recovery site, local or remote] August 2009 Storage Virtualization School 127
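A minimal sketch (not any vendor's implementation) contrasting when the local write is acknowledged under sync versus async mirroring, as described above; the WAN delay and transaction names are illustrative.

```python
# Toy illustration of the acknowledgement ordering difference between sync
# and async mirroring described above. Not a real replication engine.

import queue
import threading
import time

remote_log = []                 # stands in for the remote site's copy
async_queue = queue.Queue()     # writes awaiting asynchronous shipment

def remote_write(data):
    time.sleep(0.05)            # pretend WAN round trip
    remote_log.append(data)

def sync_write(data):
    remote_write(data)          # wait for remote acknowledgement...
    return "ack"                # ...before the local write is released

def async_write(data):
    async_queue.put(data)       # local write released immediately
    return "ack"

def async_shipper():
    while True:
        remote_write(async_queue.get())   # remote copy lags local (wider RPO)
        async_queue.task_done()

threading.Thread(target=async_shipper, daemon=True).start()

print(sync_write("txn-1"))    # returns only after the remote copy exists
print(async_write("txn-2"))   # returns before the remote copy exists
async_queue.join()
```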
  129. 129. Mirroring Shortcomings Sync • Cannot prevent the rolling disaster – disasters are synchronized • Expensive & performance limited to ~ 100 circuit miles Async • Remote data vaults can be inconsistent and non-recoverable Semi-sync • Snapshots are typically not crash consistent August 2009 Storage Virtualization School 128
130. 130. Snapshot – Simpler HA-DR Alternative? No agents on servers or applications • Simple to use • Medium to fine granularity RPO & RTO • Snapshots sent to other site, potentially bi-directional • Snap restores = mount the data, point & you’re done Remote Snapshot can be promoted to a production volume • Fast – virtually instantaneous with no BU windows • Centrally administer w/storage • In limited cases – deduped MAN/WAN snaps August 2009 Storage Virtualization School 129
131. 131. Storage Virtualization Can Again Help SV that is integrated with BU software provides • Centralized control, fewer points of contact for WAN replication • Less admin, less bandwidth contention • Better performance • Lower software license costs [Diagram: BU agent replicating over MAN/WAN] August 2009 Storage Virtualization School 130
132. 132. Snapshot Imperfections Snaps typically not structured data crash consistent • Requires either VSS integration for Windows • Or “agents” for the structured apps requiring crash consistency A hybrid approach – requires integration with BU SW console • Agents used to quiesce DBMS, providing write consistency • BU software tells storage to take the snapshot There are severe limits on number of snaps / system • And snapshots will typically reduce capacity High cost w/Capacity based licensing • Dual licenses for sending AND receiving systems • Storage system tends to be higher cost August 2009 Storage Virtualization School 131
  133. 133. Issues w/Multi-Vendor App Aware Approach Multiple products to manage…separately BU SW not aware of replicated snapshots • Can’t see them or recover from them One exception is CommVault Simpana 8 Requires Agents for crash consistent apps • Or an agent for VSS on Microsoft August 2009 Storage Virtualization School 132
134. 134. 1st: The Insidious Problem w/Agents Agents are software w/admin privileges • A.k.a. plug-ins, lite agents, client software Role is to collect data & send it to a backup or media server • Complete files and ongoing incremental changes Separate agents typical / OS, database, ERP, & email app • As well as for BU, CDP, & Archiving / app • Can be more than one agent / server OS agent, database agent, email agent, etc. • When agents deduplicate and/or encrypt – at the source They are even more resource intensive August 2009 Storage Virtualization School 133
  135. 135. Why Admins Despise Agents E.G. Operational Headaches Agents compromise security Agents are very difficult to admin & manage • Especially as servers & apps proliferate Agents misappropriate server assets • Particularly acute with virtual servers Agents escalate CapEx & OpEx August 2009 Storage Virtualization School 134
136. 136. Agents Compromise Security A firewall port must be opened per agent Agents have admin privileges • Creates a backdoor access to everything on the server • Hackers target agents – BU data must be important Agents are listening on a port just waiting to be hacked Hackers can try to hack dozens to thousands of servers • Often without being detected • The more clients/agents installed, the more attack points • Lack of encryption in-flight puts transmitted data at risk • Agent encryption wastes even more server resources A no win situation August 2009 Storage Virtualization School 135
  137. 137. Agents Very Difficult to Admin & Manage Installing agents can be maddeningly frustrating • Requires an app disruptive system reboot to initialize Upgrading agents is a manual process (high touch) • Making it just as frustrating as installations1 Agent upgrades must be pushed out to each system • Upgrades also require an app disruptive system reboot1 • OS & app agents are upgraded when SW is upgraded Usually more than once a year • OS as often as once a month And when the OS or apps are upgraded Or when the OS or apps have a major patch 1Some BU software have an automated upgrade process; however, the reboots are still disruptive August 2009 Storage Virtualization School 136
  138. 138. Continued Infrastructure complexity = increased failures • More agent software parts = > failure probability Multi-vendor operations means lots of agent flavors • Platforms, operating systems, databases (all kinds), & email Troubleshooting is complicated • Particularly aggravating when an agent stops working No notification – difficult to detect & difficult to fix • Larger infrastructures take longer to diagnose Exponential complexity when ROBOs are added No automatic discovery of new servers or devices • New Agents must be manually added Agent Management significantly drains IT resources August 2009 Storage Virtualization School 137
139. 139. Agents Misappropriate Server Assets Agent software steals server resources • Each agent utilizes 2% or more of a server’s resources Many DP systems require multiple agents (OS, app, & function) • Most resource est. are based on average utilization Avg is calculated differently by each vendor • Comes down to how often data is protected • Per scan server resources used times # scans / day • Divided by total available server resources per day It’s a big deal when resources required affect app performance It’s a really big deal when the server is virtualized • And each VM requires its own agent • Suddenly, a lot of server resources are dedicated to agents August 2009 Storage Virtualization School 138
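A minimal sketch of the per-scan resource math the slide describes (resources used per scan × scans per day ÷ total server resources per day); the ~2%-per-agent order of magnitude is the slide's, while the specific scan costs, core count, and guest count below are illustrative assumptions.

```python
# Illustrative agent-overhead estimate per the slide's formula:
# (resources used per scan * scans per day) / total server resources per day.
# All specific numbers below are made up for illustration.

cpu_seconds_per_scan = 300      # hypothetical CPU-seconds one backup scan consumes
scans_per_day = 24              # hypothetical hourly incremental scans
cores = 8
total_cpu_seconds_per_day = cores * 24 * 3600

per_agent_share = (cpu_seconds_per_scan * scans_per_day) / total_cpu_seconds_per_day
print(f"One agent: {per_agent_share:.1%} of the server's daily CPU")   # ~1%

# On a virtualized host the cost multiplies: one agent per VM guest.
guests = 20
print(f"{guests} guests: {guests * per_agent_share:.1%} of the host's daily CPU")
```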
  140. 140. Agents Escalate CapEx & OpEx >2% of server CapEx & OpEx allocated to agents • More when agents are required for multiple applications • Virtual server allocation is multiplied by # of VMs HW, SW, network, & infrastructure must be upsized • To accommodate agents while meeting app perf. requirements Based on peak performance • Meaning more HW, SW, networks, & infrastructure More assets under management means higher OpEx • People have productivity limitations – more personnel • SW licensing based on capacity, CPUs, servers, etc. = higher $ • More HW = more power, cooling, rack space, floor space, etc. August 2009 Storage Virtualization School 139
141. 141. Agent Issues Exacerbated on VMs [Diagram: multiple App/OS guests on ESX 3.5 or vSphere4 over x86 architecture] Instead of 1 or 2 agents per physical server • There are lots of agents per physical server Wasting underutilized server resources is one thing • It’s quite another when that server is oversubscribed August 2009 Storage Virtualization School 140
142. 142. Ultimately it Reduces Virtualization Value Agents limit VMs / physical server • Reduces effective consolidation benefits • Decreases financial savings, payback, & ROI VM backups will often contend for the IO • Simultaneous backups have bandwidth constraints • Backups must be manually scheduled serially August 2009 Storage Virtualization School 141
  143. 143. Traditional or Legacy Backup & Restore Backup to Tape, VTL, or Disk RPO & RTO range from coarse to fine grain • Some even provide CDP August 2009 Storage Virtualization School 142
144. 144. Typical Backup & Restore Failures All of the Agent issues in spades • Multiple agents for different functions & applications Not typically ROBO or WAN optimized High failure rates on restores • No automated restore testing or validation Backup validation is not the same thing • No time based versioning • Multi-step restores Data has to be restored from backup media • To media or backup server before it can be restored to server • Requires multiple steps & passes Many lack built-in deduplication Most do not have integrated archival August 2009 Storage Virtualization School 143
  145. 145. CDP Typically copies on writes It differs from mirroring in 4 ways • Time stamps every write • Can be transaction or event aware • Allows rollback to any point in time, event, or transaction • Prevents the rolling disaster RPO & RTO is fine grain August 2009 Storage Virtualization School 144
146. 146. CDP Fails to Measure Up Primarily agent based as well w/agent problems Most CDP is not really designed for ROBOs or WAN1 • Slow over WAN, e.g. not WAN optimized • No deduplication Not integrated with backup in most cases Many are OS & Application limited • Primarily Windows and Exchange focused 1Asigra is the exception August 2009 Storage Virtualization School 145
  147. 147. VMware’s Agentless Solutions VMware Consolidated Backup – VCB VMware Data Recovery – VDR August 2009 Storage Virtualization School 146
148. 148. VCB Requires no VM or VMware agents • Utilizes VMware VMDK Snapshots RPO & RTO is coarse grain Mount snaps on Windows proxy server • Agent on proxy server Proxy server backed up, sent to media server, then stored Other Advantages • Reduces LAN traffic & has BMR support [Diagram: snapshot mounted on proxy server, backed up & transmitted to BU media server] August 2009 Storage Virtualization School 147
149. 149. Where VCB Comes up a bit Short 32 max concurrent VMs/proxy, w/5 as best practice • Means more proxy servers or very slow BU & restores Multi-step restore – Restore proxy server, restore VMs DBMS, Email, ERP not BU crash consistent1 • E.g. doesn’t ensure all writes are complete, cache flushed etc. Often complex scripting is required RPO & RTO are coarse – VMDK only2 • Windows files are the exception 1Windows VSS enabled structured apps are the exception 2CommVault Simpana 8 cracks open VMDKs August 2009 Storage Virtualization School 148
  150. 150. VDR Part of vSphere4 • Requires no VM or VMware agents • Utilizes VMware VMDK Snapshots RPO & RTO is coarse grain • Works thru vCenter • Intuitive & simple • Built-in deduplication August 2009 Storage Virtualization School 149
151. 151. Where VDR Comes up a bit Short 100 max concurrent VMs • Aimed at smaller VMware environments Not file system aware – primarily VMDK • With the exception of Windows DBMS, Email, ERP not BU crash consistent • E.g. doesn’t flush cache, complete all writes etc. With the exception of Windows VSS No replication Software is required on every vSphere server RPO & RTO is coarse except for Windows VDR is good but pretty basic, aimed at SMB/SME • Other vendors provide more capable – comparable offerings Veeam, Vizioncore, PhD Technologies August 2009 Storage Virtualization School 150
  152. 152. Other Ongoing HA-DR Virtualization Issues Serious backup scalability limitations No integration w/Online-Cloud backup or DR Does not leverage VM Infrastructure or cloud August 2009 Storage Virtualization School 151
  153. 153. Serious VM DP Scalability Issues DP vaults rarely scale w/some exceptions • Meaning more backup vaults Which increases complexity exponentially • Different servers & apps manually pointed at different vaults Loses a lot of deduplication value Far more time intensive Greater opportunities for human error & backup failures • No load balancing or on-demand allocation of resources Requiring yet even more hardware August 2009 Storage Virtualization School 152
154. 154. Private-Public Cloud Integration a.k.a. Hybrid Cloud Integration Local backup or archive vaults don’t replicate • To offsite online cloud backup service providers • Or DR providers [Diagram: user site replicating to public cloud site] August 2009 Storage Virtualization School 153
155. 155. Another More Complete Agentless Backup Solution Asigra – Hybrid Cloud Backup • No agents • Physical or virtual appliance • Complete protection: operating systems, file systems, structured data, VMs [Diagram: App/OS guests on ESX 3.5 or vSphere4 over x86 architecture] August 2009 Storage Virtualization School 154
156. 156. Asigra Agentless VMware Backups Agentless Hybrid Cloud Backup • ESX 3i/3.5/3.0 & vSphere4 compatible • Physical or Virtual Backup Appliance Only agentless VM-level backup product Original & alternate VM restores Time based versioning even agentless CDP • File/App-level backup • Agentless VMDK-level backup Any storage (DAS/SAN/NAS) VMDK restore as pure files COS-less VMDK backup/restore • Backs up entire VI setup Global VI backup set creation • Local & global dedupe Block level & built-in • VCB integration scripts • Highly scalable vaults NOTE: Licensed by deduped, compressed, stored TBs • Autonomic healing w/restore validation 1pass recoveries • Private, public, & hybrid cloud integration August 2009 Storage Virtualization School 155
  157. 157. Asigra Agentless Hybrid Cloud BU Limitations No native backup to tape Requires replacement of current backup software Agentless incredulousness • Can’t believe agentless backup is as good as agent based (It is) August 2009 Storage Virtualization School 156
  158. 158. VM Agentless Recommendations For smaller environments (< 100 VMs) • VDR • Products from Veeam, Vizioncore, & PhD Technologies • VCB & products that run on VCB CommVault Simpana 8 Acronis PHD Technologies Inc. esXpress STORServer VCB Symantec Backup Exec 12.5 Veeam Backup Vizioncore vRanger Pro August 2009 Storage Virtualization School 157
  159. 159. VM Agentless Recommendations For medium to larger environments • SME to Enterprise Asigra Hybrid Cloud Backup Combinations of snapshot & backup w/agents Combinations of snapshot & Asigra agentless backup Public cloud agentless service providers August 2009 Storage Virtualization School 158
  160. 160. Some Storage Configuration Best Practices Separate OS & app data • OS volumes (C: or /) on different VMFS or LUN from apps (D: etc) • Heavy apps get their own VMFS or raw LUN(s) Optimize storage by application • Different tiers or RAID levels for OS, data, transaction logs Automated tiering can help • No more than one VMFS per LUN • Less than 16 production ESX .VMDKs per VMFS Implement Data Reduction Technologies • Dedupe can have a huge impact on VMDKs created from a template Big impact on VDI and on replicated or backup data August 2009 Storage Virtualization School 159
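As a quick illustration of how one might audit a planned layout against the rule-of-thumb limits above (no more than one VMFS per LUN, fewer than 16 production VMDKs per VMFS); the datastore names, LUN names, and counts are hypothetical.

```python
# Hypothetical audit of a planned VMware storage layout against the
# rule-of-thumb limits above: no more than one VMFS per LUN, and fewer
# than 16 production VMDKs per VMFS. Names and counts are made up.

vmfs_to_lun = {              # which LUN each VMFS datastore is carved from
    "vmfs_os_tier":  "lun01",
    "vmfs_app_tier": "lun02",
    "vmfs_extra":    "lun02",   # second VMFS on the same LUN -> flagged below
}
vmdks_per_vmfs = {"vmfs_os_tier": 14, "vmfs_app_tier": 22, "vmfs_extra": 3}

luns = {}
for vmfs, lun in vmfs_to_lun.items():
    luns.setdefault(lun, []).append(vmfs)

for lun, datastores in luns.items():
    if len(datastores) > 1:
        print(f"{lun}: {len(datastores)} VMFS datastores - exceeds one-VMFS-per-LUN guideline")

for vmfs, count in vmdks_per_vmfs.items():
    if count >= 16:
        print(f"{vmfs}: {count} production VMDKs - exceeds <16-per-VMFS guideline")
```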
  161. 161. Conclusions Numerous Virtual Server storage Issues There are ways to deal with them • Some better than others Storage Virtualization • Is one of those better ways • Reduced storage costs • Reduced storage SW license costs • Increased app availability • Increased online flexibility August 2009 Storage Virtualization School 160
  162. 162. Audience Response Questions?
  163. 163. Break sponsored by
  164. 164. Part 3 Part 1 Part 2 Part 3 Storage as a dynamic online “on-demand” resource