Storage Virtualization Unleashed
Storage Virtualization Unleashed Presentation Transcript

  • 1. Storage Virtualization Unleashed James Price, Principal, Fairway Consulting Group, Inc.; Jeffrey Slapp, Principal, Fairway Consulting Group, Inc.
  • 2. Session Overview
    What is Storage Virtualization? Specifically, its role as it relates to the VMware product line.
    Advantages of Storage Virtualization: hardware independence, increased storage utilization, investment protection, etc.
    VMware ESX under a Storage Virtualization architecture: specifics relating to VMware ESX deployments.
    Best practices in virtual environments: notes from the field and customer deployments.
    Q&A
  • 3. What is Storage Virtualization? What do we mean when we say “Storage Virtualization”? Storage Virtualization is a form of storage impersonation: a virtual disk LUN is presented to the virtual server host. The virtual disk LUN is simply an abstraction of the real physical storage managed by the virtualization controller (Storage Domain Server, or SDS). On the client (target) side, the SDS presents the virtual disk LUN to the virtual server host (initiator); the virtual disk LUN can be any size up to 2.0 TB (the per-LUN upper limit on most OSes). On the storage (initiator) side, the SDS consumes the storage presented by the storage devices (targets) and aggregates it into a unified pool.
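The target/initiator split described above can be sketched in a few lines of Python: an SDS aggregates heterogeneous devices into a pool on its initiator side and presents virtual LUNs, capped at 2 TB, on its target side. All class and method names here are illustrative, not from any real SDS product.

```python
class PhysicalDevice:
    """One backend storage device (a target, from the SDS's point of view)."""
    def __init__(self, name, size_gb):
        self.name = name
        self.size_gb = size_gb

class StoragePool:
    """Aggregates heterogeneous devices into one unified pool of capacity."""
    def __init__(self):
        self.devices = []

    def add(self, device):
        self.devices.append(device)

    @property
    def total_gb(self):
        return sum(d.size_gb for d in self.devices)

class StorageDomainServer:
    """Presents virtual LUNs (target side) backed by the pool (initiator side)."""
    MAX_LUN_GB = 2048  # ~2.0 TB per-LUN limit on most OSes, per the slide

    def __init__(self, pool):
        self.pool = pool
        self.luns = {}  # lun id -> virtual size in GB

    def present_lun(self, lun_id, size_gb):
        if size_gb > self.MAX_LUN_GB:
            raise ValueError("virtual LUN exceeds the 2 TB per-LUN limit")
        # only the virtual size is recorded; physical space is allocated on demand
        self.luns[lun_id] = size_gb
        return lun_id

pool = StoragePool()
pool.add(PhysicalDevice("array-A", 500))
pool.add(PhysicalDevice("array-B", 750))
sds = StorageDomainServer(pool)
sds.present_lun("esx-datastore-1", 2048)  # virtual size may exceed current pool capacity
print(pool.total_gb)  # → 1250
```

Note that the virtual LUN's size is decoupled from the pool's physical capacity, which is what makes the thin-provisioning behavior on later slides possible.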
  • 4. What is Storage Virtualization? Here is a basic storage configuration diagram: [diagram: VM Hosts (initiators) ↔ fabric ↔ SDS (target and initiator ports) ↔ storage enclosures (targets)] The SDS presents itself as an initiator and a target simultaneously on the fabric. The VM Hosts are zoned on the fabric with the target side of the Storage Domain Server(s) only. The storage enclosures are zoned on the fabric with the initiator side of the Storage Domain Server(s) only.
  • 5. Advantages of Storage Virtualization Major Advantages of Storage Virtualization Hardware Independence • The SDS software can run on any commodity hardware platform that supports Windows 2000 or higher, giving the administrator a huge selection of chassis to choose from. Heterogeneous Storage Devices • The SDS is a specialized software package that runs on a Windows platform, so any storage device that appears in the Logical Disk Manager can be used in the storage pool. This gives the administrator a huge selection of storage devices to choose from.
  • 6. Advantages of Storage Virtualization Major Advantages of Storage Virtualization (continued) Simplifies System Administration • The SDSs present virtual LUNs of up to 2 TB in size. In the large majority of storage scenarios this virtual size will not be fully used by the application server (VM Hosts), so having to extend a volume is no longer an issue.
  • 7. Advantages of Storage Virtualization Major Advantages of Storage Virtualization (continued) Advanced SAN Features
    • Auto Provisioning – Auto provisioning allocates 128 MB blocks on the backend storage devices on demand. As data is written to the virtual LUN, the SDS automatically allocates the space on the physical disks.
    • Auto Failover – All SDS mirror partners may actively process I/O, with one handling primary paths for some volumes and secondary paths for others. In the event of a failure, the remaining SDSs take over with the mirrored volumes. Application servers must be configured with qualified multi-path drivers to take advantage of this automatic failover.
    • Snapshot – Point-in-time snapshots are generated using copy-on-write technology for selected virtual volumes.
    • High Availability (Synchronous Mirroring) – Synchronous mirroring duplicates all write I/O to the companion SDS, providing redundancy in the event of a failure. All virtual LUNs are available down all paths for read/write I/O simultaneously from both SDSs.
    • Asynchronous IP Mirroring – The Asynchronous IP Mirroring (AIM) option replicates selected volumes between a pair of SDSs over native IP connections across long distances.
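The auto-provisioning feature above can be sketched as a minimal model, assuming (as the slide states) that a physical 128 MB block is allocated only when a write first touches the corresponding region of the virtual LUN. The `ThinLun` class and its naming are hypothetical.

```python
BLOCK_MB = 128  # allocation unit used by the SDS, per the slide

class ThinLun:
    """Toy model of a thin-provisioned (auto-provisioned) virtual LUN."""
    def __init__(self, virtual_size_mb):
        self.virtual_size_mb = virtual_size_mb
        self.allocated = {}  # block index -> backing-block identifier

    def write(self, offset_mb, data):
        block = offset_mb // BLOCK_MB
        if block not in self.allocated:
            # a 128 MB backend block is allocated on first write only
            self.allocated[block] = f"phys-block-{len(self.allocated)}"
        # actual data placement on the backing block is omitted in this sketch

    @property
    def allocated_mb(self):
        return len(self.allocated) * BLOCK_MB

lun = ThinLun(virtual_size_mb=2 * 1024 * 1024)  # 2 TB virtual size
lun.write(0, b"...")      # touches block 0, allocates it
lun.write(100, b"...")    # same 128 MB block, no new allocation
lun.write(300, b"...")    # touches block 2, allocates a second block
print(lun.allocated_mb)   # → 256
```

Only 256 MB of physical storage is consumed despite a 2 TB virtual size, which is the mechanism behind the high utilization figures on the next slide.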
  • 8. Advantages of Storage Virtualization Major Advantages of Storage Virtualization (continued) Drastically Higher Storage Volume Utilization • The SDSs use a highly optimized, efficient method of de-staging data from cache to the backend storage devices. The result is much better utilization of your storage (typical utilization is 85–90% of physical storage). [diagram: virtual LUN data mapped across the fabric onto 128 MB blocks of physical storage, with unallocated blocks remaining in the pool]
  • 9. Advantages of Storage Virtualization Major Advantages of Storage Virtualization (continued) High Performance Caching System • Inside an SDS, 80% of the available system memory is converted to cache memory. Most typical implementations use chassis with 4 GB of total system RAM, leaving 3.2 GB dedicated to I/O cache. The amount of cache is limited only by the amount of RAM in the SDS. Simplified Storage Proliferation • When backend physical storage utilization reaches 85% (by default), the SDSs notify the administrator that more storage should be added to the pool. The administrator only needs to connect more storage to the fabric (or directly to the SDS in a non-fabric environment) and add it to the managed storage pool on the SDS. Once added to the pool, the storage is immediately available to the application servers (VM Hosts). Unmatched Storage System Redundancy • An SDS group can be as small as two nodes and as large as eight. The redundancy is also very flexible, so the storage system can grow as your business grows.
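The 85% notification threshold can be expressed as a one-line check. The function name and signature are illustrative; 85% is the default the slide cites.

```python
ALERT_THRESHOLD = 0.85  # default utilization threshold, per the slide

def needs_more_storage(allocated_mb, pool_mb, threshold=ALERT_THRESHOLD):
    """Return True when pool utilization reaches the alert threshold."""
    return allocated_mb / pool_mb >= threshold

print(needs_more_storage(850, 1000))  # → True
print(needs_more_storage(500, 1000))  # → False
```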
  • 10. VMware ESX Under a Storage Virtualization Architecture With regard to I/O and storage allocation, VMware ESX is not treated any differently than any other application server connected to the SDS. The only case where the SDS responds differently to ESX specifically is in an HA (high availability) configuration. The native ESX MPIO driver understands “light on” or “light off” with respect to a storage target. In the event of a backend storage failure, the SDS initiates an ESX host path failover by turning off the laser on the HBA port. This indirectly tells ESX that it should immediately fail over to the other path and resume I/O.
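The “light on / light off” failover described above can be sketched like this; it is a deliberate simplification of what the native ESX multipath driver actually does, and all names here are illustrative.

```python
class Path:
    """One storage path from the host to an SDS target port."""
    def __init__(self, name):
        self.name = name
        self.light_on = True  # link state as the host's HBA sees it

class MultipathHost:
    """Simplified 'light on / light off' path selection, in the spirit of
    the native ESX MPIO behavior described on the slide."""
    def __init__(self, paths):
        self.paths = paths

    def active_path(self):
        # use the first path whose link is still up
        for p in self.paths:
            if p.light_on:
                return p
        raise RuntimeError("all paths down")

primary, secondary = Path("sds1-target"), Path("sds2-target")
host = MultipathHost([primary, secondary])
print(host.active_path().name)  # → sds1-target
primary.light_on = False        # SDS turns the laser off after a backend failure
print(host.active_path().name)  # → sds2-target
```

Dropping the link, rather than returning I/O errors, is what lets the SDS trigger a failover without the host needing any SDS-specific driver.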
  • 11. Best Practices in Virtual Environments (work in progress) Notes from the field as they relate to VMware ESX in a High Availability configuration.
  • 12. Best Practices in Virtual Environments (work in progress) Notes from the field as they relate to VMware ESX in a High Availability configuration with Asynchronous IP Mirroring. [diagram: Servers 1–N connected through a Fibre Channel switch fabric, with an ISL trunk linking to the SDS nodes]
  • 13. Best Practices in Virtual Environments (work in progress) RDMs vs. VMDKs: a discussion of when to use RDMs and when VMDKs may be more appropriate, focusing on how these two disk modes relate to leveraging the Virtual SAN advanced feature sets. Sizing VMFS on a Virtual SAN: virtual LUN sizes with regard to VMFS recommendations.
  • 14. Q&A
  • 15. Thank you
  • 16. Presentation Download Please remember to complete your session evaluation form and return it to the room monitors as you exit the session The presentation for this session can be downloaded at http://www.vmware.com/vmtn/vmworld/sessions/ Enter the following to download (case-sensitive): Username: cbv_rep Password: cbvfor9v9r