Using the power of hybrid storage and ZFS to accelerate your virtualized environment


Learn how VA Technologies utilizes Nexenta and ZFS to build hybrid storage appliances that combine SSD and hard drive technology to dramatically improve performance in your virtual environment, whilst maintaining effective control of storage to stop virtual machine sprawl.



  1. “Enterprise class storage for everyone”: ZFS Acceleration of Virtualised Environments
     Andy Bennett, Director of Sales Engineering, EMEA
  2. What is NexentaStor? (July 14, 2010, Nexenta Systems Confidential)
     Unified storage, block and file. The leading OpenStorage solution, running on industry-standard hardware.
     Offers unmatched enterprise features at 70-80% savings:
     - End-to-end data integrity
     - Unlimited file size and snapshots
     - Synchronous and ZFS replication
     Superior storage for virtualized environments.
  3. VM Sprawl: a Storage Nightmare
     A typical virtual environment of 20 server VMs, each with 4 virtual disks:
     - 2-6 traditional SAS/SATA disks per server
     - A single disk delivers 70-180 IOPS
     - Completely random, mixed I/O workload
     - Even previously sequential workloads become random by the time they reach central storage
     - 20 x 720 IOPS = 14,400+ IOPS required
     THE MORE YOU GROW YOUR VM INFRASTRUCTURE, THE WORSE THE I/O PROBLEM GETS!
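A quick sanity check of the arithmetic above, as a sketch (the per-VM figure assumes 4 virtual disks each driven at the 180 IOPS top end of the single-disk range cited on the slide):

```shell
# Aggregate random IOPS demand when consolidating server VMs.
vms=20                # server VMs in the environment
disks_per_vm=4        # virtual disks per VM
iops_per_disk=180     # upper end of a single SAS/SATA disk (70-180 IOPS)

iops_per_vm=$((disks_per_vm * iops_per_disk))
total_iops=$((vms * iops_per_vm))

echo "Per-VM demand: ${iops_per_vm} IOPS"
echo "Aggregate:     ${total_iops}+ IOPS, fully random"
```

Running this reproduces the slide's figures: 720 IOPS per VM and 14,400+ IOPS aggregate.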
  4. VDI Workloads Are Even Worse
     A typical Windows VDI workload:
     - VDI deployments run to hundreds or thousands of desktops
     - A single VDI desktop needs 20-35 IOPS
     - Completely random, write-biased I/O: recent testing shows up to 85% writes
     - Even previously sequential workloads become random by the time they reach central storage
     - 500 x 35 IOPS = 17,500+ IOPS required
     A RANDOM WRITE WORKLOAD IS A HARD DISK'S WORST NIGHTMARE
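To see why this is a nightmare for spindles, a rough sizing sketch: the desktop count, per-desktop IOPS, and 85% write figure come from the slide, but the 150 random IOPS per 15K drive is my assumption for illustration, not a figure from the deck.

```shell
# How many 15K spindles would a 500-desktop VDI deployment need
# if every IOP had to land on raw disk?
desktops=500
iops_per_desktop=35
write_pct=85                 # deck cites up to 85% writes
iops_per_15k_disk=150        # ASSUMED random IOPS for one 15K drive

total_iops=$((desktops * iops_per_desktop))
write_iops=$((total_iops * write_pct / 100))
# Ceiling division: spindles needed to absorb the whole load.
disks_needed=$(( (total_iops + iops_per_15k_disk - 1) / iops_per_15k_disk ))

echo "Total demand: ${total_iops} IOPS (${write_iops} writes)"
echo "Raw spindles: ${disks_needed} x 15K disks"
```

Under these assumptions you would need well over a hundred 15K drives just for the IOPS, which is exactly the gap the SSD-backed caches in the following slides are meant to close.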
  5. ZFS Caches
     Level 1 read/write cache: the ARC
     - Primary filesystem cache, held in DRAM
     - Grows and shrinks dynamically with workload; adaptive in nature
     - Caches all async writes and streams them sequentially to backend storage
     - Appliance-wide cache shared by all storage pools
     Level 2 read cache: the L2ARC
     - Secondary cache that stores items evicted from the ARC; adaptive in nature
     - Non-resilient: all cached data is also stored on disk
     - Lives on non-volatile RAM cards or SSDs; assigned per storage pool
     Level 2 write cache: the ZFS Intent Log (ZIL)
     - Stores small (<32 KB) synchronous writes in high-speed persistent storage
     - Flushes to the disk backend periodically as a sequential write stream
     - Assigned per storage pool
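These tiers map directly onto ZFS vdev types. A minimal sketch of assembling such a hybrid pool from the command line (the pool and device names are hypothetical; mirroring the log device is prudent, since on older ZFS versions losing an unmirrored ZIL device can be catastrophic):

```shell
# Create a pool of spinning disks (hypothetical device names).
zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0

# ZIL: a small, write-optimised SSD pair as a mirrored log vdev.
zpool add tank log mirror c1t0d0 c1t1d0

# L2ARC: high-capacity MLC SSDs as cache vdevs (non-redundant is
# acceptable, since L2ARC contents are also stored on disk).
zpool add tank cache c1t2d0 c1t3d0

# Verify the layout: log and cache vdevs appear in their own sections.
zpool status tank
```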
  6. SSDs: Why Use Them?
     - Flash sits between DRAM and disk in both latency and cost
     - DRAM operations take nanoseconds, flash microseconds, spinning disk milliseconds
     - Flash costs tens of dollars per GB, whereas DRAM costs hundreds of dollars per GB
     - For random-read workloads, flash plus 7,200 rpm drives can yield up to 5x the performance of 15K drives at a quarter of the cost
     - Up to 40,000 small-file random write IOPS per SSD
     - Up to 80,000 read IOPS per SSD
     - SSDs ship in capacities up to 800 GB (MLC); both SLC and MLC devices are available
  7. Hybrid Storage Pool (October 20, 2010, VA Technologies)
     [Diagram: the application talks to ZFS, which fronts a hybrid pool of SSD-backed ZIL and L2ARC plus a SAS/SATA disk pool]
  8. Write Cache: the ZIL
     - All incoming synchronous writes are committed to a write-optimised SSD
     - Up to 40,000 random write IOPS per SSD
     - Massively improves I/O response for sync writes, especially NFS and databases
     - Turns a random write workload into a sequential write stream to the backend disks
     - Can allow 7,200 rpm disk systems to outperform traditional 15K subsystems
     - Only a small device (<16 GB) is required
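Whether writes take the ZIL path is controlled per dataset. A sketch of the relevant ZFS properties (the dataset name is hypothetical):

```shell
# sync=standard honours the application's sync requests (the default);
# sync=always forces every write through the ZIL;
# sync=disabled skips the ZIL entirely, risking data loss on power failure.
zfs get sync tank/vmstore
zfs set sync=standard tank/vmstore

# logbias=latency (default) commits sync writes via the log device;
# logbias=throughput bypasses it for large streaming writes.
zfs set logbias=latency tank/vmstore
```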
  9. Read Cache: the L2ARC
     Data about to be evicted from the ARC is added to a queue bound for the L2ARC cache SSD:
     - A separate thread drains the queue to the cache SSD
     - Data is copied with a throttle to limit bandwidth consumption
     - Under heavy memory pressure, not all ARC evictions will reach the cache SSD
     - Content is considered volatile, since it is also stored on disk
     A perfect use for high-capacity MLC SSDs:
     - Significantly improves read latency
     - Inexpensive compared to adding DRAM
     - Up to 80,000 IOPS per SSD
     - The cache warms up over time
     - Performance is scalable: add more SSDs
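The warm-up can be watched from the command line; a sketch, with a hypothetical pool name:

```shell
# Per-vdev I/O statistics every 5 seconds: the cache section shows
# reads increasingly served from the L2ARC SSDs as the cache warms.
zpool iostat -v tank 5

# On Solaris-derived systems, raw ARC/L2ARC counters (l2_hits,
# l2_size, and friends) are exposed via kstat.
kstat -n arcstats | grep l2
```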
  10. Snapshots and Clones
     - Snapshots are a point-in-time, READ-ONLY window into a dataset (block or file)
     - Clones are READ-WRITE and based upon a snapshot
     - Both are computationally free, thanks to the copy-on-write architecture
     Very handy features for VMs:
     - Near-instant creation of a VM from a cloned template
     - The master template stays resident in the ARC or L2ARC
     - Ultra space-efficient; combine with dedupe and compression for even better use of storage
     - Makes pure SSD-only pools practical for VMs
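Provisioning a VM from a golden template, as described above, comes down to two commands. A sketch, with hypothetical dataset names:

```shell
# Take a read-only, point-in-time snapshot of the master template.
zfs snapshot tank/templates/win7-gold@deploy

# Each clone is a writable dataset backed by that snapshot; thanks to
# copy-on-write it is created near-instantly and consumes space only
# for blocks that later diverge from the template.
zfs clone tank/templates/win7-gold@deploy tank/vms/desktop-001
zfs clone tank/templates/win7-gold@deploy tank/vms/desktop-002

# Space accounting shows how little each clone actually costs.
zfs list -o name,used,refer -r tank/vms
```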
  11. VMDC Integration
     - A single management interface for VM infrastructures
     - Leverages ZFS snapshot and clone features
     - Integrates with standard NexentaStor features, such as the auto-snap snapshot service
     - Relocate VMs between virtual hosts, and between NexentaStor appliances
     - The only platform to support virtual hosts from multiple vendors simultaneously:
       - VMware ESX 3.5 & 4.x
       - Citrix Xen 5.x
       - Microsoft Hyper-V (RSN)
  12. “Enterprise Class Storage for Everyone”, VA Technologies
      Thank you. See NexentaStor in action with VA Technologies on stand 828.