EMC VNX2 Mega Launch Recap Slide Deck

  1. EMC VNX Family: Transforming Midrange Storage
  2. Next-Generation VNX Platform: Capacity / Performance
     • Modular architecture: maintains Storage Processor, X-Blade, and Control Station nodes
     • Sandy Bridge-based Storage Processors: up to 2 x 8 cores, up to 128GB memory, PCIe Gen 3 up to x16; the X-Blade retains its Westmere processor
     • Up to 1.1M transactions and 6PB
     • Maximum drives by model: VNX5200 - 125, VNX5400 - 250, VNX5600 - 500, VNX5800 - 750, VNX7600 - 1000, VNX8000 - 1500
     • VNX5200 through VNX7600 use a Disk Processor Enclosure; the VNX8000 uses an SP Enclosure
  3. VNX2 Family Specifications
     • VNX5200: max drives 125; max FAST Cache 600GB; 2 SPs; embedded I/O per SP: 2 backend SAS ports; I/O slots per SP: 3; memory per SP: 16GB; SP CPU: 1.8 GHz Sandy Bridge, 4 cores; block protocols: FC, FCoE, iSCSI; X-Blades: 1 or 2; I/O slots per X-Blade: 3; memory per X-Blade: 6GB; X-Blade CPU: 2.13 GHz Westmere, 4 cores; file protocols: NFS, CIFS, pNFS
     • VNX5400: max drives 250; max FAST Cache 1000GB; 2 SPs; embedded I/O per SP: 2 backend SAS ports; I/O slots per SP: 4; memory per SP: 16GB; SP CPU: 1.8 GHz Sandy Bridge, 4 cores; block protocols: FC, FCoE, iSCSI; X-Blades: 1 or 2; I/O slots per X-Blade: 3; memory per X-Blade: 6GB; X-Blade CPU: 2.13 GHz Westmere, 4 cores; file protocols: NFS, CIFS, pNFS
     • VNX5600: max drives 500; max FAST Cache 2000GB; 2 SPs; embedded I/O per SP: 2 backend SAS ports; I/O slots per SP: 5; memory per SP: 24GB; SP CPU: 2.4 GHz Sandy Bridge, 4 cores; block protocols: FC, FCoE, iSCSI; X-Blades: 1 or 2; I/O slots per X-Blade: 3; memory per X-Blade: 12GB; X-Blade CPU: 2.13 GHz Westmere, 4 cores; file protocols: NFS, CIFS, pNFS
     • VNX5800: max drives 750; max FAST Cache 3000GB; 2 SPs; embedded I/O per SP: 2 backend SAS ports; I/O slots per SP: 5; memory per SP: 32GB; SP CPU: 2.0 GHz Sandy Bridge, 6 cores; block protocols: FC, FCoE, iSCSI; X-Blades: 2 or 3; I/O slots per X-Blade: 4; memory per X-Blade: 12GB; X-Blade CPU: 2.4 GHz Westmere, 4 cores; file protocols: NFS, CIFS, pNFS
     • VNX7600: max drives 1000; max FAST Cache 4200GB; 2 SPs; embedded I/O per SP: 2 backend SAS ports; I/O slots per SP: 5; memory per SP: 64GB; SP CPU: 2.2 GHz Sandy Bridge, 8 cores; block protocols: FC, FCoE, iSCSI; X-Blades: 2-4; I/O slots per X-Blade: 4; memory per X-Blade: 24GB; X-Blade CPU: 2.8 GHz Westmere, 6 cores; file protocols: NFS, CIFS, pNFS
     • VNX8000: max drives 1500; max FAST Cache 4200GB; 2 SPs; no embedded I/O ports per SP; I/O slots per SP: 11; memory per SP: 128GB; SP CPU: 2x 2.7 GHz Sandy Bridge, 16 cores total; block protocols: FC, FCoE, iSCSI; X-Blades: 2-8; I/O slots per X-Blade: 5; memory per X-Blade: 24GB; X-Blade CPU: 2.8 GHz Westmere, 6 cores; file protocols: NFS, CIFS, pNFS
  4. Introducing MCx: new multi-core technology to unleash greater system performance. [Chart: core utilization by core number, contrasting static core utilization (work concentrated on cores 0-5) with dynamic multi-core optimization (work spread across cores 0-15). Legend: RAID, I/O, DRAM Cache, FAST Cache, Data Services, Management, Available.]
  5. From Static to Dynamic Core Utilization. [Chart: the same core-utilization comparison across cores 0-15; under static scheduling each service is confined to fixed cores, while dynamic scheduling balances RAID, I/O, DRAM Cache, FAST Cache, Data Services, and Management work across all cores. A minimal scheduling sketch follows.]
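To make the contrast concrete, here is a minimal, illustrative sketch, not EMC code: it pins each service from the chart legend to a fixed core (static) versus placing each unit of work on the least-loaded core (dynamic, a stand-in for whatever balancing MCx actually performs):

```python
import random

CORES = 16
SERVICES = ["RAID", "I/O", "DRAM Cache", "FAST Cache", "Data Services", "Management"]

def static_utilization(work):
    """Each service is pinned to one fixed core; the remaining cores sit idle."""
    pin = {svc: i for i, svc in enumerate(SERVICES)}   # services occupy cores 0-5
    load = [0.0] * CORES
    for svc, units in work:
        load[pin[svc]] += units
    return load

def dynamic_utilization(work):
    """Each unit of work goes to whichever core is currently least loaded."""
    load = [0.0] * CORES
    for _svc, units in work:
        load[load.index(min(load))] += units
    return load

work = [(random.choice(SERVICES), random.uniform(0.5, 2.0)) for _ in range(200)]
print("static :", [round(x, 1) for x in static_utilization(work)])
print("dynamic:", [round(x, 1) for x in dynamic_utilization(work)])
```

Running it shows the static layout saturating six cores while ten stay at zero, whereas the dynamic layout spreads roughly equal load across all sixteen, which is the point the two charts make visually.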
  6. Multi-Core Cache: Performance, Efficiency, and Ease-of-Use Benefits
     • Modularization of the cache engine: delivers seamless performance that scales with cores, with fewer forced-flush scenarios
     • One large, shared, mirrored cache with no read/write partitioning: supports large write-cache sizes (close to the maximum DRAM size), improves read hits, and requires no manual intervention
     • Adaptive cache management and improved cache algorithms: adjust to changing I/O profiles, re-use write cache after a flush, and improve pre-fetch
  7. Traditional RAID Cache. [Diagram: separate read and write caches. Host writes land in the write cache, are mirrored to the peer SP, written to disk, and then discarded; disk reads fill the read cache on their way to the host.]
  8. MCC Adaptive Cache: MCC adapts by tracking the recency and frequency of data use. [Diagram: host writes, disk reads, re-reads, over-writes, flushes, and post-flush page re-use all feed the recency/frequency tracking.]
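The deck does not describe MCC's internal algorithm, so the following is only a toy illustration of "tracking recency and frequency": a small page cache that scores eviction candidates on both dimensions (the class name and scoring formula are invented for this sketch):

```python
import time

class AdaptiveCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}          # key -> (last_access_time, hit_count)

    def access(self, key):
        now = time.monotonic()
        if key in self.pages:
            _, hits = self.pages[key]
            self.pages[key] = (now, hits + 1)
            return "hit"
        if len(self.pages) >= self.capacity:
            self._evict(now)
        self.pages[key] = (now, 1)
        return "miss"

    def _evict(self, now):
        # Victim = the page that is BOTH least recently and least frequently used.
        def score(item):
            last, hits = item[1]
            return hits / (1.0 + (now - last))
        victim = min(self.pages.items(), key=score)[0]
        del self.pages[victim]

cache = AdaptiveCache(capacity=3)
for lba in [1, 2, 3, 1, 1, 4, 2]:
    print(lba, cache.access(lba))
```

A pure-LRU cache would evict the hottest page as soon as it aged slightly; blending in frequency keeps repeatedly re-read pages resident, which is the behavior the slide attributes to MCC.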
  9. Multi-Core FAST Cache: Improved Performance and Efficiency of FAST Cache
     • Performance enhancements via a new driver-stack order (Multi-Core Cache, then Multi-Core FAST Cache, then Multi-Core RAID, then the backend driver): the FAST Cache memory map now sits below MCC, reducing overhead through cache-hit pass-through
     • Proactive clean optimization: improved responsiveness, since promotions are not delayed by flushing dirty pages
     • Faster initial warm-up: a single hit triggers promotion until FAST Cache is 80% full
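A minimal sketch of the warm-up rule above; the post-warm-up promotion threshold of three hits is an assumption carried over from classic FAST Cache behavior, not something this slide states:

```python
WARMUP_FILL = 0.80       # from the slide: single-hit promotion until 80% full
POST_WARMUP_HITS = 3     # assumed threshold; the slide does not state it

class FastCachePromoter:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.promoted = set()
        self.hit_counts = {}

    def on_hdd_read(self, page):
        """Called when a read misses FAST Cache and is served from HDD."""
        self.hit_counts[page] = self.hit_counts.get(page, 0) + 1
        warming_up = len(self.promoted) < WARMUP_FILL * self.capacity
        threshold = 1 if warming_up else POST_WARMUP_HITS
        if self.hit_counts[page] >= threshold and len(self.promoted) < self.capacity:
            self.promoted.add(page)      # copy the page from HDD to SSD
            return "promoted"
        return "not promoted"

p = FastCachePromoter(capacity_pages=10)
print(p.on_hdd_read(42))   # during warm-up, one hit is enough to promote
```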
  10. Cache I/O Flow Detail. [Diagram: comparing the legacy FLARE stack with MCx. In FLARE, host I/O traverses the FAST Cache memory map before reaching the FLARE cache, then is served from FAST Cache SSDs on a hit or HDDs on a miss, with de-stage and read-miss I/O passing through FAST Cache. In MCx, host I/O lands in MCC first; only a DRAM miss consults the MCF memory map, then FAST Cache SSDs (hit) or HDDs (miss).]
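As a hedged sketch of the MCx read path in the diagram (function and variable names are invented; real promotion, locking, and write handling are omitted):

```python
def mcx_read(lba, mcc, fast_cache_map):
    """Return (tier served from, data) for a host read under the MCx ordering."""
    if lba in mcc:                      # 1. DRAM (MCC) hit: FAST Cache never consulted
        return ("DRAM", mcc[lba])
    if lba in fast_cache_map:           # 2. MCF memory-map hit: read from SSD
        data = read_ssd(fast_cache_map[lba])
        tier = "SSD"
    else:                               # 3. Miss everywhere: read from HDD
        data = read_hdd(lba)
        tier = "HDD"
    mcc[lba] = data                     # populate the DRAM cache on the way back
    return (tier, data)

def read_ssd(loc):  return f"data@ssd:{loc}"    # stand-ins for real media reads
def read_hdd(lba):  return f"data@hdd:{lba}"

print(mcx_read(7, mcc={}, fast_cache_map={7: 0}))   # -> served from SSD
```

The design point the slide is making is the ordering itself: because MCC sits on top, a DRAM hit bypasses the FAST Cache lookup entirely, which is the "cache hit pass-through" savings from slide 9.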
  11. Increased FAST Cache Configurations
     • The same maximum number of FAST Cache drives is supported for each storage system
     • 100GB drives are recommended for best access density; 200GB drives for best capacity
     • VNX1 maximum FAST Cache capacity (GB): VNX7500: 2100; VNX5700: 1500; VNX5500: 1000; VNX5300: 500; VNX5100: 100
     • VNX2 maximum FAST Cache capacity (GB, 100GB drives / 200GB drives): P0: 2100 / 4200; M1: 2100 / 4200; M2: 1500 / 3000; M3: 1000 / 2000; M4: 500 / 1000; M5: 300 / 600 (P0 and M1-M5 are the slide's pre-launch model codes; their 200GB-drive maxima line up with the VNX8000 down to the VNX5200 limits on slide 3)
  12. Multi-Core RAID: Performance That Scales, with Advanced Availability and Flexibility
     • Permanent sparing: a hot spare becomes a permanent part of the RAID group, so no equalization is required (a CLI is available to copy the spare back to the replaced drive); any unused drive is a potential hot spare; three policies for reserving hot spares: Recommended, No Hot Spares, and Custom
     • Portable drives: no slot-to-drive location dependency; drives can be relocated between buses and shelves within the same VNX; a 5-minute timeout elapses before a hot spare is invoked; useful for bus balancing, and whole RAID groups can even be removed for longer periods (the group will fault while out). A sketch of the timeout logic follows.
     [Diagram: example drive relocation between Bus 0, Enclosure 0 (containing the vault) and Bus 2, Enclosure 0, across RAID groups 1 and 2.]
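Here is a hedged sketch of the 5-minute spare timeout described above; the function shape and data structures are assumptions, and only the timeout and the "spare joins the RAID group permanently" behavior come from the slide:

```python
import time

SPARE_TIMEOUT_S = 5 * 60   # from the slide: 5-minute grace before sparing

def maybe_invoke_spare(missing_since, unused_drives, raid_group, now=None):
    """Return the drive chosen as a permanent spare, or None to keep waiting."""
    now = now if now is not None else time.monotonic()
    if now - missing_since < SPARE_TIMEOUT_S:
        return None                  # drive may just be in transit between shelves
    if not unused_drives:
        return None                  # any unused drive is a potential spare
    spare = unused_drives.pop(0)
    raid_group.append(spare)         # permanent sparing: no copy-back/equalization
    return spare

rg = ["d0", "d1", "d3"]              # d2 went missing
print(maybe_invoke_spare(missing_since=0, unused_drives=["d9"], raid_group=rg, now=400))
```

The grace period is what makes portable drives practical: a drive pulled for relocation does not trigger a rebuild unless it stays absent past the timeout.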
  13. VNX Rockies Drives
     • 2.5-inch drives (offered across the 15-, 25-, and 60-drive DAEs per the slide's support matrix):
       - FAST Cache-optimized SSD: 100GB, 200GB
       - FAST VP-optimized SSD: 100GB, 200GB, 400GB*
       - 15K RPM SAS: 300GB*
       - 10K RPM SAS: 600GB*, 900GB*, 1.2TB*,***
       - 7.2K RPM NL-SAS: 1TB*
     • 3.5-inch drives (offered across the 15- and 60-drive DAEs per the slide's support matrix):
       - FAST Cache-optimized SSD: 100GB, 200GB
       - 15K RPM SAS: 300GB*, 600GB*
       - 7.2K RPM NL-SAS: 2TB, 3TB, 4TB***
     Notes:
     • 2.5-inch 10K RPM 300GB SAS drives will be qualified (supported) in 15- and 25-drive DAEs, but not sold new with Rockies
     • 3.5-inch 7.2K RPM 1TB NL-SAS drives will be qualified (supported) in 15- and 60-drive DAEs, but not sold new with Rockies
     • The slide also color-codes which drives are new for VNX Rockies
     * Supported as vault drives
     ** 2.5-inch drives supported in 3.5-inch carriers
     *** 4TB and 1.2TB drives will be delivered post-GA, in the Q3 2013 timeframe
  14. Breakthrough Midrange Innovation: MCx Dual-Controller Comparison. [Chart: response time versus IOPS, out to roughly 1,000,000 IOPS, plotting the new MCx-based platform against Vendors A through D; annotations read "70% FASTER" and "4X MORE".]
  15. Symmetric Active-Active
     • CX: active-passive LUN ownership
     • VNX: asymmetric active-active (ALUA)
     • Next-generation VNX: symmetric active-active*
     * Symmetric active-active applies to Classic LUNs only in phase 1 (Rockies)
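A small sketch of why this matters to a multipathing driver; the path-state names follow general ALUA terminology rather than anything VNX-specific:

```python
def usable_paths(paths, mode):
    """paths: list of (sp, state) tuples, state in {"optimized", "non-optimized"}."""
    if mode == "symmetric":
        return [sp for sp, _ in paths]     # both SPs serve the LUN at full speed
    if mode == "alua":
        return [sp for sp, state in paths if state == "optimized"]
    raise ValueError(mode)

paths = [("SPA", "optimized"), ("SPB", "non-optimized")]
print(usable_paths(paths, "alua"))        # ['SPA'] - I/O funnels through one SP
print(usable_paths(paths, "symmetric"))   # ['SPA', 'SPB'] - load spreads across both
```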
  16. VNX Block Deduplication: Reduce CAPEX by 50% or More
     • Deduplication of data at 8KB granularity; deduplicated LUNs are thin LUNs
     • Settable per pool LUN; thick, thin, and deduplicated LUNs are supported in a single pool
     • Freed blocks/slices become available for re-use within the storage pool
     • Deduplication occurs out of band to minimize I/O impact
     • Improves VDI efficiency and reduces $/VM deployment costs
     [Diagram: user changes land on the LUN; a dedupe engine with an index database consolidates duplicates and returns freed space to the pool.]
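As a toy illustration of fixed-block deduplication at the slide's 8KB granularity (the hash index is illustrative; the deck says nothing about VNX's internal data structures):

```python
import hashlib

BLOCK = 8 * 1024   # 8KB granularity, per the slide

def dedupe(lun_bytes):
    """Map each 8KB block to a single stored copy; return (store, block_refs)."""
    store, refs = {}, []
    for off in range(0, len(lun_bytes), BLOCK):
        block = lun_bytes[off:off + BLOCK]
        digest = hashlib.sha256(block).hexdigest()   # key into the index database
        store.setdefault(digest, block)              # keep one copy per unique block
        refs.append(digest)
    return store, refs

data = (b"A" * BLOCK) * 3 + (b"B" * BLOCK)           # three duplicate blocks + one
store, refs = dedupe(data)
print(f"logical blocks: {len(refs)}, unique stored: {len(store)}")  # 4 -> 2
```

Because this pass runs over data that has already been written, it can be scheduled out of band, which is how the slide describes keeping the impact on host I/O to a minimum.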
