Why EMC is the best choice for VMware storage both now and in the future

  • How the rest of EMC's technology portfolio integrates with VMware
  • What is the joint EMC and VMware vision for Cloud and Big Data

Transcript

  • 1. Why EMC Storage is the best choice for VMware, now and in the future. Scott Dougherty, vSpecialist Sr. Manager
  • 2. DISRUPTIVE TECHNOLOGY CREATES LASTING CHANGE
  • 3. Storage Megatrends – Dimensions of Efficiency
    – Think: "How can I make storage more invisible?"
    – Think: "How can I deal with the IO deluge, and lower my $/power/space per TB?"
    – Think: "How can I keep lowering $/power/space per IOps or MBps?"
    (Diagram labels: VMware HW assist, vCenter integration, auto tiering, scale and scaling model, "non-disruptive everything", modularity + multiprotocol, massive x86 multicore, 3TB+, thin, 10/40GbE, dense SSD, commoditization, primary storage compress/dedupe, mega cache.)
  • 4. "Where Does Integration Happen?" circa 2011
    (Diagram of the ESX storage stack and its integration points: VI Client, VM, VMFS/NFS, VMware LVM, NFS client, Datamover, network stack, NMP, NIC and HBA drivers; vStorage API for Data Protection (VDDK), vStorage API for Multipathing, VASA, vendor-specific VAAI block and NFS modules, SRM module, vendor-specific vCenter/SRM plug-ins; array-side support for standards-based and vendor-specific VAAI SCSI commands and NFS operations. Plug-in benefits called out: view VMware-to-storage relationships, provision datastores more easily, leverage array features such as compress/dedupe and file/filesystem/LUN snapshots.)
  • 5. Hardware-Accelerated Locking
    – Without API: reserves the complete LUN (via a SCSI-2 reservation) so that it can update a file lock; requires several SCSI-2 commands; LUN-level locks affect adjacent hosts.
    – With API: commonly implemented as a vendor-unique SCSI opcode; moved to the SCSI CAW (Compare And Write) opcode in vSphere 5 (more standard). Transfers two 512-byte sectors; compares the first sector against the data at the target LBA and, if it matches, writes the second sector, otherwise returns a miscompare.
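A minimal Python sketch of the compare-and-write semantics described above, modelled from the array's side; the names (Miscompare, compare_and_write) and the in-memory bytearray "LUN" are invented for illustration, not a real SCSI or vSphere API.

```python
# Hypothetical array-side model of SCSI COMPARE AND WRITE (CAW) as used for
# VMFS lock acquisition: one atomic command replaces the older
# "reserve whole LUN -> read -> update lock -> release" sequence.
SECTOR = 512


class Miscompare(Exception):
    """The on-disk lock sector did not match what the host expected."""


def compare_and_write(lun: bytearray, lba: int, payload: bytes) -> None:
    """payload carries two 512-byte sectors: [expected | replacement]."""
    assert len(payload) == 2 * SECTOR
    expected, replacement = payload[:SECTOR], payload[SECTOR:]
    offset = lba * SECTOR
    if bytes(lun[offset:offset + SECTOR]) != expected:
        raise Miscompare(lba)                   # another host holds or changed the lock
    lun[offset:offset + SECTOR] = replacement   # take the lock without reserving the LUN
```

A host that gets a miscompare re-reads the lock and retries; other hosts sharing the LUN are never blocked, which is the point of the primitive.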
  • 6. VAAI Block Zero
    – Without API: many identical small blocks of zeroes are moved from host to array (SCSI WRITE after SCSI WRITE) for MANY VMware IO operations. Extra zeroes can be removed by EMC arrays after the fact by manually initiating "space reclaim" on the entire device. New guest IO to a VMDK is "pre-zeroed".
    – With API: SCSI WRITE SAME; one block of zeroes is moved from host to array and repeatedly written by the array. A thin-provisioned array skips the zeroes completely (pre "zero reclaim"). Moved to the SCSI UNMAP opcode in vSphere 5 (which will be "more standard", and will always return blocks to the free pool).
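A minimal sketch contrasting the two paths using an invented thin-array model in Python; scsi_write and scsi_write_same are illustrative stand-ins for the SCSI commands named above, not a real storage API.

```python
# Without the offload, the host ships every zeroed block; with WRITE SAME the
# host describes the range once and a thin-provisioned array can skip the
# zeroes entirely instead of allocating pool space for them.
class ThinArray:
    def __init__(self):
        self.allocated = {}          # lba -> data the pool actually stores
        self.commands = 0

    def scsi_write(self, lba, buf):
        self.commands += 1
        self.allocated[lba] = buf    # even zeroes consume pool blocks

    def scsi_write_same(self, start_lba, count, pattern):
        self.commands += 1
        if pattern == bytes(len(pattern)):   # all zeroes: nothing to store
            return
        for lba in range(start_lba, start_lba + count):
            self.allocated[lba] = pattern


plain = ThinArray()
for lba in range(1024):                          # eager-zero a region, no VAAI
    plain.scsi_write(lba, bytes(512))
print(plain.commands, len(plain.allocated))      # 1024 commands, 1024 blocks used

offloaded = ThinArray()
offloaded.scsi_write_same(0, 1024, bytes(512))   # same result with the offload
print(offloaded.commands, len(offloaded.allocated))  # 1 command, 0 blocks used
```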
  • 7. VAAI Full Copy
    – Without API ("let's Storage VMotion"): SCSI READ (data moved from array to host), then SCSI WRITE (data moved from host to array), repeated MANY times; huge periods of large VMFS-level IO, done via millions of small block operations.
    – With API: a subset of the SCSI eXtended COPY opcode; allows copy within or between LUs; order-of-magnitude reduction in IO operations and in array IOps.
    – Use cases: Storage VMotion; VM creation from template ("give me a VM clone/deploy from template").
    – Speed of operation: mileage may vary. http://virtualgeek.typepad.com/virtual_geek/2011/07/3-pieces-of-bad-news-and-a-deep-dive-into-apd.html
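A minimal sketch of the reduction in command count, using an invented array model in Python; read/write/xcopy are illustrative stand-ins for SCSI READ, SCSI WRITE, and EXTENDED COPY, and the block counts are arbitrary.

```python
# Copying an extent by reading it to the host and writing it back costs two
# commands (and two data transfers) per block; the Full Copy primitive lets
# the array move the extent itself with a single request.
class Array:
    def __init__(self, blocks):
        self.data = bytearray(blocks * 512)
        self.commands = 0

    def read(self, lba):
        self.commands += 1
        return self.data[lba * 512:(lba + 1) * 512]

    def write(self, lba, buf):
        self.commands += 1
        self.data[lba * 512:(lba + 1) * 512] = buf

    def xcopy(self, src_lba, dst_lba, count):
        self.commands += 1                    # one request, no host data path
        src, dst = src_lba * 512, dst_lba * 512
        self.data[dst:dst + count * 512] = self.data[src:src + count * 512]


array = Array(blocks=4096)
for i in range(1024):                         # host-driven copy, e.g. old svMotion
    array.write(1024 + i, array.read(i))
host_driven = array.commands                  # 2048 commands for 1024 blocks
array.xcopy(0, 2048, 1024)                    # offloaded copy of the same extent
print(host_driven, array.commands - host_driven)   # 2048 vs 1
```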
  • 8. VAAI in vSphere 4.1 = Big impact. http://www.emc.com/collateral/hardware/white-papers/h8115-vmware-vstorage-vmax-wp.pdf Speed of operation may vary: http://virtualgeek.typepad.com/virtual_geek/2011/07/3-pieces-of-bad-news-and-a-deep-dive-into-apd.html
  • 9. vSphere 5 – Thin Provision Stun
    – Without API: when a datastore cannot allocate in VMFS because free blocks in the LUN's pool (in the array) are exhausted, VMs crash, snapshots fail, and other badness follows. Not a problem with thick devices, as allocation is fixed; thin LUNs can fail to deliver a write BEFORE the VMFS is full. Careful management at both the VMware and the array level is needed.
    – With API: rather than erroring on the write, the array reports a new error message. On receiving it, VMs are "stunned", giving the opportunity to expand the thin pool at the array level.
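A minimal behavioural sketch in Python; ThinPool, OutOfSpaceStun, and the VM dict are invented to illustrate the difference between failing the write and stunning the VM, and do not correspond to real vSphere or array interfaces.

```python
# Without the primitive an exhausted pool surfaces as a failed write inside the
# guest; with it the array signals a distinct out-of-space condition and the
# host pauses (stuns) the VM until the administrator grows the pool.
class OutOfSpaceStun(Exception):
    """Array-reported 'thin pool exhausted' condition, distinct from a write error."""


class ThinPool:
    def __init__(self, free_blocks):
        self.free_blocks = free_blocks

    def write(self, blocks_needed):
        if blocks_needed > self.free_blocks:
            raise OutOfSpaceStun()
        self.free_blocks -= blocks_needed
        return "OK"


def host_write(vm, pool, blocks):
    try:
        return pool.write(blocks)
    except OutOfSpaceStun:
        vm["state"] = "stunned"        # VM pauses instead of crashing
        return "STUNNED"


vm, pool = {"state": "running"}, ThinPool(free_blocks=8)
print(host_write(vm, pool, 4), vm["state"])    # OK running
print(host_write(vm, pool, 16), vm["state"])   # STUNNED stunned
```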
  • 10. vSphere 5 – Thin Provision Reclaim
    – Without API: when VMFS deletes a file (CREATE FILE ... DELETE FILE), the file's allocations are returned for VMFS use, and in some cases SCSI WRITE ZERO would zero out the blocks. If the blocks were zeroed, manual space reclamation at the device layer could help.
    – With API: rather than SCSI WRITE ZERO, SCSI UNMAP is used, and the array releases the blocks back to the storage pool's free blocks. It is used any time VMFS deletes (svMotion, delete VM, delete snapshot, delete). Note that in vSphere 5, SCSI UNMAP is used in many other places where previously SCSI WRITE ZERO would be used, and it depends on VMFS-5.
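A minimal sketch of the reclaim flow in Python; the ThinPool model and its method names are invented for illustration and are not the VMFS or array implementation.

```python
# With the primitive, a VMFS delete turns into an UNMAP for the file's blocks,
# so the capacity becomes free at the pool level (visible to every LUN sharing
# the pool) rather than merely free inside that one VMFS volume.
class ThinPool:
    def __init__(self, total_blocks):
        self.free = set(range(total_blocks))

    def allocate(self, n):
        return [self.free.pop() for _ in range(n)]

    def unmap(self, blocks):
        self.free.update(blocks)       # blocks go back to the shared free pool


pool = ThinPool(total_blocks=100)
vmdk = pool.allocate(10)               # VMFS creates a file (a VMDK, a snapshot, ...)
print(len(pool.free))                  # 90
pool.unmap(vmdk)                       # VMFS delete -> SCSI UNMAP
print(len(pool.free))                  # 100: the array sees the space again
```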
  • 11. vSphere 5 – NFS Full File Copy
    – Without API: some NFS servers can create file replicas ("create a copy, snap, clone, or version of the file"), but this feature was not used for VMware operations, which were traditional host-based file copy operations (file read, file write, repeated MANY times). Vendors would leverage it via vCenter plugins; for example, EMC exposed this array feature via the Virtual Storage Integrator Unified Storage Module plugin.
    – With API: implemented via a NAS vendor plugin and used by vSphere for clone and deploy-from-template ("let's clone this VM"). Uses the EMC VNX OE internal file copy; FAST Copy is a "mystery assist" for vSphere 5 that uses the EMC VNX OE file version. Somewhat analogous to the block XCOPY offload, BUT NOTE: not used during svMotion.
  • 12. VAAI NFS Full copy demo
  • 13.
  • 14. vSphere 5 – NFS Extended Stats
    – Without API: unlike with VMFS, with NFS datastores vSphere does not control the filesystem itself, and with the vSphere 4.x client only basic file and filesystem attributes were used. This led to challenges with managing space when thin VMDKs were used ("just HOW much space does this file take?"): administrators had no visibility into thin state and oversubscription of either datastores or VMDKs. The client might see "filesize = 100GB" when the file is actually sparse, has 24GB of allocations in the filesystem, and is deduped, so it is only REALLY using 5GB. (Think: with thin LUNs under VMFS, you could at least see details on thin VMDKs.)
    – With API: can be implemented via a NAS vendor plugin; the NFS client reads extended file and filesystem details. Didn't quite make it into the vCenter client (another "mystery VAAI assist"), but supported on EMC today.
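A small, runnable Python illustration of the gap the extended stats expose: a file's logical size versus the space actually allocated for it. The file name is a throwaway placeholder, and st_blocks is only meaningful on Unix-like filesystems.

```python
# Create a 100 MB sparse file and compare "file size" with allocated space,
# the same distinction an NFS datastore needs to report for thin VMDKs.
import os

path = "sparse-demo.bin"
with open(path, "wb") as f:
    f.seek(100 * 1024 * 1024 - 1)      # logical length: 100 MB
    f.write(b"\0")                     # only the last block is really written

st = os.stat(path)
print("logical size :", st.st_size)            # ~104857600 bytes
print("allocated    :", st.st_blocks * 512)    # typically just a few KB
os.remove(path)
```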
  • 15. vSphere 5 – NFS Reserve Space
    – Without API: there was no way on NFS datastores to do the equivalent of an "eagerzeroed thick" VMDK (needed for WSFC) or a "zeroed thick" VMDK.
    – With API: implemented via a NAS vendor plugin; reserves the complete space for a VMDK on an NFS datastore.
  • 16. vCenter Plugins Matter A LOT…
    – First generation was basic view/provision
    – Second generation exposed advanced array functions
    – Third generation worked on simplifying/merging multiple plugins
    – Fourth generation worked on initial RBAC for VMware/storage teams
    – Fifth generation…
    "We use EMC Virtual Storage Integrator (VSI) to dramatically accelerate and simplify storage configuration, management, and multipathing, and it has saved us days of work." Mike Schlimenti, Lead Systems Engineer, Data Center, Experian
  • 17. Virtual Storage Integrator 5.0 demonstration
  • 18.
  • 19. Integration Report Card for EMC
    (Repeats the ESX storage stack integration diagram from slide 4, with EMC's coverage at each point: vendor-specific vCenter/SRM plug-ins, vendor-specific VAAI block and NFS modules, VASA, SRM module, and standards-based plus vendor-specific VAAI SCSI command and NFS operation support in the array.)
  • 20. Q: What has the near term effect of VMware been on fundamental storage architectures?
  • 21. Storage Architectures and VMware
    – "I want transactional storage for compute" (vCloud Director): presented as SAN or NAS today (NAS emerging). "Type 1" arrays tend to be lower cost and simpler at low scale; "Type 2" arrays tend to be lower cost and simpler at high scale. Latency is hard with NAS; failure is hard.
    – "I want object blob storage for next-gen apps" (vCloud Director): presented as objects (SOAP/REST) and RESTful APIs. Software gloms server + DAS into non-transactional object/blob storage; tends to be very "cloud like" and geo-distributed, but not very transactional, so not good for VMDKs.
  • 22. HOWTO – break the 10GBps record
    – Step 1: Get vSphere 5.
    – Step 2: Get 4 moderate servers (2U, 2-socket Nehalem).
    – Step 3: Use the Intel x520 CNAs with the vSphere 5 software FCoE initiator.
    – Step 4: Get a "type 1" transactional storage system that can leverage x86 multicore, "Flash first", and drive a LOT of bandwidth.
    – Step 5: Put the engineering teams in a room for 2 weeks; let sit.
  • 23. EMC VNX: 4X the prior VMware workload bandwidth record
    – The VNX7500 set a new record for VMware workload bandwidth: 10 GBps with vSphere 5 on a 4-node cluster, 2 GBps per host (~4x more total bandwidth than the previous record, also held by EMC: 2.8 GBps to 3 CX4-960s).
    – Done with 1 VNX instead of 3 CX4-960s; vSphere 5.0; 320 spindles; done in partnership with Intel; highlights the new Intel x520 and the new vSphere software FCoE implementation.
  • 24. HOWTO – break the 1,000,000 IOPS record
    – Step 1: Get vSphere 5.
    – Step 2: Get a monster server.
    – Step 3: Make sure there's no network bottleneck.
    – Step 4: Get a "type 2" transactional storage system that can scale out to leverage x86 and cache at scale with very low latency.
    – Step 5: Put the engineering teams in a room for 2 weeks; let sit.
  • 25. The "Monster Server" Specs
    – vSphere 5 RTM code
    – Four 2.40 GHz ten-core processors; L2 cache: 10 x 256 KB, L3 cache: 30 MB. Do the math: a total of 96 GHz, 10 MB of L2 cache, and 120 MB of L3 cache. Yikes.
    – System bus speed: 6.40 GT/s; system memory: 256.0 GB at 1067 MHz
    – Six dual-port Emulex LPe12002-M8; LightPulse x86 BIOS version 2.02a1
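The "do the math" aggregates above check out; a few lines of Python using only the figures from the slide:

```python
# Aggregate the per-socket figures quoted for the four-socket host.
sockets, cores, ghz = 4, 10, 2.40
l2_kb_per_core, l3_mb_per_socket = 256, 30

print(sockets * cores * ghz)                    # 96.0 GHz of aggregate clock
print(sockets * cores * l2_kb_per_core / 1024)  # 10.0 MB of L2 cache
print(sockets * l3_mb_per_socket)               # 120 MB of L3 cache
```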
  • 26. VMAX array dirty and under load in the lab
  • 27. The VMAX under the 1,000,000 IOps load: 8 engines of front-end CPU with 128 front-end ports, 8 engines of back-end CPU with 128 back-end ports, 960 drives (FC, 450 GB, 15k). NET: the VMAX heart rate is up, but it is not breaking a sweat. IO is mostly from cache, and the spindles are warming the cache; with SSDs, it could have been a fraction of the spindle count.
  • 28. EMC VMAX: 4X more VMware workload I/O
    – An 8-engine VMAX set a new record for VMware workload I/O: 1,000,000 IOps with vSphere 5 (the previous record, also held by EMC, was 360,000 IOps using 3 CX4-960s).
    – 1 million IOPS; vSphere 5.0; a single ESX server; 4x more than before; 8-engine configuration with 960 spindles; done in partnership with Intel and Emulex; highlights the new Intel E7 CPUs; a total of 40 cores in the host.
  • 29. http://www.vmware.com/files/pdf/techpaper/1M-iops-perf-vsphere5.pdf
  • 30. Q: What about scale-out NAS models and VMware?
  • 31. What has changed in vSphere 5?
    – A minor change in the NFS v3 client; not NFS v4, NFS v4.1, or pNFS.
    – If a domain name is specified in the NFS datastore configuration, a DNS lookup will occur on every ESX boot, and it will honor DNS round robin.
    – This can be used to distribute the NFS client logins for a datastore across a vSphere 5 cluster (not a single host) to different IP addresses, which is particularly useful for scale-out storage (example: EMC Isilon).
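A small Python illustration of the round-robin behaviour described above; nas.example.com and port 2049 are placeholders for whatever name and service the NFS datastore is mounted by.

```python
# Resolve the datastore's DNS name the way each ESXi host would at boot: if the
# name has several A records, different hosts can end up talking to different
# front-end IPs of a scale-out NAS, spreading the NFS client sessions around.
import socket

addrs = sorted({info[4][0] for info in
                socket.getaddrinfo("nas.example.com", 2049,
                                   proto=socket.IPPROTO_TCP)})
print("addresses behind the datastore name:", addrs)
```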
  • 32. vSphere 5 NFS Round Robin Behavior Demo
  • 33. Q: What is the long term effect of VMware on Storage Architectures? 1. Line between "storage" and "server" will continue to blend 2. Data will move closer to compute and compute will move closer to storage 3. Distance will get "blurry"
  • 34. The Speed Of Light …and leveraging distance
  • 35. EMC VPLEX The SAME information... At the SAME time… In SEPARATE locations. Milliseconds vs. Hours/Days
  • 36. EMC VPLEX – Mobility and HA for VMware Same information Separate locations Accessible anywhere Zero RTO+RPO with Witness See VMware/EMC joint session BPO2497!
  • 37. The Speed Of Light… and storage/compute mashups
  • 38. The Future Data Center (diagram: enterprise applications, data analytics, and Big Data applications, each running on vSphere over storage functionality)
  • 39. Project Lightning. Server To Storage. Accelerated Performance. Automated Tiering.
  • 40. Project Lightning: 1. Server Cache. 2. FAST Automation. 3. Distributed Caching. (Diagram: Lightning Distributed Cache module (DCC).)
  • 41. Server/Storage/Lightning Uber Munge Demo
  • 42. vFabric – A Cloud Application Platform: modern application development frameworks and modern data management systems. vFabric Data Director: self-service database provisioning, optimized for VMware.
  • 43. So… What is vFabric Data Director?
  • 44.
  • 45. State of the Art VMware Backup
    – Faster backup and faster recovery: leverage Changed Block Tracking for backups, proxy load balancing for superior throughput, and Changed Block Tracking for recovery, which dramatically speeds recovery time and increases productivity.
    – Accelerate your virtualization journey: next-generation backup enables more virtualization, sooner; virtualize mission-critical applications with application consistency.
    – vCloud Director support: backup by simply importing the VM into vCenter; recovery via redirected restore; protect the vCD Oracle DB.
  • 46. VADP: Faster Backup + Restore + Agentless
    – Faster backup: 1000 VMs, 50 TB => 43 minutes. Changed Block Tracking (CBT), client-side deduplication, proxy VM load balancing.
    – Faster recovery: unique changed-block restore (CBT restore), 30x faster than a traditional proxy restore. (Diagram: CBT blocks rather than full VM images moved LAN-free and over LAN/WAN.)
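A minimal Python sketch of why CBT helps in both directions; the dictionaries standing in for a virtual disk and a backup repository, and the changed-block sets, are invented for illustration (in VADP the changed-block list comes from vSphere itself).

```python
# Only the blocks reported as changed since the last pass cross the wire, for
# backup as well as for restore, instead of the whole VMDK image.
def cbt_backup(disk, changed_blocks, repo):
    """Copy just the blocks vSphere reports as changed since the last backup."""
    for lba in changed_blocks:
        repo[lba] = disk[lba]


def cbt_restore(disk, changed_since_backup, repo):
    """Roll back just the blocks that changed after the backup was taken."""
    for lba in changed_since_backup:
        disk[lba] = repo[lba]


disk = {lba: b"v1" for lba in range(8)}
repo = dict(disk)                      # an existing full backup
disk[3] = disk[5] = b"v2"              # the guest writes two blocks
cbt_backup(disk, {3, 5}, repo)         # incremental backup: 2 blocks moved, not 8
disk[5] = b"corrupt"
cbt_restore(disk, {5}, repo)           # restore: 1 block moved, not 8
print(disk[3], disk[5])                # b'v2' b'v2'
```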
  • 47. Integration with VMware vCenter Server: bottom-up integration makes management easier
    – Manage data protection and contain virtual machine sprawl
    – Browse to VMware vCenter Server through the Avamar GUI
    – Auto-discover virtual machines and their associated groups
    – Monitor backup/restore operations in the Activity Monitor
    – View VM protection status (guest/image/none)
    – Agent-less; a single point of management for VMware backup options
  • 48. Dedupe – It's a Must-Have: leverage the best approach, based on workloads
    – Direct backups to the optimal system based on workload attributes.
    – Data Domain integration (DD Boost) provides access to an additional 285 TB of dedupe capacity.
    (Diagram: Avamar Data Store and Avamar management, with Avamar clients on NAS, VM, DB, and SharePoint systems, plus DD Boost to a Data Domain system.)
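A minimal Python sketch of client-side deduplication of the kind described above; the chunk size, SHA-256 hashing, and in-memory dict "repository" are illustrative choices, not Avamar's actual format.

```python
# The client hashes each chunk and ships only chunks the repository has never
# seen, so repeated or unchanged data costs almost no backup bandwidth.
import hashlib

def backup(data, repo, chunk_size=4096):
    """Store unique chunks keyed by SHA-256; return how many bytes were 'sent'."""
    sent = 0
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in repo:          # only new content crosses the wire
            repo[digest] = chunk
            sent += len(chunk)
    return sent


repo = {}
print(backup(b"A" * 8192 + b"B" * 4096, repo))   # 8192: the repeated "A" chunk dedupes
print(backup(b"A" * 8192 + b"C" * 4096, repo))   # 4096: only the new "C" chunk is sent
```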
  • 49. Backup & Recovery Requirements for vCloudDirector • Must Have Avamar – Image-level recovery of data within a vApp – Image-level recovery of the entire vApp – Granular recovery from image-level backup – vCloud database protection • Oracle plug-in – vCenter database protection • SQL plug-in – High performance • CBT, Load Balancing – Scalability© Copyright 2011 EMC Corporation. All rights reserved. 53
  • 50. So… What does the FUTURE of backup hold?
  • 51.
  • 52. The Results
  • 53. THANK YOU
  • 54. YOUR YEAR-ROUND IT RESOURCE – access to everything you’ll need to know
  • 55. THE WHOLE TECHNOLOGY STACK from start to finish
  • 56. COMMENT & ANALYSIS Insights, interviews and the latest thinking on technology solutions
  • 57. VIDEO Your source of live information – all the presentations from our live events
  • 58. TECHNOLOGY LIBRARY Over 3,000 whitepapers, case studies, product overviews and press releases from all the leading IT vendors
  • 59. EVENTS, WEBINARS & PRESENTATIONS Missed the event? Download the presentations that interest you. Catch up with convenient webinars. Plan your next visit.
  • 60. Directory A comprehensive A-Z listing providing in-depth company overviews
  • 61. ALL FREE TO ACCESS 24/7
  • 62. online.ipexpo.co.uk