
INF-BC02891: Pushing the Backup Performance Envelope


Industry leaders Cisco, NetApp, VMware and Symantec teamed up to develop a best-practice framework and performance benchmark based on the VMware vSphere® Storage APIs - Data Protection (VADP). The test configuration uses the popular NetApp FlexPod environment, and the results prove that you can easily protect over 4 TB of virtual machine data per hour. Improved backup performance means more reliable backups, shorter backup windows and less impact on the vSphere infrastructure.

In this session, we will show how these performance numbers can be easily obtained with minimal hardware and a small budget. In addition to backup performance, we will also discuss restore performance considerations.

Key topics include:

• How to select the correct hardware for the best ROI
• Strategies for minimizing backup impact and maximizing backup throughput
• Performance characteristics of VADP
• SAN or NBD (network) transports: which is recommended?
• Configurations for the fastest possible restores


  1. INF-BC02891 - Pushing the Backup Performance Envelope: Taking vSphere Storage APIs for Data Protection to the Limit. Speakers: George Winter, Technical Product Manager, Symantec; Abdul Rasheed, Technical Marketing Manager, Symantec; Roger Andersson, Director, Technical Marketing, Cisco
  2. Agenda: 1. Goal of Performance Benchmark 2. Intro to vStorage APIs for Data Protection 3. Performance Benchmark Environment 4. vStorage APIs for Data Protection Best Practices 5. Benchmark Results
  3. Goal of Performance Benchmark
  4. Goal of Performance Benchmark • Evaluate performance characteristics of VADP: adjust the variables for maximum backup throughput, and for minimum intrusion on ESXi • Test-drive underlying infrastructure performance • Obtain real-world performance numbers • Develop VADP best-practice guidelines
  5. An Introduction: vStorage APIs for Data Protection
  6. vStorage APIs for Data Protection (VADP) • Introduced with vSphere 4; supports ESX 3.5 U2 and later • Change Block Tracking (CBT) provides true incremental backups • Requires no backup software on the virtual machines • Very simple implementation
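The incremental-backup benefit of CBT can be sketched with simple arithmetic: a full backup moves the whole virtual disk, while a CBT incremental moves only the changed extents. This is an illustrative model, not the VADP API; the extent list stands in for what a changed-block query would return.

```python
# Illustrative sketch (not the VADP API itself): how Change Block Tracking
# (CBT) shrinks an incremental backup. Changed blocks are modeled as a
# simple list of (offset, length) extents.

GIB = 1024 ** 3

def backup_bytes(vmdk_size, changed_extents=None):
    """Full backup copies the whole disk; a CBT incremental copies
    only the changed extents."""
    if changed_extents is None:          # no CBT info -> full backup
        return vmdk_size
    return sum(length for _offset, length in changed_extents)

vmdk = 60 * GIB                          # 60GB of data, as in the benchmark VMs
changed = [(0, 512 * 1024 ** 2), (10 * GIB, GIB)]   # ~1.5GB changed since last backup

full = backup_bytes(vmdk)
incr = backup_bytes(vmdk, changed)
print(f"full: {full / GIB:.0f} GiB, incremental: {incr / GIB:.1f} GiB")
```

With daily change rates in the low single-digit percent range, this is why the deck later recommends weekend fulls with daily incrementals.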
  7. VADP Backup Process (diagram: backup server, ESXi host, datastore)
  8. VADP Backup Process: VADP snapshot taken of the target VM(s)
  9. VADP Backup Process: VM backup sent to the backup server
  10. VADP Backup Process: snapshot released
  11. VADP Backup Snapshot Process: A Closer Look
  12. Detailed Look at the VADP Snapshot Process • During backup, VADP creates a snapshot: – The VSS provider flushes OS buffers within the VM – The snapshot is created; the VMDKs are frozen – A redo log is created; all writes are redirected to the redo log – VM data is copied to backup storage – The redo log is applied to the original VMDKs – The snapshot is released; the backup is complete • Why does this matter? – Each step involves significant I/O – Reducing concurrent snapshots per datastore improves backup performance and reliability – Incremental backups are quick and reduce snapshot impact
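The redo-log mechanics above can be captured in a toy model: while a snapshot exists the base VMDK is frozen and guest writes land in a redo log, and releasing the snapshot consolidates the redo log back into the base. This is a teaching sketch, not VMware's implementation.

```python
# Toy model of the VADP snapshot lifecycle (not VMware code): while a
# snapshot exists, guest writes go to a redo log; releasing the snapshot
# applies the redo log back to the base VMDK.

class ToyVmdk:
    def __init__(self):
        self.base = {}        # block -> data
        self.redo = None      # redo log, present only while a snapshot exists

    def snapshot(self):
        self.redo = {}        # base VMDK is now frozen for the backup reader

    def write(self, block, data):
        target = self.redo if self.redo is not None else self.base
        target[block] = data  # every write during backup is extra I/O

    def release_snapshot(self):
        self.base.update(self.redo)   # consolidate redo log into the base
        self.redo = None

disk = ToyVmdk()
disk.write(0, "before-backup")
disk.snapshot()                       # backup reads the frozen base
frozen_view = dict(disk.base)
disk.write(0, "during-backup")
assert frozen_view[0] == "before-backup"   # backup sees a consistent image
disk.release_snapshot()
assert disk.base[0] == "during-backup"     # consolidation applies the redo log
```

The longer the backup runs, the larger the redo log grows and the more consolidation I/O is needed, which is exactly why shorter (incremental) backups reduce snapshot impact.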
  13. VADP SAN Transport • Hypervisor-level snapshots are taken for virtual machines • Virtual machine data is streamed directly from storage to the backup server • The backup server processes and stores data on disk, deduplicated disk, tape or cloud • 100% true off-host backup • Suited for enterprise Fibre Channel and iSCSI storage
  14. VADP Network Block Device (NBD) Transport • Hypervisor-level snapshots are taken for virtual machines • Virtual machine data is streamed via ESXi VMkernel ports to the backup server • The backup server processes and stores data on disk, deduplicated disk, tape or cloud • Simplest to implement • Works with Fibre Channel, iSCSI, DAS and NFS • May strain the VMkernel if not planned well
  15. VADP HotAdd Transport • Hypervisor-level snapshots are taken for virtual machines • Virtual machine snapshots are attached to a dedicated proxy VM that streams data to the backup server • The backup server processes and stores data on disk, deduplicated disk, tape or cloud • Careful planning provides off-host backups • Throttle using traffic shaping • Fibre Channel, iSCSI, NFS, limited DAS
  16. Benchmark Environment
  17. Benchmark Environment • FlexPod specifications for VMware vSphere: NetApp FAS and V-Series storage, Cisco UCS 6248, VMware vSphere 5 • Symantec NetBackup 5220 VMware Ready backup appliance: VMware vStorage API support built in, inline Symantec V-Ray-capable deduplication • Labs sponsored by Datalink
  18. Benchmark Environment (diagram: Cisco UCS blades running 1x vCenter and 6x ESXi, Nexus switches, NetBackup 5220, NetApp FAS 6080 and V-Series 6080 block storage, connected over 8Gb Fibre Channel and 10Gb Ethernet)
  19. Benchmark Environment, Cisco UCS detail • 2 x 6248 Fabric Interconnects • 4 x B200-M3 servers (ESXi 5.0) • 2 x B230-M2 servers (ESXi 5.0) • 1 x B440-M2 server (vCenter 5.0) • 2 x 2208 IO Modules
  20. FlexPod: Joint Cisco and NetApp Solution for Virtualized Infrastructure and Cloud • Components: Cisco UCS B-Series blade servers and UCS Manager, Cisco Nexus family switches, NetApp FAS with the OnCommand software suite • Features: standard, pre-validated converged platform; virtualized and non-virtualized environments; flexible (one platform scales up or out to fit most workloads); add applications and workloads as needed • Benefits: flexibility, built-in data center efficiencies, reduced risk, secure multi-tenancy
  21. Cisco UCS • Designed from the ground up for the next-generation data center • Tightly integrates x86 servers, adapters and LAN/SAN connectivity • Policy-driven integrated management across blade and rack servers
  22. Cisco UCS capabilities • UCS Virtual Interface Adapter • I/O bandwidth capacity • Extended Memory Technology • Policy-based infrastructure management • Virtualizes more applications, increases VM density, and increases visibility and management
  23. UCS Central (upcoming product; feature set for first release subject to change): unified management at scale for up to 10,000 blade and rack servers, coming in 2H 2012 • Unifies management of multiple UCS domains • Leverages UCS Manager technology • Simplifies global operations with centralized inventory, faults, logs and server console • Delivers global policies, service profiles, ID pools and templates • Foundation for high availability, disaster recovery and workload mobility • Model-based API for large-scale automation
  24. Cisco UCS benchmarks holding world-record performance as of publication, spanning best CPU, virtualization (VMmark 1.x/2.x), cloud computing, enterprise application (Oracle E-Business Suite, TPC-C, TPC-H), enterprise middleware (SPECjbb2005, SPECjAppServer2004, SPECjEnterprise2010) and HPC (SPEComp, SPECint/SPECfp_rate, LinPack, LS-Dyna) results, on B200/B230/B250/B440 blade and C220/C240/C250/C260/C460 rack servers
  25. Benchmark Environment, NetApp FAS storage detail • Dual-head FAS 6080 cluster • 8 DS14MK4 storage shelves • 15k RPM, 300GB FC drives • Hosts storage for ESXi hosts 1 and 4
  26. Benchmark Environment, NetApp V-Series storage detail • Dual-head V-Series 6080 cluster • 8 DS14MK4 storage shelves • Block storage • Hosts storage for ESXi hosts 2, 3, 4 & 6
  27. Benchmark Environment, Symantec NetBackup 5220 detail • VADP access via 2 x 10GbE • VADP access via 2 x 8Gb FC • Built-in V-Ray deduplication
  28. Backup System Used for the Benchmark: NetBackup 5220 • Dual Intel E5620 CPUs • 24GB RAM • 4TB deduplication storage • Connectivity: 2 x 8Gb Fibre Channel, 2 x 10Gb Ethernet • Typical power consumption: under 415W
  29. vStorage APIs for Data Protection Best Practices
  30. VADP SAN Transport in the benchmark environment (Cisco and NetApp FlexPod with NetBackup 5220): virtual machine data is streamed directly from NetApp storage to the NetBackup 5220 for a 100% true off-host backup; suited for enterprise Fibre Channel and iSCSI storage
  31. VADP NBD Transport in the benchmark environment (Cisco and NetApp FlexPod with NetBackup 5220): virtual machine data is streamed via ESXi VMkernel ports to the NetBackup 5220; simplest to implement (Fibre Channel, iSCSI, DAS, NFS), but may strain the VMkernel if not planned well
  32. Intel case study: 10Gb Ethernet for virtualization and file transfer (chart: theoretical maximum vs. everyday copy workloads vs. the ESXi management traffic cap). Single-threaded streams cannot deliver full 10Gb throughput. Source: Intel case study, "Maximizing Gigabit performance for file transfer and virtualization"
  33. Intel case study, continued (chart: 8 streams vs. theoretical maximum and the ESXi management traffic cap): 3-4 streams are sufficient to reach the management traffic cap. Source: Intel case study, "Maximizing Gigabit performance for file transfer and virtualization"
  34. Network impact of NBD on the ESXi host (chart: single stream, 2 streams, 3 streams): diminishing returns on sustained throughput after 3-4 streams because of the traffic cap
  35. CPU impact of NBD on the ESXi host (chart: single stream, 2 streams, 3 streams): average 5% CPU utilization per stream for the entire host
  36. VADP Performance Characteristics: single ESXi host, NBD performance per stream (based on Cisco/VMware/NetBackup benchmark testing) • 1 stream: 100 MB/sec • 2 streams: 75 MB/sec each • 3 streams: 60 MB/sec each • 4 streams: 55 MB/sec each • 4-stream aggregate throughput = 220 MB/sec, and it creates VMkernel strain on the ESXi host
  37. VADP Performance Characteristics: four separate ESXi hosts, one stream per host • Each stream sustains its full 100 MB/sec • Aggregate throughput = 400 MB/sec • Creates far less strain on the VMkernel of each ESXi host
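The contrast between slides 36 and 37 reduces to simple arithmetic over the quoted per-stream rates: piling streams onto one host hits the VMkernel traffic cap, while spreading single streams across hosts keeps each at full speed.

```python
# Aggregate NBD throughput from the per-stream figures quoted on the
# previous two slides (Cisco/VMware/NetBackup benchmark numbers).

per_stream_single_host = {1: 100, 2: 75, 3: 60, 4: 55}  # MB/sec per stream

def single_host_aggregate(streams):
    """Total MB/sec when all streams hit one ESXi host's VMkernel."""
    return streams * per_stream_single_host[streams]

# One host, more streams: diminishing returns against the traffic cap.
print([single_host_aggregate(n) for n in (1, 2, 3, 4)])  # [100, 150, 180, 220]

# Four hosts, one stream each: every stream keeps its full rate.
hosts, streams_per_host, full_rate = 4, 1, 100
print(hosts * streams_per_host * full_rate)              # 400 (MB/sec)
```

Four streams on one host deliver 220 MB/sec; the same four streams spread over four hosts deliver 400 MB/sec.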
  38. Best Practices for NBD and 10Gb (tip for NetBackup users) • Limit backups to 3-4 streams from a given ESXi host • The VMkernel (management) port can stay on the default VM network; the rest of the bandwidth complements VM traffic • Automated in NetBackup using VMware Resource Limits
  39. Best Practices for NBD and 10Gb (tip for NetBackup users) • Back up VMs from multiple ESXi hosts to fill the pipe to the backup server – one backup from each of 10 ESXi hosts is better than 10 backups from a single ESXi host! – Effective utilization of the backup server – Concurrency = smaller backup window – Lower strain on each ESXi host – Automated in NetBackup using VMware Intelligent Policies
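The scheduling rule behind these two slides (cap concurrent streams per host, spread jobs across hosts) can be sketched as a small wave scheduler. NetBackup automates this via Resource Limits and Intelligent Policies; the host and VM names below are made up for illustration.

```python
# Hedged sketch of the best-practice scheduling rule: run at most
# max_streams_per_host concurrent backups against any ESXi host, and draw
# from every host in parallel. Not NetBackup's implementation.

def schedule(vms_by_host, max_streams_per_host=4):
    """Yield successive backup waves; each wave takes at most
    max_streams_per_host VMs from every ESXi host."""
    pending = {host: list(vms) for host, vms in vms_by_host.items()}
    while any(pending.values()):
        wave = []
        for host, vms in pending.items():
            take, pending[host] = vms[:max_streams_per_host], vms[max_streams_per_host:]
            wave.extend(take)
        yield wave

vms = {"esxi1": ["vm1", "vm2", "vm3", "vm4", "vm5"],   # hypothetical inventory
       "esxi2": ["vm6"],
       "esxi3": ["vm7", "vm8"]}
waves = list(schedule(vms, max_streams_per_host=2))
print(waves)   # the first wave pulls from all three hosts; esxi1's extras wait
```

Every wave keeps each host under its stream cap while still running many streams in aggregate, which is exactly the "fill the pipe without straining any one VMkernel" goal.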
  40. VADP HotAdd Transport in the benchmark environment (Cisco and NetApp FlexPod with NetBackup 5220): virtual machine snapshots are attached to a dedicated proxy VM that streams data to the NetBackup 5220; careful planning provides off-host backups; throttle using traffic shaping; Fibre Channel, iSCSI, NFS, limited DAS
  41. Best Practices for HotAdd Transport: proxy VM considerations (tips for NetBackup users) • Proxy VMs need access to the datastores of the VMs being protected • Recommended: a proxy VM on each ESXi host • Proxy VMs can undergo vMotion or VMware HA • Proxy VM operating systems: SuSE Linux Enterprise Server for VMware, SuSE Linux Enterprise Server, or Microsoft Windows • Proxy VMs do not need to be backed up; exclude them using an Intelligent Policy in NetBackup
  42. Best Practices for HotAdd Transport: scale-out deduplication (tip for NetBackup users) • Offload deduplication to the source • Only deduplicated data is sent to the backup server, reducing network utilization • NetBackup lets you select the deduplication location at the proxy-VM level for flexibility
  43. HotAdd with deduplication at the proxy: impact on the virtual machine network in ESXi (charts) • Datastore reads: ~200 MB/sec average for a single stream, ~175 MB/sec average per stream with 2 streams • Network: ~1 MB/sec average per stream • Up to 99% bandwidth savings!
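The bandwidth-savings claim follows directly from the two rates on the chart: the proxy reads the full stream from the datastore locally, but only deduplicated data crosses the network.

```python
# Network savings from source-side deduplication, using the rates quoted
# above: ~200 MB/sec read from the datastore by the proxy VM, ~1 MB/sec of
# deduplicated data actually sent to the backup server.

datastore_rate = 200.0   # MB/sec, read locally by the proxy VM
network_rate = 1.0       # MB/sec, sent over the wire after deduplication

savings = 1 - network_rate / datastore_rate
print(f"network bandwidth savings: {savings:.1%}")
```

At these rates the savings work out to 99.5%, consistent with the slide's "up to 99%" claim; the exact figure depends on how much of each VM's data is unique.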
  44. HotAdd with deduplication at the proxy: impact on ESXi CPU (charts) • Datastore reads: ~200 MB/sec average for a single stream, ~175 MB/sec average per stream with 2 streams • CPU: ~13% average utilization for a single stream
  45. Benchmark Results
  46. Performance Advantage: NetBackup 7.5 and the NetBackup 5220 for VMware (chart of backup throughput in MB/sec, built up across slides 46-51) • Benchmark 1, VMware Consolidated Backup: 63 MB/sec (0.27 TB/hr) • Benchmark 2, VADP with Cisco / NetBackup: 600 MB/sec (2.1 TB/hr) • Benchmark 3, VADP with Cisco / NetApp FlexPod / NetBackup: 1340 MB/sec (4.8 TB/hr), from a single VADP host!
  52. The same benchmarks expressed as virtual machines protected: 270, 2100 and 4800 VMs respectively, assuming weekend full backups, daily incremental backups, and 100GB VMs containing 60GB of data, from a single VADP host
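The MB/sec-to-TB/hr figures on the results chart are a unit conversion away from each other. A minimal sketch, assuming decimal units (1 MB = 10^6 bytes, 1 TB = 10^12 bytes), which closely matches the 2.1 and 4.8 TB/hr figures quoted:

```python
# Convert the benchmark throughputs from MB/sec to TB/hr, assuming
# decimal units throughout (1 TB = 1,000,000 MB).

def tb_per_hour(mb_per_sec):
    return mb_per_sec * 3600 / 1_000_000

for name, rate in [("VCB", 63),
                   ("VADP Cisco / NetBackup", 600),
                   ("VADP FlexPod / NetBackup", 1340)]:
    print(f"{name}: {rate} MB/sec ~ {tb_per_hour(rate):.2f} TB/hr")
```

At 4.8 TB/hr, the headline claim of protecting over 4 TB of virtual machine data per hour from a single VADP host follows directly.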
  53. Restore Performance (NetBackup SE Interlock 2012) • The restore process involves more I/O than the backup process – Disks (VMDKs) must first be created as the target of the restore – The type of VMDK can impact restore speed and the I/O required • A single restore typically won't saturate the restore path – Restore performance is typically about half of backup performance – As with backups, balance restores across ESXi hosts or datastores • Restore performance is highly dependent on VMDK format – Thin: faster restores when data is a small percentage of the VMDK's reserved size – Thick: faster when the VMDK is nearly full
  54. Thank you! George Winter, Symantec Corporation; Abdul Rasheed, Symantec Corporation; Roger Andersson, Cisco Systems, Inc. Copyright © 2012 Symantec Corporation. All rights reserved. Symantec and the Symantec Logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners. This document is provided for informational purposes only and is not intended as advertising. All warranties relating to the information in this document, either express or implied, are disclaimed to the maximum extent allowed by law. The information in this document is subject to change without notice.
