Proven Solution Guide: EMC Infrastructure for VMware View 5.0, EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7

This Proven Solutions Guide provides a detailed summary of the tests performed to validate an EMC infrastructure for VMware View 5.0 by using EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7. This document focuses on sizing and scalability, and highlights new features introduced in EMC VNX, VMware vSphere, and VMware View.


Transcript

  • 1. Proven Solutions Guide
    EMC® INFRASTRUCTURE FOR VMWARE® VIEW™ 5.0
    EMC VNX™ Series (NFS), VMware vSphere® 5.0, VMware® View™ 5.0, VMware® View™ Persona Management, and VMware® View™ Composer 2.7
      - Simplify management and decrease TCO
      - Guarantee a quality desktop experience
      - Minimize the risk of virtual desktop deployment
    EMC Solutions Group
    Abstract: This Proven Solutions Guide provides a detailed summary of the tests performed to validate an EMC infrastructure for VMware View 5.0 by using EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7. This document focuses on sizing and scalability, and highlights new features introduced in EMC VNX, VMware vSphere, and VMware View.
    July 2012
  • 2. Copyright © 2012 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware, ESX, VMware vCenter, VMware View, and VMware vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners. Part Number: h10737.2
    EMC Infrastructure for VMware View 5.0
    EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7—Proven Solutions Guide
  • 3. Table of contents
    1 Executive Summary
      Introduction to the EMC VNX series
        Introduction
        Software suites available
        Software packages available
      Business case
      Solution overview
      Key results and recommendations
    2 Introduction
      Document overview
        Use case definition
        Purpose
        Scope
        Not in scope
        Audience
        Prerequisites
        Terminology
      Reference Architecture
        Corresponding reference architecture
        Reference architecture diagram
      Configuration
        Hardware resources
        Software resources
  • 4. Table of contents
    3 VMware View Infrastructure
      VMware View 5.0
        Introduction
        Deploying VMware View components
        View Manager Server
        View Composer 2.7
        VMware View Persona Management
        Floating assignment desktop pools
        View Composer linked clones
      vSphere 5.0 Infrastructure
        vSphere 5.0 overview
        Desktop vSphere clusters
        Infrastructure vSphere cluster
      Windows infrastructure
        Introduction
        Microsoft Active Directory
        Microsoft SQL Server
        DNS server
        DHCP server
    4 Storage Design
      EMC VNX series storage architecture
        Introduction
        Storage layout
        Storage layout overview
        File system layout
        EMC VNX FAST Cache
        VSI for VMware vSphere
        vCenter Server storage layout
        VNX shared file systems
        VMware View Persona Management and folder redirection
  • 5. Table of contents
        EMC VNX for File Home Directory feature
        Capacity
    5 Network Design
      Considerations
        Network layout overview
        Logical design considerations
        Link aggregation
      VNX for File network configuration
        Data Mover ports
        LACP configuration on the Data Mover
        Data Mover interfaces
        Enable jumbo frames on Data Mover interface
      vSphere network configuration
        NIC teaming
        Increase the number of vSwitch virtual ports
        Enable jumbo frames for the VMkernel port used for NFS
      Cisco Nexus 5020 configuration
        Overview
        Cabling
        Enable jumbo frames on Nexus switch
        vPC for Data Mover ports
      Cisco Catalyst 6509 configuration
        Overview
        Cabling
        Server uplinks
  • 6. Table of contents
    6 Installation and Configuration
      Installation overview
      VMware View components
        VMware View installation overview
        VMware View setup
        VMware View desktop pool configuration
        VMware View Persona Management configuration
      Storage components
        Storage pools
        NFS active threads per Data Mover
        NFS performance fix
        Enable FAST Cache
        VNX Home Directory feature
    7 Testing and Validation
      Validated environment profile
        Profile characteristics
        Use cases
        Login VSI
        Login VSI launcher
        FAST Cache configuration
      Boot storm results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
  • 7. Table of contents
        vSphere CPU load
        vSphere disk response time
      Antivirus results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        vSphere CPU load
        vSphere disk response time
      Patch install results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        vSphere CPU load
        vSphere disk response time
      Login VSI results
        Test methodology
        Desktop logon time
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
  • 8. Table of contents
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        vSphere CPU load
        vSphere disk response time
      Recompose results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        vSphere CPU load
        vSphere disk response time
      Refresh results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        vSphere CPU load
        vSphere disk response time
  • 9. Table of contents
    8 Conclusion
      Summary
      References
        Supporting documents
        VMware documents
  • 10. List of Tables
    Table 1. Terminology
    Table 2. VMware View—Solution hardware
    Table 3. VMware View—Solution software
    Table 4. VNX7500—File systems
    Table 5. vSphere—Port groups in vSwitch0 and vSwitch1
    Table 6. VMware View—Environment profile
  • 11. List of Figures
    Figure 1. VMware View—Reference architecture
    Figure 2. VMware View—Linked clones
    Figure 3. VMware View—Logical representation of linked clone and replica disk
    Figure 4. VNX7500—Core reference architecture physical storage layout
    Figure 5. VNX7500—Full reference architecture physical storage layout
    Figure 6. VNX7500—NFS file system layout
    Figure 7. VNX7500—CIFS file system layout
    Figure 8. VMware View—Network layout overview
    Figure 9. VNX7500—Ports of the two Data Movers
    Figure 10. vSphere—vSwitch configuration
    Figure 11. vSphere—Load balancing policy
    Figure 12. vSphere—vSwitch virtual ports
    Figure 13. vSphere—vSwitch MTU setting
    Figure 14. vSphere—VMkernel port MTU setting
    Figure 15. VMware View—Select Automated Pool
    Figure 16. VMware View—Select View Composer linked clones
    Figure 17. VMware View—Select Provision Settings
    Figure 18. VMware View—vCenter Settings
    Figure 19. VMware View—Select Datastores
    Figure 20. VMware View—vCenter Settings
    Figure 21. VMware View—Guest Customization
    Figure 22. VMware View Persona Management—Initial configuration
    Figure 23. VMware View Persona Management—Folder Redirection policies
    Figure 24. VNX7500—Storage pools
    Figure 25. VNX7500—nThreads properties
    Figure 26. VNX7500—File System Mount Properties
    Figure 27. VNX7500—FAST Cache tab
    Figure 28. VNX7500—Enable FAST Cache
    Figure 29. VNX7500—Home Directory MMC snap-in
    Figure 30. VNX7500—Sample Home Directory User folder properties
    Figure 31. Boot storm—Disk IOPS for a single SAS drive
    Figure 32. Boot storm—Pool LUN IOPS and response time
    Figure 33. Boot storm—Storage processor total IOPS
    Figure 34. Boot storm—Storage processor utilization
    Figure 35. Boot storm—FAST Cache IOPS
    Figure 36. Boot storm—Data Mover CPU utilization
    Figure 37. Boot storm—Data Mover NFS load
    Figure 38. Boot storm—vSphere CPU load
    Figure 39. Boot storm—Average Guest Millisecond/Command counter
    Figure 40. Antivirus—Disk I/O for a single SAS drive
    Figure 41. Antivirus—Pool LUN IOPS and response time
    Figure 42. Antivirus—Storage processor IOPS
    Figure 43. Antivirus—Storage processor utilization
    Figure 44. Antivirus—FAST Cache IOPS
    Figure 45. Antivirus—Data Mover CPU utilization
    Figure 46. Antivirus—Data Mover NFS load
    Figure 47. Antivirus—vSphere CPU load
    Figure 48. Antivirus—Average Guest Millisecond/Command counter
    Figure 49. Patch install—Disk IOPS for a single SAS drive
  • 12. List of Figures
    Figure 50. Patch install—Pool LUN IOPS and response time
    Figure 51. Patch install—Storage processor IOPS
    Figure 52. Patch install—Storage processor utilization
    Figure 53. Patch install—FAST Cache IOPS
    Figure 54. Patch install—Data Mover CPU utilization
    Figure 55. Patch install—Data Mover NFS load
    Figure 56. Patch install—vSphere CPU load
    Figure 57. Patch install—Average Guest Millisecond/Command counter
    Figure 58. Login VSI—Desktop login time
    Figure 59. Login VSI—Disk IOPS for a single SAS drive
    Figure 60. Login VSI—Pool LUN IOPS and response time
    Figure 61. Login VSI—Storage processor IOPS
    Figure 62. Login VSI—Storage processor utilization
    Figure 63. Login VSI—FAST Cache IOPS
    Figure 64. Login VSI—Data Mover CPU utilization
    Figure 65. Login VSI—Data Mover NFS load
    Figure 66.
Login VSI — vSphere CPU load .................................................................................... 82Figure 67. Login VSI—Average Guest Millisecond/Command counter.......................................... 83Figure 68. Recompose—Disk IOPS for a single SAS drive ............................................................ 84Figure 69. Recompose—Pool LUN IOPS and response time ......................................................... 84Figure 70. Recompose—Storage processor IOPS ......................................................................... 85Figure 71. Recompose—Storage processor utilization ................................................................. 85Figure 72. Recompose—FAST Cache IOPS ................................................................................... 86Figure 73. Recompose—Data Mover CPU utilization .................................................................... 87Figure 74. Recompose—Data Mover NFS load ............................................................................. 87Figure 75. Recompose—vSphere CPU load .................................................................................. 88Figure 76. Recompose—Average Guest Millisecond/Command counter ...................................... 88Figure 77. Refresh—Disk IOPS for a single SAS drive ................................................................... 89Figure 78. Refresh—Pool LUN IOPS and response time................................................................ 90Figure 79. Refresh—Storage processor IOPS ............................................................................... 90Figure 80. Refresh—Storage processor utilization ....................................................................... 91Figure 81. Refresh—FAST Cache IOPS.......................................................................................... 91Figure 82. 
Refresh—Data Mover CPU utilization .......................................................................... 92Figure 83. Refresh—Data Mover NFS load ................................................................................... 93Figure 84. Refresh—vSphere CPU load ........................................................................................ 93Figure 85. Refresh—Average Guest Millisecond/Command counter ............................................ 94 EMC Infrastructure for VMware View 5.0 EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, VMware View Persona 12 Management, and VMware View Composer 2.7—Proven Solutions Guide
1 Executive Summary

This chapter summarizes the proven solution described in this document and includes the following sections:
 Introduction to the EMC VNX series
 Business case
 Solution overview
 Key results and recommendations

Introduction to the EMC VNX series

Introduction

The EMC® VNX™ series delivers uncompromising scalability and flexibility for the midtier user while providing market-leading simplicity and efficiency to minimize total cost of ownership. Customers can benefit from VNX features such as:
 Next-generation unified storage, optimized for virtualized applications.
 Extended cache by using Flash drives with Fully Automated Storage Tiering for Virtual Pools (FAST VP) and EMC FAST™ Cache that can be optimized for the highest system performance and lowest storage cost simultaneously on both block and file.
 Multiprotocol support for file, block, and object with object access through EMC Atmos™ Virtual Edition (Atmos VE).
 Simplified management with EMC Unisphere™ for a single management framework for all NAS, SAN, and replication needs.
 Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for Flash.
 6 Gb/s SAS back end with the latest drive technologies supported:
   3.5 in. 100 GB and 200 GB Flash drives; 3.5 in. 300 GB and 600 GB 15k or 10k rpm SAS disks; and 3.5 in. 1 TB, 2 TB, and 3 TB 7.2k rpm NL-SAS disks
   2.5 in. 100 GB and 200 GB Flash drives; and 300 GB, 600 GB, and 900 GB 10k rpm SAS disks
 Expanded EMC UltraFlex™ I/O connectivity—Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), Network File System (NFS) including parallel NFS (pNFS), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet.

The VNX series includes five software suites and three software packs that make it easier to attain the maximum overall benefits.
Software suites available
 VNX FAST Suite—Automatically optimizes for the highest system performance and the lowest storage cost simultaneously (FAST VP is not part of the FAST Suite for the VNX5100™).
 VNX Local Protection Suite—Practices safe data protection and repurposing.
 VNX Remote Protection Suite—Protects data against localized failures, outages, and disasters.
 VNX Application Protection Suite—Automates application copies and proves compliance.
 VNX Security and Compliance Suite—Keeps data safe from changes, deletions, and malicious activity.

Software packages available
 VNX Total Efficiency Pack—Includes all five software suites (not available for the VNX5100).
 VNX Total Protection Pack—Includes the local, remote, and application protection suites.
 VNX Total Value Pack—Includes all three protection software suites and the Security and Compliance Suite (the VNX5100 exclusively supports this package).

Business case

Customers require a scalable, tiered, and highly available infrastructure to deploy their virtual desktop environment. Several new technologies are available to assist them in architecting a virtual desktop solution. Customers need to know how best to use these technologies to maximize their investment, support service-level agreements, and reduce their desktop total cost of ownership. The purpose of this solution is to build a replica of a common customer end-user computing (EUC) environment, and validate the environment for performance, scalability, and functionality. Customers will achieve:
 Increased control and security of their global, mobile desktop environment, typically their most at-risk environment.
 Better end-user productivity with a more consistent environment.
 Simplified management with the environment contained in the data center.
 Better support of service-level agreements and compliance initiatives.
 Lower operational and maintenance costs.

Solution overview

This solution demonstrates how to use an EMC VNX platform to provide storage resources for a robust VMware® View™ 5.0 environment and Windows 7 virtual desktops.
Planning and designing the storage infrastructure for VMware View is a critical step, as the shared storage must be able to absorb the large bursts of input/output (I/O) that occur throughout the course of a day. These large I/O bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users can often adapt to slow performance, but unpredictable performance quickly frustrates them. To provide predictable performance for an EUC environment, the storage must be able to handle the peak I/O load from clients without resulting in high response times. Designing for this workload typically involves deploying many disks to handle brief periods of extreme I/O pressure, which is expensive to implement. This solution uses EMC VNX FAST Cache to reduce the number of disks required, and thus minimizes the cost.

Key results and recommendations

EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment. It not only reduces the response time for both read and write workloads, but also supports more virtual desktops on fewer drives by delivering greater IOPS density. Chapter 7: Testing and Validation provides more details.
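The drive-count benefit described above can be illustrated with a rough, back-of-the-envelope sizing sketch. The desktop count below matches this solution, but the per-desktop IOPS, per-drive IOPS, and cache hit ratio are illustrative assumptions, not measured results from this guide:

```python
import math

# Hypothetical sizing sketch (numbers are assumptions, not test results):
# estimate how a Flash cache layer such as FAST Cache reduces the number
# of SAS spindles needed, because the disks absorb only cache misses.

def sas_drives_needed(desktops, iops_per_desktop, sas_iops=180,
                      cache_hit_ratio=0.0):
    """Drives required for the portion of I/O not absorbed by cache."""
    total_iops = desktops * iops_per_desktop
    disk_iops = total_iops * (1.0 - cache_hit_ratio)
    return math.ceil(disk_iops / sas_iops)

if __name__ == "__main__":
    # Assumed: 5,000 desktops at 10 steady-state IOPS each, 180 IOPS per
    # 15k rpm SAS drive, and 80% of I/O absorbed by the Flash cache.
    print(sas_drives_needed(5000, 10))                       # no cache: 278
    print(sas_drives_needed(5000, 10, cache_hit_ratio=0.8))  # with cache: 56
```

Under these assumed values the cache cuts the spindle requirement roughly fivefold; the actual savings in the tested configuration are reported in Chapter 7.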
2 Introduction

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with insight into the challenges currently faced by its customers. This Proven Solutions Guide summarizes a series of best practices that were discovered or validated during testing of the EMC infrastructure for VMware View 5.0 solution by using the following products:
 EMC VNX series
 VMware® View™ Manager 5.0
 VMware® View™ Composer 2.7
 VMware® View™ Persona Management
 VMware vSphere® 5.0

This chapter includes the following sections:
 Document overview
 Reference architecture
 Prerequisites and supporting documentation
 Terminology

Document overview

Use case definition

The following seven use cases are examined in this solution:
 Boot storm
 Antivirus scan
 Microsoft security patch install
 Login storm
 User workload simulated with the Login Consultants Login VSI 3.5 tool
 View recompose
 View refresh

Chapter 7: Testing and Validation contains the test definitions and results for each use case.
Purpose

The purpose of this solution is to provide a virtualized infrastructure for virtual desktops powered by VMware View 5.0, VMware vSphere 5.0, EMC VNX series (NFS), VNX FAST Cache, and storage pools. This solution includes all the components required to run this environment, such as the infrastructure hardware, software platforms including Microsoft Active Directory, and the required VMware View configuration. Information in this document can be used as the basis for a solution build, white paper, best practices document, or training.

Scope

This Proven Solutions Guide contains the results observed from testing the EMC Infrastructure for VMware View 5.0 solution. The objectives of this testing are to establish:
 A reference architecture of validated hardware and software that permits easy and repeatable deployment of the solution.
 The best practices for storage configuration that provide optimal performance, scalability, and protection in the context of the midtier enterprise market.

Not in scope

Implementation instructions are beyond the scope of this document. Information on how to install and configure VMware View 5.0 components, vSphere 5.0, and the required EMC products is outside the scope of this document. References to supporting documentation for these products are provided where applicable.

Audience

The intended audience for this Proven Solutions Guide is:
 Internal EMC personnel
 EMC partners
 Customers

Prerequisites

It is assumed the reader has a general knowledge of the following products:
 VMware vSphere 5.0
 VMware View 5.0
 EMC VNX series
 Cisco Nexus and Catalyst switches

Terminology

Table 1 lists the terms frequently used in this document.

Table 1. Terminology

 EMC VNX FAST Cache: A feature that enables the use of Flash drives as an expanded cache layer for the array.
 Linked clone: A virtual desktop created by VMware View Composer from a writeable snapshot paired with a read-only replica of a master image.
 Login VSI: A third-party benchmarking tool developed by Login Consultants that simulates real-world EUC workloads. Login VSI uses an AutoIT script and determines the maximum system capacity based on the response time of the users.
 Replica: A read-only copy of a master image used to deploy linked clones.
 VMware View Composer: Integrates with VMware View Manager to provide advanced image management and storage optimization.
 VMware View Persona Management: Preserves user profiles and dynamically synchronizes them with a remote profile repository.
 Floating assignment desktop pool: A pool of desktops that are assigned to users at login time. After logout, these desktops are returned to the pool for others to use.

Reference architecture

Corresponding reference architecture

This Proven Solutions Guide has a corresponding Reference Architecture document that is available on the EMC Online Support website and EMC.com. EMC Infrastructure for VMware View 5.0—EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7—Reference Architecture provides more details. If you do not have access to these documents, contact your EMC representative.

The reference architecture and the results in this Proven Solutions Guide are valid for 5,000 Windows 7 virtual desktops conforming to the workload described in the Validated environment profile section.
Reference architecture diagram

Figure 1 shows the reference architecture of the midsize solution.

Figure 1. VMware View—Reference architecture

Configuration

Hardware resources

Table 2 lists the hardware used to validate the solution.

Table 2. VMware View—Solution hardware

 EMC VNX7500™ (quantity: 1)—VNX shared storage for the core solution. Configured with four Data Movers (3 active and 1 passive) and eight disk-array enclosures (DAEs) containing:
   Seventy-two 600 GB, 15k rpm 3.5 in. SAS disks
   Eleven 200 GB, 3.5 in. Flash drives
 VNX7500 additions (optional; for user data)—One additional Data Mover and four additional DAEs with sixty-six 2 TB, 7,200 rpm 3.5 in. NL-SAS disks.
 VNX7500 additions (optional; for infrastructure storage)—Ten additional 600 GB, 15k rpm 3.5 in. SAS disks.
 Intel-based servers (quantity: 30)—Virtual desktop vSphere clusters one to six. Each server has:
   Memory: 96 GB of RAM
   CPU: Two Intel Xeon E5649 2.53 GHz hex-core processors
   Internal storage: One 73 GB internal SAS disk
   External storage: VNX7500 (NFS)
   NIC: Quad-port Broadcom BCM5709 1000Base-T adapters
 Intel-based servers (quantity: 3)—Optional; vSphere cluster to host infrastructure virtual machines. Same configuration as the thirty servers above.
 Intel-based servers (quantity: 9)—Virtual desktop vSphere clusters seven to nine. Each server has:
   Memory: 256 GB of RAM
   CPU: Four Intel Xeon E7-4860 2.27 GHz deca-core processors
   Internal storage: One 73 GB internal SAS disk
   External storage: VNX7500 (NFS)
   NIC: Quad-port Broadcom BCM5709 1000Base-T adapters
 Cisco Catalyst 6509 (quantity: 2)—1-gigabit host connections distributed over two line cards. Configured with a WS-6509-E switch, WS-x6748 1-gigabit line cards, and a WS-SUP720-3B supervisor.
 Cisco Nexus 5020 (quantity: 2)—Redundant LAN A/B configuration with forty 10-gigabit ports.
Software resources

Table 3 lists the software used to validate the solution.

Table 3. VMware View—Solution software

VNX7500 (shared storage, file systems):
 VNX OE for File: Release 7.0.50.2
 VNX OE for Block: Release 31 (05.31.000.5.704)
 VSI for VMware vSphere: Unified Storage Management: Version 5.2
 VSI for VMware vSphere: Storage Viewer: Version 5.2

Cisco Nexus:
 Cisco Nexus 5020: Version 5.1(5)

vSphere servers:
 vSphere: 5.0.0 (515841)
 EMC vSphere Storage APIs for Array Integration (VAAI) plug-in: Version 1.0-10

VMware servers:
 OS: Windows 2008 R2 SP1
 VMware vCenter Server: 5.0
 VMware View Manager: 5.0
 VMware View Composer: 2.7

Virtual desktops (software used to generate the test load):
 OS: MS Windows 7 Enterprise SP1 (32-bit)
 VMware tools: 8.6.0 build-515842
 Microsoft Office: Office Enterprise 2007 (Version 12.0.6562.5003)
 Internet Explorer: 8.0.7601.17514
 Adobe Reader: 9.1.0
 McAfee Virus Scan: 8.7 Enterprise
 Adobe Flash Player: 11
 Bullzip PDF Printer: 6.0.0.865
 Login VSI (EUC workload generator): 3.5 Professional Edition
3 VMware View Infrastructure

This chapter describes the general design and layout instructions that apply to the specific components used during the development of this solution. This chapter includes the following sections:
 VMware View 5.0
 vSphere 5.0 infrastructure
 Windows infrastructure

VMware View 5.0

Introduction

VMware View delivers rich and personalized virtual desktops as a managed service from a virtualization platform built to deliver the entire desktop, including the operating system, applications, and user data. With VMware View 5.0, administrators can virtualize the operating system, applications, and user data, and deliver modern desktops to end users. VMware View 5.0 provides centralized, automated management of these components with increased control and cost savings. VMware View 5.0 also improves business agility while providing a flexible high-performance desktop experience for end users across a variety of network conditions.

Deploying VMware View components

This solution used four VMware View Manager Server instances, each capable of scaling up to 2,000 virtual desktops; the four instances also provide redundancy if one server becomes unavailable. Deployments of up to 10,000 virtual desktops are possible by using additional View Manager servers.

The core elements of a VMware View 5.0 implementation are:
 VMware® View™ Manager Connection Server 5.0
 VMware View Composer 2.7
 VMware View Persona Management
 VMware vSphere 5.0

Additionally, the following components are required to provide the infrastructure for a VMware View 5.0 deployment:
 Microsoft Active Directory
 Microsoft SQL Server
 DNS server
 Dynamic Host Configuration Protocol (DHCP) server
View Manager Server

The View Manager Connection Server is the central management location for virtual desktops and has the following key roles:
 Broker connections between the users and the virtual desktops
 Control the creation and retirement of virtual desktop images
 Assign users to desktops
 Control the state of the virtual desktops
 Control access to the virtual desktops

View Composer 2.7

View Composer 2.7 works directly with vCenter Server to deploy, customize, and maintain the state of the virtual desktops when using linked clones. The tiered storage capabilities of View Composer 2.7 enable the read-only replica and the linked clone disk images to reside on dedicated storage, which allows for superior scaling in large configurations. View Composer is installed on each of the three vCenter servers.

VMware View Persona Management

VMware View Persona Management is a new feature introduced with VMware View 5.0 that preserves user profiles and dynamically synchronizes them with a remote profile repository. VMware View Persona Management does not require the configuration of Windows roaming profiles, eliminating the need to use Active Directory to manage View user profiles.
 The combination of VMware View Persona Management and floating assignment desktop pools provides the experience of a dedicated desktop while potentially minimizing the number of desktops required in an organization.
 With VMware View Persona Management, a user's remote profile is dynamically downloaded when the user logs in to a View desktop. View downloads persona information only when the user needs it.
 During login, VMware View Persona Management downloads only the files that Windows requires, such as user registry files. Other files are copied to the local desktop when the user or an application opens them from the local profile folder.
 VMware View Persona Management copies recent changes in the local profile to the remote repository at a configurable interval.
 During logoff, only files that were updated since the last replication are copied to the remote repository.
 VMware View Persona Management can be configured to store user profiles in a secure, centralized repository.

Floating assignment desktop pools

Floating assignment desktop pools can reduce the number of desktops required in situations where not all users need to be logged in to desktops at the same time. In addition, desktop storage requirements are reduced because a persistent data disk is not required for each desktop. The combination of VMware View Persona Management and floating assignment desktop pools provides the experience of a dedicated
desktop while potentially minimizing the number of desktops required in an organization.

View Composer linked clones

VMware View with View Composer uses the concept of linked clones to quickly provision virtual desktops. This solution uses the tiered storage feature of View Composer to build linked clones and place their replica images on separate datastores, as shown in Figure 2.

Figure 2. VMware View—Linked clones

The operating system reads all the common data from the read-only replica, while the unique data that is created by the operating system or user is stored on the linked clone. A logical representation of this relationship is shown in Figure 3.

Figure 3. VMware View–Logical representation of linked clone and replica disk
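The capacity advantage of the linked-clone model can be sketched with simple arithmetic. The figures below are illustrative assumptions only (the replica count matches the nine desktop clusters in this solution, but the master image size and per-desktop delta are hypothetical):

```python
# Hypothetical capacity sketch: full clones copy the entire master image
# per desktop, while linked clones share a read-only replica and store
# only each desktop's unique writes (its delta disk).

def full_clone_gb(desktops, master_image_gb):
    return desktops * master_image_gb

def linked_clone_gb(desktops, master_image_gb, replicas, avg_delta_gb):
    # Each replica is one full copy of the master image; each desktop
    # contributes only its delta.
    return replicas * master_image_gb + desktops * avg_delta_gb

if __name__ == "__main__":
    # Assumed: 5,000 desktops, 20 GB master image, 9 replicas
    # (one per desktop cluster), 2 GB average delta per desktop.
    print(full_clone_gb(5000, 20))          # 100000 GB
    print(linked_clone_gb(5000, 20, 9, 2))  # 10180 GB
```

Even with generous delta growth, the shared-replica design needs roughly an order of magnitude less capacity than full clones under these assumptions, which is why the replica and linked-clone datastores can be sized and tiered separately.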
vSphere 5.0 infrastructure

vSphere 5.0 overview

VMware vSphere 5.0 is the market-leading virtualization hypervisor used across thousands of IT environments around the world. VMware vSphere 5.0 virtualizes computer hardware resources, including the CPUs, RAM, hard disks, and network controllers, to create fully functional virtual machines. Each virtual machine runs its own operating system and applications just like a physical computer. The high-availability features in VMware vSphere 5.0, along with VMware Distributed Resource Scheduler (DRS) and Storage vMotion®, enable seamless migration of virtual desktops from one vSphere server to another with minimal or no disruption to the customers.

Desktop vSphere clusters

This solution deploys nine vSphere clusters to host virtual desktops. The desktop vSphere 5.0 clusters consist of two different vSphere 5.0 server configurations. These server types were chosen due to availability; similar results should be achievable with a variety of server configurations, provided that the ratios of server RAM per desktop and desktops per CPU core are upheld.
 Cluster Type A consists of three quad deca-core vSphere 5.0 servers supporting 741 desktops, resulting in around 246 to 247 virtual machines per server. Each cluster has access to five NFS datastores: four for storing desktop linked clones and one for storing a desktop replica image.
 Cluster Type B consists of five dual hex-core vSphere 5.0 servers supporting 463 additional desktops, resulting in around 92 to 93 virtual machines per vSphere server.
Each cluster has access to four NFS datastores: three for storing desktop linked clones and one for storing a desktop replica image.

Infrastructure vSphere cluster

One vSphere cluster is deployed in this solution for hosting the infrastructure servers. This cluster is not required if the resources needed to host the infrastructure servers are already present within the host environment. The infrastructure vSphere 5.0 cluster consists of three dual hex-core vSphere 5.0 servers. The cluster has access to a single datastore that is used for storing the infrastructure server virtual machines. The infrastructure cluster hosts the following virtual machines:
 Two Windows 2008 R2 SP1 domain controllers—Provide DNS, Active Directory, and DHCP services.
 Three VMware vCenter 5 Servers, each running on Windows 2008 R2 SP1—Provide management services for the VMware clusters and View Composer. One of these three servers also runs vSphere 5.0 Update Manager.
 Four VMware View 5.0 Manager Servers, each running on Windows 2008 R2 SP1—Provide services to manage the virtual desktops.
 SQL Server 2008 SP2 on Windows 2008 R2 SP1—Hosts databases for each of the three VMware vCenter Servers and their associated VMware View Composer installations.
 Windows 7 Key Management Service (KMS)—Provides a method to activate Windows 7 desktops.

Windows infrastructure

Introduction

Microsoft Windows provides the infrastructure that is used to support the virtual desktops and includes the following components:
 Microsoft Active Directory
 Microsoft SQL Server
 DNS server
 DHCP server

Microsoft Active Directory

The Windows domain controllers run the Active Directory service that provides the framework to manage and support the virtual desktop environment. Active Directory performs the following functions:
 Manages the identities of users and their information
 Applies group policy objects
 Deploys software and updates

Microsoft SQL Server

Microsoft SQL Server is a relational database management system (RDBMS). A dedicated SQL Server 2008 SP2 instance is used to provide the required databases to vCenter Server and View Composer.

DNS server

DNS is the backbone of Active Directory and provides the primary name resolution mechanism for Windows servers and clients. In this solution, the DNS role is enabled on the domain controllers.

DHCP server

The DHCP server provides the IP address, DNS server name, gateway address, and other information to the virtual desktops. In this solution, the DHCP role is enabled on one of the domain controllers. The DHCP scope is configured with an IP range that is large enough to support 5,000 virtual desktops.
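As a quick sanity check on scope sizing, the smallest conventional IPv4 subnet with room for 5,000 leases can be computed with Python's standard `ipaddress` module. This is a sketch only; the document does not specify the actual addressing plan, and the `10.0.0.0` base network below is a hypothetical example:

```python
import ipaddress

# Find the longest (numerically largest) IPv4 prefix whose usable host
# count still covers the required number of DHCP leases.

def smallest_prefix(hosts_needed):
    for prefix in range(30, 0, -1):
        usable = 2 ** (32 - prefix) - 2  # minus network and broadcast
        if usable >= hosts_needed:
            return prefix
    raise ValueError("no IPv4 prefix is large enough")

if __name__ == "__main__":
    prefix = smallest_prefix(5000)  # a /19 offers 8,190 usable addresses
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    print(prefix, net.num_addresses - 2)  # prints: 19 8190
```

A /20 (4,094 usable hosts) falls short, so a scope of at least a /19 is needed before accounting for infrastructure servers, reservations, and exclusions.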
4 Storage Design

This chapter describes the storage design that applies to the specific components of this solution.

EMC VNX series storage architecture

Introduction

The EMC VNX series is a dedicated network server optimized for file and block access that delivers high-end features in a scalable and easy-to-use package. The VNX series delivers a single-box block and file solution that offers a centralized point of management for distributed environments. This makes it possible to dynamically grow, share, and cost-effectively manage multiprotocol file systems and provide multiprotocol block access. Administrators can take advantage of simultaneous support for the NFS and CIFS protocols by enabling Windows and Linux/UNIX clients to share files by using the sophisticated file-locking mechanisms of VNX for File and VNX for Block for high-bandwidth or latency-sensitive applications.

This solution uses file-based storage to leverage the benefits that each of the following provides:
 File-based storage over the NFS protocol is used to store the VMDK files for all virtual desktops. This has the following benefits:
   The Unified Storage Management plug-in provides seamless integration with VMware vSphere to simplify the provisioning of datastores or virtual machines.
   The EMC vSphere Storage APIs for Array Integration (VAAI) plug-in for vSphere 5.0 supports the vSphere 5.0 VAAI primitives for NFS on the EMC VNX platform.
 File-based storage over the CIFS protocol is used to store user data and the VMware View Persona Management repository. This has the following benefits:
   Redirection of user data and VMware View Persona Management data to a central location for easy backup and administration.
   Single instancing and compression of unstructured user data to provide the highest storage utilization and efficiency.
This section explains the configuration of the storage provisioned over NFS for the vSphere cluster to store the VMDK images, and the storage provisioned over CIFS to redirect user data and provide storage for the VMware View Persona Management repository.
Storage layout
Figure 4 shows the physical storage layout of the disks in the core reference architecture; this configuration accommodates only the virtual desktops. Figure 5 shows the physical storage layout of the disks in the full reference architecture, which includes the capacity needed to store the infrastructure servers and user data. The Storage layout overview section provides more details about the physical storage configuration. The disks are distributed among four different VNX7500 storage buses to maximize array performance.

Figure 4. VNX7500–Core reference architecture physical storage layout
Figure 5. VNX7500–Full reference architecture physical storage layout
Storage layout overview
The following configurations are used in the core reference architecture as shown in Figure 4:
 Four SAS disks (0_0_0 to 0_0_3) are used for the VNX OE.
 Disks 0_0_6, 0_0_7, 4_0_2, and 6_0_4 are hot spares. These disks are denoted as Hot Spares in the storage layout diagram.
 Ten Flash drives (0_0_4, 0_0_5, 2_0_0, 2_0_1, 4_0_0, 4_0_1, and 6_0_0 to 6_0_3) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.
 Sixty-five SAS disks (0_1_0 to 0_1_14, 2_1_0 to 2_1_14, 4_1_0 to 4_1_14, 6_1_0 to 6_1_14, and 6_0_4 to 6_0_8) in a RAID 5 storage pool (Storage Pool 0) are used to store linked clones and replicas. FAST Cache is enabled for the entire pool. Forty-two NFS file systems are provisioned and presented to the vSphere servers as datastores.

The following configurations are used in the full reference architecture as shown in Figure 5:
 Four SAS disks (0_0_0 to 0_0_3) are used for the VNX OE.
 Disks 0_0_6, 0_0_7, 0_0_8, 4_0_7, 4_0_8, 6_0_10, and 6_0_11 are hot spares. These disks are denoted as Hot Spares in the storage layout diagram.
 Ten Flash drives (0_0_4, 0_0_5, 2_0_0, 2_0_1, 4_0_0, 4_0_1, and 6_0_0 to 6_0_3) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.
 Sixty-five SAS disks (0_1_0 to 0_1_14, 2_1_0 to 2_1_14, 4_1_0 to 4_1_14, 6_1_0 to 6_1_14, and 6_0_4 to 6_0_8) in a RAID 5 storage pool (Storage Pool 0) are used to store linked clones and replicas. FAST Cache is enabled for the entire pool. Forty-two NFS file systems are provisioned and presented to the vSphere servers as datastores.
 Ten SAS disks (2_0_2 to 2_0_6 and 4_0_2 to 4_0_6) in a RAID 5 storage pool (Storage Pool 1) are used to store infrastructure virtual machines. A 1.5-TB file system is provisioned and presented to the vSphere servers as a datastore.
 Sixty-four NL-SAS disks (0_2_0 to 0_2_14, 2_2_0 to 2_2_14, 4_2_0 to 4_2_14, 6_2_0 to 6_2_14, 0_0_9, 2_0_7, 4_0_7, and 6_0_9) are configured in a RAID 6 (6+2) storage pool (Storage Pool 2) and used to store user data and the VMware View Persona Management repository. FAST Cache is enabled for the entire pool. Two VNX file systems are provisioned and presented as Windows file shares.
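The drive counts in the full reference architecture can be tallied as a quick consistency check. This is an illustrative sketch only; the role names and counts below are transcribed directly from the bullets above.

```python
# Tally the drive roles listed for the full reference architecture.
# Counts are transcribed from the storage layout overview bullets;
# this is an illustrative consistency check, not validated content.
full_architecture = {
    "VNX OE (SAS)": 4,
    "Hot spares": 7,
    "FAST Cache (Flash)": 10,
    "Storage Pool 0 - desktops (SAS)": 65,
    "Storage Pool 1 - infrastructure (SAS)": 10,
    "Storage Pool 2 - user data (NL-SAS)": 64,
}

total = sum(full_architecture.values())
for role, count in full_architecture.items():
    print(f"{role:40s} {count:3d}")
print(f"{'Total drives':40s} {total:3d}")
```

The tally comes to 160 drives, which matches a fully populated four-bus layout when each listed role is counted once.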
File system layout
Figure 6 shows the layout of the NFS file systems.

Figure 6. VNX7500–NFS file system layout

Sixty-five LUNs of 323 GB each are provisioned out of a RAID 5 storage pool configured with 65 SAS drives. Sixty-five drives are used because the block-based storage pool internally creates 4+1 RAID 5 groups, so the number of SAS drives used must be a multiple of five. Likewise, sixty-five LUNs are used because AVM stripes across five dvols, so the number of dvols must also be a multiple of five. The LUNs are presented to VNX File as dvols that belong to a system-defined pool.

Forty-two file systems are then provisioned out of an Automatic Volume Management (AVM) system pool and are presented to the vSphere servers as datastores. File systems 1 to 9 are used to store replicas, and file systems 20 to 53 are used to store the linked clones. A total of 5,000 desktops are provisioned, and each replica is responsible for 741 or 463 linked clones, depending on the server type used in the desktop cluster.

Starting from VNX for File version 7.0.35.3, AVM is enhanced to intelligently stripe across dvols that belong to the same block-based storage pool. There is no need to manually create striped volumes and add them to user-defined file-based pools.
Figure 7 shows the layout of the optional CIFS file systems.

Figure 7. VNX7500–CIFS file system layout

Sixty-five LUNs of 231 GB each are provisioned out of a RAID 6 storage pool configured with 64 NL-SAS drives. Sixty-four drives are used because the block-based storage pool internally creates 6+2 RAID 6 groups, so the number of NL-SAS drives used must be a multiple of eight. Likewise, sixty-five LUNs are used because AVM stripes across five dvols, so the number of dvols must be a multiple of five. The LUNs are presented to VNX File as dvols that belong to a system-defined pool.

Like the NFS file systems, the CIFS file systems are provisioned from an AVM system pool to store user home directories and the VMware View Persona Management repository. The two file systems are grouped in the same storage pool because their I/O profiles are sequential. FAST Cache is enabled on both storage pools that are used to store the NFS and CIFS file systems.

EMC VNX FAST Cache
VNX Fully Automated Storage Tiering (FAST) Cache, a part of the VNX FAST Suite, uses Flash drives as an expanded cache layer for the array. The VNX7500 is configured with ten 200 GB Flash drives in a RAID 1 configuration for a 916 GB read/write-capable cache. This is the minimum amount of FAST Cache. Larger configurations are supported for scaling beyond 5,000 desktops.

FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to the FAST Cache, and subsequent accesses to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to the Flash drives. The use of Flash drives dramatically
improves the response times for very active data and reduces data hot spots that can occur within the LUN.

FAST Cache is an extended read/write cache that enables VMware View to deliver consistent performance at Flash-drive speeds by absorbing read-heavy activities (such as boot storms and antivirus scans) and write-heavy workloads (such as operating system patches and application updates). This extended read/write cache is an ideal caching mechanism for View Composer because the base desktop image and other active user data are accessed so frequently that the data is serviced directly from the Flash drives without accessing the slower drives at the lower storage tier.

VSI for VMware vSphere
EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere client that provides a single management interface for managing EMC storage within the vSphere environment. Features can be added and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience that allows new features to be introduced rapidly in response to changing customer requirements.

The following VSI features were used during the validation testing:
 Storage Viewer (SV)—Extends the vSphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware vSphere hosts and virtual machines. SV presents the underlying storage details to the virtual data center administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.
 Unified Storage Management—Simplifies storage administration of the EMC VNX platforms. It enables VMware administrators to provision new NFS and VMFS datastores, and RDM volumes, seamlessly within the vSphere client.
The EMC VSI for VMware product guides, available on the EMC Online Support website, provide more information.

vCenter Server storage layout
FS1 to FS9—Each of these 25 GB datastores stores a replica that is responsible for either 741 or 463 linked-clone desktops, determined by the vSphere cluster configuration. The input/output to these LUNs is strictly read-only except during operations that require copying a new replica into the datastore.

FS20 to FS53—Each of these 600 GB datastores accommodates an average of 152 virtual desktops, which allows each desktop to grow to a maximum average size of approximately 3.9 GB. Each pool of desktops provisioned in View Manager is balanced across either three datastores in each hex-core server cluster or five datastores in each deca-core server cluster.

VNX shared file systems
Virtual desktops use two VNX shared file systems, one for VMware View Persona Management data and the other to redirect user storage. Each file system is exported to the environment through a CIFS share.
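The linked-clone datastore sizing above can be reproduced with simple arithmetic. This is an illustrative check using only the figures quoted in this section (34 datastores FS20 to FS53, 152 desktops each, 600 GB per datastore):

```python
# Linked-clone datastore sizing, using figures from this section.
# Illustrative arithmetic only; all numbers come from the text above.
datastores = 53 - 20 + 1          # FS20 to FS53 -> 34 linked-clone datastores
desktops_per_datastore = 152      # average stated above
datastore_size_gb = 600

total_capacity = datastores * desktops_per_datastore
per_desktop_gb = datastore_size_gb / desktops_per_datastore

print(f"Linked-clone datastores:            {datastores}")
print(f"Desktop capacity across datastores: {total_capacity}")
print(f"Average growth room per desktop:    {per_desktop_gb:.1f} GB")
```

The 34 datastores provide 5,168 desktop slots, comfortably covering the 5,000 provisioned desktops, and 600 GB divided by 152 desktops gives the approximately 3.9 GB average growth figure quoted above.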
Table 4 lists the file systems used for user profiles and redirected user storage.

Table 4. VNX7500—File systems

File system    Use                                         Size
profiles_fs    VMware View Persona Management repository   4 TB
userdata1_fs   User data                                   5 TB

VMware View Persona Management and folder redirection
Local user profiles are not recommended in an EUC environment. One reason for this is that a performance penalty is incurred when a new local profile is created as a user logs in to a new desktop image. Solutions such as VMware View Persona Management and folder redirection enable user data to be stored centrally in a network location that resides on a CIFS share hosted by the EMC VNX array. This reduces the performance impact during user logon while allowing user data to roam with the profiles.

EMC VNX for File Home Directory feature
The EMC VNX for File Home Directory feature uses the userdata1_fs file system to automatically map the H: drive of each virtual desktop to the user's own dedicated subfolder on the share. This ensures that each user has exclusive rights to a dedicated home drive share. This share is created by the File Home Directory feature and does not need to be created manually; the Home Directory feature automatically maps this share for each user. The Documents folder for each user is also redirected to this share, which allows users to recover the data in the Documents folder by using VNX Snapshots for File. The file system is set at an initial size of 1 TB and extends itself automatically when more space is required.

Capacity
The file systems leverage EMC Virtual Provisioning™ and compression to provide flexibility and increased storage efficiency. If single instancing and compression are enabled, unstructured data, such as user documents, typically sees about a 50 percent reduction in consumed storage.
The VNX file systems for the VMware View Persona Management repository and user documents are configured as follows:
 profiles_fs is configured to consume 4 TB of space. With 50 percent space savings, each profile can grow up to 1.6 GB in size. The file system extends if more space is required.
 userdata1_fs is configured to consume 5 TB of space. With 50 percent space savings, each user is able to store 10 GB of data. The file system extends if more space is required.
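The 1.6 GB per-profile figure above follows from the file system size and the compression savings. A minimal sketch, assuming the 5,000-desktop user count used elsewhere in this guide:

```python
# Effective per-profile capacity for profiles_fs, assuming the
# 5,000-desktop user count from earlier in the chapter and the
# 50 percent single-instancing/compression savings quoted above.
users = 5000
profiles_fs_tb = 4
savings = 0.50                     # 50% reduction in consumed storage

physical_gb_per_user = profiles_fs_tb * 1024 / users
effective_gb_per_user = physical_gb_per_user / (1 - savings)

print(f"Physical space per profile: {physical_gb_per_user:.2f} GB")
print(f"Effective profile quota:    {effective_gb_per_user:.1f} GB")
```

With roughly 0.82 GB of physical space per user and 50 percent savings, each profile can effectively grow to about 1.6 GB, matching the bullet above.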
Chapter 5: Network Design

This chapter describes the network design used in this solution and contains the following sections:
 Considerations
 VNX for File network configuration
 vSphere network configuration
 Cisco Nexus 5020 configuration
 Cisco Catalyst 6509 configuration

Considerations

Network layout overview
Figure 8 shows the 10-gigabit Ethernet (10 GbE) connectivity between the two Cisco Nexus 5020 switches and the EMC VNX storage. The uplink Ethernet ports from the Nexus switches can be used to connect to a 10 Gb or 1 Gb external LAN. In this solution, a 1 Gb LAN through Cisco Catalyst 6509 switches is used to extend Ethernet connectivity to the desktop clients, VMware View components, and the Windows Server infrastructure.
Figure 8. VMware View–Network layout overview

Logical design considerations
This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

The IP scheme for the virtual desktop network must be designed with enough IP addresses in one or more subnets for the DHCP server to assign them to each virtual desktop.

Link aggregation
VNX platforms provide network high availability or redundancy by using link aggregation. Link aggregation is one of the methods used to address the problem of link or switch failure. This method enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses.
In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX, combining two 10 GbE ports into a single virtual device. If a link is lost on one Ethernet port, the traffic fails over to the other port, and all network traffic is distributed across the active links.

VNX for File network configuration

Data Mover ports
The EMC VNX7500 in this solution includes five Data Movers: three Data Movers are used for the virtual desktop file systems, one Data Mover is used for the infrastructure server NFS file system and user CIFS shares, and one Data Mover is used as a failover device. The Data Movers can be configured in an active/active or an active/passive configuration. In the active/passive configuration, the passive Data Mover serves as a failover device for any of the active Data Movers. In this solution, the Data Movers operate in the active/passive mode.

The Data Mover used for the infrastructure server NFS file system, VMware View Persona Management repository, and user data CIFS file systems is not required if the storage required is available from other resources.

The VNX7500 Data Movers are configured with two 10-gigabit interfaces on a single I/O module. Link Aggregation Control Protocol (LACP) is used to configure ports fxg-1-0 and fxg-1-1 to support virtual machine traffic, home folder access, and external access for the VMware View Persona Management repository.

Figure 9 shows the rear view of two VNX7500 Data Movers, each with two 10-gigabit fiber Ethernet (fxg) ports in I/O expansion slot 1.

Figure 9.
VNX7500–Ports of the two Data Movers

LACP configuration on the Data Mover
To configure the link aggregation that uses fxg-1-0 and fxg-1-1 on Data Mover 2, run the following command:

$ server_sysconfig server_2 -virtual -name <Device Name> -create trk -option "device=fxg-1-0,fxg-1-1 protocol=lacp"

To verify whether the ports are channeled correctly, run the following command:

$ server_sysconfig server_2 -virtual -info lacp1
server_2:
*** Trunk lacp1: Link is Up ***
*** Trunk lacp1: Timeout is Short ***
*** Trunk lacp1: Statistical Load Balancing is IP ***
Device   Local Grp   Remote Grp   Link   LACP   Duplex   Speed
--------------------------------------------------------------
fxg-1-0  10000       4480         Up     Up     Full     10000 Mbs
fxg-1-1  10000       4480         Up     Up     Full     10000 Mbs

The remote group number must match for both ports, and the LACP status must be "Up." Verify that the appropriate speed and duplex are established as expected.

Data Mover interfaces
It is recommended to create two Data Mover interfaces and IP addresses on the same subnet as the VMkernel port on the vSphere servers. Half of the NFS datastores are accessed by using one IP address and the other half by using the second IP address. This allows the VMkernel traffic to be load balanced among the vSphere NIC teaming members.

The following command shows an example of assigning two IP addresses to the same virtual interface named lacp1:

$ server_ifconfig server_2 -all
server_2:
lacp1-1 protocol=IP device=lacp1
        inet=192.168.16.2 netmask=255.255.255.0 broadcast=192.168.16.255
        UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:92
lacp1-2 protocol=IP device=lacp1
        inet=192.168.16.3 netmask=255.255.255.0 broadcast=192.168.16.255
        UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:93

Enable jumbo frames on the Data Mover interface
To enable jumbo frames for the link aggregation interface, run the following command to increase the MTU size:

$ server_ifconfig server_2 lacp1-1 mtu=9000

To verify whether the MTU size is set correctly, run the following command:

$ server_ifconfig server_2 lacp1-1
server_2:
lacp1 protocol=IP device=lacp1
        inet=192.168.16.2 netmask=255.255.255.0 broadcast=192.168.16.255
        UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:92
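The half-and-half split of datastores across the two Data Mover IP addresses can be sketched as a simple alternating assignment. This is illustrative only; the datastore names follow the FS20 to FS53 convention from the storage chapter, and the IP addresses match the server_ifconfig example above.

```python
# Alternate NFS datastore mounts across the two Data Mover IP
# addresses so that VMkernel traffic is balanced over the NIC team.
# Illustrative sketch; names and IPs follow the examples in this guide.
datastores = [f"FS{n}" for n in range(20, 54)]   # 34 linked-clone datastores
dm_interfaces = ["192.168.16.2", "192.168.16.3"]

mounts = {ip: [] for ip in dm_interfaces}
for i, ds in enumerate(datastores):
    # Even-indexed datastores use the first IP, odd-indexed the second.
    mounts[dm_interfaces[i % len(dm_interfaces)]].append(ds)

for ip, ds_list in mounts.items():
    print(f"{ip}: {len(ds_list)} datastores")
```

Each interface ends up serving 17 of the 34 linked-clone datastores, giving the even split described above.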
vSphere network configuration

NIC teaming
All network interfaces on the vSphere servers in this solution use 1 GbE connections. All virtual desktops are assigned an IP address by a DHCP server. The Intel-based servers use four onboard Broadcom GbE controllers for all the network connections. Figure 10 shows the vSwitch configuration in vCenter Server.

Figure 10. vSphere–vSwitch configuration

Virtual switches vSwitch0 and vSwitch1 each use two physical network interface cards (NICs). Table 5 lists the configured port groups in vSwitch0 and vSwitch1.

Table 5. vSphere—Port groups in vSwitch0 and vSwitch1

Virtual switch   Configured port group   Used for
vSwitch0         Service console         VMkernel port used for vSphere host management
vSwitch0         VLAN277                 Network connection for virtual desktops and LAN traffic
vSwitch1         NFS                     NFS datastore traffic

The NIC teaming load balancing policy for the vSwitches must be set to Route based on IP hash, as shown in Figure 11.

Figure 11. vSphere—Load balancing policy
Increase the number of vSwitch virtual ports
By default, a vSwitch is configured with 120 virtual ports, which may not be sufficient in an EUC environment. On the vSphere servers that host the virtual desktops, each virtual desktop consumes one port. Set the number of ports based on the number of virtual desktops that will run on each vSphere server, as shown in Figure 12.

Note: Reboot the vSphere server for the changes to take effect.

Figure 12. vSphere—vSwitch virtual ports

If a vSphere server fails or needs to be placed in maintenance mode, the other vSphere servers within the cluster must accommodate the additional virtual desktops that are migrated from the vSphere server that goes offline. Consider this worst-case scenario when determining the maximum number of virtual ports per vSwitch. If there are not enough virtual ports, the virtual desktops will not be able to obtain an IP address from the DHCP server.

Enable jumbo frames for the VMkernel port used for NFS
For a VMkernel port to access the NFS datastores by using jumbo frames, the MTU size must be set accordingly both for the vSwitch to which the VMkernel port belongs and for the VMkernel port itself.

The MTU size is set from the properties page of both the vSwitch and the VMkernel port. Figure 13 and Figure 14 show how a vSwitch and a VMkernel port are configured to support jumbo frames.
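The worst-case port requirement described above can be estimated with a short calculation. This is a sketch under assumed values; the per-host desktop count and cluster size below are hypothetical examples, not figures from the validated configuration.

```python
# Estimate the vSwitch virtual ports needed per host, allowing for
# the worst case in which one host in the cluster fails or enters
# maintenance mode and its desktops migrate to the surviving hosts.
# The desktop and host counts here are hypothetical examples.
import math

hosts_in_cluster = 8
desktops_per_host = 125            # steady-state desktops per host

total_desktops = hosts_in_cluster * desktops_per_host
# After one host goes offline, the remaining hosts share all desktops.
worst_case_per_host = math.ceil(total_desktops / (hosts_in_cluster - 1))

print(f"Steady-state ports needed per host: {desktops_per_host}")
print(f"Worst-case ports needed per host:   {worst_case_per_host}")
```

In this hypothetical eight-host cluster, a host must be sized for 143 rather than 125 virtual ports, which is why the default of 120 is typically insufficient.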
Figure 13. vSphere–vSwitch MTU setting
Figure 14. vSphere–VMkernel port MTU setting

The MTU values of both the vSwitch and the VMkernel port must be set to 9,000 to enable jumbo frame support for NFS traffic between the vSphere hosts and the NFS datastores.

Cisco Nexus 5020 configuration

Overview
The two 40-port Cisco Nexus 5020 switches provide redundant, high-performance, low-latency 10 GbE, delivered by a cut-through switching architecture, for 10 GbE server access in next-generation data centers.

Cabling
In this solution, the VNX Data Mover cabling is spread across the two Nexus 5020 switches to provide redundancy and load balancing of the network traffic.

Enable jumbo frames on the Nexus switch
The following excerpt of the switch configuration shows the commands that are required to enable jumbo frames at the switch level, because per-interface MTU is not supported:

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

vPC for Data Mover ports
Because the Data Mover connections for the two 10-gigabit network ports are spread across two Nexus switches and LACP is configured for the two Data Mover ports, virtual Port Channel (vPC) must be configured on both switches. The following excerpt is an example of the switch configuration pertaining to the vPC setup for one of the Data Mover ports. The configuration on the peer Nexus switch is mirrored for the second Data Mover port:

n5k-1# show running-config
…
feature vpc
…
vpc domain 2
  peer-keepalive destination <peer-nexus-ip>
…
interface port-channel3
  description channel uplink to n5k-2
  switchport mode trunk
  vpc peer-link
  spanning-tree port type network

interface port-channel4
  switchport mode trunk
  vpc 4
  switchport trunk allowed vlan 275-277
…
interface Ethernet1/4
  description 1/4 vnx dm2 fxg-1-0
  switchport mode trunk
  switchport trunk allowed vlan 275-277
  channel-group 4 mode active

interface Ethernet1/5
  description 1/5 uplink to n5k-2 1/5
  switchport mode trunk
  channel-group 3 mode active

interface Ethernet1/6
  description 1/6 uplink to n5k-2 1/6
  switchport mode trunk
  channel-group 3 mode active

To verify whether the vPC is configured correctly, run the following command on both switches. The output should look like this:

n5k-1# show vpc
Legend:
  (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                   : 2
Peer status                     : peer adjacency formed ok
vPC keep-alive status           : peer is alive
Configuration consistency status: success
vPC role                        : secondary
Number of vPCs configured       : 1
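The relationship between the endpoint MTU (9,000 on the Data Movers and VMkernel ports) and the switch-level MTU (9,216) can be checked with a quick calculation. This is illustrative; the overhead figures assume a standard Ethernet II header, a single 802.1Q VLAN tag, and the frame check sequence.

```python
# Verify that the switch-level MTU of 9216 bytes accommodates the
# 9000-byte IP MTU configured on the Data Mover and VMkernel ports.
# Overhead assumes a standard Ethernet II frame with one 802.1Q tag;
# this is an illustrative check only.
ip_mtu = 9000        # configured on Data Movers and VMkernel ports
eth_header = 14      # destination MAC + source MAC + EtherType
vlan_tag = 4         # single 802.1Q tag (VLANs 275-277 are trunked)
fcs = 4              # frame check sequence
switch_mtu = 9216    # network-qos jumbo policy on the Nexus switches

frame_size = ip_mtu + eth_header + vlan_tag + fcs
print(f"Largest tagged frame: {frame_size} bytes")
print(f"Fits in switch MTU:   {frame_size <= switch_mtu}")
```

A 9,022-byte tagged frame fits comfortably within the 9,216-byte switch MTU, which is why the switch-level value is set higher than the endpoint MTU.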
Peer Gateway                    : Disabled
Dual-active excluded VLANs      : -

vPC Peer-link status
------------------------------------------------------------------
id   Port   Status   Active vlans
--   ----   ------   -----------------------------------------------
1    Po3    up       1,275-277

vPC status
------------------------------------------------------------------
id   Port   Status   Consistency   Reason    Active vlans
--   ----   ------   -----------   -------   ------------
4    Po4    up       success       success   275-277

Cisco Catalyst 6509 configuration

Overview
The 9-slot Cisco Catalyst 6509-E switch provides high port densities that are ideal for many wiring closet, distribution, and core network deployments, as well as data center deployments.

Cabling
In this solution, the vSphere server cabling is evenly spread across two WS-X6748 1 Gb line cards to provide redundancy and load balancing of the network traffic.

Server uplinks
The server uplinks to the switch are configured in a port channel group to increase the utilization of server network resources and to provide redundancy. The vSwitches are configured to load balance the network traffic based on IP hash.

The following is an example of the configuration for one of the server ports:

description 8/10 9048-43 rtpsol189-1
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 276,516-527
switchport mode trunk
mtu 9216
no ip address
spanning-tree portfast trunk
channel-group 23 mode on
Chapter 6: Installation and Configuration

This chapter describes how to install and configure this solution and includes the following sections:
 Installation overview
 VMware View components
 Storage components

Installation overview
This section provides an overview of the configuration of the following components:
 Desktop pools
 Storage pools
 FAST Cache
 VNX Home Directory

The installation and configuration steps for the following components are available on the VMware website:
 VMware View Connection Server 5.0
 VMware View Composer 2.7
 VMware View Persona Management
 VMware vSphere 5.0

The installation and configuration steps for the following components are not covered:
 Microsoft System Center Configuration Manager (SCCM) 2007 R3
 Microsoft Active Directory, Group Policies, DNS, and DHCP
 Microsoft SQL Server 2008 SP2
VMware View components

VMware View installation overview
The VMware View Installation document, available on the VMware website, provides detailed procedures for installing View Manager Server and View Composer 2.7. No special configuration instructions are required for this solution.

The vSphere Installation and Setup Guide, available on the VMware website, contains detailed procedures that describe how to install and configure vCenter Server and vSphere; as a result, these subjects are not covered in further detail in this paper. No special configuration instructions are required for this solution.

VMware View setup
Before deploying the desktop pools, ensure that the following steps from the VMware View Installation document have been completed:
 Prepare Active Directory
 Install View Composer 2.7 on the vCenter Server
 Install the View Manager Server
 Add the vCenter Server instance to View Manager

VMware View desktop pool configuration
VMware supports a maximum of 1,000 desktops per replica image, which requires creating a unique pool for every 1,000 desktops. In this solution, nine persistent automated desktop pools were used.

To create one of the persistent automated desktop pools as configured for this solution, complete the following steps:

1. Log in to the VMware View Administration page, which is located at https://server/admin, where "server" is the IP address or DNS name of the View Manager server.
2. Click Pools in the left pane.
3. Click Add under the Pools banner. The Add Pool page appears.
4. Under Pool Definition, click Type. The Type page appears in the right pane.
5. Select Automated Pool, as shown in Figure 15.

Figure 15. VMware View–Select Automated Pool
6. Click Next. The User Assignment page appears.
7. Select Floating.
8. Click Next. The vCenter Server page appears.
9. Select View Composer linked clones and select a vCenter Server that supports View Composer, as shown in Figure 16.

Figure 16. VMware View–Select View Composer linked clones

10. Click Next. The Pool Identification page appears.
11. Enter the required information.
12. Click Next. The Pool Settings page appears.
13. Make the required changes.
14. Click Next. The View Composer Disks page appears.
15. Leave the Disposable File Redirection settings at their defaults.
16. Click Next. The Provisioning Settings page appears.
17. Perform the following, as shown in Figure 17:
   a. Select Use a naming pattern.
   b. In the Naming Pattern field, type the naming pattern.
   c. In the Max number of desktops field, type the number of desktops to provision.
Figure 17. VMware View–Select Provision Settings

18. Click Next. The vCenter Settings page appears.
19. Perform the following, as shown in Figure 18:
   a. Click Browse to select a default image, a folder for the virtual machines, the cluster hosting the virtual desktops, and the resource pool to store the desktops.

Figure 18. VMware View–vCenter Settings

   b. In the Datastores field, click Browse. The Select Datastores page appears.

20. Select Use different datastore for View Composer replica disks, and in the Use For list box, select Replica disks or Linked clones, as shown in Figure 19.
Figure 19. VMware View–Select Datastores

21. Click OK. The vCenter Settings page appears, as shown in Figure 20.

Figure 20. VMware View–vCenter Settings

22. Verify the settings.
23. Click Next. The Guest Customization page appears.
24. Perform the following, as shown in Figure 21:
   a. In the Domain list box, select the domain.
   b. In the AD container field, click Browse, and then select the AD container.
   c. Select Use QuickPrep.
Figure 21. VMware View–Guest Customization

25. Click Next. The Ready to Complete page appears.
26. Verify the settings for the pool.
27. Click Finish. The deployment of the virtual desktops starts.

VMware View Persona Management configuration

The profiles_fs CIFS file system is used for the VMware View Persona Management repository. VMware View Persona Management is enabled by using a Windows group policy template. The group policy template is located on the View 5 Manager Server in the Install Drive\Program Files\VMware\VMware View\Server\extras\GroupPolicyFiles directory. The group policy template titled ViewPM.adm is needed to configure VMware View Persona Management.

VMware View Persona Management is enabled by using computer group policies that are applied to the organizational unit containing the virtual desktop computer objects. Figure 22 shows a summary of the policies configured to enable VMware View Persona Management in the reference architecture environment.

Figure 22. VMware View Persona Management–Initial configuration
When deploying VMware View Persona Management in a production environment, it is recommended to redirect the folders that users commonly use to store documents or other files. Figure 23 shows the VMware View Persona Management group policy settings required to redirect the user desktop, downloads, My Documents, and My Pictures folders.

Figure 23. VMware View Persona Management–Folder Redirection policies

Storage components

Storage pools

Storage pools in the EMC VNX OE support heterogeneous drive pools. Three storage pools were configured in this solution, as shown in Figure 24:

• A RAID 5 storage pool (Pool 0) was configured from 65 SAS drives. Sixty-five 323 GB thick LUNs were created from this storage pool. This pool was used to store the NFS file systems containing the virtual desktops. FAST Cache was enabled for the pool.
• A RAID 6 storage pool (Pool 1) was configured from 64 NL-SAS drives. Sixty-five 231 GB thick LUNs were created from this storage pool. This pool was used to store the user home directory and VMware View Persona Management repository CIFS file shares. FAST Cache was enabled for the pool.
• A RAID 5 storage pool (Pool 2) was configured from 10 SAS drives. Ten 150 GB thick LUNs were created from this storage pool. This pool was used to store the NFS file systems containing the infrastructure virtual servers. FAST Cache was enabled for the pool.

Figure 24. VNX7500–Storage pools
NFS active threads per Data Mover

The default number of threads dedicated to serving NFS requests is 384 per Data Mover on the VNX. Some use cases, such as the scanning of desktops, might require more NFS active threads. It is recommended to increase the number of active NFS threads to the maximum of 2,048 on each Data Mover. The nthreads parameter can be set by using the following command:

# server_param server_2 -facility nfs -modify nthreads -value 2048

Reboot the Data Mover for the change to take effect. Type the following command to confirm the value of the parameter:

# server_param server_2 -facility nfs -info nthreads
server_2 :
name             = nthreads
facility_name    = nfs
default_value    = 384
current_value    = 2048
configured_value = 2048
user_action      = reboot DataMover
change_effective = reboot DataMover
range            = (32,2048)
description      = Number of threads dedicated to serve nfs requests This param represents number of threads dedicated to serve nfs requests. Any changes made to this param will be applicable after reboot only

Repeat this command for each active Data Mover.

The NFS active threads value can also be configured by editing the properties of the nthreads Data Mover parameter in the Settings–Data Mover Parameters menu in Unisphere, as shown in Figure 25. Highlight the nthreads value you want to edit and select Properties to open the nthreads properties window. Update the Value field with the new value and click OK. Perform this procedure for each of the nthreads Data Mover parameters listed in the menu. Reboot the Data Movers for the change to take effect.
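The per-Data-Mover change above can be scripted from the VNX Control Station. The sketch below is an illustration rather than output from the tested environment: the Data Mover names (server_2 through server_4) are placeholder assumptions for the three active Data Movers, and each Data Mover must still be rebooted afterward for the new value to take effect.

```shell
#!/bin/sh
# Sketch: raise the NFS thread count to the 2,048 maximum on each
# active Data Mover. The names below are placeholder assumptions;
# substitute the active Data Movers in your environment.
for dm in server_2 server_3 server_4; do
    server_param "$dm" -facility nfs -modify nthreads -value 2048
done
# Reboot each Data Mover afterward (the change is effective only
# after a reboot), then confirm the value with:
#   server_param <data_mover> -facility nfs -info nthreads
```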
NFS performance fix

VNX file software contains a performance fix that significantly reduces NFS write latency. The minimum software patch required for the fix is 7.0.13.0. In addition to the patch upgrade, the performance fix takes effect only when the NFS file system is mounted by using the uncached option, as shown below:

# server_mount server_2 -option uncached fs1 /fs1

The uncached option can be verified by using the following command:

# server_mount server_2
server_2 :
root_fs_2 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
fs1 on /fs1 uxfs,perm,rw,uncached
fs2 on /fs2 uxfs,perm,rw,uncached
fs3 on /fs3 uxfs,perm,rw,uncached
fs20 on /fs4 uxfs,perm,rw,uncached
fs21 on /fs5 uxfs,perm,rw,uncached
fs22 on /fs6 uxfs,perm,rw,uncached
fs23 on /fs4 uxfs,perm,rw,uncached
fs24 on /fs5 uxfs,perm,rw,uncached
fs25 on /fs6 uxfs,perm,rw,uncached
fs26 on /fs4 uxfs,perm,rw,uncached
fs27 on /fs5 uxfs,perm,rw,uncached
fs28 on /fs6 uxfs,perm,rw,uncached
The uncached option can also be configured by editing the properties of the file system mount in the Storage–Storage Configuration–File Systems–Mounts menu in Unisphere. Highlight the file system mount you want to edit and select Properties to open the Mount Properties window, as shown in Figure 26. Select the Set Advanced Options checkbox to display the advanced menu options, select the Direct Writes Enabled checkbox, and then click OK. The uncached option is now enabled for the selected file system.

Figure 26. VNX7500–File System Mount Properties

Enable FAST Cache

FAST Cache is enabled as an array-wide feature in the system properties of the array in EMC Unisphere. Click the FAST Cache tab, click Create, and then select the Flash drives to create the FAST Cache. There are no user-configurable parameters for FAST Cache. Figure 27 shows the FAST Cache settings for the VNX7500 array used in this solution.
Figure 27. VNX7500–FAST Cache tab

To enable FAST Cache for any LUN in a pool, navigate to the Storage Pool Properties page in Unisphere, and then click the Advanced tab. Select Enabled to enable FAST Cache, as shown in Figure 28.

Figure 28. VNX7500–Enable FAST Cache

VNX Home Directory feature

The VNX Home Directory installer is available on the NAS Tools and Applications CD for each VNX OE for File release and can be downloaded from the EMC online support website.
After the VNX Home Directory feature is installed, use the Microsoft Management Console (MMC) snap-in to configure the feature. A sample configuration is shown in Figure 29 and Figure 30.

Figure 29. VNX7500–Home Directory MMC snap-in

For any user account that ends with a suffix between 1 and 5,000, the sample configuration shown in Figure 30 automatically creates a user home directory on the userdata_fs file system, in the userdata_fs\<user> format, and maps the H: drive to that path. Each user has exclusive rights to the folder.

Figure 30. VNX7500–Sample Home Directory User folder properties
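The mapping rule in this sample configuration is simple enough to express in a few lines. The sketch below is illustrative only: the CIFS server name is a hypothetical placeholder, and the per-user folder layout follows the sample configuration shown in Figure 30.

```python
def home_directory(user: str, server: str = "cifs-server") -> str:
    """Return the UNC path mapped to the H: drive for a user.

    `server` is a hypothetical placeholder name; the per-user folder
    lives under the userdata_fs file system, as in Figure 30.
    """
    return rf"\\{server}\userdata_fs\{user}"

# Example: a user account ending with a suffix in the 1-5,000 range
print(home_directory("user0001"))  # → \\cifs-server\userdata_fs\user0001
```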
7 Testing and Validation

This chapter provides a summary and characterization of the tests performed to validate the solution. The goal of the testing is to characterize the performance of the solution and its component subsystems during the following scenarios:

• Boot storm of all desktops
• McAfee antivirus full scan on all desktops
• Security patch install with Microsoft SCCM 2007 R3 on all desktops
• User workload testing using Login VSI on all desktops
• View recompose
• View refresh

Validated environment profile

Profile characteristics

Table 6 provides the validated environment profile.

Table 6. VMware View—environment profile

Profile characteristic | Value
Number of virtual desktops | 5,000
Virtual desktop OS | Windows 7 Enterprise SP1 (32-bit)
CPU per virtual desktop | 1 vCPU
Number of virtual desktops per CPU core | Cluster Type A—6.2; Cluster Type B—7.7
RAM per virtual desktop | 1 GB
Average storage available for each virtual desktop | 3.9 GB
Average IOPS per virtual desktop in steady state | 8.7
Average peak IOPS per virtual desktop during boot storm | 11
Number of datastores used to store linked clones | 33
Number of datastores used to store replicas | 9
Number of virtual desktops per datastore | 152 (average)
Disk and RAID type for datastores | RAID 5, 600 GB, 15k rpm, 3.5 in. SAS disks
Profile characteristic | Value
Disk and RAID type for CIFS shares to host the VMware View Persona Management repository and home directories | RAID 6, 2 TB, 7,200 rpm, 3.5 in. NL-SAS disks
Number of VMware clusters for virtual desktops | 9
Number of vSphere servers in each cluster | Cluster Type A—3; Cluster Type B—5
Number of virtual desktops in each cluster | Cluster Type A—741; Cluster Type B—463

Use cases

Six common use cases were executed to validate whether the solution performed as expected under heavy-load situations. The following use cases were tested:

• Simultaneous boot of all desktops
• Full antivirus scan of all desktops
• Installation of a monthly release of security updates using SCCM 2007 R3 on all desktops
• Login and steady-state user load simulated using the Login VSI medium workload on all desktops
• Recompose of all desktops
• Refresh of all desktops

In each use case, a number of key metrics are presented that show the overall performance of the solution.

Login VSI

Virtual Session Index (VSI) version 3.5 was used to run a user load on the desktops. VSI provided the guidance to gauge the maximum number of users a desktop environment can support. The Login VSI workload is categorized as light, medium, heavy, multimedia, core, and random (also known as workload mashup). The medium workload selected for this testing had the following characteristics:

• The workload emulated a medium knowledge worker who used Microsoft Office Suite, Internet Explorer, Java, and Adobe Acrobat Reader.
• After a session started, the medium workload repeated every 12 minutes.
• The response time was measured every 2 minutes during each loop.
• The medium workload opened up to five applications simultaneously.
• The type rate was 160 ms for each character.
• Approximately 2 minutes of idle time was included to simulate real-world users.
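The headline figures in Table 6 are internally consistent and can be cross-checked with simple arithmetic. A minimal sketch (values are taken directly from the table; rounding follows the text):

```python
desktops = 5_000
linked_clone_datastores = 33
steady_state_iops_per_desktop = 8.7

# Average desktops per linked-clone datastore (Table 6 reports 152)
print(round(desktops / linked_clone_datastores))        # → 152

# Aggregate steady-state IOPS implied by the per-desktop average
print(round(desktops * steady_state_iops_per_desktop))  # → 43500
```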
Each loop of the medium workload used the following applications:

• Microsoft Outlook 2007—Browsed 10 email messages.
• Microsoft Internet Explorer (IE)—One instance of IE opened the BBC.co.uk website, another browsed Wired.com and Lonelyplanet.com, another opened a flash-based 480p video file, and another opened a Java-based application.
• Microsoft Word 2007—One instance of Microsoft Word 2007 was used to measure the response time, while another instance was used to edit a document.
• Bullzip PDF Printer and Adobe Acrobat Reader—The Word document was printed and the PDF was reviewed.
• Microsoft Excel 2007—A very large Excel worksheet was opened and random operations were performed.
• Microsoft PowerPoint 2007—A presentation was reviewed and edited.
• 7-zip—Using the command line version, the output of the session was zipped.

Login VSI launcher

A Login VSI launcher is a Windows system that launches desktop sessions on target virtual desktops. There are two types of launchers—master and slave. There is only one master in a given test bed, but there can be several slave launchers as required.

The number of desktop sessions a launcher can run is typically limited by CPU or memory resources. By default, the graphics device interface (GDI) limit is not tuned. In such a case, Login Consultants recommends using a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vCPUs) and 2 GB of RAM. When the GDI limit is tuned, this limit extends to 60 sessions per two-core machine.

In this validated testing, 5,000 desktop sessions were launched from 90 launchers, with approximately 56 sessions per launcher. Each launcher was allocated two vCPUs and 4 GB of RAM.
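The launcher sizing above is a straight division, assuming sessions are spread evenly across launchers:

```python
sessions, launchers = 5_000, 90

# Average sessions per launcher (the text reports approximately 56)
print(round(sessions / launchers))  # → 56
```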
No bottlenecks were observed on the launchers during the Login VSI tests.

FAST Cache configuration

For all tests, FAST Cache was enabled for the storage pools holding the replica and linked clone datastores, as well as the user home directories and VMware View Persona Management repository.

Boot storm results

Test methodology

This test was conducted by selecting all the desktops in vCenter Server, and then selecting Power On. Overlays are added to the graphs to show when the last power-on task was completed and when the IOPS to the pool LUNs achieved a steady state.

For the boot storm test, all 5,000 desktops were powered on within 10 minutes and achieved a steady state approximately 5 minutes later. All desktops were available for login in approximately 12 minutes. This section describes the boot storm results for each of the three use cases when powering on the desktop pools.
Pool individual disk load

Figure 31 shows the disk IOPS and response time for a single SAS drive in the storage pool. The statistics from all the drives in the pool were similar. Therefore, a single drive is reported for clarity and readability of the graph.

Figure 31. Boot storm—Disk IOPS for a single SAS drive

During peak load, the disk serviced 137 IOPS and experienced a response time of 7.2 ms. The Data Mover cache and FAST Cache both helped to reduce the disk load associated with the boot storm.

Pool LUN load

Figure 32 shows the replica LUN IOPS and the response time of one of the desktop storage pool LUNs. The statistics from each LUN were similar. Therefore, a single LUN is reported for clarity and readability of the graph.
Figure 32. Boot storm—Pool LUN IOPS and response time

During peak load, the LUN serviced 575 IOPS and experienced a response time of 2.6 ms.

Storage processor IOPS

Figure 33 shows the total IOPS serviced by the storage processors during the test.

Figure 33. Boot storm—Storage processor total IOPS (SP A and SP B total throughput in IOPS over time, with overlays for the power-on and settle period and steady state)

During peak load, the storage processors serviced approximately 52,000 IOPS.

Storage processor utilization

Figure 34 shows the storage processor utilization during the test. The pool-based LUNs were split across both the storage processors to balance the load equally.
Figure 34. Boot storm—Storage processor utilization

The virtual desktops generated high levels of I/O during the peak load of the boot storm test. The storage processor utilization remained below 44 percent.

FAST Cache IOPS

Figure 35 shows the IOPS serviced from FAST Cache during the boot storm test.

Figure 35. Boot storm—FAST Cache IOPS

During peak load, FAST Cache serviced almost 40,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by the Flash drives and the storage processor memory cache. If memory cache hits are excluded, the 10 Flash drives alone serviced approximately 35,000 IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take approximately 195 SAS drives to achieve the same level of performance.
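The drive-equivalence sizing quoted here (and in the antivirus and patch-install sections below) is a ceiling division of the serviced IOPS by the per-drive estimate. A minimal sketch, where 180 IOPS per 15k rpm SAS drive is the standard EMC estimate quoted in the text:

```python
import math

def sas_drive_equivalent(iops: float, iops_per_drive: float = 180.0) -> int:
    """Number of 15k rpm SAS drives needed to service a given IOPS load,
    rounded up to whole drives."""
    return math.ceil(iops / iops_per_drive)

# Boot storm: ~35,000 IOPS were serviced by the 10 Flash drives alone
print(sas_drive_equivalent(35_000))  # → 195
```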
Data Mover CPU utilization

Figure 36 shows the Data Mover CPU utilization during the boot storm test. The results shown are an average of the CPU utilization of each of the three Data Movers used for the file systems containing the virtual desktops.

Figure 36. Boot storm—Data Mover CPU utilization

The Data Movers achieved a peak CPU utilization of approximately 31 percent during peak load.

Data Mover NFS load

Figure 37 shows the NFS operations per second on the Data Movers during the boot storm test. The results shown are an average of the NFS operations per second of each of the three Data Movers used for the file systems that contain the virtual desktops.

Figure 37. Boot storm—Data Mover NFS load
During peak load, there were approximately 87,000 total NFS operations per second. The Data Mover cache helped reduce the load on the disks.

vSphere CPU load

Figure 38 shows the CPU load from the vSphere servers in the VMware clusters. Each server with the same CPU type had similar results. Therefore, only the results from one hex-core and one deca-core server are shown in the graph.

Figure 38. Boot storm—vSphere CPU load

The hex-core vSphere server achieved a peak CPU utilization of approximately 55 percent during peak load, and the deca-core server achieved 54 percent. Hyper-threading was enabled to double the number of logical CPUs.

vSphere disk response time

Figure 39 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated to the storage array. For each server CPU type, the datastore hosting the replica storage is shown as Replica FS GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone FS GAVG in the graph.
Figure 39. Boot storm—Average Guest Millisecond/Command counter

The peak GAVG of the file system hosting the replica image was 180 ms, and that of the linked clone file systems was 95 ms. The overall impact of this brief spike in GAVG values was minimal because all 5,000 desktops attained steady state in less than 14 minutes after the initial power-on.

Antivirus results

Test methodology

This test was conducted by scheduling a full scan of all desktops using a custom script to initiate an on-demand scan with McAfee VirusScan 8.7i. The full scans were started on all the desktops. The difference between the start time and finish time was approximately 4 hours and 5 minutes.

Pool individual disk load

Figure 40 shows the disk I/O for a single SAS drive in the storage pool that stores the virtual desktops. The statistics from all drives in the pool were similar. Therefore, only a single drive is reported for clarity and readability of the graph.
Figure 40. Antivirus—Disk I/O for a single SAS drive

During peak load, the disk serviced 82 IOPS and experienced a response time of 9.5 ms. The FAST Cache and Data Mover cache helped to reduce the load on the disks.

Pool LUN load

Figure 41 shows the replica LUN IOPS and the response time of one of the storage pool LUNs. The statistics from the LUNs were similar. Therefore, a single LUN is reported for clarity and readability of the graph.

Figure 41. Antivirus—Pool LUN IOPS and response time

During peak load, the LUN serviced 170 IOPS and experienced a response time of 2.5 ms. The majority of the read I/O was served by the FAST Cache and Data Mover cache.
Storage processor IOPS

Figure 42 shows the total IOPS serviced by the storage processors during the test.

Figure 42. Antivirus—Storage processor IOPS

During peak load, the storage processors serviced over 24,900 IOPS.

Storage processor utilization

Figure 43 shows the storage processor utilization during the antivirus scan test.

Figure 43. Antivirus—Storage processor utilization

During peak load, the antivirus scan operations caused moderate CPU utilization of approximately 33 percent. The load was shared between both storage processors during the antivirus scan. The EMC VNX7500 had sufficient scalability headroom for this workload.
FAST Cache IOPS

Figure 44 shows the IOPS serviced from FAST Cache during the test.

Figure 44. Antivirus—FAST Cache IOPS

During peak load, FAST Cache serviced approximately 17,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by the Flash drives and the storage processor memory cache. If memory cache hits are excluded, the 10 Flash drives alone serviced approximately 14,000 IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take approximately 78 SAS drives to achieve the same level of performance.

Data Mover CPU utilization

Figure 45 shows the Data Mover CPU utilization during the antivirus scan test. The results shown are an average of the CPU utilization of each of the three Data Movers used for the file systems containing the virtual desktops.
Figure 45. Antivirus—Data Mover CPU utilization

The Data Movers achieved a peak CPU utilization of approximately 19 percent during peak load in this test.

Data Mover NFS load

Figure 46 shows the NFS operations per second from the Data Movers during the antivirus scan test. The results shown are an average of the NFS operations per second of each of the three Data Movers used for the file systems containing the virtual desktops.

Figure 46. Antivirus—Data Mover NFS load

During peak load, there were approximately 38,000 NFS operations per second. The Data Mover cache helped reduce the load on the disks.
vSphere CPU load

Figure 47 shows the CPU load from the vSphere servers in the VMware clusters. Each server with the same CPU type had similar results. Therefore, only the results from one hex-core and one deca-core server are shown in the graph.

Figure 47. Antivirus—vSphere CPU load

The peak CPU load on the vSphere servers was 30 percent during this test. Hyper-threading was enabled to double the number of logical CPUs.

vSphere disk response time

Figure 48 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated on the storage array. For each server CPU type, the datastore hosting the replica storage is shown as Replica FS GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone FS GAVG in the graph.

Figure 48. Antivirus—Average Guest Millisecond/Command counter
The peak GAVG of the file system hosting the replica image was 75 ms, and that of the linked clone file systems was 59 ms.

Patch install results

Test methodology

This test was performed by pushing a monthly release of Microsoft security updates to all desktops using Microsoft System Center Configuration Manager (SCCM) 2007 R3. The desktops were all placed in a single collection within SCCM. The collection was configured to install updates on a 1-minute staggered schedule that started 30 minutes after the patches were available for download. All patches were installed within ten minutes.

Pool individual disk load

Figure 49 shows the disk IOPS for a single SAS drive that is part of the storage pool. The statistics from each drive in the pool were similar. Therefore, only the statistics of a single drive are shown for clarity and readability of the graph.

Figure 49. Patch install—Disk IOPS for a single SAS drive

During patch installation, the disk serviced 165 IOPS and experienced a response time of 9.0 ms. The FAST Cache and Data Mover cache helped to reduce the load on the disks.

Pool LUN load

Figure 50 shows the replica LUN IOPS and response time of one of the storage pool LUNs. The statistics from each LUN in the pool were similar. Therefore, only the statistics of a single LUN are shown for clarity and readability of the graph.
Figure 50. Patch install—Pool LUN IOPS and response time

During patch installation, the LUN serviced 620 IOPS and experienced a response time of 8.0 ms.

Storage processor IOPS

Figure 51 shows the total IOPS serviced by the storage processors during the test.

Figure 51. Patch install—Storage processor IOPS

During peak load, the storage processors serviced approximately 70,000 IOPS. The load was shared between both storage processors during the patch install operation on each pool of virtual desktops.
Storage processor utilization

Figure 52 shows the storage processor utilization during the test.

Figure 52. Patch install—Storage processor utilization

The patch install operations caused moderate CPU utilization during peak load, reaching a maximum of 43 percent utilization. The EMC VNX7500 had sufficient scalability headroom for this workload.

FAST Cache IOPS

Figure 53 shows the IOPS serviced from FAST Cache during the test.

Figure 53. Patch install—FAST Cache IOPS

During patch installation, FAST Cache serviced over 36,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by the Flash drives and the storage processor memory cache. If memory cache hits are excluded, the 10 Flash drives alone serviced approximately 18,500 IOPS during peak load. A sizing exercise using EMC's standard
performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take approximately 103 SAS drives to achieve the same level of performance.

Data Mover CPU utilization

Figure 54 shows the Data Mover CPU utilization during the patch install test. The results shown are an average of the CPU utilization of each of the three Data Movers used for the file systems containing the virtual desktops.

Figure 54. Patch install—Data Mover CPU utilization

The Data Movers achieved a peak CPU utilization of approximately 15 percent during peak load in this test.

Data Mover NFS load

Figure 55 shows the NFS operations per second from the Data Movers during the patch install test. The results shown are an average of the NFS operations per second of each of the three Data Movers used for the file systems containing the virtual desktops.
Figure 55. Patch install—Data Mover NFS load

During peak load, the Data Movers serviced over 43,500 NFS operations per second. The Data Mover cache helped reduce the load on the disks.

vSphere CPU load

Figure 56 shows the CPU load from the vSphere servers in the VMware clusters. As each server with the same CPU type had similar results, only the results from one hex-core and one deca-core server are shown.

Figure 56. Patch install—vSphere CPU load

The vSphere server CPU load was well within acceptable limits during the test, reaching a maximum of 36 percent utilization. Hyper-threading was enabled to double the number of logical CPUs.
vSphere disk response time

Figure 57 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated on the storage array. For each server CPU type, the datastore hosting the replica storage is shown as Replica FS GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone FS GAVG in the graph.

Figure 57. Patch install—Average Guest Millisecond/Command counter

The peak replica LUN GAVG value was 62 ms, while the peak linked clone LUN GAVG was approximately 71 ms.

Login VSI results

Test methodology

This test was conducted by scheduling 5,000 users to connect through remote desktop within an approximately 90-minute window and starting the Login VSI medium workload. The workload was run for one hour in a steady state to observe the load on the system.

Desktop logon time

Figure 58 shows the time required for the desktops to complete the user login process.
Figure 58. Login VSI—Desktop login time

The time required to complete the login process reached a maximum of 10.5 seconds during the peak of the 5,000-desktop login storm.

Pool individual disk load

Figure 59 shows the disk IOPS for a single SAS drive that is part of the storage pool. The statistics from each drive in the pool were similar. Therefore, only the statistics of a single drive are shown for clarity and readability of the graph.

Figure 59. Login VSI—Disk IOPS for a single SAS drive

During peak load, the disk serviced 100 IOPS and experienced a response time of 7.1 ms. The FAST Cache and Data Mover cache helped to reduce the load on the disks.
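The intensity of the login storm described in the test methodology can be sanity-checked with simple arithmetic. A minimal sketch, assuming the 5,000 logins are spread roughly evenly across the 90-minute connection window (in practice the launchers stagger the sessions):

```python
# Estimate the average login arrival rate for the 5,000-user login storm.
# Assumption: logins are distributed roughly evenly over the 90-minute window.

def login_rate_per_minute(users: int, window_minutes: float) -> float:
    """Average number of desktop logins per minute during the storm."""
    return users / window_minutes

rate = login_rate_per_minute(5000, 90)
print(f"~{rate:.0f} logins/minute, about one login every {60 / rate:.1f} seconds")
```

At roughly one login per second sustained for an hour and a half, the single-drive and LUN response times above indicate the array absorbed the storm with considerable headroom.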
Pool LUN load

Figure 60 shows the replica LUN IOPS and response time from one of the storage pool LUNs. The statistics from each LUN were similar. Therefore, only a single LUN is reported for clarity and readability of the graph.

Figure 60. Login VSI—Pool LUN IOPS and response time

During peak load, the LUN serviced 275 IOPS and experienced a response time of 3.0 ms.

Storage processor IOPS

Figure 61 shows the total IOPS serviced by the storage processor during the test.

Figure 61. Login VSI—Storage processor IOPS
During peak load, the storage processors serviced a maximum of approximately 37,000 IOPS.

Storage processor utilization

Figure 62 shows the storage processor utilization during the test.

Figure 62. Login VSI—Storage processor utilization

The storage processor peak utilization was below 37 percent during the login storm. The load was shared between both storage processors during the VSI load test.

FAST Cache IOPS

Figure 63 shows the IOPS serviced from FAST Cache during the test.

Figure 63. Login VSI—FAST Cache IOPS
During peak load, FAST Cache serviced over 26,500 IOPS from the datastores. The FAST Cache hits included IOPS serviced by both the Flash drives and the storage processor memory cache. If memory cache hits are excluded, the 10 Flash drives alone serviced approximately 18,500 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take approximately 103 SAS drives to achieve the same level of performance.

Data Mover CPU utilization

Figure 64 shows the Data Mover CPU utilization during the Login VSI test. The results shown are an average of the CPU utilization of each of the three Data Movers used for the file systems containing the virtual desktops.

Figure 64. Login VSI—Data Mover CPU utilization

The Data Mover reached a peak CPU utilization of approximately 13 percent in this test.

Data Mover NFS load

Figure 65 shows the NFS operations per second from the Data Mover during the Login VSI test. The results shown are an average of the NFS operations per second of each of the three Data Movers used for the file systems containing the virtual desktops.
Figure 65. Login VSI—Data Mover NFS load

During peak load, there were over 16,500 NFS operations per second. The Data Mover cache helped reduce the load on the disks.

vSphere CPU load

Figure 66 shows the CPU load from the vSphere servers in the VMware clusters. Because each server with the same CPU type had similar results, only the results from one hex-core and one deca-core server are shown in the graph.

Figure 66. Login VSI—vSphere CPU load

The CPU load on the vSphere servers was less than 33 percent during peak load. Hyper-threading was enabled to double the number of logical CPUs.
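The SAS-drive-equivalence figure quoted earlier (approximately 103 drives for the Login VSI test) is a straightforward rule-of-thumb division. A sketch of the calculation, using the guide's planning estimate of 180 IOPS per 15k rpm SAS drive, applied to the peak Flash-drive-only IOPS reported for each of the three tests in this chapter:

```python
import math

# Rule-of-thumb sizing: how many 15k rpm SAS drives would be needed to
# deliver the IOPS that the 10 Flash drives serviced at peak, using the
# 180 IOPS-per-drive planning estimate cited in this guide.

SAS_15K_IOPS_PER_DRIVE = 180

def sas_drive_equivalent(peak_iops: float) -> int:
    """Number of 15k rpm SAS drives needed for the given IOPS, rounded up."""
    return math.ceil(peak_iops / SAS_15K_IOPS_PER_DRIVE)

for test, iops in [("Login VSI", 18500), ("Recompose", 6800), ("Refresh", 8500)]:
    print(f"{test}: {iops} IOPS -> ~{sas_drive_equivalent(iops)} SAS drives")
```

The same division reproduces the approximately 38-drive and 48-drive equivalents quoted for the recompose and refresh tests later in this chapter.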
vSphere disk response time

Figure 67 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated to the storage array. For each server CPU type, the datastore hosting the replica storage is shown as Replica FS GAVG, and the average of all the datastores hosting the linked clone storage is shown as Linked Clone FS GAVG in the graph.

Figure 67. Login VSI—Average Guest Millisecond/Command counter

The peak GAVG of the file system hosting the replica image was 4 ms, and that of the linked clone file systems was 8.5 ms.

Recompose results

Test methodology

This test was conducted by performing a VMware View desktop recompose operation on all desktop pools. A new virtual machine snapshot of the master virtual desktop image was taken to serve as the target for the recompose operation. Overlays are added to the graphs to show when the last power-on task completed and when the IOPS to the pool LUNs achieved a steady state.

A recompose operation deletes the existing virtual desktops and creates new ones. To enhance the readability of the graphs and to show the array behavior during high I/O periods, only those tasks involved in creating new desktops were performed and shown in the graphs. All desktop recompose operations were initiated simultaneously and took approximately 300 minutes to complete.

Pool individual disk load

Figure 68 shows the disk IOPS for a single SAS drive that is part of the storage pool. The statistics from each drive in the pool were similar. Therefore, only the statistics of a single drive are shown for clarity and readability of the graph.
Figure 68. Recompose—Disk IOPS for a single SAS drive

During peak load, the disk serviced 95 IOPS and experienced a response time of 7.5 ms. The FAST Cache and Data Mover cache helped to reduce the load on the disks.

Pool LUN load

Figure 69 shows the replica LUN IOPS and response time from one of the storage pool LUNs. The statistics from each LUN were similar. Therefore, only a single LUN is reported for clarity and readability of the graph.

Figure 69. Recompose—Pool LUN IOPS and response time
Copying the new replica images caused heavy sequential-write workloads on the LUN during the copy process. During peak load, the LUN serviced 300 IOPS and experienced a response time of 2.5 ms.

Storage processor IOPS

Figure 70 shows the total IOPS serviced by the storage processor during the test.

Figure 70. Recompose—Storage processor IOPS

During peak load, the storage processors serviced over 27,500 IOPS.

Storage processor utilization

Figure 71 shows the storage processor utilization during the test.

Figure 71. Recompose—Storage processor utilization
The storage processor utilization peaked at 32 percent during the recompose operation. The load was shared between the two storage processors during peak load.

FAST Cache IOPS

Figure 72 shows the IOPS serviced from FAST Cache during the test.

Figure 72. Recompose—FAST Cache IOPS

During peak load, FAST Cache serviced approximately 15,000 IOPS from the datastores. The FAST Cache hits included IOPS serviced by both the Flash drives and the storage processor memory cache. If memory cache hits are excluded, the 10 Flash drives alone serviced approximately 6,800 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it takes approximately 38 SAS drives to achieve the same level of performance.

Data Mover CPU utilization

Figure 73 shows the Data Mover CPU utilization during the recompose test. The results shown are an average of the CPU utilization of each of the three Data Movers used for the file systems containing the virtual desktops.
Figure 73. Recompose—Data Mover CPU utilization

The Data Mover reached a peak CPU utilization of approximately 29 percent in this test.

Data Mover NFS load

Figure 74 shows the NFS operations per second from the Data Mover during the recompose test. The results shown are an average of the NFS operations per second of each of the three Data Movers used for the file systems containing the virtual desktops.

Figure 74. Recompose—Data Mover NFS load

During peak load, there were approximately 40,000 NFS operations per second. The Data Mover cache helped reduce the load on the disks.
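The ~300-minute completion time reported for the recompose test implies a steady provisioning rate. A rough estimate, assuming the full 5,000-desktop deployment described earlier in this chapter was recomposed (an assumption; the guide states only that all pools were recomposed):

```python
# Rough throughput of the recompose operation.
# Assumption: all 5,000 desktops in the deployment were recreated
# during the ~300-minute window reported for the test.

def desktops_per_hour(desktops: int, minutes: float) -> float:
    """Average number of desktops recreated per hour."""
    return desktops / minutes * 60

rate = desktops_per_hour(5000, 300)
print(f"~{rate:.0f} desktops recreated per hour")
```

A rate on the order of a thousand desktops per hour is useful when planning maintenance windows for pool-wide operations of this kind.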
vSphere CPU load

Figure 75 shows the CPU load from the vSphere servers in the VMware clusters. Because each server with the same CPU type had similar results, only the results from one hex-core and one deca-core server are shown in the graph.

Figure 75. Recompose—vSphere CPU load

The CPU load of the hex-core vSphere server reached a peak of 25 percent, and the deca-core server reached a peak of 15 percent. Hyper-threading was enabled to double the number of logical CPUs.

vSphere disk response time

Figure 76 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated to the storage array. For each server CPU type, the datastore hosting the replica storage is shown as Replica FS GAVG, and the average of all the datastores hosting the linked clone storage is shown as Linked Clone FS GAVG in the graph.

Figure 76. Recompose—Average Guest Millisecond/Command counter
The peak GAVG of the file system hosting the replica image was 8 ms, and that of the linked clone file systems was 10 ms.

Refresh results

Test methodology

This test was conducted by selecting a refresh operation for all desktop pools from the View Manager administration console. The refresh operations for all pools were initiated at the same time by scheduling them within the View administration console. No users were logged in during the test. Overlays were added to the graphs to show when the last power-on task completed and when the IOPS to the pool LUNs achieved a steady state. The refresh operation took approximately 245 minutes to complete.

Pool individual disk load

Figure 77 shows the disk IOPS for a single SAS drive that is part of the storage pool. Because the statistics from each drive in the pool were similar, only the statistics of a single drive are shown for clarity and readability of the graph.

Figure 77. Refresh—Disk IOPS for a single SAS drive

During peak load, the disk serviced 100 IOPS and experienced a response time of 7.5 ms. The FAST Cache and Data Mover cache helped to reduce the load on the disks.

Pool LUN load

Figure 78 shows the replica LUN IOPS and response time from one of the storage pool LUNs. The statistics from each LUN were similar. Therefore, only a single LUN is reported for clarity and readability of the graph.
Figure 78. Refresh—Pool LUN IOPS and response time

During peak load, the LUN serviced 240 IOPS and experienced a response time of 2.1 ms.

Storage processor IOPS

Figure 79 shows the total IOPS serviced by the storage processor during the test.

Figure 79. Refresh—Storage processor IOPS

During peak load, the storage processors serviced over 24,000 IOPS.
Storage processor utilization

Figure 80 shows the storage processor utilization during the test.

Figure 80. Refresh—Storage processor utilization

The storage processor peak utilization was below 34 percent during the refresh test. The load was shared between both storage processors during the test.

FAST Cache IOPS

Figure 81 shows the IOPS serviced from FAST Cache during the test.

Figure 81. Refresh—FAST Cache IOPS

During peak load, FAST Cache serviced approximately 14,000 IOPS from the datastores. The FAST Cache hits included IOPS serviced by both the Flash drives and the storage processor memory cache. If memory cache hits are excluded, the 10 Flash drives
alone serviced approximately 8,500 IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it takes approximately 48 SAS drives to achieve the same level of performance.

Data Mover CPU utilization

Figure 82 shows the Data Mover CPU utilization during the refresh test. The results shown are an average of the CPU utilization of each of the three Data Movers used for the file systems containing the virtual desktops.

Figure 82. Refresh—Data Mover CPU utilization

The Data Mover achieved a peak CPU utilization of approximately 19 percent during the test.

Data Mover NFS load

Figure 83 shows the NFS operations per second from the Data Mover during the refresh test. The results shown are an average of the NFS operations per second of each of the three Data Movers used for the file systems containing the virtual desktops.
Figure 83. Refresh—Data Mover NFS load

During peak load, there were approximately 30,000 NFS operations per second. The Data Mover cache helped reduce the load on the disks.

vSphere CPU load

Figure 84 shows the CPU load from the vSphere servers in the VMware clusters. Because each server with the same CPU type had similar results, only the results from one hex-core and one deca-core server are shown in the graph.

Figure 84. Refresh—vSphere CPU load

The peak vSphere CPU load was 17 percent for the hex-core server and 13 percent for the deca-core server. Hyper-threading was enabled to double the number of logical CPUs.
vSphere disk response time

Figure 85 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations initiated to the storage array. For each server CPU type, the datastore hosting the replica storage is shown as Replica FS GAVG, and the average of all the datastores hosting the linked clone storage is shown as Linked Clone FS GAVG in the graph.

Figure 85. Refresh—Average Guest Millisecond/Command counter

The peak GAVG of the file system hosting the replica image was 7 ms, and that of the linked clone file systems was also 7 ms.
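The GAVG values reported throughout this chapter are read from esxtop. For unattended test runs, esxtop can also be captured in batch mode (for example, `esxtop -b -d 5 -n 720 > esxtop.csv`) and post-processed offline. A minimal sketch of such post-processing; the column-header match string is an assumption, since exact counter names in batch output vary by vSphere build but typically contain "Guest MilliSec/Command":

```python
import csv

# Extract peak Average Guest MilliSec/Command (GAVG) values from an
# esxtop batch-mode CSV. The header substring match is an assumption:
# batch-mode column names vary by build, but the GAVG counter name
# typically contains "Guest MilliSec/Command".

def peak_gavg(csv_path: str, needle: str = "Guest MilliSec/Command") -> dict:
    """Return {column_header: peak_value} for every matching GAVG column."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        cols = [i for i, name in enumerate(header) if needle in name]
        peaks = {header[i]: 0.0 for i in cols}
        for row in reader:
            for i in cols:
                try:
                    peaks[header[i]] = max(peaks[header[i]], float(row[i]))
                except (ValueError, IndexError):
                    continue  # skip blank or malformed samples
    return peaks
```

Running this against captures from the replica and linked clone datastores would yield the peak per-datastore response times of the kind reported in Figures 57, 67, 76, and 85.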
Chapter 8: Conclusion

This chapter includes the following sections:

 Summary
 References

Summary

As shown in Chapter 7: Testing and Validation, EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment. It not only reduces the response time for both read and write workloads, but also supports more users on fewer drives, providing greater IOPS density with a lower drive requirement.

References

Supporting documents

The following documents, located on the EMC online support website, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative:

 EMC Infrastructure for VMware View 5.0, EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7—Reference Architecture
 EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (NFS), VMware vSphere 4.1, VMware View 4.6, and VMware View Composer 2.6—Reference Architecture
 EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (NFS), VMware vSphere 4.1, VMware View 4.6, and VMware View Composer 2.6—Proven Solution Guide
 EMC Performance Optimization for Microsoft Windows XP for the Virtual Desktop Infrastructure—Applied Best Practices
 Deploying Microsoft Windows 7 Virtual Desktops with VMware View—Applied Best Practices Guide

VMware documents

The following documents, located on the VMware website, also provide useful information:

 VMware View Architecture Planning
 VMware View Installation
 VMware View Administration
 VMware View Security
 VMware View Upgrades
 VMware View Integration
 VMware View Windows XP Deployment Guide
 VMware View Optimization Guide for Windows 7
 VMware View Persona Management Deployment Guide
 vSphere Installation and Setup Guide
 Anti-Virus Practices for VMware View
 VMware KB Article 1027713