VMware vSphere 4.1 deep dive - part 2

This is a level 200 - 300 presentation.

It assumes:
- A good understanding of vCenter 4, ESX 4, and ESXi 4, preferably hands-on.
- An overview-level understanding of related products such as VUM, Data Recovery, SRM, View, Nexus, Chargeback, CapacityIQ, vShield Zones, etc.
- A good understanding of related storage, server, and network technology.

We will only cover the delta between 4.1 and 4.0.

Target audience: VMware specialists (SE and delivery staff from partners).

4.1 New Features: Network

Network
- Receive Side Scaling (RSS) support enhancements: improved RSS support for guests via enhancements to VMXNET3.
- Enhanced VM-to-VM communication: inter-VM throughput is improved when VMs communicate directly with one another over the same virtual switch on the same ESX/ESXi host.
- This is achieved through an asynchronous TX processing architecture in the networking stack, which leverages additional physical CPU cores for processing inter-VM traffic.
- VM-to-VM throughput improved by 2x, up to 19 Gbps.
- 10% improvement when going out to the physical network.

Other Improvements – Network Performance
- NetQueue support extension: NetQueue support now includes hardware-based LRO (large receive offload), further improving CPU and throughput performance in 10 GbE environments.
- LRO (Large Receive Offload) support:
  - Every received packet causes the CPU to react, so lots of small packets arriving from the physical media result in high CPU load.
  - LRO merges packets and delivers them to the stack at once.
  - Receive tests indicate a 5-30% improvement in throughput and a 40-60% decrease in CPU cost.
  - Enabled for the Broadcom bnx2x and Intel Niantic pNICs.
  - Enabled for the vmxnet2 and vmxnet3 vNICs, but only for recent Linux guest OSes.

IPv6 — Progress Towards Full NIST "Host" Profile Compliance
- VI 3 (ESX 3.5): IPv6 supported in guests.
- vSphere 4.0: IPv6 support for ESX 4, vSphere Client, vCenter, vMotion, and IP storage (iSCSI, NFS — experimental). Not supported for vSphere vCLI, HA, FT, Auto Deploy.
- vSphere 4.1: NIST compliance with the "Host" profile (http://www.antd.nist.gov/usgv6/usgv6-v1.pdf), including IPsec, IKEv2, etc. Not supported for vSphere vCLI, HA, FT.
Cisco Nexus 1000V — Planned Enhancements
- Easier software upgrade: In Service Software Upgrade (ISSU) for VSM and VEM; binary compatibility.
- Weighted Fair Queuing (software scheduler).
- Increased scalability, in line with vDS scalability.
- SPAN to and from port profiles.
- VLAN pinning to pNIC.
- Installer app for VSM HA and L3 VEM/VSM communication.
- Start of EAL4 Common Criteria certification.
- 4094 active VLANs.
- Scale port profiles beyond 512.
- Always check with Cisco for the latest information.

Network I/O Control
Network Traffic Management — Emergence of 10 GigE
- With 1 GbE pNICs, traffic types (iSCSI, FT, vMotion, NFS, VM TCP/IP) typically get dedicated NICs, so bandwidth is assured by the dedicated physical NICs.
- With 10 GbE pNICs, traffic is typically converged onto two 10 GbE NICs and the traffic types compete: who gets what share of the vmnic?
- Some traffic types and flows could dominate others through oversubscription.

Traffic Shaping
- Features in 4.0/4.1:
  - vSwitch or vSwitch port group: limit outbound traffic (average bandwidth, peak bandwidth, burst size).
  - vDS dvPortGroup: ingress/egress traffic shaping (average bandwidth, peak bandwidth, burst size).
- Not optimised for 10 GbE.

Traffic Shaping — Disadvantages
- Limits are fixed: even if bandwidth is available, it will not be used for other services.
- Bandwidth cannot be guaranteed without limiting other traffic (like reservations).
- VMware recommended separate pNICs for iSCSI/NFS/vMotion/COS so that enough bandwidth is available for these traffic types.
- Customers do not want to waste 8-9 Gbit/s when a pNIC is dedicated to vMotion.
- Instead of six 1 Gbit pNICs, customers might have two 10 Gbit pNICs sharing all traffic.
- Guaranteed bandwidth for vMotion limits bandwidth for other traffic even when no vMotion is active.
- Traffic shaping is only a static way to control traffic.

Network I/O Control — Goals
- Isolation: one flow should not dominate others.
- Flexible partitioning: allow isolation and overcommitment.
- Guarantee service levels when flows compete.
- Note: this feature is only available with the vDS (Enterprise Plus).
Overall Design

Parameters — Limits and Shares
- Limits specify the absolute maximum bandwidth for a flow over a team.
  - Specified in Mbps; traffic from a given flow will never exceed its specified limit; applies to egress from the ESX host.
- Shares specify the relative importance of an egress flow on a vmnic, i.e. a guaranteed minimum.
  - Specified in abstract units from 1-100, with presets for Low (25 shares), Normal (50 shares), High (100 shares), plus Custom.
  - Bandwidth is divided between flows based on their relative shares.
- Controls apply to output from the ESX host: shares apply to a given vmnic, limits apply across the team.

Configuration from the vSphere Client
- Limits: maximum bandwidth for a traffic class/type.
- Shares: guaranteed minimum service level.
- vDS-only feature!
- Preconfigured traffic classes, e.g. VM traffic in this example is limited to a maximum of 500 Mbps (aggregate of all VMs), with a minimum of 50/400 of pNIC bandwidth (50 / (100+100+50+50+50+50)).

Resource Management
- Shares: Normal = 50, Low = 25, High = 100, Custom = any value between 1 and 100.
- Default values: VM traffic = High (100), all others = Normal (50); no limit set.

Implementation
- Each host calculates the shares independently.
  - One host might have only 1 Gbit/s NICs while another already has 10 Gbit/s NICs, so the resulting guaranteed bandwidth differs.
- Only outgoing traffic is controlled.
- Inter-switch traffic is not controlled; only the pNICs are affected.
- Limits are still enforced even if the pNIC is opted out.
- The scheduler uses a static "packets-in-flight" window.
  - inFlightPackets: packets that are actually in flight and being transmitted by the pNIC.
  - Window size is 50 kB; no more than 50 kB are in flight (to the wire) at any given moment.

Excluding a Physical NIC
- Physical NICs can be excluded per host from network resource management.
- Host Configuration -> Advanced Settings -> Net -> Net.ResMgmtPnicOptOut
- This excludes the specified NICs from the shares calculation, not from limits! (See the command sketch below.)
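The same opt-out can be scripted from the vCLI; a minimal sketch, assuming the vicfg-advcfg utility and that the advanced option path /Net/ResMgmtPnicOptOut mirrors the client setting above (host name, credentials and NIC names are placeholders):

  # Read the current opt-out list
  vicfg-advcfg --server esx01.example.com --username root --password 'secret' -g /Net/ResMgmtPnicOptOut

  # Exclude vmnic2 and vmnic3 from the shares calculation (comma-separated list)
  vicfg-advcfg --server esx01.example.com --username root --password 'secret' -s vmnic2,vmnic3 /Net/ResMgmtPnicOptOut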
Results
- With QoS (Network I/O Control) in place, performance is much less impacted when flows compete.

Load-Based Teaming

Current Teaming Policy
- vSphere 4.0 offers three policies: Port ID, IP hash, MAC hash.
- Disadvantages: static mapping, no load balancing, can cause unbalanced load on pNICs, and does not differentiate between pNIC bandwidths.

NIC Teaming Enhancements — Load Based Teaming (LBT)
- Note: the adjacent physical switch configuration is the same as for other teaming types (except IP hash), i.e. same L2 domain.
- LBT is invoked if saturation is detected on Tx or Rx (>75% mean utilization over a 30-second period).
- The 30-second period is long enough to avoid MAC address flapping issues with adjacent physical switches.

Load Based Teaming
- Initial mapping: like Port ID — a balanced mapping between ports and pNICs; the mapping is not based on load (initially there is no load).
- Adjusting the mapping: based on time frames; the load on a pNIC during a time frame is taken into account. If the load is unbalanced, one VM (to be precise: the vSwitch port) is re-assigned to a different pNIC.
- Parameters: time frame and load threshold. Default time frame is 30 seconds (minimum 10 seconds); default load threshold is 75% (possible values 0-100). Both are configurable through a command-line tool (debug purposes only — not for customers).

Load Based Teaming
- Advantages: dynamic adjustment to load; different NIC speeds are taken into account because the decision is based on % load; a mix of 1 Gbit, 10 Gbit and even 100 Mbit NICs is possible.
- Dependencies: LBT works independently of other algorithms; it does not take limits or reservations from traffic shaping or Network I/O Control into account; the algorithm is based on the local host only (DRS has to take care of cluster-wide balancing).
- Implemented on the vNetwork Distributed Switch only; edit the dvPortGroup to change the setting.
4.1 New Features: Storage

NFS & HW iSCSI in vSphere 4.1
- Improved NFS performance: up to 15% reduction in CPU cost and up to 15% improvement in throughput for both read and write.
- Broadcom iSCSI hardware offload support: 89% improvement in CPU read cost, 83% improvement in CPU write cost.

VMware Data Recovery: New Capabilities
- Backup and Recovery Appliance: support for up to 10 appliances per vCenter instance to allow protection of up to 1000 VMs; File Level Restore client for Linux VMs.
- VMware vSphere 4.1: improved VSS support for Windows 2008 and Windows 7 (application-level quiescing).
- Destination storage: expanded support for DAS, NFS, iSCSI or Fibre Channel storage, plus CIFS shares as a destination; improved deduplication performance.
- vSphere Client plug-in: seamless switching between multiple backup appliances; improved usability and user experience.

ParaVirtual SCSI (PVSCSI)
- PVSCSI is now supported with these guest OSes: Windows XP (32-bit and 64-bit), Vista (32-bit and 64-bit), Windows 7 (32-bit and 64-bit).
- The driver floppy images are in /vmimages/floppies; point the VM's floppy drive at the .FLP file.
- When installing, press F6 to read the floppy.

ParaVirtual SCSI
- A VM configured with a PVSCSI adapter can be part of a Fault Tolerance cluster.
- PVSCSI adapters already support hot-plugging and hot-unplugging of virtual devices, but the guest OS is not notified of changes on the SCSI bus.
- Consequently, any addition or removal of devices needs to be followed by a manual rescan of the bus from within the guest.
Storage I/O Control

The I/O Sharing Problem
- A low-priority VM can limit I/O bandwidth for high-priority VMs.
- Storage I/O allocation should be in line with VM priorities.
- (Diagram: Microsoft Exchange, an online store and a data-mining VM sharing one datastore — what you see today vs. what you want to see.)

Solution: Storage I/O Control
- Just as CPU and memory shares prioritize compute resources (e.g. 32 GHz / 16 GB shared by the VMs), I/O shares prioritize access to the shared datastore: Microsoft Exchange and the online store get High I/O shares, data mining gets Low.

Setting I/O Controls

Enabling Storage I/O Control
- Click the Storage I/O Control "Enabled" checkbox to turn the feature on for that volume.
- Clicking the Advanced button allows you to change the congestion threshold.
- If latency rises above this value, Storage I/O Control kicks in and prioritizes each VM's I/O based on its shares value.

Viewing Configuration Settings

Allocate I/O Resources
- Shares translate into ESX I/O queue slots.
- VMs with more shares are allowed to send more I/Os at a time.
- Slot assignment is dynamic, based on VM shares and current load.
- The total number of slots available is dynamic, based on the level of congestion.
Experimental Setup

Performance without Storage I/O Control (default)
- (Chart: without Storage I/O Control, throughput is split unevenly across the workloads — roughly 14%, 21%, 42% and 15%.)

Performance with Storage I/O Control (congestion threshold: 25 ms)
- (Chart: the same workloads with shares of 500, 500, 750, 750 and 4000; bandwidth now follows the share allocation.)

Storage I/O Control in Action: Example #2
- Two Windows VMs running SQL Server on two hosts; 250 GB data disk, 50 GB log disk.
- VM1: 500 shares; VM2: 2000 shares.
- Result: VM2, with the higher share count, gets more orders/min and lower latency.

Step 1: Detect Congestion
- Beyond a certain datastore load (number of I/Os in flight) there is no further benefit in throughput (IOPS or MB/s).
- Congestion signal: ESX-to-array response time > threshold; default threshold: 35 ms.
- VMware will likely recommend different defaults for SSD and SATA.
- Changing the default threshold (not usually recommended):
  - Low-latency goal: set it lower if latency is critical for some VMs.
  - High-throughput goal: set it close to the IOPS maximization point.
Storage I/O Control Internals
- There are two I/O schedulers involved in Storage I/O Control.
- The first is the local VM I/O scheduler, SFQ (start-time fair queuing). It ensures share-based allocation of I/O resources between VMs on a per-host basis.
- The second is the distributed I/O scheduler for ESX hosts, PARDA (Proportional Allocation of Resources for Distributed storage Access).
- PARDA:
  - carves out the array queue amongst all the VMs that are sending I/O to the datastore on the array;
  - adjusts the per-host, per-datastore queue size (aka LUN queue / device queue) depending on the sum of the per-VM shares on the host;
  - communicates this adjustment to each ESX host via VSI nodes.
- ESX servers also share cluster-wide statistics with each other via a stats file.

New VSI Nodes for Storage I/O Control
- ESX 4.1 introduces a number of new VSI nodes for Storage I/O Control:
  - a new VSI node per datastore to get/set the latency threshold;
  - a new VSI node per datastore to enable/disable PARDA;
  - a new maxQueueDepth VSI node under /storage/scsifw/devices/*, meaning each device has a logical queue depth / slot size parameter that the PARDA scheduler enforces.

Storage I/O Control Architecture
- (Diagram: PARDA runs on each host and dynamically varies the host-level issue queue lengths; SFQ schedules VM I/O within each host; the array queue on the storage array is shared across hosts.)

Requirements
- Storage I/O Control is supported on FC or iSCSI storage; NFS datastores are not supported.
- It is not supported on datastores with multiple extents.
- Arrays with automated storage tiering: automated storage tiering is the ability of an array (or group of arrays) to automatically migrate LUNs/volumes, or parts of them, to different types of storage media (SSD, FC, SAS, SATA) based on user-set policies and current I/O patterns.
  - Before using Storage I/O Control on datastores backed by arrays with automated storage tiering, check the VMware Storage/SAN Compatibility Guide to verify that your array has been certified as compatible with Storage I/O Control.
  - No special certification is required for arrays without such automatic migration/tiering features, including arrays that only provide the ability to manually migrate data between different types of storage media.
Hardware-Assisted Storage Operations
- Formally known as the vStorage APIs for Array Integration.

vStorage APIs for Array Integration (VAAI)
- Improves performance by leveraging efficient array-based operations as an alternative to host-based operations.
- Three primitives:
  - Full Copy: an XCOPY-like function to offload copy work to the array.
  - Write Same: speeds up zeroing of blocks or writing repeated content.
  - Atomic Test and Set: an alternative to locking the entire LUN.
- Helps functions such as Storage vMotion and provisioning VMs from a template; improves thin-provisioned disk performance and VMFS shared storage pool scalability.
- Notes: requires firmware from storage vendors (6 participating); supports block-based storage only — NFS is not yet supported in 4.1.

Array Integration Primitives: Introduction
- Atomic Test & Set (ATS): a mechanism to modify a disk sector, improving ESX performance when doing metadata updates.
- Clone Blocks / Full Copy / XCOPY: a full copy of blocks, with ESX guaranteed full space access to the blocks. The default offloaded clone size is 4 MB.
- Zero Blocks / Write Same: writes zeroes. This addresses the issue of guest time falling behind when the guest operating system writes to previously unwritten regions of its virtual disk (http://kb.vmware.com/kb/1008284). It also improves MSCS-in-virtualization solutions where the virtual disk needs to be zeroed out. The default zeroing size is 1 MB.

Hardware Acceleration
- All vStorage support is grouped into one attribute called "Hardware Acceleration".
- "Not Supported" implies that one or more Hardware Acceleration primitives failed.
- "Unknown" implies that the Hardware Acceleration primitives have not yet been attempted.

VM Provisioning from Template with Full Copy
- Benefits: reduced installation time; standardization for efficient management, protection and control.
- Challenges: requires a full data copy (a 100 GB template with 10 GB to copy takes 5-20 minutes); FT requires additional zeroing of blocks.
- Improved solution: use the array's native copy/clone and zeroing functions — up to a 10-20x speedup in provisioning time.

Storage vMotion with the Array's Full Copy Function
- Benefits: zero-downtime migration; eases array maintenance, tiering, load balancing, upgrades, and space management.
- Challenges: performance impact on host, array and network; long migration times (0.5 - 2.5 hours for a 100 GB VM); best practice was to use it infrequently.
- Improved solution: use the array's native copy/clone functionality.

VAAI Speeds Up Storage vMotion — Example
- Without VAAI: 42:27 - 39:12 = 2 min 21 sec (141 seconds).
- With VAAI: 33:04 - 32:37 = 27 seconds.
- 141 sec vs. 27 sec.
Copying Data — Optimized Cloning with VAAI
- VMFS directs the storage array to move the data directly.
- Much less time: up to a 95% reduction.
- Dramatic reduction in load on servers, network and storage.

Scalable Lock Management
- A number of VMFS operations cause the LUN to become temporarily locked for exclusive write use by one of the ESX nodes, including:
  - moving a VM with vMotion;
  - creating a new VM or deploying a VM from a template;
  - powering a VM on or off;
  - creating a template;
  - creating or deleting a file, including snapshots.
- A new VAAI feature, atomic_test_and_set, allows the ESX server to offload the management of the required locks to the storage array and avoids locking the entire VMFS file system.

Atomic Test & Set
- Original file locking technique: acquire SCSI reservation; acquire file lock; release SCSI reservation; do work on the VMFS file/metadata; release file lock.
- New file locking technique: acquire ATS lock; acquire file lock; release ATS lock; do work on the VMFS file/metadata; release file lock.
- The main difference with the ATS lock is that it does not affect the other ESX hosts sharing the datastore.

VMFS Scalability with Atomic Test and Set (ATS)
- Makes VMFS more scalable overall by offloading the block locking mechanism.
- ATS provides an alternative to SCSI reservations for protecting VMFS metadata from being written by two separate ESX servers at the same time.
- (Diagrams: normal VMware locking without ATS vs. enhanced VMware locking with ATS.)
For More Details on VAAI
- The vSphere 4.1 documentation describes these features in the ESX Configuration Guide, Chapter 9 (pages 124 - 125), listed in the TOC as "Storage Hardware Acceleration".
- Three settings under Advanced Settings (see the sketch below):
  - DataMover.HardwareAcceleratedMove — Full Copy
  - DataMover.HardwareAcceleratedInit — Write Same
  - VMFS3.HardwareAcceleratedLocking — Atomic Test & Set
- Additional collateral is planned for release after GA: frequently asked questions, datasheet or webpage content.
- Partners include: Dell/EQL, EMC, HDS, HP, IBM and NetApp.
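These options can also be checked or toggled from the command line; a minimal sketch, assuming the vCLI vicfg-advcfg utility and placeholder credentials (the option paths simply mirror the Advanced Settings names above):

  # 1 = enabled, 0 = disabled
  vicfg-advcfg --server esx01.example.com --username root --password 'secret' -g /DataMover/HardwareAcceleratedMove
  vicfg-advcfg --server esx01.example.com --username root --password 'secret' -g /DataMover/HardwareAcceleratedInit
  vicfg-advcfg --server esx01.example.com --username root --password 'secret' -g /VMFS3/HardwareAcceleratedLocking

  # Example: temporarily disable the Full Copy offload while troubleshooting
  vicfg-advcfg --server esx01.example.com --username root --password 'secret' -s 0 /DataMover/HardwareAcceleratedMove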
Requirements
- The VMFS data mover will not leverage hardware offloads, and will use software data movement instead, in the following cases:
  - the source and destination VMFS volumes have different block sizes (data movement falls back to the generic FSDM layer, which only does software data movement);
  - the source file type is RDM and the destination file type is non-RDM (regular file);
  - the source VMDK type is eagerzeroedthick and the destination VMDK type is thin;
  - either the source or destination VMDK is any sort of sparse or hosted format;
  - the logical address and/or transfer length in the requested operation is not aligned to the minimum alignment required by the storage device.

VMFS Data Movement Caveats
- VMware supports VAAI primitives on VMFS with multiple LUNs/extents if they are all on the same array and the array supports offloading.
- VMware does not support VAAI primitives on VMFS with multiple LUNs/extents if they are on different arrays, even if all of the arrays support offloading.
- Hardware cloning between arrays (even within the same VMFS volume) does not work, so it falls back to software data movement.
vSphere 4.1 New Features: Management
- Management-related features.

Management — New Features Summary
- vCenter: 32-bit to 64-bit data migration, enhanced scalability, faster response time.
- Update Manager.
- Host Profile enhancements.
- Orchestrator.
- Active Directory support (host and vMA).
- VMware Converter: Hyper-V import; Windows 2008 R2 and Windows 7 conversion.
- Virtual Serial Port Concentrator.

Scripting & Automation
- Host Profiles, Orchestrator, vMA, CLI, PowerCLI.

Summary
- Host Profiles, VMware Orchestrator, VMware vMA, PowerShell, esxtop, vscsiStats, VMware Tools.

Host Profiles Enhancements
- Host Profiles now cover: Cisco support, PCI device ordering (support for selecting NICs), iSCSI support, admin password (setting the root password), logging on the host.
- The log file is at C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\Logs\PyVmomiServer.log.
- Configuration not covered by Host Profiles: licensing, vDS policy configuration (non-policy vDS settings are covered), iSCSI, multipathing.

Host Profiles Enhancements
- (Comparison of host services in vSphere 4.0 vs. 4.1: new in 4.1 are Lbtd, Lsassd (part of AD — see the AD presentation), Lwiod (part of AD) and Netlogond (part of AD).)
Orchestrator Enhancements
- Provides a client and server for 64-bit installations, with an optional 32-bit client.
- Performance enhancements due to the 64-bit installation.

VMware Tools Command-Line Utility
- Provides an alternative to the VMware Tools control panel (the GUI dialog box).
- The command-line toolbox allows administrators to automate toolbox functionality by writing their own scripts.

vSphere Management Assistant (vMA)
- A convenient place to perform administration.
- A virtual appliance packaged as an OVF; distributed, maintained and supported by VMware.
- Not included with ESXi — must be downloaded separately.
- The environment has the following pre-installed: 64-bit Enterprise Linux OS, VMware Tools, the Perl Toolkit, the vSphere Command-Line Interface (vCLI), a JRE (to run applications built with the vSphere SDK), vi-fastpass (authentication service for scripts), and vi-logger (log aggregator).

vMA
- Improvements in 4.1:
  - Improved authentication capability — Active Directory support.
  - Transition from RHEL to CentOS.
- Security: the security hole that exposed clear-text passwords on ESX(i) or vCenter hosts when using vifpinit (vi-fastpass) is fixed.
- vMA as a netdump server: you can configure an ESXi host to send its network core dump to a remote server in case of a crash or panic; each ESXi host must be configured to write the core dump.
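For context, a minimal vi-fastpass sketch as run from the vMA shell; the host name is a placeholder and the prompts may differ slightly between vMA releases:

  # Register a target host with vi-fastpass (prompts for credentials)
  sudo vifp addserver esx01.example.com

  # Initialise fastpass for that target; subsequent vCLI commands no longer
  # need --server/--username/--password
  vifpinit esx01.example.com
  vicfg-nics -l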
For Tech Partners: VMware CIM API
- What it is: for developers building management applications. With the VMware CIM APIs, developers can use standards-based, CIM-compliant applications to manage ESX/ESXi hosts.
- The VMware Common Information Model (CIM) APIs allow you to:
  - view VMs and resources using profiles defined by the Storage Management Initiative Specification (SMI-S);
  - manage hosts using the Systems Management Architecture for Server Hardware (SMASH) standard. SMASH profiles allow CIM clients to monitor the system health of a managed server.
- What's new in 4.1: www.vmware.com/support/developer/cim-sdk/4.1/cim_410_releasenotes.html

vCLI and PowerCLI: the Primary Scripting Interfaces
- vCLI and PowerCLI are built on the same API as the vSphere Client.
- Same authentication (e.g. Active Directory), roles and privileges, and event logging.
- The API is secure, optimized for remote environments, firewall-friendly, and standards-based.
- (Diagram: vCLI, vSphere PowerCLI, other utility scripts and other languages all sit on the vSphere SDK and the vSphere Web Services API, alongside the vSphere Client.)

vCLI for Administrative and Troubleshooting Tasks
- Areas of functionality:
  - Host configuration: NTP, SNMP, remote syslog, ESX conf, kernel modules, local users.
  - Storage configuration: NAS, SAN, iSCSI, vmkfstools, storage pathing, VMFS volume management.
  - Network configuration: vSwitches (standard and distributed), physical NICs, VMkernel NICs, DNS, routing.
  - Miscellaneous: monitoring, file management, VM management, host backup, restore, and update.
- vCLI can point to an ESXi host or to vCenter.
- vMA is a convenient way to access vCLI.
- Remote CLI commands now run faster in 4.1 relative to 4.0.

Anatomy of a vCLI Command
- Run directly against an ESXi host (hostname of the ESXi host; user defined locally on the ESXi host):
  vicfg-nics --server <hostname> --user <username> --password <mypassword> <options>
- Run through vCenter (hostname of the vCenter host; user defined in vCenter (AD); --vihost names the target ESXi host):
  vicfg-nics --server <hostname> --user <username> --password <mypassword> --vihost <hostname> <options>
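A concrete example of the second form, with placeholder host names and credentials, using the -l option of vicfg-nics to list the physical NICs of the target host:

  # List the pNICs of esx01 by authenticating against vCenter
  vicfg-nics --server vc01.example.com --user administrator --password 'secret' --vihost esx01.example.com -l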
Additional vCLI Configuration Commands in 4.1
- Storage:
  - esxcli swiscsi session: manage iSCSI sessions.
  - esxcli swiscsi nic: manage iSCSI NICs.
  - esxcli swiscsi vmknic: list VMkernel NICs available for binding to a particular iSCSI adapter.
  - esxcli swiscsi vmnic: list available uplink adapters for use with a specified iSCSI adapter.
  - esxcli vaai device: display information about devices claimed by the VMware VAAI (vStorage APIs for Array Integration) filter plugin.
  - esxcli corestorage device: list devices or plugins; used in conjunction with hardware acceleration.
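For example, the two storage namespaces can be queried like this from the ESXi Tech Support Mode or vMA shell (the list subcommand is assumed to be available for both namespaces in 4.1):

  # Show devices claimed by the VAAI filter plugin
  esxcli vaai device list

  # Show core storage devices and the plugins that claim them
  esxcli corestorage device list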
Additional vCLI Commands
- Network:
  - esxcli network: list active connections or list active ARP table entries.
  - vicfg-authconfig --server=<ESXi_IP_Address> --username=root --password '' --authscheme AD --joindomain <ad_domain_name> --adusername=<ad_user_name> --adpassword=<ad_user_password>
- Storage: NFS statistics are now available in resxtop.
- VM:
  - esxcli vms: forcibly stop VMs that do not respond to normal stop operations, using kill commands:
    # esxcli vms vm kill --type <kill_type> --world-id <ID>
  - Note: designed to kill VMs in a reliable way (not dependent on a well-behaved system), eliminating one of the most common reasons for wanting to use Tech Support Mode.
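As an illustration, a hedged sequence for stopping a hung VM; the world ID comes from esxcli vms vm list (shown on the next slide), and soft/hard/force are assumed to be the escalating kill types on this build — verify with esxcli --help:

  # Find the world ID of the stuck VM
  esxcli vms vm list

  # Try a graceful stop first, then escalate if necessary
  esxcli vms vm kill --type soft --world-id 5588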
esxcli — New Namespaces
- esxcli gains three new namespaces: network, vaai and vms.

  [root@cs-tse-i132 ~]# esxcli
  Usage: esxcli [disp options] <namespace> <object> <command>
  For esxcli help please run esxcli --help
  Available namespaces:
    corestorage   VMware core storage commands.
    network       VMware networking commands.
    nmp           VMware Native Multipath Plugin (NMP). This is the VMware default
                  implementation of the Pluggable Storage Architecture.
    swiscsi       VMware iSCSI commands.
    vaai          VAAI namespace containing VAAI commands.
    vms           Limited operations on VMs.

Control VM Operations

  # esxcli vms vm
  Usage: esxcli [disp options] vms vm <command>
  For esxcli help please run esxcli --help
  Available commands:
    kill   Used to forcibly kill VMs that are stuck and not responding to normal stop operations.
    list   List the VMs on this system. This command currently will only list running VMs on the system.

  [root@cs-tse-i132 ~]# esxcli vms vm list
  vSphere Management Assistant (vMA)
     World ID: 5588
     Process ID: 27253
     VMX Cartel ID: 5587
     UUID: 42 01 a1 98 d6 65 6b e8-79 3b 2a 7c 9d 88 70 05
     Display Name: vSphere Management Assistant (vMA)
     Config File: /vmfs/volumes/4b1e10ed-8ce9ce16-f692-00215e364468/vSphere Management Assistant (vM/vSphere Management Assistant (vM.vmx

esxtop — Disk Devices View
- Use the 'u' option to display Disk Devices.
- NFS statistics can now be observed.
- Here we are looking at throughput and latency stats for the devices.

New VAAI Statistics in esxtop (1 of 2)
- There are new fields in esxtop for VAAI statistics.
- Each of the three primitives has its own unique set of statistics.
- Toggle the VAAI fields ('O' and 'P') on for VAAI-specific statistics.

New VAAI Statistics in esxtop (2 of 2)
- Columns cover clone (move) operations, VMFS lock operations, zeroing (init) operations, and latencies.
- The way to track failures is via esxtop or resxtop: CLONE_F shows clone failures and, similarly, ATS_F, ZERO_F and so on.

esxtop — VM View
- esxtop also provides a mechanism to view VM I/O and latency statistics, even for VMs that reside on NFS.
- The VM with GID 65 (SmallVMOnNAS) above resides on an NFS datastore.
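When these counters need to be captured rather than watched interactively, resxtop's batch mode is convenient; a small sketch with placeholder credentials (-b batch mode, -d sample delay in seconds, -n number of samples):

  # Capture 6 samples at 10-second intervals into a CSV for offline analysis
  resxtop --server esx01.example.com --username root -b -d 10 -n 6 > esxtop-vaai.csv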
VSI
- NFS I/O statistics are also available via the VSI nodes:

  # vsish
  /> cat /vmkModules/nfsclient/mnt/isos/properties
  mount point information {
     server name:rhtraining.vmware.com
     server IP:10.21.64.206
     server volume:/mnt/repo/isos
     UUID:4f125ca5-de4ee74d
     socketSendSize:270336
     socketReceiveSize:131072
     reads:7
     writes:0
     readBytes:92160
     writeBytes:0
     readTime:404366
     writeTime:0
     aborts:0
     active:0
     readOnly:1
     isMounted:1
     isAccessible:1
     unstableWrites:0
     unstableNoCommit:0
  }

vm-support Enhancements
- vm-support now lets users run third-party scripts.
- To have vm-support run such scripts, add them to the /etc/vmware/vm-support/command-files.d directory and run vm-support.
- The results are added to the vm-support archive.
- Each script that is run gets its own directory, containing that script's output and log files, inside the vm-support archive.
- These directories are stored in the top-level directory vm-support-commands-output.
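A minimal sketch of that workflow; the script name and its contents are purely illustrative:

  # Place a diagnostic script where vm-support looks for third-party commands
  vi /etc/vmware/vm-support/command-files.d/net-check.sh    # e.g. containing: esxcfg-nics -l
  chmod +x /etc/vmware/vm-support/command-files.d/net-check.sh

  # Generate the bundle; the script's output lands under vm-support-commands-output/
  vm-support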
PowerCLI
- Feature highlights:
  - Easier to customize and extend PowerCLI, especially for reporting.
  - Output objects can be customized by adding extra properties.
  - Better readability and less typing in scripts based on Get-View: each output object has its associated view as a nested property, and less typing is required to call Get-View and convert between PowerCLI object IDs and managed object IDs.
  - Basic vDS support — moving VMs to/from a vDS, adding/removing hosts to/from a vDS.
  - More reporting: new getter cmdlets, new properties added to existing output objects, improvements in Get-Stat.
  - Cmdlets for host HBAs.
  - The PowerCLI Cmdlet Reference now documents all output types.
  - Cmdlets to control host routing tables.
  - Faster datastore provider.
- http://blogs.vmware.com/vipowershell/2010/07/powercli-41-is-out.html

If You Are Really, Really Curious...
- Additional commands (not supported): http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm

vCenter Specific
vCenter Improvements
- Better load balancing with improved DRS/DPM algorithm effectiveness.
- Improved performance at higher vCenter inventory limits — up to 7x higher throughput and up to 75% reduced latency.
- Improved performance at higher cluster inventory limits — up to 3x higher throughput and up to 60% reduced latency.
- Faster vCenter startup — around 5 minutes for the maximum vCenter inventory size.
- Better vSphere Client responsiveness, quicker user interaction, and faster user login.
- Faster host and VM operations on standalone hosts — up to a 60% reduction in latency.
- Lower resource usage by vCenter agents — by up to 40%.
- Reduced VM group power-on latency by up to 25%.
- Faster VM recovery with HA — up to a 60% reduction in total recovery time for 1.6x more VMs.

Enhanced vCenter Scalability

vCenter 4.1 Install
- New option: managing the RAM allocated to the JVM.

vCenter Server: Changing JVM Sizing
- The same change should be visible by launching "Configure Tomcat" from the program menu (Start -> Programs -> VMware -> VMware Tomcat).

vCenter: Services in Windows
- The following is not shown as a service: the License Reporting Manager.

New Alarms

Predefined Alarms
Remote Console to a VM
- Formally known as the Virtual Serial Port Concentrator.

Overview
- Many customers rely on managing physical hosts by connecting to the target machine over the serial port.
- Physical serial port concentrators are used by such admins to multiplex connections to multiple hosts.
- With VMs you lose this functionality and the ability to do remote management using scripted installs and management.
- vSphere 4.1 provides a way to redirect a VM's serial port(s) over a network connection, and supports a "virtual serial port concentrator" utility.
- Virtual Serial Port Concentrator:
  - Communicate between VMs and IP-enabled serial devices.
  - Connect to a VM's serial port over the network, using telnet/ssh.
  - Keep this connection uninterrupted during vMotion and other similar events.

Virtual Serial Port Concentrator
- What it is:
  - Redirects VM serial ports over a standard network link.
  - A vSPC aggregates traffic from multiple serial ports onto one management console; it behaves similarly to a physical serial port concentrator.
- Benefits:
  - Network connections to a VM's serial ports migrate seamlessly when the VM is migrated with vMotion.
  - Management efficiencies and lower costs for multi-host management.
  - Enables third-party concentrator integration if required.

Example (using Avocent)
- The ACS 6000 Advanced Console Server runs as a vSPC.
- There is no serial port or virtual serial port in the ACS6000 console server itself.
- The ACS6000 console server has a telnet daemon (server) listening for connections coming from ESX.
- ESX makes one telnet connection for each virtual serial port configured to send data to the ACS6000 console server.
- The serial daemon implements the telnet server with support for all telnet extensions implemented by VMware.
Configuring Virtual Ports on a VM
- A virtual serial port enables two VMs, or a VM and a process on the host, to communicate as if they were physical machines connected by a serial cable. For example, this can be used for remote debugging of a VM.
- The vSPC acts as a proxy.

Configuring Virtual Ports on a VM
- Example (for Avocent):
  - Type ACSID://ttySxx in the Port URI, where xx is between 1 and 48. This defines which virtual serial port on the ACS6000 console server this serial port connects to.
  - One VM per port; the ACS6000 has 48 ports only.
  - Type telnet://<IP of Avocent VM>:8801

Configure a VM to Redirect Console Login
- Check your system's serial support: verify that the operating system recognizes the serial ports in your hardware.
- Configure /etc/inittab to support serial console logins by adding the following lines:

  # Run agetty on COM1/ttyS0
  s0:2345:respawn:/sbin/agetty -L -f /etc/issueserial 9600 ttyS0 vt100

- Activate the changes made in /etc/inittab:

  # init q

- To be able to log in via the serial console as the root user, edit the /etc/securetty configuration file and add ttyS0 as an entry:

  console
  ttyS0
  vc/1
  vc/2

Configure the Serial Port as the System Console
- Use options in /etc/grub.conf to redirect console output to one of your serial ports.
- This lets you see all of the boot-up and shutdown messages from your terminal.
- (The slide highlights the text to add to the config file; see the sketch below.)
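The highlighted text itself is not reproduced in the transcript; the lines below are a typical grub serial-console setup for this kind of guest and should be read as an illustrative reconstruction rather than the exact slide content (unit and speed values may differ):

  # /etc/grub.conf additions for a serial console on COM1 (ttyS0) at 9600 baud
  serial --unit=0 --speed=9600
  terminal --timeout=5 serial console

  # and on the kernel line, send output to both the serial port and the screen:
  #   kernel /vmlinuz-<version> ro root=/dev/VolGroup00/LogVol00 console=tty0 console=ttyS0,9600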
Accessing the Serial Port of the Virtual Machine
- Open a web connection to the Avocent ACS6000.
- Click on the Ports folder and click Serial Ports.
- Based on the serial port connection configured in the VM, you should see signals of CTS | DSR | CD | RI.
- Click the Serial Viewer link and a console will open.
- Enter the password of avocent and hit the Enter key to establish the connection.

Performance Monitoring
UI > Performance > Advanced
- Additional chart options in vSphere 4.1 (compared with vSphere 4.0) around storage performance statistics: Datastore, Power, Storage adapter and Storage path.

Performance Graphs
- Additional performance graph views added in vSphere 4.1:
  - Host: Datastore, Management Agent, Power, Storage Adapter, Storage Path.
  - VM: Datastore, Power, Virtual Disk.

Storage Statistics: vCenter & esxtop
- (Table on the slide compares which storage statistics are available in vCenter and esxtop.)
- Not available in this timeframe: aggregation at the cluster level in vCenter (possible through the APIs).
- * Network-based storage (NFS, iSCSI) I/O breakdown is still being researched.
- ** Not applicable to NFS; the datastore is the equivalent.
- esxtop publishes throughput and latency per LUN; if the datastore has only one LUN, the LUN figures equal the datastore figures.

Volume Stats for an NFS Device

Datastore Activity per Host

Other Host Stats

Datastore Activity per VM

Virtual Disk Activity per VM
VMware Update Manager

Update Manager
- Central, automated, actionable VI patch compliance management solution.
- Define, track, and enforce software update compliance for ESX hosts/clusters, third-party ESX extensions, virtual appliances, VMware Tools/VM hardware, online*/offline VMs, and templates.
- Patch notification and recall.
- Cluster-level pre-remediation check analysis and report.
- Framework to support third-party IHV/ISV updates and customizations, e.g. mass install/update of EMC's PowerPath module.
- Enhanced compatibility with DPM for cluster-level patch operations.
- Performance and scalability enhancements to match vCenter.

Overview
- vCenter Update Manager enables centralized, automated patch and version management.
- Define, track, and enforce software update compliance and support for:
  - ESX/ESXi hosts
  - VMs (VMware Tools, VM hardware)
  - virtual appliances
  - third-party ESX modules
  - online/offline VMs and templates
- Automate and generate reports using the Update Manager database views.

Deployment Components
- Update Manager components:
  1. Update Manager server + database
  2. Update Manager VI Client plug-in
  3. Update Manager Download Service
- (Diagram: the Update Manager server sits between vCenter Server, the VI Client, the virtualized infrastructure and the external patch feeds.)
New Features in 4.1
- Update Manager now provides management of host upgrade packages.
- Provisioning, patching, and upgrade support for third-party modules.
- Offline bundles.
- Recalled patches.
- Enhanced cluster operation.
- Better handling of low-bandwidth, high-latency networks.
- PowerCLI support.
- Better support for a virtualized vCenter.

Notifications
- As we have already seen with the notification schedule, Update Manager 4.1 contacts VMware at regular intervals to download notifications about patch recalls, new fixes and alerts.
- If patches with problems or potential issues are released, these patches are recalled in the metadata and VUM marks them as recalled.
- If you try to install a recalled patch, Update Manager notifies you that the patch is recalled and does not install it on the host.
- If you have already installed such a patch, VUM notifies you that the recalled patch is installed on certain hosts, but it does not remove the recalled patch from the host.
- Update Manager also deletes all recalled patches from the Update Manager patch repository.
- When a patch fixing the problem is released, Update Manager 4.1 downloads the new patch and prompts you to install it.

Notifications
- Notifications that Update Manager downloads are displayed on the Notifications tab of the Update Manager Administration view.
- An alarm is generated and an email sent if the Notification Check Schedule is configured.
- Update Manager shows the patch as recalled.

Notifications — Patch Recall Details

Notifications
- Alarms are posted for recalled and fixed patches.
- Recalled patches are represented by a flag.

VUM 4.1 Feature — Notification Check Schedule
- By default, Update Manager checks for notifications about patch recalls, patch fixes and alerts at certain time intervals.
- Edit Notifications to define the frequency (hourly, daily, weekly, monthly), the start time (minutes after the hour), the interval, and the email address of who to notify about recalled patches.
VUM 4.1 Feature — ESX Host/Cluster Settings
- When remediating objects in a cluster with Distributed Power Management (DPM), High Availability (HA), or Fault Tolerance (FT), you should temporarily disable these features for the entire cluster.
- VUM does not remediate hosts on which these features are enabled.
- When the update completes, VUM restores these features.
- These settings become the default failure response; you can specify different settings when you configure individual remediation tasks.

VUM 4.1 Feature — ESX Host/Cluster Settings
- Update Manager cannot remediate hosts where VMs have connected CD/DVD drives.
- CD/DVD drives connected to the VMs on a host might prevent the host from entering maintenance mode and interrupt remediation.
- Select "Temporarily disable any CD-ROMs that may prevent a host from entering maintenance mode".

Baselines and Groups
- Baselines may be upgrade, extension or patch baselines. Baselines contain a collection of one or more patches, service packs and bug fixes, extensions, or upgrades.
- Baseline groups are assembled from existing baselines and may contain one upgrade baseline per type and one or more patch and extension baselines, or a combination of multiple patch and extension baselines.
- Preconfigured baselines: Hosts — 2 baselines; VM/VA — 6 baselines.

Baselines and Groups
- Update Manager 4.1 introduces a new Host Extension baseline type.
- Host Extension baselines contain additional software for ESX/ESXi hosts; this additional software might be VMware software or third-party software.

Patch Download Settings
- Update Manager can download patches and extensions either from the Internet (vmware.com) or from a shared repository.
- A new feature of Update Manager 4.1 allows you to import both VMware and third-party patches manually from a ZIP file, called an offline bundle.
- You download these patches from the Internet or copy them from a media drive, then save them as offline bundle ZIP files on a local drive.
- Use Import Patches to upload them to the Update Manager repository.

Patch Download Settings
- Click Import Patches at the bottom of the Patch Download Sources pane.
- Browse to locate the ZIP file containing the patches you want to import into the Update Manager patch repository.

Patch Download Settings
- The patches are successfully imported into the Update Manager patch repository.
- Use the Search box to filter, e.g. on "ThirdParty"; right-click a patch and select Show Patch Detail.
VUM 4.1 Feature — Host Upgrade Releases
- You can upgrade the hosts in your environment using Host Upgrade Release baselines, a new feature of Update Manager 4.1.
- This feature enables faster remediation of hosts by having the upgrade release media already uploaded to the VUM repository; previously, the media had to be uploaded for each remediation.
- To create a Host Upgrade Release baseline, download the host upgrade files from vmware.com and then upload them to the Update Manager repository.
- Each upgrade file that you upload contains information about the target version to which it upgrades the host.
- Update Manager distinguishes the target release versions and combines the uploaded host upgrade files into host upgrade releases.
- A host upgrade release is a combination of host upgrade files that allows you to upgrade hosts to a particular release.

VUM 4.1 Feature — Host Upgrade Releases
- You cannot delete a host upgrade release if it is included in a baseline; first delete any baselines that include it.
- Update Manager 4.1 supports upgrades from ESX 3.0.x and later, and from ESXi 3.5 and later, to ESX 4.0.x and ESX 4.1.
- Remediation from ESX 4.0 to ESX 4.0.x is a patching operation, while remediation from ESX 4.0.x to ESX 4.1 is considered an upgrade.

VUM 4.1 Feature — Host Upgrade Releases
- The upgrade files that you upload are ISO or ZIP files.
- The file type depends on the host type, the host version, and the upgrade that you want to perform.
- (A table on the slide lists the upgrade file types to upload for each ESX/ESXi host type.)

VUM 4.1 Feature — Host Upgrade Releases
- Depending on the files that you upload, host upgrade releases can be partial or complete.
  - Partial upgrade releases do not contain all of the upgrade files required for an upgrade of both ESX and ESXi hosts.
  - Complete upgrade releases contain all of the upgrade files required for an upgrade of both ESX and ESXi hosts.
- To upgrade all of the ESX/ESXi hosts in your vSphere environment to version 4.1, you must upload all of the files required for this upgrade (three ZIP files and one ISO file):
  - esx-DVD-4.1.0-build_number.iso for ESX 3.x hosts
  - upgrade-from-ESXi3.5-to-4.1.0.build_number.zip for ESXi 3.x hosts
  - upgrade-from-ESX-4.0-to-4.1.0-0.0.build_number-release.zip for ESX 4.0.x hosts
  - upgrade-from-ESXi4.0-to-4.1.0-0.0.build_number-release.zip for ESXi 4.0.x hosts

VUM 4.1 Feature — Host Upgrade Releases
- You can upgrade multiple ESX/ESXi hosts of different versions simultaneously if you import a complete release bundle.
- You import and manage host upgrade files from the Host Upgrade Releases tab of the Update Manager Administration view.
- Wait until the file upload completes; the uploaded host upgrade release files appear in the Imported Upgrade Releases pane as an upgrade release.

VUM 4.1 Feature — Host Upgrade Releases
- Host upgrade releases are stored in the <patchStore> location specified in the vci-integrity.xml file, in the host_upgrade_packages folder.
- The Update Manager database view VUMV_HOST_UPGRADES can be used to locate them.

Patch Repository
- Patch and extension metadata is kept in the Update Manager patch repository.
- You can use the repository to manage patches and extensions, check for new patches and extensions, view patch and extension details, view which baselines a patch or extension is included in, view recalled patches, and import patches.
Import an Offline Patch to the Repository
- From the patch repository you can include available, recently downloaded patches and extensions in a baseline you select.
- Instead of using a shared repository or the Internet as the patch download source, you can import patches manually by using an offline bundle.
VMware Converter

Converter 4.2 (not 4.1)
- Physical-to-VM conversion support for Linux sources, including:
  - Red Hat Enterprise Linux 2.1, 3.0, 4.0, and 5.0
  - SUSE Linux Enterprise Server 8.0, 9.0, 10.0, and 11.0
  - Ubuntu 5.x, 6.x, 7.x, and 8.x
- Hot-cloning improvements to clone any incremental changes to the physical machine during the P2V conversion process.
- Support for converting new third-party image formats, including Parallels Desktop VMs and newer versions of Symantec, Acronis, and StorageCraft.
- Workflow automation enhancements: automatic source shutdown, automatic start-up of the destination VM, as well as shutting down one or more services at the source and starting up selected services at the destination.
- Destination disk selection and the ability to specify how the volumes are laid out in the new destination VM.
- Destination VM configuration, including CPU, memory, and disk controller type.
- Support for importing powered-off Microsoft Hyper-V R1 and Hyper-V R2 VMs.
- Support for importing Windows 7 sources.
- Ability to throttle the data transfer from source to destination based on network bandwidth or CPU.

Converter — Hyper-V Import
- Microsoft Hyper-V import:
  - Hyper-V can be compared to VMware Server: it runs on top of an operating system and by default is only manageable locally.
  - Until now, imports went through a P2V conversion inside the VM.
  - Converter now imports VMs from Hyper-V as a V2V conversion.
  - It collects information about the VMs from the Hyper-V server without going through the Hyper-V administration tools, using default Windows methods to access the VM.
- Requirements:
  - Converter needs administrator credentials to import a VM.
  - Hyper-V must be able to create a network connection to the destination ESX host.
  - The VM to be imported must be powered off.
  - The VM's OS must be a supported guest OS for vSphere.
Implementation Services
- Upgrading, next steps, etc.

Support Info
- VMware Converter plug-in: vSphere 4.1 and its updates/patches are the last releases for the VMware Converter plug-in for the vSphere Client. VMware will continue to update and support the free Converter Standalone product.
- VMware Guided Consolidation: vSphere 4.1 and its updates/patches are the last major releases for VMware Guided Consolidation.
- VMware Update Manager guest OS patching: Update Manager 4.1 and its updates are the last releases to support scanning and remediation of patches for Windows and Linux guest OSes. The ability to perform VM operations such as upgrading VMware Tools and VM hardware will continue to be supported and enhanced.
- VMware Consolidated Backup 1.5 U2: VMware has extended the end-of-availability timeline for VCB and added VCB support for vSphere 4.1. VMware supports VCB 1.5 U2 for vSphere 4.1 and its updates/patches through the end of their lifecycles.
- VMware Host Update utility: no longer used; use Update Manager or the CLI to patch ESX.
- The vSphere Client is no longer bundled with ESX/ESXi, reducing the image size by around 160 MB.

Support Info
- VMI paravirtualized guest OS support: vSphere 4.1 is the last release to support the VMI guest OS paravirtualization interface. For information about migrating VMs that are enabled for VMI so that they can run on future vSphere releases, see Knowledge Base article 1013842.
- vSphere Web Access: support is now on a best-effort basis.
- Linux guest OS customization: vSphere 4.1 is the last release to support customization for these Linux guest OSes:
  - Red Hat Enterprise Linux (AS/ES) 2.1, Red Hat Desktop 3, Red Hat Enterprise Linux (AS/ES) 3.0
  - SUSE Linux Enterprise Server 8
  - Ubuntu 8.04, Ubuntu 8.10, Debian 4.0, Debian 5.0
- Microsoft Clustering with Windows 2000 is not supported in vSphere 4.1; see the Microsoft website for additional information. Likely due to MSCS with Windows 2000 reaching end of life — needs to be double-confirmed.
vCenter Must Be Hosted on a 64-bit Windows OS
- A 32-bit OS is NOT supported as a host OS for vCenter in vSphere 4.1.
- Why the change?
  - Scalability is restricted by the x86 32-bit virtual address space; moving to 64-bit eliminates this problem.
  - Reduces development and QA cycles and resources (faster time to market).
- Two options:
  1. vCenter in a VM running a 64-bit Windows OS.
  2. vCenter installed on a physical 64-bit Windows OS.
- Best practice: use option 1.
- http://kb.vmware.com/kb/1021635
- vCenter — migration to 64-bit.

Data Migration Tool — What Is Backed Up?
- vCenter:
  - LDAP data
  - Configuration
  - Port settings: HTTP/S ports, heartbeat port, web services HTTP/S ports, LDAP / LDAP SSL ports
  - Certificates (SSL folder)
- Database: the bundled SQL Server Express only.
- Install data: license folder.

Data Migration Tool — Steps to Back Up the Configuration
- (Screenshot: example of the start of the backup.bat command running; see the sketch below.)
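The screenshot is not reproduced here; as a rough sketch of the workflow, assuming the data migration tool has been copied from the datamigration folder on the vCenter 4.1 media to a local folder on the existing 32-bit vCenter server (the path is a placeholder):

  :: Run from the copied datamigration folder on the old vCenter server
  cd C:\datamigration
  backup.bat

  :: backup.bat exports the configuration and the bundled SQL Server Express data;
  :: the resulting data folder is then copied to the new 64-bit server and
  :: restored there with install.bat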
Compatibility
- vSphere Client compatibility: you can use the "same" client to access 4.1, 4.0 and 3.5.
- vCenter Linked Mode:
  - vCenter 4.1 and 4.0 can co-exist in Linked Mode.
  - After both versions of the vSphere Client are installed, you can access vCenter linked objects with either client.
  - For Linked Mode environments with vCenter 4.0 and vCenter 4.1, you must have vSphere Client 4.0 Update 1 and vSphere Client 4.1.
- MS SQL Server: unchanged — 4.1, 4.0 U2, 4.0 U1 and 4.0 have identical support; a 32-bit DB is also supported.

Compatibility
- vCenter 4.0 does not support ESX 4.1: upgrade vCenter before upgrading ESX.
- vCenter 4.1 does not support ESX 2.5: ESX 2.5 has reached limited/non-support status.
- vCenter 4.1 adds support for ESX 3.0.3 U1.
- Storage: no change in the VMFS format.
- Network: Distributed Switch 4.1 needs ESX 4.1. Quiz: how do you upgrade?

Upgrading the Distributed Switch
- Source: manual — ESX Configuration Guide, see "Upgrade a vDS to a Newer Version".

Compatibility
- View: you need to upgrade to View 4.5; the View 4.0 Composer is a 32-bit application, while vCenter 4.1 is 64-bit.
- SRM:
  - You need to upgrade to SRM 4.1.
  - SRM 4.1 supports vSphere 4.0 U1, 4.0 U2 and 3.5 U5.
  - SRM 4.1 needs vCenter 4.1.
  - SRM 4.1 needs a 64-bit OS; SRM 4.1 adds support for Windows 2008 R2.
- CapacityIQ:
  - CapacityIQ 1.0.3 (the current shipping release) is not known to have any issues with vCenter 4.1, but you need to use the "-NoVersionCheck" flag when registering CIQ with it.
  - CapacityIQ 1.0.4 will be released soon to address that.

Compatibility: Windows 2008 R2
- This is for R2, not R1.
- This is about running the VMware products on Windows 2008 R2, not about hosting it as a guest OS (Windows 2008 as a guest is already supported on 4.0).
- Minimum vSphere product versions to run on Windows 2008 R2:
  - vSphere Client 4.1
  - vCenter 4.1
  - Guest OS customization for 4.0 and 4.1
  - vCenter Update Manager as its server (Update Manager is not yet supported for patching Windows 2008 R2 guests; it also does not patch Windows 7)
  - vCenter Converter
  - VMware Orchestrator (vCO): client and server 4.1
  - SRM 4.1

Known Issues
- Full list: https://www.vmware.com/support/vsphere4/doc/vsp_esxi41_vc41_rel_notes.html#sdk
- IPv6 is disabled by default when installing ESXi 4.1.
- Hardware iSCSI: Broadcom hardware iSCSI does not support jumbo frames or IPv6. Dependent hardware iSCSI does not support iSCSI access to the same LUN when a host uses dependent and independent hardware iSCSI adapters simultaneously.
- VM MAC address conflicts:
  - Each vCenter system has a vCenter instance ID, a number between 0 and 63 that is randomly generated at installation time but can be reconfigured after installation.
  - vCenter uses the instance ID to generate MAC addresses and UUIDs for VMs. If two vCenter systems have the same instance ID, they might generate identical MAC addresses for VMs. This can cause conflicts if the VMs are on the same network, leading to packet loss and other problems.
Thank You
- I'm sure you are tired too.
Useful References
- http://vsphere-land.com/news/tidbits-on-the-new-vsphere-41-release.html
- http://www.petri.co.il/virtualization.htm
- http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm
- http://www.petri.co.il/vmware-data-recovery-backup-and-restore.htm
- http://www.delltechcenter.com/page/VMware+Tech
- http://www.kendrickcoleman.com/index.php?/Tech-Blog/vm-advanced-iso-free-tools-for-advanced-tasks.html
- http://www.ntpro.nl/blog/archives/1461-Storage-Protocol-Choices-Storage-Best-Practices-for-vSphere.html
- http://www.ntpro.nl/blog/archives/1539-vSphere-4.1-Virtual-Serial-Port-Concentrator.html
- http://www.virtuallyghetto.com/2010/07/vsphere-41-is-gift-that-keeps-on-giving.html
- http://www.virtuallyghetto.com/2010/07/script-automate-vaai-configurations-in.html
- http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1516821,00.html
- http://vmware-land.com/esxcfg-help.html
- http://virtualizationreview.com/blogs/everyday-virtualization/2010/07/esxi-hosts-ad-integrated-security-gotcha.aspx
- http://www.MS.com/licensing/about-licensing/client-access-license.aspx#tab=2
- http://www.MSvolumelicensing.com/userights/ProductPage.aspx?pid=348