vSphere
vSphere Presentation Transcript

  • 1. vSphere 5.0 – What’s New. Lovas Balázs, VMware instructor, Arrow ECS Kft.
  • 2. Agenda • Platform • Misc • Storage • Network • HA • Data Recovery • AutoDeploy • SRM 5
  • 3. PLATFORM
  • 4. New ESXi Hardware Maximums. New for ESXi 5.0: – 2TB host memory – Up to 160 logical CPUs – 512 virtual machines per host – 2,048 virtual CPUs per host
  • 5. ESXi Convergence. Overview: – vSphere 5.0 will utilize the ESXi hypervisor exclusively – ESXi is the gold standard for hypervisors. Benefits: – Thin architecture – Smaller security footprint – Streamlined deployment and configuration – Simplified patching and updating model
  • 6. ESXi 5.0 Firewall Features • ESXi 5.0 has a new firewall engine that is not based on iptables. • The firewall is service-oriented and stateless.
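    The deck shows no commands, but the new firewall can be inspected and driven from PowerCLI. A minimal sketch, assuming a reachable vCenter and a host named esx01.example.com (all names hypothetical):

      # Requires VMware PowerCLI; server and host names are examples
      Connect-VIServer -Server vcenter.example.com
      $esx = Get-VMHost -Name esx01.example.com
      # List the service-oriented rulesets and their state
      Get-VMHostFirewallException -VMHost $esx | Select-Object Name, Enabled, ServiceRunning
      # Enable a single service, e.g. the SSH server
      Get-VMHostFirewallException -VMHost $esx -Name 'SSH Server' |
          Set-VMHostFirewallException -Enabled:$true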
  • 7. DCUI over SSH
  • 8. vSphere 5.0 – Scaling Virtual Machines. Overview: – Create virtual machines with up to 32 vCPUs and 1 TB of RAM – 4x the size of previous vSphere versions. Benefits: – Run even the largest applications in vSphere, including very large databases – Virtualize more applications than ever before (Tier 1 and 2)
  • 9. New Virtual Machine Features • vSphere 5.0 supports the industry’s most capable virtual machines. VM Scalability: – 32 virtual CPUs per VM – 1TB RAM per VM – 4x previous capabilities! – UI for multi-core virtual CPUs. Broader Device Coverage: – Client-connected USB devices – USB 3.0 devices – Smart Card Readers for VM Console Access – Support for Mac OS X servers. Richer Desktop Experience: – 3D graphics. Other new features: – VM BIOS boot order config API and PowerCLI interface – EFI firmware
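    As an illustration of the new maximums, a hedged PowerCLI sketch that creates a VM at the 5.0 limits (host, datastore and VM names are hypothetical; -MemoryGB/-DiskGB require a newer PowerCLI build):

      # Create a "monster VM": 32 vCPUs, 1TB RAM
      New-VM -Name bigdb01 -VMHost esx01.example.com -Datastore datastore1 `
          -NumCpu 32 -MemoryGB 1024 -DiskGB 500
      # Multi-core vCPU layout (e.g. 4 sockets x 8 cores) via the cpuid.coresPerSocket option
      New-AdvancedSetting -Entity (Get-VM bigdb01) -Name 'cpuid.coresPerSocket' `
          -Value 8 -Confirm:$false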
  • 10. Misc
  • 11. Update Manager Features • VM patching REMOVED • Optimized Cluster Patching and Upgrade: – Based on available cluster capacity, it can remediate an optimal number of ESX/ESXi servers simultaneously without virtual machine downtime. – For scenarios where turnaround time is more important than virtual machine uptime, you can remediate all ESX servers in a cluster simultaneously. • Less Downtime for VMware Tools Upgrade – can schedule the upgrade to occur at the next virtual machine reboot. • New Update Manager Utility: – helps users reconfigure the Update Manager setup – change the database password and proxy authentication – replace the SSL certificates for Update Manager.
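    The cluster-remediation workflow above can also be scripted. A hedged sketch, assuming the Update Manager PowerCLI plug-in is installed and the cluster and baseline names exist:

      $cluster  = Get-Cluster -Name Prod01
      $baseline = Get-Baseline -Name 'Critical Host Patches'
      # Attach, scan, then remediate; VUM decides how many hosts to patch in parallel
      Attach-Baseline -Baseline $baseline -Entity $cluster
      Scan-Inventory -Entity $cluster
      Remediate-Inventory -Baseline $baseline -Entity $cluster -Confirm:$false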
  • 12. Update Manager: ESX to ESXi Migration • Supported Paths – Migration from ESX (“Classic”) 4.x to ESXi 5.0 – For VUM-driven migration, pre-4.x hosts will have to be upgraded to 4.x first • Might be better just to do a fresh install of ESXi 5.0 • Preservation of Configuration Information – Most standard configurations will be preserved, but not all: • Information that’s not applicable to ESXi will not be preserved, e.g. – /etc/yp.conf (no NIS in ESXi) – /etc/sudoers (no sudo in ESXi) • Any additional custom configuration files will not be preserved, e.g. – Any scripts added to /etc/rc.d
  • 13. vSphere 5.0 – vCenter Server Appliance (Linux). Overview: – Run vCenter Server as a Linux-based appliance. Benefits: – Simplified setup and configuration – Enables deployment choices according to business needs or requirements – Leverages vSphere availability features for protection of the management layer
  • 14. vCenter Linux • vCenter Server Appliance (VCSA) consists of: – A pre-packaged 64-bit application running on SLES 11 • Distributed with sparse disks • Disk footprint: 3.6GB distribution, ~5GB minimum deployed, ~80GB maximum deployed – A built-in enterprise-level database with optional support for remote Oracle/DB2 databases. – Limits are the same for VC and VCSA • Embedded DB – 5 hosts/50 VMs • External DB – <300 hosts/<3000 VMs (64 bit) – A web-based configuration interface
  • 15. Configuration • Complete configuration is possible through a powerful web-based interface!
  • 16. vSphere 5.0 – Web Client. Overview: – Run and manage vSphere from any web browser anywhere in the world. Benefits: – Platform independence – Replaces Web Access GUI – Building block for cloud-based administration
  • 17. Web Client Use Case – VM Management • VM Provisioning • Edit VM, VM power ops, Snapshots, Migration • VM Resource Management • View all vSphere objects (hosts, clusters, datastores, folders, etc.) – Basic Health Monitoring – Viewing the VM console remotely – Search through large, complex environments – vApp Management • vApp Provisioning, vApp Editing, vApp Power Operations
  • 18. vSphere 5.0 – vMotion Enhancements • Multi-NIC Support • Supports up to four 10Gbps or sixteen 1Gbps NICs (each NIC must have its own IP) • A single vMotion can now scale over multiple NICs (load balanced across multiple NICs) • Faster vMotion times and a higher number of concurrent vMotions • Reduced Application Overhead • Slowdown During Page Send (SDPS) feature throttles busy VMs to reduce timeouts and improve success • Ensures less than 1-second switchover time in almost all cases • Support for higher-latency networks (up to ~10ms) • Extends vMotion capabilities over slower networks
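    Multi-NIC vMotion is configured simply by giving a host several vMotion-enabled VMkernel ports. A hedged PowerCLI sketch (switch, port group and IP values are examples):

      $esx = Get-VMHost -Name esx01.example.com
      $vsw = Get-VirtualSwitch -VMHost $esx -Name vSwitch1
      # Add a second vMotion-enabled VMkernel NIC; each NIC needs its own IP
      New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vsw -PortGroup vMotion-02 `
          -IP 192.168.50.12 -SubnetMask 255.255.255.0 -VMotionEnabled:$true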
  • 19. Host Profiles Enhancements • New features enable greater flexibility and automation – Integration with Auto Deploy – Host Profiles now supports a greatly expanded set of configurations, including: • iSCSI • FCoE • Native Multipathing • Device Claiming and PSP Device Settings • Kernel Module Settings • And more
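    Host profiles can also be captured and applied from PowerCLI. A minimal sketch with hypothetical host names (the target host should be in maintenance mode before applying):

      # Capture the reference host's configuration as a profile
      $ref     = Get-VMHost -Name esx01.example.com
      $profile = New-VMHostProfile -Name Gold-Host -ReferenceHost $ref
      # Apply it to another host and check compliance
      $esx2 = Get-VMHost -Name esx02.example.com
      Apply-VMHostProfile -Entity $esx2 -Profile $profile -Confirm:$false
      Test-VMHostProfileCompliance -VMHost $esx2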
  • 20. STORAGE
  • 21. VMFS-5 vs VMFS-3 feature comparison:
      Feature | VMFS-3 | VMFS-5
      2TB+ VMFS volumes | Yes (using extents) | Yes
      Support for 2TB+ physical RDMs | No | Yes
      Unified block size (1MB) | No | Yes
      Atomic Test & Set enhancements (part of VAAI, locking mechanism) | No | Yes
      Sub-blocks for space efficiency | 64KB (max ~3k) | 8KB (max ~30k)
      Small file support | No | 1KB
  • 22. VMFS-3 to VMFS-5 Upgrade • The Upgrade to VMFS-5 is clearly displayed in the vSphere Client under Configuration -> Storage view. • It is also displayed in the Datastores -> Configuration view. • Non-disruptive upgrades.
  • 23. VAAI Thin Provisioning – Dead Space Reclamation • Dead space is previously written blocks that are no longer used by the VM, for instance after a Storage vMotion • vSphere conveys block information to the storage system via VAAI, and the storage system reclaims the dead blocks
  • 24. ‘Out Of Space’ User Experience • A space exhaustion warning appears in the UI • On space exhaustion, affected VMs are paused while the LUN stays online awaiting space allocation • Resolve with a Storage vMotion based evacuation or by adding space
  • 25. Profile-driven Storage. Overview: – Tier storage based on performance characteristics (i.e. datastore cluster) – Simplify initial storage placement – Load balance based on I/O. Benefits: – Eliminate VM downtime for storage maintenance – Reduce time for storage planning/configuration – Reduce errors in the selection and management of VM storage – Increase storage utilization by optimizing placement
  • 26. Selecting a Storage Profile during provisioning • By selecting a VM Storage Profile, datastores are now split into Compatible & Incompatible. The Celerra_NFS datastore is the only datastore which meets the GOLD profile requirements – i.e. it is the only datastore that has our user-defined storage capability associated with it.
  • 27. Storage Capabilities & VM Storage Profiles • Storage capabilities are surfaced by VASA or user-defined • A VM Storage Profile references storage capabilities • The VM Storage Profile is associated with a VM, which is then reported as Compliant or Not Compliant
  • 28. Software FCoE Adapters • A software FCoE adapter is software code that performs some of the FCoE processing. • It can be used with a number of NICs that support partial FCoE offload. • Unlike the hardware FCoE adapter, the software adapter needs to be activated, similar to software iSCSI.
  • 29. Storage vMotion [Architecture diagram labels: Guest OS, VMM/Guest, Userworld, Mirror Driver, Datamover, VMkernel, Source, Destination]
  • 30. Storage DRS • Storage DRS provides the following: 1. Initial placement of VMs and VMDKs based on available space and I/O capacity. 2. Load balancing between datastores in a datastore cluster via Storage vMotion based on storage space utilization. 3. Load balancing via Storage vMotion based on I/O metrics, i.e. latency.
  • 31. Datastore Cluster • An integral part of SDRS is to create a group of datastores called a datastore cluster. • Datastore cluster without Storage DRS – simply a group of datastores. • Datastore cluster with Storage DRS – a load-balancing domain similar to a DRS cluster. • A datastore cluster without SDRS is just a datastore folder; it is the functionality provided by SDRS that makes it more than just a folder. (Example: four 500GB datastores grouped into one 2TB datastore cluster; a sketch follows.)
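    A hedged sketch of building such a cluster in PowerCLI (the datastore-cluster cmdlets shipped in later PowerCLI releases; all names are examples):

      # Group four datastores into a datastore cluster
      $dsc = New-DatastoreCluster -Name DSC-Gold -Location (Get-Datacenter DC01)
      Get-Datastore datastore1, datastore2, datastore3, datastore4 |
          Move-Datastore -Destination $dsc
      # With SDRS enabled, the "folder" becomes a load-balancing domain
      Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated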
  • 32. Storage DRS Operations - Thresholds
  • 33. Storage DRS Operations • VMDK affinity: – Keep a virtual machine’s VMDKs together on the same datastore – Maximize VM availability when all disks are needed in order to run – On by default for all VMs. • VMDK anti-affinity: – Keep a VM’s VMDKs on different datastores – Useful for separating log and data disks of database VMs – Can select all or a subset of a VM’s disks. • VM anti-affinity: – Keep VMs on different datastores – Similar to DRS anti-affinity rules – Maximize availability of a set of redundant VMs
  • 34. So what does it look like? Provisioning…
  • 35. So what does it look like? Load Balancing • It will show “utilization before” and “after” • There’s always the option to override the recommendations
  • 36. VSA – vSphere Storage Appliance
  • 37. Introduction • Each ESXi server has a VSA deployed to it as a virtual machine. • The appliances use the available space on the local disk(s) of the ESXi servers and present one replicated NFS volume per ESXi server. This replication of storage makes the VSA very resilient to failures. [Diagram: three vSphere hosts, each running a VSA presenting an NFS volume, managed from the vSphere Client via VSA Manager]
  • 38. VSA cluster with 2 members [Diagram: vCenter Server with VSA Manager manages the VSA Cluster Service; each VSA datastore holds its own primary volume plus a replica of the other member’s volume (Volume 1 / Volume 2 with replicas)]
  • 39. VSA cluster with 3 members [Diagram: vCenter Server with VSA Manager; each of the three VSA datastores holds its own primary volume plus a replica of another member’s volume (Volumes 1–3 with replicas)]
  • 40. Simplified UI for VSA Cluster Configuration – four steps: 1) Introduction, 2) Datacenter Selection, 3) ESXi Host Selection, 4) IP Address Assignment
  • 41. VSA Cluster Recovery • In the event of a vCenter Server loss, re-install the VSA plugin and choose to recover the VSA cluster.
  • 42. vSphere Storage Appliance – Licensing • Shared storage capabilities, without the cost and complexity. Licensing: – vSphere Storage Appliance is licensed on a per-instance basis (like vCenter Server) – Each VSA instance supports up to 3 nodes – At least two nodes need to be part of a VSA deployment. Pricing: – $5,995 list price – Available at 40% off when purchased with vSphere Essentials Plus: Essentials Plus $4,495 + vSphere Storage Appliance $3,500 (40% off) = $7,995 list price
  • 43. NETWORK
  • 44. LLDP Neighbour Info – vSphere side Sample output using LLDPD Utility
  • 45. NetFlow • NetFlow is a networking protocol that collects IP traffic information as records and sends them to third-party collectors such as CA NetQoS, NetScout etc. • The collector/analyzer reports on information such as: – Current top flows consuming the most bandwidth – Which flows are behaving irregularly – Number of bytes a particular flow has sent and received in the past 24 hours. [Diagram: VMs on a vDS; a NetFlow session from the host over a trunk to a physical switch and on to the collector]
  • 46. Port Mirror [Diagram: four mirroring scenarios – ingress or egress source traffic mirrored to a destination on the vDS, and ingress or egress source traffic mirrored to an external system; legend distinguishes intra-VM traffic, inter-VM traffic and mirror flow]
  • 47. NETIOC [Diagram: vNetwork Distributed Switch with NETIOC resource pools for Mgmt, NFS, iSCSI, vMotion, FT and HBR traffic plus per-tenant VM pools (Coke VMs, Pepsi VMs), each assigned shares, a limit in Mbps and an 802.1p tag, governed by a teaming policy]
  • 48. 802.1p Tag for Resource Pool • The vSphere infrastructure does not provide QoS based on these tags. • The vDS simply tags the packets according to the resource pool setting, and it is down to the physical switch to understand the tag and act upon it.
  • 49. High Availability
  • 50. HA • The vSphere HA feature gives organizations the confidence to run their critical business applications. Enhancements provide: • A solid, scalable foundation upon which to build toward the cloud • Ease of management • Ease of troubleshooting • Increased communication mechanisms. [Diagram: a resource pool of servers in which VMs from a failed server restart on the remaining operating servers]
  • 51. vSphere HA Primary Components • Every host runs an agent – Referred to as ‘FDM’ or Fault Domain Manager – One of the agents within the cluster is chosen to assume the role of the Master – All other agents assume the role of Slaves • There is no more Primary/Secondary concept with vSphere HA – There is only one Master per cluster during normal operations
  • 52. Storage Level Communications • One of the most exciting new features of vSphere HA is its ability to use a storage subsystem for communication. • The datastores used for this are referred to as ‘Heartbeat Datastores’. • Heartbeat datastores are used as a communication channel only when the management network is lost, such as in the case of isolation or network partitioning.
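    Heartbeat-datastore behavior is tunable through HA advanced options. A hedged PowerCLI sketch (cluster name is an example; das.heartbeatDsPerHost is the documented option, default 2):

      $cluster = Get-Cluster -Name Prod01
      Set-Cluster -Cluster $cluster -HAEnabled:$true -Confirm:$false
      # Use 3 heartbeat datastores per host instead of the default 2
      New-AdvancedSetting -Entity $cluster -Type ClusterHA `
          -Name 'das.heartbeatDsPerHost' -Value 3 -Confirm:$false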
  • 53. Data Recovery
  • 54. vDR: Deduplication Performance Improvements • Overall improvements: 1. A new compression algorithm speeds up compressing of data 2. More efficient IO path when accessing slab files 3. Transactions grouped together with their parent (i.e. daily backups of the same VMs stored in the same slab file) • Integrity Check improvements: 1. Periodic checkpoints allow suspending and resuming an IC operation 2. Similar transactions grouped together so they can be processed in bulk 3. Additional tweaking of IC options via the datarecovery.ini file (for example, which day the full integrity check runs and its monthly frequency)
  • 55. Email Reports – Sample Good backup – no errors
  • 56. Supported Environment • VMware vSphere vCenter 4.1 Update 1 and later • VMware vSphere 4.0 Update 3 and later
  • 57. vDR: Destination Maintenance • Allows separation of backup and maintenance windows. Some use cases: 1) Delay the start of integrity checks so backups complete as expected 2) Ensure no activity on the dedupe store so files can be safely copied off to alternate media
  • 58. Ability To Suspend Backup Jobs • Backup jobs can be suspended individually • Right-click a backup job and select Suspend Future Tasks • Currently running tasks are not affected
  • 59. New datarecovery.ini options:
      – FullIntegrityCheckInterval: number of days between automated full integrity checks; range 1-30, default 7 days
      – FullIntegrityCheckDay: day of the week the automated full integrity check runs; 1=Sunday, 2=Monday, etc.
      – SerializeHotadd: disables parallel SCSI Hot-Add operations and returns hot-add behavior to the VDR 1.2 level; 0-1, default 0
      – BackupUnusedData: excludes backups of Windows and Linux swap partitions; 0-1, default 0
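    Put together, a datarecovery.ini fragment using these options might look like the sketch below; the exact file location and section header depend on the vDR appliance build, and the values simply mirror the table above:

      [Options]
      ; weekly full integrity check, run on Sundays
      FullIntegrityCheckInterval=7
      FullIntegrityCheckDay=1
      ; keep parallel SCSI Hot-Add (default behavior)
      SerializeHotadd=0
      ; swap-partition handling, per the table above (default 0)
      BackupUnusedData=0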
  • 60. Auto Deploy
  • 61. vSphere 5.0 – Auto Deploy. Overview: – Deploy and patch vSphere hosts in minutes using a new “on the fly” model – Coordination with vSphere Host Profiles. Benefits: – Rapid deploy/recovery/patching of hosts – Centralized host and image management – Reduced manual deployment and patch processes – No boot disks. Target audience: – Customers with large vSphere deployments – High host refresh rates. [Diagram: vCenter Server with Auto Deploy pushing image profiles and host profiles to vSphere hosts]
  • 62. Composition of an ESXi Image: Core Hypervisor, Drivers, CIM Providers, Plug-in Components
  • 63. Building an Image • On a Windows host with PowerCLI and the Image Builder snap-in, Image Builder pulls ESXi VIBs, driver VIBs and OEM VIBs from depots, combines them into an image profile, and generates a new image – either a PXE-bootable image or an ISO.
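    The Image Builder cmdlets follow exactly this flow; a hedged sketch (depot URL, profile and VIB names are examples):

      # Point Image Builder at a software depot
      Add-EsxSoftwareDepot 'http://depot.example.com/vmw-depot-index.xml'
      # Clone a stock profile and add a vendor driver VIB
      $ip = New-EsxImageProfile -CloneProfile 'ESXi-5.0.0-standard' `
          -Name 'ESXi50-Custom' -Vendor 'example.com'
      Add-EsxSoftwarePackage -ImageProfile $ip -SoftwarePackage 'net-exampledriver'
      # Generate a new image: PXE-bootable bundle or installable ISO
      Export-EsxImageProfile -ImageProfile $ip -ExportToBundle -FilePath 'C:\ESXi50-Custom.zip'
      Export-EsxImageProfile -ImageProfile $ip -ExportToIso    -FilePath 'C:\ESXi50-Custom.iso'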
  • 64. Auto Deploy Example – Initial Boot • Infrastructure: DHCP and TFTP servers; an Auto Deploy server with a rules engine and “waiter”; depots of ESXi, driver and OEM VIBs organized into image profiles; and vCenter Server holding host profiles and clusters.
  • 65. Step 1) The new host PXE boots: it sends a DHCP request and fetches a gPXE image from TFTP.
  • 66. Step 2) The host contacts the Auto Deploy server.
  • 67. Step 3) The rules engine determines the Image Profile, Host Profile and cluster (e.g. Image Profile X, Host Profile 1, Cluster B).
  • 68. Step 4) Auto Deploy pushes the image to the host and applies the host profile; both are cached on the host.
  • 69. Step 5) The host is placed into its cluster.
  • 70. What is Auto Deploy • No boot disk – so where does the host’s state go? – Platform composition (ESXi base, drivers, CIM providers, …) comes from the Image Profile – Configuration (networking, storage, date/time, firewall, admin password, …) comes from the Host Profile – Running state (VM inventory, HA state, license, DPM configuration) is held by vCenter Server – Event recording (log files, core dump) goes to add-on components
  • 71. Auto Deploy Components:
      – PXE boot infrastructure (DHCP server, TFTP server): set up independently; gPXE file comes from vCenter; the Auto Deploy appliance can be used
      – Auto Deploy server (rules engine, PowerCLI snap-in, web server): build/manage rules, match a server to an Image Profile and Host Profile, deploy the server
      – Image Builder (image profiles, PowerCLI snap-in): combine the ESXi image with 3rd-party VIBs to create custom image profiles
      – vCenter Server (stores rules, host profiles, answer files): provides the store for rules; host configs saved in Host Profiles; custom host settings saved in Answer Files
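    The rules engine above is driven from the same PowerCLI session; a hedged sketch that maps new hosts to an image profile, host profile and cluster (all names and patterns are examples):

      # Match hosts by vendor string and management-network address range
      New-DeployRule -Name 'ClusterB-Rule' `
          -Item 'ESXi50-Custom', 'HostProfile1', 'Cluster B' `
          -Pattern 'vendor=Example Inc.', 'ipv4=192.168.10.50-192.168.10.99'
      # Activate the rule by adding it to the working rule set
      Add-DeployRule -DeployRule 'ClusterB-Rule'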
  • 72. Training
  • 73. vSphere training – ARROW ECS • vSphere 5: What's New (2 days) – promotional registration until the end of the year • Two-engineer promotion: 255,000 HUF per pair of students instead of 338,000 HUF • VCP upgrade: What's New + VCP exam voucher, 189,000 HUF • Course price: 169,000 HUF • Dates: Oct 3, Oct 27, Nov 24 • vmware@arrowecs.hu
  • 74. vSphere training – ARROW ECS • VMware vSphere: Install, Configure, Manage [v5] (4 days) • List price: 290,000 HUF • Free VCP voucher for Webex attendees! Voucher code: webex • Course dates: Oct 17, Nov 14 • vmware@arrowecs.hu
  • 75. Q/A
  • 76. SRM v5
  • 77. vSphere Replication Architecture • Tightly integrated with SRM, vCenter and ESX. [Diagram: at the protected site, a VSR agent on each ESX/ESXi host replicates to a vSphere Replication Server at the recovery site; each site runs Site Recovery Manager, a vSphere Replication Management Server and vCenter Server, over any storage supported by vSphere]
  • 78. Replication UI – Select VMs to replicate from within the vSphere Client via right-click options – Can be done on one VM or multiple VMs simultaneously
  • 79. vSphere Replication 1.0 Limitations • Focus on virtual disks of powered-on VMs. – ISOs and floppy images are not replicated. – Powered-off/suspended VMs are not replicated. – Non-critical files are not replicated (e.g. logs, stats, swap, dumps). • vSR works at the virtual device layer. – Snapshots work with vSR; the snapshot is replicated, but the VM is recovered with the snapshots collapsed. – Physical RDMs are not supported. • FT, linked clones and VM templates are not supported with vSR. • Automated failback of vSR-protected VMs will arrive later but will be supported in the future. • Virtual Hardware 7 or later is required in the VM.
  • 80. vSphere Replication vs Storage-based Replication:
      – vSphere Replication (provider: VMware). Cost: low-end storage supported; no additional replication software. Management: per-VM granularity; managed directly in vCenter. Performance: 15 min RPOs; scales to 500 VMs; file-level consistency; no automated failback, FT, linked clones or physical RDMs.
      – Storage-based replication (provider: storage vendor). Cost: higher-end replicating storage; additional replication software. Management: LUN-to-VM layout; storage team coordination. Performance: synchronous replication; high data volumes; application consistency possible.
  • 81. Planned Migrations = Consistency & No Data Loss. Overview: – Two workflows can be applied to recovery plans: DR failover, and planned migration – Planned migration ensures application consistency and no data loss during migration: 1) graceful shutdown of production VMs in an application-consistent state, 2) data sync to complete replication of the VMs, 3) recovery of the fully replicated VMs. Benefits: – Better support for planned migrations – No loss of data during the migration process – Recover ‘application-consistent’ VMs at the recovery site
  • 82. Reprotect – After you use planned migration (or a DR event) to migrate to your recovery site, you must reprotect to enable failback.
  • 83. Automated Failback. Overview: – Re-protect VMs from Site B to Site A: reverse replication, apply reverse resource mappings – Automate failover from Site B to Site A: reverse the original recovery plan. Benefits: – Simplified failback process – Automated replication management – Eliminates the need to set up a new recovery plan – Streamlines frequent bi-directional migrations. Restrictions: – Does not apply if Site A has undergone major changes or been rebuilt – Not available with vSphere Replication
  • 84. SRM Scalability (maximum / enforced):
      – Protected virtual machines total: 1000 / not enforced
      – Protected virtual machines in a single protection group: 500 / not enforced
      – Protection groups: 250 / not enforced
      – Simultaneously running recovery plans: 30 / not enforced
      – vSphere Replicated virtual machines: 500 / not enforced
  • 85. Q/A