vSphere 5 Roadshow (final)

Slide notes
  • Every one of our customers has existing applications, running in existing datacenters, that represent significant investments and ongoing value. The first thing we are doing with these customers is helping them stand up a private cloud, to get the most efficiency and agility out of their existing assets. And this can be done in a pragmatic, evolutionary way. We have over 250,000 customers worldwide that are already on this path, because they are leveraging vSphere to virtualize the entire fabric of the datacenter, including CPU & memory, storage, and networking. And because they are using vSphere, they get built-in high availability and automated, dynamic resource scheduling to give them the cloud attributes of elastic, pooled capacity. <click> With virtualization in place, the independent silos are broken down, enabling us to automate many of the mundane, repetitive administration tasks with our vCenter management suite, further decreasing opex in the datacenter.
  • With vSphere 5.0, multiple enhancements have been introduced to increase the efficiency of the Storage vMotion process, improve overall performance, and enhance supportability. Storage vMotion in vSphere 5.0 now also supports the migration of virtual machines with a vSphere snapshot and the migration of linked clones.
  • The Storage vMotion control logic is in the VMX. The Storage vMotion thread first creates the destination disk. After that, a stun/unstun of the VM allows the SVM mirror driver to be installed. I/Os to the source will be mirrored to the destination. The new driver leverages the Data Mover to implement a single-pass block copy of the source to the destination disk. In addition to this, it mirrors I/O between the two disks. This is a synchronous write, meaning that the mirror driver acknowledges the write to the guest OS only when it has received the acknowledgement from both the source and the destination.
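From the operator's side, the whole mechanism above is driven by a single relocate request. A minimal PowerCLI sketch, assuming a hypothetical vCenter address (vcenter.example.com), VM name ("web01") and target datastore ("Tier2-DS01"):

```powershell
# Hedged sketch: triggering a Storage vMotion of a running VM from PowerCLI.
# Server, VM and datastore names are placeholders.
Connect-VIServer -Server vcenter.example.com

# Relocating only the storage of a powered-on VM performs a Storage vMotion,
# carried out by the mirror-driver / data-mover path described above.
Move-VM -VM (Get-VM -Name "web01") -Datastore (Get-Datastore -Name "Tier2-DS01")
```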
  • Accelerate the VM storage placement decision to a storage pod by capturing the VM's storage SLA requirements and mapping them to storage with the right characteristics and spare space.
  • Storage DRS provides initial placement recommendations to datastores in a Storage DRS-enabled datastore cluster based on I/O and space capacity. During the provisioning of a virtual machine, a datastore cluster can be selected as the target destination for the virtual machine or virtual machine disk, after which a recommendation for initial placement is made based on I/O and space capacity. As just mentioned, initial placement in a manual provisioning process has proven to be very complex in most environments, and as a result important provisioning factors like current I/O load or space utilization are often ignored. Storage DRS ensures initial placement recommendations are made in accordance with space constraints and with respect to the goals of space and I/O load balancing. Although people are really excited about automated load balancing, it is initial placement that most people will start off with and benefit from the most, as it reduces the operational overhead associated with the provisioning of virtual machines.
  • Ongoing balancing recommendations are made when one or more datastores in a datastore cluster exceed the user-configurable space utilization or I/O latency thresholds. These thresholds are typically defined during the configuration of the datastore cluster. Storage DRS uses vCenter Server's datastore utilization reporting mechanism to make recommendations whenever the configured space utilization threshold is exceeded. I/O load is currently evaluated every 8 hours by default, with a default latency threshold of 15 ms. Only when this I/O latency threshold is exceeded will Storage DRS calculate all possible moves to balance the load accordingly, while considering the cost and the benefit of the migration. If the benefit doesn't last for at least 24 hours, Storage DRS will not make the recommendation.
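As a rough illustration of the space side of that trigger, the same check can be approximated from PowerCLI; the 80% figure mirrors the default space threshold quoted later in the deck, and the latency half of the check is left to Storage DRS itself:

```powershell
# Hedged sketch: listing datastores whose space utilisation exceeds the default
# 80% Storage DRS space threshold. This only approximates the space half of the
# trigger; I/O latency is evaluated internally by Storage DRS (via SIOC stats).
Connect-VIServer -Server vcenter.example.com

$thresholdPercent = 80
Get-Datastore | ForEach-Object {
    $usedPercent = (1 - ($_.FreeSpaceMB / $_.CapacityMB)) * 100
    if ($usedPercent -gt $thresholdPercent) {
        "{0}: {1:N1}% used - above the space threshold" -f $_.Name, $usedPercent
    }
}
```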
  • Today: currently we identify the requirements of the virtual machine, try to find the optimal datastore based on those requirements, and create the virtual machine or disk. In some cases customers even periodically check whether VMs are compliant, but in many cases this is neglected. Storage DRS: Storage DRS only partly solves that problem, as we still need to manually identify the correct datastore cluster, and even when grouping datastores into a cluster we need to manually verify that all LUNs are "alike". And again there is that manual periodic check. Storage DRS and Profile Driven Storage: when using Profile Driven Storage and Storage DRS in conjunction, these problems are solved. A datastore cluster can be created based on the characteristics provided through VASA or the custom tags. When deploying virtual machines, a storage profile can be selected, ensuring that the virtual machine will be compliant.
  • Step 1: the diagram we just showed gave a total overview, but most customers are concerned about just one thing, compliance, so how does this work? As mentioned, capabilities are surfaced through VASA. Step 2: these capabilities are linked to a specific VM Storage Profile. Step 3: when a new virtual machine is created, or an existing virtual machine is tagged, the result (Step 4) will be either compliant or not compliant. It is as simple as that.
  • Auto Deploy is a new method for provisioning ESXi hosts in vSphere 5.0. At a high level, the ESXi host boots over the network (using PXE/gPXE) and contacts the Auto Deploy server, which loads ESXi into the host's memory. After loading the ESXi image, the Auto Deploy server coordinates with vCenter Server to configure the host using Host Profiles and Answer Files (answer files are new in 5.0). Auto Deploy eliminates the need for a dedicated boot device, enables rapid deployment of many hosts, and also simplifies ESXi host management by eliminating the need to maintain a separate "boot image" for each host.
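The mapping from "this host is PXE booting" to "give it this image, this host profile and this cluster" is expressed as deploy rules. A minimal PowerCLI sketch, where the image profile, host profile, cluster and IP range are all placeholders (the image profile is assumed to come from a software depot added earlier in the session):

```powershell
# Hedged sketch: registering an Auto Deploy rule so that hosts booting from a
# given management IP range receive a specific image profile, host profile and
# cluster placement. All names and the IP range are placeholders.
Connect-VIServer -Server vcenter.example.com

$img     = Get-EsxImageProfile | Where-Object { $_.Name -like "ESXi-5.0.0-*-standard" } | Select-Object -First 1
$hp      = Get-VMHostProfile -Name "GoldHostProfile"
$cluster = Get-Cluster -Name "Prod-Cluster"

New-DeployRule -Name "ProdHosts" -Item $img, $hp, $cluster `
    -Pattern "ipv4=192.168.10.50-192.168.10.99"

# Activate the rule so the Auto Deploy server starts applying it
Add-DeployRule -DeployRule "ProdHosts"
```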
  • Agent is ~50Kb in size. FDM Agent is not tied to vpxd at all
  • Customers are getting hit by core and physical memory restrictions: "How will I license vSphere when my CPUs have more than 6 or 12 cores?" CPU core and physical entitlements are tied to a single server and cannot be shared among multiple ones, reducing flexibility and utilization. The rapid introduction of new hardware technologies requires constant amendments to the licensing model, creating uncertainty over planning: "What happens if I use SSD or hyperthreading, etc.?" Hardware-based entitlements make it difficult for customers to transition to the usage-based cost and chargeback models that characterize cloud computing and IT as a Service.
  • The FAST Suite improves performance and maximizes storage efficiency by deploying this FLASH 1st strategy. FAST Cache, an extendable cache of up to 2 TB, gives a real-time performance boost by ensuring the hottest data is served from the highest-performing Flash drives for as long as needed. FAST VP then complements FAST Cache by optimizing storage pools on a regular, scheduled basis. You define how and when data is tiered using policies that dynamically move the most active data to high-performance drives (e.g., Flash) and less active data to high-capacity drives, all in one-gigabyte increments, for both block and file data. Together, they automatically optimize for the highest system performance and the lowest storage cost simultaneously.
  • The slide above shows how FAST Cache works. FAST Cache is based on the locality of reference of the dataset requested by a host. Systems with high locality of reference confine the majority of I/Os to a relatively small capacity, whereas systems with low locality of reference spread I/Os more evenly across the total capacity – this is also sometimes referred to as skew. A dataset with high locality of reference/skew (blocks close to one another tending to be accessed together) is a good candidate to be copied to FAST Cache. By promoting this dataset to FAST Cache, any subsequent access to this data for read or write is serviced faster from Flash drives. This reduces the workload on back-end disk drives. A write operation works in similar fashion. Writes with high locality of reference are directed to Flash drives. When the time comes to flush this data to disk, the flushing operation is significantly faster, as writes are now at Flash drive speeds. This can have a big impact on heavy-write workloads that require a large system cache to be flushed to the underlying disks more frequently. The FAST Cache map is maintained in the DRAM cache and consumes DRAM space, so care should be taken in choosing which pools and RAID-group LUNs it should be enabled for. EMC TS resources have tools, available to our direct and channel champions community, to analyze existing environments for the best candidates. FAST Cache operates at a 64 KB granularity for increased efficiency. If a 64 KB block is referenced 3 times in a given period of time (the time will depend on the I/O activity of the system), the block will be promoted into FAST Cache. As the data ages and becomes less active, it will fall out of FAST Cache to be replaced by a more active chunk of data.
  • The second feature provided in the FAST Suite, which is highly complementary to FAST Cache, is FAST for Virtual Pools (FAST VP). The combination of FAST Cache and FAST VP addresses the perennial storage management problem: the cost of optimizing the storage system. In many cases prior to FAST and FAST Cache, it was simply too resource-intensive to perform manual optimization, and many customers simply overprovisioned storage to ensure the performance requirements of a dataset were met. With the arrival of Flash drives and the FAST Suite, we have a better way to achieve this fine cost/performance balance. The classic approach to storage provisioning can be repetitive and time-consuming and often produces uncertain results. It is not always obvious how to match capacity to the performance requirements of a workload's data. Even when a match is achieved, requirements change, and a storage system's provisioning may require constant adjustment. Storage tiering is one solution. Storage tiering puts several different types of storage devices into an automatically managed storage pool. LUNs use the storage capacity they need from the pool, on the devices with the performance they need. Fully Automated Storage Tiering for Virtual Pools (FAST VP) is the EMC VNX feature that allows a single LUN to leverage the advantages of Flash, SAS, and Near-line SAS drives through the use of pools. FAST solves these issues by providing automated sub-LUN-level tiering. FAST collects I/O activity statistics at a 1 GB granularity (known as a slice). The relative activity level of each slice is used to determine which slices should be promoted to higher tiers of storage. Relocation is initiated at the user's discretion through either manual initiation or an automated scheduler.
Through the frequent relocation of 1 GB slices, FAST continuously adjusts to the dynamic nature of modern storage environments. This removes the need for manual, resource-intensive LUN migrations while still providing the performance levels required by the most active dataset, thereby optimizing for cost and performance simultaneously.
  • The Culham is managed by Unisphere, and the base software includes file deduplication & compression, block compression, virtual provisioning and SAN Copy. Rather than ordering a number of "a la carte" products, we've simplified the optional software into five attractively priced suites. The FAST Suite improves performance and maximizes storage efficiency; it includes FAST VP, FAST Cache, Unisphere Analyzer, and Unisphere Quality of Service Manager. The Security and Compliance Suite helps ensure that data is protected from unwanted changes, deletions, and malicious activity; it includes the event enabler for anti-virus, quota management & auditing, file-level retention and host encryption. The Local Protection Suite delivers any-point-in-time recovery with DVR-like roll-back capabilities; copies of production data can also be used for development, testing, decision support and backup. This suite includes SnapView, SnapSure and RecoverPoint/SE CDP. The Remote Protection Suite delivers unified block and file replication, giving customers one way to protect everything better; it includes Replicator, MirrorView and RecoverPoint/SE CRR. The Application Protection Suite automates application-consistent copies and proves customers can recover to defined service levels; this suite includes Replication Manager and Data Protection Advisor for Replication. Finally, the Total Efficiency and Total Protection packs bundle the suites to further simplify ordering and lower costs.
  • The EMC VNX Series also had the lowest overall response time (ORT) of the systems tested, taking the top spot with a response time of 0.96 milliseconds. EMC's response time is 3 times faster than the IBM offering in second place. Faster response times enable end users to access information more quickly and efficiently. Chris Mellor, in The Register blog entry "EMC kills SPEC benchmark with all-flash VNX" (http://www.theregister.co.uk/2011/02/23/enc_vnx_secsfs2008_benchmark/), writes about IBM, HP and NetApp: "For all three companies, any ideas they previously had of having top-level SPECsfs2008 results using disk drives have been blown out of the water by this EMC result. It is a watershed benchmark moment. ®"
Slide transcript

    1. Blue Chip
       Change is the only constant in business... ...evolution is the key to survival
       www.bluechip.uk.com
    2. Who is Blue Chip...?
       Established in 1992, Blue Chip is one of the UK's leading providers of business IT infrastructure solutions. We provide consultancy, design, procurement, implementation, support and maintenance, training and outsourcing services to organisations across the UK.
       As your solutions partner, Blue Chip will ensure that your organisation keeps pace with the ever changing demands of technology lifecycle management. The result is that your organisation can evolve into an innovative business that is fully enabled by technology.
       Key Facts and Figures
       • Locations in Poole, Bedford, Southampton and Leeds
    3. The South's largest VUE and Prometric training centre, with capacity for 100+ delegates a week
    4. £3 million worth of dedicated facilities – training, hosting and offices
    5. 160+ staff, 75% of whom are technical resources
    6. 1000+ clients varying in size between 5 and 5000 users
    7. CRN and CNA award winners
    8. Supporting in excess of 80,000 PCs
       Our Key Areas... Technology, Services and Training
       • Virtualisation – VMware vSphere, VMware View, Hyper-V, Citrix XenDesktop, Citrix XenApp, App-V, RDS
    9. Microsoft – core infrastructure services, Active Directory, Exchange, SharePoint, SQL, System Center
    10. Unified Communications – Cisco UC Manager and Microsoft Office Communications Server / Lync
    11. Technical Training – Microsoft, Cisco, VMware, Mac OS X, UNIX, Linux, Citrix, ITIL, PRINCE2
    12. Mobility and Wireless – Cisco, Microsoft
    13. Resourcing & Outsourcing – fully managed services, TUPE agreements, contract, project management
    14. Service Desk and Support – 24/7 service desk, SLAs, system monitoring, warranty management
    15. Proactive Maintenance – scheduled administration and system monitoring
    16. Storage and Data Archiving – EMC, Symantec Enterprise Vault
    17. High Availability Solutions – VMware, Microsoft, HP, EMC
    18. Security & Unified Threat Management – Fortinet and Cisco
    19. Business Management Applications – Microsoft Dynamics NAV, Microsoft CRM, SharePoint
    20. Cloud Services – IaaS platform, offsite backup & DR
        Our clients…
        Industry sectors include – Education, Finance, Medical/Healthcare, Logistics and Transport, Manufacturing, Construction and Housing, Professional Services, Legal, Not for Profit and Public Sector.
    21. Our partners…
        Blue Chip recognises that to deliver the best, we must work with the best! Through carefully selected and managed alliances, Blue Chip holds strategic partnerships with the world's best of breed manufacturers.
    22. Cloud Infrastructure Launch – What's New?
        Clive Wenman
        Systems Engineer – VMware
    23. Virtualisation is the Foundation for Cloud
        "Virtualization is a modernization catalyst and unlocks cloud computing." – Gartner, May 2010
    24. Virtualising... Business Critical Apps
        • The Niche Apps (LOB apps, Tier 2 DB, etc.)
        >60% Virtualized
        • SAP
    25. Custom Java Apps
    26. SharePoint
    27. Exchange
        Accelerate App Lifecycle / Improve App Quality of Service / Improve App Efficiency
        • SQL
    28. Oracle
        30% Virtualized
        • The Easy Apps (infrastructure, file, print)
        Hybrid Cloud Stack… vCloud Director / vShield Security / vCenter Management / vSphere
    29. Bring Cloud Architecture to Existing Datacenters
        (Diagram: organisations such as Marketing and Finance each get virtual datacenters, catalogs, and users & policies on top of pooled compute, storage and network.)
        • Leverage virtualization to transform physical silos into elastic, virtual capacity
        • Increase automation through built-in intelligent policy management
    30. Move from static, physical security to dynamic, embedded security
    31. Enable secure, self-service access to pre-defined IT services, with pay-for-use
    32. In 2011 VMware introduced a major upgrade of the entire Cloud Infrastructure Stack (Cloud Infrastructure Launch): vSphere 5.0, vCenter SRM 5.0, vShield 5.0, vCloud Director 1.5
    33. New Virtual Machine Capabilities
    34. vSphere 5.0 – Scaling Virtual Machines
        Overview
        • Create virtual machines with up to 32 vCPU and 1 TB of RAM
    35. 4x size of previous vSphere versions
    36. Run even the largest applications in vSphere, including very large databases
    37. Virtualize even more applications than ever before (tier 1 and 2)
        Benefits: 4x
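As a rough sketch of what those new maximums look like from PowerCLI; the vCenter address and VM name are placeholders, the VM is assumed to already be on virtual hardware version 8, and the guest needs VMware Tools for the graceful shutdown:

```powershell
# Hedged sketch: resizing an existing VM to the vSphere 5.0 maximums of
# 32 vCPU and 1 TB of RAM. The VM must be powered off and on virtual
# hardware version 8 for these values to be accepted.
Connect-VIServer -Server vcenter.example.com

$vm = Get-VM -Name "BigDB01"
if ($vm.PowerState -eq "PoweredOn") {
    Shutdown-VMGuest -VM $vm -Confirm:$false          # graceful guest shutdown
    while ((Get-VM -Name "BigDB01").PowerState -ne "PoweredOff") { Start-Sleep -Seconds 5 }
}

# 1 TB expressed in MB; MemoryMB is used because it exists in PowerCLI 5.0
Set-VM -VM $vm -NumCpu 32 -MemoryMB (1024 * 1024) -Confirm:$false
Start-VM -VM $vm
```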
    38. vSphere 5.0 – Web Client
        Overview
        • Run and manage vSphere from any web browser anywhere in the world
    39. Platform independence
    40. Replaces the Web Access GUI
    41. Building block for cloud-based administration
        Benefits
    42. Demo
        New hardware
        Hot add: CPU, memory
        Resources – guest memory lock
        VMware hardware status monitor
        Web Client
        A Linux or Mac client can now manage vCenter
        Resume tasks
        Advanced search – history of VMs
        Customise view
        iPad client
    43. Storage vMotion – Introduction
        In vSphere 5.0, a number of new enhancements were made to Storage vMotion.
        Storage vMotion will now work with virtual machines that have snapshots, which means coexistence with other VMware products & features such as VCB, VDR & HBR.
        Storage vMotion will support the relocation of linked clones.
        Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for storage maintenance mode & storage load balancing (space or performance).
    44. Storage vMotion Architecture Enhancements
        (Diagram: the guest OS and VMM on top; the mirror driver and datamover in the VMkernel/userworld mirror writes from the source disk to the destination disk.)
    45. Profile-Driven Storage & Storage DRS
        Overview
        • Tier storage based on performance characteristics (i.e. datastore cluster)
    46. Simplify initial storage placement
    47. Load balance based on I/O
        (Diagram: Tier 1 – high I/O throughput, Tier 2, Tier 3.)
        Benefits
        • Eliminate VM downtime for storage maintenance
    48. Reduce time for storage planning/configuration
    49. Reduce errors in the selection and management of VM storage
    50. Increase storage utilization by optimizing placement
        Storage DRS Operations – Initial Placement
        Initial placement – VM/VMDK create/clone/relocate.
        When creating a VM you select a datastore cluster rather than an individual datastore and let SDRS choose the appropriate datastore.
        SDRS will select a datastore based on space utilization and I/O load.
        By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK affinity rule), but you can choose to have VMDKs placed on different datastores.
        (Diagram: a 2 TB datastore cluster of four 500 GB datastores with 300 GB, 260 GB, 265 GB and 275 GB available.)
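A minimal PowerCLI sketch of that initial-placement flow, assuming a PowerCLI release that understands datastore clusters (5.1 or later) and using placeholder names throughout:

```powershell
# Hedged sketch: letting Storage DRS choose the datastore by targeting the
# datastore cluster rather than an individual datastore. "Tier2-Pod",
# "Production" and "web01" are placeholders.
Connect-VIServer -Server vcenter.example.com

$pod = Get-DatastoreCluster -Name "Tier2-Pod"
$rp  = Get-ResourcePool -Name "Production"

# SDRS weighs space utilisation and I/O load across the pod and places the new
# VM's disks on the least loaded datastore (VMDK affinity applies by default).
New-VM -Name "web01" -ResourcePool $rp -Datastore $pod `
       -NumCpu 2 -MemoryMB 4096 -DiskGB 40
```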
    51. Storage DRS Operations: Load Balancing
        Load balancing: SDRS triggers on space usage & latency thresholds.
        • The algorithm makes migration recommendations when I/O response time and/or space utilization thresholds have been exceeded.
        Space utilization statistics are constantly gathered by vCenter; default threshold 80%.
        I/O load trend is currently evaluated every 8 hours based on the past day's history; default threshold 15 ms.
        Load balancing is based on I/O workload and space, which ensures that no datastore exceeds the configured thresholds.
        Storage DRS will do a cost/benefit analysis!
        For I/O load balancing, Storage DRS leverages Storage I/O Control functionality.
    52. Storage DRS Operations – affinity rules (VMDK affinity, VMDK anti-affinity, VM anti-affinity)
        VMDK affinity:
        • Keep a virtual machine's VMDKs together on the same datastore
    53. Maximize VM availability when all disks are needed in order to run
    54. On by default for all VMs
        VMDK anti-affinity:
    55. Keep a VM's VMDKs on different datastores
    56. Useful for separating log and data disks of database VMs
    57. Can select all or a subset of a VM's disks
        VM anti-affinity:
    58. Keep VMs on different datastores
    59. Similar to DRS anti-affinity rules
    60. Maximize availability of a set of redundant VMs
        Save OPEX by Reducing Repetitive Planning & Effort!
        Today: identify requirements, find the optimal datastore, create the VM, periodically check compliance.
        Storage DRS: initial setup (identify storage characteristics, group datastores), then identify requirements, create the VM, periodically check compliance.
        Storage DRS + Profile Driven Storage: initial setup (discover storage characteristics, group datastores), then select a VM storage profile and create the VM.
    61. Storage Capabilities & VM Storage Profiles
        Storage capabilities are surfaced by VASA or user-defined; a VM storage profile references storage capabilities and is associated with a VM, which is then either compliant or not compliant.
    62. VM Storage Profile Compliance
        Policy compliance is visible from the virtual machine's Summary tab.
    63. Demo
        Storage-driven profiles: show a datastore's storage profile, assign a storage profile to a VM, profile compliance, create a new VM and place it on a storage cluster (it will then be placed depending on load).
        Storage DRS: storage load balancing, storage anti-affinity, Storage I/O Control.
    64. vSphere 5.0 – Auto Deploy
        Overview
        Deploy and patch vSphere hosts in minutes using a new "on the fly" model, with vCenter Server providing Auto Deploy.
        • Coordination with vSphere Host Profiles and Image Profiles
        Benefits
        • Rapid provisioning: initial deployment and patching of hosts
    65. Centralized host and image management
    66. Reduce manual deployment and patch processes
    67. ESXi Image Deployment
        Challenges
        The standard ESXi image from the VMware download site is sometimes limited:
        • Doesn't have all drivers or CIM providers for specific hardware
        • Doesn't contain vendor-specific plug-in components
        (The standard ESXi ISO contains only base providers and base drivers, so a driver or CIM provider may be missing.)
    68. Auto Deploy – Building an Image
        (Diagram: on a Windows host with PowerCLI and the Image Builder snap-in, Image Builder combines ESXi VIBs, driver VIBs and OEM VIBs from depots into an image profile, which can be exported as an ISO image or a PXE-bootable image.)
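A hedged PowerCLI sketch of that Image Builder workflow; the offline depot path, profile names and the driver VIB name are placeholders for whatever your hardware vendor supplies:

```powershell
# Hedged sketch: cloning a stock ESXi 5.0 image profile, adding a vendor VIB and
# exporting the result as an ISO and as an offline bundle for Auto Deploy.
Add-EsxSoftwareDepot "https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"
Add-EsxSoftwareDepot "C:\depots\vendor-driver-offline-bundle.zip"

$base = Get-EsxImageProfile | Where-Object { $_.Name -like "ESXi-5.0.0-*-standard" } | Select-Object -First 1
New-EsxImageProfile -CloneProfile $base -Name "ESXi50-Custom" -Vendor "BlueChip"
Add-EsxSoftwarePackage -ImageProfile "ESXi50-Custom" -SoftwarePackage "net-vendor-driver"

# Export either as an installable ISO or as a bundle that Auto Deploy can serve
Export-EsxImageProfile -ImageProfile "ESXi50-Custom" -ExportToIso    -FilePath "C:\images\ESXi50-Custom.iso"
Export-EsxImageProfile -ImageProfile "ESXi50-Custom" -ExportToBundle -FilePath "C:\images\ESXi50-Custom.zip"
```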
    69. More Auto Deploy
        New host deployment method introduced in vSphere 5.0:
        • Based on PXE boot
        • Works with Image Builder, vCenter Server, and Host Profiles
        How it works:
        • PXE boot the server
        • ESXi image profile loaded into host memory via the Auto Deploy server
        • Configuration applied using an answer file / host profile
        • Host placed/connected in vCenter
        Benefits:
        • No boot disk
        • Quickly and easily deploy large numbers of ESXi hosts
        • Share a standard ESXi image across many hosts
        • Host image decoupled from the physical server
        • Recover a host without recovering hardware or having to restore from backup
    70. Host Profiles Enhancements
        New features enable greater flexibility and automation.
        Using an answer file, administrators can configure host-specific settings to be used in conjunction with the common settings in the host profile, avoiding the need to type in any host-specific parameters.
        This enables the use of Host Profiles to fully configure a host during an automated deployment.
        Host Profiles now support a greatly expanded set of configurations, including:
        • iSCSI
        • FCoE
        • Native multipathing
        • Device claiming and PSP device settings
        • Kernel module settings
        • And more
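A short PowerCLI sketch of the host-profile side of this, with host and profile names as placeholders; host-specific values (the answer file in the UI) are supplied when the profile is actually applied:

```powershell
# Hedged sketch: capture a reference host's configuration as a host profile,
# attach it to a newly deployed host and check compliance. Names are placeholders.
Connect-VIServer -Server vcenter.example.com

$reference   = Get-VMHost -Name "esx01.example.com"
$hostProfile = New-VMHostProfile -Name "GoldHostProfile" -ReferenceHost $reference

$newHost = Get-VMHost -Name "esx02.example.com"
Apply-VMHostProfile -Entity $newHost -Profile $hostProfile -AssociateOnly -Confirm:$false
Test-VMHostProfileCompliance -VMHost $newHost
```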
    71. vSphere 5.0 New HA Architecture
        Overview
        • New architecture for the High Availability feature of vSphere
    72. Simplified clustering setup and configuration
    73. Enhanced reliability through better resource guarantees and monitoring
    74. Enhanced scalability
        (Diagram labels: component / server / storage layers; NIC teaming, multipathing; VMware Fault Tolerance, High Availability, DRS maintenance mode, vMotion; Storage vMotion, VMFS.)
    75. What's New in vSphere 5 High Availability?
        Complete rewrite of vSphere HA:
        • Provides a foundation for increased scale and functionality
        • Eliminates common issues (DNS resolution)
        Multiple communication paths:
        • Can leverage storage as well as the management network for communications
        • Enhances the ability to detect certain types of failures and provides redundancy
        IPv6 support
        Enhanced error reporting:
        • One log file per host eases troubleshooting efforts
        Enhanced user interface
    76. vSphere HA Primary Components
        Every host runs an agent, referred to as 'FDM' or Fault Domain Manager.
        One of the agents within the cluster is chosen to assume the role of the master; there is only one master per cluster during normal operations.
        All other agents assume the role of slaves.
        There is no more primary/secondary concept with vSphere HA.
    77. Storage-Level Communications
        One of the most exciting new features of vSphere HA is its ability to use the storage subsystem for communication.
        The datastores used for this are referred to as 'heartbeat datastores'.
        This provides increased communication redundancy.
        Heartbeat datastores are used as a communication channel only when the management network is lost, such as in the case of isolation or network partitioning.
    78. Demo
        Host Profiles, vMotion, HA, FT, DRS, resource pools
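The demo items above map to a handful of cluster-level switches; a hedged PowerCLI sketch with placeholder datacenter, cluster, host and credential values:

```powershell
# Hedged sketch: creating a cluster with HA and DRS enabled and adding a host.
# Adding the host is what pushes the FDM agent described two slides earlier.
Connect-VIServer -Server vcenter.example.com

New-Cluster -Name "Prod-Cluster" -Location (Get-Datacenter -Name "London") `
            -HAEnabled -DrsEnabled -DrsAutomationLevel FullyAutomated

Add-VMHost -Name "esx01.example.com" -Location (Get-Cluster -Name "Prod-Cluster") `
           -User root -Password "placeholder" -Force
```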
    79. vSphere 5.0 – vCenter Server Appliance (Linux)
        Overview
        • Run vCenter Server as a Linux-based appliance
    80. Simplified setup and configuration
    81. Enables deployment choices according to business needs or requirements
    82. Leverages vSphere availability features for protection of the management layer
        Benefits
    83. vSphere 5.0: The Best of the Rest
        • Platform: Hardware Version 8 – EFI virtual BIOS
        • Network: Distributed Switch (NetFlow, SPAN support, LLDP); Network I/O Control (per VM); ESXi firewall
        • Storage: VMFS 5; iSCSI UI; Storage I/O Control (NFS); array integration for thin provisioning; swap to SSD; 2 TB+ VMFS datastores; Storage vMotion snapshot support
        • Availability:
    84. vMotion with higher-latency links
        • Management:
    85. (Management)
    86. Inventory extensibility
    87. Solution installation and management
    88. iPad client
        SRM v5
        Traditional DR Coverage Often Limited To Tier 1 Apps
        Tier 1 apps – protected.
        Need to expand DR protection:
        • Tier 2/3 applications in larger datacenters
    89. Small and medium businesses
    90. Remote office / branch offices
        (Diagram: Tier 2/3 apps in the corporate datacenter, and apps at small business and remote/branch office sites, shown as not protected.)
    91. SRM Provides Broad Choice of Replication Options
        (Diagram: two sites, each with vCenter Server and Site Recovery Manager, replicating VMs by either vSphere Replication or storage-based replication.)
        • vSphere Replication: simple, cost-efficient replication for Tier 2 applications and smaller sites
        • Storage-based replication: high-performance replication for business-critical applications in larger sites
    92. Planned Migrations for App Consistency & No Data Loss
        Overview
        Two workflows can be applied to recovery plans:
        • DR failover
    93. Planned migration
        Planned migration ensures application consistency and no data loss during migration:
        • Graceful shutdown of production VMs in an application-consistent state
    94. Data sync to complete replication of VMs
    95. Recover fully replicated VMs
        (Diagram: 1 – shut down production VMs at Site A; 2 – sync data, stop replication and present LUNs to vSphere; 3 – recover app-consistent VMs at Site B.)
        Benefits
        Better support for planned migrations:
        • No loss of data during the migration process
    96. Recover 'application-consistent' VMs at the recovery site
        Automated Failback to Streamline Bi-Directional Migrations
        Overview
        Automated failback: re-protect VMs from Site B to Site A
        • Reverse replication
    97. Apply reverse resource mapping
        Automate failover from Site B to Site A
        • Reverse original recovery plan
        Restrictions
        • Does not apply if Site A has undergone major changes / been rebuilt
    98. Not available with vSphere Replication
        Benefits
        Simplify the failback process:
        • Automate replication management
    99. Eliminate the need to set up a new recovery plan
        Streamline frequent bi-directional migrations
    100. Demo
        SRM demo
    101. vSphere 5 Licensing and Pricing
        Overview
    102. vSphere 5 Licensing – Evolution Without Disruption
    103. What is vRAM?
        • vRAM is the memory configured to a virtual machine
    104. Assigning a certain amount of vRAM is a required step in the creation of a virtual machine
        Key vRAM Concepts
        1. Each vSphere 5 processor license comes with a certain amount of vRAM entitlement
        2. Pooled vRAM entitlement = sum of all processor license entitlements
        3. Consumed vRAM = sum of vRAM configured into all powered-on VMs
        4. Compliance = 12-month rolling average of consumed vRAM < pooled vRAM entitlement
    105. Key Concepts – Example
        Each vSphere Enterprise Edition license entitles you to 64 GB of vRAM.
        4 licenses of vSphere Enterprise Edition provide a vRAM pool of 256 GB (4 * 64 GB).
        The customer creates 20 VMs with 4 GB of vRAM each, so consumed vRAM = 80 GB.
        (Diagram: two dual-CPU hosts, Host A and Host B, each licensed with vSphere Enterprise per CPU, contributing 4 x 64 GB to the vRAM pool.)
        Compliance = 12-month rolling average of consumed vRAM < pooled vRAM entitlement.
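The arithmetic on this slide is easy to reproduce; a hedged PowerCLI sketch, where the licence count and the 64 GB per-licence entitlement are assumptions for the Enterprise edition, and vCenter's own licensing reports remain the authoritative source:

```powershell
# Hedged sketch: comparing consumed vRAM (configured memory of powered-on VMs)
# against a pooled entitlement of licences x 64 GB, as in the example above.
Connect-VIServer -Server vcenter.example.com

$licences      = 4
$entitlementGB = 64                      # vRAM per vSphere Enterprise licence
$poolGB        = $licences * $entitlementGB

$consumedGB = (Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
               Measure-Object -Property MemoryMB -Sum).Sum / 1024

"Pooled vRAM: $poolGB GB; consumed vRAM: $consumedGB GB; within entitlement: $($consumedGB -lt $poolGB)"
```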
    106. vSphere 5.0 More Detail
    107. Demo
        vRAM tool demo
    108. VNX Overview
        Next-generation storage
    109. Next-Generation Unified Storage
        Optimised for today's virtualised IT, managed by EMC Unisphere.
        Models: VNXe3100, VNXe3300, VNX5100, VNX5300, VNX5500, VNX5700, VNX7500.
        Affordable. Simple. Efficient. Powerful.
    110. VNXe Series Models
        Simple. Efficient. Affordable.
    111. VNX Series Hardware
        Simple. Efficient. Powerful.
    112. EMC: The VMware Choice
        2 out of 3 CIOs pick EMC for their VMware environments.
        "Which vendor(s) supplied the networked (SAN or NAS) storage used for your virtual server environment?"
        "Which is your storage vendor of choice in a virtual server environment?"
        • Trusted storage platform for the most critical and demanding VMware environments
        • Advanced integration and functionality that maximizes the value of a virtualized data center
        • Flexibility to meet infrastructure, business and technical needs
        • Knowledge, experience, and partnerships to make your virtual data center a reality
        "EMC remains the clear storage leader in virtualized environments."
    113. 3x Better Performance
        More users, more transactions, better response time: the VNX platform with FAST Cache and FAST VP is 3x faster than the CX/NS platforms.
    114. Virtualisation Management
        EMC Virtual Storage Integrator plug-in: an integrated point of control to simplify and speed VMware storage management tasks.
        • One unified storage tool for all Symmetrix, CLARiiON, Celerra, VNX series, and VNXe series
    115. The FAST Suite
        Highest performance & capacity efficiency… automatically!
        • FAST Cache continuously ensures that the hottest data is served from high-performance Flash SSDs (real-time caching)
    116. FAST VP, supporting both file and block, optimizes storage pools automatically, ensuring only active data is being served from SSDs while cold data is moved to lower-cost disk tiers (scheduled optimization across Flash SSD, high-performance HDD and high-capacity HDD)
    117. Together they deliver a fully automated FLASH 1st storage strategy for optimal performance at the lowest cost attainable
    118. FAST Cache Approach
        (Workloads shown: Exchange, SharePoint, Oracle, database, file, VMware, SAP. Components: DRAM, FAST Cache, policy engine, driver, map, disk drives.)
        • Page requests are satisfied from DRAM if available
    119. If not, the FAST Cache driver checks the map to determine where the page is located
    120. The page request is satisfied from a disk drive if it is not in FAST Cache
    121. The policy engine promotes a page to FAST Cache if it is being used frequently
    122. Subsequent requests for this page are satisfied from FAST Cache
    123. Dirty pages are copied back to disk drives as background activity
    124. FAST VP for Block & File Access
        Optimise VNX for minimum TCO.
        (Before/after diagram: LUNs in a pool are re-tiered across Tier 0, Tier 1 and Tier 2 according to most, neutral and least activity.)
        • Automates movement of hot or cold blocks
        • Optimizes use of high-performance and high-capacity drives
        • Improves cost and performance
    125. VNX Thin Provisioning – capacity on demand
        (Diagram: users A, B and C each see a logical 10 GB, while the physical allocation consumed is 4 GB, 2 GB and 2 GB.)
        • Only allocate the actual capacity required by the application
        • Capacity oversubscription allows intelligent use of resources: file systems, FC and iSCSI LUNs, logical size greater than physical size
        • VNX Thin Provisioning safeguards to avoid running out of space: monitoring and alerting, automatic and dynamic extension past logical size, automatic NAS file system extension, FC and iSCSI dynamic LUN extension
    126. VNX Virtual Provisioning
        Thick pool LUN:
        • Full capacity allocation
        • Near RAID-group LUN performance
        • Capacity reserved at LUN creation
        • 1 GB chunks allocated as the relative block address is written
        Thin pool LUN:
        • Only allocates capacity as data is written by the host
        • Capacity allocated in 1 GB chunks
        • 8 KB blocks contiguously written within 1 GB
        • 8 KB mapping incurs some performance overhead
    127. VNX Series Software
        Software solutions made simple: attractively priced packs and suites.
        Suites: FAST Suite, Security and Compliance Suite, Local Protection Suite, Remote Protection Suite, Application Protection Suite.
        Bundles: Total Efficiency Pack and Total Protection Pack.
    128. VNX: Faster than the Rest
        Highest number of transactions and lowest response time.
        (Chart of SPECsfs2008 NFSv3 results: response time in ms (lower is better) against transactions (higher is better), showing VNX 3x faster than IBM, with HP and NetApp also behind.)
    129. VNX Series for Virtual Desktop
        4x the number of virtual desktop users with VNX series, FAST VP & FAST Cache at sustained performance.
        Up to 70% reduction in storage cost for the same I/O performance.
        Boot storm:
        • 3x faster: boot & settle 500 desktops in 8 min vs. 27 min
        • FAST Cache absorbs the majority of the boot workload (i.e. I/O to spinning drives)
        Desktop refresh:
        • Refresh 500 desktops in 50 min vs. 130 min
        • FAST Cache serviced the majority of the I/O during refresh and prevents linked clones from overloading
        (Configurations compared: Celerra NS with 183x 300 GB 15K FC disks vs. VNX series with 5x 100 GB SSD, 21x 300 GB 15K SAS and 15x 2 TB NL-SAS.)
    130. VNX Demo
        Unisphere console: dashboard, customised view.
        System: disks, system properties, FAST Cache.
        Storage: pools, LUNs, compression (compression on a LUN), thin provisioning, auto-tiering.
        Hosts / storage groups / virtualisation.
        Analyser – monitoring and alerting.
        USM.
    131. Questions and Answers
    132. vSphere 5 Training Offers
        Take advantage of any of the below VMware course offers, which are taking place at our Southampton Training Centre, and receive a FREE place on Deploying & Managing Microsoft System Center Virtual Machine Manager, worth £895.
        VMware vSphere: Troubleshooting
        • Duration: 4 days
        • Cost: £2,075.00 + VAT per delegate
        • Dates: 03-06 October
        • Offer: book 1 space and save 20%, or book 2 spaces and save 30%
        VMware vSphere: Install, Configure & Manage
        • Duration: 5 days
        • Cost: £2,595.00 + VAT per delegate
        • Dates: 10-14 October (v4.1), 17-21 October (v5) & 12-16 December (v5)
        • Offer: book 1 space and save 15%, or book 2 spaces and save 25%
        • Exam: includes a free exam voucher
        VMware vSphere: Skills for Operators
        • Duration: 2 days
        • Cost: £1,095.00 + VAT per delegate
        • Dates: 29-30 September & 07-08 November
        • Offer: buy 2 spaces, get 1 free
    133. For further information on vSphere 5, or to book a one-to-one consultation, please contact your account manager or email ict@bluechip.uk.com
    134. Blue Chip
        Change is the only constant in business... ...evolution is the key to survival
        www.bluechip.uk.com