vSphere 5 Roadshow

  • Every one of our customers has existing applications, running in existing datacenters, that represent significant investments and ongoing value. The first thing we are doing with these customers is helping them stand up a Private Cloud, to get the most efficiency and agility out of their existing assets. And this can be done in a pragmatic, evolutionary way. We have over 250,000 customers worldwide that are already on this path, because they are leveraging vSphere to virtualize the entire fabric of the datacenter, including CPU and memory, storage, and networking. And because they are using vSphere, they get built-in high availability and automated, dynamic resource scheduling to give them the cloud attributes of elastic, pooled capacity. <click> With virtualization in place, the independent silos are broken down, enabling us to automate many of the mundane, repetitive administration tasks with our vCenter management suite, further decreasing opex in the datacenter.
  • With vSphere 5.0, multiple enhancements have been introduced to increase the efficiency of the Storage vMotion process, to improve overall performance, and to enhance supportability. Storage vMotion in vSphere 5.0 now also supports the migration of virtual machines with a vSphere snapshot and the migration of linked clones.
  • The Storage vMotion control logic runs in the VMX. The Storage vMotion thread first creates the destination disk. After that, a stun/unstun of the VM allows the SVM mirror driver to be installed. I/Os to the source are then mirrored to the destination. The new driver leverages the Data Mover to implement a single-pass block copy of the source to the destination disk, and in addition it mirrors I/O between the two disks. This is a synchronous write, meaning that the mirror driver acknowledges a write to the guest OS only when it has received the acknowledgement from both the source and the destination.
  • Accelerate the VM storage placement decision to a storage pod by capturing the VM's storage SLA requirements and mapping them to storage with the right characteristics and spare space.
  • Storage DRS provides initial placement recommendations to datastores in a Storage DRS-enabled datastore cluster based on I/O and space capacity. During the provisioning of a virtual machine, a datastore cluster can be selected as the target destination for the virtual machine or virtual machine disk, after which a recommendation for initial placement is made based on I/O and space capacity. Initial placement in a manual provisioning process has proven to be very complex in most environments, and as a result important provisioning factors such as current I/O load or space utilization are often ignored. Storage DRS ensures initial placement recommendations are made in accordance with space constraints and with respect to the goals of space and I/O load balancing. Although people are really excited about automated load balancing, it is initial placement that most people will start with and benefit from the most, as it reduces the operational overhead associated with provisioning virtual machines.
  • Ongoing balancing recommendations are made when one or more datastores in a datastore cluster exceed the user-configurable space utilization or I/O latency thresholds. These thresholds are typically defined during the configuration of the datastore cluster. Storage DRS utilizes vCenter Server’s datastore utilization reporting mechanism to make recommendations whenever the configured utilized space threshold is exceeded. I/O load is evaluated by default every 8 hours, currently with a default latency threshold of 15 ms. Only when this I/O latency threshold is exceeded will Storage DRS calculate all possible moves to balance the load accordingly, while considering the cost and the benefit of each migration. If the benefit is not expected to last for at least 24 hours, Storage DRS will not make the recommendation.
  • Today: we identify the requirements of the virtual machine, try to find the optimal datastore based on those requirements, and create the virtual machine or disk. In some cases customers even periodically check whether VMs are still compliant, but in many cases this is neglected. Storage DRS: Storage DRS only partly solves that problem. We still need to identify the correct datastore cluster manually, and even when grouping datastores into a cluster we need to verify manually that all LUNs are "alike"; the periodic compliance check also remains manual. Storage DRS and Profile-Driven Storage: when Profile-Driven Storage and Storage DRS are used in conjunction, these problems are solved. Datastore clusters can be created based on the characteristics provided through VASA or on custom tags, and when deploying a virtual machine a storage profile can be selected, ensuring that the virtual machine will be compliant.
  • Step 1: the diagram we just showed gave a total overview, but most customers are concerned about just one thing: compliance. So how does this work? As mentioned, capabilities are surfaced through VASA. Step 2: these capabilities are linked to a specific VM Storage Profile. Step 3: a new virtual machine is created, or an existing virtual machine is tagged. Step 4: the result is either compliant or not compliant; it is as simple as that.
  • Auto Deploy is a new method for provisioning ESXi hosts in vSphere 5.0. At a high level, the ESXi host boots over the network (using PXE/gPXE) and contacts the Auto Deploy Server, which loads ESXi into the host's memory. After loading the ESXi image, the Auto Deploy Server coordinates with vCenter Server to configure the host using Host Profiles and Answer Files (Answer Files are new in 5.0). Auto Deploy eliminates the need for a dedicated boot device, enables rapid deployment of many hosts, and simplifies ESXi host management by eliminating the need to maintain a separate "boot image" for each host.
  • The agent is ~50 KB in size. The FDM agent is not tied to vpxd at all.
  • Customers are getting hit by core and physical memory restrictions: "How will I license vSphere when my CPUs have more than 6 or 12 cores?" CPU-core and physical-memory entitlements are tied to a single server and cannot be shared among multiple servers, reducing flexibility and utilization. The rapid introduction of new hardware technologies requires constant amendments to the licensing model, creating uncertainty over planning: "What happens if I use SSDs, hyperthreading, etc.?" Hardware-based entitlements make it difficult for customers to transition to the usage-based cost and chargeback models that characterize cloud computing and IT as a Service.
  • The FAST Suite improves performance and maximizes storage efficiency by deploying a FLASH 1st strategy. FAST Cache, an extendable cache of up to 2 TB, gives a real-time performance boost by ensuring the hottest data is served from the highest-performing Flash drives for as long as needed. FAST VP then complements FAST Cache by optimizing storage pools on a regular, scheduled basis. You define how and when data is tiered using policies that dynamically move the most active data to high-performance drives (e.g., Flash) and less active data to high-capacity drives, all in one-gigabyte increments for both block and file data. Together, they automatically optimize for the highest system performance and the lowest storage cost simultaneously.
  • The slide above shows how FAST Cache works. FAST Cache is based on the locality of reference of the dataset requested by a host. Systems with high locality of reference confine the majority of I/Os to a relatively small capacity, whereas systems with low locality of reference spread I/Os more evenly across the total capacity – this is also sometimes referred to as skew. A dataset with high locality of reference/skew (blocks close to one another tending to be accessed together) is a good candidate to be copied to FAST Cache. By promoting this dataset to FAST Cache, any subsequent access to this data for read or write is serviced faster from Flash drives, which reduces the workload on back-end disk drives. A write operation works in similar fashion. Writes with high locality of reference are directed to Flash drives. When the time comes to flush this data to disk, the flushing operation is significantly faster, as writes are now at Flash-drive speeds. This can have a big impact on heavy-write workloads that require a large system cache to be flushed to the underlying disks more frequently. The FAST Cache map is maintained in the DRAM cache and consumes DRAM space, so care should be taken in choosing which pools and RAID-group LUNs it is enabled for. EMC TS resources have tools, available to our direct and channel champions community, to analyze existing environments for the best candidates. FAST Cache operates at a 64 KB granularity for increased efficiency. If a 64 KB block is referenced 3 times in a given period of time (the time will depend on the I/O activity of the system), the block will be promoted into FAST Cache. As the data ages and becomes less active, it will fall out of FAST Cache, to be replaced by a more active chunk of data.
  • The second feature provided in the FAST Suite, which is highly complementary to FAST Cache, is FAST for Virtual Pools (FAST VP). The combination of FAST Cache and FAST VP addresses the perennial storage management problem: the cost of optimizing the storage system. Prior to FAST and FAST Cache it was often too resource-intensive to perform manual optimization, and many customers simply overprovisioned storage to ensure the performance requirements of a data set were met. With the arrival of Flash drives and the FAST Suite, there is a better way to achieve this cost/performance balance. The classic approach to storage provisioning can be repetitive and time-consuming and often produces uncertain results: it is not always obvious how to match capacity to the performance requirements of a workload’s data, and even when a match is achieved, requirements change, so a storage system’s provisioning may require constant adjustment. Storage tiering is one solution. Storage tiering puts several different types of storage devices into an automatically managed storage pool; LUNs use the storage capacity they need from the pool, on the devices with the performance they need. Fully Automated Storage Tiering for Virtual Pools (FAST VP) is the EMC® VNX® feature that allows a single LUN to leverage the advantages of Flash, SAS, and Near-line SAS drives through the use of pools. FAST solves these issues by providing automated sub-LUN-level tiering. FAST collects I/O activity statistics at 1 GB granularity (known as a slice); the relative activity level of each slice is used to determine which slices should be promoted to higher tiers of storage. Relocation is initiated at the user’s discretion, through either manual initiation or an automated scheduler. Through the frequent relocation of 1 GB slices, FAST continuously adjusts to the dynamic nature of modern storage environments. This removes the need for manual, resource-intensive LUN migrations while still providing the performance levels required by the most active dataset, thereby optimizing for cost and performance simultaneously.
  • The Culham is managed by Unisphere, and the base software includes file deduplication and compression, block compression, virtual provisioning, and SAN Copy. Rather than ordering a number of "a la carte" products, we've simplified the optional software into five attractively priced suites. The FAST Suite improves performance and maximizes storage efficiency; it includes FAST VP, FAST Cache, Unisphere Analyzer, and Unisphere Quality of Service Manager. The Security and Compliance Suite helps ensure that data is protected from unwanted changes, deletions, and malicious activity; it includes the event enabler for anti-virus, quota management and auditing, file-level retention, and Host Encryption. The Local Protection Suite delivers any-point-in-time recovery with DVR-like roll-back capabilities, and copies of production data can also be used for development, testing, decision support and backup; this suite includes SnapView, SnapSure and RecoverPoint/SE CDP. The Remote Protection Suite delivers unified block and file replication, giving customers one way to protect everything better; it includes Replicator, MirrorView and RecoverPoint/SE CRR. The Application Protection Suite automates application-consistent copies and proves customers can recover to defined service levels; this suite includes Replication Manager and Data Protection Advisor for Replication. Finally, the Total Efficiency and Total Protection packs bundle the suites to further simplify ordering and lower costs.
  • The EMC VNX Series also had the lowest overall response time (ORT) of the systems tested, taking the top spot with a response time of 0.96 milliseconds. EMC's response time is 3 times faster than the IBM offering in second place. Faster response times enable end users to access information more quickly and efficiently. Chris Mellor, in The Register blog entry "EMC kills SPEC benchmark with all-flash VNX" (http://www.theregister.co.uk/2011/02/23/enc_vnx_secsfs2008_benchmark/), writes about IBM, HP and NetApp: "For all three companies, any ideas they previously had of having top-level SPECsfs2008 results using disk drives have been blown out of the water by this EMC result. It is a watershed benchmark moment. ®"
  • Transcript

    • 1. Blue Chip
      Change is the only constant in business... ...evolution is the key to survival
      www.bluechip.uk.com
    • 2. Who is Blue Chip...?
      Established in 1992, Blue Chip is one of the UK's leading providers of business
      IT infrastructure solutions. We provide consultancy, design, procurement, implementation,
      support and maintenance, training and outsourcing services to organisations across the UK.
      As your solutions partner, Blue Chip will ensure that your organisation keeps pace with the ever
      changing demands of technology lifecycle management. The result is that your organisation can
      evolve into an innovative business that is fully enabled by technology.
      Key Facts and Figures
      • Locations in Poole, Bedford, Southampton and Leeds
      • 3. The South's Largest VUE and Prometric Training Centre with capacity for 100+ delegates a week.
      • 4. £3 million worth of dedicated facilities – training, hosting and offices
      • 5. 160+ staff, 75% of whom are technical resources
      • 6. 1000+ clients varying in size between 5 and 5000 users
      • 7. CRN and CNA Award winners
      • 8. Supporting in excess of 80,000 PCs
    • Our Key Areas...Technology, Services and Training
      • Virtualisation – VMware vSphere, VMware View, Hyper-V, Citrix XenDesktop, Citrix XenApp, App-V, RDS
      • 9. Microsoft – core infrastructure services, Active Directory, Exchange, SharePoint, SQL, System Center
      • 10. Unified Communications – Cisco UC Manager and Microsoft Office Communications Server / Lync
      • 11. Technical Training – Microsoft, Cisco, VMware, Mac OS X, UNIX, Linux, Citrix, ITIL, PRINCE2
      • 12. Mobility and Wireless – Cisco, Microsoft
      • 13. Resourcing & Outsourcing - Fully managed services, TUPE agreements, contract, project management
      • 14. Service Desk and Support - 24/7 Service Desk, SLAs, system monitoring, warranty management
      • 15. Proactive Maintenance – Scheduled administration and system monitoring
      • 16. Storage and Data Archiving – EMC, Symantec Enterprise Vault
      • 17. High Availability Solutions – VMware, Microsoft, HP, EMC
      • 18. Security & Unified Threat Management – Fortinet and Cisco
      • 19. Business Management Applications – Microsoft Dynamics NAV, Microsoft CRM, SharePoint
      • 20. Cloud Services – IaaS Platform, Offsite Backup & DR
    • Our clients…
      Industry sectors include – Education, Finance, Medical/Healthcare, Logistics and Transport, Manufacturing, Construction and Housing, Professional Services, Legal, Not for Profit and Public Sector.
    • 21. Our partners…
      Blue Chip recognises that to deliver the best, we must work with the best! Through carefully selected and managed alliances, Blue Chip holds strategic partnerships with the world's best of breed manufacturers.
    • 22. Cloud Infrastructure Launch – What’s New?
      Clive Wenman
      Systems Engineer - VMware
    • 23. vSphere
      vSphere
      vSphere
      Virtualisation is the Foundation for Cloud
      “Virtualization is a modernization catalyst and unlocks cloud computing.” ―Gartner, May 2010
    • 24. Virtualising... Business Critical Apps
      vSphere
      vSphere
      vSphere
      • The Niche Apps (LOB apps, Tier 2 DB, etc.)
      >60% Virtualized
      Accelerate
      App Lifecycle
      Improve App Quality of Service
      Improve App Efficiency
      30% Virtualized
      • The Easy Apps (infrastructure, file, print)
    • vSphere
      vSphere
      vSphere
      Hybrid Cloud Stack…
      vCloud Director
      vShield Security
      vCenter Management
    • 29. Bring Cloud Architecture to Existing Datacenters
      Organization: Marketing
      Organization: Finance
      • Leverage virtualization to transform physical silos into elastic, virtual capacity
      Virtual Datacenters
      Catalogs
      Virtual Datacenters
      Catalogs
      Users & Policies
      Users & Policies
      • Increase automation through built-in intelligent policy management
      • 30. Move from static, physical security to dynamic, embedded security
      • 31. Enable secure, self-service access to pre-defined IT services, with pay-for-use
      Compute
      Storage
      Network
    • 32. New
      vSphere
      vSphere
      vSphere
      In 2011 VMware introduced a major upgrade of the entire Cloud Infrastructure Stack
      Cloud Infrastructure Launch
      vCloud Director 1.5
      vCloud Director
      vShield 5.0
      vShield Security
      vCenter SRM 5.0
      vCenter Management
      vSphere 5.0
    • 33. New Virtual Machine Capabilities
    • 34. vSphere 5.0 – Scaling Virtual Machines
      Overview
      • Create virtual machines with up to 32 vCPU and 1 TB of RAM
      • 35. 4x size of previous vSphere versions
      • 36. Run even the largest applications in vSphere, including very large databases
      • 37. Virtualize even more applications than ever before (tier 1 and 2)
      Benefits
      4x
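
As a concrete illustration of the new per-VM maximums, the PowerCLI sketch below resizes an existing virtual machine to 32 vCPUs and 1 TB of RAM. It is a minimal example, not taken from the deck: the vCenter address and VM name are placeholders, and it assumes the PowerCLI 5.x parameter names (NumCpu, MemoryGB) and a VM that is powered off or has CPU/memory hot add enabled.

```powershell
# Minimal PowerCLI sketch (placeholder names): push an existing VM to the
# vSphere 5.0 per-VM maximums of 32 vCPUs and 1 TB (1024 GB) of RAM.
Connect-VIServer -Server vcenter.example.com

$vm = Get-VM -Name "bigdb01"          # hypothetical "monster VM"
Set-VM -VM $vm -NumCpu 32 -MemoryGB 1024 -Confirm:$false
```
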
    • 38. vSphere 5.0 – Web Client
      Overview
      • Run and manage vSphere from any web browser anywhere in the world
      • 39. Platform independence
      • 40. Replaces Web Access GUI
      • 41. Building block for cloud based administration
      Benefits
    • 42. Demo
      New Hardware
      Hot Add
      CPU
      Memory
      Resources – guest memory lock
      VMware Hardware status monitor
      Web Client
      Linux or Mac clients can now manage vCenter
      Resume tasks
      Advanced search – history of VMs
      Customise view
      iPad Client
    • 43. Storage vMotion – Introduction
      In vSphere 5.0, a number of new enhancements were made to Storage vMotion.
      Storage vMotion will now work with Virtual Machines that have snapshots, which means coexistence with other VMware products & features such as VCB, VDR & HBR.
      Storage vMotion will support the relocation of linked clones.
      Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for Storage Maintenance Mode & Storage Load Balancing (Space or Performance).
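
For reference, a Storage vMotion can be driven from PowerCLI with Move-VM: keeping the VM on its current host and changing only the target datastore relocates just its storage. The sketch below is illustrative (VM and datastore names are placeholders); per the slide, in vSphere 5.0 this also works for VMs that currently have snapshots.

```powershell
# Minimal PowerCLI sketch (placeholder names): relocate a running VM's disks
# to another datastore; leaving the host unchanged makes this a Storage vMotion.
$vm = Get-VM -Name "web01"
Move-VM -VM $vm -Datastore (Get-Datastore -Name "Tier2-DS01")
```
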
    • 44. Storage vMotion Architecture Enhancements
      Guest OS
      VMM/Guest
      Datamover
      Mirror Driver
      VMkernel
      Userworld
      Source
      Destination
    • 45. Profile-Driven Storage & Storage DRS
      Overview
      High IO throughput
      • Tier storage based on performance characteristics (i.e. datastore cluster)
      • 46. Simplify initial storage placement
      • 47. Load balance based on I/O
      Tier 1
      Tier 2
      Tier 3
      Benefits
      • Eliminate VM downtime for storage maintenance
      • 48. Reduce time for storage planning/configuration
      • 49. Reduce errors in the selection and mgmt of VM storage
      • 50. Increase storage utilization by optimizing placement
    • Storage DRS Operations – Initial Placement
      Initial Placement – VM/VMDK create/clone/relocate
      When creating a VM you select a datastore cluster rather than an individual datastore and let SDRS choose the appropriate datastore.
      SDRS will select a datastore based on space utilization and I/O load.
      By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK Affinity Rule), but you can choose to have VMDKs assigned to different datastore clusters.
      2TB
      datastore cluster
      500GB
      500GB
      500GB
      500GB
      datastores
      300GB available
      260GB available
      265GB available
      275GB available
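
A hedged PowerCLI sketch of initial placement: instead of picking one of the individual datastores, the new VM is pointed at the datastore cluster and Storage DRS chooses the member datastore. It assumes the PowerCLI 5.x Get-DatastoreCluster cmdlet and that New-VM's -Datastore parameter accepts a datastore cluster object; all names and sizes are placeholders.

```powershell
# Minimal sketch (placeholder names): let Storage DRS pick the datastore by
# targeting the datastore cluster rather than an individual datastore.
$pod = Get-DatastoreCluster -Name "Bronze-Pod"

New-VM -Name "app01" -VMHost (Get-VMHost -Name "esx01.example.com") `
       -Datastore $pod -DiskGB 40 -MemoryGB 4 -NumCpu 1
```
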
    • 51. Storage DRS Operations: Load Balancing
      Load balancing: SDRS triggers on space usage & latency threshold.
      • Algorithm makes migration recommendations when I/O response time and/or space utilization thresholds have been exceeded.
      Space utilization statistics are constantly gathered by vCenter, default threshold 80%.
      I/O load trend is currently evaluated every 8 hours based on a past day history, default threshold 15ms.
      Load Balancing is based on I/O workload and space which ensures that no datastore exceeds the configured thresholds.
      Storage DRS will do a cost / benefit analysis!
      For I/O load balancing Storage DRS leverages Storage I/O Control functionality.
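
Because I/O load balancing builds on Storage I/O Control, the datastores in the cluster need SIOC enabled. The sketch below shows one way to do that from PowerCLI; it assumes the Set-Datastore -StorageIOControlEnabled and -CongestionThresholdMillisecond parameters, the 15 ms value simply mirrors the default SDRS latency threshold quoted above, and the datastore name is a placeholder.

```powershell
# Minimal sketch (placeholder name): enable Storage I/O Control, which Storage
# DRS uses for its I/O latency measurements, and set the congestion threshold.
Get-Datastore -Name "Bronze-DS01" |
    Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 15
```
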
    • 52. Storage DRS Operations
      Datastore Cluster
      Datastore Cluster
      Datastore Cluster
      VMDK affinity
      VMDK anti-affinity
      VM anti-affinity
      • Keep a Virtual Machine’s VMDKs together on the same datastore
      • 53. Maximize VM availability when all disks are needed in order to run
      • 54. On by default for all VMs
      • 55. Keep a VM’s VMDKs on different datastores
      • 56. Useful for separating log and data disks of database VMs
      • 57. Can select all or a subset of a VM’s disks
      • 58. Keep VMs on different datastores
      • 59. Similar to DRS anti-affinity rules
      • 60. Maximize availability of a set of redundant VMs
    • Save OPEX by Reducing Repetitive Planning & Effort!
      Identify requirements
      Find optimal datastore
      Create VM
      Periodically check compliance
      Today
      Initial setup
      Identify storage characteristics
      Identify requirements
      Create VM
      Periodically check compliance
      Storage
      DRS
      Group datastores
      Initial setup
      Discover storage characteristics
      Storage DRS + Profile driven storage
      Select VM Storage profile
      Create VM
      Group datastores
    • 61. Storage Capabilities & VM Storage Profiles
      Compliant
      Not Compliant
      VM Storage Profile associated with VM
      VM Storage Profile referencing Storage Capabilities
      Storage Capabilities surfaced by VASA or user-defined
    • 62. VM Storage Profile Compliance
      Policy Compliance is visible from the Virtual Machine Summary tab.
    • 63. Demo
      Profile-Driven Storage
      Show Datastore storage profile
      Assign storage profile to a VM
      Profile compliance
      Create a new VM and place on storage cluster - will then place depending on load
      Storage DRS
      Storage Load balancing
      Storage Anti affinity
      Storage I/O
    • 64. vSphere 5.0 – Auto Deploy
      Overview
      vCenter Server with
      Auto Deploy
      Deploy and patch vSphere hosts in minutes using a new “on the fly” model
      • Coordination with vSphere Host Profiles
      Host Profiles
      Image Profiles
      Benefits
      • Rapid provisioning: initial deployment and patching of hosts
      • 65. Centralized host and image management
      • 66. Reduce manual deployment and patch processes
      vSphere
      vSphere
      vSphere
      vSphere
    • 67. ESXi Image Deployment
      Challenges
      Standard ESXi image from VMware download site is sometimes limited
      Doesn’t have all drivers or CIM providers for specific hardware
      Doesn’t contain vendor specific plug-in components
      Missing CIM provider
      ?
      Missing driver
      Standard ESXi ISO
      • Base providers
      • 68. Base drivers
    • Auto Deploy - Building an Image
      Depots
      Generate new image
      Image Profile
      Windows Host with PowerCLI and Image Builder Snap-in
      ESXi VIBs
      Image
      Builder
      Driver VIBs
      ISO Image
      PXE-bootable Image
      OEM VIBs
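
The flow in the diagram can be scripted with the Image Builder snap-in roughly as below. This is a hedged sketch rather than a recipe from the deck: depot paths, profile names, and the VIB name are placeholders, and the exact profile names inside a real offline bundle will differ.

```powershell
# Minimal Image Builder sketch (placeholder paths/names): combine the base
# ESXi depot with a vendor depot, clone the standard profile, add a driver
# VIB, then export the result for installation or Auto Deploy.
Add-EsxSoftwareDepot -DepotUrl "C:\depot\ESXi-5.0-offline-bundle.zip"
Add-EsxSoftwareDepot -DepotUrl "C:\depot\vendor-drivers.zip"

New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "ESXi-5.0-Custom" -Vendor "BlueChip"
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.0-Custom" -SoftwarePackage "vendor-driver"

# Export as an ISO for interactive installs, or as a bundle for Auto Deploy.
Export-EsxImageProfile -ImageProfile "ESXi-5.0-Custom" -ExportToIso    -FilePath "C:\images\ESXi-5.0-Custom.iso"
Export-EsxImageProfile -ImageProfile "ESXi-5.0-Custom" -ExportToBundle -FilePath "C:\images\ESXi-5.0-Custom.zip"
```
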
    • 69. More Auto Deploy
      New host deployment method introduced in vSphere 5.0:
      Based on PXE Boot
      Works with Image Builder, vCenter Server, and Host Profiles
      How it works:
      PXE boot the server
      ESXi image profile loaded into host memory via Auto Deploy Server
      Configuration applied using Answer File / Host Profile
      Host placed/connected in vCenter
      Benefits:
      No boot disk
      Quickly and easily deploy large numbers of ESXi hosts
      Share a standard ESXi image across many hosts
      Host image decoupled from the physical server
      Recover host w/out recovering hardware or having to restore from backup
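
Mapping hosts to that image is done with deploy rules. A minimal sketch, assuming the Auto Deploy PowerCLI cmdlets and using placeholder names: the rule ties hosts whose hardware matches a pattern to an image profile and a vCenter cluster, whose Host Profile then completes the configuration.

```powershell
# Minimal Auto Deploy rule sketch (placeholder names/pattern): hosts that
# PXE-boot and match the pattern get the custom image and join the cluster.
New-DeployRule -Name "ProdHosts" `
               -Item "ESXi-5.0-Custom", "Prod-Cluster" `
               -Pattern "vendor=Dell Inc."

Add-DeployRule -DeployRule "ProdHosts"   # activate the rule in the working rule set
```
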
    • 70. Host Profiles Enhancements
      New feature enables greater flexibility and automation
      Using an Answer File, administrators can configure host-specific settings to be used in conjunction with the common settings in the Host Profile, avoiding the need to type in any host-specific parameters.  
      This feature enables the use of Host Profiles to fully configure a host during an automated deployment.
      Host Profiles now has support for a greatly expanded set of configurations, including:
      iSCSI
      FCoE
      Native Multipathing
      Device Claiming and PSP Device Settings
      Kernel Module Settings
      And more
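
As a hedged illustration of the workflow (not taken from the deck), the PowerCLI sketch below captures a reference host's configuration, attaches the resulting profile to a cluster, and checks compliance; host, cluster, and profile names are placeholders.

```powershell
# Minimal Host Profiles sketch (placeholder names).
$ref = Get-VMHost -Name "esx01.example.com"
$hp  = New-VMHostProfile -Name "Prod-Profile" -ReferenceHost $ref

# Attach the profile to the cluster without applying it yet.
Apply-VMHostProfile -Entity (Get-Cluster -Name "Prod-Cluster") -Profile $hp -AssociateOnly

# Report which hosts currently deviate from the profile.
Get-Cluster -Name "Prod-Cluster" | Get-VMHost | Test-VMHostProfileCompliance
```
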
    • 71. VMFS
      VMFS
      vSphere 5.0 New HA Architecture
      Overview
      • New architecture for High Availability feature of vSphere
      • 72. Simplified clustering setup and configuration
      • 73. Enhanced reliability through better resource guarantees and monitoring
      • 74. Enhanced scalability
      Storage vMotion
      VMware Fault Tolerance, High Availability,DRS Maintenance Mode, vMotion
      Benefits
      NIC Teaming, Multipathing
      Component
      Server
      Storage
    • 75. What’s New in vSphere 5 High Availability?
      Complete re-write of vSphere HA:
      Provides a foundation for increased scale and functionality
      Eliminates common issues (DNS resolution)
      Multiple Communication Paths
      Can leverage storage as well as the management network for communications
      Enhances the ability to detect certain types of failures and provides redundancy
      IPv6 Support
      Enhanced Error Reporting
      One log file per host eases troubleshooting efforts
      Enhanced User Interface
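
From an administrator's point of view the rewritten HA is still switched on per cluster. Below is a minimal PowerCLI sketch (placeholder names and values, not from the deck) that creates a cluster with HA admission control and DRS enabled.

```powershell
# Minimal sketch (placeholder names): create a cluster with the rewritten
# vSphere HA (FDM) and DRS enabled; admission control reserves capacity for
# one host failure.
New-Cluster -Name "Prod-Cluster" -Location (Get-Datacenter -Name "London") `
            -HAEnabled -HAAdmissionControlEnabled -HAFailoverLevel 1 `
            -DrsEnabled -DrsAutomationLevel FullyAutomated
```
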
    • 76. vSphere HA Primary Components
      FDM
      FDM
      FDM
      FDM
      Every host runs an Agent
      Referred to as ‘FDM’ or Fault Domain Manager
      One of the agents within the cluster is chosen to assume the role of the Master
      There is only one Master per cluster during normal operations
      All other agents assume the role of Slaves
      There is no more Primary/Secondary concept with vSphere HA
      ESX 02
      ESX 01
      ESX 03
      ESX 04
      vCenter
    • 77. Storage-Level Communications
      FDM
      FDM
      FDM
      FDM
      One of the most exciting new features of vSphere HA is its ability to use a storage subsystem for communication.
      The datastores used for this are referred to as ‘Heartbeat Datastores’.
      This provides for increased communication redundancy.
      Heartbeat datastores are used as a communication channel only when the management network is lost - such as in the case of isolation or network partitioning.
      ESX 02
      ESX 01
      ESX 03
      ESX 04
      vCenter
    • 78. Demo
      Host Profiles
      vMotion
      HA
      FT
      DRS
      Resource Pools
    • 79. vSphere 5.0 – vCenter Server Appliance (Linux)
      Overview
      • Run vCenter Server as a Linux-based appliance
      • 80. Simplified setup and configuration
      • 81. Enables deployment choices according to business needs or requirements
      • 82. Leverages vSphere availability features for protection of the management layer
      Benefits
    • 83. vSphere 5.0: The Best of the Rest
      • Platform
      Hardware Version 8 - EFI virtual BIOS
      • Network
      Distributed Switch (Netflow, SPAN support, LLDP)
      Network I/O Controls (per VM), ESXi firewall
      • Storage
      VMFS 5
      iSCSI UI
      Storage I/O Control (NFS)
      Array Integration for Thin Provisioning,
      Swap to SSD, 2TB+ VMFS datastores
      Storage vMotion Snapshot Support
      • Availability
      • 84. vMotion with higher latency links
      • 85. Management
      • 86. Inventory Extensibility
      • 87. Solution Installation and Management
      • 88. iPad client
    • SRM v5
      Traditional DR Coverage Often Limited To Tier 1 Apps
      Tier 1 Apps - Protected
      Need to expand DR protection
      • Tier 2 / 3 applications in larger datacenters
      • 89. Small and medium businesses
      • 90. Remote office / branch offices
      Tier 2 / 3 Apps – Not protected
      [Diagram: stacks of APP/OS boxes representing virtual machines]
      Small sites – Not protected
      Small Business
      Remote Office / Branch Office
      Corporate Datacenter
    • 91. SRM Provides Broad Choice of Replication Options
      vCenter Server
      Site Recovery Manager
      vCenter Server
      Site Recovery Manager
      vSphere Replication
      [Diagram: protected virtual machines at both sites]
      vSphere
      vSphere
      Storage-based replication
      vSphere Replication: simple, cost-efficient replication for Tier 2 applications and smaller sites
      Storage-based replication: High-performance replication for business-critical applications in larger sites
    • 92. Planned Migrations For App Consistency & No Data Loss
      Overview
      Planned Migration
      Two workflows can be applied to recovery plans:
      • DR failover
      • 93. Planned migration
      Planned migration ensures application consistency and no data-loss during migration
      • Graceful shutdown of production VMs in application consistent state
      • 94. Data sync to complete replication of VMs
      • 95. Recover fully replicated VMs
      Shut down production VMs
      Recover app-consistent VMs
      3
      1
      Site A
      Site B
      vSphere
      vSphere
      Replication
      Benefits
      2
      Sync data, stop replication and present LUNs to vSphere
      Better support for planned migrations
      • No loss of data during migration process
      • 96. Recover ‘application-consistent’ VMs at recovery site
    • Automated Failback To
      Streamline Bi-Directional Migrations
      Site A
      Overview
      Automated Failback
      Re-protect VMs from Site B to Site A
      • Reverse replication
      • 97. Apply reverse resource mapping
      Automate failover from Site B to Site A
      • Reverse original recovery plan
      Restrictions
      • Does not apply if Site A has undergone major changes / been rebuilt
      • 98. Not available with vSphere Replication
      Reverse original recovery plan
      Reverse Replication
      Site B
      vSphere
      vSphere
      Benefits
      Simplify failback process
      • Automate replication management
      • 99. Eliminate need to set up new recovery plan
      Streamline frequent bi-directional migrations
    • 100. Demo
      SRM DEMO
    • 101. vSphere 5 Licensing and Pricing
      Overview
    • 102. vSphere 5 Licensing
      Evolution Without Disruption
      !
    • 103. What is vRAM?
      • vRAM is the memory configured for a virtual machine
      • 104. Assigning a certain amount of vRAM is a required step in the creation of a virtual machine
    • Key vRAM Concepts
      Each vSphere 5 processor license comes with a certain amount of vRAM entitlement
      1
      Pooled vRAM Entitlement
      2
      Sum of all processor license entitlements
      Consumed vRAM
      3
      Sum of vRAM configured into all powered on VMs
      4
      Compliance = 12 month rolling average of Consumed vRAM < Pooled vRAM Entitlement
    • 105. Key Concepts - Example
      Each vSphere Enterprise Edition license carries a 64 GB vRAM entitlement.
      4 licenses of vSphere Enterprise Edition provide a vRAM pool of 256GB (4 * 64 GB)
      64GB
      64GB
      64GB
      64GB
      vRAM Pool (256GB)
      Consumed vRAM = 80 GB
      Customer creates 20 VMs with 4GB vRAM each
      vSphere Ent
      vSphere Ent
      1
      1
      1
      1
      CPU
      CPU
      CPU
      CPU
      Host A
      Host B
      Compliance = 12 month rolling average of Consumed vRAM < Pooled vRAM Entitlement
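
The same arithmetic can be checked against a live inventory. The sketch below is illustrative only: it assumes the per-license entitlement shown on the slide (64 GB for Enterprise) and sums the configured memory of powered-on VMs with PowerCLI, whereas actual licensing compliance is tracked by vCenter as a 12-month rolling average of consumed vRAM.

```powershell
# Minimal sketch: compare pooled vRAM entitlement with currently consumed vRAM.
$licenses         = 4      # vSphere Enterprise licenses in the pool
$vRamPerLicenseGB = 64     # per-license entitlement for Enterprise
$poolGB = $licenses * $vRamPerLicenseGB                     # 4 x 64 GB = 256 GB

# Consumed vRAM = sum of memory configured into all powered-on VMs.
$consumedGB = (Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
               Measure-Object -Property MemoryGB -Sum).Sum

"Pooled: $poolGB GB  Consumed: $consumedGB GB  Within entitlement: $($consumedGB -le $poolGB)"
```
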
    • 106. vSphere 5.0 More Detail
    • 107. Demo
      vRam Tool Demo
    • 108. VNX Overview
      Next Generation Storage
    • 109. Next-Generation Unified Storage
      Optimised for today’s virtualised IT
      EMC Unisphere
      VNXe3100
      VNX7500
      VNX5700
      VNXe3300
      VNX5100
      VNX5500
      VNX5300
      Affordable. Simple. Efficient. Powerful.
    • 110. VNXe Series Models
      Simple. Efficient. Affordable.
    • 111. VNX Series Hardware
      Simple. Efficient. Powerful.
    • 112. EMC: The VMware Choice
      2 out of 3 CIOs pick EMC for their VMware environments
      “Which vendor(s) supplied the networked (SAN or NAS) storage used for your virtual server environment?”
      Trusted storage platform for the most critical and demanding VMware environments
      Advanced integration and functionality that maximizes the value of a virtualized data center
      Flexibility to meet infrastructure to business and technical needs
      Knowledge, experience, and partnerships to make your virtual data center a reality
      “Which is your storage vendor of choice in a virtual server environment?”
      “EMC remains the clear storage leader in virtualized environments.”
    • 113. 3x Better Performance
      More users, more transactions, better response time
      FAST Cache
      FAST VP
      3X
      VNX Platform
      Faster
      CX/NS Platforms
    • 114. Virtualisation Management
      EMC Virtual
      Storage
      Integrator
      EMC Virtual Storage Integrator plug-in
      Integrated point of control to simplify and speed VMware storage management tasks
      • One unified storage tool for all Symmetrix, CLARiiON, Celerra, VNX series, and VNXe series
      VMware vSphere
      Unified storage
    • 115. The FAST Suite
      Highest performance & capacity efficiency…automatically!
      • FAST Cache continuously ensures that the hottest data is served from high-performance Flash SSDs
      • 116. FAST VP supporting both file and block optimizes storage pools automatically, ensuring only active data is being served from SSDs, while cold data is moved to lower-cost disk tiers
      • 117. Together they deliver a fully automated FLASH 1st storage strategy for optimal performance at the lowest cost attainable
      Real-time caching with FAST Cache
      Flash SSD
      High Perf. HDD
      High Cap.
      HDD
      Scheduled optimization with FAST VP
    • 118. MAP
      FAST Cache Approach
      Exchange
      SharePoint
      Oracle
      Database
      File
      VMware
      SAP
      • Page requests satisfied from DRAM if available
      • 119. If not, FAST Cache driver checks map to determine where page is located
      • 120. Page request satisfied from disk drive if not in FAST Cache
      • 121. Policy Engine promotes a page to FAST Cache if it is being used frequently
      • 122. Subsequent requests for this page satisfied from FAST Cache
      • 123. Dirty pages are copied back to disk drives as background activity
      DRAM
      FAST Cache
      Policy
      Engine
      Driver
      Disk Drives
    • 124. FAST VP for Block & File Access
      Optimise VNX for minimum TCO
      BEFORE
      AFTER
      LUN 1
      Automates movement of hot or cold blocks
      Optimizes use of high-performance and high-capacity drives
      Improves cost and performance
      Pool
      Tier 0
      LUN 2
      Tier 1
      Most activity
      Neutral activity
      Least activity
      Tier 2
      User B 10 GB
      User A 10 GB
      User C 10 GB
      Logical
      application
      and user view
      Physical
      allocation
      4 GB
      Physical consumed storage
      2 GB
      2 GB
      VNX Thin Provisioning
      Only allocate the actual capacity required by the application
      Capacity oversubscription allows intelligent use of resources
      File systems
      FC and iSCSI LUNs
      Logical size greater than physical size
      VNX Thin Provisioning safeguards to avoid running out of space
      Monitoring and alerting
      Automatic and dynamic extension past logical size
      Automatic NAS file system extension
      FC and iSCSI dynamic LUN extension
      VNX THIN PROVISIONING
      Capacity on demand
    • 126. VNX Virtual Provisioning
      Thick pool LUN:
      Full capacity allocation
      Near RAID-Group LUN performance
      Capacity reserved at LUN creation
      1 GB chunks allocated as relative block address is written
      Thin pool LUN:
      Only allocates capacity as data is written by the host
      Capacity allocated in 1 GB chunks
      8 KB blocks contiguously written within 1 GB
      8 KB mapping incurs some performance overhead
    • 127. VNX Series Software
      Software Solutions Made Simple
      Attractively Priced Packsand Suites
      Total Efficiency Pack
      FAST Suite
      Security and Compliance Suite
      TotalProtection Pack
      Local Protection Suite
      Remote Protection Suite
      Application Protection Suite
    • 128. VNX: Faster than the Rest
      Highest number of transactions and lowest response time
      [Chart: SPECsfs2008 NFSv3 results – transactions (higher is better) vs. response time in ms (lower is better); the VNX result is 3x faster than IBM, with HP and NetApp further behind]
      Note: SPECsfs2008 NFSv3
    • 129. VNX Series for Virtual Desktop
      4x the number of Virtual Desktop users with VNX Series, FAST VP & FAST Cache at Sustained Performance
      Up to 70% reduction in storage cost for same I/O performance
      Boot Storm:
      3x Faster: Boot & settle 500 desktops in 8 min vs. 27 min
      FAST Cache absorbs the majority of the boot workload (i.e. I/O to spinning drives)
      Desktop Refresh:
      Refresh 500 desktops in 50 min vs. 130 min
      FAST Cache services the majority of the I/O during refresh and prevents linked clones from overloading
      Celerra NS
      183x 300GB 15K FC Disks
      VNX series
      5x 100GB SSD
      21x 300GB 15K SAS
      15x 2TB NL-SAS
    • 130. VNX Demo
      UnisphereConsole:
      Dashboard
      Customised view
      System
      Disks
      System Properties
      Fast Cache
      Storage
      Pools
      LUNS
      Compression – compression on LUN
      Thin Provisioning
      Auto tiering
      Hosts/Storage Groups/Virtualisation
      Analyser – Monitor and Alerting
      USM
    • 131. Questions and Answers
    • 132. vSphere 5 Training Offers
      Take advantage of any of the below VMware course offers, which are taking place at our Southampton Training Centre, and receive a FREE place on Deploying & Managing Microsoft System Center Virtual Machine Manager, worth £895.
      VMware vSphere: Troubleshooting
      Duration: 4 Days
      Cost: £2,075.00 + VAT per delegate
      Dates: 03-06 October
      Offer: Book 1 space and save 20% or book 2 spaces and save 30%
      VMware vSphere: Install, Configure & Manage
      Duration: 5 Days
      Cost: £2,595.00 + VAT per delegate
      Dates: 10-14 October (v4.1), 17-21 October (v5) & 12-16 December (v5)
      Offer: Book 1 space and save 15% or book 2 spaces and save 25%
      Exam: Includes Free Exam Voucher
      VMware vSphere: Skills for Operators
      Duration: 2 Days
      Cost: £1,095.00 + VAT per delegate
      Dates: 29-30 September & 07-08 November
      Offer: Buy 2 spaces, get 1 free
    • 133. For further information on vSphere 5, or to book a one to one consultation, please contact your account manager or email ict@bluechip.uk.com
    • 134. Blue Chip
      Change is the only constant in business... ...evolution is the key to survival
      www.bluechip.uk.com
