12. Key consideration #2 – Don't back up the same data twice
[Diagram: virtual machines A–E, each containing its own application and operating system layers; the same application and OS data is duplicated across every VM image]
Symantec VMware Solutions: the industry's most complete availability and security product line
13. Key consideration #2 – Apply deduplication across physical and virtual environments
[Diagram: multiple sets of virtual machines A–E, each with application and OS layers, funneling through a deduplication engine so that common application and OS data is stored only once]
14. Key consideration #3 – Don't back up virtual machines the same way as physical hosts
Traditional in-guest backup: protection is performed at the source/client side, inside each virtual machine.
PROS
• Simple to deploy
• Works exactly like a physical host
CONS
• Performance impact
• Restores are not image-based
• Time-consuming DR
• Ongoing maintenance
[Diagram: backing up 1 TB of VM systems moves the full 1 TB over the network]
*1 TB of VM systems
15. Key consideration #3: Use the virtualization vendor's APIs for more advanced data protection
Agentless/off-host protection.
PROS
• Lower impact on performance
• Granular Windows restores
• Advanced protection
• Better storage efficiency
• Faster VM backups
• Simpler DR
CONS
• ?
[Diagram: the same 1 TB of VM systems moves only 0.5 TB off-host]
*1 TB of VM systems
38. vSIC communication flow: unknown file
[Flow between the VM (client), the SEP 12.1.2 SVA, and the vShield Manager; the vShield drivers are part of VMTOOLS:]
1. Scan request: the guest's file access triggers a scan request, sent through the vShield "plumbing".
2. Get the EFA (Extended File Attribute) and check whether the file's hash is in the cache: the file is unknown.
3. Submit the file to the SVA, sending the sequence number.
4. Scan this file: run the AV scan and check the file's reputation.
5. Update the EFA attribute, adding the clean status, AV-definitions number, and whitelist flag for the now-known file to the cache.
39. vSIC communication flow: known file
[Flow between the VM (client), the SEP 12.1.2 SVA, and the vShield Manager; the vShield drivers are part of VMTOOLS:]
1. Scan request: the guest's file access triggers a scan request through the vShield "plumbing".
2. Get the EFA (Extended File Attribute) and check whether the file's hash is in the cache: the file is known, with matching reputation and whitelist flags.
3. Skip this file: AV terminates the scan and moves on to the next file.
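Slides 38 and 39 reduce to one cache decision: look up the file's hash in the EFA cache, scan via the SVA only on a miss, and record the clean result so the next access is skipped. The sketch below is a simplified model with assumed names, not SEP 12.1.2's real vShield interfaces.

```python
# Sketch of the vSIC scan decision modeled from slides 38-39: an
# unknown file is scanned by the SVA and its clean hash cached; a
# known file short-circuits so AV skips straight to the next file.
# Hypothetical model, not SEP 12.1.2's actual vShield interfaces.
import hashlib

class ScanVerdict:
    SKIPPED = "skipped (known clean)"
    SCANNED = "scanned by SVA"

def handle_scan_request(content, efa_cache, av_scan):
    """One scan request arriving from the guest via vShield."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in efa_cache:           # slide 39: known file
        return ScanVerdict.SKIPPED
    av_scan(content)                  # slide 38: unknown file
    efa_cache.add(digest)             # cache + whitelist the clean hash
    return ScanVerdict.SCANNED

cache = set()
scans = []
verdict1 = handle_scan_request(b"setup.exe", cache, scans.append)
verdict2 = handle_scan_request(b"setup.exe", cache, scans.append)
assert verdict1 == ScanVerdict.SCANNED
assert verdict2 == ScanVerdict.SKIPPED
assert len(scans) == 1                # the file was AV-scanned once
```

The second request for the same content never reaches the AV engine, which is exactly why the known-file flow on slide 39 is cheap.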
41. Symantec VMware Solutions
[Diagram: three guests (Guest OS 1–3), each an APP on an OS, running on a VMware ESXi Host managed by vCenter Server]
Does the ESXi host itself also need protection?
In addition, does vCenter Server also need to be taken into account?
53. ApplicationHA feature overview
• VCS + ApplicationHA provide coordinated application recovery
– VCS manages VM availability through agents
– Monitors applications, reports health status, and detects application problems
– Restarts the application; if needed, triggers VCS to reboot the VM
• Wizard-driven automatic application discovery and configuration
• Multi-tier HA (virtual business services) support
• Dedicated operations UI for virtualization and application control
– Start, stop, and restart applications running inside VMs
[Diagram: VMs running Oracle and SQL with AppHA inside each guest OS and VCS beneath; a UI for datacenter_1/cluster_1 showing SQL, SAP, and WAS tiers with actions such as Start host_2, Stop vm_1, and Restart SQL_1]
54. The value of ApplicationHA-SRM integration
Application monitoring during test or real recovery across both sites (Site A: real recovery; Site B: test recovery)
• Automated application monitoring between the two sites
• No dependence on VMware HA at either site
• Dashboard-based monitoring across both sites
Application detection in the SRM audit trail
• Application status is captured via the SRM audit trail
• Provides evidence for auditors and management
57. End-to-end HA with AppHA, VCS, and VBS
[Diagram: three virtual business services, a Procurement VBS, an HR VBS, and a Finance VBS, each grouping VMs protected by AppHA on top of VCS clusters, with CFS HA and SF HA at the storage layer, all managed through Veritas Operations Manager]
58. Multi-tier applications and services
[Diagram: an Accounting business service managed through Veritas Operations Manager, spanning a Web-server service group (Web VMs protected by Symantec ApplicationHA), an application service group (app VMs and an IP resource under Veritas Cluster Server HA), and a database service group (DB, IP, FS, and VVR resources under Veritas Cluster Server HA/DR)]
59. ApplicationHA and VCS fit different scenarios
Application HA
• Critical workloads need application-level monitoring and recovery
• ApplicationHA is Symantec's HA solution integrated with VMware HA
Veritas Cluster Server
• Reduces recovery time by avoiding VM reboots
• Allows physical-to-virtual failover
• Reduces planned downtime for OS patching
• Supports vMotion, DRS, and Site Recovery Manager
• Simple to set up and manageable through vCenter
Dynamic Multi-Pathing: improve storage performance
• Enhanced storage performance and path high availability
• Lets you choose the right storage at the right price
[Diagram: SQL VMs protected by Application HA under VMware HA, by Veritas Cluster Server across VMware ESX hosts, and by Dynamic Multi-Pathing at the virtualization platform's storage layer]
Objective: Identify with the customer's business objectives. Transition: Now that you have an understanding of how this technology can support your overall initiative, let me walk you through how we protect your virtual environment.
So let's drill into each area a bit more. Virtualization slows the backup process. Because backup is the process of copying data and moving it from one location to another, it requires server resources, specifically I/O and CPU power. This is in direct conflict with virtualized server environments, which typically maximize all of a server's resources. The result is that the backup of one virtual machine negatively impacts all the other virtual machines running on the same physical server. Not only will those servers run more slowly, but the virtual machine backup itself may also be impacted. And because many virtual machines share network resources, the backup process often creates network bandwidth problems for other machines sharing that local area network.

Plain and simple, virtualization increases backup storage consumption. How can virtual machine backups be different from standard physical machine backups? There are two important concepts here. First, many customers choose to back up both the individual files in a virtual machine and the entire machine (e.g., the VMware VMDK or Microsoft VHD). There is a lot of duplicate data, such as OS data, in those virtual machine images that many customers back up every day. Second, even if you don't do both of these types of backup, most customers will acknowledge virtual machine sprawl. Most or all of the servers in the so-called "sprawl" need to be protected, resulting in more backup storage consumption.

Finally, if you are using virtual machine backups for disaster recovery, then you are probably storing full image backups from a number of days or weeks at both your data center and your DR site. These images may give you faster recovery, but they consume a lot of storage. Do all of these VM backups need to be on disk? We would be talking about weeks of full server backup data, often stored on disk.

Virtualization improves the efficiency of the server team, but it can reduce the productivity of the backup team.
How can this be, you ask? Well, many companies deploy different backup and recovery tools for virtual servers and applications. The result is that not all team members understand the backup process, because of the increase in tools. Worse, you now need to monitor and report on backup activities across multiple applications. Did all your jobs complete successfully? How much data are you protecting? Can everybody execute a recovery? You get the picture.

What about recovery of specific files or directories? At EMC World 2009, a speaker stated that 80% of VM recoveries were for individual files. If you perform image-based virtual machine backups, you have no detailed file information within your backup catalog. This means that your backup administrator has to spend more time on each recovery request. In fact, some IT teams simply refuse to perform individual file recoveries!

Finally, let's talk about the toll of administration and monitoring. How much time is your team spending making sure that all deployed virtual machines are protected? If some virtual machines are intentionally not protected, has that gap been documented with the business owners? Separate backup tools for virtual and physical machines only make this problem worse.
Last year Symantec conducted an extensive global survey of thousands of end users. It is astonishing that nearly two-thirds of virtual machines are not backed up! The amount of risk businesses are taking on as a result is amazing. There are some very real historical reasons for these results:

1. Virtual machine sprawl – virtual machines spread like rabbits. Often, IT just doesn't know about the new machine (or knows about it but doesn't know the RPO/RTO requirements – no SLA). This is one of the top reasons virtual machines are not backed up.
2. Cost of backup agents – in the past, IT would have to buy individual backup agents for each new server, potentially costing thousands of dollars and destroying much of the cost savings created by virtualization.
3. I/O and bandwidth impact – IT may have been concerned about dragging down the host machine and/or network by moving a lot of data for backup. The whole idea of virtualization is to increase server, CPU, and network utilization, and if you are successful, there is less "slack in the system" to handle backup loads.
Today many IT shops are backing up the same data twice in virtual environments: they back up once for full image recovery, and then a second time for more granular file and object recovery. The thinking is that if you want to recover a single email or a single calendar item from Exchange, and you have only backed up the virtual machine image, you will first have to restore the entire server and then recover the granular data you seek.

The problem, as shown in the graphic, is that you take twice as long, put twice the load on the network, and consume twice the storage capacity for the same data.

Today, thanks to new capabilities from virtualization vendors and backup application vendors, there are solutions that allow you to do a single backup and still recover granular object items.
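The single-backup idea above can be sketched in miniature: if the image-level pass also records a per-file catalog, a granular restore can pull one object straight out of the image instead of requiring a second backup. The sketch below uses hypothetical names and is an illustration of the concept, not NetBackup's actual implementation.

```python
# Sketch: one image-level backup that also builds a file catalog,
# so granular (single-file) restores need no second backup pass.
# Hypothetical names; not Symantec's actual implementation.

def image_backup_with_catalog(vm_files):
    """Back up a whole VM image once, cataloging file offsets."""
    image = bytearray()
    catalog = {}                      # path -> (offset, length)
    for path, data in vm_files.items():
        catalog[path] = (len(image), len(data))
        image.extend(data)            # one copy of the data...
    return bytes(image), catalog      # ...plus a cheap index

def granular_restore(image, catalog, path):
    """Recover a single file straight from the image backup."""
    offset, length = catalog[path]
    return image[offset:offset + length]

vm = {"C:/mail/inbox.edb": b"mail-data", "C:/os/kernel32.dll": b"os-data"}
image, catalog = image_backup_with_catalog(vm)
# Full-image restore and single-file restore both come from ONE backup:
assert granular_restore(image, catalog, "C:/mail/inbox.edb") == b"mail-data"
```

Because the catalog is built during the same pass, the data is read, transferred, and stored once, which is the point of the slide.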
According to Enterprise Strategy Group's most recent research (see appendix), 62% of companies still back up virtual machines this way. There are a lot of historical reasons for doing this, including a physical-machine "mindset" in IT, uncertainty about the ability to recover granularly, and, most importantly, the limitations imposed by virtualization vendors. Example: VMware Consolidated Backup required a proxy server and had significant limitations for backup at the hypervisor level.

The impact of this approach is significant: higher real costs (of agents) and unnecessary management complexity. Today, the virtualization vendors have improved APIs to support centralized backup, and many vendors have the ability to back up at the hypervisor level.
In a 2010 survey, customers were asked what approach they use today for physical and virtual machine backup: one single backup infrastructure or two separate infrastructures. More than half of IT shops use two vendors. The survey went on to ask about the *preferred* approach, and IT shops correctly recognized the mistake of running two separate backup infrastructures: almost three-quarters (74%) said a single vendor is preferred.

The growth of virtualization has been a lot like the growth of many disruptive technologies. In the early days of Linux, for example, there were specialized people and niche technologies that supported the exciting new technology. Today, Linux is simply part of how IT organizations work. Similarly, some organizations have developed a divergent approach to backup for their virtual servers. This is a mistake.

What some people do:
- Use one tool for physical servers
- Use a different tool for virtual servers

Why some people do this:
- Limitations in backup support from VMware and Hyper-V
- Poor support from major vendors
- The VMware admin is a different person than the backup admin
- Lack of awareness of support from major vendors

Why is this approach a problem?
- Inconsistent data management results
- Confusion and conflict between IT organizations
- Turf wars, budget battles, etc.
How to avoid this? Explore your backup vendor's solutions for virtualization. If your current backup solution doesn't have strong support for both physical and virtual machines, find one that does. Get the virtualization and backup teams together, and assign ownership and budget for backup of both physical and virtual machines.
Too many copies. The primary. 5 snapshots. Daily incrementals. Weekly fulls. The archives.
Client deduplication: This method reduces the amount of data sent over the network and stored. It does increase the CPU load on the client machine, but only for a relatively short period of time. This is ideal for remote offices, some virtual machines, and some LAN-connected machines.
Many customers protect their VMs in the same manner as their physical machines. NetBackup offers a better way: simply turn on deduplication at the client. The deduplication client eliminates duplicate data at the source, which equates to as much as 99% less bandwidth needed to move that data. Why? First, because on average, most incremental backups represent only 10% of all data. Second, with deduplication, we can take that percentage down even further, because we can eliminate redundant information across virtual and physical machine instances. So instead of moving 1 TB of VM data whenever you perform a full backup, you only move 10 GB of data. What if you rarely perform full backups? You still get the same ratio of reduction.

Of course, moving less data also means storing less data. And with NetBackup deduplication, you can efficiently replicate this data to another site, rather than writing it to tape.

Finally, using NetBackup client-based deduplication is simple: it is just like any other client-based backup. And because NetBackup supports a wide number of OS types and applications with client-based deduplication, it can be used with almost any virtual machine type or application. Of course, there are other ways to protect VMs, ones which offer full VM recovery, and we can help there as well.
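The bandwidth arithmetic above (1 TB becoming roughly 10 GB moved) comes from sending only blocks the backup store has not already seen. Below is a minimal block-level sketch of client-side deduplication; the names and the fixed 4 KB block size are assumptions for illustration, not NetBackup's actual dedup engine.

```python
# Sketch: client-side, block-level deduplication. The client hashes
# fixed-size blocks and sends only blocks the server hasn't seen, so
# OS/application data shared across VMs crosses the network once.
# Simplified illustration, not NetBackup's actual dedup engine.
import hashlib

BLOCK = 4096

def dedup_backup(data, store):
    """Return the recipe for `data`; add only new blocks to `store`."""
    recipe, sent = [], 0
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:       # unseen block: must transfer it
            store[digest] = block
            sent += len(block)
        recipe.append(digest)         # seen block: send only the hash
    return recipe, sent

def restore(recipe, store):
    return b"".join(store[d] for d in recipe)

store = {}
vm1 = b"OS" * 8192            # two VMs sharing identical OS blocks
vm2 = b"OS" * 8192 + b"app-delta"
_, sent1 = dedup_backup(vm1, store)
recipe2, sent2 = dedup_backup(vm2, store)
assert sent2 < sent1          # second VM moves only its unique data
assert restore(recipe2, store) == vm2
```

Backing up the second VM moves only its unique tail; everything it shares with the first VM is referenced by hash, which is the source of the reduction ratios quoted above.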
NetBackup for VMware offers several advantages over using traditional clients/agents in VMs. First, it is "agentless", meaning the agent resides in the backup server, not in each VM, so it is very easy to deploy and manage. Second, almost all of the backup processing is performed by the backup server. Third, these agents support incremental backups and deduplication, both of which greatly reduce the amount of data transfer and processing. Lastly, both support fast recovery of entire VMs or individual files, directly from any backup storage. NOTE: NetBackup also offers deep integration for Microsoft Hyper-V.
17+ VMware Security Advisories in 2011
>50% could compromise the admin VM or lead to hypervisor escapes
Examples of volatile files are: Windows search index files, temp files, desktop.ini files, user-edited files, etc. The algorithm for skipping volatile files is patent-pending at Symantec. Volatile files are identified by keeping a hashed state flag in SymEFA. When a file is modified, SymEFA removes the hash. The next time we access the file, we know that it is volatile.
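The hashed-state-flag idea above can be modeled in a few lines: store a content hash per file, clear it whenever the file is modified, and treat a missing hash on the next access as "volatile". This is a hypothetical model of the described behavior, not Symantec's patented SymEFA implementation.

```python
# Sketch of the volatile-file idea described above: keep a hashed
# "clean" flag per file, drop it whenever the file changes, and treat
# any file whose flag is missing on the next access as volatile.
# Hypothetical model, not Symantec's actual SymEFA implementation.
import hashlib

class EfaCache:
    def __init__(self):
        self._flags = {}              # path -> hash of scanned content

    def mark_scanned(self, path, content):
        self._flags[path] = hashlib.sha256(content).hexdigest()

    def on_modify(self, path):
        self._flags.pop(path, None)   # modification clears the flag

    def is_volatile(self, path):
        # No stored hash on access => file changed since last scan.
        return path not in self._flags

efa = EfaCache()
efa.mark_scanned("desktop.ini", b"v1")
assert not efa.is_volatile("desktop.ini")   # stable: skip rescanning
efa.on_modify("desktop.ini")
assert efa.is_volatile("desktop.ini")       # changed: treat as volatile
```

Files that keep their flag across accesses are stable and can be skipped; files that keep losing it (search indexes, temp files) are exactly the volatile ones the algorithm avoids rescanning work for.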
(1): The one-SCSP-virtual-agent-per-ESXi-server restriction applies to the MP3 release. Beyond MP3, we plan to enable a single SCSP agent to monitor multiple ESXi servers.
Notes: vCenter requires a relational database, either SQL Server or Oracle. The standard Windows Strict policy has specific containment for SQL Server, though typically the database will not be co-resident on the same system as vCenter. There are no specific protection rules for Oracle (and none are being developed for this effort). Oracle runs on many systems, so both Windows and Unix/Linux rules would need to be developed and tested.
When talking to customers today, their pain points normally fall into three categories. Let's look at some of these pain points in more detail.
Symantec ApplicationHA leverages this API in 4.1 to add application awareness to VMware HA and coordinate recovery with it. Let's take an example to see how this works: two ESX servers, with two VMs running on one of them. Together, ApplicationHA and VMware HA protect against a wide range of failures: for example, infrastructure failures, scenarios where the infrastructure is up but the app is down, or where the app is up but not functional.
Let's double-click on how the solution works. Central to the solution is ApplicationHA's deep understanding of the applications. Understanding an application involves understanding app-specific components and processes, the dependencies between them, the storage it uses, how to start and stop the app, what it means for the app to be up and functional, etc.

This application intelligence is encoded in application-specific modules (or agents) that plug into ApplicationHA's application framework and provide the ability to monitor the app, detect when it fails, and recover, start, and stop it. For instance, the SQL 2008 agent understands the multiple SQL DB instances, FileStream, and the Analysis service. Beyond verifying that the set of application processes for SQL is up, the SQL agent ensures that it can process SQL queries, such as adding and deleting table entries, and is truly functional.

Note that the app-specific insights have been derived from the agents delivered with the industry-leading Veritas Cluster Server technology, which has been helping the world's largest organizations ensure availability for their most critical applications for over 10 years. This application visibility and control is available for a wide range of applications across Windows and Linux. The initial release targets SQL, Exchange, etc. Similar to what we do for VCS today, we plan to update our application support for ApplicationHA every quarter.

The ApplicationHA framework watches the app status from these app-specific modules and communicates with VMware HA through a heartbeat mechanism when needed. If the application is up and running, ApplicationHA will continue to send heartbeats at regular intervals. If the application fails, it will try to restart it a few times. And if that doesn't help, it will stop the heartbeat, letting VMware HA know that it needs to restart the VM. The solution enables users to customize this recovery behavior.
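The heartbeat behavior described above can be sketched as a small decision routine: keep heartbeating while the app is healthy, retry the app a bounded number of times on failure, and withhold the heartbeat to escalate to a VM restart. All names here are hypothetical illustrations of the described behavior, not ApplicationHA's actual code.

```python
# Sketch of the heartbeat logic described above: while the app is
# healthy, ApplicationHA keeps heartbeating to VMware HA; on failure
# it retries the app a bounded number of times, then withholds the
# heartbeat so VMware HA restarts the VM. Hypothetical model only.

def monitor_cycle(app_is_up, restart_app, max_restarts):
    """One monitoring decision: returns (heartbeat_sent, action)."""
    if app_is_up():
        return True, "app healthy"
    for attempt in range(max_restarts):
        restart_app()
        if app_is_up():               # bounded in-VM recovery first
            return True, f"app restarted (attempt {attempt + 1})"
    # Restarts exhausted: stop heartbeating so VMware HA reboots the VM.
    return False, "escalate: VM restart via VMware HA"

state = {"up": False, "fixable": True}
def app_is_up(): return state["up"]
def restart_app(): state["up"] = state["fixable"]

hb, action = monitor_cycle(app_is_up, restart_app, max_restarts=2)
assert hb and action.startswith("app restarted")

state.update(up=False, fixable=False)
hb, action = monitor_cycle(app_is_up, restart_app, max_restarts=2)
assert not hb                         # heartbeat withheld: VM restart
```

Bounding the restart count before escalation is exactly the configurable behavior the next note describes: it keeps a flapping application from looping forever inside the VM.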
Specifically, you can set the number of times an app or VM recovery should be attempted before asking for user intervention. This ensures we don't compound the problem and end up in an endless loop. In certain situations you may want app restarts but not VM restarts. In other cases, you may want to turn off automated remediation completely, say when you are doing planned maintenance activities.
VCS will be used to manage infrastructure availability for VMs using agents, for example the LDOM or KVM agents. Later I will outline how VCS will be deployed on each physical machine.

Let's move on to AppHA. From an availability point of view, we support the same operations as Lorenzo just outlined for VMware: application visibility, app restart, and VM restart. The difference is that AppHA now reports into VCS rather than into any native clustering technology.

Push installation and auto-registration with the VCS VM: auto-registration means that AppHA configuration will be very quick and easy. Wizards will be available to configure applications via VOM. AppHA will perform an auto-registration to the VCS VM and add the needed configuration details to VCS to manage the VM. The traditional VCS command line and configuration technique (main.cf) will also be available.

This concept also supports what we call multi-tier HA, using a new feature known as VBS, Virtual Business Service. In the next release of VCS (and also AppHA), we are introducing VBS. Multi-tier HA support means that we are able to control HA over several tiers, with dependencies between different tiers, and also fault propagation between different tiers and different operating systems. I will not cover VBS in detail, but I really recommend that you not miss the "HA Futures" session, where VBS will be outlined. The important thing here is to remember that we will have multi-tier HA support. We will also be able to control start and stop of individual VMs, as well as applications inside VMs.

Okay, so why are we doing this? For those of you who are familiar with VCS, you know that we did application monitoring inside a VM using the Remote Group agent. However, there are some significant differences: the AppHA model is much more lightweight, quicker and easier to install, with better visibility and easier management, plus multi-tier HA support.
List the key points to communicate
New VCS agent: VMwareDisks (aka HotSwap)
- Manages disk attach/detach operations on virtual machines in a VCS cluster
- Reconfigures storage on the virtual machines by sending disk attach/detach requests to the host ESXi server
- Uses gSOAP and C++ interfaces to communicate with the ESXi web service
- Resides at the lowest level of the storage stack in the service group
- Functions as glue between the ESXi host and the VCS components within the guests
- Is cross-platform (Windows and Linux)
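The "glue" role of this agent can be sketched by translating its online/offline entry points into attach/detach calls against the ESXi host. The client class below is a stand-in for the ESXi web service (the real agent talks to it over gSOAP/C++, not Python), so every name here is a hypothetical illustration of the architecture, not the agent's actual interface.

```python
# Sketch of the VMwareDisks agent's role: VCS online/offline calls on
# the disk resource become attach/detach requests to the ESXi host's
# web service. EsxiClient is a stand-in; the real agent uses gSOAP/C++.

class EsxiClient:
    """Stand-in for the ESXi web-service endpoint."""
    def __init__(self):
        self.attached = {}            # vm -> set of disk paths

    def attach_disk(self, vm, disk):
        self.attached.setdefault(vm, set()).add(disk)

    def detach_disk(self, vm, disk):
        self.attached.get(vm, set()).discard(disk)

class VMwareDisksAgent:
    """Lowest resource in the service group's storage stack."""
    def __init__(self, esxi, disk):
        self.esxi, self.disk = esxi, disk

    def online(self, vm):             # failover target acquires the disk
        self.esxi.attach_disk(vm, self.disk)

    def offline(self, vm):            # failing node releases the disk
        self.esxi.detach_disk(vm, self.disk)

esxi = EsxiClient()
agent = VMwareDisksAgent(esxi, "[datastore1] shared/data.vmdk")
agent.online("vm_1")
agent.offline("vm_1")                 # failover: move disk vm_1 -> vm_2
agent.online("vm_2")
assert "[datastore1] shared/data.vmdk" in esxi.attached["vm_2"]
assert not esxi.attached["vm_1"]
```

Sitting at the bottom of the service group means the disk moves before any file system or application resources are brought online on the failover target.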
Symantec's ability to work across this heterogeneous environment gives IT organizations a way to ensure the availability of multi-tiered applications, because Symantec will provide visibility and coordinated start and stop for all business service components, even if the Web server sits in a VMware virtual machine, the application server in KVM on Linux, and the database on physical big iron.

Using Symantec, HA can work across heterogeneous infrastructure, using the cross-platform layer to provide end-to-end HA information about an entire business service, comprising a multi-tier application and the heterogeneous infrastructure that supports it. Infrastructure HA aligns with and extends the core concepts of virtualization itself, by pooling information resources even across heterogeneous infrastructure, and making them available on demand for infrastructure HA or management solutions to take appropriate action to maintain service continuity.

Figure 1 shows a typical data-center server infrastructure, with one set of x86 servers hosting Windows and Linux virtual machines, another set running applications and middleware directly, and a set of "big iron" proprietary hardware running UNIX. A single Tier 1 application, "eCommerce", crosses all three levels, with Web services running on x86 VMs, applications on physical x86 servers under a JBoss Enterprise Application Platform, and a database (shared with other applications) on a UNIX platform.
Some significant concerns have made customers hesitate to move mission-critical applications to a virtualized environment:

Identification of bottlenecks: Without visibility into path statistics at a granular level, identifying specific bottlenecks in a virtualized environment is difficult. The statistics provided by DMP through both vCenter and the remote CLI provide great detail on performance, assisting in the identification of bottlenecks.

Vendor lock-in: NMP provides very generic support for storage arrays, with simple algorithms that pay no attention to the performance of the storage subsystem. PowerPath/VE provides greater flexibility, but locks customers into EMC storage arrays, where they typically overpay for their storage.

Enhanced device naming and extended attributes: The ability to quickly identify the storage attributes of a given device ensures that virtual machines are deployed on the right tier of storage. Additionally, the enhanced device naming allows for easier communication between the storage and server teams.
DMP for VMware leverages the same modular architecture that DMP on UNIX or Linux uses, including the ASLs/APMs. This allows DMP for VMware to easily support new arrays as needed. In order for Symantec to support a storage array, the array must first be certified and listed on VMware's HCL; we cannot provide support for unsupported arrays on ESX. The initial HCL for DMP for VMware is a subset of the physical arrays supported, and will be expanding over the coming months. Stress that the architecture itself supports a large number of array families and models. Adding array support is a matter of testing and validation of the test results by VMware for listing on their HCL. Arrays certified with DMP will be listed on the VMware HCL in addition to the Symantec HCL (included in the standard SFHA 6.0 HCL).
The VxDMP plug-in for VMware vCenter provides the centralized graphical management interface for DMP. The interface shows up as a separate tab for administration and management. This screenshot highlights the plug-in itself, and can also be used to highlight the availability of graphical performance metrics at all points in the product, evidenced by the bar graphs near the bottom center of the graphic. The subsequent slides focus on selected actions that may be performed through the VxDMP plug-in.
In this view, I/O policy changes are possible, as well as enabling and disabling specific storage paths. It is also worth pointing out the visibility of the AVID device names for the EMC CLARiiON, as shown. Also of note are the array serial numbers, the array failover type, and the storage layout.
In this view, we provide I/O statistics in tabular form, in addition to the graphical view previously highlighted. Here the statistics show aggregate I/O to the attached storage enclosures, but additional statistics on individual LUNs, array ports, and HBAs are also accessible. These statistics are the key to identifying I/O bottlenecks within ESX environments. They can be set to refresh automatically using the Auto-Refresh drop-down (15-second minimum granularity), and can be reset to zero for instant analysis of real-time I/O operations.
The DataCenter view within vCenter provides visibility of the storage arrays attached to ESX servers within the vCenter datacenter. This gives a single view of the storage infrastructure being used across the vCenter datacenter, and provides detailed statistics down to the LUN and virtual machine level, for unprecedented insight into the impact individual VM guests have on the I/O subsystem.