An Avastu White Paper

7 Steps to Cost-Effective, Hassle-Free, Virtual Server Protection
An Avastu White Paper: 7 Steps to Cost-Effective, Hassle-free, Virtual Server Protection

Introduction

Virtual machines are fast becoming a critical part of many data centers. In recent years, organizations across the globe have begun some form of virtualization in an effort to consolidate their IT departments, with the goal of increasing utilization and reducing management and infrastructure costs. According to Avamar, customers of all sizes are going through server consolidation. There is little argument that virtual infrastructures are a superior answer to the challenges of distributed server architecture and a sure way of reducing TCO.

However, virtualization has its downside. Consolidating physical servers onto a single virtual server creates a disproportionate ratio of data to physical bandwidth. Many customers who have deployed virtualization-based solutions in their existing IT environment have experienced an increase in storage requirements, and one of the main reasons is the challenge of backups, which can lead to inefficient backup solutions. The argument that virtualization eliminates the need for traditional backup and restore has already been proven wrong. As more and more production applications run in virtualized environments, the need to shield the virtual machines running them against data loss has become urgent.

It is imperative for an IT manager to know how long it will take to rebuild servers after a disaster, because with virtualization an equivalent event is more likely to happen. If you can't, or don't, keep backups that can be restored to bare metal, you have a challenge. Few data centers are full of identically configured servers with rebuild procedures detailed enough that every server can be rebuilt exactly like the one that was running before. And if you have to rebuild 1,000 VMs from installation disks, the whole objective of reducing TCO is lost.

The need of the hour is a solution that works seamlessly to back up your entire virtual environment - including Exchange servers, SQL servers, application servers, file and print servers, and domain controllers - and that guarantees complete, efficient, and cost-effective protection of virtual infrastructures. In this paper, we discuss the challenges that virtual infrastructures impose on traditional backup methods, and the solutions to those challenges using the latest backup methodologies, including an efficient and cost-effective option in the form of online backups. Welcome to the brave new world of disaster recovery: these new approaches go a long way toward making business continuity simpler, more affordable, and more reliable than ever.
Step 1: Virtualize your current servers if not already done

Today, virtualization is at the forefront, helping businesses with the scalability, security, and management of their global IT infrastructure. It is a must-have technology, and if you have not started to look at how virtualization can be used within your company, you are already falling behind.

Multiple applications and operating systems can be supported within a single physical system using virtualization. Moreover, since a virtual machine (VM) is completely isolated from the host machine and from other VMs, a crash in one leaves the others unaffected. In addition, a complete virtual machine environment is saved as a single file (encapsulation), which is easy to move and copy. The chief benefit of the technology, however, is reduced TCO - a direct outcome of the many benefits virtualization offers. If your IT costs are getting out of control as desktops, platforms, applications, and software versions proliferate across your organization, leaving you with neither the time nor the money for R&D, then virtualization is imperative for your organization.

In summary, virtualization is vital today because of:

• Underutilized x86 boxes – Average CPU utilization of 5-10% has become the norm rather than the exception. With virtualization, you can expect this figure to rise to around 70%.
• Repeatable admin tasks – Software is installed over and over again on each physical server, so a lot of time and money is spent on repeatable tasks. Virtualization efficiently eliminates this duplication.
• Floor space – Costly floor space is consumed by the large number of physical servers in the production environment. With virtualization, you will see a drastic reduction in the number of physical servers.
• Power and cooling charges – An ever-increasing number of physical servers consumes a lot of energy, running up huge electricity bills. Virtualization keeps these costs in check.
• Reduced business continuity costs – Encapsulation and abstraction reduce the cost and complexity of business continuity by enabling high availability and disaster recovery solutions in which a virtual machine can easily be replicated and moved to any target server.
• Security concerns solved – Where systems must be isolated from each other through complex networking or firewalls, those systems can now reside on the same physical server yet remain in their own sandboxes, isolated from each other by simple virtualization configurations.
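The utilization figures above imply a simple consolidation ratio. A back-of-the-envelope sketch (a CPU-only view; memory and I/O also constrain the real number, and the inputs are the illustrative percentages from the list above):

```python
def consolidation_ratio(avg_util_pct: int, target_util_pct: int) -> int:
    """Rough number of lightly loaded servers one virtualized host can absorb,
    given each server's average CPU % and the host's target CPU %."""
    if not 0 < avg_util_pct <= target_util_pct <= 100:
        raise ValueError("percentages must satisfy 0 < avg <= target <= 100")
    return target_util_pct // avg_util_pct

print(consolidation_ratio(5, 70))   # 14 servers per host at 5% average load
print(consolidation_ratio(10, 70))  # 7 servers per host at 10% average load
```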
Step 2: Mirror your virtual environment off-site

Data is the lifeblood of most organizations, and storing, retrieving, and protecting that data is of the utmost importance. Backing up data before something goes wrong, and then retrieving it when something does, can mean the difference between survival and extinction for your business. Today, the IT managers responsible for securing data must cope with challenges such as (but not limited to):

• Server upgrades and larger data sets
• Slow backups running into key productivity hours
• Exhausted or near-exhausted backup capacity
• Changing backup tapes after hours and on weekends
• Failed backups and restores

To further complicate the backup challenge, virtualization vendors such as VMware enable virtual machines to move between servers, which causes the volume of data backed up from each server to vary widely from day to day; organizations have therefore not been able to keep their backup environment constant. Virtualization also lets users run more than ten virtual machines on a single server. This consolidation creates challenges for traditional tape backup solutions, including long backup windows and high CPU and network utilization. Traditional data protection software generates simultaneous backup traffic that chokes the host server's CPU, memory, disk, and network components, often making it impossible to back up within the available windows.

With all these challenges and concerns, how do you, as an IT professional, make sure you have a well-developed and well-implemented plan that protects your organization from potentially severe financial damage? Most companies perform nightly backups and leave it at that. While on the surface that may seem sufficient, it assumes that the server and tape drive are operational, that the backup tapes weren't destroyed, that the backed-up data is not corrupt, and that losing the data created between the time the backup finished and the destructive event occurred won't cost you too much. It is important that you mirror your environment off-site to limit data loss, using any combination of the technologies (tape-based, disk-based, etc.) described in the following paragraphs.
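The choked-backup-window problem is easy to quantify: every consolidated VM's full backup must flow through the host's shared network interface. A rough sketch, where the VM count, VM sizes, and the link-efficiency factor are all illustrative assumptions:

```python
def backup_window_hours(n_vms: int, gb_per_vm: float,
                        nic_gbits: float, efficiency: float = 0.6) -> float:
    """Hours needed to push a full backup of every VM on a host through
    its shared NIC, assuming the link runs at the given efficiency."""
    total_gb = n_vms * gb_per_vm
    effective_gb_per_sec = nic_gbits / 8 * efficiency  # bits -> bytes, minus overhead
    return total_gb / effective_gb_per_sec / 3600

# Ten 100 GB VMs behind a single gigabit NIC at 60% efficiency:
print(round(backup_window_hours(10, 100, 1.0), 1))  # 3.7 hours for one full pass
```

At that rate, a nightly full backup of a well-consolidated host already consumes a large slice of the overnight window, which is why the later sections favor incremental and delta-based approaches.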
Almost since the beginning of modern computing, tape has been the principal backup and recovery technology. But tapes have long recovery times, which means high data recovery costs in the event of data corruption or system failure. Another problem is that, despite many advances in the technology, tape systems cannot keep up with the volume of data that must be stored in ever-shrinking backup windows. Other problems include:

• Tape failures during backup
• Inability to locate an appropriate and current backup tape
• Tape backups overrunning into production time
• Slow restore speeds from tape

The landscape, however, is finally changing. Vendors are offering newer, low-cost backup and recovery methods that promise to solve many of the problems IT administrators have faced with tape backups and restores. Alternatives to tape include disk-to-disk backup, virtual tape libraries, content-addressable storage, continuous data protection devices, new replication and snapshot schemes, and data compression techniques. Two of the most promising of these new backup technologies are host-based disk-to-disk (D2D) backup and virtual tape libraries (VTLs). With disk-to-disk devices and virtual tape libraries, backups can run within reasonable time frames, and more data can be kept online, which enables faster recoveries.

Host-based D2D backup strategies using ATA disk arrays appeared to offer an efficient answer to the never-ending backup problem, using low-cost disks to provide increased data transfer speeds for backup and recovery. But many IT administrators soon realized that D2D brings challenges that may not be readily apparent, including integration issues, storage formats, file system size and performance problems, and fragmentation concerns. To realize the benefits of D2D in their IT environments, administrators should be aware of these issues.

Virtual tape libraries take fundamentals of both traditional tape backup and the newer ATA disk-based technology and combine them into a solution optimized for existing backup environments. In essence, a VTL is a disk-based library that emulates standard tape library and tape formats. Acting like a tape library in the environment, but with the performance of modern ATA disk, the VTL offers the best of both worlds. Most enterprise organizations, however, have a sizeable investment in their existing backup infrastructure and have likely spent considerable effort getting their backups to function as well as possible with existing tape technology. Business leaders and IT administrators are hesitant to throw away such an investment, and are loath to upset their existing backup processes for the mere promise of increased performance. If you are one such IT manager and are concerned about how good your backups will prove to be in a virtualized environment, we suggest you take a look at the online backups discussed in a later section.
Step 3: Implement a VM-aware online backup service

With virtualization there are several ways to complete a backup, and virtualization provides far greater flexibility in backing up virtual servers than traditional methods allow. The two fundamental approaches are the classic backup, in which virtual machines are treated as physical servers, and the creation of snapshots for backups (VM-aware backup).

Classic File-Based Backups

A classic backup is one in which each VM is backed up and restored as if it were a physical server, using commercially available backup agents; a backup agent is maintained on each VM. Backing up each VM in this manner is ideal from an operations standpoint because, procedurally, no changes are required. A virtual machine can be backed up locally or across the network. The method has several strong points that follow conventional backup wisdom and requirements: it supports file-level backups and restores, so the administrator can back up what he wants, when he wants, and how he wants. This approach is generally considered the foundation of any backup environment because it allows continuous operation of the server and its services during the backup.

However, the method has numerous problems:

• It is resource intensive (network, CPU, and I/O load on the physical server).
• The backup process usually attempts a full backup of the entire infrastructure on a schedule. Because of the disproportionate amount of data behind each physical server, backing up all of the stored data within an operational backup window is difficult.
• The cost of this approach can be very high. Agent software is usually required on each VM, so licensing costs may grow rapidly with the number of VMs.
• Archived system data can only be restored onto a running virtual machine. This implies that your facilities are fully operational and that a virtual machine has been brought up before the recovery process begins. In times of chaos (disaster recovery), this is a very exposed position in terms of time management and time to recover.
Creating a Snapshot for Backups (VM-Aware Backup)

With this approach, an entire storage volume can be backed up. Third-party software is used to create a backup image that can be restored quickly; the service console is restored either from a boot CD or by using the backup software that created the image. Snapshotting is also known as cloning.

This approach shrinks the backup maintenance window while allowing the user population to work without obstruction. It creates redo logs that cache the changes occurring during the backup operation: with the virtual machine in an undoable state, writes to the virtual machine image are locked, allowing a backup to occur, while the redo log captures changes in chronological order and updates the virtual image once the process commits the changes. Creating a snapshot for backups therefore allows virtual disks to be archived without shutting down the virtual machine.

Taking snapshots and cloning virtual machine images has many advantages. Like agent-based backups, most IT administrators are familiar with them, and snapshot and cloning capability is included in many virtualization packages, such as VMware and XenSource, as well as in many traditional backup tools. Other advantages include:

• Minimized backup window
• Backup of the complete VM image
• Full VM restore
• Agent maintained only in the service console

On the other hand, snapshot-based backup is considered a risk-bearing process because operations remain in production status during the backup. Moreover, a snapshot backup is only crash-consistent, meaning the backup is as good as if the power had gone off. It is always advisable for a snapshot to have both local and remote storage space. We therefore recommend the online backup approach discussed in the subsequent sections.
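The snapshot-and-redo-log mechanism described above can be made concrete with a toy, in-memory simulation: while a snapshot is open, the base disk image is frozen and writes divert to a redo log; committing the snapshot replays the log onto the base. This is an illustration of the concept only, not a real hypervisor API:

```python
class VirtualDisk:
    """Toy model of a VM disk with snapshot/redo-log semantics."""
    def __init__(self, blocks):
        self.base = dict(blocks)   # block number -> data
        self.redo = None           # active redo log, if any

    def snapshot(self):
        self.redo = {}             # base is now frozen; writes divert here

    def write(self, block, data):
        target = self.redo if self.redo is not None else self.base
        target[block] = data

    def backup(self):
        return dict(self.base)     # consistent copy of the frozen base

    def commit(self):
        self.base.update(self.redo)  # replay the cached changes
        self.redo = None

disk = VirtualDisk({0: "boot", 1: "data-v1"})
disk.snapshot()
disk.write(1, "data-v2")          # happens during the backup window
image = disk.backup()             # still sees the pre-snapshot state
disk.commit()
print(image[1], disk.base[1])     # data-v1 data-v2
```

Note that the backed-up image contains only the pre-snapshot state, which is exactly why such a backup is crash-consistent rather than application-consistent.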
Step 4: Back up your production environment

The most flexible use of virtualization technology in disaster recovery is to have a solid virtual production infrastructure. It is far easier to move servers from one virtual environment to another virtual environment off-site than it is to move from a physical server environment to a virtual server environment off-site.

Step 5: Restore your production environment to your hot site

Imagine a complete off-site virtual office, fully functional within moments of total data destruction, without worrying about moving data to the recovery site (often considered the biggest challenge of disaster recovery). It is no longer a dream, thanks to virtualization and an out-of-the-ordinary business model: Data as a Service (DaaS). DaaS can be seen as a pay-as-you-go model in which you copy your entire infrastructure and host it somewhere else, ideally several hundred miles away and in a different environment. The service is intelligent enough to calculate and trigger critical alarms on metrics such as power usage, consumption, uptime, ROI, user response, and profits, and it lets business managers "play" such scenarios in a virtual environment - a simple subset of your data center that you can run in your office. That way you can pre-run scenarios that pre-calculate all of this critical information and then choose among facilities such as a Google premises in Poland, an Amazon premises in Guatemala, an Avastu premises in Kampala, or a VMware premises in Dalian. The pay-as-you-go model is the foundation of a typical "business agile infrastructure."

Thus this hot site, a near replica of the entire production environment at another data center, allows you to restore lost data efficiently in the event of a disaster. Although generally more expensive, this approach has the following advantages:

• Faster recovery
• Data redundancy
• Full data exploration and exploitation
• Comprehensive testing capabilities
• No IT overhead
• Continuous service delivery by a company that specializes in disaster recovery and business continuity
• First-rate security
Step 6: Ensure your regular backups continue to run without error

Most organizations use tapes, disk arrays, VTLs, and similar media to secure their data, and regular backups will, to a large extent, ensure the survival of that data in the event of a catastrophe. Although regular backup tools usually prove insufficient on their own, they should nevertheless be used alongside enterprise-class backup tools and proper management and planning: establish a backup schedule, rotate your backup equipment, and verify that the backups run without error. Relying completely on traditional backups may not only cost you precious downtime but can also make a serious dent in your productivity and profitability; still, you will eventually be able to restore corrupted data from backup tapes or disks.

Step 7: Periodically run data disaster drills to determine your recovery times, prove the reliability of your solution, and keep your hot site 'fresh'

How much time would it take you to get your virtual environment up and running after a data disaster? Many IT managers do not have a precise answer. Once you have determined what to back up and how often, it becomes imperative to check whether the backup procedure itself is reliable, and running a disaster drill is a crucial element of any backup plan. In addition, the data residing at your hot site should be fresh, so that in case of a disaster all data up to the last checkpoint can be restored. A data disaster drill (DDD) should ideally simulate a catastrophic disaster, with reconstruction of the company's critical operations in a virtual office. Like military drills or school fire drills, a DDD should be run at least once a year to maximize effectiveness and to ensure that everyone knows his or her responsibilities. It will prepare your IT admins for effective and seamless data recovery in the event of a disaster.
Online Backups

No discussion of backups would be complete without an emerging class of backup that may solve all of your backup worries: the online backup solution. If the cost, setup, and hassle of swapping discs or shuttling hard drives around are too much of a headache for you (and they surely are), and you want a simple solution that ensures your data is backed up regularly with a few clicks, then a new class of online services can come to your rescue. Files can be copied over the Internet easily and affordably, keeping your data secure at an off-site location. There is no need for additional hardware or media; you simply take advantage of the online service's hardware. Backup software is installed and configured once and runs in the background, so you pay for only one agent.

One of the biggest advantages of backing up online is safety. Because files are stored elsewhere, they are protected even if your equipment is stolen or your office or house burns down, and most services offer encryption. The only requirement is a fast Internet connection. There are downsides, however. If your Internet connection falters, or if the online service has server problems, you could be stuck without access to your data; and by far the biggest drawback of online backup is speed. These downsides are most likely the reason that adoption of online backup solutions has taken so long to gain momentum. However, recent advances in software, combined with stronger service providers, can mitigate the downsides - just in time to solve the challenges outlined in this white paper.
Enterprise-Class Online Data Protection as a Service, 2008 Style

Let's examine a solution provided by CRC DataProtection, an Asigra Televaulting service provider. CRC is one of many Asigra Televaulting service providers; each has the option to license certain features from Asigra, and CRC has licensed all of them. For that reason, some capabilities mentioned here may not be available from other Televaulting providers - one more reason to choose your provider very carefully if you want to avoid the downsides mentioned above. To balance the length of this paper against a complete treatment of how the CRC solution addresses all of the issues identified so far, let's examine an example virtual environment consisting of four virtual machines on one virtual server running VMware. Our example server will run four Windows Server 2003 R2 virtual machines: Active Directory, Exchange, SQL Server, and a file and print server.
To begin, let's return to the two classic backup options: file-based backup and image-based backup. CRC, using Asigra, provides an excellent solution that delivers all of the benefits of both options, with very little compromise, via one service. In fact, the CRC solution can provide images at the guest OS level or at the virtual machine level, as well as file-based backup. Let's take this step by step.

Step 1: Image-Based Backup (Virtual Machine Level)

These images are created to provide for catastrophic disaster. To make these backups, it is necessary to quiesce the entire virtual servers and take clean backups. The resulting backup provides an option to rebuild the virtual servers quickly, including all settings and all installed guest operating systems. This step is intended to protect the business from a partial or total site loss, on the assumption that entire machines are lost or destroyed.

Step 2: Image-Based Backup (Guest OS Level)

These images are created to provide for the total loss or corruption of an individual virtual server. Catastrophes that would trigger restores of these images include failed OS upgrades, virus infection, compromise by hackers, and the like. To get clean images for these restores, it is necessary to stop all services on the guest machine only once; from then on, it is possible to create hot images of these servers, so that changes to settings, user accounts, system services, and so forth can be kept up to date with very little impact on users. These images also cover all application software, associated files, the registry, and service databases, so applying them brings a server from a base-level operating system directly back to a fully restored state, with all software, patches, users, permissions, and so on in place.

The Active Directory machine is a simple matter of an image-based backup, which CRC calls a Bare Metal backup. This backup can be accomplished with the Active Directory server up and running, with little to no impact on users. Depending on how often this server changes, it can be backed up continuously after the initial backup, nightly, or anywhere in between.

The Microsoft Exchange machine is likewise a simple matter of a Bare Metal backup, which can also be accomplished while the Exchange server is running and with little to no user impact. Depending on the version, it can additionally be protected to the message level, continuously, as emails are sent and received. As of this writing, Exchange 2007 cannot be protected to the message level. All versions can be restored either directly over the Information Store or to a Recovery Storage Group.
The Microsoft SQL Server machine must have SQL services stopped (only the first time) to create a clean Bare Metal backup. After the initial backup, the SQL server can be backed up 'hot' using a variety of methods, from 'pipe' to 'dump', to capture regular images of the database at various commit points. Each user database can be restored 'hot', but a restore of the master database requires a shutdown of SQL services. The file and print server also benefits from a Bare Metal backup, ensuring that all permissions, server settings, and installed printers are recoverable in the event of a disaster.

Step 3: File-Based Backup (Guest OS Level)

These backups provide for day-to-day restore needs: users who accidentally delete files, individual file corruption, and the various human errors that cause about 85% of all restore operations in most data centers. They are always created from a hot server and never require that systems or services be brought down.

In a worst-case scenario, all three image types would be needed to restore the environment. Lesser events would require recovery from steps 2 and 3; and, of course, most events would only require a restore from step 3.

Mitigating the Downsides

In the example above, we would recommend adding at least one virtual server to the virtual machine host as a backup appliance. The backup appliance hosts the Asigra DS-Client software and does all of the backup and restore work for all of the other servers (for steps 2 and 3). The appliance also has access to significant local disk storage and maintains at least one generation of all critical data, local to the data being backed up, so any restore prompted by anything less than a site loss is possible at LAN speed.

In the event of a total site loss (assuming you partnered with a service provider who will work with you), you simply call the provider, declare a disaster, and request your data on a portable system. At GigE speed it takes roughly an hour to copy a few hundred gigabytes, so each few hundred gigabytes to be restored adds about an hour of copy time; then add the time to courier (or same-day air) the appliance to your site, and finally an equal amount of time to restore. This procedure can get most people back up and running in less than 12 hours, and comfortably within 24. If you have been planning for this event, you may have installed a backup set of virtual servers and been doing periodic restores to them; on the day of the loss, you would simply initiate a refresh restore of those servers (much faster than a full restore) and be back up and running very quickly from your remote location.
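The recovery procedure above (copy to a portable system, courier it on-site, then an equal-length restore) reduces to simple arithmetic. A sketch, where the throughput and courier figures are illustrative assumptions rather than provider guarantees:

```python
def recovery_hours(data_gb: float, copy_gb_per_hour: float = 400.0,
                   courier_hours: float = 4.0) -> float:
    """Copy-out time + transit time + an equal-length restore."""
    copy = data_gb / copy_gb_per_hour
    return copy + courier_hours + copy

# 2 TB of data at an assumed ~400 GB/hour with a 4-hour courier run:
print(round(recovery_hours(2000), 1))  # 14.0 hours end to end
```

The estimate makes clear why copy time dominates for large data sets, and hence why a pre-seeded backup set of virtual servers (needing only a refresh restore) shortens recovery so dramatically.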
Note: some service providers (including CRC) offer co-location of your backup servers as an add-on service. Also note that, for this scenario to work, you will most likely end up with an additional virtual machine on your virtual server hosting CITRIX Presentation Manager, which provides web-based access to all of the other servers from anywhere your users can reach the Internet.

Next, let's look at the individual challenges of backup and see how the CRC solution addresses them:

• Server upgrades and larger data sets
  o Bare Metal backups make server upgrades a snap.
• Slow backups running into key productivity hours
  o Backup windows are drastically shortened via LAN- and disk-speed copies of production data. Using the local storage option, all copies go directly to the backup appliance, and from the appliance only the 'deltas' are transferred over Internet bandwidth to the back-end vaults.
  o With Asigra there is only ever one 'full' backup; from that point on, Asigra performs 'incremental forever' backups that send only the smallest blocks of data necessary to reproduce any generation of a protected file.
  o Further shrinking the already small deltas, the data is compressed.
  o Additionally, any files that already exist on the back-end vault are not transferred off-site (this is known as de-duplication).
  o For these reasons, an Asigra Televaulting online backup will in all cases outperform a similarly sized tape backup, and most disk-to-disk backups as well.
• Exhausted or near-exhausted backup capacity
  o There are no capacity issues for Asigra online backups with a strong service provider. Larger, stronger providers have snap-in storage that lets their solutions grow as a 'cloud' in their data centers, providing invisible and unlimited growth for clients.
• Changing backup tapes after hours and on weekends
  o With no tapes to jam, fail, expire, or fail-to-qualify-for-backup, there are no backups failing for mechanical reasons - other than a failed Internet connection. (And, as mentioned above, a failed Internet connection still results in a local copy, with an immediate off-site copy as soon as the connection comes back up.)
  o This is another area in which to scrutinize your provider, however. Concerned providers like CRC deploy Asigra in an N+1 configuration, ensuring that failed servers do not result in failed backups, along with other protective measures such as redundant Internet connections, data centers, and replicating storage.
  o The point to understand here is that your solution is only as strong as your service provider, so you must ensure that yours provides extremely reliable service.
• Failed backups and restores
  o Backups can still fail with Asigra, although not for mechanical tape reasons; Asigra failures are most often due to permissions and, beyond that, open files.
  o This again points back to the service provider: high-end providers will, during initial setup, train your administrators to identify failed backups via automatic email, pager, event log, and SNMP monitoring. Any file that causes an error must be identified, excluded, or set to back up properly (which may mean running pre- and post-backup scripts, or using an open-file agent). From the production rollout onward, your administrators should see zero errors; if one does appear (a fairly rare occurrence), they need to diagnose and resolve it. Because there are so few errors, switching to an Asigra Televaulting solution yields significant time savings.
  o Regarding failed restores, Asigra offers an Autonomic Healing option. Autonomic Healing runs 24x7x365 (on high-end providers' servers), combing through all of the data and looking for any file that is incomplete or corrupt. Upon finding such a file, the Asigra software issues a command to the client computer to make a new copy of it. For this reason, CRC guarantees that any file backed up without error will be 100% restorable.
• Inability to locate an appropriate and current backup tape
  o While always a risk with tape, this is never a problem with Asigra Televaulting. The DS-System keeps track of all files in a self-discovery mode (so even if the database of online files kept at the client is lost or destroyed, the catalog can be recovered). Simply put, any file that is backed up without error can be found and restored.
• Tape backup overruns into production time
  o While touched on above, it bears repeating that a LAN-speed copy of a disk-to-disk image that is then blocked, de-duplicated, compressed, and sent off-site will drastically reduce your backup windows.
• Slow restore speeds from tape
  o Another issue solved. Thanks to disk cataloging and direct file access, there is no need to find, mount, spool, and read a tape. Restores start immediately and proceed at LAN speed for the 85% of restore operations that can be served locally. Of the restores that require a trip to the off-site vault, most are small; only large restores that require communication with the vault (online or via courier) take significant time, and that time can be much longer than a tape restore - which is why you should strongly consider implementing the backup virtual site suggested above.
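The 'incremental forever', de-duplication, and autonomic-healing ideas above can all be illustrated with a toy content-addressed vault. This is a sketch of the concepts only - not Asigra's actual protocol or API:

```python
import hashlib

class Vault:
    """Toy content-addressed backup vault (illustrative only)."""
    def __init__(self):
        self.store = {}  # sha256 hex digest -> block content

    def backup(self, blocks):
        """'Incremental forever': transfer only blocks the vault lacks."""
        sent = 0
        for block in blocks:
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.store:   # de-duplication check
                self.store[digest] = block
                sent += 1
        return sent                        # blocks actually sent off-site

    def corrupt_digests(self):
        """Healing pass: re-verify every stored block, reporting digests
        whose content no longer matches so a fresh copy can be requested."""
        return [d for d, block in self.store.items()
                if hashlib.sha256(block).hexdigest() != d]

vault = Vault()
print(vault.backup([b"alpha", b"beta", b"gamma"]))  # 3: first full backup
print(vault.backup([b"alpha", b"beta", b"delta"]))  # 1: only the new block
print(vault.corrupt_digests())                      # []: everything verifies
```

After the initial full backup, each subsequent run sends only blocks whose content hash is new to the vault, which is why the off-site traffic shrinks to the deltas described above.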
Conclusion

• There is no one-size-fits-all backup solution.
• Independent software vendors, independent hardware vendors, and virtualization vendors must integrate and support new and existing high-availability solutions in virtual infrastructures.
• Virtualization, often accompanied by the 'all eggs in one basket' syndrome, creates a pressing need for reliable data protection.
• The bottom line: backup, restoration, and safe archiving of electronic data can no longer be a hope-it-works proposition.
• If you are considering server consolidation through virtualization, take time during the planning phase to consider how it will affect the full data cycle, including backups, restores, and archiving.
• Backup software vendors are doing their part to develop tools that meet the challenges of these new environments, such as avoiding conflicts and resource bottlenecks when several virtual servers contend for the same hardware. Companies have found that they need a multilayered approach to achieve adequate uptime and reliability.
• Going online is the solution to many of your backup and restore problems!
To learn more about how you can protect your critical systems, download the free e-book, "Data Recovery via Enterprise Online Data Protection". Get more information and sign up for a service trial at