



                Managing Data to Improve Disaster Recovery Preparedness
Posted By Industry Perspectives On July 16, 2012 @ 8:30 am In Industry Perspectives | No Comments


Joe Forgione is senior vice president of product operations and business development at SEPATON, Inc. [1] Most recently, he served as CEO of mValent, a data center applications management software company that was acquired by Oracle in 2009.

The use of tape as the primary backup medium for disaster recovery purposes long ago gave way to disk-based data protection platforms. This approach enables large organizations with massive volumes of data to minimize storage costs, reduce risk of data loss and downtime, retain data online longer, and accelerate backup/restore times.

Managing Large Volumes of Data

In today’s large enterprises, with massive data volumes to protect and multiple data centers and disaster recovery (DR) sites to manage, manual data protection is not cost-efficient and does not reduce risk sufficiently. Large organizations need to back up and move tens of terabytes (often petabytes) of data over a WAN quickly and efficiently. They also manage backup and replication policies for hundreds of backup volumes and data types to ensure data is de-duplicated, replicated, stored, and (eventually) securely erased in accordance with strict regulatory requirements.
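
To put those volumes in perspective, here is a rough back-of-the-envelope Python calculation (the link speed and data volume are illustrative assumptions, not figures from this article) showing why simply pushing raw backup data across a WAN quickly becomes impractical:

    # Rough WAN transfer-time estimate for a nightly backup set.
    # The numbers below are illustrative assumptions only.
    data_tb = 50                  # nightly backup set of 50 TB
    link_gbps = 1                 # dedicated 1 Gbps WAN link
    efficiency = 0.8              # assume ~80% usable throughput

    bytes_to_move = data_tb * 1e12                          # TB -> bytes
    usable_bytes_per_s = link_gbps * 1e9 * efficiency / 8   # bits/s -> bytes/s
    hours = bytes_to_move / usable_bytes_per_s / 3600

    print(f"{data_tb} TB over {link_gbps} Gbps takes about {hours:.0f} hours")
    # Roughly 139 hours -- far beyond any realistic backup window, which is
    # why de-duplicated, bandwidth-optimized replication matters at this scale.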

As a result, most large enterprises are moving to powerful disk-based appliances that enable them to back up data within their backup windows, store petabytes of data in a single system, and automate management of their complex data lifecycle policies.

                 Automation and Integration

One backup application vendor that has pioneered such automation and integration is Symantec, through its OpenStorage Technology (OST) plug-in for the popular NetBackup backup application. With OST, NetBackup can be integrated more closely with disk-based data protection platforms, enabling enterprises to take full advantage of the advanced capabilities of both NetBackup itself and the backup target. At the same time, enterprise data protection platforms have advanced to include innovations such as ContentAware byte-level de-duplication and replication capable of moving massive data volumes over a WAN with minimal bandwidth. These platforms also offer a high degree of automation, detailed dashboards, and support for OST’s Auto Image Replication (A.I.R.), making them an integral part of disaster recovery management for all backup data sets. One such platform can back up 43 TB per hour and de-duplicate and replicate these volumes without slowing performance.
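
The bandwidth savings come from sending only data the remote site has not already stored. The following Python sketch shows the general fingerprint-and-skip technique in its simplest form; it uses fixed-size chunks and made-up data, and is not a description of SEPATON's ContentAware implementation or of OST internals:

    import hashlib

    CHUNK_SIZE = 64 * 1024  # fixed-size chunks for simplicity; production systems
                            # typically use variable, content-defined boundaries

    def chunks(stream: bytes):
        for i in range(0, len(stream), CHUNK_SIZE):
            yield stream[i:i + CHUNK_SIZE]

    def replicate(stream: bytes, remote_index: set) -> int:
        """Return the number of bytes actually sent over the WAN."""
        sent = 0
        for chunk in chunks(stream):
            fingerprint = hashlib.sha256(chunk).hexdigest()
            if fingerprint not in remote_index:   # only new data crosses the WAN
                remote_index.add(fingerprint)
                sent += len(chunk)
            # duplicate chunks are replaced by a fingerprint reference
        return sent

    remote = set()
    full_backup = b"".join(bytes([i]) * CHUNK_SIZE for i in range(100))
    changed_tail = b"".join(bytes([200 + i]) * CHUNK_SIZE for i in range(2))
    incremental = full_backup[:98 * CHUNK_SIZE] + changed_tail

    print(replicate(full_backup, remote))   # 100 new chunks: ~6.5 MB sent
    print(replicate(incremental, remote))   # only 2 changed chunks: ~131 KB sent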

                 Together, A.I.R. and advanced enterprise data protection platforms provide the performance,




control, flexibility, and automation that enterprises need to centralize management of data protection, from data backup and replication through the expiration and secure electronic destruction of each copy.

As the name implies, A.I.R. enables you to automatically back up and replicate copies of data sets without having to manage multiple catalogs. With A.I.R., backups are driven by automated storage lifecycle policies (SLPs), enabling enterprises to consolidate data types with different storage plans onto the same enterprise data protection platform for significantly simpler management. Managers use an SLP to define all copies at once, specifying the storage device and the retention period for each copy, and then point all the backup policies that follow the same storage plan to that lifecycle.
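
The mechanics are easier to see with a small model. The Python sketch below uses hypothetical class and field names, not NetBackup's actual SLP configuration syntax, to show the idea of defining every copy and its retention once and then pointing backup policies at that lifecycle:

    from dataclasses import dataclass, field

    # Hypothetical model of a storage lifecycle policy; all names are
    # illustrative, not NetBackup's real SLP syntax or API.

    @dataclass
    class CopyStep:
        operation: str        # "backup", "duplication", or "replication"
        storage_unit: str     # target device or disk pool
        retention_days: int   # how long this copy is kept

    @dataclass
    class StorageLifecyclePolicy:
        name: str
        steps: list = field(default_factory=list)

    # Define every copy of the data once: a local backup and a replicated DR copy.
    gold_tier = StorageLifecyclePolicy("gold-tier", [
        CopyStep("backup",      storage_unit="dc1-disk-pool", retention_days=30),
        CopyStep("replication", storage_unit="dr-site-pool",  retention_days=365),
    ])

    # Backup policies that share the same storage plan simply point at the
    # lifecycle instead of carrying their own copy and retention settings.
    backup_policies = {
        "oracle-prod":   gold_tier,
        "exchange-prod": gold_tier,
    }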

                 Synthetic Backup

Another valuable feature is optimized synthetic backup, a capability that dramatically reduces the volume of data an enterprise needs to back up and replicate. SLPs may not be necessary for small and medium-sized businesses, where manual backup management is still workable and tape may even remain an acceptable medium. In large enterprises with multiple sites, multiple data centers, and massive volumes of data, however, tight integration between a robust backup application and a high-performance, disk-based data protection platform should now be considered a business continuity best practice.
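
For readers unfamiliar with the technique: a synthetic full is assembled on the backup target from the previous full plus the incrementals taken since, without re-reading the data from production clients. The Python sketch below is a generic illustration with made-up catalog entries, not any vendor's actual catalog format:

    def synthesize_full(previous_full: dict, incrementals: list) -> dict:
        """Each backup maps file path -> reference to already-stored, de-duplicated data."""
        synthetic = dict(previous_full)          # start from the last full
        for incr in incrementals:
            for path, ref in incr.items():
                if ref is None:                  # None marks a deleted file
                    synthetic.pop(path, None)
                else:
                    synthetic[path] = ref        # the newer version wins
        return synthetic

    full = {"/db/data1": "seg-001", "/db/data2": "seg-002", "/etc/app.conf": "seg-003"}
    mon  = {"/db/data1": "seg-010"}                         # data1 changed on Monday
    tue  = {"/db/data2": "seg-020", "/etc/app.conf": None}  # data2 changed, conf deleted

    print(synthesize_full(full, [mon, tue]))
    # {'/db/data1': 'seg-010', '/db/data2': 'seg-020'}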

A unified set of SLPs, combined with storage pooling and multi-tenancy capabilities in the data protection platform, is particularly beneficial to large enterprises with multiple business units and demanding recovery time and recovery point objectives (RTOs and RPOs).

                 Additional advantages of implementing a centralized, highly automated disaster recovery plan
                 include:

- The ability to leverage de-duplication and compression capabilities built into the data protection platform to minimize the size of both master and replicated backup images
- Content-aware, byte-differential de-duplication to reduce the volume of data to be backed up and replicated without slowing backup performance
- Bandwidth-optimized replication to deliver fast, cost-effective movement of data to geographically dispersed locations for disaster recovery protection
- Support for active/active, many-to-one, and one-to-many topologies to accommodate different business continuity strategies
- Extension of a centralized data protection umbrella to remote office locations more effectively and economically
- More affordable consolidation and centralization of the tape infrastructure used for archiving
- An easier way to set multiple, different retention periods in different locations for lower storage utilization and, therefore, lower costs (a simple illustration follows this list)
- The ability to minimize RTO by automating the import of catalogs to immediately restore mission-critical production applications and systems
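
To make the per-location retention point concrete, here is a minimal Python sketch with hypothetical site names and retention periods; keeping the DR copy on a shorter clock than the primary or archive copy is one straightforward way to hold down replicated capacity and cost:

    from datetime import date, timedelta

    # Illustrative per-location retention settings (hypothetical sites and periods).
    retention_days = {
        "primary-dc":   90,    # operational restores
        "dr-site":      30,    # disaster recovery copies only
        "tape-archive": 2555,  # roughly seven years for regulated data
    }

    def expiration(backup_date: date, location: str) -> date:
        return backup_date + timedelta(days=retention_days[location])

    backup_date = date(2012, 7, 16)
    for site in retention_days:
        print(site, "copy expires on", expiration(backup_date, site))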

Large enterprises should evaluate emerging solutions that can significantly reduce disaster recovery data protection costs while improving recovery times. The advantages of disk-based data protection are clear, especially when the platform is designed specifically for the ingest, de-duplication, and replication challenges of massive data volumes.

                 Industry Perspectives is a content channel at Data Center Knowledge highlighting thought
                 leadership in the data center arena. See our guidelines and submission process [2] for
                 information on participating. View previously published Industry Perspectives in our
                 Knowledge Library [3].




               Article printed from Data Center Knowledge: http://www.datacenterknowledge.com

URL to article: http://www.datacenterknowledge.com/archives/2012/07/16/managing-data-to-improve-disaster-recovery-preparedness/

URLs in this post:

[1] SEPATON, Inc: http://www.sepaton.com
[2] guidelines and submission process: http://www.datacenterknowledge.com/industry-perspectives-thought-leadership/
[3] Knowledge Library: http://www.datacenterknowledge.com/archives/category/perspectives/



                                  Copyright © 2011 Data Center Knowledge. All rights reserved.



