Mitigate costs with backup architecture strategy


Exploding volumes of data are driving the search for faster and more reliable backup, but data value and restore speed are also critical considerations. Designing a best-fit architecture requires that these elements are evaluated alongside total cost of ownership (TCO).

This storyboard will help you:

• Understand the business drivers that determine the value and sensitivity of backed-up data.
• Recognize potential trigger points for a backup architecture redesign.
• Compare the available backup architecture and target options.
• Compare the TCO of different architectures.
• Select a backup architecture based on a strong business case and best-fit analysis.
Designing a best-fit architecture will ensure that you are adequately protected without overspending. The key is to find an architecture that is just good enough.

Published in: Technology


1. Mitigate Costs & Maximize Value with a Best-Fit Backup Architecture Strategy
2. Interest in Backup Architecture (Info-Tech Research Group)

Exploding volumes of data are driving the search for faster and more reliable backup, but data value and restore speed are also critical considerations. "Tape is dead" is a hot-button phrase that is often thrown around in backup architecture discussions. While tape may not be completely obsolete yet, its role in backup processes has changed dramatically for organizations with more modern and demanding high-availability needs. The real value of the backup process is found in the speed of the restore and the availability of critical data.

This research is designed for:
• CIOs
• IT Directors
• Data Center Administrators
• Backup Administrators
• Anyone who wants to ensure data security & accessibility

This research will provide you with the tools you need to:
• Understand the business drivers that determine the value and sensitivity of backed-up data.
• Recognize potential trigger points for a backup architecture redesign.
• Compare the available backup architecture and target options.
• Compare the Total Cost of Ownership (TCO) of different architectures.
• Select a backup architecture based on a strong business case and best-fit analysis.
3. Executive Summary
4. Strategize (Section in Brief)

This section will help you:
• Align backup and business strategies.
• Make a data-value-driven architecture selection.
• Avoid common pitfalls and learn from peer experiences through case studies.
5. Form the strategy around the value of the restore, as well as the backup

Backing up data without a functional and reliable restore process has a negative value impact.
• The key business value of backup is the ability to restore critical data with acceptable downtime.
• To maximize value, acceptable downtime should be a realistic goal given the backup architecture in place. For example, don't back up large amounts of data to tape that will need to be restored in seconds. It won't happen!
• Align the backup architecture strategy with the overall storage and server strategies to avoid costly constraints and bottlenecks that result in unbalanced spending and poor load balancing.
• Some architectures inspire more confidence than others. D2D arrays and cloud replication systems lead the way, with 27% declaring "complete confidence" in their system, while traditional tape libraries and direct cloud backup lag at 14%-16%. Decide how much this confidence is worth.

"Backups are only as good as the restore they provide." – Guy Netaneli, CTO and Managing Director of Services, e-ternity Business Continuity Consultant Inc.

Don't focus so much on your backup procedures that you forget the reason they're in place: to protect your data in the event of loss.
6. Don't think of backup as a storage tier – backup adds value to all tiers

All storage tiers must be backed up and recoverable, regardless of the level of business value that they add. Backup needs to be thought of within the context of all tiers and how they add value. The physical architecture of storage typically consists of three separate tiers that house data of different value:
• Primary Tier: Highest-performance and fastest storage; typically houses mission-critical data.
• Secondary Tier: Intermediate speed and performance; can be a mix of mission-critical and non-critical data.
• Archival Tier: Slowest and cheapest storage; for data that rarely needs to be recovered and is not time-sensitive.

Info-Tech Insight: A mixed backup architecture may be the optimal solution if there is a wide variance in the criticality of data being stored across the organization. Don't store tier one data in the same place as archived data.
7. Don't confuse backup with archiving

Although backup and archiving play very different roles within storage architecture and have very different needs, they are often incorrectly used synonymously. Understand the distinction to effectively accomplish both goals.

"I don't think of backup by itself ... because your storage has to include everything from primary storage to backup to disaster recovery and redundancy." – Ted Kull, Director of Information Systems, Society for Industrial and Applied Mathematics (SIAM)

Police officers call in for "backup" to protect their immediate interests and initiatives. Data backup is no different: it protects services from a short-term failure or outage. All current data and the majority of important historical data must be backed up to ensure a smooth process.

Archives typically exist to provide occasional access to historical information. Most archived information will never be touched; it simply exists to protect against the possibility that past data needs to be recalled. Archived data can tolerate a slow restore or retrieval because by nature it isn't usually time-sensitive.

Info-Tech Insight: Even archived data should be backed up. In the case of an archiving failure or malfunction, important archived data may still need to be on hand.
8. Recognize the value of the data and its requisite downtime to understand risks

Natural breaking points exist when assessing the value of restore ability against the cost of backing up data. Balance these factors accordingly to best protect against potential business and financial risks.

Use Info-Tech's DR Recovery Objective Alignment and Cost Tool to determine data value and appropriate restore objectives, including Restore Point Objectives (RPOs) & Restore Time Objectives (RTOs). For more information on establishing a best-fit DRP, refer to Info-Tech's solution set, Right-Size Enterprise Disaster Recovery Plans.

Info-Tech Recommends: Aim to meet restore objectives, not exceed them. Using costly high-accessibility storage and backup architectures for infrequently accessed and/or less valuable data is a waste of money.
9. Case Study 1: Sluggish restore of mission-critical data drove a technology services company to undertake a backup architecture redesign

The Situation
• An Internet-based technology services company faced two data restore failures from tape where recovery took over a month, even though RTO/RPO objectives called for immediate restore capabilities.
• The company's top priority became an architecture redesign wherein tapes would be replaced as the primary backup target. Because it is a private firm with no regulations to comply with, long-term archiving was not a consideration.

The Result
• Tapes have moved to a lower storage tier and will eventually be phased out entirely in favor of "a pure disc space."
• Mission-criticality of data now determines architectural tiering to ensure RPOs and RTOs are met.

The Takeaway
• Don't go into an architecture redesign with preconceived ideas about the best-fit solution. If you can't be objective, assign the project to someone who can be.

"We actually had a couple of data restore incidents where we had to get data restored beyond the 30-day window." – Vice President
10. Ensure mission-critical data is secure & available when it's needed

• Organizations with large amounts of mission-critical data need faster-performing and more reliable backup, even if it comes at a higher cost.
• Since mission-critical data is high-value, retrievability is paramount – and often on a tight schedule. To determine the appropriate backup technology and tier, consider the organization's restore time objectives (RTOs) and restore point objectives (RPOs).
• Be sure to keep strategic goals (e.g. target service levels) in mind when determining RTOs and RPOs. This will ensure alignment between architecture and strategic goals.

Those implementing a new backup architecture solution have higher ratios of mission-critical data than those who have already implemented, which indicates that a high level of mission-critical data is an important factor in backup upgrade decisions.

Don't overestimate the amount of mission-critical data that you plan to back up. Storing less critical data on upper-tier storage has significant cost implications and can add negative value. Only designate data that cannot be down for any amount of time as mission critical.
11. Evaluate the current balance of architectural tiers to ensure an appropriate fit with data value & sensitivity

Storing all data in tier one storage, and backing up accordingly, is untenable. The criticality of data varies considerably – alongside the best-fit storage and backup. To determine best-fit storage and backup:
• Evaluate the recovery time sensitivity of each set of data to properly triage it. Then determine where each set should be stored and how it should be backed up.
• Measure the size of each data set being backed up and factor in the time required for a full restore.
• Calculate the business risk of downtime on each set or subset of data.
• Analyze the opportunity costs of having each set of data down for a time and the resulting impact on business growth and partnerships.

Data that has a large impact on any of the above factors may be considered mission critical and should be backed up on the most reliable and highest-performing backup storage available. The percentage of mission-critical data to non-critical data will have a large impact on the overall backup architecture strategy.
12. Develop RPOs & RTOs that are driven by critical capabilities

The variance in allowable downtime can range from less than 15 minutes to more than 48 hours. The role of business capabilities within the overall business strategy should dictate RPOs and RTOs, regardless of what technology is being considered. Critical capabilities can have both technical and service-level implications for RPOs and RTOs. For example:
• Media-heavy data (video, images, audio) can be very large and move through a bandwidth-constrained Internet connection very slowly.
• Emergency or medical data takes up less space but has significant time and security sensitivity. A near-instant restore can be critical to business emergency functionality in life-threatening situations.
• Stock trading data may need to handle large volumes over short trading sprints and is extremely sensitive to outages. A few seconds of downtime means millions of dollars in lost potential trading revenue.

The financial loss from downtime (including opportunity costs) must be balanced properly with the cost of backup and restore in order to avoid business-crippling damages from unexpected outages.
13. Consider architectures that restore within targeted RPO/RTO to avoid unacceptable downtime

Some important external factors often play a large role in determining acceptable downtime, the method of storing and backing up data, and the length of time for keeping records on file. Each driver favors different architecture features (low cost, speed and performance, reliability, durability, high capacity):
1. Regulation: Many industries have regulated standards related to the period of acceptable downtime, history of records (which can be up to 20 years), the time interval of backups, and the radius of the offsite replication location. Frequent and thorough backups are essential, as well as a reliable, corruption-free restore.
2. High Availability: An interruption in service is often detrimental to business needs, and having a speedy restore continuously available is extremely valuable, even if it's expensive.
3. Footprint and Efficiency: Data centers with Green IT initiatives will be attracted to architectures that use less electricity, require less cooling, and take up less space in the data center. Durability and capacity are important to avoid redundant, unusable, and wasted storage.
14. Compare (Section in Brief)

This section will help you:
• Establish restore requirements.
• Evaluate backup target options.
• Compare TCO across architectures.
• Avoid common pitfalls and learn from peer experiences through case studies.
15. Balance RPOs/RTOs and TCO when determining restore requirements

If restore objectives and total cost of ownership are unbalanced, you are either spending too little or too much. Be sure to do a proper cost/benefit analysis of all TCO aspects to fit within a proper RPO and RTO at a non-detrimental cost.
• Under-protection of data is risky because data can be lost, threatening business continuity. This can often result in not only short-term revenue loss but long-term brand and reputation damage.
• Over-protection of data is costly because low- to no-value data eats up valuable resources that could be directed at value-add initiatives.
• Best-fit protection of data is ideal because it safeguards valuable data at the lowest possible cost. It's just good enough.
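The under-/over-/best-fit protection trade-off above can be sketched as a simple expected-cost calculation. This is an illustrative model, not an Info-Tech tool; the architecture names, dollar figures, and downtime estimates are hypothetical placeholders.

```python
# Illustrative sketch: balance annual protection spend against expected
# downtime loss to find the "just good enough" architecture.
# All figures are hypothetical placeholders.

ARCHITECTURES = {
    # name: (annual protection cost $, expected downtime hours/year)
    "tape library":      (20_000, 12.0),
    "disc array (D2D)":  (45_000, 2.0),
    "cloud replication": (90_000, 0.25),
}

def total_annual_cost(protection_cost, downtime_hours, loss_per_hour):
    """Expected total cost = what you spend + what outages still cost you."""
    return protection_cost + downtime_hours * loss_per_hour

def best_fit(loss_per_hour):
    """Pick the architecture with the lowest expected total annual cost."""
    return min(
        ARCHITECTURES,
        key=lambda name: total_annual_cost(*ARCHITECTURES[name], loss_per_hour),
    )

# A shop losing $1,000/hour of downtime lands on a different answer
# than one losing $50,000/hour:
print(best_fit(1_000))   # cheap downtime -> cheaper protection wins
print(best_fit(50_000))  # costly downtime -> justifies replication spend
```

The point of the sketch: neither under-spending nor over-spending minimizes total cost; the best-fit answer shifts with the business value of uptime.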
16. Desire for faster, more reliable backup creation & improved restore performance are driving new architecture adoption

Those investigating or implementing a new backup medium value speed above all else. This, combined with ever-escalating demands for data availability, is what has prompted some to declare the death of tape as a primary backup solution. Especially for larger storage silos, a tape backup can take hours, and a restore from tape can take days depending on where the tape library is housed. In fact, it isn't uncommon for an organization with tape as the primary backup to have to wait for the previous night's backup to finish before staff can do a restore! The need for speed may be the reason for tape adoption's decline. (N=121)

"Tape recoveries take a minimum of 4 hours just to get the tapes back onsite. Then we wait for operations to find space in an already busy library to mount the tapes. Then operations call the backup team so they can start the restore. Then the clock is still ticking while we rip through dozens of tapes." – Jim Adams, Sr. Manager, Backup and Recovery at Apollo Group (Data Storage Professionals)

The speed of light is faster than a truck: another significant factor driving adoption is offsite replication. Tapes must be transported offsite physically, while any disc-based backup can be sent online. Physical transportation creates more management complexities and takes more time.
17. Plan for the future: create an upgrade plan that fits into long-term data growth strategies

Backup target types vary significantly in terms of cost and data-criticality appropriateness. Natural migration paths occur as IT organizations aspire to increase the capacity and performance of their backup along two typical paths: cost-sensitive organizations may first move to disk arrays before implementing cloud replication, while organizations more sensitive to management complexities may choose to migrate to cloud backup first.
• Magnetic tape backup is the traditional industry standard and where many organizations still are today. In many cases, the lower cost of tape and a previous investment in the architecture make it an easy continued choice for price-sensitive and restore-time-tolerant organizations.
• Disc arrays (including Virtual Tape Libraries) are currently the most common upgrade from tape. Costs can remain relatively low with advantages like deduplication, and the media is much more reliable for restores. On the downside, the hard drives must constantly be spinning even for data that is rarely accessed, and management can be more complex and expensive than tape.
• Cloud storage as a primary target is not being heavily adopted yet, mostly due to the increased bandwidth required to back up straight to the cloud. With virtually no upfront costs, however, it may be appropriate for small shops with smaller, infrequently changing databases.
• Cloud replication is the optimal solution for organizations that have a lot of critical data and a lot to spend on a double infrastructure.

The natural migration paths run from magnetic tape to disc arrays or cloud as primary, and on to cloud replication, as cost and data criticality increase.
18. Backup architecture is not one-size-fits-all; use these scenarios to pinpoint the right architecture strategy
19. Tape-based backup is cheap, but it incurs physical complexities, and media corruption is a concern

"Tapes, especially those that have been overwritten several times, have a less than 60% chance of full recoverability." – Guy Netaneli, CTO and Managing Director of Services, e-ternity Business Continuity Consultant Inc.

"Tape remains a form of media that has its applications. There is no way you will hear me saying that 'tape is dead.' Even today over 60% of enterprise and data center primary and secondary data continues to be written to tape." – Mike DiMeglio, Product Marketing Manager, FalconStor

All the manual actions that tape requires have real business costs and demand human labor. This is often one of the most costly expenses and the most difficult to scale upward, and it introduces the possibility of human error. These actions include:
• Insertion and removal of tape cartridges
• Rotation of tape cartridges
• Indexing of tapes and drives
• Physical offsite relocation
• Physical offsite recovery

Tape also possesses some inherent physical attributes that make it more susceptible to magnetic fields, moisture, humidity, temperature, and electric currents.
20. Use tapes for cost-effective backup of data that is not mission critical

Speed is the top priority of tape owners looking into new platforms.
21. Virtual tape libraries enable faster and more reliable disc-based backup while appearing to be tape

Virtual Tape Libraries (VTLs) virtualize a disc array to appear as tape, which enables organizations currently using tape to utilize their existing infrastructure, software, and tools while dramatically increasing the performance of the backup and recovery processes. Most infrastructure remains the same before and after a move from tape to a VTL; the tape library is replaced by a virtual tape software interface in front of a disc array.

Info-Tech Insight: Don't use VTLs as a permanent solution, only as a temporary transition from tape to disc. In the long run, the slight inefficiencies of the extra virtualized layer will become significant.
22. Make an easy switch from tape by opting for a VTL for increased restore performance

Data integrity and restore speed are the two biggest issues that drive organizations from tape libraries to VTLs. Many tape users have complete confidence in their backup architecture in terms of capacity and backup speed.
23. Advanced compression features like deduplication dramatically lower the TCO of D2D, increasing its viability

Deduplication 101: Deduplication can play a significant role in making the more costly switch to disk arrays less painful. It reduces the required space by eliminating duplicate data at the block level and increasing the references to those blocks. Depending on the repetition level in the data sets being stored, deduplication can produce anywhere from a 2:1 to a 50:1 convergence ratio in the backup storage required.

Claims of Heavy Convergence: Some professionals in the industry have claimed as much as a 50:1 ratio in convergence and space-saving capabilities in backup storage, although this varies widely depending on the application.

Desktop Virtualization, an Optimal Case: One application where deduplication is most effective is the storing and backing up of virtual machines, particularly virtual desktops. Instead of storing the OS 50 times for 50 virtual machines, it can store the common files only once.

Example – deduplication of 16 virtual desktops:
• Before: OS file storage 12GB x 16 VMs = 192GB; unique file storage 2GB x 16 VMs = 32GB; total = 224GB.
• After: OS file storage 12GB x 1 = 12GB; unique file storage 2GB x 16 VMs = 32GB; total = 44GB.
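The 16-desktop example above can be checked with a few lines of arithmetic. The sizes come straight from the slide; the convergence-ratio calculation is the only addition.

```python
# Worked version of the slide's 16-virtual-desktop example: block-level
# deduplication stores the common OS image once, while per-user unique
# files are still kept for every VM.

OS_GB, UNIQUE_GB, VMS = 12, 2, 16

before = (OS_GB + UNIQUE_GB) * VMS    # every VM stored in full
after = OS_GB * 1 + UNIQUE_GB * VMS   # one OS copy + all unique data

print(before)                    # 224 GB
print(after)                     # 44 GB
print(round(before / after, 1))  # ~5.1:1 convergence ratio
```

Note that this workload lands near the low end of the 2:1 to 50:1 range cited above; the ratio climbs as the shared (OS) portion dominates the unique portion.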
24. Disc-based backup is gaining ground and adds increased comfort & short-term restore reliability improvements

Disc arrays tend to have slightly higher levels of tier one data, driving the need for the increased backup and restore performance that comes with disc. Among tape users, data splits roughly 39% tier one, 32% tier two, and 29% archive; among disc users, 46% tier one, 27% tier two, and 27% archive.
25. Case Study 2: SIAM turned to piecemeal implementation to spread out the costs of a backup architecture overhaul

The Situation
• The Society for Industrial and Applied Mathematics (SIAM) faced constant recovery issues with its primary tape backup, but its budget wouldn't allow for an all-in architecture overhaul.
• The society knew the architecture needed to support its RTO/RPO objectives, so it opted for piecemeal implementation in lieu of a band-aid solution that would have left its data under-protected.

The Result
• Tape has been replaced with removable disk as the primary backup media (D2D), which has improved confidence in data recoverability and made the process faster – even faster than predicted.
• Automated offsite replication is being implemented in 2011 (D2O2D) to ensure adequate data protection and accessibility.

The Takeaway
• Don't design a backup architecture around what the current budget can bear – determine your best-fit architecture using restore objectives and then develop a long-term plan to pay for it. You can't afford not to!

"Think big, but start small." – Ted Kull, Director of IS, SIAM
26. Outsource backup directly to the cloud for smaller data sets to lower capital costs, but beware of the risks

"As for Cloud, to me that's mostly hype. Sure, there's something to be said for it, but in all the stories I read (and hear) people always seem to gloss over the minor issues of security and accounting. And performance, of course. You always hear of the miracles that will come your way with Cloud, but I've seen projects trying to get Cloud up in the air. And I've seen grownups about to burst into tears." – Willem Vermeer, IT Specialist at ING Personeel VOF (DSP)

Although it appears that data is just being shipped off to a theoretical "cloud," there is a lot going on behind the scenes that adds value to the availability of the backup: deduplication, thin provisioning, local redundancy, and offsite replication of the primary backup.
• Pure cloud not ready for most: Cloud direct backup is unrealistic for backups of more than a terabyte or two, especially if changes are frequent. Bandwidth would be constantly congested, and there is a high risk of bandwidth overage charges and a crippling of other Internet-reliant core business functions.
• Ideal case: There are ideal cases for cloud backup. A small business office that needs to back up a few workstations would likely find higher value and lower overall TCO through cloud direct backup, because it would obtain more value-adds with almost no upfront costs.
• Don't underestimate bandwidth: Small shops moving to cloud backup usually underestimate their anticipated bandwidth usage by a significant margin. For a more accurate estimate, calculate not only the size of your data sets, but also the frequency of changes. E-mail servers are often the most overlooked while being the most backup-bandwidth heavy, because they look small but change often.
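The "size times change frequency" estimate recommended above can be sketched as a back-of-envelope calculator. The data-set names and churn rates below are hypothetical, not vendor figures; the point is that a small but busy e-mail server can dominate the daily upload.

```python
# Rough daily-upload estimator for cloud direct backup (a back-of-envelope
# sketch, not a vendor formula). Sizes and churn rates are hypothetical.

DATA_SETS = {
    # name: (size in GB, fraction changed per day)
    "file shares":   (500, 0.01),  # big but mostly static
    "databases":     (120, 0.05),
    "e-mail server": (60,  0.40),  # small but churns constantly
}

def daily_upload_gb(data_sets):
    """Incremental upload per day = size x daily change rate, per data set."""
    return {name: size * churn for name, (size, churn) in data_sets.items()}

uploads = daily_upload_gb(DATA_SETS)
for name, gb in uploads.items():
    print(f"{name}: {gb:.0f} GB/day")
# The e-mail server is the smallest data set here, yet it accounts for
# roughly two-thirds of the total daily bandwidth (24 of 35 GB).
```

Running a calculation like this before signing a cloud backup contract makes bandwidth overage charges far less likely to surprise you.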
27. Direct cloud backup eases the pain of upfront capital costs & increases flexibility, but puts valuable data in third-party hands

Speed of backup and low upfront costs are the most important factors for those who use cloud direct backup.
28. Case Study 3: E-mail server drives up bandwidth usage after North Waterloo Farmers Mutual moves to cloud backup

The Situation
• North Waterloo Farmers Mutual moved from 100% tape backup to 10% tape & 90% online because of the lack of restore reliability of tape.
• They chose online backup because of the company's relatively small backup size, but they did not adequately factor in the bandwidth that their constantly changing e-mail server would consume.

The Result
• The storage cost was higher than projected because the bandwidth allowance had to be raised to meet the needs of the e-mail server. To mitigate these costs, the company is paying attention to attachment use and the backup frequency from the mail server.
• Management has far greater confidence in restorability.

The Takeaways
• Be mindful when calculating bandwidth. Think of the systems that register many changes throughout the day, such as e-mail servers. These will devour bandwidth and drive up TCO.
• Be wary of vendor-provided average-use statistics, because your organization's usage may vary markedly from that of the vendor's average user, even if it appears you have similar characteristics. You don't want to pay for unnecessary bandwidth or pay a fortune in overages.
• If you do accept vendor stats, make sure you can switch usage plans without penalty for up to six months. This will give you time to figure out your average usage and the best-fit plan offered by the vendor.

"We've been able to tweak how frequently we send offsite backups to help manage our bandwidth usage; however, you definitely need to be prepared for a substantial increase in your bandwidth usage when you move to online backups." – Sharon Winkler, Network Administrator, North Waterloo Farmers Mutual
29. Employ online/cloud local replication to ensure timely recoverability

The extra backup step causes replication users to be less confident in their backup speed, but the local replication inspires much higher confidence in the restore.
30. Use Info-Tech's Backup Architecture Appropriateness Assessment Tool to find your best-fit architecture design

Many current and future situational factors can influence the decision to move to tapes, discs, or a cloud service provider as a primary backup source. Info-Tech's Backup Architecture Appropriateness Assessment Tool evaluates factors such as:
• The restore time and restore point objectives of the business and IT.
• The amount of data being backed up.
• The ratio of mission-critical data to non-mission-critical data being backed up.
• The management/oversight capabilities and assets available for the backup architecture.
• The bandwidth costs, availability, and potential bottlenecks of the current infrastructure and the proposed solution.
• Other important factors related to a backup architecture decision.

The answers to these questions will produce a result from one of the four quadrants of potential backup target and architecture types.
31. Assess the lowest TCO while meeting capacity needs & restore requirements

Info-Tech's Backup Architecture Acquisition TCO Comparison Tool compares total cost factors, including energy consumption, maintenance, and licensing.
1. Compare apples to apples: When comparing backup options, ensure that the same features are supported.
2. Compare TCO, not list prices: Some backup options that have a higher upfront cost make up for it with lower incremental and maintenance costs. Comparing list prices without an eye to ongoing costs could be an expensive mistake.
3. Use the comparative TCO calculator: With the number of factors that influence the appropriateness of a backup architecture design, determining a best-fit solution is difficult and often time-consuming. Use the TCO calculator to reach the right decision faster.
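The "TCO, not list prices" point can be made concrete with a minimal comparison over a refresh cycle. This is an illustrative sketch with hypothetical figures, not Info-Tech's calculator.

```python
# Minimal TCO comparison over a planning horizon (illustrative numbers):
# a higher sticker price can still win once energy, maintenance, and
# licensing are rolled in.

def tco(upfront, annual_running, years):
    """Total cost of ownership over the planning horizon."""
    return upfront + annual_running * years

options = {
    # name: (upfront $, annual energy + maintenance + licensing $)
    "tape library": (30_000, 18_000),  # cheap media, labor-heavy operations
    "disc array":   (70_000, 8_000),   # pricier upfront, cheaper to run
}

YEARS = 5  # upper end of a typical 3-5 year refresh cycle
for name, (upfront, annual) in options.items():
    print(name, tco(upfront, annual, YEARS))
# Over 5 years the disc array (110,000) undercuts the tape library
# (120,000), even though its list price is more than double.
```

Comparing only the 30,000 vs. 70,000 list prices would have pointed at the wrong option for this (hypothetical) shop.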
  32. 32. Strategize <ul><li>This section will help you: </li></ul><ul><li>Make a strategic business case for a backup architecture acquisition. </li></ul><ul><li>Proceed to the vendor evaluation and product selection. </li></ul><ul><li>Avoid common pitfalls and learn from peer experiences through case studies. </li></ul>Next Section in Brief
  33. 33. Make a solid strategic case for backup architecture acquisition <ul><li>How to frame the business case: </li></ul><ul><li>Acquisition is often triggered by backup or restore issues. If backup causes unacceptable LAN or WAN bandwidth issues for users, or if restore tests have proven inconsistent or unsuccessful, the case for acquisition is stronger. </li></ul><ul><li>The need for acquisition itself usually requires less defense than the size, cost, and target and architecture type of the acquisition. </li></ul><ul><li>Protection of mission-critical data justifies the cost of acquisition. </li></ul>Use Info-Tech’s Backup Architecture Acquisition Business Plan to justify your recommended acquisition.
  34. 34. Plan backup acquisition not only for today’s needs, but for projected needs of 2016 <ul><li>Most organizations run a three-to-five-year backup architecture refresh cycle, so it is essential that a chosen solution meets future backup and restore needs. </li></ul><ul><li>Consider variable aspects such as: </li></ul><ul><li>Anticipated Storage Growth – Storage capacity has historically doubled roughly every two years, a Moore’s-law-like pace that isn’t expected to slow until well beyond 2015. When technology improves, business pushes it to the limit. Plan your backup acquisition to handle this rate of growth. </li></ul><ul><li>Primary Storage Refresh Cycle – Backup strategies need to be tightly integrated with primary storage strategies. Factor in any anticipated SAN or NAS upgrades when choosing a solution to ensure continued compatibility. </li></ul><ul><li>Primary Server Refresh Cycle – Server processing and memory follow a similar growth curve, and the backup process could become a bottleneck if not planned around increasing processing capacity. </li></ul><ul><li>Changing RPO/RTO Objectives – In agile business environments, RPOs and RTOs can change quickly along with business strategies. </li></ul>Info-Tech Predicts: Due to ever-increasing demands on data availability, tape will be nearly extinct in the primary backup role within five years. Tape will likely live on in archiving and “store forever” roles.
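The doubling-every-two-years growth assumption above translates directly into a capacity projection. This is a minimal sketch under that stated assumption; the starting size and time horizon are illustrative, not drawn from the research.

```python
# Capacity projection assuming data doubles every two years,
# per the growth assumption in the slide. Inputs are illustrative.
def projected_capacity(current_tb: float, years: int,
                       doubling_period: float = 2.0) -> float:
    """Capacity needed after `years`, given periodic doubling."""
    return current_tb * 2 ** (years / doubling_period)

# A hypothetical 10 TB estate over a five-year refresh cycle:
print(round(projected_capacity(10, 5), 1))  # 56.6
```

The compounding is the point: a solution sized for today's 10 TB is sized for less than a fifth of what the same assumption predicts at the end of a five-year cycle.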
  35. 35. Case Study 4: Hodges University upgrades its backup architecture to end data loss & bandwidth encroachment <ul><li>The Situation </li></ul><ul><li>Hodges University was frustrated with its tape backup. There was too little storage and too many delays because of the timing of the backup – it ran from evening until the following mid-day. At times, staff had to wait for the backup to finish before they could initiate a restore. </li></ul><ul><li>Archiving was also limited because a finite number of tapes were periodically overwritten, so data was only recoverable for up to six months. </li></ul><ul><li>A massive data loss resulting from tape mismanagement gave urgency to the backup redesign initiative. </li></ul><ul><li>The Result </li></ul><ul><li>More storage space was added to the SAN, and virtual servers have been employed. </li></ul><ul><li>Accessibility and recoverability have improved. </li></ul><ul><li>Confidence in the architecture has risen. </li></ul><ul><li>The Takeaway </li></ul><ul><li>If you employ tape, make sure the people in charge of backing up to it are reliable and that stringent processes are in place. Do intermittent checks to ensure backups are being done properly and are restorable. </li></ul><ul><ul><ul><ul><li>“We did an upgrade and pretty much got wiped out and we didn’t have a backup to go to. We had to bring 100,000 images back through the software and back through the workflow.” </li></ul></ul></ul></ul><ul><ul><ul><ul><li>- Wendy Gehring, IT Director, Hodges University </li></ul></ul></ul></ul>
  36. 36. Watch out for these “gotchas” & pitfalls
  37. 37. Proceed to vendor evaluation & product selection <ul><li>When considering a backup architecture vendor, the compatibility and functionality of the backup software must also be considered. Our upcoming backup hardware and software select sets will look at: </li></ul><ul><li>Automation capabilities for tape vendors, and the extent to which the tape swap and indexing processes can be automated. </li></ul><ul><li>Continuous data protection: the capability of products to recognize data changes and speedily and efficiently apply those changes to the backup. </li></ul><ul><li>Restore value-add features such as search and restore, time-line restore, and multiple-location data retrieval. </li></ul><ul><li>Maintenance and support value-adds: what kind of support is available, and how much does it cost? </li></ul><ul><li>Power efficiency and green measures: how do various solutions rank in their power and cooling requirements and energy-saving features? </li></ul><ul><li>Cost! Cost! Cost! Who has the lowest initial costs and the lowest TCO? </li></ul>
  38. 38. Carefully compare the usual suspects of the backup vendor landscape <ul><li>Tape Vendors </li></ul><ul><li>VTLs </li></ul><ul><li>Disk Arrays </li></ul><ul><li>Online Vendors </li></ul>There tends to be overlap and movement between vendors offering tape solutions and VTL solutions, as well as disk array solutions and VTL solutions. By contrast, cloud storage vendors tend to stand apart from the others due to a large difference in the core competencies required. Vendors in this landscape include: Data Domain, IBM, Sun Microsystems, EMC, FalconStor, NetApp, Overland Storage, Quantum, Tandberg Data, Spectralogic, Oracle, Barracuda, Egnyte, Trend Micro, 3X Systems, Compellent, Exagrid, HP, and Dell. Info-Tech Predicts: The mainstream tape market’s overarching strategy has been to move to VTLs to support waning sales. As tape becomes less popular, these same vendors will compete in the disk array market.
  39. 39. Conclusions <ul><li>Put more stock in the value of the restore than the backup when evaluating backup architecture decisions. </li></ul><ul><li>Don’t think of backup as a storage tier, but rather as something that adds value to all storage tiers. </li></ul><ul><li>Don’t confuse backup with archiving. </li></ul><ul><li>Tactical backup architecture acquisitions should be made in a strategic context. </li></ul><ul><li>Ensure mission-critical data is secure and available when it’s needed. </li></ul><ul><li>Properly balance the RPOs/RTOs and the TCO of candidate backup solutions. </li></ul><ul><li>Plan for the future. Acquire backup solutions that fit into the long-term strategy of the business. </li></ul><ul><li>Backup architecture is not one-size-fits-all. Choose a target that will meet the data and time needs of core business competencies. </li></ul><ul><li>Follow and plan for a natural backup upgrade progression suited to the cost and data sensitivity of the organization. </li></ul><ul><li>Learn from peers that have gone through a backup architecture decision, and from their common pitfalls. </li></ul><ul><li>Use Info-Tech’s Backup Architecture Acquisition Business Plan template to build out a cost justification for your backup architecture purchase. </li></ul><ul><li>Use Info-Tech’s Selection Notes on Backup Software to evaluate product differentiation among backup vendors. For example, refer to “Backup Software Seeks to Be ‘Restore Central.’” </li></ul>
  40. 40. Appendix I: Server Acquisition Survey Demographics Info-Tech Research Group Responses by Industry Responses by Number of Employees Which best describes your current server acquisition situation? How many virtual and physical servers do you maintain?