Take Control of Your Data
A guide to understanding storage technologies

Solution areas: virtualization; backup and data protection; Exchange/SQL; file shares

Establishing and growing a business in today's online and connected world is dramatically different than it was 10 years ago. The way customers approach your company is very different. The way you interact within your organization has changed. The level of interaction and the speed with which your company is evolving is likely many times faster than it would have been just a decade ago. At the heart of this change is modern IT, and at the heart of IT is data: lots of data. That's why, when it comes to IT, the challenges you're likely to face aren't so much defined by the size of your business, the number of people you employ, or even your geographic location. Instead, those challenges are more likely linked to your ability to take control of your data, particularly as your business grows.

Master your storage needs with HP Simply StoreIT

The level of interaction and the "connected" nature of IT today have created overwhelming amounts of information. IT tools such as hypervisors have sprung up, but as the tools that extend your IT possibilities have evolved, they have also complicated the process of aligning your storage with your business needs. HP understands servers, business applications, networking, and storage, and we have successfully applied our knowledge to help companies from the smallest family-owned business to the world's largest enterprise. We have been around since the inception of information technology, and we continue to help create a new style of IT that gives small- to medium-sized businesses (SMBs) like yours the foundation needed to thrive. We are the leader in virtualization, application storage, and data protection for small and midsize businesses.
We have pioneered revolutionary technological advancements in areas such as converged infrastructure and converged storage that have fundamentally changed the way companies deal with the rate of change we see in IT today. Through our Simply StoreIT program, we are focused on keeping you ahead of the curve. It does not matter where you stand in your current business or IT journey; we want to work with you to make sure that your choices today leverage the best technology available for your budget and keep you prepared for what's next.

HP Simply StoreIT: stress-free storage is here.
• Simple to manage
• Affordable to own
• Reliable to operate
Storage technologies at a glance

Like the Simply StoreIT program itself, this guide was created to help you make sense of modern storage technologies and architectures and how they might impact your business. Read it cover to cover, or refer to it when researching specific storage solutions.

Online (or primary) storage

Storage that is regularly accessed by applications and servers is often referred to as online storage. Most applications have requirements surrounding latency, the time between when an application requests access to data and when that data becomes available. If access to a specific piece of data is too slow, an error will occur. Many years ago, data was typically housed close to, or attached directly to, the server running the application in order to reduce latency. With modern storage area networks (SANs) and virtualization techniques, shared or pooled storage has become much more common, but latency is still one of the primary attributes by which different types of storage are categorized and measured. Access times in the milliseconds are common, but these times are stretching as more networking layers and longer distances become involved. Most experts would agree that any access time or latency over a second (1,000 milliseconds) would not qualify as "online."

Online storage and data access types

In the past, online storage was usually placed in one of two categories, file or block, depending on the type of data and the access method used. Most office productivity applications retrieve data as files (.doc, .pdf, .gif, .mp3, and other formats) through a file system protocol. This is called file access. Operating systems provide a layer of administrative procedures and access protocols between users and the files stored on computers and servers. Files can vary in size and format, which sometimes leads to file data being labeled as unstructured data.
File data is stored in a file system and is organized hierarchically in a series of folders and sub-folders. Each file also has associated metadata (data about data), such as who created it and when it was last modified.

A second storage type, commonly called block storage, is also widespread in modern computing environments. Block storage is most commonly associated with databases and structured data, and it is the primary method by which many applications access and share physical storage behind the scenes. Hypervisors, email servers, MRP applications, accounting applications, and the like directly access blocks of data without the overhead of a file system. This is known as block access.

With the rise of Web- and cloud-based applications, a third storage type has emerged, known as object storage. An object is often referred to as a file, but unlike traditional file systems and NAS, there is no concept of pathnames and directories. Objects are given unique ID numbers that are managed in an index or database, which reduces the complexity of metadata management and search. The protocols used to access object storage have evolved from Web-based programming methods that tolerate high latency, allowing users and applications to access data from nearly anywhere on the planet.
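The flat, ID-based nature of object access can be sketched in a few lines of Python. The class below is a toy illustration for this guide, not any vendor's API: objects are addressed by a generated ID rather than a pathname, and metadata lives in a flat index rather than a directory tree.

```python
import uuid

class ToyObjectStore:
    """Minimal illustration of object-style access: flat namespace,
    generated IDs instead of pathnames, metadata kept in an index."""
    def __init__(self):
        self._objects = {}   # object ID -> raw bytes
        self._index = {}     # object ID -> metadata dict

    def put(self, data: bytes, **metadata) -> str:
        obj_id = str(uuid.uuid4())          # unique ID, no directory path
        self._objects[obj_id] = data
        self._index[obj_id] = metadata
        return obj_id

    def get(self, obj_id: str) -> bytes:
        return self._objects[obj_id]

    def search(self, **criteria):
        """Find object IDs whose metadata matches all given criteria."""
        return [oid for oid, md in self._index.items()
                if all(md.get(k) == v for k, v in criteria.items())]

store = ToyObjectStore()
oid = store.put(b"quarterly report", owner="finance", year=2013)
print(store.get(oid))             # retrieved by ID, not by path
print(store.search(owner="finance") == [oid])
```

Notice there is no mkdir, no folder hierarchy, and lookup is by metadata query against the index, which is what makes this access style tolerant of high-latency, Web-style protocols.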
Storage architectures

In addition to these three types of online storage, there are four common storage architectures you should be familiar with. These architectures are sometimes confused with the specific protocols that various subsystems use to connect to the computing infrastructure. We'll go into more detail later about each of these architectures, but for now, here are some simple definitions:

Direct-attached storage (DAS): the simplest type of data storage subsystem, located in or attached directly to a server. Data stored on a DAS subsystem can be accessed only by the server it's attached to.

Network-attached storage (NAS): a dedicated file server attached to a local area network, running an operating system dedicated specifically to serving files to its users.

Storage area network (SAN): a dedicated network for storage traffic between servers and one or more shared storage devices. Most SANs are specifically designed to provide high-speed access to storage with low latency. Sharing storage in a single pool across multiple servers is what enables application clusters for high availability: if one server goes down, the data is still available for processing by another server running the same application.

Hybrid NAS/SAN architectures: hybrid systems that can operate as NAS devices, serving files to users, and can also provide direct connectivity to block-level storage via a SAN. These are sometimes called unified devices. Hybrid subsystems must process both NAS traffic and high-speed SAN traffic between the two networks (LAN/SAN) and their raw disk drives.
Storage networking options

Modern computer networks have dramatically changed the world of computing, allowing computers to be connected together, users and applications to share data, and data to be stored in locations other than the captive hard drives of individual servers. The evolution of switch-based networking technologies and the TCP/IP standard have forever changed modern computing by virtually eliminating the distance between users, computers, and the data we all share and interact with on a daily basis.

Once LANs became prevalent and helped us all get connected, contention for bandwidth between the many users on a given network and server-to-server traffic became more common. Dedicated high-speed networks began to spring up to facilitate large data transfers and to provide fast transfer rates and low latency as servers accessed data without having to contend for bandwidth with users or other devices. These newer networks, SANs, were initially dedicated to storage devices, both disk and tape. So how are they typically connected?

Common SAN protocols

Fibre Channel (FC): The low-latency Fibre Channel protocol is designed specifically for larger, more demanding environments and storage networks, and provides a high level of performance and reliability between servers and storage devices. Although its name implies the use of fiber-optic cables, the protocol can run over both fiber and copper links. Over time, FC has evolved from speeds of 1 gigabit per second (Gb/s) to 8 Gb/s, and as much as 16 Gb/s in today's newest devices. Most FC SANs use switches that provide a direct connection between the two endpoints during a transfer.

iSCSI: Also known as IP SCSI, this is a method of connecting servers and storage via a common Ethernet infrastructure.
The advent of switched TCP/IP networks has made this protocol popular because it enables IT managers to leverage their existing LAN infrastructure while still having dedicated high-speed connections between storage and servers. Today, 100 Mb/s and, more often, 1 Gb/s Ethernet networks (1 GbE) are common and often provide the LAN links to desktops throughout a company. Newer 10 Gb/s Ethernet infrastructure (10 GbE) is also available but is typically deployed only to connect servers and other devices in data centers. Oftentimes, IT administrators will dedicate a section of a LAN to iSCSI traffic.

Serial-attached SCSI (SAS): Once used only as a device connectivity protocol, SAS is now offered by some vendors as a switched infrastructure, a low-cost alternative to FC or iSCSI SANs for shorter cabled environments. Today, 6 Gb/s SAS devices are common, with 12 Gb/s SAS just around the corner. Due to its affordability and high transfer rates, SAS has become a popular interconnect technology for both shared and direct-attached storage subsystems.
Disk drive technology comparison

• Serial ATA (SATA): low price per GB; highest capacity; moderate reliability; low performance; moderate data access; best for file storage, archival/backup, and secondary applications.
• SCSI: moderate price per GB; moderate capacity; high reliability; high performance; frequent data access; best for business transactions and primary applications.
• Serial-attached SCSI (SAS): moderate price per GB; moderate capacity; high reliability; high performance; frequent data access; best for business transactions and primary applications.
• Fibre Channel: high price per GB; moderate capacity; high reliability; high performance; frequent data access; best for business transactions and primary applications.
• SSD: high price per GB; high capacity; moderate reliability; very high performance; frequent data access; best for high-performance, low-latency applications such as OLTP, VDI, tiering, and analytics.

Comparing disk drive technologies

The hard disk drive (HDD) industry continues to evolve year after year. After many years of predictions that the industry would hit the technological limits of areal storage density, the storage industry has continued to deliver technological breakthroughs that have proved the naysayers wrong again and again. From early hard drives capable of storing only a few megabytes of data to today's multi-terabyte HDDs, the progression has been steady and impressive. In addition to offering larger capacities, today's HDD industry has settled into two primary form factors: 2.5-inch small form factor (SFF) and 3.5-inch large form factor (LFF) drives. Larger 5.25-inch HDDs are still available but have given way to the popular SFF/LFF form factors, which have better supported the miniaturization trend the computing industry has undergone over the past 20 years. Initially, SFF drives did not offer the same rotational speeds or capacity points as their LFF counterparts, but in recent years that gap has closed substantially, and denser servers and storage subsystems that leverage SFF HDDs have become more common.
LFF drives have continued to increase in capacity and are now the primary design center for super-high-capacity, multi-terabyte HDDs, which are often used in capacity-driven solutions rather than the performance-driven solutions that employ the fastest HDDs.

Another HDD attribute that requires some study when selecting a technology for your data is rotational speed: the number of times per minute the HDD platters spin past the heads (rpm). Higher rotational speeds typically mean better transfer rates and lower latency as data is accessed. Combined with seek specifications, internal caching algorithms, and rotational speed, HDD head assemblies can vary widely in their transfer rates and access speeds. When used in a storage subsystem, vendors typically try to match HDD performance attributes to an array's controller to maximize performance or, in some cases, to maximize your dollars per terabyte stored. Your array subsystem's best-practices guide should serve as a reference when selecting HDD types and other characteristics. Common rotational speeds vary from 3,600 rpm to 15,000 rpm for some enterprise-class drives.
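The link between rotational speed and latency is simple to quantify: on average, the sector you want is half a revolution away from the head, so average rotational latency is half of one revolution time. A short Python sketch:

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency: the target sector is, on average, half
    a revolution away, so latency = 0.5 * (60 / rpm) seconds."""
    return 0.5 * (60.0 / rpm) * 1000  # convert seconds to milliseconds

# Common drive speeds, from desktop-class to enterprise-class
for rpm in (3600, 7200, 10000, 15000):
    print(f"{rpm:>6} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms")
# 15,000 rpm -> 2.00 ms average rotational latency; 7,200 rpm -> 4.17 ms
```

This is only the rotational component; real access times add seek time and controller overhead on top, which is why vendors tune caching and controller behavior around these figures.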
HDD interface types

Earlier we learned about different SAN protocols, also known as interface types. Interestingly, most HDDs are offered with one of three popular interfaces, each of which brings a unique set of attributes.

Serial ATA (SATA): If performance is not a primary consideration and your business requires cost-effective, high-capacity storage for file serving, archival data, or reference information, then SATA may be a good choice. Traditionally, SATA disks provided a lower cost per gigabyte than SCSI, SAS, or Fibre Channel disks, but because SAS drives have since come down in price, SATA is now less common.

Fibre Channel (FC): FC disk drives are designed primarily for rapid data throughput in high-capacity, performance-intensive, highly available storage systems requiring maximum scalability. With the highest cost per gigabyte of the options described here, FC disk drives are a good choice for the most demanding mission-critical applications and are still the design center for many of the Tier 1 enterprise disk arrays on the market today.

Serial-attached SCSI (SAS): Featuring greater performance than SATA disks, SAS disk drives deliver the speed, reliability, and high availability that online applications and storage require. Individual SAS disk drives have become the industry standard in midrange and entry-level storage arrays, and the high-speed SAS interconnect has made designing disk arrays and JBOD enclosures easier, more reliable, and more cost-effective. Most of today's SAS HDDs are available at 3 Gb/s and 6 Gb/s speeds, with capacities ranging from 100 GB up to multiple terabytes. A newer 12 Gb/s SAS interface is forthcoming, although HDDs with this interface are not yet on the market.
Flash and solid-state storage

Over the past few years, the storage industry has been taken by storm by a new technology called solid-state or flash storage, commonly delivered as solid-state drives (SSDs). Because it is a memory-based technology, flash can deliver incredible access performance compared with its spinning HDD counterparts. With access speeds and latency characteristics that can be 10 to 100 times better than the best HDD performance characteristics, it is easy to see the appeal, and system and application performance stand to benefit greatly.

Unlike traditional system memory, which requires power to "store" its bits, solid-state/flash technology is a nonvolatile storage medium, meaning that data remains intact without power. The difference between volatile and nonvolatile storage technologies is not new; cost, access speed, and nonvolatility have been the primary balance points between HDDs and system memory technologies for years. As the industry evolves, the number of storage elements on a single chip will continue to increase, drive-level integration will continue to improve, and we will continue to see growth in the adoption of this technology.
[Figure: Direct-attached storage. DAS can refer to the drives inside a server or to an external storage enclosure attached to the application servers.]

[Figure: Network-attached storage. Windows clients using SMB (CIFS) and Linux/UNIX clients using NFS send file I/O traffic over the public LAN to a NAS device.]

Choosing the right online storage strategy

Now that you know your options when it comes to storage architectures, it's important to understand the strengths and weaknesses of each so that you can choose the right mix of technologies for your current and future needs.

Direct-attached storage

In a DAS configuration, one or more data storage components, such as hard disks or tape drives, are either installed in a computer or connected directly to it, often with a SAS link. One emerging storage option is shared DAS, in which a fixed number of servers are connected directly to a storage system instead of through a SAN fabric. With DAS, each server is configured with its own separate storage.

Network-attached storage

NAS is essentially a dedicated file server running an operating system that is designed and tuned specifically to handle file I/O traffic for network clients. Clients usually access the NAS server over an Ethernet (LAN) connection, on which it appears as a single node with its own IP address. Files stored on the NAS system are accessible to clients on the LAN via protocols such as CIFS/SMB (Windows® clients) or NFS (Linux and UNIX®). Many NAS systems also support protocols such as HTTP or FTP for Internet-based file access. NAS products range from low-cost home-office devices up to clustered, enterprise-class NAS gateways that provide file connectivity to traditional SAN storage arrays. Some newer NAS technologies can scale not only "up" in capacity but also "out" in performance as the NAS file repository grows.
With NAS, all clients have access to the same storage via the LAN.
[Figure: Storage area network. Application servers send block I/O traffic over a dedicated Fibre Channel or iSCSI network to a storage array, separate from the public LAN that connects the clients.]

Storage area network

A SAN is a network that is dedicated to storage. Separate from the LAN, it provides servers with access to storage and has the added benefit that the servers and storage devices do not need to contend with traffic from other devices or users on the same network. In its simplest form, a SAN consists of:

• Shared storage (typically a disk array or a tape library)
• A high-speed dedicated network, typically switched, that lets servers and storage "talk" to each other
• Data services to help manage and protect data in the SAN

A SAN is a high-speed network that is used only for storage and is separate from the public LAN.
[Figure: NAS gateway with a storage area network. Clients send file I/O traffic over the public LAN to a NAS gateway, which stores the data on a SAN-attached storage array.]

NAS gateway with a SAN

Another common use of NAS is to store file data and provide access to that data via a NAS gateway. The gateway has no on-board storage of its own; instead, it connects to an attached SAN array and acts as a translator between the file-level NAS protocols, such as NFS and CIFS, and the block-level SAN protocols (Fibre Channel or iSCSI) it uses to physically store its file data on the array. This NAS/SAN hybrid combines the advantages of both technologies, offering advanced file-serving functionality that can leverage the storage capabilities of a high-performance SAN array while supporting other IT initiatives such as virtualization.

Using a NAS gateway with a SAN combines the advantages of both technologies and provides both file and block I/O.
Choosing the right backup and data protection strategy

Reliable data protection is one of the most important challenges facing your business today. In this area, two key numbers can help you assess the needs of your business:

• Recovery time objective (RTO): the amount of time one of your business processes can be down, such as during a full system restore.
• Recovery point objective (RPO): the amount of data you can afford to lose if, for instance, you had to restore from your last saved copy. For 24x7 database applications, the RPO could be your most recent transaction; for file servers, it could be last night's backup.

Typically, decreasing your RTO influences the disk-based technologies you choose as a first line of defense, and it can also affect the overall disaster recovery strategy you choose for a full system rebuild. This section will help you understand and identify the data protection solutions that best meet your needs. It will not, however, specifically address other aspects of your data availability strategy, which are typically dealt with at the primary storage level.

While certainly something you hope you will never "need," your data protection strategy is what stands between you and the risks associated with a major data loss event. Regardless of your company's size or your overall storage capacity, a well-thought-out data protection strategy and a set of detailed recovery procedures that undergo frequent review are key to reducing risk and helping you sleep well at night. The good news is that the rate of change in data protection has yielded a robust and mature set of technologies that were once available only to large enterprises. Now these technologies are affordable for even the smallest SMBs, giving you access to data protection features that, used properly, can yield high levels of availability, protection, and recovery.
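As a rough illustration of how RPO and RTO translate into a backup schedule, consider this Python sketch. The numbers are hypothetical examples, not HP recommendations: worst-case data loss equals the time since the last good copy, so the backup interval must fit within the RPO, and the restore time must fit within the RTO.

```python
def meets_objectives(backup_interval_h: float, restore_time_h: float,
                     rpo_h: float, rto_h: float) -> bool:
    """A schedule meets its objectives when the interval between copies
    is within the RPO and a full restore fits within the RTO."""
    return backup_interval_h <= rpo_h and restore_time_h <= rto_h

# Hypothetical file server: nightly backup (24 h interval), 4 h tape restore
print(meets_objectives(24, 4, rpo_h=24, rto_h=8))   # True: both within target
# Hypothetical 24x7 database with a 1 h RPO: nightly backups are not enough
print(meets_objectives(24, 4, rpo_h=1, rto_h=8))    # False: interval exceeds RPO
```

The second case is why tighter RPOs push businesses from nightly backups toward snapshots, continuous data protection, or replication, which are discussed below.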
As with primary storage, the type of storage used in your data protection strategy will depend on how that storage will be used:

• Backup storage, or offline storage, is used to restore data and systems in the event of a disk failure, other hardware-related failures, or data corruption in an online system. As a general rule, backup and recovery call for easy access and fast retrieval; when a system is down, the clock is ticking. Several metrics are used to describe and quantify the IT maturity level of your needs and your available budget in these areas.
• Disaster recovery (DR) is a higher-level function of backup storage, often implemented across physical sites and geographies. DR focuses on full system rebuilds, in environments where not only is the data important but the ability to replicate a complete system is just as important as having the data to put on it.
• Archival or long-term storage is another type of offline storage, used to keep information accessible for specific periods of time even when the data no longer needs to be online.
[Figure: Data protection technologies plotted by data loss (recovery point objective) and recall (recovery time objective) against cost: tape and disk-based backup recover in hours to days at low cost, while continuous data protection, snaps/clones/mirrors, remote snaps/clones, remote mirrors, and server and storage clusters approach seconds to "instant" recovery of the last transaction at higher cost.]

Traditional tape-based data protection

One popular way to back up or archive data is to copy it to magnetic tape. Tape has been used for data protection for over 50 years, and it is still the most cost-effective and energy-efficient technology for high-capacity, long-term data protection. Tape offers a number of advantages that have yet to be eclipsed by other technologies:

• Tape media (cartridges) are relatively small, extremely dense, and highly portable, and can therefore be easily moved and stored offsite.
• Linear Tape-Open (LTO) media supports a shelf life of up to 30 years, making it a logical choice for storing archival data.
• Tape has a very low cost per gigabyte in comparison with other technologies.
• Automated library solutions can be integrated easily into many environments to provide high-capacity, multiple-cartridge backups, and they can automate data protection for multiple devices on a network.
• Tape technologies typically leverage hardware-level compression, which increases the effective capacity of each cartridge.
• Technologies like encryption and media barcoding can be managed by software to enable media traceability across multiple locations and to reduce the risk of privacy breaches.
• Tape products that support the Linear Tape File System (LTFS) standard make tape as easy to use as disk, with drag-and-drop functionality.

Tape autoloaders: Tape autoloaders allow unattended, automated data backup to tape.
Web-based remote management frees organizations from performing manual tape swap-outs and from using complex data backup software, which reduces dependency on local IT staff. Tape autoloaders also allow backup from multiple sites to be centralized.

Automated tape library solutions: Web-based remote management makes automated tape libraries easy to manage from across the room or across the globe, eliminating the need for remote-office IT staff. You can quickly and simply manage tape media both in and out of the library with barcode readers, configurable mail slots, and removable magazines. This technology delivers sophisticated management of backup media and software.
Disk-based data protection and deduplication

When higher speeds and better performance are required, disk-based data protection is the answer. Compared with tape, disk-based backup delivers the following advantages:

• Smaller backup windows and less impact on applications
• Faster recovery of single files
• High-availability features such as RAID, replication, and hardware redundancy
• Features such as deduplication that reduce capacity requirements and lower costs

Some of the most important decisions you will make as you establish a disk-based data protection strategy involve how you create copies of your data, how many copies you keep, how long you keep them, how far into the future a copy may be used for restores, and how many copies you need on hand. The following sections describe how various disk-based storage systems can support space-efficient backup and recovery.

Snapshots, clones, and mirrors: Snapshot technologies take a "picture" of data at a point in time; that picture of the disk image is stored on the array very quickly, and the snapshot is maintained even as data continues to change on the primary volume. In the event of data corruption or a hardware error, data can be recovered to any previous point-in-time snapshot. Snapshots often complement disk and tape backup but do not provide disaster recovery, and they can occupy substantial capacity on an array. Backup applications and hypervisors often integrate with an array's snapshot and mirroring capabilities.

Clones, or volume copies, are similar to snapshots but physically copy the data (as opposed to taking a "picture" of it) to another set of disks within the same array. One additional benefit of a volume copy is the ability to present the copy to another server for backup, application testing, or data mining.
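The point-in-time behavior of a snapshot can be illustrated with a small copy-on-write sketch in Python. This is a toy model, not how any particular array implements snapshots: the snapshot starts out consuming no space, and the old contents of a block are preserved only when that block is first overwritten.

```python
class ToyVolume:
    """Toy copy-on-write snapshots: a snapshot starts empty and saves the
    old contents of a block only when that block is first overwritten."""
    def __init__(self, nblocks: int):
        self.blocks = [b"\x00"] * nblocks
        self.snapshots = []          # each snapshot: {block_index: old_data}

    def snapshot(self) -> int:
        self.snapshots.append({})    # costs nothing until writes occur
        return len(self.snapshots) - 1

    def write(self, idx: int, data: bytes):
        for snap in self.snapshots:
            if idx not in snap:      # preserve pre-snapshot contents once
                snap[idx] = self.blocks[idx]
        self.blocks[idx] = data

    def read_snapshot(self, snap_id: int, idx: int) -> bytes:
        # unchanged blocks are read from the live volume
        return self.snapshots[snap_id].get(idx, self.blocks[idx])

vol = ToyVolume(4)
vol.write(0, b"v1")
sid = vol.snapshot()
vol.write(0, b"v2")                  # triggers copy-on-write of block 0
print(vol.read_snapshot(sid, 0))     # b'v1': the snapshot view is frozen
print(vol.blocks[0])                 # b'v2': the live volume moves on
```

This is also why the guide notes that snapshots can occupy substantial capacity: the more the primary volume changes after the snapshot, the more old blocks must be preserved.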
Remote data replication: Remote replication is a data protection technology for disaster recovery in which two identical sets of data are stored on systems at multiple physical sites. When data changes at the primary location, those changes are moved, or replicated, to the alternate site behind the scenes. In the event of a disaster, the remote copy can be used to get business operations back up and running quickly, or, in some cases, for automatic failover. Software applications that coordinate failover and system availability typically leverage remote replication services to accomplish near-real-time failover of critical systems. For additional protection when performing disk-to-disk backup, HP recommends a secondary backup to tape.

Deduplication: Deduplication can yield significant savings by reducing storage capacity consumption, changing the economics of backup storage. As user requirements continue to evolve, so do deduplication technologies, which have become much more flexible, user-friendly, and effective. Businesses have demanded that deduplication be more flexible in how and where it can be deployed, with tighter integration, faster backup and recovery performance, and the ability to deduplicate within and across domains, all at the lowest possible cost. A "next wave" of deduplication technology has sprung up to meet these needs, making it particularly attractive to SMBs. First-wave deduplication solutions may have stalled out, but these next-generation solutions show a lot of promise.
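The core idea behind deduplication can be sketched in a few lines of Python. This is a deliberately simplified model using fixed-size chunks and SHA-256 fingerprints; real products differ in chunking strategy (often variable-size), indexing, and where deduplication runs.

```python
import hashlib

CHUNK = 4096  # fixed-size chunks; many products use variable-size chunking

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks, keep each unique chunk once (keyed by its
    SHA-256 digest), and return the recipe of digests for reassembly."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # stored only if not seen before
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    return b"".join(store[d] for d in recipe)

store = {}
backup1 = b"A" * 8192 + b"B" * 4096       # three chunks, two of them identical
r1 = dedup_store(backup1, store)
backup2 = b"A" * 8192 + b"C" * 4096       # mostly unchanged since backup1
r2 = dedup_store(backup2, store)
print(len(store))                         # 3 unique chunks for 6 chunks written
print(restore(r1, store) == backup1)      # True: data reassembles exactly
```

Because successive backups of the same systems overlap heavily, storing only unique chunks is what changes backup economics, as the paragraph above describes.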
Software-defined storage

To understand software-defined storage, it is first useful to understand the software-defined data center: the concept of virtualizing compute, network, and storage assets and provisioning them through a common orchestration and management layer. Software-defined storage describes how software can be layered on server infrastructure to deliver advanced data services such as snapshots, thin provisioning, and multi-site disaster recovery.

The basic tenets of software-defined storage are those of any component in the software-defined data center. First, the underlying hardware needs to be open, standards-based hardware, such as standard x86 server technology. Second, the differentiating benefit of the product comes from rich, software-based data services; for storage this means efficiencies such as thin provisioning, snapshots, and disaster recovery. Finally, these technologies need to be held together by one common management interface; for true portability and flexibility, this should be built on open, API-based management and orchestration tools.

Software-defined storage is a key enabler of converged infrastructure, allowing business applications and the underlying storage services to share hardware resources. By converging applications and storage on the same platform, you can improve utilization of compute power and storage while making efficient use of power, cooling, and data center footprint.

Virtual storage appliances: Also known as VSAs, virtual storage appliances are an example of software-defined storage and offer a flexible, cost-effective way to provide advanced data services to virtual environments. These services can include data protection and replication, disaster recovery with multi-site and remote-copy functionality, centralized management, and multiple points of hypervisor and management integration.
Because VSAs are software-defined storage, they are hardware agnostic and hypervisor independent, allowing you to create an agile infrastructure with data mobility across platforms, locations, and even hardware generations.
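Two of the data services mentioned above, thin provisioning and snapshots, can be illustrated with a toy model: a thinly provisioned volume consumes space only for blocks that have actually been written, and a snapshot is a point-in-time view that shares unchanged blocks with the live volume. This is a conceptual sketch only; the `ThinVolume` class is hypothetical and does not reflect any HP implementation.

```python
class ThinVolume:
    """Toy thinly provisioned volume: blocks consume space only once written."""

    def __init__(self, size_blocks: int):
        self.size_blocks = size_blocks
        self.blocks = {}  # block number -> data; unwritten blocks use no space

    def write(self, block: int, data: bytes):
        if not 0 <= block < self.size_blocks:
            raise IndexError("block out of range")
        self.blocks[block] = data

    def read(self, block: int) -> bytes:
        return self.blocks.get(block, b"\x00")  # unwritten blocks read back as zeros

    def snapshot(self) -> "ThinVolume":
        """Point-in-time copy. Sharing the block map makes the snapshot instant;
        real systems defer even this copy until a block diverges (copy-on-write)."""
        snap = ThinVolume(self.size_blocks)
        snap.blocks = dict(self.blocks)  # copies references to blocks, not the data
        return snap


vol = ThinVolume(size_blocks=1_000_000)  # a large logical volume, zero space used so far
vol.write(0, b"boot")
snap = vol.snapshot()                    # instant point-in-time image
vol.write(0, b"BOOT")                    # later writes do not disturb the snapshot
print(vol.read(0), snap.read(0))         # b'BOOT' b'boot'
```

The design point the sketch highlights is that provisioned capacity (one million blocks) is decoupled from consumed capacity (one block), which is exactly what lets software-defined storage present generous volumes to applications without buying the disk up front.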
  • 15. Why choose HP?

Unlike large corporations, which often have extensive resources and IT specialists dedicated exclusively to storage, you may not have the time, resources, or expertise to investigate and develop your storage strategy from the ground up. HP understands that you need comprehensive yet easy-to-implement solutions that work seamlessly with your servers and business applications, bridging the gap between explosive data growth and the capabilities of the IT infrastructure you have in place today.

When it comes to storage, HP has the broadest and most comprehensive portfolio in the industry. Within the solution areas of the HP Simply StoreIT program, you will find simple, affordable, and reliable disk storage systems that can scale up as your business grows. You'll also find a comprehensive range of reliable, cost-effective backup and data protection solutions to meet almost any SMB need and to help you make the right choices along the way. We offer scalable, easy-to-use Windows-based NAS file and print solutions for small and midsize businesses, as well as complete SAN solutions that deliver the scalability, performance, and broad interoperability required for critical data and applications. But we don't stop there. We partner with you for success, and we guide you every step of the way.

Accelerate your IT initiatives

Your primary responsibility is to enable your business to grow and operate smoothly. The HP Simply StoreIT program is designed to help you accelerate your IT initiatives, letting you focus on business results by making storage stress free. We start by making storage less expensive to deploy and easier to manage. We can also help you better protect your IT environment from downtime and data loss, and help make sure that your applications and users have uninterrupted access to the critical information they need to do their jobs.
There is no “one-size-fits-all” solution when it comes to your business. HP and our more than 200,000 channel partners worldwide have the expertise to help you meet your business requirements today and to plan for success in the future. We don't just want to sell you storage; we want to partner with you for success.
  • 16. Where to go from here?

Now that you have learned more about the various choices available to meet your storage requirements, you should be better equipped to make informed decisions about what you need. To learn more about how choosing HP, or one of our more than 200,000 partners worldwide, as your trusted storage partner can improve operational efficiency, reduce risk, and lower storage costs, see the following HP Simply StoreIT solution brochures:

• HP Simply StoreIT Solutions for Virtualization
• HP Simply StoreIT for Backup and Data Protection
• HP Simply StoreIT for File Shares
• HP Simply StoreIT Solutions for Exchange
• HP Simply StoreIT Solutions for SQL Server

Learn more at

© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Windows is a U.S. registered trademark of Microsoft Corporation. UNIX is a registered trademark of The Open Group. 4AA4-7667ENW, August 2013