VNX Series Technical Review
  • Note to Presenter: Present to customers and prospects to provide them with a detailed update of the VNX series. It covers the VNX hardware, supporting the "powerful" aspect of the simple, efficient, powerful high-level VNX message, and assumes the customer is familiar with the messaging deck "EMC VNX Series Family Overview" on Powerlink. Note to Presenter: This presentation does not cover the VNXe platform, although much of the content does apply to VNXe. For a detailed VNXe presentation, please see the VNXe technical presentation on Powerlink.
  • Whether in a shrinking or expanding economy, and whether in mature or emerging markets, key IT challenges remain, and traditional approaches to building and managing IT infrastructure no longer make economic sense. As EMC talks to customers about their IT infrastructure challenges, we find four recurring themes:
    - Budgets remain flat to slightly up and are not growing nearly fast enough to meet IT demands using traditional approaches.
    - Companies are struggling to manage increasing complexity and are looking for new tools and methodologies.
    - Companies see no end in sight to relentless data growth and are looking for ways to keep up.
    - Fast-changing business models, competitive pressures, and other factors are putting increased demand on IT operations.
    It is with these four challenges in mind that EMC designed the guiding principles for its next-generation platforms.
  • Note to Presenter: View in Slide Show mode for animation. Let's review those principles:
    - Affordability. In the competitive environment we operate in, affordability is critical. From acquisition cost, to lifecycle cost, to service and support costs, to operating expense, companies have to do more with those flat budgets.
    - Simplicity. As compute environments grow, merge, evolve, and change, the risk of creating runaway complexity is very real. A more holistic approach to management; improved automation; and fewer, simpler user interfaces are key weapons in the fight against complexity.
    - Efficiency. Being wasteful is a thing of the past. As data continues to grow, IT must drive the highest utilization rates possible. Effective cost, not just nominal cost, is now a central issue when aligning IT with business and mission objectives.
    - Power. As IT spending begins to shift away from cost containment, IT must now have the ability to deploy new and more applications faster than ever before. Networked storage must have extra processing reserves to support new and unprecedented transaction levels. Storage must now be more powerful than ever.
    The EMC VNX Series is designed to meet these challenges with fundamental design approaches backed by leading innovative technologies.
  • With these challenges and requirements in mind, EMC is now introducing a new record-breaking unified storage platform: the EMC VNX family. These hardware and software solutions are simple to provision, highly efficient with lower capacity requirements, affordable for any budget, and powerful enough to handle the increasing demands of virtual applications. In fact, the new family delivers the world's simplest array to manage and the world's highest mid-tier performance. The VNX family is designed from the ground up for virtual application environments, from simple, money-saving server and storage consolidation for small business to next-generation virtual data center applications.
    The VNX family comprises two series: the VNXe series and the VNX series. The VNXe series represents the entry point of the VNX family and is designed specifically for small-to-medium businesses (SMB), remote or branch offices (ROBO), or departmental applications where traditional storage administration skills may not be available. The VNX series is the next-generation midrange platform. For those of you familiar with EMC's current CLARiiON and Celerra platforms, the VNX series combines the capabilities of these systems into a single modular unified storage offering.
    Note to Presenter: The remainder of this presentation focuses on the VNX series.
    While new, the entire VNX family shares the tradition and builds on years of know-how of the world's most popular SAN and NAS platforms: CLARiiON and Celerra. Everything EMC has learned about high performance and high reliability culminates in the VNX family. EMC Unisphere provides a common unified management capability for EMC's VNX family and the CLARiiON and Celerra products. And as you will see, Unisphere also provides a way to simplify and automate other common storage management tasks, such as replication and backup reporting.
  • How storage is deployed has been going through a process of change for a number of years now, and that change will continue in the coming years. Some interesting things to note from this slide:
    - Fibre Channel revenues are still a strong force in the industry, although essentially flat, so the market continues to embrace the technology.
    - The main area of growth is in the Ethernet space. Note to Presenter: Click to highlight the Ethernet technologies. iSCSI and NAS will continue to grow aggressively as the connectivity options of choice in the sub-$75K storage space due to simplicity and the ubiquity of the IP network. The highest growth rates are seen in the markets for the newer connectivity options, Fibre Channel over Ethernet (FCoE) and Switched Serial Attached SCSI (SAS). Although starting from a small base, these technologies will become more relevant (particularly FCoE) starting in 2011.
    - The loser among the storage connectivity options is the external DAS market. Customers recognize that truly enabling the virtualized data center requires a fluid and flexible storage connectivity model, and this space continues to lose market share to the mainstream and newer technologies.
    The main point to remember is that EMC is the leader or second in all the markets in this chart except the dwindling DAS market. EMC is committed to fully supporting all these technologies on the VNX platform, today or as the market demands.
  • The EMC VNX series is the new midrange storage platform built for the demanding virtualized data center: a fully redundant, multi-controller design that scales and scales. With VNX you can now keep pace with double-digit data growth and dynamic environments despite flat budgets. EMC's VNX series helps you thrive in the virtualized IT environment by radically simplifying storage and data management with unprecedented levels of automation. With power to spare for self-optimizing storage pools and automated tiering policies, you no longer have to manually move data around and struggle to balance your performance and cost objectives. Set it up once, and the system automatically ensures that your most active data is always served from the highest-performing drives while less active data is moved to lower-cost drives. Support more transactions, more users, and new applications, and get faster response times without going over budget. That's the promise of VNX!
  • The VNX series is based on an industry-leading architecture that lets you configure purpose-built components designed specifically for the different workloads required. For different connectivity options, such as SAN (block connectivity with iSCSI, Fibre Channel, or Fibre Channel over Ethernet), NAS (CIFS, or NFS with pNFS or Multi-Path File System), or Cloud (Atmos with REST or SOAP), the VNX platform addresses each simultaneously with its purpose-built modular architecture. The benefits of the modular unified architecture include:
    - A modular design with controllers optimized for the protocols and workload to be served. You can add and scale out the X-Blades independently and without impacting the overall system.
    - Both controllers benefit from a central storage pool for LUN provisioning, ensuring no stranded, unused resources. Frequently accessed data is automatically moved to high-performance Flash drives, and infrequently accessed data is moved to high-capacity, low-cost disk drives.
    - Another advantage of the modular architecture is that EMC has packaged the X-Blades into a NAS gateway model. This gateway supports FC SAN connections to EMC block storage (Symmetrix, VNX, and CLARiiON) and scales with support for up to four storage arrays. So, if you are looking for more storage scale, more memory, or more drives over and above what is typically supported, add a gateway in front of four VNX7500s and you can scale up to 8 X-Blades, up to 8 storage processors, and up to 4,000 drives. All the power you need from the VNX series modular architecture.
    Note to Presenter: The graphic depicting eight storage processors is based on a configuration that includes a VNX gateway front-ending four VNX series storage arrays.
    Note to Presenter: Click to show the red box, indicating that the following slides (9-40) cover hardware and base software for the File and Block components of the VNX solution.
  • Modular design: a modular design with controllers optimized for the protocols means you can add and scale out the X-Blades independently and without impacting the overall system. Block services are provided by a mature, dedicated, and specialized block processor providing native block functionality. An evolution of the industry-proven CLARiiON platform, the native block implementation ensures optimal performance and leverages tried-and-trusted value-added features for data protection and management. File functions are provided by an equally mature, dedicated, and specialized file processor providing native file functionality. An evolution of the industry-proven and leading Celerra NAS platform, the unified solution provides a powerful native solution for FC, iSCSI, FCoE, NFS, CIFS, pNFS, MPFS, and Object support. The file functionality leverages the core high-availability design and software capabilities of the block storage facility and services.
  • Note to Presenter: Reference slide indicating the physical configuration of the different family members. An important question you must answer is, "What is the right storage platform that meets my business requirements?" EMC makes it easy by offering the broadest range of unified storage platforms in the industry. Rate your requirements and choose your solution.
    Note to Presenter: Not referenced on this slide, but be aware that usable capacity per X-Blade is 256 TB for all VNX platforms, which allows all systems to be maximally configured as a file-only system.
  • The VNX series is based on an industry-leading architecture that lets you configure purpose-built components designed specifically for the different workloads required. The VNX platform supports each of the different connectivity options concurrently with its purpose-built modular architecture. VNX gives you the choice of low-cost IP or high-throughput Fibre Channel connectivity, and delivers both file and block protocols. For higher throughput, Fibre Channel is the growth path for iSCSI (block), and MPFS is the growth path for NAS (file). VNX lets you start small and scale throughput and capacity. Multi-protocol support for NAS (CIFS and NFS), MPFS, native iSCSI, Fibre Channel, and Fibre Channel over Ethernet, as well as pNFS, all within one unified platform and at no additional cost, offers cost-effective flexibility in deployment options and makes VNX an easy purchasing decision.
    - NAS: file-sharing protocol for Windows and UNIX systems. Typical use cases include traditional NAS (CAD/CAM, software engineering) and non-traditional NAS (Oracle, VMware).
    - MPFS: Multi-Path File System for improved performance and scalability. See pNFS for use cases.
    - pNFS: public-domain equivalent of MPFS, supported on UNIX and Linux systems in conjunction with NFS v4.1. Support for pNFS on VNX is provided at no additional cost with the advanced protocol license option. Typical use cases for pNFS and MPFS include image processing, bioengineering, financial analysis, and oil and gas.
    - iSCSI: advanced iSCSI implementation using a familiar native CLARiiON block LUN model with fast failover and full CLARiiON feature support. Similar use cases to Fibre Channel, although more typically seen in the commercial and small-to-medium business space.
    - Fibre Channel: high-speed networking protocol primarily used in storage area networks, providing the full native CLARiiON feature set. Typical use cases include database/data warehouse, VMware, and high-performance needs.
    - Fibre Channel over Ethernet: high-speed block protocol over converged (data center) Ethernet transport; a new protocol appearing in data centers to reduce infrastructure costs by consolidating all storage and data networking needs onto a single Ethernet network. Similar use cases to Fibre Channel.
    - Cloud: cloud storage uses open protocols (REST and SOAP) to deliver public and private cloud solutions that leverage the proven back-end storage functionality of VNX. The cloud offering is based upon a solution leveraging Atmos VE running on VMware and using block (FC or iSCSI) or file (NFS) connections to the VNX platform. Cloud is further supported within Unisphere via link and launch. Use cases include content-rich web applications, infrastructure as a service, and archiving to the cloud.
    Note to Presenter: Further details are found later in this presentation.
  • The VNX series ships as a block-only, file-only, or unified file-and-block system. The file-only and unified systems ship with all the hardware indicated in the diagrams on the slide.
    The block-only VNX5300 and VNX5500 comprise only the disk processor enclosure, Standby Power Supply (SPS), and drives (and potentially disk array enclosures, depending on the capacity required). If the customer desires (and this is recommended), 4U of rack space (EMC-supplied rack only) and two Fibre Channel ports per storage processor will be reserved for a future upgrade to unified.
    A block-only VNX5700 or VNX7500 comprises only the storage processor enclosure, Standby Power Supply, vault disk array enclosure, expansion disk array enclosures, and drives. If the customer desires (and this is recommended), 6U of rack space (EMC-supplied rack only) will be reserved for a subsequent upgrade to unified. There are no Fibre Channel ports to reserve for the VNX5700 and VNX7500, as Fibre Channel UltraFlex I/O modules will be added upon performing the upgrade to add file components, although a slot will be reserved so a 4-port FC module can be added at the time of upgrade.
    Note: No file hardware is shipped with the VNX5100, as it does not support NAS or iSCSI. Also, space need not be reserved in customer-provided racks for a future upgrade to unified.
    The VNX5100, VNX5300, and VNX5500 use a disk processor enclosure (DPE) that holds 15 or 25 drives in the block controller for reduced cabinet footprint and reduced cabling. The VNX series DPE includes eight built-in ports of 8 Gb/s Fibre Channel host I/O and two 6 Gb/s SAS back-end buses (2 ports per SP). SAS and NL-SAS drives are supported, and while optional, EMC maintains the recommendation that large-capacity (NL-SAS) drives be configured with RAID 6 to protect against the longer rebuild times associated with these drive types. VNX supports a 2U 25-drive 2.5" DAE and DPE for increased density and energy efficiency, as well as 3U 15-drive 3.5" DAEs and DPEs. Capacity per X-Blade for all VNX series models is increased to 256 TB (except the VNX5300, which can only physically support up to 200 usable TB).
  • The graphs on this slide show the significant improvements in throughput for the VNX series platforms compared with the prior-generation CX4/NS series systems. The end-to-end architecture developments (faster CPU cores, up to 6 x 2.8 GHz; larger and faster memory, with up to 24 GB of DDR3 at 1333 MHz; faster internal memory buses at x8 PCIe Gen 2) deliver performance improvements of 2-3x over the prior platform. That performance improvement applies to all working sets and application types: Exchange, OLTP databases, data warehouse, and virtualized environments, as well as all file applications. One important thing to note is that the platforms only truly meet their full performance potential when implemented with Flash drives. The VNX is the first storage platform truly designed to take advantage of game-changing Flash technology.
  • VNX Object support is delivered as a fully supported solution that uses the virtualized Atmos VE implementation attached to one or more VNX systems. At a high level, Atmos is:
    - A new data storage and management framework
    - Based on a unique set of data services with no limits on namespace or location
    - Web-based services, multi-tenancy, and automated protection and efficiency services
    - Layered over an EMC-developed, exabyte-scale object store on physical or virtual appliances
    - Addressing challenges with storing and managing vast amounts of unstructured content for custom or packaged applications
    Atmos can be managed as a single system over many sites and can be delivered as a self-service experience to consumers.
    Note to Presenter: REST and SOAP are programming styles and Web Services interfaces for Atmos. REST = Representational State Transfer. SOAP = Simple Object Access Protocol.
    Atmos supports both custom and packaged applications through its versatile API. The Atmos REST API allows applications to access Atmos resources whether they are connected locally or remotely over the WAN. This enables the global access described for content-rich applications, but also enables packaged applications to scale into new use cases like remote backup, remote archiving, and other online services. Atmos provides a very compelling solution in the following use cases:
    - The first is content-rich web applications. There is a wide range of applications fitting this profile, whether external-facing applications like a printing service for business cards, an online auction service, or even a web-based messaging application.
    - The next use case is infrastructure as a service. Think of this as an external service that EMC partners like AT&T deliver, or consider it a service delivered within your own enterprise to allow internal customers to access and build applications.
    - The final use case is archiving to the cloud: extending an existing on-premises architecture for archiving to an on-premises or off-premises Atmos. Atmos enables customers to consolidate geographically distributed archives into one system, logically isolate them with multi-tenancy, and then protect them efficiently using policy-based management.
    For more detail on Atmos, please see the following presentation on Powerlink: http://powerlink.emc.com/km/live1/en_US/Offering_Basics/Presentation/Atmos.pptx
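The REST access described above can be illustrated with a small sketch. This builds (but does not send) an HTTP request against an Atmos-style namespace URL; the host name, uid value, and the exact request shape are illustrative assumptions, not taken from the Atmos API reference.

```python
# Sketch: constructing a REST GET for an object in an Atmos-style namespace.
# Host, path, uid, and header usage below are assumptions for illustration.
import urllib.request
from email.utils import formatdate

def build_atmos_get(host: str, object_path: str, uid: str) -> urllib.request.Request:
    """Build (but do not send) a GET request for an object by namespace path."""
    url = f"https://{host}/rest/namespace/{object_path.lstrip('/')}"
    req = urllib.request.Request(url, method="GET")
    req.add_header("x-emc-uid", uid)           # tenant/app identity (assumed header)
    req.add_header("Date", formatdate(usegmt=True))
    return req

req = build_atmos_get("atmos.example.com", "/archive/report.pdf", "tenant1/app1")
print(req.full_url)
```

Because the same URL works locally or over the WAN, a packaged application can address the same object from any site, which is the property the remote backup and archiving use cases rely on.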
  • Attached are the Atmos VE on VNX requirements. With Atmos VE, your customer can use available storage capacity on multiple VNX systems to build a single cloud (private, public, or both).
  • Note to Presenter: The next 6-7 slides cover how the VNX delivers industry-leading efficiency functionality. These are base software capabilities, provided at no additional cost, and they significantly complement the FAST Suite.
    Storage must be lean and highly efficient to satisfy the rigorous demands of today's IT. EMC is proud of our "out-of-the-box" efficiency advantage, where we provide 20% more than our competition before you even start. Using our efficiency features, we are able to provide significantly more usable capacity for each physical GB.
    Gone are the practices of wasteful thick LUNs and file systems. Over the last couple of years we have seen IT move to a just-in-time provisioning model where storage capacity is allocated upon consumption. This practice is commonly referred to as "thin LUNs." With thin LUNs, capacity from the system's storage pool is only used when new and additional data is written. If data is erased, the capacity is given back to the pool. Liberating IT from having to "guess" the right amount of storage needed for each user and application ahead of time has yielded much higher utilization rates. We typically see a shift from 60% storage utilization to 90% or higher. This means you actually can store more data without having to buy more storage. With "thick" LUNs, IT shops had no choice but to live with 40% wasted resources. This is now a thing of the past; thin LUNs prevent it.
    But thin LUNs are not the only capacity-efficiency weapon EMC VNX offers to combat rising storage cost. We also have file-level deduplication and storage pool LUN compression. Combining both with thin provisioning amplifies system efficiency up to 3 times. This means you get 3 times the usable capacity per spent IT dollar; your budget effectively stretches 3 times further. This is an important part of the efficiency strategy. As we now begin to come out of a global recession, business and government will start to accelerate new initiatives. With VNX efficiency technologies, IT is ready to meet new and increasing demands.
    Finally, VNX also addresses performance efficiency by enabling thin provisioning with deduplication and compression on blended storage pools. Thinly provisioned LUNs will have up to 3 times the performance, depending on how much Flash is in the blend. And if that much firepower turns out to be more than the budget allows, part of the system's processing power may be dedicated to compression and file deduplication, reducing the footprint of a blended storage pool. The VNX capacity- and performance-efficiency technologies are designed to work together and may be combined to achieve the ideal, highly efficient mix of performance, capacity, power, and footprint.
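The thin-versus-thick accounting described above can be sketched in a few lines. The pool size and workload numbers are illustrative assumptions, not VNX internals; the point is only that thick LUNs consume reserved capacity whether it is used or not, while thin LUNs consume pool capacity as data is actually written.

```python
# Illustrative model of thick vs. thin LUN allocation from a shared pool.
class Pool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.consumed_gb = 0

    def thick_lun(self, size_gb: int):
        # Thick LUN: the full size is reserved from the pool up front.
        self.consumed_gb += size_gb

    def thin_lun_write(self, written_gb: int):
        # Thin LUN: pool capacity is consumed only as data is written.
        self.consumed_gb += written_gb

thick, thin = Pool(1000), Pool(1000)
# Ten applications each ask for 100 GB but actually write only 60 GB.
for _ in range(10):
    thick.thick_lun(100)
    thin.thin_lun_write(60)

print(thick.consumed_gb)  # 1000: pool exhausted at ~60% real utilization
print(thin.consumed_gb)   # 600: 400 GB still free for growth
```

The 60% figure mirrors the utilization shift the slide describes: the same workloads leave 400 GB available in the thin pool that the thick pool has already stranded.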
  • The VNX modular architecture is designed to deliver a native block and file solution with dedicated components that are optimized for the specific use case and that leverage the hardware and core technologies across both the file and block implementations. The core of the VNX platform is the storage processor, which delivers the VNX Block components and services and runs VNX OE for Block. The flexibility of the VNX implementation allows support for tiered storage types such as Flash, 10K, and 15K SAS. Typically, these drives will be combined into a heterogeneous storage pool to ensure that optimal efficiencies can be achieved in terms of storage object (LUN) allocation and capacity reuse. Physical devices are assigned to the pool and configured in a specific RAID type (RAID 1, RAID 5, RAID 6). All the devices in the pool will be configured with this RAID type, and the system will build the required RAID sets to ensure physical resiliency for the pool.
    Note to Presenter: The storage pool UI displays disk types as extreme performance, performance, and capacity.
    Note to Presenter: In VNX, support for RAID group LUNs is still available as a legacy mode, although customers are encouraged to move to pool LUNs, as they are more flexible and certain features (e.g., FAST VP and compression) require them.
  • When the pool is built, it is a simple process to carve a LUN (thick or thin) from the pool, set efficiency features such as compression for the LUN or set FAST VP parameters for it, and then share the LUN to external hosts. The pooled LUNs are also used by VNX for File (NAS) support as the base storage construct on which file systems are built. The system will either come with the file components installed, or it is possible to add them at a later time (post-GA). As LUNs are assigned to the VNX for File components, a file-level storage pool is automatically built (each file storage pool contains LUNs from the same block storage virtual pool and has the same name). The file storage pool is the storage object from which file systems are built. VNX for File has intelligence built in to optimize the allocation of a file system based upon the type of storage given to it, and supports RAID group LUNs and thin and thick pool LUNs, as well as compressed and thin-provisioned LUNs.
    Note to Presenter: It is recommended not to use LUN-level compression for LUNs used for File (NAS), but rather to use the file deduplication and compression feature provided free of charge within the VNX OE for File functionality, as the file-level functionality is applied at the file level based on file access policies.
    When a file system is defined, it can be shared out to clients as CIFS, NFS, or both if multi-protocol NAS access is required.
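The pool-to-file mapping described above (LUNs assigned to the file side are grouped into a file storage pool named after their block pool) can be modeled in a few lines. The pool and LUN names are hypothetical; this is a simplified illustration of the grouping rule, not VNX code.

```python
# Illustrative model: LUNs assigned to VNX for File are grouped into
# file storage pools named after their originating block pool.
block_pools = {"Pool0": ["LUN_10", "LUN_11", "LUN_12"]}

def assign_to_file(block_pools: dict, assigned_luns: set) -> dict:
    """Group the assigned LUNs into file storage pools by origin block pool."""
    file_pools = {}
    for pool_name, luns in block_pools.items():
        members = [lun for lun in luns if lun in assigned_luns]
        if members:
            file_pools[pool_name] = members  # same name as the block pool
    return file_pools

file_pools = assign_to_file(block_pools, {"LUN_10", "LUN_12"})
print(file_pools)  # {'Pool0': ['LUN_10', 'LUN_12']}
```

File systems are then built from these file pools, which is why a file pool never mixes LUNs from different block virtual pools.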
  • Note to Presenter: Present to customers and prospects to provide them with a detailed update of the VNX series advanced software functionality. It covers the VNX software suites and packs and the functionality embedded within them; it supports the "efficient" aspect of the simple, efficient, powerful high-level VNX message and assumes the customer is familiar with the messaging deck "EMC VNX Series Family Overview" on Powerlink. It does not cover the VNXe platform, although much of the content does apply to VNXe. For a detailed VNXe presentation, please see the VNXe technical presentation on Powerlink.
  • In much the same way that we have converged and simplified our platforms into a single unified family, we have also dramatically simplified our industry-leading software portfolio. EMC offers the world's most powerful solutions for data protection, data management, and data security, and we provide a vastly simpler way to harness that power more quickly and more cost-effectively. The ultimate expression of our simple software approach is the VNX Total Efficiency Pack and the VNX Total Protection Pack. Each offers incredible value, delivering improved capacity efficiency, better performance, and better availability. This slide deck covers the details of the powerful software components included in these software suites.
  • The VNX series introduces new hardware and software licensing and packaging. The new software packaging is structured into suites and packs: functionality is purchased in the form of a software suite, and combinations of suites are packaged into software packs. The Total Protection Pack includes all the protection suites for the most comprehensive local and remote protection software, as well as full application integration. The Total Efficiency Pack adds the Security and Compliance Suite and the FAST Suite to the protection components to provide the most comprehensive advanced functionality on the market. This simplifies the way software features are offered and ordered, with a single software line item covering the functionality the customer requires. What's more, there is no graduated pricing for the software suites: one software price is all you pay, allowing customers to physically scale their environments without unexpected software costs. In addition, software pricing is aggressively discounted as customers buy more suites or packs. All the features offered in the suites or packs (and more) are manageable directly from the Unisphere interface, or are integrated with link and launch to the Unisphere interface.
  • The VNX series has been expressly designed to take advantage of the latest innovation in Flash drive technology, maximizing the storage system's performance and efficiency while minimizing cost per GB. When even a few Flash drives are combined with the EMC FAST Suite, an unrivaled set of software that tiers data across heterogeneous drives and boosts the most active data to cache, customers receive the optimal benefits of a FLASH 1st strategy. FLASH 1st, available only through EMC, ensures customers never have to make concessions on cost or performance. Highly active data is served from up to 2 TB of Flash drives with FAST Cache, which dynamically absorbs unpredicted spikes in system workloads. As that data ages and becomes less active over time, FAST VP (Fully Automated Storage Tiering for Virtual Pools) tiers the data from high-performance to high-capacity drives in 1 GB increments, resulting in overall lower costs, regardless of application type or data age. Best of all, this all happens automatically based on customer-defined policies, saving application and storage administrators time and money by intelligently doing the work associated with pre- and post-provisioning tasks.
  • The FAST Suite improves performance and maximizes storage efficiency by deploying this FLASH 1st strategy. FAST Cache, an extendable cache of up to 2.1 TB, gives a real-time performance boost by ensuring the hottest data is served from the highest-performing Flash drives for as long as needed. FAST VP then complements FAST Cache by optimizing storage pools on a regular, scheduled basis. Customers define how and when data is tiered using policies that dynamically move the most active data to high-performance drives (e.g., Flash) and less active data to high-capacity drives, all in one-gigabyte increments for both block and file data. Together, they automatically optimize for the highest system performance and the lowest storage cost simultaneously. In addition, the FAST Suite includes Unisphere Quality of Service Manager (UQM) and Unisphere Analyzer. UQM provides even greater flexibility to tune the system by allowing specific controller resource adjustments to be dynamically applied to specific workloads (LUNs) as those workloads change throughout the day. Unisphere Analyzer is a powerful tool integrated into Unisphere that allows detailed statistics to be gathered from the VNX system to enable troubleshooting.
  • FAST Cache can be enabled at the LUN level (for classic RAID group LUNs) or at the pool level if using thick or thin pool LUNs, in which case all LUNs in the pool will be enabled for FAST Cache. It is not recommended to enable FAST Cache for all LUNs in the system, as some workloads do not experience sufficient cost/benefit from FAST Cache. The FAST Cache capacity options listed are the maximum sizes of FAST Cache for the specific platform. These capacities are presented as maximum capacity based upon the drive type (capacity using 100 GB drives / maximum capacity using 200 GB drives) and represent the usable mirrored capacity. 100 GB and 200 GB drives cannot be mixed in a FAST Cache configuration. FAST Cache benefits most workloads and also supports internal functionality such as snaps, thin LUNs, and compressed LUNs.
    Note to Presenter: Be aware that the VNX5100 cannot support BOTH Thin Provisioning and FAST Cache at the same time, due to physical memory constraints.
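The promotion idea behind FAST Cache (data that keeps getting hit is copied to Flash and served from there) can be sketched as a small model. The chunk granularity and the three-hit promotion threshold are illustrative assumptions, not the product's actual parameters.

```python
# Illustrative model of cache promotion: repeatedly accessed chunks are
# promoted to a limited-capacity Flash tier and served from it afterward.
from collections import Counter

class FastCacheModel:
    def __init__(self, capacity_chunks: int, promote_after: int = 3):
        self.capacity = capacity_chunks
        self.promote_after = promote_after   # assumed threshold, for illustration
        self.hits = Counter()
        self.flash = set()

    def access(self, chunk: str) -> str:
        if chunk in self.flash:
            return "flash"                   # hot data served from Flash
        self.hits[chunk] += 1
        if self.hits[chunk] >= self.promote_after and len(self.flash) < self.capacity:
            self.flash.add(chunk)            # promote after repeated access
        return "disk"

cache = FastCacheModel(capacity_chunks=2)
for _ in range(3):
    cache.access("chunk_A")                  # third access triggers promotion
print(cache.access("chunk_A"))               # "flash"
print(cache.access("chunk_B"))               # "disk": only one access so far
```

This also shows why enabling FAST Cache everywhere is not recommended: chunks that are touched once and never again (like chunk_B here) consume tracking effort without ever earning a promotion.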
  • The second feature provided in the FAST Suite, highly complementary to FAST Cache, is FAST for Virtual Pools. The combination of FAST Cache and FAST VP addresses the perennial storage management problem: the cost of optimizing the storage system. In many cases prior to FAST and FAST Cache, it was simply too resource-intensive to perform manual optimization, and many customers simply overprovisioned storage to ensure the performance requirements of a data set were met. With the arrival of Flash drives and the FAST Suite, we have a better way to achieve this fine cost/performance balance.
    The classic approach to storage provisioning can be repetitive and time-consuming and often produces uncertain results. It is not always obvious how to match capacity to the performance requirements of a workload's data. Even when a match is achieved, requirements change, and a storage system's provisioning may require constant adjustment. Storage tiering is one solution: it puts several different types of storage devices into an automatically managed storage pool. LUNs use the storage capacity they need from the pool, on the devices with the performance they need. Fully Automated Storage Tiering for Virtual Pools (FAST VP) is the EMC VNX feature that allows a single LUN to leverage the advantages of Flash, SAS, and Near-Line SAS drives through the use of pools. FAST solves these issues by providing automated sub-LUN-level tiering. FAST collects I/O activity statistics at 1 GB granularity (known as a slice). The relative activity level of each slice is used to determine which slices should be promoted to higher tiers of storage. Relocation is initiated at the user's discretion, through either manual initiation or an automated scheduler. Through the frequent relocation of 1 GB slices, FAST continuously adjusts to the dynamic nature of modern storage environments. This removes the need for manual, resource-intensive LUN migrations while still providing the performance levels required by the most active data set, thereby optimizing for cost and performance simultaneously.
  • As its name implies, FAST is a fully automated feature that implements a set of user-defined policies to ensure it meets the data service levels required by the business. Typically FAST moves data between Flash, SAS, and Near-Line SAS media as it ages and becomes less active, although customers may decide to configure a separate pool with just a small amount of SAS and Near-Line SAS for optimized TCO for their third-tier applications. FAST policies control how FAST applies to individual LUNs in a storage pool via the following options:
Auto-tier: The default setting for all pool LUNs upon creation. FAST relocates slices of these LUNs based solely on their activity level, after all slices of LUNs set to highest/lowest available tier have been relocated.
Highest available tier: Select for LUNs which, although not always the most active, require high levels of performance whenever they are accessed. FAST prioritizes slices of a LUN with this setting above all other settings.
Lowest available tier: Select for LUNs that are not performance- or response-time-sensitive. FAST keeps slices of these LUNs on the lowest storage tier available regardless of activity level.
No data movement: May only be selected after a LUN has been created. FAST will not move slices from their current positions once this selection has been made.
The tiering policy chosen also affects the initial placement of a LUN's slices within the available tiers. With auto-tier, initial placement distributes the data across all storage tiers available within the pool, based on the relative capacity of each tier. LUNs set to highest available tier have their component slices placed on the highest tier with available capacity; LUNs set to lowest available tier have their component slices placed on the lowest tier with available capacity. Additionally, a relocation schedule is set and the rate of data relocation defined. This allows the relocation process to run automatically at a quiet time of day and minimize the impact on the ongoing workload. Typically the relocation process is scheduled every 24 hours, but it is possible to relocate data as frequently as you want, with a tradeoff between system resources and relocation frequency. FAST VP also supports manual relocation should one be required outside the regular schedule.
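The policy-driven initial placement described above can be sketched in a few lines. This is a hypothetical illustration, not VNX code: the `Tier` class, `place_slices` function, and policy constants are invented for the example, and real FAST VP also reserves headroom per tier.

```python
# Illustrative sketch of FAST VP initial slice placement by tiering policy.
# Tiers are ordered highest-performance first (e.g. Flash, SAS, NL-SAS).
AUTO_TIER, HIGHEST, LOWEST = "auto-tier", "highest", "lowest"

class Tier:
    def __init__(self, name, free_slices):
        self.name = name
        self.free = free_slices  # free 1 GB slices available in this tier

def place_slices(policy, tiers, n):
    """Return a {tier_name: slice_count} placement for n new 1 GB slices."""
    placement = {t.name: 0 for t in tiers}
    if policy == AUTO_TIER:
        # Distribute across all tiers proportionally to each tier's capacity.
        total = sum(t.free for t in tiers)
        for t in tiers:
            placement[t.name] = n * t.free // total
        placement[tiers[0].name] += n - sum(placement.values())  # remainder
    else:
        # Fill the highest (or lowest) available tier first, spilling over.
        order = tiers if policy == HIGHEST else list(reversed(tiers))
        left = n
        for t in order:
            take = min(left, t.free)
            placement[t.name] = take
            left -= take
    return placement
```

For example, with a pool of 10 free Flash slices, 30 SAS, and 60 NL-SAS, a 15-slice LUN set to highest available tier lands as 10 on Flash and 5 on SAS.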
  • The value of FAST lies in its ability to drive the most heavily accessed data onto the highest tiers of storage (e.g., Flash) while optimizing TCO by moving rarely accessed data to high-capacity NL-SAS drives. It achieves this with several mechanisms:
Statistics collection: One slice of data is deemed "hotter" (more active) or "colder" (less active) than another based on the relative activity levels of those slices. Activity level is determined simply by counting the number of I/Os, reads and writes, bound for each slice. FAST maintains a cumulative I/O count and weights each I/O by how recently it arrived; this weight deteriorates over time. New I/O is given full weight, after approximately 24 hours the same I/O carries only about half weight, and over time the relative weighting continues to decline. Statistics collection happens continuously in the background on all pool LUNs.
Analysis: Once per hour, the collected data is analyzed to produce a rank ordering of each slice within the pool, from "hottest" to "coldest." This ranking is relative to the pool: a "hot" slice in one pool may be "cold" by another pool's ranking, and there is no system-level activity threshold. The user can influence the ranking of a LUN and its component slices by changing the default policy from auto-tier to highest or lowest tier preferred, in which case the tiering policy takes precedence over activity level.
Relocation: During user-defined relocation windows, 1 GB slices are relocated according to the rank ordering performed in the analysis stage. FAST prioritizes relocating slices to higher tiers; slices are relocated to lower tiers only if the space they occupy is required for a higher-priority slice. In this way, FAST ensures maximum utility from the highest tiers of storage: as data is added to the pool it is initially distributed across the tiers and then moved up to higher tiers as space allows. 10% free space is maintained in each tier to absorb new allocations defined as "highest available tier" between relocation cycles, and lower-tier spindles are used as capacity demand grows. Relocation can be initiated either manually or by a user-configurable automated scheduler.
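The statistics-collection and analysis steps above amount to an exponentially decayed I/O counter per slice plus an hourly sort. Here is a minimal sketch under the stated assumptions (full weight for new I/O, roughly half weight after 24 hours); the class and function names are illustrative, not EMC's implementation.

```python
# Illustrative sketch (not EMC's implementation) of FAST VP's decay-weighted
# slice ranking: each I/O adds full weight, and a slice's accumulated score
# halves roughly every 24 hours, so recent activity dominates the ranking.
HALF_LIFE_HOURS = 24.0
DECAY_PER_HOUR = 0.5 ** (1.0 / HALF_LIFE_HOURS)  # ~0.9715

class Slice:
    def __init__(self, slice_id):
        self.slice_id = slice_id
        self.score = 0.0  # decayed cumulative I/O count

    def tick_hour(self, io_count):
        # Age the old score, then add this hour's reads + writes at full weight.
        self.score = self.score * DECAY_PER_HOUR + io_count

def rank_slices(slices):
    # Hourly analysis: order slices "hottest" to "coldest" within the pool.
    return sorted(slices, key=lambda s: s.score, reverse=True)

if __name__ == "__main__":
    hot, cooling = Slice("hot"), Slice("cooling")
    cooling.tick_hour(1000)       # busy yesterday...
    for _ in range(24):
        cooling.tick_hour(0)      # ...then idle 24 hours: score halves to 500
    hot.tick_hour(600)            # busy right now
    print([s.slice_id for s in rank_slices([hot, cooling])])  # ['hot', 'cooling']
```

Note the ranking is relative, mirroring the text: a slice with 500 decayed I/Os is "cold" next to one with 600, regardless of any absolute threshold.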
  • Slide 1 of 3 – The following three slides show a visual animation of FAST VP operating in conjunction with FAST Cache. FAST Cache and FAST VP should be used together to yield high performance and TCO from the storage system. As an example, Flash drives can be used to create FAST Cache while FAST VP runs on a pool consisting of Flash, SAS, and NL-SAS disk drives. This slide shows activity levels changing, and the subsequent scheduled data relocation. Note to Presenter: Four blocks change color (activity level) in sequence; two blocks move up and two move down.
  • Slide 2 of 3 – This shows sub-slice chunks of 64 KB granularity heating up and being copied into FAST Cache. Note to Presenter: Sub-slice chunks turn red and are copied into FAST Cache, in sequence.
  • Slide 3 of 3 – This shows more active I/O being swapped into FAST Cache. Note to Presenter: Sub-slice chunks in FAST Cache cool off relatively (turning yellow); when sub-slice chunks on disk warm up (turn red), they swap places in FAST Cache with the cooler data, in sequence. The combined benefit shown here is that FAST Cache provides an immediate performance benefit to any bursty data, while FAST VP moves warmer data to SAS drives and colder data to NL-SAS drives. In addition to the performance benefit, there is also a TCO benefit: FAST Cache with a small number of Flash drives serves the most frequently accessed data, while FAST VP with Flash, SAS, and NL-SAS drives optimizes disk utilization and efficiency and accommodates the vagaries of longer-term data access patterns.
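The promote-and-swap behavior in the animation can be roughly modeled as follows. This is an illustrative sketch only: the `PROMOTE_AFTER` threshold, eviction rule, and data structures are assumptions made for the example, not FAST Cache internals.

```python
# Rough model of the FAST Cache behavior described above: 64 KB chunks that
# are hit repeatedly get copied into a fixed-size Flash cache, and a hotter
# disk chunk displaces the relatively coolest cached chunk when space is tight.
PROMOTE_AFTER = 3  # hypothetical: consider promotion on a chunk's 3rd hit

class FastCache:
    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.cached = {}  # chunk_id -> hit count while cached
        self.hits = {}    # chunk_id -> hit count while on disk

    def access(self, chunk_id):
        if chunk_id in self.cached:
            self.cached[chunk_id] += 1
            return "cache-hit"
        self.hits[chunk_id] = self.hits.get(chunk_id, 0) + 1
        if self.hits[chunk_id] >= PROMOTE_AFTER:
            self._promote(chunk_id)
        return "disk-hit"

    def _promote(self, chunk_id):
        if len(self.cached) >= self.capacity:
            # Swap out the relatively coolest cached chunk, but only if the
            # candidate is actually hotter than it.
            coldest = min(self.cached, key=self.cached.get)
            if self.cached[coldest] >= self.hits[chunk_id]:
                return  # not hot enough to displace anything yet
            del self.cached[coldest]
        self.cached[chunk_id] = self.hits.pop(chunk_id)
```

The key property, matching the animation, is that cached chunks that cool off relative to busy disk chunks eventually get swapped out in favor of the hotter data.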
  • FAST PROOF POINTS: The industry-leading innovations of VNX and the FAST Suite translate into compelling improvements (sometimes greater than 3x) in real-world virtualized application environments. For Microsoft SQL, VNX supports more than 3x the number of users and transactions versus the CX4. In environments running VMware View, VNX can boot 500 virtual desktops in 8 minutes versus 27 minutes with our previous generation (without Flash or the FAST Suite). Likewise, in virtualized Oracle environments, VNX can support more than 3x the number of users and transactions. The following five slides support the 3x numbers for each of these applications (as well as DSS applications). The starting point for comparison is the typical customer base: a 1-2 year old product, not using Flash drives or FAST technology. In most cases we focus on how we can cost-effectively improve performance to provide improved service levels to our customers. Where we have the information, we include relative cost comparisons of the old configuration versus the new configuration. Typically the system cost increases with the new platform, but the performance increases are VERY substantial, so the important thing to focus on is the cost/performance benefit. Note to Presenter: The customer has to be looking to deliver the increased service level; otherwise all they will see is a cost increase. The final slide in the deck uses the DSS example to show the benefits referenced here as a TCO comparison, extrapolating the legacy CLARiiON CX4/NS system cost to deliver the performance capability of the new VNX system (note that in the DSS case it was not necessary to use Flash and the FAST Suite). Let's take a closer look at how we achieved some of these amazing performance improvements.
  • Note to Presenter: The SQL configuration used in this test included the following components:
SQL Server 2008 R2
VNX5700
DB size: 400 GB, 4 DB files on 4 LUNs
LUN type: Classic RAID Group LUN
Drives: 20 x 300 GB SAS (+ 4 x 100 GB EFD for FAST Cache)
4 Gb FC switch/interconnect
Testbed: 1 Dell R710 (SQL), 4 Dell 2950 (load generation)
The solution cost of this configuration (including hardware and the FAST Suite) is 102% of the CX4 configuration. This is not insignificant, although the performance improvement is very tangible, and a spinning-drive solution configured to match would be close to double the VNX cost. This remains the most efficient way to deliver a given service level. Note: URL to Reference Architecture TBD at GA or GA + 30.
  • The use of Flash technology with the FAST Suite is a great fit for virtual desktops / VMware View. In this case, we show how the user experience of the virtual desktop environment is enhanced by FAST, FAST Cache, and Flash drives with VNX. Virtual desktop environments have to withstand very high I/O activity scenarios such as a boot storm, where after a failure all users log on at once when the system returns, bombarding the storage. Architects have to cater for this scenario so the desktop solution does not become unusable in this event. Flash and the FAST Suite allow the user to cost-effectively support a solution that can cater for boot storms. The solution leverages both FAST Cache and FAST VP technologies on thick pool LUNs. The same is true of the desktop refresh process in virtual desktops. Of the two configurations used in this example, the VNX configuration (including hardware and the FAST Suite) is in fact 3% lower in cost than the CX4 configuration.
View configuration:
VMware View 4.5, vSphere 4.1, View Composer 2.5 (Linked Clones)
VNX5300
DB size: 400 GB, 4 DB files on 4 LUNs
LUN type: Thick Pool LUN
Drives:
5 EFD: 2 x 100 GB read-only replicas, 2 x 100 GB FAST Cache, 1 hot spare
21 SAS: 15 x linked clones (FAST VP), 6 x vault drives
15 NL-SAS: 9 x home folders, 5 x FAST VP
8 Gb FC switch (Brocade)
Testbed: 8 Dell R710, 2 Dell 2950, Cisco 6509, Brocade DS5100
Desktops: Windows 7
Three different areas of storage are used in this solution: replica storage, linked clone storage, and home directories / roaming profiles / user data storage. The replica storage is 2 EFDs as RAID 1 (1+1) Classic RAID Group LUNs. The linked clone storage is a RAID 5 storage pool with 15 SAS drives and 5 NL-SAS drives; this pool uses FAST VP for movement between SAS and NL-SAS, with the "highest tier first" policy so the SAS drives are used right away and performance is not compromised. FAST Cache is also enabled on this pool, using 2 EFDs. The user data storage is a CIFS share on one LUN of 9 NL-SAS drives. Note: URL to Reference Architecture TBD at GA or GA + 30.
  • This slide complements the prior slide (same configuration) and looks at the VNX solution and FAST Suite from a different perspective: what would the cost comparison be if you configured a conventional storage solution (CX4 and spinning media) to meet the performance delivered by the new VNX with FAST and FAST Cache for this virtual desktop environment? To match the performance capability of this VNX5300, you would need over 180 drives and the throughput capability of a CX4-480. The cost benefit is close to 70%, including hardware and software costs. Note to Presenter: In this case we require a larger CX4 configuration to meet the requirements of the application. This phenomenon may also apply to other applications, because we have historically met application performance requirements on CX4 by "short stroking" the drives and up-selling the platform. Note: URL to Reference Architecture TBD at GA or GA + 30.
  • Oracle environment: the VNX5300 uses FC connectivity to the Oracle host. The cost comparison here is based on the block-only VNX5300 and includes the FAST Suite.
Database version: 11g
Single Instance / RAC: Single Instance
Virtualized / Physical: Virtualized on vSphere 4.1
Database size: 1 TB
Read/write ratio: 60/40
Metric: Transactions per minute
Workload mix: Predominantly small random I/O with an average I/O size of 8 KB
Note: URL to technical white paper TBD for end of Q1.
  • Historically, DSS workloads have not been a sweet spot for the CX4, except possibly the CX4-960. The VNX changes this. This solution does not leverage Flash drives or the FAST Suite, as large sequential workloads do not lend themselves to that technology. The huge improvements in total throughput (particularly in the lower-end platforms) can drive up to 4.5x the bandwidth: the CX4-120 can achieve around 750 MB/s, while the VNX5300 can achieve around 3,500 MB/s! The cost of the comparable configuration in this case (block-only VNX5300) is 84% higher than the CX4; however, achieving that throughput with CX4 would require a CX4-960 platform, which would be considerably more expensive than the VNX5300. Main takeaway: VNX is a GREAT solution for DSS workloads. Note: URL to Reference Architecture TBD at GA or GA + 30.
  • The VNX Security and Compliance Suite (also sold as part of the Total Efficiency Pack) provides a number of complementary tools that help secure the modern data center.
EMC VNX Host Encryption: Host Encryption is a host-based tool that encrypts data at the host when VNX for Block is used. Data encryption to protect sensitive data from unauthorized access is becoming a key IT requirement. It limits exposure to security breaches: even if someone gains access to the storage media, the data remains protected against unauthorized access to sensitive information. Data encryption is also a key way to protect data in transit, including both the electronic and physical movement of data for backup, disaster recovery, and/or maintenance. Finally, data encryption helps address compliance with industry regulations such as PCI, Sarbanes-Oxley, SB 1386, HIPAA, the U.K.'s DPA, and Directive 95/46/EC, as well as internal security mandates. The EMC VNX host encryption capability (leveraging proven PowerPath technology) addresses all of these key customer needs.
EMC VNX File Level Retention (FLR): FLR is a VNX for File capability that protects files from modification and deletion until a user-specified date. FLR enables customers to create a permanent, unalterable set of files and directories and ensures the integrity of the data. At the NAS level this effectively provides what is traditionally known as Write Once Read Many (WORM) access within VNX for File, and it includes tools to help users manage FLR automatically. FLR comes in two versions: an enterprise version allowing for self-governance, and a compliance version that meets compliance rules such as SEC 17a-4(f).
EMC VNX Event Enabler (VEE): VEE is a VNX for File capability that provides an integration point between best-of-breed third-party storage management applications and VNX for File (NAS). In essence, VEE provides an alerting facility so that third-party applications can take action on NAS client activities on the VNX. For example, VEE supports third-party anti-virus engines: when a client attempts to save a file, the system indicates to the AV engine that the file needs to be checked for viruses.
  • With the same capabilities as EMC PowerPath Encryption, EMC VNX Host Encryption is a midrange offering that provides data encryption at the storage-device level to protect sensitive data on VNX arrays. As a host-based encryption product, VNX Host Encryption lets you choose the LUNs or volumes that contain sensitive data and need to be encrypted, thereby reducing management and infrastructure requirements. Data is encrypted as it leaves the host and is secure from the host to the disk where it resides. VNX Host Encryption also offers the Emulex hardware-assist HBA encryption option, which offloads encryption processing to the HBA with near-zero impact on the host CPU; software-only encryption is also available for less CPU-intensive environments. VNX Host Encryption is available for EMC VNX-only storage environments, while PowerPath Encryption supports mixed EMC Symmetrix, VNX, and CLARiiON and/or non-EMC storage environments. The main goal of encrypting sensitive data is to control how data is accessed, and the focus of PowerPath Encryption and VNX Host Encryption is to prevent unauthorized access to data when it is removed from a protected perimeter. During disk migrations, rotations, or equipment upgrades, sensitive information may leave the protected perimeter of a secure data center. Using PowerPath Encryption or VNX Host Encryption to secure data while inside the protected perimeter ensures that should sensitive data leave the secure area, the data is inaccessible without authorized access. Note to Presenter: VNX Host Encryption is supported on Windows, Linux, and Solaris hosts, with AIX support coming in Q2 2011.
  • The EMC VNX for File platforms with the Security and Compliance Suite offer VNX Event Enabler (VEE) functionality. VEE is an alerting framework that contains, and provides a working environment for, the following facilities:
EMC VNX AntiVirus Agent provides an antivirus solution to clients using a VNX system. It uses the industry-standard CIFS protocol in a Microsoft Windows Server 2003, Windows 2000, Windows NT, Windows XP, or Windows 2008 domain. The antivirus agent uses third-party antivirus software to identify and eliminate known viruses before they infect files on the storage system.
EMC VNX Event Publishing Agent (CEPA) is a mechanism whereby applications can register to receive event notification and context from Celerra. CEPA delivers to the consuming application both the event notification and the associated context (file/directory metadata needed to make a business policy decision) in one message. While the framework includes both the AntiVirus Agent and the Event Publishing Agent, they can run independently.
The benefits of the VEE framework include:
High availability architected in
Scalable as your environment grows
Load balancing across application servers
Support for heterogeneous AV engines in your environment
Integration with the top AV, quota management, and auditing vendors
Protection for your NAS file environment
The VEE framework allows us to extend the integration to new applications as required, e.g. content management.
  • For simple peace of mind and total protection, whether local or remote, and whether for encryption or for application protection, VNX protects your system better than ever. It offers unified replication for local and remote data recovery, with DVR-like roll-back capabilities for business continuity on block-based storage. By allowing recovery of production applications with minimal data exposure through roll-back to a point in time, you can now restore with a simple click. Simply define your recovery point objectives (RPOs), set it, and forget it. Automating processes for failover and failback further reduces risk exposure and simplifies ongoing protection management.
  • Local replication can significantly enhance your current business and technical operations by providing access points to production data; enabling parallel-processing activities like backups, as well as disk-based recovery after logical corruption; and creating test environments for faster time to revenue for applications. Every business strives to increase productivity and usage of its most important resource: information. This asset is key to finding the right customers, building the right products, and offering the best service. The greater the extent to which corporate information can be shared, re-used, and exploited, the greater the competitive advantage a company can gain. EMC offers local snapshots for Block (VNX SnapView) and File (VNX SnapSure), as well as a continuous data protection software option (RecoverPoint/SE), to provide the broadest set of native local replication capabilities in the market. Note to Presenter: The suite includes RecoverPoint/SE software only. Using RecoverPoint with the VNX solution also requires RecoverPoint hardware appliances, which are ordered separately.
  • The VNX Remote Protection pack provides a number of value-added options for remote data protection, repurposing, and disaster recovery.
RecoverPoint/SE CRR is a comprehensive data protection solution that provides bi-directional synchronous and asynchronous replication. RecoverPoint/SE allows users to recover applications remotely to any significant point in time without impact to the production application or to ongoing replication.
MirrorView provides a cost-effective replication solution for smaller FC or iSCSI host environments. It can be implemented in synchronous or asynchronous mode depending on RTO and replication-distance requirements. MirrorView supports consistency groups for multi-LUN or federated applications, and it also supports multi-site configurations.
VNX Replicator is a file-system-level replication tool built for ease of use and multi-site replication. Replicator allows users to set service levels for recovery time objectives that account for variations in the environment and adjust to ensure the required business SLAs are met.
  • RecoverPoint supports unified block (SAN) and file system (NAS) replication for VNX series arrays, as well as for the VG2 and VG8 gateways. RecoverPoint only supports file system replication of a VNX VG2 or VNX VG8 gateway running VNX OE for File V7 that is attached to a VNX series array or an EMC CLARiiON CX4; Symmetrix back-end storage is not supported. The use case for file system replication is disaster recovery enabled by full-cabinet-level failover (this applies only to file system objects; LUN objects continue to be manageable at the host level). For replication of file systems between two VNX series arrays there is only one file system consistency group in each direction. Failover is controlled by a nas command from the Control Station on the VNX series array at the DR site. During failover, all the Data Movers at the primary site fail over and/or are shut down. RecoverPoint/SE CRR for file systems recovers to the most recent point in time at the remote site upon failover. File system storage that is not replicated to the remote site by RecoverPoint/SE CRR remains available on the VNX series array at the primary site. RecoverPoint can replicate the file system data either synchronously or asynchronously; if asynchronously, the RPO is set via the remote Control Station's NAS CLI and is in the range of 1 to 5 minutes. If the customer needs to federate a block consistency group with a file consistency group, they must use the Group Set feature of the RecoverPoint GUI or the parallel bookmark command in the RecoverPoint CLI. For GUI failover, VMware Site Recovery Manager, or sub-cabinet-level recovery, the customer must use VNX Replicator. Note to Presenter: It is possible to replicate a smaller subset of file data (down to the X-Blade/Data Mover level if required).
  • The VNX Application Protection Suite includes:
Replication Manager is a powerful tool designed to automate the creation, management, and use of EMC point-in-time replicas (snapshots, clones, mirrors). No scripting is required! Replication Manager auto-discovers the environment (application host, associated storage, and underlying replication technology) and enables easy point-and-click management by integrating the technology stack from the application to the storage and replicas. Robust functionality within Replication Manager has the intelligence to manage point-in-time replicas in the context of the application, so you can create replicas of your production environment with minimum impact. The easy-to-use, centralized console and GUI make it very easy to create and manage point-in-time copies.
Data Protection Advisor for Replication provides a unique replication analysis capability that helps companies address cost, compliance, complexity, and confidence in their replication and disaster recovery configurations, expanding visibility into their environments. It discovers and catalogs the application structure and all the associated storage, devices, and replicas, and customers can define and measure protection policies by application or server. With this understanding of the environment, Data Protection Advisor for Replication gives businesses the needed visibility, providing automatic oversight and audit capabilities to all interested parties such as storage or database administrators. Think of Data Protection Advisor for Replication as a 24x7 super IT expert that enables your business to:
View all recovery points in the environment: local and remote snaps, clones, and real-time replication
Identify recovery gaps and failures for fast remediation, resulting in improved recoverability
View remote replication lag and set alerts based on defined thresholds
In RecoverPoint environments, Data Protection Advisor has expanded capabilities such as checking for link state changes and WAN/SAN usage, as well as the ability to monitor configuration changes on active RecoverPoint appliances. Additionally, for RecoverPoint environments customers can measure and charge back replication services to lines of business based on either the size of data protected (the amount of data protected by RecoverPoint) or the size of data transferred (the amount of data replicated from primary storage by RecoverPoint appliances). Data Protection Advisor for Replication is one of a number of products under the DPA umbrella; other DPA titles include DPA for Backup, DPA for Virtualization, and DPA for File Server.
  • Data Protection Advisor provides a unique replication analysis capability that helps companies address cost, compliance, complexity, and confidence in their replication and disaster recovery configurations, expanding visibility into RecoverPoint, MirrorView, and SnapView environments. Data Protection Advisor discovers and catalogs the application structure and all the associated storage, devices, and replicas. You can define and measure protection policies by application or server. With this understanding of the environment, Data Protection Advisor gives your business the visibility it needs, providing automatic oversight and audit capabilities to all interested parties such as storage or database administrators. Think of Data Protection Advisor as a 24x7 super IT expert that enables your business to:
Identify recovery gaps and failures, leading to improved recoverability
Understand your business's recoverability status 24x7
Create analysis rules for RecoverPoint, such as checking for "link state change," "WAN/SAN usage high," and "replication lag too high"
Monitor configuration changes on active RecoverPoint appliances
Integrated reporting provides the ability to measure and charge back replication services to lines of business and to prove compliance with service-level protection policies. Service level agreement reports include percent downtime, service level status, and service level agreement performance for local and remote replication. Note to Presenter: Data Protection Advisor for Replication does not support Replicator, SnapSure, or RecoverPoint/SE on VNX file systems. A separately orderable version of DPA for NAS supports generic NAS and data-protection-specific monitoring. Data Protection Advisor also offers a host-less replication option for customers who prefer not to install agents on production servers. This is available for replication running on VNX, CLARiiON, and Symmetrix arrays in all operating environments, and is ideal for environments where customers prefer not to allow contact with the replication host or to provide root or pseudo-root credentials. DPA generates alerts and allows customers to generate replication reports after scanning the arrays, without the need to scan hosts. Data Protection Advisor will detect the following gaps:
When only part of the node initiator (WWN) devices are assigned to a replication group
When a continuous replication is halted
Note to Presenter: For optional host-less replication analysis, Data Protection Advisor offers the following configuration reports, a subset of the reports available when hosts are also scanned (for the full list, refer to the Data Protection Advisor product documentation):
RDF Groups Configuration
MirrorView Configuration
Masking Configuration
Node Initiator to Storage Configuration Topology
Replication Configuration Report
Storage to Server Topology Report
  • Within Data Protection Advisor you can create protection policies for continuous data protection (Note to Presenter: Click now, and as indicated, in Slide Show mode for animation) such as MirrorView and RecoverPoint (and SRDF for Symmetrix users), as well as for point-in-time replicas such as SnapView (and TimeFinder). Once you have defined a policy, assign it to a server, group of servers, application, or line of business: whatever grouping makes the most sense for your business. Data Protection Advisor will then monitor compliance with the policy and provide alerts when it is violated. You can schedule reports for auditing purposes to prove protection policy compliance, essentially automating the auditing process for your business. This is a brief overview of Data Protection Advisor's replication analysis support for VNX and RecoverPoint.
  • In summary, the VNX series is the most scalable, flexible, and powerful mid-tier unified storage solution, supported by the most flexible and powerful software suite available in the mid-tier.

Vnx series-technical-review-110616214632-phpapp02 Presentation Transcript

  • INTRODUCING VNX SERIES: Technical Presentation. Givonn Jones, Consultant, February 2011. © Copyright 2011 EMC Corporation. All rights reserved.
  • IT Challenges: Tougher than Ever. Four central themes facing every decision maker today:
Overcome flat budgets
Manage escalating complexity
Cope with relentless data growth
Meet increased business demands
  • IT Challenges: Tougher than Ever. Four central themes facing every decision maker today:
Affordable: Overcome flat budgets
Simple: Manage escalating complexity
Efficient: Cope with relentless data growth
Powerful: Meet increased business demands
  • Next Generation Unified Storage: Optimized for today's virtualized IT. Unisphere™. VNXe3100, VNXe3300, VNX5100, VNX5300, VNX5500, VNX5700, VNX7500. Affordable. Simple. Efficient. Powerful.
  • Storage Connectivity Profile is Changing
Increasing emphasis on Ethernet-based connectivity options
EMC offers all major storage connectivity options today
Revenue ($M) projections, 2010-2014 CAGR:
NAS + iSCSI + FCoE: 13.9%
Fibre Channel SAN: 1.3%
Network-attached storage (NAS): 5.4%
iSCSI SAN: 18.2%
External DAS: -8.8%
Fibre Channel over Ethernet: 104.6%
Switched SAS: 31.9%
Source: IDC (7/10) and EMC
  • Click. Automate. Done.© Copyright 2011 EMC Corporation. All rights reserved. 6
  • Powerful, Flexible Modular Architecture: More processing power. Self-optimizing pools. Any network.
Unified multi-protocol (SAN, NAS, cloud):
– Full support for any network
– Unified block (iSCSI, FC, FCoE), file (CIFS, NFS, pNFS, MPFS) and object (REST, SOAP) support
– Share volumes and files
– Fully provisioned LUNs
Multi-controller scale*:
– Add X-Blades for the right amount of file sharing power
– Add storage processors for more storage pool scale
– Scales to 96 CPU cores and 4,000 drives
Self-optimizing storage pools (SSD + HDD):
– Active data is automatically moved to Flash for fastest performance
– Inactive data is automatically moved out of Flash to large disks for lowest capacity cost
– Fully automated, always on, no management intervention needed: set it and forget it
– Lowest transaction cost and lowest capacity cost, simultaneously!
*2 SPs requires Gateway with multiple back ends. HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 7
  • Hardware Architectural Overview: Optimized for Flash
– POWERFUL. High performance: latest technology; newest Intel multi-core CPU, large memory; SAS architecture designed for the Flash drive age
– FLEXIBLE. Optimized packaging: efficient packaging with dense disk options and built-in energy efficiency; flexible IO modules (FC, FCoE, 1 Gb and 10 Gb IP); future-proofed with a plug-in architecture for next-generation connectivity
– AVAILABLE. Advanced failover: always on, no compromise, for availability while maintaining application service levels
– ECONOMICAL. Storage tiers: support a mix of ultra-performance, performance and capacity drives for optimal economics. Modular design: scale as business needs grow
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 8
  • Modular Architecture: Designed for optimal flexibility
– Proven architecture that extends existing Celerra and CLARiiON investment (single UI, third-party management integration)
– Scalable capacity and performance: dedicated system elements; Flash, SAS and Near-Line SAS for optimal capacity/performance balance; optimized standalone gateway option
– Native NAS and SAN implementation; object delivered via an integrated solution (Atmos VE)
– Simple, powerful, intuitive consolidated management
– Future proofed: flexible IO options; plug and play
Layered view: Object Technology (Atmos VE); File Protocol with Native File Services (NFS, CIFS, FTP, HA IP networking, IPv6, AVM, deduplication and compression); Block Protocol with Base Native Block Services (storage pooling, automation, security, RAID, SAN, etc.); Disk Expansion (Flash, SAS, Near-Line SAS)
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 9
  • VNX: Modular Unified and Gateway Implementation Models
UNIFIED STORAGE (file, object and block on the VNX series):
– Easy to deploy, simple to manage
– Scale capacity
– Multi-protocol. File: NFS (including pNFS), CIFS, MPFS; Block: iSCSI, Fibre Channel, FCoE; Object: REST, SOAP
GATEWAY (file and object in front of a Fibre Channel SAN backed by VNX series, Symmetrix or CLARiiON arrays):
– Leverage existing storage investment
– Scale performance and capacity
– Shared storage; add to an existing block implementation
– File: NFS (including pNFS), CIFS, MPFS; Object: REST, SOAP
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 10
  • VNX System Architecture (diagram): application, Exchange, Oracle and virtual servers, plus Atmos VE clients for object access, connect over the SAN and LAN (FC, iSCSI, FCoE, 10 Gb Ethernet) to VNX unified storage. Two to eight VNX X-Blades running VNX OE for File fail over to one another; dual VNX storage processors running VNX OE for Block fail over to each other; redundant power supplies, standby power supplies (SPS) and link control cards (LCC) back the Flash, SAS and Near-Line SAS drives. HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 11
  • VNX Series: Modular Unified and Gateway models, values listed as VNX5100 / VNX5300 / VNX5500 / VNX5700 / VNX7500 / VG2 / VG8
– Min. form factor: 4U / 7U / 7U-9U / 8U-11U / 8U-15U / 2U / 2U
– Max. drives: 75 / 125 / 250 / 500 / 1000 / 4000 / 4000
– Drive types: 3.5" Flash, SAS and NL-SAS, and 2.5" 10K SAS (gateways: back-end dependent)
FILE:
– Configurable I/O slots per X-Blade: n/a / 3 / 4 / 4 / 5 / 3 / 5
– X-Blades: n/a / 1 or 2 / 1 to 3 / 2 to 4 / 2 to 8 / 1 or 2 / 2 to 8
– Capacity per X-Blade (TB): 200 or 256, model dependent
– System memory: n/a / 6 GB per blade / 12 / 12 / 24 / 6 / 24
– Protocols: NFS, CIFS, MPFS, pNFS
BLOCK:
– Configurable I/O slots per SP: 0 / 2 / 2 / 5 / 5 / n/a (gateways)
– Embedded I/O ports: 4 FC ports and 2 back-end SAS ports (VNX5100-VNX5500); 0 (VNX5700, VNX7500); n/a (gateways)
– System memory per SP: 4 GB / 8 / 12 / 18 / 24 / n/a
– Protocols: FC (VNX5100); FC, iSCSI, FCoE (other models); n/a (gateways)
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 12
  • VNX Unified Storage: Maximum CPU cores of unified data processing power
– 12 cores dedicated to storage pool management and high-performance block system management
– 48 cores dedicated to object, networked file serving and data sharing
– Automatic server optimization (MPFS/pNFS host) and automatic data optimization across Flash (highest performance), SAS (good performance) and NL-SAS (highest capacity)
– Build a single cloud (NAS, SAN, cloud; block, file and object) with "N" number of systems
– VNX supports all protocols, today and in the future
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 13
  • VNX Form Factors
– Drives: add DAEs (Disk Array Enclosures) up to the maximum capacity allowed; the DAE-15x holds 15 x 3.5" and 2.5" drives in 3U, and the DAE-25x holds 25 x 2.5" drives in 2U
– Can mix drive types in the same DAE (e.g. 7.2K rpm + 15K rpm) and mix different DAEs in a system (e.g. 15-drive and 25-drive DAEs)
– File Only or Unified base*: needs block hardware plus file hardware; DME (Data Mover Enclosure, 2U) and Control Station (1U)
– Block Only base: a DAE holds drives only; a DPE (Disk Processor Enclosure, 3U) holds storage processors plus drives; an SPE (Storage Processor Enclosure, 2U) holds storage processors only
– SPS (Standby Power Supply): 1U
– The VNX5100, VNX5300 and VNX5500 use a DPE; the VNX5700 and VNX7500 use an SPE
*File and Unified options are not available on the VNX5100
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 14
  • VNX Unified Storage Components: Architecture and packaging
File implementation (X-Blade enclosure, i.e. Data Mover Enclosure):
– VNX Operating Environment for File
– From 2 to 8 blades supported, with configurable failover options
– Flexible IO connectivity options
Block implementation (storage or disk processor enclosure*):
– VNX Operating Environment for Block
– Dual active storage processors with automatic failover
– Flexible IO connectivity options: 4-port 8 Gb FC, 4-port 6 Gb SAS, 2-port 10 GbE, 4-port copper 1 GbE, 2-port 10 Gb FCoE
Also in the rack: standby power supplies (battery backup), 1 or 2 control stations, and disk array enclosures (15 x 3.5"/2.5" or 25 x 2.5")
Start with FC, iSCSI or NAS; add other protocols seamlessly, as needed; upgradeable to add Fibre Channel ports and/or native 1 or 10 Gigabit Ethernet iSCSI
*A DPE contains disks; an SPE does not contain disks
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 15
  • Simplified Upgradeability to Unified
– Unified is the default configuration
– Block only: SPE/DPE with the "File Ready Option": reserve 4U-6U of rack space and FC ports or an I/O slot. To upgrade to Unified: add Data Movers (1 to 8), add Control Stations (1 or 2), add an SFP or FC I/O module, and add the Unisphere file enabler
– File only: same hardware as Unified; add an FE SLIC to the array and add the Unisphere block enabler
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 16
  • VNX Storage Processor: Dedicated processing for block services
– Block services: flexible RAID options (1/0, 3, 5, 6) provide optimal performance AND protection; RAID Groups and Virtual Pools
– Administration/management: through the storage processor Ethernet ports on a private network; aggregated to a single file/block view in Unisphere; single point of management/control
– High availability: active/active controllers with an FC/iSCSI failover option
– Connects to hosts via FC, FCoE or iSCSI for flexibility of connection
– Connects to storage via 4-lane 6 Gb SAS for up to 24 Gb per SAS bus (6x the prior generation)
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 17
  • Flexible Storage Tiers: Optimize TCO with tiered service levels
SAS back-end connect for performance and reliability:
– Up to 24 Gb (4 x 6 Gb) per SAS bus
– Point-to-point, robust interconnect
Flash (SSD) options (highest performance):
– Highest performing: ~3,000 IOs per second
– 3.5" drives: 100 GB and 200 GB
SAS (HDD) options (10K/15K rpm, good performance):
– 3.5" drives (195 drives/rack): 300 GB and 600 GB, 10K and 15K RPM, ~140 (10K) to ~180 (15K) IOs per second
– 2.5" drives (500 drives/rack): 300 GB and 600 GB, 10K RPM
Near-Line SAS (HDD) options (7.2K rpm, highest capacity):
– 3.5" drives (195 drives/rack): 2 TB, 7.2K RPM, ~90 IOs per second
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 18
  • Flash Drives for Higher Service Levels: Flash drives introduce a paradigm shift in the storage industry
Key benefits:
– Faster performance: up to 30 times the IOPS, with less than 1 millisecond response time (1 Flash drive vs. 1, 10 or 30 15K Fibre Channel drives on a response-time/IOPS curve)
– More energy efficient: uses 38% less energy per terabyte and 95% less energy per I/O
– Better reliability: no moving parts, faster RAID rebuilds
Ideal for:
– Oracle (NFS), and Microsoft SQL Server and Exchange (iSCSI)
– VMware iSCSI and NFS (particularly VMware View)
– File sharing, software engineering, and CAD/CAM environments
Complements high-capacity, cost-effective, energy-efficient NL-SAS drives
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 19
  • VNX: Designed for Flash, optimized for all of your virtual applications
Benchmark charts (CX4-120/240/480/960 vs. the corresponding VNX5100-VNX7500 platforms): better bandwidth (MB/s, typical DSS, rotating drives vs. Flash drives), better performance for mixed workloads (KOPs, SAS drives vs. Flash drives), and better performance for file serving (number of users, NFS file-type workload)
– Flash fully leverages the power of the VNX system
– End-to-end throughput improvements enable 2-3x performance improvements
All claims are subject to validation testing
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 20
  • Massively Scalable Performance with VNX: Independently scale file and block infrastructure
Multi-controller scale (e.g. a VG8 gateway in front of 4 VNX arrays over an FC SAN):
– Add X-Blades for the right amount of file sharing power
– Add storage processors for more storage pool scale
– Scales to 96 CPU cores, 4,000 drives and 384 GB of DRAM memory
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 21
  • VNX X-Blades: Run the world’s most mature NAS operating system
– Up to eight independent file servers contained in a single system; scale by adding enclosures (2 X-Blades per X-Blade enclosure)
– Managed as one high-performance, high-availability server
– Connects data to the network
– VNX Operating Environment for File: no performance impact after failover; concurrent Network File System (NFS), Common Internet File System (CIFS), File Transfer Protocol file access, and MPFS/pNFS
– Hot-pluggable, with flexible N-to-M failover options
– Continues to operate even if a control station fails
– No internal disks in the Gateway (backed by VNX, Symmetrix or CLARiiON)
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 22
  • VNX Control Station: Secure management and control for VNX for File
– Installation and configuration changes
– Administration/management through the X-Blade and storage processor Ethernet ports on a private network
– Monitoring and diagnostics: heartbeat pulse of each X-Blade; monitors and manages X-Blade failover
– Enterprise Linux-based (RHEL 5)
– Initiates communications with the X-Blades for greater security
– Single point of management/control, with a failover redundancy option (second control station)
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 23
  • VNX X-Blade Failover: High availability architecture with no performance impact
– Configurable X-Blade failover options: N-to-M, automatic, manual, none
– Failover triggers: software panic or hang, internal network failure, power failure, memory error, non-responsive X-Blade
– The failed X-Blade is shut down to avoid "split-brain syndrome"
– IP, Media Access Control (MAC), and Virtual LAN (VLAN) addresses are transferred, so the data path moves and data remains accessible with no client-performance impact
– Automatic call-home of the event
– No performance impact after failover
– Automatic control station failover
– Configuration-dependent failover times of ~15 to ~100 seconds
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 24
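The failover behavior described on this slide can be sketched as a toy model. The class names, the address format, and the way the control station is modeled are illustrative assumptions for the sketch, not VNX internals:

```python
# Simplified sketch of N-to-M X-Blade failover as described above
# (hypothetical model): the control station notices a dead active blade,
# shuts it down (avoiding split-brain), and transfers its network
# identity (IP/MAC/VLAN, here just an IP) to an available standby.

class XBlade:
    def __init__(self, name, standby=False):
        self.name = name
        self.standby = standby
        # Active blades own a network identity; standbys start with none.
        self.identity = None if standby else {"ip": f"10.0.0.{name[-1]}"}
        self.alive = True

def control_station_check(active, standbys):
    """Return the blade that should be serving clients after one check."""
    if active.alive:
        return active
    # Shut down the failed blade and capture its identity for transfer.
    active.identity, moved = None, active.identity
    for sb in standbys:
        if sb.alive:
            sb.identity, sb.standby = moved, False  # identity transfer
            return sb
    raise RuntimeError("no standby X-Blade available")

primary = XBlade("dm2")
spare = XBlade("dm3", standby=True)
primary.alive = False                      # e.g. software panic or power loss
serving = control_station_check(primary, [spare])
print(serving.name, serving.identity)      # dm3 {'ip': '10.0.0.2'}
```

Because clients address the identity rather than the blade, the takeover is invisible to them apart from the pause, which mirrors the "no client-performance impact" claim on the slide.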
  • VNX High Availability: Designed to deliver five-nines (99.999%) availability
– Platform: no single point of failure; N+1 power and battery backup; redundant, hot-pluggable components
– Function: RAID protection; N+M X-Blades with advanced failover; automatic control station failover; quick X-Blade reboots
– Service: simple, customer-driven VNX OE updates; secure remote maintenance, call-home, automatic diagnostics
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 25
  • VNX Object Support: A single cloud built with VNX systems and Object Technology (Atmos Virtual Edition) on VMware
– Software on VMware: certified with EMC unified storage (NFS, FC, iSCSI), VMware-supported servers and third-party storage
– Unisphere GUI integration; REST, SOAP, HTTP or Web access
– No limits on namespace or location (e.g. Tokyo, New York, London)
– Multi-tenancy securely isolates data
– Automated location, protection, and efficiency services
– Flexibility: starts at 10 TB, up to 960 TB per site, on VMware vSphere with NFS/FC/iSCSI storage
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 26
  • Atmos VE on VNX
– Access/integration layer: custom Web applications using the Atmos REST/SOAP API, pre-integrated ISV applications (e.g. Documentum), and file system access
– Global-scale namespace spans locations; multi-tenancy securely segregates data; policies automate data services
– Storage: vSphere-supported FC/iSCSI/NFS; storage can come from more than one array (VNX5100 through VNX7500)
– vSphere: vSphere HCL-supported servers; minimum of 2 servers required (IP/FC connected)
– Virtual machines: Atmos software is installed on the VMs and access methods are configured on them; each VM is an Atmos node
– Scales large and fast; self-optimizes for performance and capacity efficiency
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 27
  • Atmos VE on VNX Requirements (base configuration, with options)
– VM configuration: minimum of two VMs; 8 GB-12 GB RAM per VM (option: 12 GB-24 GB); 2 vCPUs and 2 vNICs per VM; each VM corresponds to an Atmos node, and all VMs should share the same configuration; the guest OS should support 64-bit configuration
– ESX servers: dual quad-core CPU per ESX; 4 GigE ports per ESX (option: 10 GbE); DRS/vMotion; VMware HCL-approved hardware; shared infrastructure supported
– Unified storage: Fibre Channel, NFS, iSCSI
– Access methods: REST and SOAP
– MDS:SS (metadata drive to data drive) ratio: 1:14; any Atmos-supported location
– vDisks: up to 30 RDMs (raw device mappings); vDisk size 100 GB to 2 TB (200 GB, 500 GB, 1 TB and 2 TB options)
– Limits: maximum of 32 Atmos virtual nodes per site; maximum capacity of 30 TB per virtual node; maximum capacity of 960 TB per location
– Sites: 1 or 2 (more than 2, and hybrid configurations, are supported via RPQ only)
Build a single cloud with "N" number of VNX systems
HARDWARE © Copyright 2011 EMC Corporation. All rights reserved. 28
  • 3-Times More Cost Effective: Gain 3X more storage with VNX capacity efficiency. The FAST Suite, file deduplication and compression, and thin provisioning together are 3X more efficient than classic provisioning. EFFICIENCY © Copyright 2011 EMC Corporation. All rights reserved. 29
  • Enhanced Virtual Provisioning: The storage pool virtualizes the storage provisioning model
Traditional RAID Groups:
– Optional features: Replication, UQM, Analyzer, Virtual Provisioning, Compression, FAST Cache
– Included features: LUN Migration, LUN Expansion, MetaLUNs
– Classic LUNs and MetaLUNs are built on RAID Groups of drives
Flexible Pools:
– Features: Replication, UQM, Analyzer, FAST VP, FAST Cache
– Included features: LUN Migration, LUN Shrink (Windows 2K8 only)
– Thin and Thick LUNs are built from a storage pool of Flash, SAS and NL-SAS drives
EFFICIENCY © Copyright 2011 EMC Corporation. All rights reserved. 30
  • VNX Modular Architecture: Pools (Flash, 15K SAS, Near-Line SAS)
– Block services provided by the VNX storage processor
– Tiered storage: choose storage tiers: Extreme Performance (Flash), Performance (SAS), Capacity (NL-SAS)
– Choose the protection type (RAID 1, 5, 6); a pool has to have a single RAID type
– The system builds the RAID groups
– Tiers are aggregated into a virtual blended storage pool
EFFICIENCY © Copyright 2011 EMC Corporation. All rights reserved. 31
  • VNX Modular Architecture: LUNs and File Systems
– Thin or Thick LUNs are simply built from the virtual blended storage pool (Flash, 15K SAS, Near-Line SAS)
– LUNs can be shared to block-connected hosts (FC, iSCSI, FCoE) over the SAN
– File services added: LUNs are configured for file and consumed by the X-Blade (X-Blade file services on a file storage pool)
– File systems are optimally built via Automated Volume Management and shared via NFS or CIFS
EFFICIENCY © Copyright 2011 EMC Corporation. All rights reserved. 32
  • VNX Thin Provisioning: Only allocate the actual capacity required by the application
– Capacity oversubscription allows intelligent use of resources for file systems and for FC and iSCSI LUNs: the logical size is greater than the physical size (e.g. users A, B and C each see 10 GB in the logical application and user view, while only 4 GB and 2 x 2 GB of physical storage are consumed)
– VNX Thin Provisioning safeguards to avoid running out of space: monitoring and alerting
– Automatic and dynamic extension past the logical size: automatic NAS file system extension; FC and iSCSI dynamic LUN extension
– Capacity on demand
EFFICIENCY © Copyright 2011 EMC Corporation. All rights reserved. 33
  • VNX Virtual Provisioning
– Thick pool LUN: full capacity allocation; near RAID-Group LUN performance; capacity reserved at LUN creation; 1 GB chunks allocated as the relative block address is written
– Thin pool LUN: only allocates capacity as data is written by the host; capacity allocated in 1 GB chunks; 8 KB blocks written contiguously within each 1 GB chunk; the 8 KB mapping incurs some performance overhead
EFFICIENCY © Copyright 2011 EMC Corporation. All rights reserved. 34
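The thin-LUN allocation scheme above (claim pool capacity in 1 GB chunks only when the host first writes into a chunk's address range) can be sketched in a few lines. The class and its sparse-map structure are illustrative assumptions, not the array's actual metadata layout:

```python
# Minimal sketch of thin-LUN allocation as described on the slide:
# the LUN advertises a large logical size, but pool capacity is only
# consumed in 1 GB chunks as writes land; hosts write 8 KB blocks.

CHUNK = 1 * 1024**3   # 1 GB allocation unit
BLOCK = 8 * 1024      # 8 KB host block size

class ThinLUN:
    def __init__(self, logical_size):
        self.logical_size = logical_size
        self.chunks = {}  # sparse map: chunk index -> allocated

    def write(self, offset, length):
        # Allocate every 1 GB chunk the write touches, if not already mapped.
        first = offset // CHUNK
        last = (offset + length - 1) // CHUNK
        for idx in range(first, last + 1):
            self.chunks.setdefault(idx, True)

    def consumed(self):
        return len(self.chunks) * CHUNK

lun = ThinLUN(logical_size=100 * 1024**3)   # host sees 100 GB
lun.write(0, 3 * BLOCK)                     # three 8 KB blocks in chunk 0
print(lun.consumed() // 1024**3)            # 1 (only one 1 GB chunk allocated)
```

This also illustrates the oversubscription arithmetic from the previous slide: the logical view (100 GB) exceeds the physical allocation (1 GB) until writes force more chunks to be mapped.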
  • Space Reclamation with Thin
– Migrate to thin using LUN Migration or SAN Copy: thickly provisioned data becomes thinly provisioned data, reducing the consumed storage capacity
– The freed capacity is returned to the storage pool for use by other LUNs
EFFICIENCY © Copyright 2011 EMC Corporation. All rights reserved. 35
  • Compression for More Capacity Savings
– Migrate to thin using LUN Migration or SAN Copy: fully provisioned data becomes thinly provisioned data, reducing the consumed storage capacity
– Enable compression for maximum storage savings: compressed data frees further capacity, which is returned to the storage pool for use by other LUNs
EFFICIENCY © Copyright 2011 EMC Corporation. All rights reserved. 36
  • File Deduplication and Compression
– Intelligent data selection: typically avoids active data, with an option to target active files; compression support for VMs through the Celerra Plug-in for VMware; end-user file-level activation
– Tunable options by file system: size of files, age of files, file extension, directory filtering
– Internal policy engine: runs in the background; throttles to avoid negative impact on client services
– Leverages EMC technologies: compression engine and deduplication engine
Capacity savings example: of 1 TB of traditional file data, ~100 GB is active and ~900 GB is inactive, aged, or specifically targeted data; deduplication and compression shrink that ~900 GB to ~400 GB (up to 500 GB of savings), leaving ~500 GB in total, up to 50% savings for a file-level deduplication-enabled file system
EFFICIENCY © Copyright 2011 EMC Corporation. All rights reserved. 37
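The tunable selection policy above (pick only inactive files that pass size, age, extension and directory filters before attempting dedupe/compression) can be sketched as a simple filter. All thresholds, paths and field names here are illustrative assumptions, not the product's defaults:

```python
# Hypothetical sketch of the file-selection policy described above:
# a background policy engine targets only files that pass size, age,
# extension and directory filters.

from dataclasses import dataclass

@dataclass
class FileInfo:
    path: str
    size: int          # bytes
    age_days: int      # days since last access

def eligible(f, min_size=24 * 1024, min_age_days=30,
             skip_ext=(".tmp",), skip_dirs=("/hot/",)):
    """True if the policy would target this file for dedupe/compression.
    The defaults are illustrative, not the array's shipped values."""
    if f.size < min_size or f.age_days < min_age_days:
        return False               # too small or too recently used
    if f.path.endswith(skip_ext):
        return False               # excluded extension
    if any(d in f.path for d in skip_dirs):
        return False               # excluded directory subtree
    return True

files = [FileInfo("/share/old/report.doc", 1_000_000, 120),
         FileInfo("/hot/db/active.log", 5_000_000, 1)]
print([f.path for f in files if eligible(f)])  # ['/share/old/report.doc']
```

In the real feature these knobs are set per file system; the sketch just shows why active data (young files, hot directories) is "typically avoided" while aged bulk data becomes the savings pool.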
  • Software Suites Technical Presentation© Copyright 2011 EMC Corporation. All rights reserved. 38
  • Software. Simple. Powerful.© Copyright 2011 EMC Corporation. All rights reserved. 39
  • Advanced Software: VNX Total Efficiency Pack. Simplified ordering, maximum cost effectiveness. All software is managed via Unisphere.
The VNX Total Efficiency Pack combines the FAST Suite and the Security and Compliance Suite with the Total Protection Pack (Local Protection Suite, Remote Protection Suite, Application Protection Suite):
– FAST Suite: automatically optimize for both the lowest cost and the highest application performance
– Security and Compliance Suite: keep data safe from corruption, changes, deletions, and malicious activity
– Local Protection Suite: achieve safe data protection and re-purposing
– Remote Protection Suite: protect data against localized failures, outages and disasters at all times
– Application Protection Suite: automate application copies and prove compliance to corporate policies
© Copyright 2011 EMC Corporation. All rights reserved. 40
  • VNX Series Software Components: Software solutions made simple
– Management software: Unisphere
– Base software (no additional charge): file deduplication and compression, VNX block compression, virtual provisioning, SAN Copy, and protocols
Attractively priced packs and suites:
– FAST Suite: FAST VP, FAST Cache, Unisphere Analyzer, Unisphere Quality of Service Manager
– Security and Compliance Suite: Event Enabler (anti-virus, quota management, auditing), file-level retention, host encryption
– Local Protection Suite: SnapView, SnapSure, RecoverPoint/SE CDP
– Remote Protection Suite: Replicator, MirrorView A/S, RecoverPoint/SE CRR
– Application Protection Suite: Replication Manager, Data Protection Advisor for Replication
The VNX5100 does not support FAST VP, VEE, FLR, Replicator, or SnapSure. It also does not support the Total Efficiency Pack, but offers a Total Value Pack instead.
© Copyright 2011 EMC Corporation. All rights reserved. 41
  • New FLASH 1st Data Strategy: Hot data on fast Flash SSDs, cold data on dense disks
– "Hot" (high-activity) data is stored on Flash SSDs for the fastest response time
– As data ages and activity falls, a movement trigger fires and the data is automatically moved to high-capacity disk drives for the lowest cost
– "Cold" (low-activity) data thus migrates from Flash SSD to high-capacity HDD as data age increases
© Copyright 2011 EMC Corporation. All rights reserved. 42
  • The FAST Suite: Highest performance and capacity efficiency, automatically!
– Real-time caching with FAST Cache: continuously ensures that the hottest data is served from high-performance Flash SSDs
– Scheduled optimization with FAST VP (Virtual Pools)*: optimizes storage pools automatically, ensuring that active data is served from SSDs while cold data is moved to lower-capacity-cost disk tiers
– Together they deliver a fully automated FLASH 1st storage strategy for optimal performance at the lowest attainable cost
– Monitor and tune the whole system with the complementary Unisphere QoS Manager and Unisphere Analyzer
*Not available for VNX5100.
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 43
  • FAST Cache Overview: Run SQL and Oracle up to 3X faster (Exchange, SAP, VMware, Oracle Database, SharePoint, file)
– Support for file and block
– Extends the mid-tier cache using Flash drives, adding up to 2 TB of cache at less than a third of the cost of DRAM
– Hot data automatically ends up in FAST Cache (performance/capacity hierarchy: DRAM is fastest, then FAST Cache, then disk drives)
– RAID 1 for read/write protection
– Transparent to SP failure; no need to warm up the cache
– Applicable to most workloads
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 44
  • FAST Cache Approach (Exchange, SAP, VMware, Oracle Database, SharePoint, file)
1. Page requests are satisfied from DRAM if available
2. If not, the FAST Cache driver checks the map to determine where the page is located
3. The page request is satisfied from the disk drives if the page is not in FAST Cache
4. The policy engine promotes a page to FAST Cache if it is being used frequently
5. Subsequent requests for this page are satisfied from FAST Cache
6. Dirty pages are copied back to the disk drives as a background activity
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 45
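The request flow above can be sketched as a small simulation. The class, the promotion threshold, and the map structure are assumptions for illustration; EMC does not publish the real policy engine's internals:

```python
# Hypothetical sketch of the FAST Cache request flow described above:
# check DRAM first, then the FAST Cache map, fall back to disk, and let
# a policy engine promote a page once it has been accessed "frequently"
# (here arbitrarily modeled as 3 accesses).

PROMOTE_THRESHOLD = 3  # assumption; the real threshold is not published

class FastCacheSim:
    def __init__(self):
        self.dram = set()    # pages currently in the DRAM cache
        self.flash = set()   # pages currently promoted into FAST Cache
        self.hits = {}       # per-page access counts seen by the policy engine

    def read(self, page):
        """Return which tier served the request: 'dram', 'flash', or 'disk'."""
        if page in self.dram:
            return "dram"
        if page in self.flash:
            return "flash"
        # Served from disk; the policy engine counts the access and
        # promotes the page once it crosses the frequency threshold.
        self.hits[page] = self.hits.get(page, 0) + 1
        if self.hits[page] >= PROMOTE_THRESHOLD:
            self.flash.add(page)
        return "disk"

sim = FastCacheSim()
print([sim.read("A") for _ in range(4)])  # ['disk', 'disk', 'disk', 'flash']
```

The copy-back of dirty pages (step 6) is omitted; the point of the sketch is the promotion behavior: repeated access to the same page moves its service point from disk to Flash without any host-side change.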
  • FAST Cache Configuration
– FAST Cache is supported on all VNX platforms, for VNX for File (V7) and VNX for Block (R31)
– Highly scalable: up to 2.1 TB of EMC FAST Cache (using 100 GB drives), extending system cache by a factor of 90; reads and writes supported
– Applies to classic LUNs and pool LUNs (thick and thin)
– A system-wide resource that benefits many workloads: host application data (VMware, Oracle, SQL; OLTP/DW, etc.) and array-based data services (e.g. snaps)
– Two-click configuration in Unisphere
Maximum FAST Cache capacity by model (using 100 GB / 200 GB drives): VNX5100 100 GB / n/a (the 5100 does not support 200 GB Flash drives); VNX5300 500 GB / 400 GB; VNX5500 1 TB / 1 TB; VNX5700 1.5 TB / 1.4 TB; VNX7500 2.1 TB / 2.0 TB
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 46
  • FAST VP for Block and File Access: Optimize VNX for minimum TCO
– Automates the movement of hot and cold blocks: before, each pool LUN's data sits wherever it landed; after, the most active data occupies Tier 0, neutral activity Tier 1, and the least active Tier 2
– Optimizes the use of high-performance and high-capacity drives
– Improves cost and performance
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 47
  • FAST VP Policies: Policy ensures storage service levels are met
1. Highest Tier Preferred: maximize performance, e.g. a high-performance OLTP system
2. Auto-Tier: optimize TCO and performance, e.g. databases with varied levels of LUN activity among tables, or file systems with varied levels of activity across files
3. Lowest Tier Preferred: reduce TCO, e.g. archived or infrequently accessed data
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 48
  • FAST VP Operational Process
– Statistics collection: cumulative I/O history (reads and writes); weights recent I/O history above longer-term I/O history; maintains a relative ranking of all data in the pool based on tier preference and I/O history: highest tier preference with high activity gets the highest priority, highest tier preference with less activity gets the next highest, and no tier preference (Auto-Tier) with high activity gets the next highest after that
– When a pool is created, it is detected in the next poll cycle for inclusion in statistics collection; LUNs created in the pool are likewise detected in the next poll cycle; polls occur every hour, and the relocation estimate ("amount to move up/down") is updated every hour
– Tier utilization: the algorithm attempts to gain the greatest utility from the highest tiers; data is demoted as space is needed in the top tiers
– Relocation granularity: sub-LUN "slices" at 1 GB granularity
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 49
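The ranking logic above can be sketched as follows. The decay factor, the priority ordering, and the class shape are assumptions chosen to illustrate "recent I/O weighted above longer-term history" and "tier preference before activity"; the real weighting is not published:

```python
# Illustrative sketch of FAST VP slice ranking as described above:
# each 1 GB slice carries an I/O "temperature" maintained as an
# exponentially weighted count across hourly polls, and slices are
# ranked by tier preference first, then temperature.

DECAY = 0.5  # assumption: each older poll counts half as much

class Slice:
    def __init__(self, name, preference="auto"):
        self.name = name
        self.preference = preference  # "highest", "auto", or "lowest"
        self.temp = 0.0               # weighted cumulative I/O history

    def poll(self, ios_this_hour):
        # Recent I/O is weighted above longer-term history.
        self.temp = self.temp * DECAY + ios_this_hour

def rank(slices):
    """Highest-priority-first ordering for relocation decisions."""
    pref_order = {"highest": 0, "auto": 1, "lowest": 2}
    return sorted(slices, key=lambda s: (pref_order[s.preference], -s.temp))

a, b, c = Slice("a", "auto"), Slice("b", "highest"), Slice("c", "auto")
for ios in (100, 10): a.poll(ios)    # was hot, now cooling (temp 60.0)
for ios in (5, 5):    b.poll(ios)    # cool but pinned Highest Tier Preferred
for ios in (10, 200): c.poll(ios)    # heating up (temp 205.0)
print([s.name for s in rank([a, b, c])])  # ['b', 'c', 'a']
```

Relocation would then promote slices from the top of this ranking into the highest tier until it fills, demoting from the bottom as space is needed, which matches the "greatest utility from highest tiers" rule on the slide.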
  • FAST Suite in Action: FAST VP
– FAST VP tiers across the drives in a FAST virtual pool (Flash, SAS and NL-SAS)
– Optimizes drive utilization via relative ranking over time
– 1 GB slices, ideal for deterministic data
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 50
  • FAST Suite: FAST VP + FAST Cache
– FAST VP tiers across the drives in the pool (Flash, SAS, NL-SAS): optimizes drive utilization; relative ranking over time; 1 GB slices, ideal for deterministic data
– FAST Cache (sitting below the DRAM cache) copies the hottest data to Flash: optimizes Flash utilization; dynamic movement in near real time; 64 KB sub-slices, ideal for bursty data
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 51
  • 3-Times the Performance: Supercharge applications with VNX and the FAST Suite
– 3X the number of users and 3X the number of transactions (shown for two application workloads)
– 1/3 the View boot time and 4X faster View response time
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 53
  • Accelerate SQL Server with VNX Series: More than 3x performance improvement with VNX and FAST Cache
– 4.5x relative transactions per second for virtualized SQL Server (CX4 one year ago vs. the VNX series with FAST Cache)
– Achieves optimized performance without expensive database and application tuning
– Configuration: a VNX5700 using 20 SAS and 4 Flash drives with FAST Cache, vs. a CX4-480 with 20 FC drives
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 54
  • Improved Scale and Availability for Virtual Desktop: 4x the number of virtual desktop users with the VNX series, FAST VP and FAST Cache at sustained performance
– Boot storm: 3x faster, booting and settling 500 desktops in 8 min. vs. 27 min.; FAST Cache absorbs the majority of the boot workload (i.e. I/O that would hit spinning drives)
– Desktop refresh: refresh 500 desktops in 50 min. vs. 130 min.; FAST Cache serviced the majority of the I/O during refresh and prevents linked clones from overloading
– Before configuration: NS-120 with 30 FC + 15 SATA drives
– After configuration: VNX5300 with 5 Flash, 21 SAS and 15 NL-SAS drives; 2 Flash drives as FAST Cache, 2 Flash drives for VM replica storage, and SAS and NL-SAS with FAST VP for linked clones
– The two configurations are comparably priced
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 55
  • Optimize TCO for Virtual Desktop Solutions: Up to 70% TCO benefit compared to conventional storage at the same performance
– Boot storm: boot and settle 500 desktops in 8 min.
– Desktop refresh: refresh 500 desktops in 50 min.
– Flash-enabled VNX vs. conventional HDD to deliver the 500-desktop SLA: the conventional solution requires a Celerra NS-480 and 183 x 300 GB 15K FC disks; the optimally tiered solution requires a VNX5300 with 5 x 100 GB Flash, 21 x 300 GB 15K SAS and 15 x 2 TB NL-SAS drives
– Up to 70% reduction in storage cost for the same I/O performance
FAST SUITE © Copyright 2011 EMC Corporation. All rights reserved. 56
Accelerate Oracle OLTP with VNX Series
>3x utility from your Oracle OLTP using FAST Cache and FAST VP
• Improved Oracle transaction throughput by 3.7x using FAST Cache and virtualized Oracle (with vSphere 4.1); relative transactions per minute: CX4 one year ago = 1, VNX series with FAST Cache = 3.7
• Increased performance comes at only an 18% increase in storage solution cost
• Configuration: VNX5300 with 20 15K SAS and 7 Flash drives (two used as FAST Cache, five in a tiered pool with FAST VP) vs. CX4-120 with 45 FC drives
If you hear Oracle OLTP, VNX series is a great solution
FAST SUITE
Accelerate Oracle DSS with VNX Series
Oracle Decision Support Systems (DSS) benefit from VNX
• Up to 4.5x increase in bandwidth (relative performance: CX4 one year ago = 1; VNX series = 4.5)
• Before configuration: CX4-120 with 30x 300 GB 15K FC drives
• After configuration: VNX5300 with 75x 300 GB 15K SAS drives
– No FAST VP or FAST Cache was enabled, due to limited workload benefit
If you hear Oracle DSS, VNX series is a great solution
FAST SUITE
Unisphere Quality of Service Manager
Application-based service-level management
• Manage block resources based on service levels
– Monitor and achieve performance objectives for applications
• Optimize performance based on policy management
– Set performance goals for critical applications
– Set limits on lower-priority applications
– Schedule policies to run at different intervals
• Measure and control storage based on different performance metrics
– Response time (e.g., Exchange)
– Bandwidth (e.g., backup to disk)
– Throughput (e.g., OLTP applications)
• Complements FAST VP and FAST Cache
– Adds dynamic service-level management on top of FAST VP and FAST Cache
(Diagram: before, high-, medium-, and low-priority applications contend for available performance; with Unisphere Quality of Service Manager, performance is allocated by priority)
FAST SUITE
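The policy model above—performance goals for critical applications, limits for lower-priority ones, and per-metric control—can be sketched in a few lines. This is an illustrative sketch only; the class names, thresholds, and returned action strings are assumptions for the example, not Unisphere APIs.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    metric: str   # "response_time_ms", "bandwidth_mbps", or "iops"
    goal: float   # target for a critical application
    limit: float  # cap for a lower-priority application

def evaluate(policy: Policy, observed: float) -> str:
    """Return an action based on whether the observed metric
    meets the goal or exceeds the limit."""
    if policy.metric == "response_time_ms":
        # Lower is better for latency goals
        if observed <= policy.goal:
            return "meeting-goal"
        return "throttle-lower-priority"
    # Higher is better for bandwidth/IOPS, but capped by the limit
    if observed > policy.limit:
        return "throttle"
    return "ok"

exchange = Policy("Exchange", "response_time_ms", goal=10.0, limit=50.0)
backup = Policy("BackupToDisk", "bandwidth_mbps", goal=200.0, limit=400.0)

print(evaluate(exchange, 8.5))   # meeting-goal
print(evaluate(backup, 450.0))   # throttle
```

A real QoS manager would sample these metrics continuously and apply the throttling itself; the sketch only shows the per-metric decision.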
Unisphere Analyzer
Block data trend analysis, reporting, and capacity management
• Provides real-time and historical performance data
• Pinpoints performance bottlenecks
• Easy, one-step access to charts and reports
• Provides the flexibility to customize the analytical focus by time period, elements, and metrics
FAST SUITE
VNX Security and Compliance Suite
Keep data safe from changes, deletions, and malicious activity
EMC VNX Host Encryption
• Maintains data confidentiality for data at rest; provides compliance
• Encrypts data where it is created, providing protection anywhere outside the server
EMC VNX File-Level Retention (FLR)
• Provides the ability to lock down (WORM) file systems to avoid malicious or accidental changes
• Supports file-level retention periods
• The VNX File-Level Retention Compliance Option (FLR-C) meets SEC Rule 17a-4(f) compliance requirements
EMC VNX Event Enabler (VEE)
• Delivers alerts upon file system actions
• Allows integration with third-party anti-virus checking, quota management, and auditing applications
SECURITY AND COMPLIANCE SUITE
EMC VNX Host Encryption
• Provides a host-based data security solution for VNX environments with Windows, Linux, or Solaris hosts
• Integrates with the Emulex hardware-assist HBA encryption option
– Offloads encryption to the HBA, resulting in near-zero impact on host CPU
– Addresses software-based encryption performance concerns
• Complements PowerPath Encryption with the Emulex HBA option for Symmetrix, non-EMC, and mixed-array environments
• Protects data when it leaves a protected area such as a secure data center
– E.g., disk migrations, rotations, or equipment upgrades
SECURITY AND COMPLIANCE SUITE
VNX File-Level Retention
File-Level Retention Enterprise Option (FLR-E)
• Provides retention periods per file
• Enables adherence to good business practices
– Tamper-proof clock
– Activity logging
File-Level Retention Compliance Option (FLR-C)
• Meets SEC Rule 17a-4(f) compliance requirements
– Prevents file system deletion while files are locked
– "Hard" default retention periods
– Data verification to validate committed content
• Retention periods cannot be modified
• File systems can only be deleted when the retention period has expired
• Third-party compliance validation paper
SECURITY AND COMPLIANCE SUITE
File-Level Retention Workflow
Optional VNX for File functionality for CIFS and NFS
• Not-locked files: traditional non-FLR files
• Locked (WORM) files
– Retention periods are set on a per-file basis
– The retention period is set to "infinite" if left unspecified
– Cannot be deleted, renamed, or modified
– Retention periods can be extended
• Append files: protected files to which content can be added but not modified or deleted
• Expired files: files can be deleted after the retention time expires
Workflow steps: (1) non-FLR-enabled files; (2) set retention periods—enable File-Level Retention and commit to the "WORM" state; (3) retention period extended; (4) retention period expires; (5) locked or expired empty files become append files; (6) file deleted
SECURITY AND COMPLIANCE SUITE
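The not-locked → locked → expired lifecycle above is essentially a small state machine. The sketch below mirrors the slide's transitions; the class, method names, and error type are illustrative assumptions, not the VNX implementation.

```python
class FLRError(Exception):
    """Raised when an operation violates a retention rule (illustrative)."""

class FLRFile:
    def __init__(self):
        self.state = "not-locked"
        self.retain_until = None

    def lock(self, retain_until=float("inf")):
        # Transition 2: commit to WORM state; unspecified period = "infinite"
        self.state = "locked"
        self.retain_until = retain_until

    def extend(self, new_until):
        # Transition 3: retention can only be extended, never shortened
        if new_until < self.retain_until:
            raise FLRError("retention period cannot be reduced")
        self.retain_until = new_until

    def tick(self, now):
        # Transition 4: retention period expires
        if self.state == "locked" and now >= self.retain_until:
            self.state = "expired"

    def delete(self):
        # Transition 6: only expired (or never-locked) files may be deleted
        if self.state == "locked":
            raise FLRError("WORM file cannot be deleted before expiry")
        self.state = "deleted"

f = FLRFile()
f.lock(retain_until=100)
f.tick(now=50)    # still locked: delete() would raise FLRError here
f.tick(now=100)   # retention expired
f.delete()        # allowed after expiry
```

Attempting `delete()` while the file is still locked raises, which is the WORM guarantee the slide describes.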
VNX Event Enabler Overview
VNX integrates with best-of-breed third-party enterprise applications
• Best-of-breed third-party software integration
– Anti-virus
– Enterprise quota management
– Unstructured data auditing
• CIFS and NFS file-based alerting
• Extensible architecture
• Highly available
SECURITY AND COMPLIANCE SUITE
VNX Anti-Virus Support
• Shared bank of virus-checking servers
• Can deploy multiple vendors' engines concurrently
• The virus-checking server only reads part of each file
• File access is blocked until the file is checked
– Scan after update (write/close), or on first read after a new virus-definition file
– Automatic access-time update
• Notification on virus detection
• Anti-virus sizing tool
• Runs over the VNX Event Enabler infrastructure
Supported engines: McAfee NetShield; Symantec AntiVirus for NAS and Endpoint; Trend Micro ServerProtect for EMC VNX; CA eTrust Antivirus; Sophos Anti-Virus; Kaspersky Anti-Virus
This is the only anti-virus checking method supported by major virus-checking vendors for network shares
SECURITY AND COMPLIANCE SUITE
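The scan-on-access flow above—block the file until it is verified, re-scan on first read after a definition update—can be sketched as follows. The gateway class, its method names, and the scanner callback are illustrative assumptions, not an EMC or anti-virus vendor API.

```python
class AvGateway:
    def __init__(self):
        self.clean = set()   # files known clean under the current definitions
        self.defs_version = 1

    def update_definitions(self):
        # New virus-definition file: force a re-scan on first read
        self.defs_version += 1
        self.clean.clear()

    def on_close(self, path, scanner):
        # Scan after update (write/close)
        if scanner(path):
            self.clean.add(path)

    def on_read(self, path, scanner):
        # Block access until the file is verified clean
        if path not in self.clean:
            if not scanner(path):
                raise PermissionError(f"{path} failed virus check")
            self.clean.add(path)
        return "access-granted"

gw = AvGateway()
always_clean = lambda p: True   # stand-in for a real virus-checking server
gw.on_close("report.doc", always_clean)
print(gw.on_read("report.doc", always_clean))  # access-granted
gw.update_definitions()
print(gw.on_read("report.doc", always_clean))  # re-scanned, then granted
```

A real deployment would dispatch the scan to the shared bank of virus-checking servers rather than call a local function.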
VNX Event Publishing Agent
• Event notification for integration with third-party applications
– Quota management
– Auditing and indexing
• Alerts agents upon VNX file actions
– Files: create/open/delete/close (modified or unmodified)/rename
– Directories: create/delete/rename
– Files and directories: any attempt to modify the security metadata (access control list modification)
• Can deploy multiple agents concurrently for high availability
• Runs over the VNX Event Enabler infrastructure
Flow: a user creates a file stored on VNX; the event is sent to the third-party application on the server through VNX Event Publishing Agent integration; the application responds to VNX; VNX responds to the user
Third-party applications: Northern Parklife NSS; NTP Software QFS; Varonis DatAdvantage
SECURITY AND COMPLIANCE SUITE
Total Protection—Better than Ever
Unified replication with the Total Protection Pack
• Local and remote data recovery with DVR-like roll-back
• Restore individual or multiple virtual machines with a single click
• Define and enforce custom RPOs and SLAs across the virtual infrastructure
• Automated failover and failback
• Proven reference architectures
Options for Data Protection
• Traditional backup—daily backup: recovery point every 24 hours
• SnapView with SnapSure—snapshots/clones: recovery point every 3 hours
• MirrorView with Replicator—disk mirroring: recovery point is the latest replicated image
• RecoverPoint/SE CDP and CRR—continuous data protection: DVR-like recovery with unlimited recovery points and application bookmarks over time (e.g., pre-patch checkpoint, patch, post-patch checkpoint, cache flush, hot backup, quarterly close)
VNX Local Protection Suite
Practice safe data protection and repurposing
SnapView and SnapSure
• Production data copies for instant recovery on file and block storage
• Streamline data protection and repurposing use cases
– Development/QA testing
– Reporting/decision support
– Backup acceleration
RecoverPoint/SE CDP
• DVR-like roll-back of production applications to any point in time
• Granular recovery to the I/O level for VNX for Block storage
• Self-service recovery with tighter RPO and flexibility
• Streamline data repurposing
LOCAL PROTECTION SUITE
SnapView and SnapSure Overview
Accelerate data protection with point-in-time replicas
• Provide near-instant recovery
• Increase application availability while reducing downtime
– Improve RTOs and RPOs
– Eliminate downtime during the backup window
• Facilitate source data restore
• Enable parallel processing for:
– Data-warehouse refreshes
– Decision support
– Application development and testing
• Application integration and advanced monitoring via the optional Application Protection Suite
(Replica uses include backup, recovery, test and development, and database-checkpoint replicas)
LOCAL PROTECTION SUITE
Point-in-Time Views with Snaps
Logical PIT copies (snaps)
• Pointer-based copy of data
– Takes only seconds to create a complete snap
• Requires less space than a full copy, but has performance overhead
– Only needs space for modified data ("copy on first write")
– Could result in spindle contention for concurrent reads (from source and snap)
Physical PIT copies (clones)*
• Physically independent, full-image point-in-time copies of the source volume
– Require the same space as the source data
– Available after the initial copy
– No performance impact on source data
– Can be used to replace the source after a hardware or software error
• Can be incrementally re-established
* Physical PIT copies for file systems require VNX Replicator
LOCAL PROTECTION SUITE
Logical Snap: Copy-on-First-Write Process
(Diagram: a production server updates block C on the source data. Before the update is applied, the original block C is copied to the snap save area and the corresponding bit is set in the tracking bitmap for blocks A–D. A secondary server reading the snap sees unmodified blocks from the source and the preserved original of block C from the save area.)
LOCAL PROTECTION SUITE
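The copy-on-first-write mechanism in the diagram above can be shown as a minimal sketch: the first write to a source block preserves the original in the save area and sets a bit in the tracking bitmap, so the snap view stays fixed at its point in time. The class and method names are illustrative assumptions.

```python
class CofwSnap:
    def __init__(self, source):
        self.source = source             # list of block contents (live volume)
        self.bitmap = [0] * len(source)  # 1 = original preserved in save area
        self.save_area = {}

    def write(self, block, data):
        if not self.bitmap[block]:
            # First write to this block: copy the original to the save area
            self.save_area[block] = self.source[block]
            self.bitmap[block] = 1
        self.source[block] = data

    def read_snap(self, block):
        # Snap view: saved original if modified, else the live source block
        return self.save_area.get(block, self.source[block])

vol = ["A", "B", "C", "D"]
snap = CofwSnap(vol)
snap.write(2, "C'")                # production server updates block C
print(vol[2], snap.read_snap(2))   # C' C — the snap still sees the original
```

This is why a snap needs space only for modified data, and why concurrent reads of snap and source can contend for the same spindles: unmodified blocks are read from the source itself.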
Instant Restore
• Restore source data contents to a different point-in-time version
– Any snap or clone may be selected
– Does not affect other replicas
• The change in source data appears instantaneous to the host
– Data is copied to the source data "behind the scenes"
– During restore, the source data is accessible for server reads and writes
– Other replicas remain available for server I/O
• Ideal for operational recovery—recover quickly
• Supported for LUNs and file systems
(Diagram: snaps taken at 9:00 p.m., 10:00 p.m., 1:00 a.m., and 2:00 a.m., plus a clone; any of them can be used to recover production or to back up from a backup server)
LOCAL PROTECTION SUITE
EMC RecoverPoint/SE CDP
• Protects block data for physical and virtual servers
• Provides affordable data protection
– Recovery to the last write for VNX unified block storage
• Enables any-point-in-time recovery
– DVR-like roll-back to minimize data loss
– Customized RPOs
– Self-service recovery
• VNX-based write splitter (a host-based splitter is also available)
• Supports federated, clustered, and cloud applications; protects physical and virtualized applications; integrated with VMware and VMware SRM
LOCAL PROTECTION SUITE
RecoverPoint/SE CDP
Roll back production applications to any point in time
• RecoverPoint write splitter
– Intercepts all server writes (block level)
– Resides on the VNX series array (or optionally on a host or virtual machine)
• RecoverPoint appliance
– Runs RecoverPoint software; offloads replication processing for better scale and performance
– Performs protection and recovery for VNX unified block storage
– Handles monitoring, management, and control
– Attaches to FC for storage access, or can be directly attached to FC ports on the VNX series
– Maintains write-order fidelity
• Journal
– Tracks all data changes to every protected LUN
– Uses bookmarks for application-aware recovery
The array-based write splitter runs on the VNX series; no host agent is required
LOCAL PROTECTION SUITE
RecoverPoint/SE Local Protection Process
Continuous Data Protection (CDP)
1. Data is split by the VNX splitter and sent to the RecoverPoint appliance
2. Writes are acknowledged back from the RecoverPoint appliance
3. The appliance writes data to the journal volume, along with a time stamp and application-specific bookmarks
4. Write-order-consistent data is distributed to the replica volumes
LOCAL PROTECTION SUITE
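The CDP write path above—split, acknowledge, journal with timestamp and bookmark, then distribute in write order—can be sketched as follows. The function names, tuple layout, and volume names are illustrative assumptions, not RecoverPoint internals.

```python
import time

journal = []   # entries: (timestamp, lun, offset, data, bookmark)
replica = {}   # replica volume contents, keyed by (lun, offset)

def split_write(lun, offset, data, bookmark=None):
    # Steps 1-2: the splitter forwards the write to the appliance,
    # which journals it and acknowledges back to the host
    journal.append((time.time(), lun, offset, data, bookmark))
    return "ack"

def distribute():
    # Step 4: apply journaled writes to the replica in original order,
    # preserving write-order consistency
    for _, lun, offset, data, _ in journal:
        replica[(lun, offset)] = data

split_write("prod", 0, b"hdr")
split_write("prod", 8, b"row1", bookmark="pre-patch")
distribute()
```

Because every journal entry keeps its timestamp and optional bookmark, recovery can roll the replica to any journaled point, which is the "DVR-like" behavior the slides describe.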
VNX Remote Protection Suite
Ensure your data is physically protected all the time
Unified block and file replication:
RecoverPoint/SE Continuous Remote Replication (CRR)
• One solution to protect any host, any application
• Efficiently protect all your data
– WAN deduplication for bandwidth reduction of up to 90%
• Customize RPOs from zero to hours for improved quality of service
• Immediate DVR-like recovery
MirrorView
• Synchronous or asynchronous one-to-many block replication
Replicator*
• Scheduled asynchronous file-system-level replication, including one-to-many and cascading replication
* Not available for VNX5100
REMOTE PROTECTION SUITE
RecoverPoint/SE Remote Protection Process—Continuous Remote Replication (CRR)
1. Production data is sent to storage and split by the embedded VNX splitter
2. Writes are acknowledged back from the RecoverPoint appliance
3. Data is sequenced, checksummed, compressed, and replicated to the remote RecoverPoint appliances over IP or SAN
4. At the remote site, data is received, uncompressed, resequenced, and checksum-verified
5. Data is written to the journal volume
6. Consistent data is distributed to the remote volumes
Appliance functions: Fibre Channel-to-IP conversion; deduplication; data reduction and compression; monitoring and management
REMOTE PROTECTION SUITE
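Steps 3 and 4 above—sequence, checksum, and compress locally, then verify and unpack remotely—can be sketched with standard-library primitives. The packet format and function names are illustrative assumptions; real appliances also deduplicate and manage the transport.

```python
import zlib

def package(seq, data):
    # Local site: sequence, checksum, and compress before replication
    return {
        "seq": seq,
        "crc": zlib.crc32(data),
        "payload": zlib.compress(data),
    }

def unpack(pkt):
    # Remote site: uncompress, then verify the checksum before journaling
    data = zlib.decompress(pkt["payload"])
    if zlib.crc32(data) != pkt["crc"]:
        raise ValueError("checksum mismatch: packet corrupted in transit")
    return pkt["seq"], data

pkt = package(1, b"write-order-consistent data" * 10)
seq, data = unpack(pkt)
```

The sequence number is what lets the remote appliance restore write order before distributing data to the remote volumes (step 6).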
RecoverPoint/SE Consistency Groups (CGs)
VNX with RecoverPoint/SE supports federated applications
• RecoverPoint ensures consistency across production and targets
• Maintain SLAs by assigning priorities to independent applications
• RecoverPoint enables independent replication of various applications
– E.g., CG 1: OE with CRR; CG 2: CRM and SCM with CDP and CRR; CG 3: e-mail with CRR
• RecoverPoint utilizes consistency groups to recover and prioritize data
• Supports single-server and multiple-server (federated) applications
REMOTE PROTECTION SUITE
RecoverPoint/SE File System Replication
RecoverPoint/SE support for NAS file system disaster recovery on VNX
Use case/environment
• Unified replication between two VNX systems with RecoverPoint/SE
• Bi-directional replication is also supported
Value
• Full failover of protected file systems
• Improved DR-site ROI
Feature details
• A single consistency group for file system replication per array
• On NAS failover:
– All primary-site Data Movers fail over or shut down
– Non-file-system storage on the primary site remains available
REMOTE PROTECTION SUITE
VNX Replicator
Native file-system-level replication focused on ease of use
• Service-level specifications
– Automated, business-oriented policy definitions for recovery point objectives (RPOs)
– Set interconnect Quality of Service (QoS) by scheduled bandwidth throttling
• Advanced functionality
– 1-to-N replication for data distribution
– Cascading replication for multi-site disaster recovery
– N-to-1 replication for consolidation
• Scalability
– Up to 1,024 replication sessions
• Common replication management with Unisphere
• Integrates with writeable NAS snaps
• Supported concurrently with RecoverPoint
Example: cascading replication from the production site to a local disaster recovery site (10-minute RPO) and on to a remote disaster recovery site (2-hour RPO); easily specify the replication RPO and interconnect Quality of Service
REMOTE PROTECTION SUITE
LUN-Level Disaster Recovery—MirrorView/Synchronous
Cost-effective synchronous block replication
• Cost-effective block replication
– Supports multi-site replication
• Tracks host writes while the link to the secondary is down
– Uses a bitmap (the fracture log) to map the entire primary mirror
– When the secondary is available again, sends only the changed data
• Enables partial sync and avoids a full re-sync
– Minimizes the customer's exposure to out-of-sync data
REMOTE PROTECTION SUITE
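The fracture-log idea above—mark which extents of the primary changed while the link was down, then resync only those—is easy to sketch. The class name and the extent size are illustrative assumptions; real extents are much larger than four blocks.

```python
EXTENT = 4  # blocks per bitmap bit (illustrative; real extents are larger)

class FractureLog:
    def __init__(self, total_blocks):
        # One dirty bit per extent of the primary mirror
        self.bits = [0] * ((total_blocks + EXTENT - 1) // EXTENT)

    def record_write(self, block):
        # Link down: mark the extent containing this block as dirty
        self.bits[block // EXTENT] = 1

    def extents_to_resync(self):
        # Link restored: only these extents need to be sent, not the full LUN
        return [i for i, bit in enumerate(self.bits) if bit]

log = FractureLog(total_blocks=32)   # 8 extents of 4 blocks each
log.record_write(5)                  # dirties extent 1
log.record_write(21)                 # dirties extent 5
print(log.extents_to_resync())       # [1, 5]
```

This is the partial-sync behavior on the slide: after an outage, only two extents travel over the link instead of the entire primary mirror.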
VNX Application Protection Suite
Automate application copies and prove compliance
Replication Manager
• Automated "application-consistent" copy management
• User privileges enable self-service replication
Data Protection Advisor for Replication
• Increased visibility into all application recovery points
• Monitor, alert, troubleshoot, and report
• Prove applications are recoverable
APPLICATION PROTECTION SUITE
Replication Manager Local or Remote Protection
Improved business continuity with automated disk-based replicas
• Automates the creation, management, and use of all EMC disk-based point-in-time replication technologies
• Auto-discovers the environment
• Intelligence to orchestrate replicas with deep application awareness
– Integrated with major enterprise applications
• Easy-to-use single GUI with advanced wizards
• Assignable roles for self-service replication
APPLICATION PROTECTION SUITE
Replication Manager Example: RecoverPoint Replication
1. The Replication Manager server freezes the application (e.g., Exchange)
2. The Replication Manager server requests a VSS-compliant bookmark
3. The Replication Manager server thaws the application
The bookmark marks a consistent point in both the local CDP copy and journal and, across the WAN, the remote CRR copy and journal
APPLICATION PROTECTION SUITE
Replication Manager Example: NFS and Oracle Replicas
• Automate local and remote replicas with application consistency for backup acceleration or business repurposing
– Local (in-frame) uses VNX SnapSure
– Remote (out-of-frame) uses VNX Replicator
• NFS-based storage configurations on Linux
– VNX NFS file systems mounted as network file systems on a Linux host (physical, or a VMware Linux guest operating system)
– Oracle data on a network file system or dNFS mounted on a VNX NAS file system
• Oracle support includes:
– Oracle 10g, 11g, and 11g R2 on Linux
– Real Application Clusters (RAC) to single-instance cloning
– Backs up a database or tablespace
– Optionally backs up the Flash Recovery Area (FRA), archive logs, and parameter files
– Application consistency through Oracle hot backup (backs up control files, data files, and archive logs)
APPLICATION PROTECTION SUITE
Replication Manager for VMware Environments
• Common management console for EMC snaps, clones, and CDP replicas
• Application and VM consistency
– Exchange, SQL Server, SharePoint, Oracle
• Eliminates costly, error-prone scripting of replicas with a point-and-click, wizard-driven GUI
– Backup acceleration eliminates backup windows
– Business continuity for instant restore or surgical repairs to production
– Repurposing to dev/test environments
• No impact on production performance
• Faster, with low impact on the VMware ESX Server (uses a proxy server)
APPLICATION PROTECTION SUITE
Data Protection Advisor for Replication
Always know you can recover
• Define and measure protection policies
• Visibility and alerts for:
– All recovery points
– Recovery gaps
– Replication lag
– Configuration changes
• Instant status by application or device: completeness, consistency, SLA compliance
• Integrated reporting
– Chargeback services
– Service-level compliance
• Application integration: Exchange, SQL Server, Oracle, file systems
• Host-less option
APPLICATION PROTECTION SUITE
DPA Application Mapping
• Application awareness—recovery logic per data type (Oracle example): control files are always required; redo logs are required with cold backup; archive logs are required with hot backup
• Host/storage mapping awareness—primary storage volumes, local replication, remote replication, and the recovery server
• Data protection process awareness
• Recovery point: a collection of images that can be used to recover an application or storage object
APPLICATION PROTECTION SUITE
Customize Protection Policies
Measure and monitor protection-policy compliance
• Create protection policies
– Continuous
– Point-in-time
• Assign a policy to a server, group of servers, application, or line of business
• Monitor policy compliance
• Schedule reports to prove compliance with defined policies
APPLICATION PROTECTION SUITE
Next-Generation Unified Storage
Optimized for today's virtualized IT
• Scales large and fast
• Self-optimizes for performance and capacity efficiency
• Works on any network
• Everything fully automated
Models: VNX5100, VNX5300, VNX5500, VNX5700, VNX7500
Affordable. Simple. Efficient. Powerful.
THANK YOU