VNX Overview


  • Whether in a shrinking or expanding economy, and whether in mature or emerging markets, key IT challenges remain, and traditional approaches to building and managing IT infrastructure no longer make economic sense. As EMC talks to customers about their IT infrastructure challenges, we find four recurring themes: budgets remain flat to slightly up and are not growing nearly fast enough to meet IT demands using traditional approaches; companies are struggling to manage increasing complexity and are looking for new tools and methodologies; companies see no end in sight to relentless data growth and are looking for ways to keep up; and finally, fast-changing business models, competitive pressures, and other factors are putting increased demand on IT operations. It is with these challenges in mind that EMC designed the guiding principles for its next-generation platforms.
  • With these challenges and requirements in mind, EMC is introducing a new record-breaking unified storage platform: the EMC VNX family. These hardware and software solutions are simple to provision, efficient with lower capacity requirements, affordable for any budget, and powerful enough to handle the demands of virtual applications. In fact, the new family delivers the world's simplest array to manage and the world's highest midtier performance. The VNX family is designed from the ground up for virtual application environments, from simple, money-saving server and storage consolidation for small business to next-generation virtual data center applications. The VNX family comprises two series: the VNXe series and the VNX series. The VNXe series represents the entry point of the VNX family and is designed specifically for small-to-medium businesses (SMB), remote or branch offices (ROBO), or departmental applications where traditional storage administration skills may not be available. The VNX series is the next-generation midrange platform. For those familiar with EMC's current CLARiiON and Celerra platforms, the VNX series combines the capabilities of these systems into a single modular unified storage offering. While new, the entire VNX family builds on years of know-how from the world's most popular SAN and NAS platforms, CLARiiON and Celerra. Everything EMC has learned about high performance and high reliability culminates in the VNX family. EMC Unisphere provides a common unified management capability for EMC's VNX family and the CLARiiON and Celerra products. And as you will see, Unisphere also provides a way to simplify and automate other common storage management tasks, such as replication and backup reporting.
  • Two system models, one management environment. The VNXe series provides a choice of hardware platforms and capabilities to meet your specific needs. Common capabilities include:
    - 6 Gb/s SAS disk interface, delivering high bandwidth for performance and full redundancy for high availability
    - Support for 15k rpm high-performance disks and 7,200 rpm high-capacity near-line disks
    - 1 Gigabit Ethernet ports for shared iSCSI and NAS connectivity
    - I/O expansion slots
    - Management and protocols: Unisphere, CIFS, NFS, iSCSI
    - Advanced functionality: thin provisioning, and file deduplication with compression
    Choose the VNXe3100 for compact, highly integrated solutions with class-leading features and the option of single or dual controllers to achieve the right combination of price, performance, and availability; or the VNXe3300 for greater storage scalability and performance, and optional 10 Gigabit Ethernet connectivity.
  • The new VNX series is, of course, much faster than the older CLARiiON and Celerra; the new processors make that happen. And with all that processing power, we can now take full advantage of Flash technology. By combining the VNX architecture with EMC's industry-leading Fully Automated Storage Tiering virtual pools and FAST Cache, VNX is unstoppable. Never before has this level of performance been available in a midtier platform. With the VNX series you can now do three times more than before: more users, more transactions, and much faster response times. VNX supercharges your applications more than ever. No more storage bottlenecks. No more disk-bound systems. Your world can now be three times faster.
  • Note to Presenter: View in Slide Show mode for animation. One of the biggest shifts in IT is, of course, the move to server virtualization. It continues to have a profound impact on how we look at storage provisioning today. In the "good old days," we typically had a single application running on a server. To meet performance objectives, we would create a RAID group and then carve out LUNs statically. With virtualization, that has all changed. Note to Presenter: Click now in Slide Show mode for animation. Virtualization enables you to save on infrastructure and run more applications on fewer physical servers. To meet the growing and dynamic needs of the business, server virtualization pools server resources and enables dynamic provisioning and optimization of compute power and applications according to shifting business needs. To stay relevant, storage had to embrace this same pooling paradigm. Just as VMware vMotion allows you to move things around on the server side, EMC saw the need to allow data to move dynamically across different tiers of drives (from high performance to lower cost/high capacity) according to its business activity. Not only did such data movement make sense, it also had to be fully automated and self-managing. As the storage system takes on more and more automation and ongoing optimization, it needs more processing power. EMC knew this to be a central requirement for its next-generation midtier storage systems.
  • The FAST Suite improves performance and maximizes storage efficiency by deploying this FLASH 1st strategy. FAST Cache, an extendable cache of up to 2 TB, gives a real-time performance boost by ensuring the hottest data is served from the highest-performing Flash drives for as long as needed. FAST VP then complements FAST Cache by optimizing storage pools on a regular, scheduled basis. You define how and when data is tiered using policies that dynamically move the most active data to high-performance drives (e.g., Flash) and less active data to high-capacity drives, all in one-gigabyte increments for both block and file data. Together, they automatically optimize for the highest system performance and the lowest storage cost simultaneously.
  • The second feature provided in the FAST Suite, highly complementary to FAST Cache, is FAST for Virtual Pools. The combination of FAST Cache and FAST VP addresses the perennial storage management problem: the cost of optimizing the storage system. Prior to FAST and FAST Cache, it was often simply too resource-intensive to perform manual optimization, and many customers simply overprovisioned storage to ensure the performance requirements of a data set were met. With the arrival of Flash drives and the FAST Suite, we have a better way to achieve this fine cost/performance balance. The classic approach to storage provisioning can be repetitive and time-consuming, and often produces uncertain results. It is not always obvious how to match capacity to the performance requirements of a workload's data. Even when a match is achieved, requirements change, and a storage system's provisioning may require constant adjustment. Storage tiering is one solution: it puts several different types of storage devices into an automatically managed storage pool. LUNs use the storage capacity they need from the pool, on the devices with the performance they need. Fully Automated Storage Tiering for Virtual Pools (FAST VP) is the EMC VNX feature that allows a single LUN to leverage the advantages of Flash, SAS, and near-line SAS drives through the use of pools. FAST solves these issues by providing automated sub-LUN-level tiering. FAST collects I/O activity statistics at 1 GB granularity (a unit known as a slice). The relative activity level of each slice is used to determine which slices should be promoted to higher tiers of storage. Relocation is initiated at the user's discretion, through either manual initiation or an automated scheduler. Through the frequent relocation of 1 GB slices, FAST continuously adjusts to the dynamic nature of modern storage environments. This removes the need for manual, resource-intensive LUN migrations while still providing the performance levels required by the most active data set, thereby optimizing for cost and performance simultaneously.
  • The industry-leading innovations of the VNX series and the FAST Suite translate into compelling improvements in real-world virtualized application environments. For Microsoft SQL Server, the VNX series supports more than three times the number of users and transactions versus the CLARiiON CX4. In environments running VMware View, the VNX series can boot 500 virtual desktops in eight minutes, versus 27 minutes with EMC's previous generation (without Flash or the FAST Suite). Likewise, in virtualized Oracle environments, the VNX series can support more than three times the number of users and transactions.
  • Storage must be lean and highly efficient to satisfy the rigorous demands of today's IT. Data is simply growing faster than IT budgets can keep up. Relentless efficiency is mandatory; gone are the practices of wasteful thick LUNs and file systems. Over the last couple of years, EMC has seen IT move to a just-in-time provisioning model where storage capacity is allocated upon consumption. This practice is commonly referred to as thin LUNs or thin provisioning. With thin LUNs, capacity from the system's storage pool is used only when new or additional data is written. If data is erased, the capacity is given back to the pool. Liberating IT from having to "guess" the right amount of storage needed for each user and application in advance has yielded much higher utilization rates: EMC typically sees a shift from 60 percent storage utilization to 90 percent or higher. This means you can actually store more data without having to buy more storage. With classic thick LUNs, IT shops had no choice but to live with 40 percent wasted resources. That is now a thing of the past; thin LUNs prevent it. But thin LUNs are not the only capacity-efficiency feature the VNX series offers to challenge rising storage costs. The VNX series also offers file-level deduplication and storage pool LUN compression. Combining both with thin provisioning amplifies system efficiency by up to three times. This means you get three times the usable capacity per IT dollar spent; your budget effectively stretches three times further. This is an important aspect of the efficiency strategy. With the VNX series' efficiency technologies, IT is ready to meet new and increasing demands. The VNX series' capacity and performance efficiency technologies are designed to work together and may be combined to achieve the ideal, highly efficient mix of performance, capacity, power, and footprint.
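The allocate-on-consumption behavior described above can be sketched in a few lines. The `ThinPool`/`ThinLUN` names and the 1 GB extent size are illustrative assumptions for the sketch, not the VNX implementation:

```python
# Minimal sketch of thin provisioning: pool capacity is drawn only when
# data is actually written, and returned to the pool when data is erased.
class ThinPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0

class ThinLUN:
    def __init__(self, pool, virtual_size_gb):
        self.pool = pool
        self.virtual_size_gb = virtual_size_gb  # size the host sees
        self.extents = set()                    # 1 GB extents actually backed

    def write(self, extent):
        """Back an extent with pool capacity on first write."""
        if extent not in self.extents:
            if self.pool.allocated_gb >= self.pool.capacity_gb:
                raise RuntimeError("pool exhausted")
            self.pool.allocated_gb += 1
            self.extents.add(extent)

    def erase(self, extent):
        """Return the extent's capacity to the pool."""
        if extent in self.extents:
            self.extents.remove(extent)
            self.pool.allocated_gb -= 1

pool = ThinPool(capacity_gb=100)
lun = ThinLUN(pool, virtual_size_gb=500)  # host sees 500 GB
for e in range(30):
    lun.write(e)                          # only 30 GB drawn from the pool
print(pool.allocated_gb)                  # -> 30
```

The point of the sketch is the gap between `virtual_size_gb` and `allocated_gb`: utilization is driven by what is written, not by what was provisioned.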
  • The VNX series is managed by EMC Unisphere 1.1, which enables a single system view of unified, file, and block systems, with all features and functions available in a single interface. Unisphere 1.1 is also compatible with older systems, including CLARiiON and Celerra systems running DART 6.0 or higher. Unisphere offers a new approach to storage management. It is an easy-to-use, sleek interface that provides a simple, integrated management experience for the VNX series, VNXe series, CLARiiON, and Celerra. It is extensible and supports other EMC midtier products such as RecoverPoint, Replication Manager, Data Protection Advisor, and Atmos Virtual Edition. Log on to Unisphere once and discover all the systems in your environment. Unisphere offers built-in contextual access to the support ecosystem and is your one-stop shop for all your support and service needs. To leverage all these benefits, you don't need to make any radical changes to your existing infrastructure; Unisphere can be seamlessly added to your current environment. Let's take a closer look at each of these capabilities:
    - Common experience for SAN and NAS: one look and you know what's going on; single sign-on; aggregated alerts; summary views; reports with sorting and filtering; data exportable to third-party tools
    - Tight VMware integration: automatic discovery of virtual machines and ESX servers; end-to-end virtual-to-physical mapping; automated virtual infrastructure reporting
    - Built-in access to the support community: one-click self-service
  • Note to Presenter: View in Slide Show mode for animation. The VNXe series was specifically designed to integrate with your server and application environments. The wizards for provisioning new storage do it in the context of the application, rather than as just generic capacity. There's no need to be a RAID expert; simply let the wizard be your guide. Gain instant expertise and let the system do the hard work. For example, the Microsoft Exchange wizard asks you for your Exchange version, how many mailboxes are needed, and what size, and then automatically creates the storage volumes needed for the data and log files. The wizards take the guesswork out of provisioning storage and embody years of EMC experience and best practices. If you are creating volumes using iSCSI, or file shares for CIFS or NFS, the appropriate wizard will create the storage, set up appropriate access, and enable snapshots and even external replication, all in fewer than 10 clicks. Simply confirm your selections or accept the defaults, and it is all done for you. With the VNXe series, storage is now easier than ever.
  • Consolidation of the midtier data protection software, specifically providing a single solution for local and remote data protection, replication, failover, and disaster recovery:
    - New packaging reduces the number of SKUs to simplify ordering
    - All titles are now offered with array pricing, making quoting and selling easier
    - A differentiated solution that bundles best-of-breed technologies at competitive price points
    - Tighter alignment with the array business to drive software replication penetration rates
  • For simple peace of mind and total protection, whether local or remote, whether for encryption or for application protection, the VNX series protects your system better than ever. The VNX series provides unified replication for local and remote data recovery, with DVR-like rollback capabilities for business continuity on block-based storage. By allowing recovery of production applications with minimal data exposure through point-in-time rollback, you can now restore with a simple click. Simply define your recovery point objectives: set it and forget it. Automating processes for failover and failback further reduces risk exposure and simplifies ongoing protection management.
  • EMC is the storage platform of choice in the virtualized server market. The reason that two out of three CIOs choose EMC for their VMware environments is simple: EMC delivers the high-availability platforms that are critical in the virtual server market. High availability, advanced functionality, and excellent service place Celerra ahead of its nearest competitor. EMC Celerra provides the same I/O workload before and after a failure, which is a necessity when you have many virtual applications dependent on this I/O payload. With other vendors, a failure can cause serious degradation, resulting in all virtualized applications degrading concurrently. As shown in the Forrester Research and Goldman Sachs survey results on the slide, EMC is the number one vendor for VMware-attached storage. Note to Presenter: The quote on the slide comes from the Goldman Sachs "IT Spending Survey."
  • The VNX platform is optimized for virtualization, with over 50 points of tight integration. Until today, one of the major challenges facing users in virtual environments was a management complexity gap. Storage administrators have access to detailed information on the array but lack visibility into how the virtual server is configured and which virtual machines are consuming which storage resources. VMware administrators, on the other hand, can see the details of the virtual server environment and virtual machines but lack visibility into the storage system. On VNX, Unisphere together with vCenter Server integration makes storage management in a virtualized environment a seamless experience. Each administrator can use their familiar interface to gain full visibility into virtual and physical resources, transparently provision storage, integrate replication, and offload storage functions to the storage system. To further drive efficiency in the VMware space, EMC has delivered on the VMware vStorage APIs for Array Integration (VAAI), allowing the VNX series to be fully optimized for virtualized environments. This technology offloads VMware storage-related functions from the server to the storage system, enabling more efficient use of server and network resources for increased performance and consolidation. Letting VNX perform common data management tasks such as those behind vMotion results in:
    - 10 times less network I/O
    - 10 times more virtual machines
    - 10 times faster response time
    VNX delivers unmatched VMware management integration and optimization and is a perfect match for anyone running VMware.
  • EMC has several infrastructure solutions for a virtual desktop environment. Our most recent solution leverages EMC Symmetrix VMAX with Symmetrix Virtual Provisioning and VMware View Composer to significantly reduce storage costs, by up to 50 percent, while maintaining overall performance levels under load. Leveraging EMC Symmetrix Virtual Provisioning with VMware View Composer, you are able to minimize storage requirements. A consistent end-user experience enables you to better support your business's productivity. Finally, with centralized desktop management, you can manage and maintain the environment from the data center instead of going to each device in your company.
  • The EMC Integrated Infrastructure for VMware enables you to quickly deploy a virtualized infrastructure. This white paper demonstrates how to add VMware View to that infrastructure for a turnkey virtual desktop solution. Leveraging the best practices in the white paper, you can effectively use the EMC Integrated Infrastructure for VMware to support your virtual desktop environment and reduce management and operational costs.
  • The VNX series is based on an industry-leading architecture that lets you configure purpose-built components designed specifically for different workloads, and the platform supports each connectivity option concurrently within that purpose-built modular architecture. VNX gives you the choice of low-cost IP or high-throughput Fibre Channel connectivity, and delivers both file and block protocols. For higher throughput, Fibre Channel is the growth path for iSCSI (block), and MPFS is the growth path for NAS (file). VNX lets you start small and scale throughput and capacity. Multiprotocol support for NAS (CIFS and NFS), MPFS, native iSCSI, Fibre Channel, and Fibre Channel over Ethernet, as well as pNFS, all within one unified platform and at no additional cost, offers cost-effective flexibility in deployment options and makes VNX an easy purchasing decision.
    - NAS: file-sharing protocol for Windows and UNIX systems. Typical use cases include traditional NAS (CAD/CAM, software engineering) and non-traditional NAS (Oracle, VMware).
    - MPFS: Multi-Path File System for improved performance and scalability. See pNFS for use cases.
    - pNFS: public-domain equivalent of MPFS, supported on UNIX and Linux systems in conjunction with NFS v4.1. Support for pNFS on VNX is provided at no additional cost with the advanced protocol license option. Typical use cases for pNFS and MPFS include image processing, bioengineering, financial analysis, and oil and gas.
    - iSCSI: advanced iSCSI implementation using the familiar native CLARiiON block LUN model, with fast failover and full CLARiiON feature support. Use cases are similar to Fibre Channel, although more typically seen in the commercial and small-to-medium business space.
    - Fibre Channel: high-speed networking protocol, primarily used in storage area networks, providing the full native CLARiiON feature set. Typical use cases include database/data warehouse, VMware, and other high-performance needs.
    - Fibre Channel over Ethernet: high-speed block protocol over converged (data center) Ethernet transport; a new protocol appearing in data centers to reduce infrastructure costs by consolidating all storage and data networking onto a single Ethernet network. Use cases are similar to Fibre Channel.
    - Cloud: cloud storage uses open protocols (REST and SOAP) to deliver public and private cloud solutions that leverage the proven back-end storage functionality of VNX. The cloud offering is based on Atmos VE running on VMware and using block (FC or iSCSI) or file (NFS) connections to the VNX platform. Cloud is further supported within Unisphere via Link and Launch. Use cases include content-rich web applications, infrastructure as a service, and archiving to the cloud. Note to Presenter: Further details are found later in this presentation.
  • Note to Presenter: View in Slide Show mode for animation. In order to have enough processing power to run all the advanced data management functions and protocols simultaneously, the VNX series is based on an industry-leading modular architecture. This architecture allows you to scale purpose-built controllers that are designed specifically for the different workloads to be processed. The VNX series modular architecture delivers maximum flexibility and performance without compromise. Storage processor modules manage the storage pool and provide SAN block-level access (iSCSI, Fibre Channel, or Fibre Channel over Ethernet). X-Blades add scale-out shared networked file system support (CIFS, NFS with pNFS, or Multi-Path File System) and can be added independently, without impacting the overall system or requiring planned downtime. In addition, VNX seamlessly supports EMC Atmos for cloud connectivity and object protocols (REST or SOAP). Storage resources are managed in virtual, self-optimizing storage pools, ensuring no stranded, unused resources. Frequently accessed data is automatically moved to high-performance Flash drives, and infrequently accessed data is moved to high-capacity, low-cost disk drives. This way, active data is served as quickly as possible while keeping storage cost under control. The VNX modular architecture is open. EMC has packaged the X-Blades into a VNX NAS gateway with full support for EMC block storage (Symmetrix, VNX SPs, and CLARiiON), enabling scaling up to four storage arrays. So when more storage pool processing, or investment protection, is needed, the VNX modular architecture does not leave you behind. Note to Presenter: The slide graphic depicts the VNX series architecture. Pre-integrated VNX series systems scale to two storage processors and eight X-Blades. To scale the VNX architecture further, a separate configuration with VNX gateways front-ending four VNX series storage arrays is needed.
  • The graphs on this slide show the significant improvements in throughput for the VNX series platforms compared with the prior-generation CX4/NS series systems. The end-to-end architecture developments (faster CPU cores at up to 6 x 2.8 GHz, larger and faster memory with up to 24 GB of DDR3 at 1333 MHz, and faster internal buses at x8 PCIe Gen 2) deliver performance improvements of 2-3x over the prior platform. That performance improvement applies to all working sets and application types: Exchange, OLTP databases, data warehouse, and virtualized environments, as well as all file applications. One important thing to note is that the platforms only truly meet their full performance potential when implemented with Flash drives. The VNX is the first storage platform truly designed to take advantage of game-changing Flash technology.
  • The EMC VNX series also had the lowest overall response time (ORT) of the systems tested, taking the top spot with a response time of 0.96 milliseconds. EMC's response time is three times faster than the IBM offering in second place. Faster response times enable end users to access information more quickly and efficiently. Chris Mellor, in The Register blog entry "EMC kills SPEC benchmark with all-flash VNX," writes about IBM, HP, and NetApp: "For all three companies, any ideas they previously had of having top-level SPECsfs2008 results using disk drives have been blown out of the water by this EMC result. It is a watershed benchmark moment."
  • Note to Presenter: View in Slide Show mode for animation. While FAST provides automated and efficient tiering over time, FAST Cache leverages enterprise Flash drives to extend existing cache capacities to automatically absorb unpredicted spikes in application workloads, thereby speeding system and application performance for data that is not already at the Flash tier. Where FAST with sub-LUN tiering works at the fairly granular level of 1 GB chunks, FAST Cache takes the concept one step further by working at the 64 KB I/O level. By doing so, FAST Cache acts more like a dynamic, but persistent, controller cache. By extending controller cache with Flash, the cache-hit ratio is dramatically improved. As a result, the new goal for most application workloads is to strive for a 90 to 95 percent cache-hit rate. This is achievable because the Flash-based cache is up to 64 times larger than the controller's original DRAM (dynamic random access memory) cache. Cache-hit rates will typically go from one out of five I/Os served from cache to nine out of ten I/Os served from cache, a 4.5-times improvement. FAST Cache may be added to existing LUN configurations and acts as a system-wide resource. With FAST Cache, you now have a multi-terabyte, read-write, non-volatile cache, an absolute first for storage platforms in the midtier market. Because data is written to enterprise Flash drives, when the system returns from a power failure or planned outage the cache is already warmed up, and service levels can readily resume at the point they were before the disruption. The size of FAST Cache is more than ample to catch transitory spikes in I/O demand. Should large amounts of Flash be needed to meet service-level agreements, it is important to know that FAST Cache works in unison with FAST sub-LUN tiering and that the two technologies complement each other fully. Note to Presenter: EMC's FAST Cache works for both reads and writes. Competitors, like NetApp, frequently implement Flash only as proprietary read-only schemes.
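The 4.5-times figure quoted above is simple arithmetic on the hit rates. The sketch below also shows an illustrative effect on average service time; the per-I/O latencies are hypothetical values assumed for the example, not EMC measurements:

```python
# Cache-hit improvement from the slide: 1-in-5 hits -> 9-in-10 hits.
before, after = 1 / 5, 9 / 10
print(after / before)  # -> 4.5

# Illustrative effect on average service time (assumed latencies:
# 1 ms for a cache hit, 8 ms for a disk miss -- hypothetical values).
hit_ms, miss_ms = 1.0, 8.0
avg_before = before * hit_ms + (1 - before) * miss_ms  # ~6.6 ms
avg_after = after * hit_ms + (1 - after) * miss_ms     # ~1.7 ms
print(round(avg_before, 1), round(avg_after, 1))
```

Note that the average-latency gain (roughly 4x under these assumed numbers) tracks the hit-rate gain only because misses dominate the average.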
  • FAST Cache will turbo-charge most applications, since frequently accessed data is automatically moved to cache in 64 KB chunks. EMC is one of the few companies offering both read and write acceleration using Flash drives. Two or more Flash drives may be used to form up to 2 TB (4 TB raw) of mirrored read and write cache. Both database and VMware data sets benefit from FAST Cache, and there is little doubt that FAST Cache will help any blend of applications. The only place where FAST Cache will be of little help is for 100 percent sequential workloads; to accelerate such workloads, add Flash drives to your FAST pools.
  • FAST Cache can be enabled at the LUN level (for classic RAID group LUNs) or at the pool level when using thick or thin pool LUNs, in which case all LUNs in the pool are enabled for FAST Cache. It is not recommended to enable FAST Cache for all LUNs in the system, as some workloads do not see sufficient cost/benefit from it. The FAST Cache capacity options listed are the maximum sizes of FAST Cache for each platform. These capacities are presented as maximum capacity based upon the drive type (capacity using 100 GB drives / maximum capacity using 200 GB drives) and represent the usable mirrored capacity. 100 GB and 200 GB drives cannot be mixed in a FAST Cache configuration. FAST Cache benefits most workloads and also supports internal functionality such as snaps, thin LUNs, and compressed LUNs. Note to Presenter: Be aware that the VNX5100 cannot support BOTH thin provisioning and FAST Cache at the same time, due to physical memory constraints.
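As a rough illustration of the promote-on-access idea behind FAST Cache, here is a toy model of a 64 KB extent cache. The promotion threshold, LRU eviction, and capacity are assumptions made for the sketch; this deck does not specify FAST Cache's actual promotion heuristics:

```python
from collections import defaultdict

# Sketch: a 64 KB extent is copied into Flash cache once it has been
# touched "enough" times; the least-recently-used extent is evicted
# when the cache is full. All thresholds here are hypothetical.
class FastCacheSketch:
    def __init__(self, capacity_extents, promote_after=3):
        self.capacity = capacity_extents
        self.promote_after = promote_after   # hypothetical threshold
        self.touches = defaultdict(int)
        self.cached = []                     # LRU order: oldest first

    def access(self, extent):
        """Return 'cache' or 'disk' for this 64 KB extent access."""
        if extent in self.cached:
            self.cached.remove(extent)
            self.cached.append(extent)       # mark most recently used
            return "cache"
        self.touches[extent] += 1
        if self.touches[extent] >= self.promote_after:
            if len(self.cached) >= self.capacity:
                self.cached.pop(0)           # evict LRU extent
            self.cached.append(extent)       # copy extent into Flash cache
        return "disk"

cache = FastCacheSketch(capacity_extents=2)
for _ in range(3):
    cache.access("A")        # third touch promotes A into the cache
print(cache.access("A"))     # -> cache
```

The essential behavior the toy model captures is that repeatedly touched extents migrate into Flash and subsequent reads and writes are absorbed there, which is why bursty hot spots benefit most.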
  • As its name implies, FAST is a completely automated feature that implements a set of user-defined policies to ensure it is working to meet the data service levels required by the business. Typically FAST will move data between Flash, SAS, and near-line SAS media as it ages and becomes less active, although customers may decide to configure a separate pool and use FAST with just a small amount of SAS and near-line SAS for optimized TCO for their third-tier applications. FAST policies control how FAST applies to individual LUNs in a storage pool via the following options:
    - Auto-tier: the default setting for all pool LUNs upon creation. FAST relocates slices of these LUNs based solely on their activity level, after all slices with the highest/lowest available tier setting have been relocated.
    - Highest available tier: select for LUNs which, although not always the most active, require high levels of performance whenever they are accessed. FAST prioritizes slices of LUNs with this setting above all others.
    - Lowest available tier: select for LUNs that are not performance- or response-time-sensitive. FAST keeps slices of these LUNs on the lowest storage tier available, regardless of activity level.
    - No data movement: may only be selected after a LUN has been created. FAST will not move slices from their current positions once this selection has been made.
    The tiering policy chosen also affects the initial placement of a LUN's slices within the available tiers. Initial placement with the pool set to auto-tier results in data being distributed across all storage tiers available within the pool, based upon the relative capacity of each tier. LUNs set to highest available tier have their component slices placed on the highest tier with available capacity; LUNs set to lowest available tier have their component slices placed on the lowest tier with available capacity. Additionally, a relocation schedule is set and the rate of data relocation is defined. This allows the relocation process to run automatically at a quiet time of day, minimizing the impact on the ongoing workload. Typically the relocation process is scheduled every 24 hours, but it is possible to relocate data as frequently as you want, although there is a tradeoff between system resources and relocation frequency. FAST VP also supports manual relocation, should a relocation be required outside of the regular schedule.
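A minimal sketch of how the placement-affecting policies above could govern initial slice placement. The tier names and capacities are illustrative, and the auto-tier rule here (pick the tier with the most free slices) is a stand-in approximation for the proportional distribution the text describes:

```python
# Tiers ordered highest-performance first, each with free 1 GB slice slots.
tiers = {"flash": 10, "sas": 50, "nl-sas": 200}

def place_slice(policy):
    """Pick a tier for a newly allocated 1 GB slice under a FAST VP policy."""
    order = list(tiers)                        # highest tier first
    if policy == "lowest":
        order = order[::-1]                    # lowest tier first
    elif policy == "auto":
        # Approximation of proportional distribution: take the tier
        # with the most free slices (hypothetical simplification).
        order = sorted(tiers, key=tiers.get, reverse=True)
    for tier in order:                         # first tier with free space
        if tiers[tier] > 0:
            tiers[tier] -= 1
            return tier
    raise RuntimeError("pool full")

print(place_slice("highest"))  # -> flash
print(place_slice("lowest"))   # -> nl-sas
print(place_slice("auto"))     # -> nl-sas (largest free capacity)
```

The "no data movement" policy is omitted because it only constrains later relocation, not initial placement.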
  • The value of FAST is based upon its ability to drive as much of the highly accessed data components on to the highest tiers of storage (eg Flash) while optimizing for TCO by driving low accessed data to high capacity NL-SAS drives. It achieves this with a number of mechanisms:Statistics collection - One slice of data is deemed “hotter” (more activity) or “colder” (less activity) than another based on the relative activity level of those slices. Activity level is determined simply by counting the number of I/Os, reads and writes bound for each slice. FAST maintains a cumulative I/O count and weights each I/O by how recently it arrived. This weight deteriorates over time. New I/O is given full weight. After approximately 24 hours, the same I/O will carry only about half-weight. Over time the relative weighting continues to go down. Statistics collection happens continuously in the background on all pool LUNs.Analysis - Once per hour, the collected data is analyzed. This analysis produces a rank ordering of each slice within the pool. The ranking progresses from the “hottest” slices to the “coldest.” This ranking is relative to the pool. A “hot” slice in one pool may be “cold” by another pool’s ranking. There is no system-level threshold for activity level. The user can influence the ranking of a LUN and its component slices by changing the default policy from auto-tier to either highest or lowest tier preferred, in which case the tiering policy will take precedence over activity level.Relocation - During user-defined relocation windows, 1 GB slices are promoted according to the rank ordering performed in the analysis stage. During relocation, FAST will prioritize relocating slices to higher tiers. Slices are only relocated to lower tiers if the space they occupy is required for a higher priority slice. 
In this way, FAST attempts to extract maximum utility from the highest tiers of storage: as data is added to the pool it is initially distributed across the tiers and then moved up to the higher tiers as space allows. Ten percent of the space in each tier is maintained to absorb new allocations defined as "Highest Available Tier" between relocation cycles, and lower-tier spindles are used as capacity demand grows. Relocation can be initiated either manually or by a user-configurable, automated scheduler.
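A minimal sketch of that relocation pass, assuming the slices arrive already rank-ordered hottest-first and that each upper tier keeps 10% headroom for new allocations (the tier dictionary shape is an assumption for illustration):

```python
def relocate(ranked_slices, tiers):
    """Place rank-ordered slices into tiers, fastest tier first, filling
    each upper tier only to 90% so headroom remains for new
    'Highest Available Tier' allocations between relocation cycles.

    tiers: list of {"name": str, "capacity_slices": int}, fastest first.
    """
    placement = {}
    queue = list(ranked_slices)
    for tier in tiers[:-1]:
        budget = int(tier["capacity_slices"] * 0.9)  # keep ~10% free
        while queue and budget > 0:
            placement[queue.pop(0)] = tier["name"]
            budget -= 1
    for s in queue:  # whatever remains lands on the lowest, highest-capacity tier
        placement[s] = tiers[-1]["name"]
    return placement
```

So with a 10-slice Flash tier, the nine hottest slices land on Flash and everything else spills down, which mirrors how demotion only happens to make room for higher-priority slices.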
  • Slide 1 of 3 – The following three slides show a visual animation of FAST VP operating in conjunction with FAST Cache. FAST Cache and FAST VP should be used together to get high performance and good TCO from the storage system. As an example, Flash drives can be used to create FAST Cache, and FAST VP can be used on a pool consisting of Flash, SAS, and NL-SAS disk drives. This slide shows activity levels changing and the subsequent scheduled data relocation. Note to Presenter: <click 4x> four blocks change color (activity level) in sequence; <click 5th> two blocks move up and two move down.
  • Slide 2 of 3 – This shows sub-slice chunks of 64 KB granularity heating up and being copied into FAST Cache. Note to Presenter: <click 3x> sub-slice chunks turn red and are copied into FAST Cache, in sequence.
  • Slide 3 of 3 – This shows more active I/O being swapped into FAST Cache. Note to Presenter: <click 3x> sub-slice chunks in FAST Cache cool off relatively (turning yellow); when sub-slice chunks on disk warm up (turn red), they swap places in FAST Cache with the cooler data, in sequence. The combined benefit shown here is that FAST Cache provides an immediate performance benefit to bursty data, while FAST VP moves warmer data to SAS drives and colder data to NL-SAS drives. Beyond the performance benefit there is also a TCO benefit: FAST Cache, with a small number of Flash drives, serves the most frequently accessed data, while FAST VP, with Flash, SAS, and NL-SAS drives, optimizes disk utilization and efficiency and accommodates the vagaries of longer-term data access patterns.
  • VNX Overview

    1. 1. VNX Overview<br />Presenter: Allan Trotman<br /> EMC – Unified Infrastructure Group<br />Advisory Technical Consultant<br />
    2. 2. IT Challenges: Tougher than Ever <br />Four central themes facing every decision maker today<br />Overcome flat budgets<br />Manage escalating complexity<br />Cope with relentless data growth<br />Meet increased business demands <br />
    3. 3. EMC Unisphere<br />Next-Generation Unified Storage<br />Optimized for today’s virtualized IT<br />VNXe3100<br />VNX7500<br />VNX5700<br />VNXe3300<br />VNX5100<br />VNX5500<br />VNX5300<br />Affordable.Simple. Efficient.Powerful.<br />
    4. 4. Positioning - Customer Segments
       Data Center (Enterprise): VNX series
       Buyer: IT Infrastructure / Storage Specialist
       Buying criteria: Efficiency, Performance, Feature/Function, Upgradeability
       Dept/ROBO (Mid Market): VNXe series
       Buyer: IT Generalist / Application Specialist; remote office/branch office (ROBO) or a department within an enterprise
       Buying criteria: Price and Simplicity; Application Performance; ROBO: central management; Departmental: "vertical" app affinity
       SMB/Mid (Mid Market, SMB): VNXe series
       Buyer: IT Generalist / Application Specialist
       Buying criteria: Price and Simplicity, Application Performance, Flexibility
       SMB: limited opportunity (server affinity / cloud services)
    15. 15. Purpose Designed Architectures
       VNX series: Modular Unified. Configurable for purpose (File, Block and/or Object), maximizing performance, scalability and flexibility as you grow
       VNXe series: Integrated Unified. Designed for the IT generalist, maximizing simplicity and ease of use (no X-Blades, Control Stations, etc.) while providing balanced performance, scalability and flexibility
       (Diagram: protocol stacks (SOAP, REST, FC, FCoE, iSCSI, CIFS, NFS), X-Blades, Storage Processors A and B, and virtual storage pools spanning Flash (highest performance), SAS (good performance), and NL-SAS (highest capacity) tiers.)
    17. 17. VNXe SeriesModels<br />Simple. Efficient. Affordable.<br />
    18. 18. VNX Family Common Functionality
       VNX Series and VNXe Series
       Unified platform: one platform supporting File, Block and Object
       Easily upgradable: start with File or Block and upgrade to Unified
       Centralized management: one management framework
       Capacity optimization services: Virtual Provisioning, Compression, File Dedupe
       Powerful: new multi-core Intel CPUs and 6 Gb/s SAS back end
       High availability: designed to deliver five-nines (99.999%) availability
       Optimized for all virtual applications: VMware and Hyper-V integration
       Packaged software: simple software suites
    19. 19. VNX Series Hardware<br />Simple. Efficient. Powerful.<br />
    20. 20. 3x Better Performance <br />More users, more transactions, better response time<br />FAST Cache<br />FAST VP <br />3X<br />VNXPlatform<br />Faster<br />CX/NS Platforms<br />
    21. 21. Virtualization Changes Everything<br />To move to virtual servers demands new storage solutions<br />Self-Optimizing Storage Pools<br />Static RAID Groups<br />Virtual server pool<br />Discrete servers<br />RAID groups<br />Storage pool<br />Storagesystem<br />Storagesystem<br />AUTOMATIC APPLICATION OPTIMIZATION <br />AUTOMATIC DATA OPTIMIZATION <br />
    22. 22. Flexible Storage Tiers
       Optimize TCO with tiered service levels
       SAS back-end connect for performance and reliability: up to 24 Gb/s (4 x 6 Gb/s) per SAS bus; point-to-point, robust interconnect
       Flash (SSD) options: highest-performing drives; 3.5" 100 GB and 200 GB; ~3,000 IOPS per drive
       SAS (HDD) options: 3.5" drives (195 drives/rack), 300 GB and 600 GB, 10K and 15K RPM, ~140 (10K) to ~180 (15K) IOPS per drive; 2.5" drives (500 drives/rack), 300 GB and 600 GB, 10K RPM
       Near-line SAS (HDD) options: 3.5" drives (195 drives/rack), 2 TB, 7.2K RPM, ~90 IOPS per drive
       AUTOMATIC DATA OPTIMIZATION: a virtual storage pool spanning Flash (highest performance), SAS at 10K/15K RPM (good performance), and Near-line SAS at 7.2K RPM (highest capacity)
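The per-drive figures on this slide allow a rough back-of-envelope IOPS ceiling for a mixed pool. This is only a sizing sketch: it ignores RAID write penalty, cache hits, and I/O size, all of which change real throughput:

```python
# Approximate small-block random IOPS per drive, taken from the figures above
IOPS_PER_DRIVE = {"flash": 3000, "sas_15k": 180, "sas_10k": 140, "nl_sas": 90}

def pool_iops(drive_counts: dict) -> int:
    """Rough aggregate IOPS ceiling for a pool of mixed drive types."""
    return sum(IOPS_PER_DRIVE[t] * n for t, n in drive_counts.items())
```

For example, a small pool of 4 Flash, 20 15K SAS, and 15 NL-SAS drives has a nominal ceiling of 4 x 3,000 + 20 x 180 + 15 x 90 = 16,950 IOPS, which shows why a few Flash drives dominate the pool's random-I/O capability.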
    32. 32. The FAST Suite
       Highest performance and capacity efficiency, automatically
       FAST Cache continuously ensures that the hottest data is served from high-performance Flash SSDs (real-time caching)
       FAST VP, supporting both file and block, optimizes storage pools automatically, ensuring only active data is served from SSDs while cold data is moved to lower-cost disk tiers (scheduled optimization)
       Together they deliver a fully automated "FLASH 1st" storage strategy for optimal performance at the lowest attainable cost
    35. 35. FAST Cache Approach
       Page requests are satisfied from DRAM cache if the page is available there
       If not, the FAST Cache driver checks the map to determine where the page is located
       The page request is satisfied from the disk drives if the page is not in FAST Cache
       The Policy Engine promotes a page to FAST Cache if it is being used frequently
       Subsequent requests for that page are satisfied from FAST Cache
       Dirty pages are copied back to the disk drives as a background activity
       (Diagram: applications (Exchange, SharePoint, Oracle, database, file, VMware, SAP) above the DRAM cache, Policy Engine, FAST Cache driver, FAST Cache, and disk drives.)
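The lookup path above can be sketched as a toy cache model. The promotion threshold here is a made-up constant for illustration; the real Policy Engine's criteria are internal to the array:

```python
PROMOTE_AFTER = 3  # hypothetical hit count; the real policy engine's rule differs

class FastCacheSketch:
    """Toy model of the read path: DRAM first, then the FAST Cache map,
    then the backing disk, promoting frequently requested pages."""

    def __init__(self, disk: dict):
        self.disk = disk    # page -> data, stands in for the disk drives
        self.dram = {}      # controller DRAM cache
        self.flash = {}     # FAST Cache (Flash-backed), i.e. the "map"
        self.hits = {}      # per-page access counts seen by the policy engine

    def read(self, page):
        if page in self.dram:
            return self.dram[page], "dram"
        if page in self.flash:                  # driver consults the map
            return self.flash[page], "fast_cache"
        data = self.disk[page]                  # satisfied from disk drives
        self.hits[page] = self.hits.get(page, 0) + 1
        if self.hits[page] >= PROMOTE_AFTER:    # policy engine promotes hot page
            self.flash[page] = data
        return data, "disk"
```

After a page has been fetched from disk a few times it is promoted, and every later request is served from Flash, which is the behavior the bullets describe.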
    41. 41. FAST VP for Block and File Access
       Optimize VNX for minimum TCO (before and after)
       Automates movement of hot or cold blocks
       Optimizes use of high-performance and high-capacity drives
       Improves cost and performance
       (Diagram: a pool with LUN 1 and LUN 2 spread across Tier 0, Tier 1 and Tier 2, colored by most, neutral and least activity.)
    42. 42. Proven Results
       Reference architectures, white papers, etc. available at GA
       Run applications faster than ever: 3 times the performance
       3X the number of users; 3X the number of transactions; 3X faster VMware View boot time; 3X quicker VMware View refresh time
       FAST and FAST Cache: optimal performance at the lowest possible cost
       29% lower 3-year TCO and 50% lower operating cost: a CLARiiON CX4-960 (130 TB raw) with FAST Cache (8 x 73 GB Flash) and tiered storage (30 x 600 GB 15K, 57 x 2 TB SATA) running Unisphere and the Advanced FAST Suite, versus an all-Fibre-Channel CX4-960 (130 TB raw, 220 x 600 GB 15K) with no FAST, no FAST Cache and no tiering; three-year OpEx comprises maintenance, power and cooling, and management, on top of system cost
    48. 48. 3 Times More Efficient<br />More storage, better utilization, lower cost<br />FASTSuite<br />File De-Dupe &Compression<br />3X<br />Thin Provisioning<br />More Efficient<br />ClassicProvisioning<br />
    49. 49. VNX Thin Provisioning<br />User B10 GB<br />User A<br />10 GB<br />User C10 GB<br />Logical <br />application <br />and user view<br />Physical <br />allocation<br />4 GB<br />Physical consumed storage<br />2 GB<br />2 GB<br />Only allocate the actual capacity required by the application<br />VNX THIN PROVISIONING<br />Capacity oversubscription allows intelligent use of resources<br />File systems<br />FC and iSCSI LUNs<br />Logical size greater than physical size<br />VNX Thin Provisioning safeguards to avoid running out of space<br />Monitoring and alerting<br />Automatic and dynamic extension past logical size<br />Automatic NAS file system extension<br />FC and iSCSI dynamic LUN extension<br />Capacity on demand<br />
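The oversubscription arithmetic on this slide (three 10 GB users consuming only 8 GB of physical capacity) can be sketched as a small report, with monitoring and alerting reduced to a threshold check. The 80% alert level is an assumed example, not a VNX default:

```python
def pool_report(luns, pool_capacity, alert_pct=80):
    """Summarize a thin pool: logical (subscribed) size may exceed
    physical capacity; alert when consumed space crosses a threshold."""
    logical = sum(l["logical"] for l in luns)
    consumed = sum(l["consumed"] for l in luns)
    return {
        "subscribed_pct": 100 * logical / pool_capacity,
        "consumed_pct": 100 * consumed / pool_capacity,
        "alert": 100 * consumed / pool_capacity >= alert_pct,
    }
```

Using the slide's numbers against a hypothetical 20 GB pool, the pool is 150% subscribed but only 40% consumed, which is exactly the safe-oversubscription situation thin provisioning is designed to exploit.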
    50. 50. VNX Virtual Provisioning<br />Thick pool LUN<br />Full capacity allocation<br />Near RAID-Group LUN performance<br />Capacity reserved at LUN creation<br />1 GB chunks allocated as relative block address is written<br />Thin pool LUN<br />Only allocates capacity as data is written by the host<br />Capacity allocated in 1 GB chunks<br />8 KB blocks contiguously written within 1 GB<br />8 KB mapping incurs some performance overhead <br />
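The thin pool LUN behavior above (capacity allocated in 1 GB chunks only as the host writes) can be sketched as follows; the 8 KB block map inside each chunk is omitted for brevity:

```python
CHUNK = 1 << 30  # pool LUNs allocate capacity in 1 GB chunks

class ThinLUN:
    """Sketch of thin allocation: pool capacity is consumed only when a
    1 GB chunk is first written. Within a chunk the array maps 8 KB
    blocks, which is the mapping overhead the slide mentions."""

    def __init__(self, logical_size: int):
        self.logical_size = logical_size
        self.allocated_chunks = set()  # chunk indices backed by pool capacity

    def write(self, offset: int, length: int):
        first = offset // CHUNK
        last = (offset + length - 1) // CHUNK
        self.allocated_chunks.update(range(first, last + 1))

    def consumed(self) -> int:
        return len(self.allocated_chunks) * CHUNK
```

A 10 GB thin LUN that has only ever seen 2 GB of writes consumes 2 GB of pool capacity, whereas a thick LUN would have reserved all 10 GB at creation.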
    51. 51. 3 Times Simpler<br />Smarter management, greater automation, faster results<br />3XSimpler<br />Manage it all :<br />VNX, CLARiiON and Celerra<br />Storage pools, virtualization, replication<br />File, block and object<br />…from a single pane of glass with Unisphere™<br />
    52. 52. Instant Expertise <br />Best-practice wizards configure storage with just a few clicks<br />Email Wizard<br />Set up hundreds of Exchange mailboxes in fewer than 10 clicks<br />Hyper-V Wizard<br />VMware Wizard<br />Set up 1 TB Hyper-V datastore in 10 minutes <br />Set up 1 TB VMware datastore in 10 minutes <br />Share Wizard<br />Volume Wizard<br />Set up NFS and CIFS shares in minutes<br />Set up iSCSI volumes in minutes<br />
    53. 53. VNX Series Software<br />Software Solutions Made Simple<br />Attractively Priced Packsand Suites<br />Total Efficiency Pack<br />FAST Suite<br />Security and Compliance Suite<br />TotalProtection Pack <br />Local Protection Suite<br />Remote Protection Suite<br />Application Protection Suite<br />VNX5100 does not support FAST VP, Event Enabler, FLR, Replicator, or SnapSure<br />It also does not support the FAST Suite or the Total Efficiency Pack, but a Total Value Pack instead<br />
    54. 54. Unified Remote Replication<br />Next Gen<br />Today/2011:<br />MirrorView/S<br />RecoverPoint/SE<br />MirrorView/A<br />Celerra Replicator<br />File System support starting in 1H’11<br />Granular RPO via journaling<br />Less bandwidth via Dedupe/Compression<br />Flexibility – Synch, Asynch<br />RecoverPoint<br />SIMPLIFY<br />single remote replication solution for any data type<br />INTEGRATE<br />integrate with Unisphere array manager<br />
    55. 55. Total Protection: Better than Ever<br />Local and remote data recovery with DVR-like (SKY+) rollback/roll-forward<br />Restore individual or multiple virtual machines with a single click<br />Define and enforce custom recovery point objectives and service level agreementsacross virtual infrastructure <br />Automated failover and failback<br />Proven Reference Architectures <br />Unified replication with the Total Protection Pack<br />
    56. 56. Any point in time<br />All recovery points<br />Significant point in time<br />Significant<br />points in time<br />Any point<br />in time<br />Continuous Recovery Points<br />Database<br />checkpoint<br />Pre-app<br />patch<br />Post-app<br />patch<br />Database<br />checkpoint<br />Quarterly<br />close<br />Any user-<br />configurable event<br />Daily recovery points—from tape or disk<br />Daily backup<br />More frequent disk-based recovery points<br />Snapshots<br />RecoverPoint<br />Snapshot<br />Daily backup<br />24 hours<br />Yesterday<br />Midnight<br />Now<br />
    57. 57. Traditional Backup<br />Daily Backup: Recovery point every 24 hours<br />SnapView with SnapSure<br />Snapshots/Clones: Recovery point every 3 hours<br />MirrorView with Replicator<br />Disk Mirroring: Recovery point latest image replicated<br />RecoverPoint/SE CDP and CRR<br />Continuous Data Protection: DVR like recovery<br />Unlimited recovery points, application bookmarks<br />(T) TIME<br />Checkpoint<br />Patch<br />Post-Patch<br />Cache Flush<br />Quarterly<br />Close<br />Hot<br />Backup<br />Checkpoint<br />Pre-Patch<br />Compared to Traditional Data Protection <br />
    58. 58. EMC: The VMware Choice<br />Two out of three CIOs pick EMC for their VMware environments<br />Trusted storage platform for the most critical and demanding VMware environments<br />Advanced integration and functionality that maximizes the value of a virtualized data center<br />Flexibility to meet infrastructure to business and technical needs<br />Knowledge, experience, and partnerships to make your virtual data center a reality<br />“Which vendor(s) supplied the networked (SAN or NAS) storage used for your virtual server environment?”<br />“Which is your storage vendor of choice in a virtual server environment?”<br />“EMC remains the clear storage leader in virtualized environments.”<br />
    59. 59. Optimized for Virtualization
       Seamless virtualization experience and acceleration
       VAAI offload: up to 10X less net I/O, more VMs, faster response
       Virtual server administrator: vCenter Server integration to manage storage and VM resources in unison
       Storage administrator: Unisphere to control offload functions and integrated replication
    60. 60. Virtualization Management<br />EMC Virtual Storage Integrator plug-in <br />EMC Virtual <br />Storage <br />Integrator<br />Integrated point of control to simplify and speed VMware storage management tasks<br /><ul><li>One unified storage tool for all Symmetrix, CLARiiON, Celerra, VNX series, and VNXe series</li></ul>VMware vSphere<br />Unified storage<br />
    61. 61. EMC Infrastructure for Virtual Desktops<br />Enabled by EMC VNX, VMware vSphere 4.1, VMware View 4.5, and VMware View Composer 2.5 <br />Improve desktop environment performance by up to 60% during boot storms leveraging EMC FAST Cache<br />Reduce storage disk costs by leveraging VMware View Composer and advanced storage technologies from EMC like EMC FAST VP and FAST Cache<br />Simplify your environment by reducing the number of disk drives, up to 77%<br />
    62. 62. EMC Integrated Infrastructure for Virtual Desktops<br />Whitepaper<br />Automate and simplify storage provisioning for virtual desktops<br />Accelerate deployment of virtual desktops<br />Integrate application best practices<br />Reduce management and operational costs<br />
    63. 63. EMC Cloud Tiering Appliance (CTA)
       Based on the EMC File Management Appliance
       Provides tiering, archiving and migration of file data from heterogeneous sources to the hybrid cloud (private and public)
       Retains the ability to tier, archive and migrate to traditional storage platforms
       CTA use cases:
       Tiering: auto-tiering from the data center to the cloud (cloud service providers) via EMC CTA, alongside VNX FAST VP
       Archive: archiving from primary NAS to EMC platforms such as Data Domain and Centera via EMC CTA
       Migration: migrating files from NetApp onto EMC storage (VNX, Isilon) via EMC CTA
    66. 66. CTA options for tiering to the Cloud<br />Migration<br />Move a file without leaving a stub<br />Tiering<br />Move a file and leave a stub behind on the source<br />Within the box or outside the box<br />Non-compliance requirements<br />Archive<br />Move a file and leave a stub behind<br />Compliance (WORM) and governance requirements<br />
    67. 67. CTA - Tiering/Archive
       Find a file that matches a policy: last access time, last modified time, file size, file type
       Copy that file to another location: a different tier within the box (e.g., FC to SATA) or off the box (VNX to Atmos)
       Replace the original file with a stub: the metadata in the stub contains all the information required for recall; the stub is 8 KB on VNX and 4 KB on NetApp
       The stub looks and behaves like a normal file: no need to train users or modify applications, and more primary storage space is now available
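The policy-match-then-stub flow above can be sketched as below. The policy thresholds (six months, 1 MB, file types) are invented examples; real CTA policies are configured per environment, and only the stub sizes come from this slide:

```python
STUB_SIZE = {"vnx": 8 * 1024, "netapp": 4 * 1024}  # stub sizes from this slide

def matches_policy(f, min_age_days=180, min_size=1 << 20, types=(".pdf", ".tif")):
    """Hypothetical policy: files untouched for 6+ months, over 1 MB,
    of the given types. Real CTA policy fields are configurable."""
    return (f["last_access_days"] >= min_age_days
            and f["size"] >= min_size
            and f["ext"] in types)

def tier(files, platform="vnx"):
    """Mark matching files as archived and replace each with a stub
    holding the metadata needed for later recall."""
    moved = []
    for f in files:
        if matches_policy(f):
            f["stub"] = {"target": f"archive://{f['name']}",
                         "size": STUB_SIZE[platform]}
            moved.append(f["name"])
    return moved
```

Each multi-megabyte file that matches is reduced to an 8 KB stub on primary storage, which is where the reclaimed capacity comes from.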
    68. 68. CTA - Migration<br />Stubless migration<br />Move the file without leaving a stub on the source<br />Supports standard CTA policies<br />Full or partial file systems<br />File size, file type, last access, last modified<br />Include/exclude dirs<br />Think “Robocopy” or “NDMPCopy” on steroids<br />Multi-protocol<br />CIFS, NFS<br />Out-of-band<br />Clients are able to r/w on source until final cutover<br />
    70. 70. What Is The VNX Series?
       Architecture:
       Modular unified (configured for purpose: File, Block and Object)
       High performance: up to 3x the performance, optimized for multi-core and Flash
       6 Gb/s SAS back-end infrastructure; denser packaging
       Expanded UltraFlex I/O: FCoE, FC, iSCSI, CIFS, NFS (including Parallel NFS (pNFS) and Multi-Path File System (MPFS))
       Green: high power efficiency
       Flash, SAS and NL-SAS drives (i.e., no FC drives)
       Product convergence:
       Easily upgradable; unified packaging; unified management; unified replication (phase 1); unified FAST Suite
       Yesterday to today: CLARiiON block systems (CX4-960, CX4-480, CX4-240, CX4-120, AX4) and Celerra file/unified systems (NS960, NS480, NS120, NX4) converge into the VNX series; start with file or block and easily upgrade to Unified
    82. 82. VNX Unified Storage<br />Maximum CPU cores of unified data processing power<br />OBJECT<br />Build a single cloud with “N” number of systems<br />MPFS/pNFS Host<br />AUTOMATIC SERVER OPTIMIZATION<br />NAS<br />CLOUD<br />X-blade<br />X-blade<br />X-blade<br />X-blade<br />X-blade<br />X-blade<br />X-blade<br />X-blade<br />BLOCK<br />12 cores dedicated to storage pool management and high performance block serving<br />FILE AND OBJECT<br />48 cores dedicated to object, networked file system management and data sharing<br />Storage<br />Processor<br />Storage<br />Processor<br />VIRTUALSTORAGE POOL<br />AUTOMATIC DATA OPTIMIZATION<br />Flash <br />(Highest performance)<br />NL-SAS<br />(Highest capacity)<br />SAS<br />(Good performance)<br />VNX supports all protocols—today and in the future<br />SAN<br />
    83. 83. Powerful, Flexible, Modular Architecture
       More processing power, self-optimizing pools, any network
       Multi-controller scale: add processors as needed; scale up to 96 CPU cores and thousands of drives
       Unified multi-protocol: share files and volumes on any network (Object: REST, SOAP; Block: iSCSI, FC, FCoE; File: CIFS, NFS, pNFS, MPFS)
       Self-optimizing pools: achieve the lowest transaction cost and the lowest capacity cost dynamically and simultaneously, all the time
       (Diagram: X-Blades and Storage Processors serving cloud, SAN and NAS networks over SSD and HDD pools.)
    85. 85. VNX – Designed for Flash<br />Optimized for all of your virtual applications<br />BETTER BANDWIDTH PERFORMANCE<br />BETTER PERFORMANCE FOR MIXED WORKLOADS<br />BETTER PERFORMANCE FOR FILE SERVING<br />Flash fully leverages the power of the VNX system<br />End-to-end throughput improvements enable 2-3x performance improvements<br />All claims are subject to validation testing<br />
    86. 86. VNX—Faster than the Rest
       Highest number of transactions and lowest response time: 3X faster than IBM
       (Chart: SPECsfs2008 NFSv3 results; response time in ms (lower is better) versus transactions (higher is better) for VNX, IBM, HP, and NetApp, from 50,000 to 500,000 transactions.)
    87. 87. Storage Innovation
       FAST Cache for maximum application performance: 4.5X better
       HDD (hard disk drive) performance has not kept up with server demands, leaving applications I/O-deficient
       FAST Cache solves this problem by increasing controller cache by 64 times using Flash drives
       Most reads and writes are now served directly from high-performance silicon
       FAST Cache preserves I/O slots for connectivity and is well positioned to take full advantage of any decrease in Flash drive costs
       FAST Cache's persistent nature keeps data cached even after a power failure
       (Diagram: without FAST Cache, 1 of 5 app-server I/Os is served from DRAM cache and 4 of 5 from disk; with FAST Cache, 9 of 10 I/Os are served from cache and 1 of 10 from disk.)
    88. 88. Celerra Innovation
       The FAST Cache effect on applications: uses system Flash drives to turbo-charge applications with up to 2 TB of FAST Cache (new feature)
       Microsoft SQL Server database: twice the number of users (50,000 vs. 25,000) and twice the number of transactions (2,445 vs. 1,228 TPS*) with FAST Cache
       VMware (150 virtual desktops): roughly half the boot time (9 minutes vs. 20 minutes) and about four-times faster response time (just over 1 minute vs. 4 minutes) with FAST Cache
       Vastly improved end-user experience; scale users faster and further with each system
       * Transactions per second
    89. 89. FAST Cache Configuration<br />FAST Cache supported for all VNX based platforms<br />Applicable to VNX for File (V7) and VNX for Block (R31)<br />Highly scalable<br />Up to 2.1 TB EMC FAST Cache (using 100 GB drives) extending System Cache by a factor of 90 <br />Reads and writes supported<br />Applies to classic LUNs and pool LUNs<br />Thick and Thin pool LUNs<br />A system-wide resource that benefits many workloads<br />Host application data : VMware, Oracle, SQL; OLTP/DW etc<br />Array-based data services (e.g. Snaps, etc.)<br />Two-click configuration in Unisphere<br />* The two numbers are when using 100 GB/200 GB drives, e.g., 5100 does not support 200 GB Flash drives<br />
    90. 90. Policy ensures storage service levels are met<br />1<br />Highest Tier PreferredMaximize performance<br />e.g. high-performance OLTP system<br />Auto-TierOptimize TCO and performance<br />e.g. databases with varied levels of activity among tables, or file systems with varied levels of activity across files<br />2<br />LUN<br />1<br />STORAGE POOL<br />Lowest Tier Preferred<br />Reduce TCO<br />e.g., archived or infrequently accessed data<br />3<br />LUN<br />LUN<br />2<br />3<br />SSD<br />HDD<br />FAST VP Policies<br />
    91. 91. FAST VP Operational Process<br />Statistics collection—cumulative I/O history (reads and writes)<br />Weights recent I/O history above longer-term I/O history<br />Maintains relative ranking of all data in pool, based on tier preference and I/O history:<br />Highest tier preference and high activity level get highest priority<br />Highest tier preference with less activity level get next highest priority<br />No tier preference (Auto-Tier) with high activity level gets next highest priority<br />When a pool is created, it is detected in the next poll cycle for inclusion in statistics collection<br />LUNs created in pool are likewise detected in the next poll cycle for inclusion in statistics collection<br />Poll occurs every hour<br />Relocation estimate—“amount to move up/down”—updated every hour<br />Tier utilization<br />Algorithm attempts to gain greatest utility from highest tiers<br />Data is demoted as space is needed in top tiers <br />Relocation granularity<br />Sub-LUN “slices” are 1 GB granularity<br />
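The ranking priority in the bullets (tier preference first, then activity level within the same preference) can be sketched as a sort key. The field names are assumptions for illustration:

```python
PREFERENCE_ORDER = {"highest": 0, "auto": 1, "lowest": 2}

def promotion_order(slices):
    """Order slices for promotion as the bullets describe: highest-tier
    preference first, then higher activity within the same preference.

    slices: list of {"id": str, "policy": str, "activity": float}
    """
    return sorted(slices,
                  key=lambda s: (PREFERENCE_ORDER[s["policy"]], -s["activity"]))
```

This captures why a quiet slice on a "highest tier preferred" LUN still outranks a busy slice on an auto-tier LUN: the tiering policy takes precedence over activity level.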
    92. 92. FAST Suite in Action—FAST VP
       FAST VP tiers across the drives in the pool (a FAST virtual pool of Flash, SAS, and NL-SAS drives)
       Optimizes drive utilization
       Relative ranking over time
       1 GB slices, ideal for deterministic data
    95. 95. FAST Suite—FAST VP + FAST Cache
       FAST VP tiers across the drives in the pool: optimizes drive utilization; relative ranking over time; 1 GB slices, ideal for deterministic data
       FAST Cache copies the hottest data to Flash: optimizes Flash utilization; dynamic movement in near real time; 64 KB sub-slices, ideal for bursty data
       (Diagram: DRAM cache and FAST Cache above a FAST virtual pool of Flash, SAS, and NL-SAS drives.)
    105. 105. VNX Series for SQL Server <br />More than 3x Performance Improvement with VNX and FAST Cache<br />4.5X performance improvement vs. previous generation<br />Achieves optimized performance without expensive data base and application tuning <br />Configuration: <br />VNX utilized 20 SAS and 4 SSDs drives with FAST Cache <br />vs CX4 with 20 FC drives<br />Relative Transactions <br />per second<br />
    106. 106. VNX Series for Virtual Desktop
       4x the number of virtual desktop users with the VNX series, FAST VP and FAST Cache at sustained performance
       Up to 70% reduction in storage cost for the same I/O performance
       Boot storm: 3x faster, booting and settling 500 desktops in 8 minutes vs. 27 minutes; FAST Cache absorbs the majority of the boot workload (i.e., I/O to spinning drives)
       Desktop refresh: refresh 500 desktops in 50 minutes vs. 130 minutes; FAST Cache serviced the majority of the I/O during refresh and prevents linked clones from overloading
       Configuration: Celerra NS with 183 x 300 GB 15K FC disks vs. VNX series with 5 x 100 GB SSD, 21 x 300 GB 15K SAS, and 15 x 2 TB NL-SAS
    107. 107. VNX Series for Oracle<br />Oracle Online Transaction Processing (OLTP) benefits from VNX<br />Improved Oracle transaction time by 3.25x<br />Using FAST Cache and vSphere 4.1<br />Reduces Storage Hardware cost by up to 50%<br />Configuration: <br />VNX utilized 20 SAS and 8 SSDs drives with FAST Cache <br />vs CX4 with 45 FC drives<br />Relative Transactions <br />per minute<br />If you hear Oracle OLTP, <br />VNX series is a great solution<br />VNX Series vs. with previous CX generation<br />
    108. 108. VNX Series for Oracle<br />Oracle Decision Support Systems (DSS) benefits from VNX<br />Up to 5x increase in bandwidth <br />From 800 MB/s to 4 GB/s<br />Up to 2.25x increase in number of users<br />With less storage, server and software infrastructure<br />Oracle DSS with VNX<br />Relative Performance <br />Relative Number<br />Of Users<br />If you hear Oracle DSS, <br />VNX series is a great solution<br />
    109. 109. Enhanced Virtual Provisioning<br />Storage pool virtualizes the storage provisioning model<br />Traditional RAID Groups<br />Flexible Pools<br />Replication Features, UQM, Analyzer, Virtual Provisioning, Compression, FAST VP, FAST Cache<br />Replication Features, UQM, Analyzer, FAST Cache<br />Optional<br />Features<br />LUN Migration, LUN Expansion,<br />LUN Shrink (Windows 2K8 only)<br />IncludedFeatures<br />LUN Migration, MetaLUNs<br />Thin<br /> LUN<br />Thick<br />LUN<br />Classic<br /> LUN<br />LUNs<br />Meta LUN<br />Storage<br />Pool<br />Drives<br />RAID Group<br />Flash<br />SAS<br />NL-SAS<br />
    110. 110. Migrate to ThinUsing LUN Migration or SAN Copy <br />Thickly Provisioned Data<br />Thinly Provisioned DataReduced storage capacity<br />Freed CapacityReturned to pool for usage by other LUNs<br />Space Reclamation with Thin<br />Storage Pool<br />
    111. 111. Migrate to ThinUsing LUN Migration or SAN Copy <br />Fully Provisioned Data<br />Thinly Provisioned DataReduced storage capacity<br />Enable <br />Compression<br />Compressed DataMax storage savings<br />Freed CapacityReturned to pool for usage by other LUNs<br />Storage Pool<br />Compression for More Capacity Savings<br />
    112. 112. Unisphere Quality of Service Manager<br />Application-based service level management<br />WithUnisphere Quality<br />of Service Manager<br />Available Performance<br />Available Performance<br />Applications<br />Applications<br />BeforeUnisphere Quality<br />of Service Manager<br />Manage block resources based on service levels<br />Monitor and achieve performance objectives for applications <br />Optimize performance based on policy management<br />Set performance goals for critical applications<br />Set limits on lower-priority applications<br />Schedule policies to run at different intervals<br />Measure and control storage based on different metrics <br />Response time (e.g., Exchange) <br />Bandwidth (e.g., backup to disk)<br />Throughput (e.g., OLTP applications)<br />Complements FAST VP and FAST Cache<br />Adds additional dynamic service level management to FAST VP and FAST Cache<br />MediumPriority<br />HighPriority<br />Low Priority<br />FAST SUITE<br />
    113. 113. EMC VNX Family Positioning<br />
    114. 114. CTA - Recall Policies<br />Full recall<br />File is recalled from the archive and replaces the stub<br />Requires adequate space on primary to rehydrate data<br />Used less than 5% of the time<br />Pass-through recall<br />Data is recalled and available to user<br />File remains in archive and stub remains on primary<br />Used 95% of the time in production environments<br />Important for backup and anti-virus<br />Partial recall<br />Primarily used for media files<br />Application requests a specific byte range to be returned<br />Does not replace the stub<br />
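The three recall modes above can be sketched as a single dispatch function. The dictionary shapes are assumptions for illustration; compliance handling and byte-range semantics in real CTA are richer than this:

```python
def recall(stub, archive, mode="pass_through", byte_range=None):
    """Sketch of the three CTA recall modes described above."""
    data = archive[stub["target"]]
    if mode == "full":
        # Full recall: file replaces the stub; primary must have space
        return {"data": data, "stub_kept": False}
    if mode == "partial":
        # Partial recall: return only the requested byte range (media use case)
        start, end = byte_range
        return {"data": data[start:end], "stub_kept": True}
    # Pass-through (the ~95% case): serve the data, keep file in archive
    return {"data": data, "stub_kept": True}
```

Pass-through being the default matches the slide's point that most production recalls (including backup and anti-virus scans) should not rehydrate data onto primary storage.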
    115. 115. CTA - Migration Details (cont.)<br />Supports stub migration<br />Auto-convert NetApp to Celerra/VNX stubs<br />Maintains deduplication<br />Audit trail<br />Log files that were migrated<br />Log files that were not migrated<br />Reporting<br />Provides standard graphical reports <br />Number of files<br />Total size of files<br />CPU throttle<br />Limit migration impact during peak usage hours<br />
    116. 116. CTA - Migration details (cont.)<br />Multiple source<br />Multiple target<br />Concurrent migrations<br />Based on NDMPCopy<br />Fixes cleanup issues with NDMPCopy<br />Purges deleted files/dirs<br />Syncs renames<br />Celerra<br />NetApp<br />NetApp<br />EMC Cloud Tiering Appliance<br />EMC Cloud Tiering Appliance<br />Celerra<br />Isilon<br />
    117. 117. 'Try before you buy'
       Blue Chip and EMC are offering a 45-day, no-obligation trial of VNX/VNXe*
       Contact your account manager or email for further details.
       *Terms and conditions apply