Intel is important to the way customers architect their storage environments. We are focused on storage TCO, performance improvements, and technology transitions.
Volume: exponential growth in sheer volume, unstructured data, and metadata. Example: enterprises are dealing with both the growth in volume and the complexity of managing that volume. Big Data is driving the need for different approaches to storage and for solving problems in different ways. Even reducing the number of times data gets copied would help (data is copied roughly 18x, per IBM; 40 ZB of data worldwide, per IDC).
Customer expectations are changing, and current storage systems need to evolve so that a new workload or service can be added in a matter of minutes instead of hours or days. Current storage environments are not responsive: dedicated appliances address individual storage needs such as de-duplication and backup. When a new workload or service is added, it lands on one particular system; it cannot span multiple systems in the data center. As a result, storage resources are not utilized at an optimal level.
Complexity and ability to use: across datasets, instrument, monitor, manage. One of the other major trends we see in storage today is driven by the fact that what we do with data is changing. It is no longer just about collecting data; we are using it to connect things in different ways. We are not just analyzing data for business intelligence, we are using it to predict next steps; we are not just collecting data in a structured way, we are also using unstructured data (in big data as well as video and images). So what we do with the data is changing, and what we expect our storage systems to do to support that is changing as well.

This is a picture of the Higgs boson effort. While that computation, which lasted months, was running, the HPC system was also tweeting its progress. We think of tweeting as a web application, but here a web application was running with an HPC system, based on data within the HPC cluster. 'Mashups' like this are starting to happen more and more. End users expect to be able to 'mash' their BI system with their HPC storage in order to extract the value of their data, for instance.
Abstracting hardware with software is not a new idea. In the last decade, server virtualization has fundamentally changed the economics and reality of deploying and managing IT infrastructure. SDI at its core is the extension of the principles and benefits of virtualization from individual servers to the entirety of the hyper-scale datacenter.
Software Defined Storage is the storage solution for SDI. That is, SDS is not a single storage system but a framework for managing a variety of storage systems in the data center. Much as in networking you don't buy a data center network but rather components that are put together to create one, SDS will follow a similar path. So what is SDS if it is not a storage system? SDS is the ability to manage storage assets to meet a Service Level Agreement (SLA), perhaps through an orchestration layer, such that storage resources can be pooled and expanded or reduced elastically. SDS is not about virtualizing storage; storage has been virtualized for years. Nor is it about building a single storage system; rather, SDS is about having many different storage systems working in harmony.
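To make the idea of SLA-driven pooling concrete, here is a minimal sketch of what an orchestration layer's placement decision might look like. All class names, fields, and pool figures are hypothetical illustrations, not a real SDS API.

```python
# Hypothetical sketch of SLA-driven volume placement across pooled,
# heterogeneous storage backends. Names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class StoragePool:
    name: str
    max_iops: int        # performance the pool can sustain
    free_gb: int         # remaining capacity
    cost_per_gb: float   # relative cost tier

def place_volume(pools, size_gb, min_iops):
    """Pick the cheapest pool that satisfies the volume's SLA."""
    candidates = [p for p in pools
                  if p.free_gb >= size_gb and p.max_iops >= min_iops]
    if not candidates:
        raise RuntimeError("no pool meets the SLA")
    best = min(candidates, key=lambda p: p.cost_per_gb)
    best.free_gb -= size_gb   # elastic pooling: capacity drawn down on demand
    return best.name

pools = [StoragePool("flash-tier", 50_000, 2_000, 1.00),
         StoragePool("sas-tier", 5_000, 20_000, 0.25),
         StoragePool("sata-bulk", 1_000, 100_000, 0.05)]

print(place_volume(pools, size_gb=500, min_iops=10_000))  # flash-tier
```

The point of the sketch is that the SLA, not the physical box, drives placement: a high-IOPS request lands on flash, while a bulk request would fall through to the cheapest tier.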
Partner, Integrate, Enable. Our focus has been on leading with technology capabilities and proof points, demonstrating the advantages of VT, PM, and workload consolidation. These benefits are well understood in DC/Cloud, with new opportunities in Secure Compute Pools and Data Analytics. We are now moving beyond technology proof points to commercial-viability pilots: proving out the business case and TCO benefits and aligning ecosystem partners for solution delivery. Key for scale will be successful trials and deployments, which requires a comprehensive GTM approach: incumbents willing to disrupt themselves and new entrants looking for growth. Our focus is to identify the key partners (TEM, OEM, ISV, SI) and enable them for scale. This requires us to continue to lead on integrated platform and solution readiness for quicker GTM. Key partners in DC/Cloud will continue to be OEMs, with opportunities for disruptors within DC Cloud and smaller CoSPs (e.g., Centurylink/Saavis). Key co-travelers on the network side will be Cisco, Ericsson, Huawei, and HP, with growing interest in NEC and Samsung offerings. We are using industry consortia and public proof points to drive awareness and preference for IA.
What are the potential TCO (and other) benefits with SDS?
So why are we talking so much about flash technology? Because flash drives represent a pivotal change in technology, and it's important we can all explain how significant this development is to the storage consumer.

Think about what has happened with drive performance over the last ten years; that is represented by the flat light-blue line. IOPS per spindle for rotating drives has remained essentially flat since the late 1990s, increasing only from 120 IOPS/spindle to today's 180 IOPS/spindle, and we don't anticipate any change in this trend in the near future. Meanwhile, areal densities continue to increase, giving rise to two classes of drives: 15K "performance" drives with limited capacity, and larger-capacity, slower-spinning SATA and NL-SAS drives that are excellent choices for bulk storage. If we stay on this track, storage systems really won't be able to satisfy the increasing demand for disk IOPS from servers. So how does a data center satisfy the ever-increasing number of IOPS required by modern servers? Good question; stay tuned.

Meanwhile, semiconductors continue to follow Moore's Law, becoming faster, denser, more energy efficient, and cheaper, all good characteristics. Over the last ten years we've seen a 100-fold improvement in semiconductor performance, and there is no end in sight to this trend. This is the performance curve you'd like to see storage on! (Build: flash drive appears.) Flash is a way for the storage industry to jump on the semiconductor bandwagon and leverage these dramatic improvements. If you tore the cover off a flash drive, it would look very much like a small array: a storage processor, cache, and some nonvolatile memory accessed via standard disk drive protocols. Flash drives offer several thousand IOPS per unit, with response times in the sub-millisecond range. EMC was one of the first to market with flash drives and has promoted this technology heavily.
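The gap described above is easy to put in numbers. A quick back-of-the-envelope calculation, using the ~180 IOPS/spindle figure from this slide and an assumed 5,000 IOPS per flash drive (a stand-in for "several thousand"), shows how many devices each technology needs to serve a hypothetical workload:

```python
# Back-of-the-envelope IOPS math. The 180 IOPS/spindle figure comes from
# the slide; 5,000 IOPS per flash drive is an assumed illustrative value.
HDD_IOPS = 180
FLASH_IOPS = 5_000

target_iops = 45_000        # hypothetical aggregate server demand

hdd_spindles = -(-target_iops // HDD_IOPS)      # ceiling division
flash_drives = -(-target_iops // FLASH_IOPS)

print(hdd_spindles)   # 250 rotating spindles
print(flash_drives)   # 9 flash drives
```

Serving the same demand takes hundreds of spindles but only a handful of flash drives, which is why flash changes the economics even at a per-gigabyte price premium.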
We've followed that introduction with heavy investment in features and testing that leverage this exciting new storage technology. Today flash drives remain at a price premium over regular disk drives, so it is critical to leverage a small amount of flash storage for maximum business advantage. For that, EMC has developed the FAST Suite. Let's take a look at how we can leverage flash technology.
Storyline: As we have highlighted, in order to keep data storage and management cost-effective, many new technologies are being deployed, including thin provisioning, storage tiering, and data-reduction technologies such as data de-duplication and compression. These sophisticated new capabilities are enabled by powerful compute.
Speaker notes: These usage models are critical to enabling dynamic data centers and to slowing CAPEX growth from skyrocketing data volumes by eliminating redundant data and making more efficient use of existing storage assets.
These are now INTELLIGENT STORAGE solutions based on standards and powered by high-volume Intel Xeon processors, delivering increased processing performance while driving power efficiency.
Common Terms:
Storage Virtualization: the process of completely abstracting logical storage from physical storage.
Data De-duplication: a specialized data-compression technique for eliminating coarse-grained redundant data, typically to improve storage utilization. In the de-duplication process, duplicate data is deleted, leaving only one copy of the data to be stored, along with references to that unique copy. De-duplication reduces the required storage capacity because only the unique data is stored.
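The mechanism in that definition, one stored copy plus references, can be sketched in a few lines. This is a toy fixed-block scheme, not any product's implementation; the 4-byte block size is chosen only to keep the example readable.

```python
# Minimal sketch of fixed-block de-duplication: each unique block is
# stored once, keyed by its hash; a file becomes a list of references.
import hashlib

BLOCK_SIZE = 4  # deliberately tiny so the duplicates are visible

def dedupe(data, store):
    """Split data into blocks, store unique blocks, return reference list."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # duplicate blocks are stored once
        refs.append(digest)
    return refs

store = {}
refs = dedupe(b"AAAABBBBAAAABBBB", store)
print(len(refs))    # 4 blocks referenced by the "file"
print(len(store))   # only 2 unique blocks physically stored
```

Reading the data back is just following the references (`b"".join(store[r] for r in refs)`), which is why dedup cuts capacity without losing any information.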
Thin Provisioning: a capability that applies to large-scale centralized computer disk storage systems, SANs, and storage virtualization systems. It allows capacity to be easily allocated to servers on a just-enough and just-in-time basis.
• SDS is solving a relevant problem
• SDS needs industry
• Our broad assets can align to our vision
• We have gaps to close
• This is the 1st step of many
• Please contribute to this