Why Enterprises Need to Move to a New Storage Paradigm



This IDG interview article features EMC ViPR Data Services CTO Mark O'Connell discussing the ever-changing storage landscape, the rise of object storage, and EMC's strategic direction with software-defined storage.


EXECUTIVE VIEWPOINT (ADVERTORIAL)

Mark O'Connell, ViPR Data Services CTO, Advanced Software Division, EMC Corporation

Mark O'Connell is the ViPR Data Services CTO within the Advanced Software Division at EMC Corporation. With more than 37,000 employees worldwide, EMC is the world leader in products, services, and solutions for information management. Among the products Mark covers are ViPR, a software-defined storage platform with support for heterogeneous storage and APIs; EMC Atmos, the industry's leading multi-petabyte information management solution; and EMC Centera, the industry's leading compliance and archiving offering.

Learn more about ViPR: www.emc.com/vipr
Follow us on Twitter: https://twitter.com/EMCITMgmt

Traditional block and file storage cannot scale to today's volume of data or to the increased demand for access to it via the Internet and mobile devices. A CIO survey of IT leaders about Big Data shows that, on average, respondents expect the amount of data they manage to grow by nearly 50 percent within the next 12 to 18 months, from 193 terabytes to 285 terabytes.* Meanwhile, given the ubiquity of smartphones and tablets, it is no surprise that an InfoWorld survey, Navigating IT: Objectives and Obstacles, shows that more than one-third of organizations are investing or increasing investments in mobile application development. Mark O'Connell, ViPR Data Services CTO for EMC's Advanced Software Division, discusses how IT leaders can meet these challenges with object-based storage, and at the same time leverage the technology to many advantageous ends, including becoming a more strategic consulting partner to the business.

* Source: Big Data Study, CIO, July 2012

What key trends are causing the enterprise to rethink storage, increasingly in favor of object storage?
The industry is moving to a different storage paradigm for two main reasons: the sheer scale of digital information today, and the increasing demand for online access to that information from anywhere, at any time. That greater scale of information, and the change in how users access it, doesn't fit the traditional paradigms of file-system or block storage. Object storage gives you a greater degree of scalability without the management overhead that traditional block and file systems require. You can see an early example in EMC Centera. It was one of the first true object storage platforms, combining multiple disk drives and presenting them as a single object store of up to a hundred terabytes, all easily accessible and manageable by an application that didn't have to worry about space management or any of those elements. Object storage today also satisfies breadth of access with new protocols that are Internet-friendly and mobile device-friendly. Proprietary APIs in early devices have given way to accessing data in object stores over the Internet via simple REST protocols.

What are some of the most important evolutions in object storage positively impacting IT operations?

One is that companies like Amazon with S3, Microsoft with Azure, and EMC with Atmos came out with REST-based access to large, scalable object storage in the cloud that allowed full updates to the information and greater multi-tenancy. So you could segment large, scalable object stores into separate administrative domains such that individual customers can access and have management privileges for just their own data. That means you could use more of a service-provider model, internally in the enterprise or externally as Amazon and the others do. And with the multi-tenancy capabilities, not only is it very easy for end users to request storage and get it immediately, but it is also easy for IT to enforce a consistent set of standards for how that storage is used.
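The flat, multi-tenant namespace described above can be sketched in a few lines. This is an illustrative in-memory toy, not the S3, Atmos, or ViPR API: the class and method names are invented, and each method is only analogous to the corresponding REST operation (PUT, GET, and a bucket listing).

```python
# Illustrative sketch of a multi-tenant object store: a flat namespace of
# (tenant, bucket, key) -> bytes, with no directory tree or space management
# visible to the application. All names here are invented for the example.

class ObjectStore:
    def __init__(self):
        self._data = {}

    def put(self, tenant, bucket, key, value):
        # Analogous to HTTP PUT /<bucket>/<key>: create or fully replace.
        self._data[(tenant, bucket, key)] = value

    def get(self, tenant, bucket, key):
        # Analogous to HTTP GET /<bucket>/<key>.
        return self._data[(tenant, bucket, key)]

    def list_keys(self, tenant, bucket):
        # Each tenant sees only its own administrative domain.
        return sorted(k for (t, b, k) in self._data
                      if t == tenant and b == bucket)


store = ObjectStore()
store.put("acme", "invoices", "2013/05/001.pdf", b"...")
# The same key under another tenant is fully isolated:
store.put("globex", "invoices", "2013/05/001.pdf", b"...")
print(store.list_keys("acme", "invoices"))  # ['2013/05/001.pdf']
```

The key design point is that the namespace is flat: "buckets" and "tenants" are just prefixes in a key, which is what lets such systems scale without the hierarchy bookkeeping a file system requires.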
Another is that in traditional file-system and block-based devices, once data is placed in a location, it is hard or risky to move it, so IT personnel spend a lot of time matching storage characteristics to applications and carefully planning migrations. With object storage, a lot of those concerns are increasingly taken on by the storage environment itself. An example: with EMC's new ViPR software-defined storage platform, which supports the Amazon S3 and EMC Atmos REST-based APIs, when an application needs data it simply requests a new bucket of storage, and that storage is provisioned immediately. The storage array underneath can figure out where the data should be located; if the application has different needs over time, the storage array can be made aware of that via a policy change for that data. Object storage can move data around automatically and seamlessly, independent of and transparent to the application.

When IT doesn't have to worry about routine provisioning tasks or data mobility, they can focus more on longer-term data center needs. They can think about application growth, mobility, and access requirements, and whether there should be one copy of the data in a distributed fashion or multiple copies for speed of access. That makes IT a higher-value service, and more of a consultant to application developers and application administrators on how best to deploy an application and how best to satisfy end users.

What cost savings should emerge from the move to object storage?

Greater automation of routine IT tasks leads to cost savings: you don't need as many IT people to manage a much larger-scale object storage system. There are also savings in the greater flexibility around the geographic location of data and how it moves there, and in the much lower management overhead associated with that. In a traditional block and file world, when a network connection breaks you have to resynchronize the entire data set.
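The on-demand provisioning and policy-driven placement O'Connell describes can be sketched as follows. This is a hypothetical simulation, not ViPR's implementation: the tier names, class, and methods are invented. The point it illustrates is that the application keeps reading and writing by bucket and key while the storage layer decides, and can later change, where the bytes live.

```python
# Hypothetical sketch of policy-driven placement: the application requests a
# bucket and gets it immediately; a policy, not the application, decides
# which tier holds the data, and a policy change migrates it transparently.

class PolicyStore:
    def __init__(self):
        self._tiers = {"fast": {}, "capacity": {}}   # invented tier names
        self._policy = {}                            # bucket -> tier name

    def create_bucket(self, bucket, policy="capacity"):
        # Provisioned immediately; no placement planning by the application.
        self._policy[bucket] = policy

    def put(self, bucket, key, value):
        self._tiers[self._policy[bucket]][(bucket, key)] = value

    def get(self, bucket, key):
        # The application never knows which tier serves the read.
        for tier in self._tiers.values():
            if (bucket, key) in tier:
                return tier[(bucket, key)]
        raise KeyError((bucket, key))

    def set_policy(self, bucket, new_policy):
        # Policy change: migrate the bucket's objects behind the scenes.
        old = self._policy[bucket]
        self._policy[bucket] = new_policy
        for bk in [bk for bk in self._tiers[old] if bk[0] == bucket]:
            self._tiers[new_policy][bk] = self._tiers[old].pop(bk)


s = PolicyStore()
s.create_bucket("logs")            # lands on the default tier
s.put("logs", "day1", b"x")
s.set_policy("logs", "fast")       # data moves; the application's key still works
print(s.get("logs", "day1"))
```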
With object storage, you resynchronize only the changes that occurred since the outage. Additionally, object storage drives are cheaper on a dollar-per-gigabyte basis. You pay about the same for a very fast enterprise-class drive in a block or file system as for a drive in an object storage system, but object storage systems can use much larger drives and hold 10 to 15 times as much data. And by scaling out the number of processing heads available to object storage, you can scale performance to pretty much any level you need within that object storage system.

How can you assure that existing storage investments and data are protected as object storage comes online, and that future investments in object storage are sound ones, too?

We can't forget that people have large infrastructures in place, with applications and storage that isn't going away. So IT shops, and the business overall, should use a partner who can work across generational changes: one who can help them maintain their existing footprint while helping them transition to the new world. That partner should help them understand which workloads are appropriate to move to an object storage environment and which to leave in their existing locations, because there is no cost benefit to moving and reworking them. Moving forward with object storage, enterprises need to drive toward standard technology, such as REST-based systems and S3 interfaces. Then, whatever implementation a customer has today, it won't be tied to that implementation; it will have a choice among different providers that have all coded to those standard interfaces.

How does object storage add up to a benefit for the business?

For one thing, application developers will be able to focus on the core business needs of an application because they won't have to take on storage management.
In an object storage platform, those concerns are automated and internalized inside the system, so the developer can focus purely on laying out and organizing data in a way that makes sense for the business needs of the application, rather than matching it to the underlying storage system and its quirks. It is also going to be easier to have analytics capabilities built into the object storage platform, so developers can structure application data not only to respond to the business needs of today, but to drive additional benefits out of that same data.
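The change-only resynchronization described earlier, resyncing only what changed since the outage rather than the entire data set, can be sketched as below. This is an illustrative model, not any vendor's replication protocol: the per-object version counter and function name are assumptions for the example.

```python
# Sketch of change-only resynchronization after a broken replication link.
# Each object carries a version counter; after an outage the replica pulls
# only objects newer than the last version it acknowledged, instead of
# re-copying the whole data set. All names here are illustrative.

def delta_resync(primary, replica, last_acked_version):
    """primary/replica: dict of key -> (version, value). Returns keys shipped."""
    shipped = []
    for key, (version, value) in primary.items():
        if version > last_acked_version:
            replica[key] = (version, value)   # ship only the changes
            shipped.append(key)
    return sorted(shipped)


primary = {"a": (1, b"old"), "b": (5, b"new"), "c": (7, b"newer")}
replica = {"a": (1, b"old")}
print(delta_resync(primary, replica, last_acked_version=4))  # ['b', 'c']
```

The contrast with the traditional case is that a block or file mirror typically cannot tell which regions changed, so it re-copies everything; per-object versions make the delta cheap to compute.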