How Consistent Data Services
Deliver Simplicity, Compatibility,
And Lower Cost
A transcript of a discussion on the latest technologies and products delivering common data
services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: IBM Storage.
Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions and
you’re listening to BriefingsDirect. Part 2 in our Data Strategies Insights Discussion
Series explores the latest technologies and products that are delivering common data
services across today’s hybrid cloud, distributed data centers, and burgeoning edge
landscapes.
New advances in storage technologies, standards, and methods have changed the
game when it comes to overcoming the obstacles businesses too often face when
seeking pervasive analytics across their systems and services.
Stay with us now as we examine how IBM Storage is leveraging containers and the
latest storage advances to deliver inclusive, comprehensive, and actionable storage.
To learn more about the future of storage
strategies that accelerate digital transformation,
please join me in welcoming Denis Kennelly,
General Manager, IBM Storage. Welcome back,
Denis.
Denis Kennelly: Thank you, Dana. It’s great to
be here.
Gardner: In our earlier discussion we learned
about the business needs and IBM’s large-scale
vision for global, consistent data. Let’s now
delve beneath the covers into what enables this
new era of data-driven business transformation.
In our last discussion, we talked about
containers -- how they had been typically
relegated to application development. What
should businesses know about the value of containers more broadly within the storage
arena as well as across other elements of IT?
Containers enable ease, efficiency
Kennelly: Sometimes we talk about containers as being unique to application
development, but I think the real business value of containers is in the operational
simplicity and cost savings.
When you build applications on containers, they are container-aware. When you look at
Kubernetes and the controls you have there as an operations IT person, you can scale
up and scale down your applications seamlessly.
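The scale-up and scale-down control Kennelly describes follows a simple proportional rule. A minimal sketch of that rule in Python -- the same shape a Kubernetes Horizontal Pod Autoscaler applies, though the names and bounds here are illustrative, not IBM or Kubernetes code:

```python
import math

def desired_replicas(load, per_pod_capacity, min_replicas=1, max_replicas=10):
    """Return the replica count needed to serve `load`, clamped to bounds.

    Mirrors the proportional rule a Kubernetes Horizontal Pod Autoscaler
    uses: replicas = ceil(current_metric / target_per_pod).
    """
    needed = math.ceil(load / per_pod_capacity)
    return max(min_replicas, min(needed, max_replicas))

# Scale up under load, back down when demand drops.
print(desired_replicas(load=450, per_pod_capacity=100))   # 5
print(desired_replicas(load=40, per_pod_capacity=100))    # 1
print(desired_replicas(load=5000, per_pod_capacity=100))  # capped at 10
```

The same declarative pattern -- state the target, let the platform converge on it -- is what makes the behavior "seamless" for the operations team.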
As we think about that and about storage, we have to include storage under that
umbrella. Traditionally, storage did a lot of that work independently. Now we are
in a much more integrated environment where you have cloud-like behaviors. And you
want to deliver those cloud-like behaviors end-to-end -- be it for the applications, for the
data, for the storage, and even for the network -- right across the board. That way you
can have a much more seamless, easier, and operationally efficient way of running your
environment.
Containers are much more than just an
application development tool; they are a
key enabler to operational improvement
across the board.
Gardner: Because hybrid cloud and multi-cloud environments are essential for digital
business transformation, what does this container value bring to bridging the hybrid gap?
How do containers lead to a consistent and actionable environment, without integrations
and complexity thwarting wider use of assets around the globe?
Kennelly: Let’s talk about what a hybrid cloud is. To me, a hybrid cloud is the ability to
run workloads on a public cloud, on a private cloud or traditional data center, and even
right out to edge locations in your enterprise where there are no IT people whatsoever.
Being able to do that consistently across that environment -- that’s what containers
bring. They allow a layer of abstraction above the target environment, be it a bare-metal
server, a virtual machine (VM), or a cloud service -- and you can do that seamlessly
across those environments.
That’s what a hybrid cloud platform is, and containers are what enable it -- a seamless
runtime across this entire environment.
And that’s core to digital transformation, because when we start to think about where we
are today as an enterprise, we still have assets sitting on the data center. Typically, what
you see out there are horizontal business processes, such as human resources or sales,
and you might want to move those more to a software as a service (SaaS) capability
while still retaining your core, differentiating business processes.
For compliance or regulatory reasons, you may need to keep those assets in the data
center. Maybe you can move some pieces. But at the same time, you want to have the
level of efficiency you gain from cloud-like economics. You want to be able to respond to
business needs, to scale up and scale down the environment, and not design the
environment for a worst-case scenario.
That’s why a hybrid cloud platform is so critical. And underneath that, why containers are
a key enabler. Then, if you think about the data in storage, you want to seamlessly
integrate that into a hybrid environment as well.
Gardner: Of course, the hybrid cloud environment extends these days more broadly
with the connected edge included. For many organizations the edge increasingly allows
real-time analytics capabilities by taking advantage of having compute in so many more
environments and closer to so many more devices.
What is it about the IBM hybrid storage vision that allows for more data to reside at the
edge without having to move it into a cloud, analyze it there, and move it back? How are
containers enabling more data to stay local and still be part of a coordinated whole
greater than the sum of the parts?
Data and analytics at the edge
Kennelly: As an industry, we swing between centralized and decentralized -- a
pendulum movement every few years. If you think back, we were in the
mainframe, where everything was very centralized. Then we went to distributed systems
and decentralized everything.
With cloud we began to recentralize
everything again. And now we are moving
our clouds back out to the edge for a lot of
reasons, largely because of egress and
ingress challenges and to seek efficiency
in moving more and more of that data.
When I think about edge, I am not necessarily thinking about Internet of Things (IoT)
devices or sensors, but in a lot of cases this is about branch and remote locations.
That’s where a core part of the enterprise operates, but not necessarily with an IT team
there. And that part of the enterprise is generating data from what’s happening in that
facility, be it a manufacturing plant, a distribution center, or many others.
As you generate that data, you also want to generate the analytics that are key to
understanding how the business is reacting and responding. Do you want to move all
that data to a central cloud to run analytics, and then take the result back out to that
distribution center? You can do that, but it’s highly inefficient -- and very costly.
What our clients are asking for is to keep the data out at these locations and to run the
analytics locally. But, of course, with all of the analytics you still want to share some of
that data with a central cloud.
So what’s really important is that you can share across this entire environment, be it
from a central data center or a central cloud out to an edge location and provide what we
call seamless access across this environment.
With our technology, with things like IBM Spectrum Scale, you gain that seamless
access. We abstract the data access so the application reads data as if it were local --
even when it is actually back in the cloud. The application really doesn’t care. That
seamless access is core to what we are doing.
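The location-transparent access Kennelly describes can be illustrated with a small sketch. This is not the Spectrum Scale API -- the names here are invented -- it just shows the idea that the application calls one read path whether the data sits locally or in a remote tier:

```python
from abc import ABC, abstractmethod

class DataBackend(ABC):
    """Illustrative abstraction layer: the application calls read() the same
    way regardless of where the bytes physically live."""
    @abstractmethod
    def read(self, name: str) -> bytes: ...

class LocalBackend(DataBackend):
    def __init__(self, store: dict):
        self.store = store                  # data on local storage
    def read(self, name: str) -> bytes:
        return self.store[name]

class RemoteBackend(DataBackend):
    def __init__(self, fetch):
        self.fetch = fetch                  # stands in for a cloud fetch
    def read(self, name: str) -> bytes:
        return self.fetch(name)

def open_dataset(name: str, backend: DataBackend) -> bytes:
    # Application code is identical for edge-local and cloud-resident data.
    return backend.read(name)

local = LocalBackend({"plant-metrics.csv": b"ts,value\n"})
remote = RemoteBackend(lambda n: b"ts,value\n")
assert open_dataset("plant-metrics.csv", local) == open_dataset("plant-metrics.csv", remote)
```

The value of the abstraction is exactly this symmetry: the edge application never has to branch on where its data currently resides.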
Gardner: The IBM Storage portfolio is broad and venerable. It includes flash, disk, and
tape, which continues to have many viable use cases. So, let’s talk about the products
and how they extend the consistency and commonality that we have talked about and
how that portfolio then buttresses the larger hybrid storage vision.
Storage portfolio supports all environments
Kennelly: One of the key design points of our portfolio, particularly our flash line, is
being able to run in all environments. We have one software code base across our entire
portfolio. That code runs on our disk subsystems and disk controllers, but it can also run
on your platform of choice. So we absolutely support all platforms across the board. So
that’s one design principle.
Secondly, we embrace containers very heavily. And being able to run on containers and
provide data services across those containers provides that seamless access that I
talked about. That’s a second major design principle.
Yet as we look at our storage portfolio, we also
want to make sure we optimize the storage and
optimize the spend by the customer by tiering the
storage and being able to move data across the
different tiers of storage.
You mentioned tape storage. For example, at times you may want to move data from
fast, online, always-on, high-end storage down to a lower tier of less expensive storage
such as tape, maybe for data-retention reasons. You may also need an air-gap solution,
and so you’ll want to move data to what we call cold storage -- on tape. We support that
capability, and we can manage your data across that environment.
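The hot-to-cold movement described here amounts to a placement policy over the tiers. A toy sketch, with invented thresholds for illustration (real policies, such as Spectrum Scale ILM rules, are far richer):

```python
def place_tier(days_since_access: int, retention_hold: bool) -> str:
    """Assign a storage tier for a dataset.

    Thresholds are illustrative only -- a production policy would weigh
    throughput needs, cost per GB, and compliance rules.
    """
    if retention_hold or days_since_access > 365:
        return "tape"   # cold, air-gapped retention tier
    if days_since_access > 30:
        return "disk"   # warm capacity tier
    return "flash"      # hot, always-on tier

print(place_tier(2, retention_hold=False))    # flash
print(place_tier(90, retention_hold=False))   # disk
print(place_tier(10, retention_hold=True))    # tape
```

Running such a policy periodically, and migrating data between tiers accordingly, is what lets the spend track the workload rather than the worst case.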
There are three core design principles to our IBM Storage portfolio. Number one is we
can run seamlessly across these environments. Number two, we provide seamless
access to the data across those environments. And number three, we support
optimization of the storage for the use case needed, such as being able to tier the
storage to your economic and workload needs.
Gardner: Of course, what people are also interested in these days is the FlashSystem
performance. Tell us about some of the latest and greatest when it comes to your
FlashSystems. You have the new 5200, the high-end 9200, and those also complement
some of your other products like ESS 3200.
Flash provides preferred performance, price
Kennelly: Yes, we continue to expand the portfolio. With the FlashSystems, and some
of our recent launches, some things don’t change. We’re still able to run across these
different environments.
But in terms of price-performance, especially with the work we have done around our
flash technology, we have optimized our storage subsystems to use standard flash
technologies. In terms of price for throughput, measured against our competitors’
technology, we offer twice the performance for roughly half the price.
That’s due to leveraging our innovations around what we call the FlashCore Module,
wherein we are able to use standard flash in those disk drives and enable compression
on the fly. That’s driving the roadmap in terms of throughput and performance at a very,
very competitive price point.
Gardner: Many of our readers and listeners, Denis, are focused on their digital business
transformation. They might not be familiar with some of these underlying technological
advances, particularly end-to-end Non-Volatile Memory Express (NVMe). So why are
these systems doing things that just weren’t possible before?
Kennelly: A lot of it comes down to where the
technology is today and the price points that we
can get for flash from our vendors. And that’s
why we are optimizing our flash roadmap and
our flash drives within these systems. It’s really
pushing the envelope in terms of performance
and throughput across our flash platforms.
Gardner: The desired end-product for many organizations is better and pervasive
analytics. And one of the great things about artificial intelligence (AI) and machine
learning (ML) is it’s not only an output -- it’s a feature of the process of enhancing
storage and IT.
How are IT systems and storage using AI inside these devices and across these
solutions? What is AI bringing to enable better storage performance at a lower price
point?
Kennelly: We continue to optimize what we can do in our flash technology, as I said.
But when you embark on an AI project, something like 70 to 80 percent of the spend is
around discovery, gaining access to the data, and finding out where the data assets are.
And we have capabilities like IBM Spectrum Discover that help catalog and understand
where the data is and how to access that data. It’s a critical piece of our portfolio on that
journey to AI.
We also have integrations with AI services like Cloudera out of the box so that we can
seamlessly integrate with those platforms and help those platforms differentiate using
our Spectrum Scale technology.
But in terms of AI, we have some really key enablers to help accelerate AI projects
through discovery and integration with some of the big AI platforms.
Gardner: And these new storage platforms are knocking off some impressive numbers
around high availability and low latency. We are also seeing a great deal of consolidation
around storage arrays and managing storage as a single pool.
On the economics of the IBM FlashSystem approach, these performance attributes are
also being enhanced by reducing operational costs and moving from CapEx to OpEx
purchasing.
Storage-as-a-service delivers
Kennelly: Yes, there is no question we are moving toward an OpEx model. When I
talked about cloud economics and cloud-like flexibility behavior at a technology level,
that’s only one side of the equation.
On the business side, IT is demanding cloud
consumption models, OpEx-type models, and
pay-as-you-go. It’s not just a pure financial
equation, it's also how you consume the
technology. And storage is no different. This is
why we are doing a lot of innovation around
storage-as-a-service. But what does that really mean?
It means you ask for a service. “I need a certain type of storage with this type of
availability, this type of performance, and this type of throughput.” Then we as a storage
vendor take care of all the details behind that. We get the actual devices on the floor that
meet those requirements and manage that.
As those assets depreciate over a number of years, we replace and update those assets
in a seamless manner to the client.
As the storage sits in the data center, maybe the customer says, “I want to move some
of that data to a cloud instance.” We also offer a seamless capability to move the data
over to the cloud and run that service on the cloud.
We already have all the technology to do that and the platform support for all of those
environments. What we are working on now is making sure we have a seamless
consumption model and the business processes of delivering that storage-as-a-service,
and how to replace and upgrade that storage over time -- while making it all seamless to
the client.
I see storage moving quickly to this new storage consumption model, a pure OpEx
model. That’s where we as an industry will go over the next few years.
Gardner: Another big element of reducing your total cost of ownership over time is in
how well systems can be managed. When you have a common pool approach, a
comprehensive portfolio approach, you also gain visibility, a single pane of glass when it
comes to managing these systems.
Gain intelligent insights into storage, seamlessly
Kennelly: That’s an area we continue to invest in heavily. Our IBM Storage Insights
platform provides tremendous insights in how the storage subsystems are running
operationally. It also provides insights within the storage in terms of where you have
space constraints or where you may need to expand.
But that’s not just a manual dashboard that we present to an operator. We are also
infusing AI quite heavily into that platform and using AIOps to integrate with Storage
Insights to run storage operations at much lower costs and with more automation.
And we can do that in a consistent manner
right across the environments, whether it’s a
flash storage array, mainframe attached, or a
tape device. It’s all seamless across the
environment. You can see those tiers and
storage as one platform and so are able to
respond quickly to events and understand
events as they are happening.
Gardner: As we close out, Denis, for many organizations hybrid cloud means that they
don’t always know what’s coming and lack control over predicting their IT requirements.
Deciding in advance how things get deployed isn’t always an option.
How do the IBM FlashSystems, and your recent announcements in February 2021,
provide a path to a crawl-walk-run adoption approach? How do people begin this journey
regardless of the type of organization and the size of the organization?
Kennelly: We are introducing an update to our FlashSystem 5200 platform, which is our
entry point platform. Now, that consistent software platform runs our storage software,
IBM Spectrum Virtualize. It’s the same software as in our high-end arrays at the very top
of our pyramid of capabilities.
As part of that announcement, we are also supporting other public cloud vendors. So
you can run the software on our arrays, or you can move it out to run on a public cloud.
You have tremendous flexibility and choice due to the consistent software platform.
And, as I said, it’s our entry point so the price is
very, very competitive. This is a part of the
market where we see tremendous growth. You
can experience the best of the IBM Storage
platform at a low-cost entry point, but also get
the tremendous flexibility. You can scale up
that environment within your data center and right out to your choice of how to use the
same capabilities across the hybrid cloud.
There has been tremendous innovation by the IBM team to make sure that our software
supports this myriad of platforms, but also at a price point that is the sweet spot of what
customers are asking for now.
Gardner: It strikes me that we are on the vanguard of some major new advances in
storage, but they are not just relegated to the largest enterprises. Even the smallest
enterprises can take advantage and exploit these great technologies and storage
benefits.
Kennelly: Absolutely. When we look at the storage market, the fastest growing part is at
that lower price point -- below $50K to $100K in unit cost. That’s where we see
tremendous growth in the market and we are serving it very well and very efficiently with
our platforms. And, of course, as people want to scale and grow, they can do that in a
consistent and predictable manner.
Gardner: I’m afraid we will have to leave it there. You have been listening to a
sponsored BriefingsDirect discussion on how the latest storage technologies and
products are delivering a common data services benefit across today’s hybrid cloud,
distributed data centers, and burgeoning edge landscapes. And we have delved beneath
the covers to learn about the latest technologies enabling IBM’s vision for the future of
data storage.
So please join me now in thanking our guest, Denis Kennelly, General Manager, IBM
Storage. Thank you so much, Denis.
Kennelly: Thank you, Dana.
Gardner: Please join me and Denis again soon for our next discussion in this three-part
series as we explore the virtues of global access capabilities and the importance of
consistent data -- no matter where it’s stored.
Thank you to our audience as well for joining this BriefingsDirect data strategies insights
discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host
throughout this series of IBM Storage-sponsored BriefingsDirect discussions.
Thanks again for listening. Please pass this along to your IT community, and do come
back next time.
Copyright Interarbor Solutions, LLC, 2005-2021. All rights reserved.
You may also be interested in:
• IBM Storage Made Simple for All: Latest announcements
• Executive Q&A: Denis Kennelly, GM of IBM Storage
• How Agile Enterprise Architecture Builds Agile Business Advantage
• Smart Buyer’s Guide to Flash
• In the New Cloud Era, IBM Focuses on Storage
• The Open Group Digital Practitioner Effort Eases the People Path to Digital Business
Transformation
• The 28 Best Enterprise Data Storage Companies for 2021
• Top Data Storage Vendors for 2021
• Services, Cloud Dominated Data Storage News in 2020
• The 10 Hottest SSD And Flash Storage Products Of 2020