Transcript of "Smart Grid for Data Centers Promises Major Gains in Infrastructure Productivity"
Transcript of a BrieﬁngsDirect podcast on implementing energy efﬁciency in the data center to
gain new capacity.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor:
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re
listening to BrieﬁngsDirect.
Today, we present a sponsored podcast discussion on gaining control over energy use and misuse
in enterprise data centers. More often than not, very little energy capacity analysis
and planning is being done on data centers that are ﬁve years old or older. Even
newer data centers don’t always gather and analyze the available energy data
being created amid all of the components.
Nowadays, smarter, more comprehensive energy planning tools and processes are
being directed at this problem. It's a lifecycle approach that takes data centers all the way to full automation benefits. Automation software for capacity planning and monitoring has been
designed and improved to best match long-term energy needs and resources in ways that cut total
cost, while gaining the truly available capacity from old and new data centers.
These so-called smart grid solutions jointly cut data center energy costs, reduce carbon
emissions, and can dramatically free up capacity from overburdened or inefﬁcient infrastructure.
Such data gathering, analysis and planning can break the inefﬁciency cycle that plagues many
data centers, where hotspots can mismatch cooling needs, and underused, unneeded
servers are burning up energy needlessly. Done well, such solutions as Hewlett Packard's (HP)
Smart Grid for Data Center can increase capacity by 30-50 percent just by gaining control over
energy use and misuse.
We're here today with two executives from HP to delve more deeply into the notion of Smart
Grid for Data Center. Please join me in welcoming Doug Oathout, Vice President of Green IT, Enterprise Servers and Storage at HP. Welcome, Doug.
Doug Oathout: Thank you, Dana, and welcome.
Gardner: We're also here with John Bennett, Worldwide Director of Data Center Transformation
Solutions at HP. Welcome back to the show, John.
John Bennett: Thank you very much, Dana. Glad to be here.
Gardner: John, let me start with you, if you don’t mind. Let’s set up a little bit of the context for
this whole energy lifecycle approach. It’s not isolated. It’s part of a larger set of trends that we
loosely call data center transformation (DCT). What's going on with DCT, and how important a role do these energy conservation approaches play?
Bennett: DCT, as we've discussed before, Dana, is focused on three core concepts, and energy is a key focus throughout that work. But the drivers behind data center
transformation are customers who are trying to reduce their overall IT spending,
either ﬂowing it to the bottom-line or, in most cases, trying to shift that spending
away from management and maintenance and onto business projects, business
priorities, and innovation in support of the business and business growth.
We also see increasing mandates to improve sustainability. It might be expressed
as energy efficiency, handling energy costs more effectively, or addressing green IT. The issue that customers have in executing on this, of course, is that their facilities, their people, their infrastructure, and applications -- everything they are spending and doing today, if they don't change it -- can get in the way of realizing these objectives.
Data center strategy
So, DCT is really about helping customers build out a data center strategy and an infrastructure strategy that is aligned to their business plans, goals, and objectives. That infrastructure might be a traditional shared-infrastructure model. It might be a fabric infrastructure model, of which HP's converged infrastructure is probably the best and most complete example in the marketplace today. And it may indeed be moving to private cloud or, as I believe, some combination of the above for a lot of customers.
The secret is doing so through an integrated roadmap of data-center projects, like consolidation,
business continuity, energy, and such technology initiatives as virtualization and automation.
Energy has deﬁnitely been a major issue for data-center customers over the past several years.
The increased computing capability and demand has increased the power needed in the data
center. Many data centers today weren’t designed for modern energy consumption requirements.
Even data centers that were designed five years ago are running out of power, as they move
to these dense infrastructures. Of course, older facilities are even further challenged. So,
customers can address energy by looking at their facilities.
More recently, we in the industry have been focused on the infrastructure and layout of the data
center. Increasingly, we're ﬁnding that we need to look at management -- managing the
infrastructure and managing the facilities in order to address the energy cost issues and the
increasing role of regulation and to manage energy related risk in the data center.
That brings us not only to energy as a key initiative in DCT, but on Smart Grid for Data Center
as a key way of managing it effectively and dynamically.
Gardner: You know, John, it’s interesting. When I hear you describe this, it often sounds as if
you are describing security. I know that sounds odd, but security has some of the same
characteristics. You can't look at it individually. It needs to be taken in as a comprehensive view: there are risks associated with it, and it comes down to management. Maybe we can learn from the
way in which people approach security. Perhaps, they should also be thinking along similar lines
when they approach energy as a problem.
Bennett: That’s an interesting analogy, and the point I would add to that, Dana, is that the best
security is built-in, not layered on. I think the best control of energy is probably better described
as built-in and not layered on.
Gardner: Let's go to Doug. Doug, tell me what the problem is out there. What are folks facing
and how inefﬁcient are they really? What kind of inefﬁciency is common now?
Oathout: Dana, what we're really talking about is a problem around energy capacity in a data
center. Most IT professionals or IT managers never see an energy bill from the utility. It's usually
handled by facilities. They never really concentrate on solving the energy consumption problem.
Where problems have arisen in the past is when a facility person says that they can’t deploy the
next server or storage unit, because they're out of capacity to build that new
infrastructure to support a line of business. They have to build a new data center.
What we're seeing now is customers starting to peel the onion back a little bit,
trying to find out where the energy is going, so they can increase the life of their data centers.
To date, very few clients have deployed comprehensive software strategies or
facility strategies to corral this energy consumption problem. Customers are
turning their focus to how much energy is being absorbed by what and then, how they get the capacity of the data center increased so they can support the new workloads.
The way to do that is to get the envelope cleared up so we know how much is left. What we're
seeing today is that software, hardware, and people need to come together in a process that John
described in DCT, an energy audit, or energy management.
All those things need to come together, so that customers can now start taking apart their data
center, from an analysis perspective, to ﬁnd out where they are either over provisioned or under
provisioned, from a capacity standpoint, so they know where all the energy is going. Then, they can take some steps to get more capability out of their current solution or out of their installed equipment by measuring and monitoring the whole environment.
Gardner: John, we’ve already done a podcast on converged infrastructure, and I don’t want to
belabor that point too much, but it strikes me that going about this data center energy exercise in
alignment with a converged-infrastructure approach would make a lot of sense. We're starting to
see commonality in ways we hadn’t seen before.
Bennett: There’s very strong commonality there, and I’ll ask Doug to address that in a minute.
When I described the best energy solution as being built-in, that really captured the essence of
what we're doing with converged infrastructure. It’s not only integrating the elements of the data
center, but better instrumenting them from a management and automation perspective. One of the
key drivers for making management and automation decisions about provisioning and workload
locations will be energy cost and consumption. Doug?
Oathout: Converged infrastructure is really about deploying IT in the optimal way to support a
workload. We talk about energy and energy management. You're talking about
doing the same thing. You want to deploy that workload to a server and storage
networking environment that will do the amount of work you need with the least
amount of energy.
The concept of converged infrastructure applies to data-center energy
management. You can deploy a particular workload onto an IT infrastructure that is optimally
designed to run efﬁciently and optimally designed to continually run in an efﬁcient way, so that
you know you're getting the most productive work from the least energy, with the most energy-efficient equipment and infrastructure sitting underneath it.
An example of this is identifying what type of application you want to run on your infrastructure
and then deploying the right amount of resources to run that application. You're not deploying
more and not deploying less, but deploying the optimal amount of resources, so that you know you are getting the best productivity for the energy budget that you have.
As that workload grows over time, you have the capability built into the software and into the
monitoring, so that you can add more resources to that pool to run that application. You're not
over-provisioning from the start and you're not under-provisioning, but you're getting the optimal
settings over time. That's what's really important for energy, as well as efﬁciency, as well as
operating within a data center environment.
You want to keep it optimal over time. You don’t want to set up silos to start. You don’t want to
set up over-provisioning to start. You want to be able to optimally run your infrastructure long
term. Therefore, you must have tools, software, and hardware that are not only efficient, but can be
optimized and run in an optimized way over a long period of time.
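The grow-and-shrink behavior Doug describes can be sketched as a trivial control loop. This is an illustrative sketch only; the function name, thresholds, and per-server capacity units are made up for the example, not taken from any HP product:

```python
# Grow or shrink a resource pool to keep utilization in a target band,
# instead of over-provisioning up front. All numbers are illustrative.

def resize_pool(servers, demand_units, per_server=10, low=0.5, high=0.8):
    """Return a server count that keeps utilization roughly in [low, high]."""
    # Add servers while demand exceeds the high-water mark.
    while demand_units > servers * per_server * high:
        servers += 1
    # Release servers while a smaller pool would still sit above the low mark.
    while servers > 1 and demand_units < (servers - 1) * per_server * low:
        servers -= 1
    return servers

print(resize_pool(servers=4, demand_units=50))   # grows the pool -> 7
print(resize_pool(servers=10, demand_units=20))  # shrinks the pool -> 5
```

The point of the sketch is the shape of the policy, not the numbers: capacity follows demand in both directions, so the pool is never sized for its worst day forever.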
Gardner: Another trend in the data center nowadays is moving towards shared-services
approaches, viewing yourself as a service provider, and billing based on these workloads and on
the actual demand. It seems to me that energy needs to ﬁt into that as well. Perhaps, as we think
about private cloud, where we’ve got elasticity of resources, energy needs to be elastic, along
with the workload allocation. So, quickly, John, what about the notion of shared services and
how energy plays into that as well as this private cloud business?
Bennett: It deﬁnitely plays, as both you and Doug have highlighted. As one moves into a private
cloud model, it accentuates the need to have a better real-time perspective of energy consumption
and what devices consume and are capable of, in order to manage the assets of the private cloud
efficiently and effectively. Whether or not you have a private cloud providing a broader set of services, you clearly want to minimize your own cost structures. That calls for good energy management as well as other items. Doug?
Oathout: Yeah. The way a converged infrastructure would support a private cloud implementation is that you want to bring the amount of resources you need for an application online, but you also want to have the resources available to run a separate set of applications and bring those online as well.
You're managing a group of resources as a pool, so that over time you can manage up resources
to run a particular application and then manage them down and put the resources back into the pool,
so they can be deployed for another application.
The living and breathing of a data center is really what we're talking about with a private-cloud
infrastructure on a converged infrastructure. That living and breathing capability is built within
the processes and within the infrastructure, so that you can run applications in an optimal way.
Gardner: It's my understanding that some of the public-cloud providers nowadays have built
their infrastructure with conservation in mind, because every penny counts when you're in a
lower-margin, shared-services business. They can track every watt. They
know where it's all going. They’ve built for that.
Now, what about some of these older organizations, ﬁve years plus? What can be done to retroﬁt
what's out there to be more energy efﬁcient? How does this work towards the older sets?
Oathout: The key to that, Dana, is to understand where the power is going. One of the ﬁrst
things we recommend to a client is to look at how much power is being brought into a data
center and then where is it going. You can easily do that through a facility survey or a facility
workshop, but the other thing you want to look at is your IT. As you’re upgrading your IT, all the
new IT equipment -- whether it be servers or storage or networking -- has power management
built into it and has reporting built into it.
What you want to do is start collecting that information through software to ﬁnd out how much
power is being absorbed by the different pieces of IT equipment and associate that with the
workloads that are running on them. Then, you have a better view of what you're doing and how
much energy you're using.
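The bookkeeping described here, rolling per-device power readings up to the workloads running on them, can be sketched in a few lines. The device names, wattages, and workload mapping below are hypothetical; real readings would come from the equipment's built-in reporting:

```python
# Attribute per-device power readings (watts) to the workloads running
# on each device. Names and readings are illustrative, not from any
# real monitoring tool.

def power_by_workload(readings, placement):
    """readings: {device: watts}; placement: {device: workload name}."""
    totals = {}
    for device, watts in readings.items():
        workload = placement.get(device, "unassigned")
        totals[workload] = totals.get(workload, 0) + watts
    return totals

readings = {"srv-01": 450, "srv-02": 380, "san-01": 900}
placement = {"srv-01": "erp", "srv-02": "erp", "san-01": "shared-storage"}
print(power_by_workload(readings, placement))
# {'erp': 830, 'shared-storage': 900}
```

Once energy is expressed per workload rather than per circuit, the analysis Doug describes next, matching workloads to better-suited platforms, becomes a comparison of numbers instead of guesswork.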
Then, you can do some analysis and use some applications like HP SiteScope to do some
performance analysis, to say, "Could I match that workload to some other platform in the
infrastructure or am I running it in optimal way?"
Over time, what you can do is you can migrate some of your older legacy workloads to more
efﬁcient newer IT equipment, and therefore you are basically building up a buffer in your data
center, so that you can then go deploy new workloads in that same data center.
It's really using a process or an assessment to ﬁgure out how much energy you're using and
where it's going and then deploying to this newer equipment with all the instrumentation built in,
along with software to understand where your energy is going.
It's the way to get started but it's also the way to keep yourself in an automated way or keep
yourself optimizing over time. You use that software to your beneﬁt, so that you're freeing up
capacity, so that you can support the new workload that the businesses need.
Bennett: That's really key, Doug, as a concept, because the more you do at this infrastructure
level, the less you need to change the facilities themselves. Of course, the issue with facilities-related work is that it can affect quality of service, cause outages, and may end up costing you a pretty penny if you have to retrofit or design a new data center.
Gardner: As I understand it now, we're talking about an initial payback, which would be
identifying waste, hotspots, and right cooling approaches, getting some added capacity as a
result, while perhaps also cutting cost. But, over time, there's a separate distinct payback, which
is that you can control your operational costs and keep them at a lower percentage of your total
cost of IT spend. Does that sound about right?
Oathout: That is right, Dana. You can actually decrease the slope of the energy curve. The
energy curve today is growing at about 11 percent annually -- that's the rate at which IT spending on energy in the data center is rising.
Over time, if you implement more efﬁcient IT, you can actually decrease that slope to something
much less than 11 percent growth. Also, as you increase the capacity in your data center within the same power envelope, you get a much more efficient infrastructure, so you're effectively running that additional IT equipment on free energy, because you've freed up that energy from something else.
The idea of decreasing the slope or decreasing your budget is the start, but long term you're
going to get more workload for the same budget. You can say the same thing for the IT management budget as well. What you're trying to do is get more efficiency out of your IT and out of your energy budget to support future workloads.
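To put rough numbers on what flattening that slope is worth, here is a small compound-growth sketch. The 11 percent figure comes from the discussion; the 5 percent alternative and the $1M starting budget are illustrative assumptions:

```python
# Compare an energy budget compounding at 11% annually against one
# held to 5%, from an illustrative $1M starting spend.

def grown(budget, rate, years):
    """Budget after compounding at `rate` for `years` years."""
    return budget * (1 + rate) ** years

start = 1_000_000
for years in (3, 5):
    at_11 = grown(start, 0.11, years)
    at_5 = grown(start, 0.05, years)
    print(f"year {years}: 11% -> ${at_11:,.0f}, 5% -> ${at_5:,.0f}, "
          f"gap ${at_11 - at_5:,.0f}")
```

Even over five years the gap between the two slopes is a meaningful fraction of the starting budget, which is the "more workload for the same budget" effect described above.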
Gardner: And, the insight that you gain from implementing these sensors and tracking and
automation, the ability to employ capacity-planning software, can bring out some hard numbers
that allow you to be more predictable in understanding what your energy requirements will be,
regardless of whether you are growing, staying the same, or even if you need to downsize your operations.
Those numbers, that visibility, can be applied to other asset allocations and
important decisions in the enterprise around such things as perhaps carbon taxes and caps, as
well as facilities, and even thinking about alternative energy sources.
Oathout: There are a lot of different ways to use green IT. We’ve seen customers implement a
consolidation of infrastructure. They took a number of servers, and the facilities associated with that server and storage environment, and minimized them down to a level that was very usable.
It gave the same service-level agreement (SLA) to their lines of businesses and they received
energy credits from governments. They could then use those energy credits for monetary reasons
or for conservation reasons. We also see customers, as they do these environmental changes or
policies, look for ways that they can better demonstrate to their clients that they are being energy
aware or energy efﬁcient.
A lot of our clients use consolidation studies or energy efﬁciency studies as ways to show their
clients that they are doing a very good job in their infrastructure and supporting them with the
least possible environmental impact.
We see customers getting certiﬁcates, but also using energy consumption reductions as a way to
show their clients that they’re being green or being environmentally friendly, just the same as
you'd see a customer looking at a transportation company and how energy efﬁcient they are in
transporting goods. We see a lot of clients using energy efﬁciency in multiple ways.
Gardner: We've talked about Smart Grid for Data Centers several times. Now, let's drill down
and describe exactly what it is. What are we talking about? What is HP offering in this category?
Oathout: Smart Grid for Data Center gives a CIO or a data-center manager a blueprint to
manage the energy being consumed within their infrastructure. The ﬁrst thing that we do with a
Data Center Smart Grid is map out what is hooked up to electricity in the data center, everything
from PDUs, UPSs, and air handlers to the IT equipment -- servers, networking, and storage. It's
really understanding how that all works together and how the whole topology comes together.
The second thing we do is visualize all the data. It's very hard to say that this server, that server,
or that piece of facilities equipment uses this much power and has this kind of capacity. You
really need to see the holistic picture, so you know where the energy is being used and
understand where the issues are within a data center.
It's really about visualizing that data, so you can take action on it. Then, it's about setting up
policies and automating those procedures to reduce the energy consumption or to manage energy
consumption that you have in the data center.
Today, our servers and our storage are much more efﬁcient than the ones we had three or four
years ago, but we also add the capability to power cap a lot of the IT equipment. Not only can
you get an analysis that says, "Here is how much energy is being consumed," you can actually
set caps on the IT equipment that says you can’t use more than this. Not only can you monitor
and manage your power envelope, you can actually get a very predictable one by capping
everything in your data center.
You know exactly how much the max power is going to be for all that equipment. Therefore,
you can do much better planning. You get much more efﬁciency out of your data center, and you
get more predictable results, which is one of the things that IT really strives for, from an SLA to
getting those predictable results, day in and day out.
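The planning arithmetic that capping enables can be sketched simply: once every device has a cap, the worst-case draw is just the sum of the caps. The cap values and rack envelope below are made-up numbers, not output from any HP tool:

```python
# With per-device power caps, the worst-case draw of a rack is the sum
# of the caps, so admission decisions become simple arithmetic.
# All wattages are illustrative.

def can_add(existing_caps, new_cap, envelope_watts):
    """True if a device capped at new_cap fits under the rack envelope."""
    return sum(existing_caps) + new_cap <= envelope_watts

rack_envelope = 8_000                    # watts available to the rack
caps = [1_200, 1_200, 900, 900, 900]     # caps on devices already racked

print(can_add(caps, 1_200, rack_envelope))  # True: 5,100 + 1,200 <= 8,000
print(can_add(caps, 3_500, rack_envelope))  # False: 8,600 > 8,000
```

Without caps, the same decision has to be made against nameplate ratings or measured peaks, both of which force the conservative over-provisioning the discussion warns against.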
So, really Data Center Smart Grid for the infrastructure is about mapping the infrastructure. It's
about visualizing it to make decisions. Then, it's about automating and capping what you’ve got,
so you have more predictable results and you're managing it, so that you're not having outages, you're not having problems in your data centers, and you're meeting your SLAs.
Gardner: John, I'm going to grasp for another analogy here. It sounds like, once again, we're up
against governance. It's an important concept and topic, when it comes to how to properly do IT,
but now we are applying it to energy.
Bennett: That's just the reﬂection of the fact that for any organization looking to get the most
value out of their IT organization, their infrastructure, and operations, they need to address
governance, as much as they need to address the business services they're providing, as much as
they need to address the infrastructure with how they deliver it and how they manage things like
energy and security in that environment. It's all connected then.
Gardner: I wonder if we have any examples of how this has worked in practice. Within HP,
itself, I assume that you want to cut your energy bills as much as anyone else does, particularly
in a down economy or when a growth pattern hasn’t quite kicked in fully. Are there any
examples within HP or some customers or clients that you have worked with?
Oathout: In the HP example, our IT organization has gone from 85 data centers down to 6.
They've actually reduced the amount of budget we spent on IT from about 4 percent of our
overall P&L down to about 2 percent. So, they've done a very good job consolidating and
migrating the workload to a smaller set of facilities and a smaller set of infrastructure.
They're now in the process of automating all that, so long term we will have a much more
predictable IT workload from an energy perspective. They're implementing the software to
control the energy. They're implementing power capping. They're implementing a converged
infrastructure, so they have the ability to share resources among applications. HP IT has really
driven their cost down through this.
We have another example with the Sisters of Mercy Hospital System, which did a very similar
convergence of infrastructure on a smaller scale. In their data center, they freed up 75 percent of
their ﬂoor space by doing server consolidation, storage consolidation, and energy management.
They now have 25 percent of the footprint they used to have from a server-storage physical
standpoint, but they are also only using about 33 percent of the energy they used to use within their data center.
So, they're getting huge floor space capacity back, but are also getting a power saving of about 66 percent, versus where they were two years ago. By doing this converged infrastructure, by doing
consolidation, and then managing and capping the IT systems, they’ve got a much more
predictable budget to run their IT infrastructure.
Gardner: I suppose getting started is a tough question, because you could get started so many
different ways and there is such wide variability in how data centers are constructed and how old
they are and what the characteristics are. I can almost answer this question so many different ways -- but how do you get started, depending on what your situation is at this point?
Bennett: For many customers, if they're struggling to understand where energy is being
consumed and how it's being used, we will probably recommend starting with an energy
efﬁciency analysis. That will not only do a thorough evaluation of both the facility and the
infrastructure, but provide insight into the kind of savings you can expect from the different
types of investment opportunities to reduce energy costs. That’s the general starting point, if you
are trying to understand just what’s going on with the energy.
Once you understand what you are doing with energy, then you can dive into looking at a Smart
Grid for Data Center solution itself as something to take you even further. Doug, how do you get
started with that?
Oathout: Another way to get started, John, is deploying new IT infrastructure. Our ProLiant
servers, our Integrity servers, or our storage products have the instrumentation and the
monitoring all built into the infrastructure. Deploying those new servers or storage environments allows you to get a picture of how much energy is being used by them, so you can have more
predictable power usage going forward.
Customers are using virtualization. Customers are trying to get utilization of the servers and
storage environment up to a very efﬁcient level. Having the power management and the energy
monitoring being built into those systems allows them to start laying out how much
infrastructure they can support in their data center.
One of the keys for us is to start deploying the new pieces of HP IT equipment, which are fully
instrumented and energy efﬁcient. You'll have the snapshot of actual power consumption, and, if
you upgraded your IT facilities over a longer period of time, you can get a full snapshot of your
infrastructure. You can actually increase the capacity of the data center just by deploying the new
products that are much more efﬁcient than the ones three or four years ago.
Bennett: That’s a good example of this integrated roadmap idea behind DCT. I characterize it as
modernization, consolidation, and virtualization. Really, it's stepping up the capabilities of their infrastructure to reduce cost, improve efficiency, improve quality of service, and reduce energy costs.
As Doug highlighted, after that phase of work is done, you've laid the ground work to consider
taking advantage of that from an instrumentation and management point of view. You can
augment that with further instrumentation of the racks and the data center resources in order to
really implement a complete Smart Grid for Data Center solution. It's a stepping stone. It leverages the accomplishments done for other purposes to take you further into a good, efficient data center.
Gardner: Based on some of the capacity improvements and savings, it certainly sounds like a
no-brainer, but I have to imagine, John, that in the future, it's going to become less of an option
and something that’s essentially mandatory. An 11 percent annual growth in energy cost is not a
sustainable trajectory. We have to expect that energy costs will be volatile, but, perhaps, over
time more expensive, whether in real terms or when you factor in the added cost of taxation,
carbon taxes and caps, and what have you. So, this is really something that has to be done. You
might as well start sooner than later.
Bennett: Yes. And regulations and governance from outside agencies are currently an issue.
There are places in the world, such as the UK or California, where the power you have coming
into your facilities is all the power you are ever going to have. So, you really have to manage
inside of that type of regulatory constraint.
We have voluntary programs. Perhaps the most visible one is the European Data Center Code of
Conduct, and clearly we expect to see more regulation of IT and facilities, in general, moving
forward. Carbon reduction mandates impacting organizations are going to be external drivers
behind doing this. Of course, if you get ahead of the game and do this for business purposes, you will be well set to manage that when it comes.
Gardner: We've been talking about how to gain control over energy use and perhaps misuse in
enterprise data centers. We were talking about how a Smart Grid approach, a comprehensive
approach, using the available data to start creating capacity management capabilities, makes a
tremendous amount of sense. I want to thank our guests on this discussion. We've been joined by
Doug Oathout, Vice President of Green IT, Enterprise Servers and Storage at HP. Thank you, Doug.
Oathout: Thank you, Dana.
Gardner: We've also been joined by John Bennett, Worldwide Director of Data Center
Transformation Solutions at HP. Thanks again, John.
Bennett: My pleasure, Dana. Thank you.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been
listening to a sponsored BriefingsDirect podcast. Thanks very much for listening, and come back next time.
Transcript of a BriefingsDirect podcast on implementing energy efficiency in the data center to gain new capacity. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.