Make AI Adoption a Strategic, ROI-
Focused, Fit-for-Purpose and
Sustainable Transformation, Says HPE
Transcript of a discussion on how energy use and resources management have emerged as key
ingredients of artificial intelligence adoption success -- or failure.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard
Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast series. I’m
Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this
ongoing discussion on best practices for deploying artificial intelligence (AI) with a focus on
sustainability and strategic business benefits.
As AI rises as an imperative that impacts companies at nearly all levels, proper concern for
efficiency around energy use and resources management has emerged as a key ingredient of
success -- or failure. It’s becoming increasingly evident that AI deployments will demand vast resources -- energy, water, skills, and upgraded or wholly new data center and electrical grid infrastructures.
Stay with us now as we examine why factoring the full and long-term benefits — accurately
weighed against the actual costs — is essential to assuring the desired business outcomes from
AI implementations. Only by calculating the true and total expected costs in the fullest sense
can businesses predict the proper fit-for-purpose use for large deployments of AI systems.
Here to share the latest findings and best planning practices for
sustainable AI is John Frey, Director and Chief Technologist of
Sustainable Transformation at Hewlett Packard Enterprise
(HPE). Welcome, John.
John Frey: Thank you. It’s great to be here.
Gardner: It’s good to have you back.
John, AI capabilities have moved outside the traditional
boundaries of technology, data science, and analytics. AI has
become a rapidly growing imperative for business leaders -- and
it’s impacting the daily life of more and more workers.
Generative AI, and more specifically large language models, for example, are now widely sought for broad and varied uses.
While energy efficiency has long been sought for general IT and high-performance computing
(HPC), AI appears to dramatically up the game on the need to factor and manage the required
resources.
John, how much of a sea change is the impact of AI having on all that’s needed to support these complex systems?
AI impact adds up, everywhere
Frey: Well, AI certainly is an additional load on resources. AI training, for example, is power-intensive. AI inferencing acts similarly, and it obviously happens again and again and again as users use the tools as designed over a long period of time.
It remains to be seen how much and how quickly, but there’s a lot of research out there that
suggests that AI use is going to rapidly grow in terms of overall technology demand.
Gardner: And, you know, we need to home in on how powerful and useful AI is. Nearly
everyone seems confident that there’s going to be really important new use cases and very
powerful benefits. But we also need to focus on what it takes to get those results, and I think
some people may have skipped over that part.
Frey: Yes, absolutely. A lot of businesses are still trying to figure out the best uses of AI, and
the types of solutions within their infrastructure that either add business value, or speed up their
processes, or that save some money.
Gardner: And this explosive growth isn’t replacing traditional IT. We still need the data centers we’re running now to keep doing what they’re doing. This is not a rip and replace by any stretch. This is an add-on, and perhaps even a different type of infrastructure requirement, given the high energy density, total power, and resulting heat requirements.
Frey: Absolutely. In fact, we constantly have customers coming to us asking both how this supplements the existing technology workloads they are already running, and what they need to change in terms of infrastructure to run these new workloads in the future.
Gardner: John, we’re seeing different countries approach these questions in different ways. We
do not have a clean room approach to deploying AI. We have it going into the existing public
infrastructure that serves cities, countries, and rural localities.
And so, how important is it to consider the impact -- not just from an AI capabilities requirement
-- but from a societal and country-by-country specific set of infrastructure requirements?
Frey: That’s a great question. We’re already
seeing evidence of jurisdictions looking at the
increasing power demand, and also water
demand. Some are either slowing down the
implementation or even pausing the
implementation for a period of time so that they
can truly understand the implications on the
utilities and infrastructure that these new AI
workloads are going to have.
Gardner: And, of course, some countries and regulatory agencies are examining how sustainable our overall economy is, given the record-breaking levels of carbon still being delivered into the atmosphere.
Frey: Absolutely, and that has been a constant focus. Certainly, technology like AI brings that
front and center from both a power and water perspective, but also from a social good
perspective.
If you think about sustainability in the broadest sense of the term, that’s what those jurisdictions are looking at. And so, we’re going to see new permitting processes, I predict. We’re also going to
see more regulatory action.
Gardner: John, there’s been creep over the years as to what AI entails and includes -- from
traditional analytics and data crunching, to machine learning (ML), and now the newer large
language models and their ongoing inference demands. We’re also looking at tremendous
amounts of data, and the requirement for more data, as another important element of the
burgeoning demand for more resources.
How important is it for organizations to examine the massive data gathering, processing, and storage requirements -- in addition to the AI modeling aspects -- as they seek resources for sustainability?
Data efficiency delivers maximum effectiveness
Frey: It’s vital. In fact, when we think about how HPE looks at sustainable IT broadly, data
efficiency is the first place we suggest users think about improvement. From an AI perspective,
it’s increasingly about minimizing the data sets used to train the models.
For example, if you’re using off-the-shelf data sets, such as crawls of the entire internet, and your solutions are only going to operate in English, you can instantly discard the collected data that isn’t in the English language. If, for example, you’re building a large language model, you don’t need the HTML and other programming code that a crawler probably grabbed as well.
Getting the data pull right in the first place, before you do the training, is a key part of
sustainable AI, and then you can use only your customer’s specific data as you tune that model
as well.
By starting first with data efficiency -- and getting that data population as concise as it can be
from the early stages of the process -- then you’re driving efficiency all the way through.
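As a rough illustration of that kind of pre-training pruning, here is a minimal sketch. It assumes the open-source langdetect package for language detection; the length threshold and the regular expression are illustrative choices, not an HPE-prescribed pipeline.

```python
import re

from langdetect import detect, LangDetectException  # pip install langdetect

TAG_RE = re.compile(r"<[^>]+>")  # crude stripper for leftover HTML/markup

def prune_corpus(raw_records):
    """Yield cleaned, English-only documents from a raw web crawl."""
    for text in raw_records:
        cleaned = TAG_RE.sub(" ", text).strip()   # drop programming/markup residue
        if len(cleaned) < 50:                     # too short to carry training signal
            continue
        try:
            if detect(cleaned) == "en":           # keep only English-language records
                yield cleaned
        except LangDetectException:
            continue                              # undetectable language: discard it

# Example: corpus = list(prune_corpus(open("crawl.txt", encoding="utf-8")))
```

The point of the sketch is simply that the filtering happens once, up front, before any compute is spent on training.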
Gardner: So being wise with your data choices is an important first step for any AI activity. Do
you have any data points on how big of an impact data and associated infrastructure demands
for AI can have?
Frey: Yes, for the latest large language models that many people are using or familiar with,
such as GPT-4, there’s been research looking at the reported infrastructure stack that was
needed to train that. They’ve estimated more than 50 gigawatt hours of energy were consumed
during the training process. And that training process, by the way, was believed to be
somewhere on the order of about 95 days.
Now to put that in perspective, that is about the same amount of energy that 2,700 U.S. homes consume in a year, using the U.S. Environmental Protection Agency’s (EPA’s) equivalency model. So, there’s a tremendous amount of energy that goes into the training
process. And remember, that’s only in the first 95 days of training for that model. Then the
model can be used for multiple years with people running inference problems against it.
In the same way, we can look at the water consumption involved. There are often millions of
gallons of water used for cooling during such a training run. Researchers have also estimated that running a single 5- to 20-variable inference problem consumes about 500 milliliters of water -- roughly a 16-ounce bottle -- per run as part of the needed cooling. If you have millions of users running millions of problems that each require an inference run, that’s a significant amount of water in a short period of time.
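To make that scale concrete, here is a quick back-of-envelope sketch using the ~500-milliliter-per-run estimate cited above; the user and query counts are invented inputs, not figures from the research.

```python
ML_PER_INFERENCE_RUN = 500  # ~one 16-ounce bottle per inference run (cited estimate)

def daily_cooling_water_liters(users: int, runs_per_user_per_day: int) -> float:
    """Estimate daily cooling-water draw for an inference service."""
    return users * runs_per_user_per_day * ML_PER_INFERENCE_RUN / 1000.0

# Hypothetical load: 2 million users, 10 inference runs each per day
print(f"{daily_cooling_water_liters(2_000_000, 10):,.0f} liters/day")
# -> 10,000,000 liters/day, about 10,000 cubic meters of cooling water daily
```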
Gardner: These impacts then are on some of our most precious and valuable commodities:
carbon, water, and electricity. How is the infrastructure needed to support these massive AI
undertakings different from past data centers? Do such things as energy density per server rack
or going to water instead of air cooling need to be re-examined in the new era of AI?
Get a handle on AI workloads
Frey: It’s a great question. Part of the challenge, and why it comes up so much, is as we think
about these new AI workloads, the question becomes, “Can our existing infrastructure and
existing data centers handle that?”
Several things that we think are pushing us to consider either new facilities or new co-location
sites are such issues as rack density going up. Global surveys of rack densities find that the most commonly reported density today is about four to six kilowatts per rack. Yet we know that with AI training systems, and even inference systems, those rack densities may be up in the range of 20, 30, or even 50 kilowatts per rack.
Many existing IT facilities aren’t made to
handle that power density at all. The other
thing we know is many of the existing facilities
continue to be air-cooled. They’re taking in
outside air, cooling it down and then providing
that to the IT equipment to remove the heat.
We know that when you start getting above 20
kilowatts per rack or so, air cooling is less
effective against some of those high-heat-producing workloads. You really may need to make a
shift to direct liquid cooling.
And again, what we find is so many data centers that exist today, whether they’re privately
owned or in a co-location space, don’t have the capability for the liquid cooling that’s required.
So that’s going to be another needed change.
And then the third thing here is that these workloads -- both training and inference -- often have accelerators in them. We’re seeing that the critical temperatures those accelerators, along with the central processing units (CPUs), must be kept below to run most effectively are actually dropping.

So at the same time that we have higher densities, higher heat generation, and therefore a need for more effective cooling, the required critical temperature of the most critical devices is dropping. These three elements put together are really what’s driving the demand for our infrastructure to change in the future.
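Those rules of thumb can be captured in a simple planning check. A minimal sketch follows; the ~20 kW/rack air-cooling crossover comes from Frey’s comments above, while the other bands and labels are illustrative assumptions, not an HPE sizing tool.

```python
AIR_COOLING_LIMIT_KW = 20.0  # above ~20 kW/rack, air cooling loses effectiveness

def cooling_recommendation(rack_kw: float) -> str:
    """Suggest a cooling approach from planned rack power density (kW/rack)."""
    if rack_kw <= 6:
        return "conventional air cooling (typical 4-6 kW enterprise rack)"
    if rack_kw <= AIR_COOLING_LIMIT_KW:
        return "enhanced air cooling (containment, higher airflow)"
    return "direct liquid cooling"

for density in (5, 15, 30, 50):  # kW/rack, from legacy racks to AI training racks
    print(f"{density:>2} kW/rack -> {cooling_recommendation(density)}")
```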
Gardner: And this is not going to just impact your typical Global 2000 enterprise’s on-premises
data centers. This is going to impact co-location providers, various IT service providers, and the
entire ecosystem of IT infrastructure-as-a-service (IaaS) providers.
Frey: Yes, absolutely. I will say that many of these providers have already started the transition
for their normal, non-AI workloads as server efficiency has dramatically improved, particularly in
terms of performance per watt and as rack densities have grown.
One of the ways that co-location providers
charge their customers is by space used, and
another way is by power consumption. So, if
you’re trying to do as much work as possible
for the same watt of power -- and you’re trying
to do it in the smallest footprint possible -- you
naturally will raise rack densities.
So, this trend has already started, but AI accelerates the trend dramatically.
Gardner: It occurs to me, John, that for 30 or more years, there was a vast amount of wind in
the sails of IT and its evolution in the form of Moore’s law. That benefit of processor designs rapidly improving in efficiency, capability, and scale over time was often taken for granted in the economics of IT in general. And then, for the last 5 to 10 years, we’ve had advances in
virtualization and soaring server utilization improvements. Then massive improvements in data
storage capacities and management efficiencies were added.
But it now seems that even with all of that efficiency and improved IT capability, we’re going in reverse. We face such high demands and higher costs because of AI workloads that the cost relative to value is rapidly rising and demands more of our most expensive and precious resources.

Do we kiss goodbye any notion of Moore’s law? How long can these escalating costs continue for these newer compute environments?
Is it time to move on from Moore’s Law?
Frey: Those of us who are technologists, of course, love to find technology solutions to challenges. And as we’ve pushed on energy efficiency and performance per watt, we have in many cases seen predictions of an end to Moore’s law.
But then we find new ways to develop higher functioning processors with even better
performance. We haven’t hit thresholds there that have stopped us yet. And I think that’s going
to continue; we will keep growing performance per watt. That’s what all of the processor vendors are pushing for: improving the performance-per-watt equation.
That trajectory is going to continue into the near future, at least. At the same time, though, when
we think more broadly, we have to focus on energy efficiency, so we literally consume less
power per device.
But as you look at human behavior over the past two decades, every time we’ve been able to
save energy in one place, it doesn’t mean that overall demand drops. It means that people get
another device that they can’t live without.
For example, we all now have cell phones in our pockets, which two decades ago we didn’t
even know we needed. And now, we have tablets and laptop computers and the internet and all
of the things that we have come to not be able to live without.
It’s gotten to the point that every time we drive these power efficiencies, there are new uses for
technology -- many of which, by the way, decarbonize other processes. So, there’s a definite
benefit there. But we always have to weigh that.
Is a technology solution always the right way to solve a challenge? And what are the societal and environmental impacts of that new technology solution, so that we can factor them in and make the best decisions?
Gardner: In addition to this evolution of AI technology toward productivity and per watt
efficiency, there are also market factors involved. If the total costs are too high, then the
marketplace won’t sustain the AI solution on a cost-benefit basis. And so, as a business, if reducing cost is the only way to make solutions viable, that’s what the market is going to demand, and what your competitors are going to force on you, too.
The second market forces pillar is the compliance and regulatory factor. In fact, in May of 2024,
the European Union Energy Efficiency Directive kicks in. And so, there are powerful forces
around total costs of AI supply and consumption that we don’t have much choice over; they are compelling facts of life.
Frey: Absolutely. In fact, one of the things we’re seeing in the market is a tremendous amount
of money being spent to develop some AI technologies. That comes with really hard questions
about what’s a proper return on investment (ROI) for that initial money spent to build and train
the models. And then, can we further prove out the ROI over the long-term?
Our customers are now wisely asking those very questions. We’re also, from an HPE
perspective, making sure that customers think about the ethical and societal consequences of
these AI solutions. We don’t want customers bringing AI solutions to market and having an
unintended consequence from a bias that’s discovered, or some other aspect around privacy
and cybersecurity that they had not considered when they built the solution.
And, to your point, there is also increasing interest in how to contend with regulatory constraints
for AI solutions as well.
Gardner: So, one way or another, you’re going to be seeking a fit-for-purpose approach to AI implementations -- whether you want to or not. And so, you might as well start on that sooner rather than later.
Let’s move now toward ways that we can accomplish what we’ve been describing in terms of
keeping the AI services costs down, the energy demand down, and making sure that the
business benefits outweigh the total and real costs.
What are some ways that HPE -- through your research, product development, and customer
experiences -- is driving toward general business sustainability and transformation? How can
HPE be leveraged to improve and reduce risk specifically around the AI transformation journey?
Five levers for moving forward sustainably
Frey: One of the things that we’ve learned in 22 years or so of working with customers on sustainable technology is that there are five levers. And we intentionally call them “levers” because we believe that all of them apply to every customer, whether they have their IT workloads in the public cloud, a hybrid or private cloud, or a bare-metal environment, and whether they are on-premises, in co-location, or even out on the edge.

We know that customers can drive efficiencies if they consider these levers. The first is data efficiency, which we’ve talked about a little bit already. In the AI context, it’s first about making sure that the data sets you’re using are optimized before running the training.
When we process a bit of data in a training environment, for example, do we avoid processing it
again if we can? And how do we make sure that any data we’re going to train on, or derive from an inference, actually has a use? Does that data provide business value?
Next, if we’re going to collect data, how do we
make sure that we make an intentional decision
on the front end about how long we’re going to
store that data, and how we’re going to store it?
What type of storage? Is it something we will need instantaneously? We can choose from high-availability storage or go all the way down to tape storage if it’s more of an archival or regulatory requirement to keep that data for a long
period of time. So, data efficiency is where we suggest we start, because making the right
decisions there flows down through all of the other aspects.
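One way to make that front-end decision concrete is a simple tiering rule applied when the data is collected. The sketch below is hypothetical; the tier names and the seven-year threshold are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    needs_instant_access: bool
    retention_years: float
    regulatory_hold: bool = False

def storage_tier(asset: DataAsset) -> str:
    """Pick a storage tier up front, when the data is collected."""
    if asset.needs_instant_access:
        return "high-availability flash"
    if asset.regulatory_hold or asset.retention_years >= 7:
        return "tape archive"  # low-cost, low-energy long-term retention
    return "nearline disk"

print(storage_tier(DataAsset("inference-telemetry", False, 10, regulatory_hold=True)))
# -> tape archive
```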
The second lever is software efficiency and this, from a broader technology perspective, is
focused on writing more efficient software applications. How do we reduce the carbon intensity
of software applications? And how do we use software to drive efficiency?
From an AI perspective, this gets into model development. How do we develop more efficient
models? How do we design these models, or leverage existing models, to be as efficient as
possible and to use the least amount of compute capability, storage capability, and networking
capability to operate most efficiently?
Software efficiency even includes things such as the efficiency of the coding itself. Can it be written in a compiled language rather than a non-compiled one, so that it takes less power and CPU capability to run that software? And HPE brings many tools to market in that environment.
Next, how do we use software to drive efficiency? With AI, we’re seeing lots of interest in areas like predictive maintenance and digital twins, where we can actually use software tools to predict maintenance cycles or failures, or even infer operating and buying behaviors. We see these used in the design of data centers. How do we shift workloads for the most efficient, lowest-carbon operation? All of those aspects are part of software efficiency.
And then we move to the hardware stack and that means equipment efficiency. When you
have a piece of technology equipment, can you have it do the most amount of work? We know
from global industry surveys that technology equipment is often very underutilized. For a variety
of reasons, there’s redundancy and resiliency built into the solutions.
But as we begin moving more into AI, we tend
to look at hardware and software solutions that
deliver high levels of availability across the
equipment infrastructure. On one hand, by its
very nature, AI is designed to run this
equipment at higher levels of utilization. And
there is huge demand, particularly in terms of
training, on single large workloads that run across a variety of devices as well. But equipment
efficiency is all about attaining the highest levels of utilization.
Then, we move to energy efficiency. And this is about how to do the most amount of work per
input watt of power so that the devices are as high performing as possible. We tend to call that
being energy effective. Can you do the most amount of work with the same input of energy?
And, from an AI perspective, it’s so critical because these systems consume so much power
that often we’re able to easily demonstrate the benefits for an input watt of power or volume of
water that we’re using from a cooling perspective.
And finally, there is resource efficiency, which is about running technology solutions so that they need the least amount of supporting resources. Those include auxiliary cooling, power conversions, or even the human resources that it takes to run these solutions.
So, from an AI context, again, we’ve talked about raising power densities and the shift from air to liquid cooling. Cooling is going to be so critical. And it turns out that direct liquid cooling requires a much lower percentage of power than some of our air-cooled infrastructure. You can drop your power consumption dramatically by moving to direct liquid cooling.
It’s the same way from a staffing perspective. As you begin having analytics that allow you to monitor all these variables across your technology solutions -- which is so common in an AI solution -- you need fewer staff to run those solutions. You also gain higher levels of employee satisfaction, because staff can see how the infrastructure is doing and a lot of the mundane tasks, such as constant tuning, are being made more efficient.
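Taken together, the equipment- and energy-efficiency levers reduce to one figure of merit: work delivered per unit of input energy. A minimal sketch of tracking it follows; the job sizes and power draws are invented for comparison, not benchmark data.

```python
def energy_effectiveness(work_units: float, avg_power_watts: float, hours: float) -> float:
    """Work delivered per kilowatt-hour of input energy."""
    kwh = avg_power_watts * hours / 1000.0
    return work_units / kwh

# Hypothetical comparison: the same training job on two server generations
old_gen = energy_effectiveness(work_units=1_000_000, avg_power_watts=8_000, hours=100)
new_gen = energy_effectiveness(work_units=1_000_000, avg_power_watts=5_500, hours=60)
print(f"old: {old_gen:,.0f} units/kWh, new: {new_gen:,.0f} units/kWh")
# -> old: 1,250 units/kWh, new: 3,030 units/kWh
```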
Gardner: Well, this drive for sustainability is clearly a non-trivial undertaking. Obviously, when
planning out efficiencies across entire data centers, it continues over many years, even
decades.
It occurs to me, John, that smaller companies that want to do AI deployments themselves -- to customize their models and their data sets for particular uses, and so develop proprietary, advantage-based operations -- are going to be challenged when it comes to achieving AI efficiently.
At the same time, the large hyperscalers, which are very good at building out efficient data
center complexes around the globe, may not have the capability to build AI models at the
granular level needed for the vertical-industry customization required of smaller companies.
So, it seems to me that an ecosystem approach is going to shake out where these efficiencies
are going to need to manifest. But it’s not all going to happen at the company-by-company level.
And it can’t necessarily happen at the cloud provider-by-cloud provider level either.
Do you have any sense of how we should expect an AI services ecosystem – one that reaches
a balance between needed customization and needed efficiency at scale – will emerge that can
take advantage of these essential efficiency levers you described?
An AI ecosystem evolves
Frey: Yes, exactly what you describe is what we see happening. We have some customers that want to make the investments in high-performance computing and in the development and training of their own AI solutions. But the customers willing to make that type of investment are very few.
Other customers want to access an AI
capability and either have some of that
expertise themselves or they want to
leverage a vendor such as HPE’s
expertise from a data science
perspective, from a model development
perspective, and from a data efficiency
perspective. We certainly see a lot more
customers that are interested in that.
And then there’s a level above that: customers that want to take a pre-trained model and just tune it using their own specific data sets. And we think that segment of the population is even broader, because so many highly valuable uses of AI still require training on task-specific or organization-specific data.
And finally, we see a large range of customers that want to take advantage of pre-trained, pre-
tuned AI solutions that are applicable across an entire industry or segment of some kind. One of the things that HPE has found over the years, as we’ve built all portions of that stack and partnered with companies, is that having that whole portfolio, and the expertise across it, allows us to look both downstream and upstream of what the customer is looking at. It allows us to help them make the most efficient decisions, because we look across the hardware, the software, and that entire ecosystem of partners as well.

It does, in our mind, allow us to leverage decades’ worth of experience to help customers attain the most efficient and most effective solutions when they’re implementing AI.
Gardner: John, are there any leading use cases or examples that we can look to that illustrate
how such efficiency makes an impactful difference?
Examples of AI increasing efficiency, productivity
Frey: Yes, I’ll give you just a couple of examples. Obviously, one early adopter of some types of AI systems has been healthcare. A great example is looking at x-rays. It turns out that with ML you can actually do a pretty good job of having a system scan an x-ray and make a decision -- “Is that a bone fracture or not?” for example. And if it’s unsure, it passes the x-ray to a radiologist who can take a deeper look. You can tune the system very, very well.
There’s a large population of x-ray imagery that creates some very clear examples of something
that is a fracture or something that is not, for example. There have been lots of studies looking at how these systems perform against a single radiologist looking at the same x-rays as well.
Particularly, when a radiologist spends their day
going from x-ray to x-ray to x-ray, there can be some
fatigue associated with that, so their diagnostic
capabilities get better when the system does a first-
level screen and then passes the more specific
cases to the radiologist for a deeper analysis. If
there is something that’s not really clear one way or
the other, it lets the radiologist spend more time on
it. So, that’s a great one.
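The triage pattern Frey describes comes down to a confidence threshold: auto-report the clear cases, escalate the uncertain ones. The sketch below assumes a hypothetical model.predict interface returning a label and a confidence score; the 0.95 threshold is purely illustrative, since real clinical thresholds are validated, not guessed.

```python
CONFIDENCE_THRESHOLD = 0.95  # illustrative; clinically validated in practice

def triage_xray(model, image) -> dict:
    """First-level screen: only uncertain x-rays go to the radiologist."""
    label, confidence = model.predict(image)  # hypothetical model interface
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"finding": label, "route": "auto-report"}
    return {"finding": None, "route": "radiologist review"}  # deeper human analysis
```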
We’re seeing a lot of interest in manufacturing processes as well. How do we look at something
using video and video analytics to examine parts or final assemblies coming off of an assembly
line and say, “Does this appear the way it’s supposed to from a quality perspective?” “Are there any additional components, or are there any components missing?” for example.
It turns out those use cases actually do a really good job from a power-performance perspective and from an ROI perspective. If you dive deeper into natural language processing (NLP), we want to train tools that can answer basic customer questions or allow the customer to interact
by voice with a service tool that can provide low-level diagnostics or route them appropriately, for example. In some cases, it can even give them the right answer in both voice and text.
In fact, you’re now seeing some of those come out in very popular software applications that a
variety of people around the world use. We’re seeing AI systems that predict the next couple of words in a sentence, for example, or allow for a higher level of productivity. I think those again
are still proving their case.
In some cases, users see them as a barrier, not as an assistant, but I think the time will come,
as those start getting more and more accurate, when they’re going to be really useful tools.
Gardner: Well, it certainly seems that given the costs and the impacts on carbon load, on infrastructure, and on the demand for skills to support it, it’s incumbent on companies big and small to be quite choosy about which AI use cases and problems they seek to solve first. This just can’t be a solution in search of a problem. You need to have a very good problem that will deliver very good business results.
It seems to me that businesses should carefully evaluate where they devote resources and use
these intelligence capabilities to the fullest effect and pick those highly productive use cases
and tasks earlier rather than later.
Define and refine your sustainable IT strategy
Frey: Yes, absolutely. Let’s not have a solution in search of a problem. Let’s find the best business challenges and opportunities to solve, and then look at the right strategic approaches to solving them. What’s the ROI for each of those solutions? What are some of the unintended consequences, like a privacy issue or a bias issue, that you want to prevent? And then, how do we learn from others that have implemented those tools, and partner with vendors that have a lot of historical competency in those topics and have helped many customers bring those solutions to market?
So, it’s really about finding the best solution for the business challenge and being able to quantify that benefit. One of the things that we did really early on, as we were developing our sustainable IT approach, was to recognize that so many customers didn’t know how to get started.
We offer a free workbook for customers called Six Steps for Developing a Sustainable IT Strategy. One of the things it addresses -- and this comes up in the majority of AI conversations as well -- is that customers often couldn’t measure the impact of what they had today because they didn’t have a baseline. So, they implemented a technology solution and then said, “That must be much better because we’re using technology.” But without measuring the baseline, they weren’t able to quantify the financial, environmental, and carbon implications of the new solution.
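A baseline does not have to be elaborate. The hypothetical sketch below records a simple before-and-after pair; the energy figures and the grid emissions factor are made-up illustrations, not reference values.

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    annual_kwh: float
    annual_cost_usd: float
    kg_co2e_per_kwh: float = 0.4  # illustrative grid-average emissions factor

    def annual_co2e_kg(self) -> float:
        return self.annual_kwh * self.kg_co2e_per_kwh

before = Baseline(annual_kwh=250_000, annual_cost_usd=30_000)
after = Baseline(annual_kwh=190_000, annual_cost_usd=23_500)

print(f"energy saved: {before.annual_kwh - after.annual_kwh:,.0f} kWh/yr")
print(f"cost saved:   ${before.annual_cost_usd - after.annual_cost_usd:,.0f}/yr")
print(f"CO2e avoided: {before.annual_co2e_kg() - after.annual_co2e_kg():,.0f} kg/yr")
```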
We help customers along this journey by helping them think about this strategically and by getting together all the appropriate organizations within their company that need to be part of making a decision about these solutions. For example, if you’re worried about cybersecurity implications, make sure the cybersecurity team is part of the project team. If you’re worried about bias implications, make sure that your legal teams are involved, along with anyone else who’s looking at employee or customer privacy. If you’re thinking about solutions that are going to decarbonize or save power, make sure you have your global workplace teams involved to help quantify that, and your sustainability teams if you’re going to talk about carbon mitigation as part of all of this.
It’s about having the right organizations involved, looking at all the issues that inform the decisions, and examining whether the solution really is sustainable. Does it have both a financial and an environmental ROI that makes sense?
Gardner: It sure seems that emphasizing AI sustainability should be coming from the very top
of the organization, any organization, and in very loud and impressive terms. Because as AI
becomes a core competency -- whether you source it or do it in-house -- it is going to be an
essential business differentiator. If you’re going to do AI successfully, you’re going to need to do
it sustainably. And so, AI sustainability seems to be a pillar of getting to an AI outcome that
works long-term for the organization.
As we move to the end of our very interesting discussion, John, what are some resources that
people can go to? How can they start to consider what they’ve been doing around sustainability
and extending that into AI, or examine what they’re doing with AI and make sure that it conforms
to the concepts around sustainability and the all-important objectives of efficiency?
Frey: The first one, which we’ve already talked about, is to make sure you have a sustainable IT strategy as part of your overarching technology strategy. And now that it includes AI, that strategy really gets accelerated by AI workloads.
Part of that strategy is getting stakeholders together so that folks can help look for the blind
spots and help quantify the implications and the opportunities. And then, look across the entire
environment -- from public cloud to edge, hybrid cloud, and private cloud in the middle -- and
look to those five levers of efficiency that we talked about. In particular, emphasize data
efficiency and software efficiency from an AI perspective.
And then, look at it all across the lifecycle, from the design of those products to the return and
the end-of-life processes. Because when we think about IT lifecycles, we need to consider all of
the aspects in the middle.
That drives such things as how do you procure the most efficient hardware in the first place and
provide the most efficient solutions? How do you think about tech refresh cycles, and why are those cycles different for compute, storage, networking, and AI? How do all those pieces interconnect to impact tech refresh cycles?
And from an HPE perspective, one of the things that we’ve done is published a whole series of
resources for customers. We mentioned the Six Steps for Developing a Sustainable IT Strategy
workbook. But we also have specific white papers as well on software efficiency, data efficiency,
energy efficiency, equipment efficiency, and resource efficiency.
We make those freely available on HPE’s website. So, use the resources that exist, partner with vendors that have core capability and core expertise across all of these areas of efficiency, and spend a fair amount of time in the development process ensuring that the ROI, both financially and from a sustainability perspective, is as positive as possible when implementing these solutions.
Gardner: I’m afraid we’ll have to leave it there. We’ve been exploring how AI deployments will
demand vast resources -- energy, water, skills, and upgraded or wholly new data center and
electrical grid infrastructures.
And we’ve learned not only that calculating the true and total expected costs in the fullest sense lets businesses predict the proper fit-for-purpose use in deployments of AI, but that doing it in the most efficient manner might be the only way to go about successful AI deployments.
And so, please join me in thanking our guest, John Frey, Director and Chief Technologist for
Sustainable Transformation at HPE. Thank you so much, John.
Frey: Thank you for letting me come on and share this expertise.
Gardner: You bet. And thanks as well to our audience for joining this sponsored BriefingsDirect
discussion on the best path to sustainable AI.
I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of
HPE-sponsored discussions. Thanks again for listening. Please pass this along to your IT
community and do come back next time.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard
Enterprise.
Transcript of a discussion on how energy use and resources management have emerged as key
ingredients of artificial intelligence adoption success -- or failure. Copyright Interarbor Solutions, LLC,
2005-2024. All rights reserved.
More Related Content

Similar to Make AI Adoption a Strategic, ROI-Focused, Fit-for-Purpose and Sustainable Transformation, Says HPE

The Evolving Role of the Data Engineer - Whitepaper | Qubole
The Evolving Role of the Data Engineer - Whitepaper | QuboleThe Evolving Role of the Data Engineer - Whitepaper | Qubole
The Evolving Role of the Data Engineer - Whitepaper | Qubole
Vasu S
 
Industrializing Data Science: Transform into an End-to-End, Analytics-Oriente...
Industrializing Data Science: Transform into an End-to-End, Analytics-Oriente...Industrializing Data Science: Transform into an End-to-End, Analytics-Oriente...
Industrializing Data Science: Transform into an End-to-End, Analytics-Oriente...
Dana Gardner
 
Platform 3.0 Ripe to Give Standard Access to Advanced Intelligence and Automa...
Platform 3.0 Ripe to Give Standard Access to Advanced Intelligence and Automa...Platform 3.0 Ripe to Give Standard Access to Advanced Intelligence and Automa...
Platform 3.0 Ripe to Give Standard Access to Advanced Intelligence and Automa...
Dana Gardner
 
HP's Converged Infrastructure and Data Center Transformation Models Define th...
HP's Converged Infrastructure and Data Center Transformation Models Define th...HP's Converged Infrastructure and Data Center Transformation Models Define th...
HP's Converged Infrastructure and Data Center Transformation Models Define th...
Dana Gardner
 
IRJET- Scope of Big Data Analytics in Industrial Domain
IRJET- Scope of Big Data Analytics in Industrial DomainIRJET- Scope of Big Data Analytics in Industrial Domain
IRJET- Scope of Big Data Analytics in Industrial Domain
IRJET Journal
 
A study on web analytics with reference to select sports websites
A study on web analytics with reference to select sports websitesA study on web analytics with reference to select sports websites
A study on web analytics with reference to select sports websites
Bhanu Prakash
 
Wind Data US 2017 Brochure FINAL
Wind Data US 2017 Brochure FINALWind Data US 2017 Brochure FINAL
Wind Data US 2017 Brochure FINAL
Dominic Coyne
 
Wind Data USA 2017 brochure
Wind Data USA 2017 brochureWind Data USA 2017 brochure
Wind Data USA 2017 brochure
Heather Smith
 
Dell NVIDIA AI Powered Transformation in Financial Services Webinar
Dell NVIDIA AI Powered Transformation in Financial Services WebinarDell NVIDIA AI Powered Transformation in Financial Services Webinar
Dell NVIDIA AI Powered Transformation in Financial Services Webinar
Bill Wong
 
The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Dat...
The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Dat...The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Dat...
The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Dat...
Dana Gardner
 
Serving the long tail white-paper (how to rationalize IT yet produce more apps)
Serving the long tail white-paper (how to rationalize IT yet produce more apps)Serving the long tail white-paper (how to rationalize IT yet produce more apps)
Serving the long tail white-paper (how to rationalize IT yet produce more apps)
Newton Day Uploads
 
Hadoop Does Not Equal Big Data
Hadoop Does Not Equal Big Data Hadoop Does Not Equal Big Data
Hadoop Does Not Equal Big Data
Enterprise Management Associates
 
Greening Artificial Intelligence
Greening Artificial IntelligenceGreening Artificial Intelligence
Greening Artificial Intelligence
Alexandra Petruș
 
Influence of Hadoop in Big Data Analysis and Its Aspects
Influence of Hadoop in Big Data Analysis and Its Aspects Influence of Hadoop in Big Data Analysis and Its Aspects
Influence of Hadoop in Big Data Analysis and Its Aspects
IJMER
 
Implementation of application for huge data file transfer
Implementation of application for huge data file transferImplementation of application for huge data file transfer
Implementation of application for huge data file transfer
ijwmn
 
A Data-driven Maturity Model for Modernized, Automated, and Transformed IT
A Data-driven Maturity Model for Modernized, Automated, and Transformed ITA Data-driven Maturity Model for Modernized, Automated, and Transformed IT
A Data-driven Maturity Model for Modernized, Automated, and Transformed IT
balejandre
 
The book of elephant tattoo
The book of elephant tattooThe book of elephant tattoo
The book of elephant tattoo
Mohamed Magdy
 
Converged IoT Systems: Bringing the Data Center to the Edge of Everything
Converged IoT Systems: Bringing the Data Center to the Edge of EverythingConverged IoT Systems: Bringing the Data Center to the Edge of Everything
Converged IoT Systems: Bringing the Data Center to the Edge of Everything
Dana Gardner
 
Talend Open-Source Approach Provides Holistic Integration Capability Across, ...
Talend Open-Source Approach Provides Holistic Integration Capability Across, ...Talend Open-Source Approach Provides Holistic Integration Capability Across, ...
Talend Open-Source Approach Provides Holistic Integration Capability Across, ...
Dana Gardner
 
Practical analytics john enoch white paper
Practical analytics john enoch white paperPractical analytics john enoch white paper
Practical analytics john enoch white paper
John Enoch
 

Similar to Make AI Adoption a Strategic, ROI-Focused, Fit-for-Purpose and Sustainable Transformation, Says HPE (20)

The Evolving Role of the Data Engineer - Whitepaper | Qubole
The Evolving Role of the Data Engineer - Whitepaper | QuboleThe Evolving Role of the Data Engineer - Whitepaper | Qubole
The Evolving Role of the Data Engineer - Whitepaper | Qubole
 
Industrializing Data Science: Transform into an End-to-End, Analytics-Oriente...
Industrializing Data Science: Transform into an End-to-End, Analytics-Oriente...Industrializing Data Science: Transform into an End-to-End, Analytics-Oriente...
Industrializing Data Science: Transform into an End-to-End, Analytics-Oriente...
 
Platform 3.0 Ripe to Give Standard Access to Advanced Intelligence and Automa...
Platform 3.0 Ripe to Give Standard Access to Advanced Intelligence and Automa...Platform 3.0 Ripe to Give Standard Access to Advanced Intelligence and Automa...
Platform 3.0 Ripe to Give Standard Access to Advanced Intelligence and Automa...
 
HP's Converged Infrastructure and Data Center Transformation Models Define th...
HP's Converged Infrastructure and Data Center Transformation Models Define th...HP's Converged Infrastructure and Data Center Transformation Models Define th...
HP's Converged Infrastructure and Data Center Transformation Models Define th...
 
IRJET- Scope of Big Data Analytics in Industrial Domain
IRJET- Scope of Big Data Analytics in Industrial DomainIRJET- Scope of Big Data Analytics in Industrial Domain
IRJET- Scope of Big Data Analytics in Industrial Domain
 
A study on web analytics with reference to select sports websites
A study on web analytics with reference to select sports websitesA study on web analytics with reference to select sports websites
A study on web analytics with reference to select sports websites
 
Wind Data US 2017 Brochure FINAL
Wind Data US 2017 Brochure FINALWind Data US 2017 Brochure FINAL
Wind Data US 2017 Brochure FINAL
 
Wind Data USA 2017 brochure
Wind Data USA 2017 brochureWind Data USA 2017 brochure
Wind Data USA 2017 brochure
 
Dell NVIDIA AI Powered Transformation in Financial Services Webinar
Dell NVIDIA AI Powered Transformation in Financial Services WebinarDell NVIDIA AI Powered Transformation in Financial Services Webinar
Dell NVIDIA AI Powered Transformation in Financial Services Webinar
 
The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Dat...
The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Dat...The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Dat...
The Evolution of Data Center Infrastructure Has Now Ushered in The Era of Dat...
 
Serving the long tail white-paper (how to rationalize IT yet produce more apps)
Serving the long tail white-paper (how to rationalize IT yet produce more apps)Serving the long tail white-paper (how to rationalize IT yet produce more apps)
Serving the long tail white-paper (how to rationalize IT yet produce more apps)
 
Hadoop Does Not Equal Big Data
Hadoop Does Not Equal Big Data Hadoop Does Not Equal Big Data
Hadoop Does Not Equal Big Data
 
Greening Artificial Intelligence
Greening Artificial IntelligenceGreening Artificial Intelligence
Greening Artificial Intelligence
 
Influence of Hadoop in Big Data Analysis and Its Aspects
Influence of Hadoop in Big Data Analysis and Its Aspects Influence of Hadoop in Big Data Analysis and Its Aspects
Influence of Hadoop in Big Data Analysis and Its Aspects
 
Implementation of application for huge data file transfer
Implementation of application for huge data file transferImplementation of application for huge data file transfer
Implementation of application for huge data file transfer
 
A Data-driven Maturity Model for Modernized, Automated, and Transformed IT
A Data-driven Maturity Model for Modernized, Automated, and Transformed ITA Data-driven Maturity Model for Modernized, Automated, and Transformed IT
A Data-driven Maturity Model for Modernized, Automated, and Transformed IT
 
The book of elephant tattoo
The book of elephant tattooThe book of elephant tattoo
The book of elephant tattoo
 
Converged IoT Systems: Bringing the Data Center to the Edge of Everything
Converged IoT Systems: Bringing the Data Center to the Edge of EverythingConverged IoT Systems: Bringing the Data Center to the Edge of Everything
Converged IoT Systems: Bringing the Data Center to the Edge of Everything
 
Talend Open-Source Approach Provides Holistic Integration Capability Across, ...
Talend Open-Source Approach Provides Holistic Integration Capability Across, ...Talend Open-Source Approach Provides Holistic Integration Capability Across, ...
Talend Open-Source Approach Provides Holistic Integration Capability Across, ...
 
Practical analytics john enoch white paper
Practical analytics john enoch white paperPractical analytics john enoch white paper
Practical analytics john enoch white paper
 

Recently uploaded

GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
Neo4j
 
Removing Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software FuzzingRemoving Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software Fuzzing
Aftab Hussain
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
Safe Software
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Malak Abu Hammad
 
Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1
DianaGray10
 
Mind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AIMind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AI
Kumud Singh
 
How to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For FlutterHow to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For Flutter
Daiki Mogmet Ito
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
Adtran
 
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIEnchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Vladimir Iglovikov, Ph.D.
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
Claudio Di Ciccio
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
sonjaschweigert1
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
SOFTTECHHUB
 
20240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 202420240607 QFM018 Elixir Reading List May 2024
20240607 QFM018 Elixir Reading List May 2024
Matthew Sinclair
 
UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
DianaGray10
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc
 
Presentation of the OECD Artificial Intelligence Review of Germany
Make AI Adoption a Strategic, ROI-Focused, Fit-for-Purpose and Sustainable Transformation, Says HPE

Gardner: We still need to have the data centers that we're running now performing what they're performing. This is not a rip and replace by any stretch. This is an add-on, and perhaps even a different type of infrastructure requirement, given the high energy density, total power, and resulting heat requirements.

Frey: Absolutely. In fact, we constantly have customers coming to us asking both how this supplements the existing technology workloads that they are already running, and what they need to change in terms of the infrastructure to run these new workloads in the future.

Gardner: John, we're seeing different countries approach these questions in different ways. We do not have a clean-room approach to deploying AI. We have it going into the existing public infrastructure that serves cities, countries, and rural localities. And so, how important is it to consider the impact -- not just from an AI capabilities requirement -- but from a societal and country-by-country specific set of infrastructure requirements?

Frey: That's a great question. We're already seeing evidence of jurisdictions looking at the increasing power demand, and also water demand. Some are either slowing down the implementation, or even pausing the implementation for a period of time, so that they can truly understand the implications that these new AI workloads are going to have on the utilities and infrastructure.

Gardner: And, of course, some countries and regulatory agencies are examining how sustainable our overall economy is, given the record-breaking levels of carbon still being delivered into the atmosphere.
Frey: Absolutely, and that has been a constant focus. Certainly, technology like AI brings that front and center from both a power and water perspective, but also from a social good perspective. If you think of the term sustainability in its broadest use, that's what those jurisdictions are looking at. And so, we're going to see new permitting processes, I predict. We're also going to see more regulatory action.

Gardner: John, there's been creep over the years as to what AI entails and includes -- from traditional analytics and data crunching, to machine learning (ML), and now the newer large language models and their ongoing inference demands. We're also looking at tremendous amounts of data, and the requirement for more data, as another important element of the burgeoning demand for more resources.

How important is it for organizations to examine the massive data gathering, processing, and storing requirements -- in addition to the AI modeling aspects -- as they seek resources for sustainability?

Data efficiency delivers maximum effectiveness

Frey: It's vital. In fact, when we think about how HPE looks at sustainable IT broadly, data efficiency is the first place we suggest users think about improvement. From an AI perspective, it's increasingly about minimizing the training sets of data used to train the models.

For example, if you're using off-the-shelf data sets, like crawls of the entire internet around the globe, and your solutions are only going to operate in English, you can instantly discard the data that have been collected that aren't in the English language. If, for example, you're building a large language model, you don't need the HTML and other programming code that a crawler probably grabbed as well.

Getting the data pull right in the first place, before you do the training, is a key part of sustainable AI, and then you can use only your customer's specific data as you tune that model as well. By starting first with data efficiency -- and getting that data population as concise as it can be from the early stages of the process -- you're driving efficiency all the way through.
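To make that concrete, here is a minimal, illustrative sketch of that kind of corpus pre-filtering: strip the markup a crawler grabbed, then keep only English prose. The library choices (BeautifulSoup and langdetect) and all thresholds are assumptions for illustration, not tools named in the discussion:

```python
# Illustrative corpus pre-filtering: strip markup, then keep only English text.
# BeautifulSoup and langdetect are assumed stand-ins for whatever a real
# pipeline would use; the length threshold is arbitrary.
from bs4 import BeautifulSoup
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def clean_document(raw_html):
    """Return plain English text, or None if the document should be discarded."""
    text = BeautifulSoup(raw_html, "html.parser").get_text(separator=" ", strip=True)
    if len(text) < 20:               # too little text to be useful training data
        return None
    try:
        if detect(text) != "en":     # discard non-English documents up front
            return None
    except LangDetectException:      # language undetectable: discard as well
        return None
    return text

raw_crawl = [
    "<html><body><p>Direct liquid cooling can cut data center energy use.</p></body></html>",
    "<html><body><p>Ce document n'est pas en anglais et serait écarté.</p></body></html>",
]
corpus = [t for t in (clean_document(d) for d in raw_crawl) if t]
print(len(corpus))  # 1 -- only the English document survives the filter
```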
Gardner: So being wise with your data choices is an important first step for any AI activity. Do you have any data points on how big an impact data and associated infrastructure demands for AI can have?

Frey: Yes. For the latest large language models that many people are using or familiar with, such as GPT-4, there's been research looking at the reported infrastructure stack that was needed to train them. Researchers have estimated that more than 50 gigawatt-hours of energy were consumed during the training process. And that training process, by the way, was believed to take somewhere on the order of 95 days.

To put that level of power in perspective, that is about the same power that 2,700 U.S. homes consume in a year, using the U.S. Environmental Protection Agency's (EPA's) equivalency model. So, there's a tremendous amount of energy that goes into the training process. And remember, that's only the first 95 days of training for that model. Then the model can be used for multiple years, with people running inference problems against it.

In the same way, we can look at the water consumption involved. There are often millions of gallons of water used for cooling during such a training run. Researchers have also predicted that running a single 5- to 20-variable inference problem results in a water consumption of about 500 milliliters per run -- a 16-ounce bottle of water -- as part of the needed cooling. If you have millions of users running millions of problems that each require an inference run, that's a significant amount of water in a short period of time.
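As a rough sanity check, the figures Frey quotes can be combined into some back-of-the-envelope arithmetic. The inputs below are the transcript's own numbers; the outputs should be read only as orders of magnitude:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
training_energy_kwh = 50_000_000          # ~50 GWh for one large training run
training_days = 95

# Average electrical draw implied by 50 GWh spread over 95 days of training.
avg_power_mw = training_energy_kwh / (training_days * 24) / 1_000
print(f"Average draw during training: ~{avg_power_mw:.1f} MW")           # ~21.9 MW

# Cooling water at ~500 mL per inference run, scaled to a million daily runs.
runs_per_day = 1_000_000
litres_per_day = runs_per_day * 0.5
print(f"Cooling water for 1M daily runs: ~{litres_per_day:,.0f} L/day")  # 500,000 L
```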
Gardner: These impacts then are on some of our most precious and valuable commodities: carbon, water, and electricity. How is the infrastructure needed to support these massive AI undertakings different from past data centers? Do such things as energy density per server rack, or going to water instead of air cooling, need to be re-examined in the new era of AI?

Get a handle on AI workloads

Frey: It's a great question. Part of the challenge, and why it comes up so much, is that as we think about these new AI workloads, the question becomes, "Can our existing infrastructure and existing data centers handle that?"

Several issues are pushing us to consider either new facilities or new co-location sites. The first is rack density going up. Global surveys look at rack densities, and the most commonly reported rack density today is about four to six kilowatts per rack. Yet we know with AI training systems, and even inference systems, that those rack densities may be up at 20, 30, all the way to 50 kilowatts per rack. Many existing IT facilities aren't made to handle that power density at all.

The other thing we know is that many of the existing facilities continue to be air-cooled. They're taking in outside air, cooling it down, and then providing that to the IT equipment to remove the heat. We know that when you start getting above 20 kilowatts per rack or so, air cooling is less effective against some of those high-heat-producing workloads. You really may need to make a shift to direct liquid cooling. And again, what we find is that so many data centers that exist today, whether they're privately owned or in a co-location space, don't have the capability for the liquid cooling that's required. So that's going to be another needed change.

And the third thing is that the workloads running both training and inference so often have accelerators in them. The critical temperature that those accelerators -- along with the central processing units (CPUs) -- must be kept below to run most effectively is actually dropping. So at the same time that we have higher densities, higher heat generation, and therefore a need for more effective cooling, the required critical temperature of the most critical devices is dropping. These three elements put together are really what's driving the call for our infrastructure to change in the future.

Gardner: And this is not going to just impact your typical Global 2000 enterprise's on-premises data centers. This is going to impact co-location providers, various IT service providers, and the entire ecosystem of IT infrastructure-as-a-service (IaaS) providers.

Frey: Yes, absolutely. I will say that many of these providers have already started the transition for their normal, non-AI workloads, as server efficiency has dramatically improved, particularly in terms of performance per watt, and as rack densities have grown.

One of the ways that co-location providers charge their customers is by space used, and another is by power consumption. So, if you're trying to do as much work as possible for the same watt of power -- and you're trying to do it in the smallest footprint possible -- you naturally will raise rack densities. So, this trend has already started, but AI accelerates the trend dramatically.
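A hypothetical example of that footprint math, using the survey densities Frey cites and an assumed fixed one-megawatt IT load:

```python
# Illustrative only: rack count needed for a fixed 1 MW IT load at the rack
# densities quoted above. Fewer racks means less billed floor space -- but a
# per-rack heat flux that air cooling generally cannot remove.
total_it_load_kw = 1_000
for density_kw_per_rack in (5, 20, 50):   # typical racks vs. AI-class racks
    racks = total_it_load_kw / density_kw_per_rack
    print(f"{density_kw_per_rack:>2} kW/rack -> {racks:>3.0f} racks")
# 5 kW/rack -> 200 racks; 50 kW/rack -> 20 racks: a tenth of the footprint.
```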
Gardner: It occurs to me, John, that for 30 or more years, there was a vast amount of wind in the sails of IT and its evolution, in the form of Moore's law. That benefit of processor design improving its efficiency and capability, and scaling rapidly over time, was often taken for granted in the economics of IT in general. Then, for the last 5 to 10 years, we've had advances in virtualization and soaring server utilization improvements. Then massive improvements in data storage capacities and management efficiencies were added.

But it now seems that even with all of that efficiency and improved IT capability, we're going in reverse. We face such high demands and higher costs because of AI workloads that the cost against value is rapidly rising and demands more of our most expensive and precious resources. Do we kiss goodbye any notion of Moore's law? How long can these escalating costs continue for these newer compute environments?

Is it time to move on from Moore's Law?

Frey: Those of us who are technologists, of course, love to find technology solutions to challenges. And as we've pushed on energy efficiency and performance per watt, we have seen and predicted in many cases an end to Moore's law. But then we find new ways to develop higher-functioning processors with even better performance. We haven't hit thresholds there that have stopped us yet. And I think that's going to continue; we will grow performance per watt. That's what all of the processor vendors are pushing for: improving that performance-per-watt equation. That trajectory is going to continue into the near future, at least.

At the same time, though, when we think more broadly, we have to focus on energy efficiency, so we literally consume less power per device. But as you look at human behavior over the past two decades, every time we've been able to save energy in one place, it doesn't mean that overall demand drops. It means that people get another device that they can't live without. For example, we all now have cell phones in our pockets, which two decades ago we didn't even know we needed. And now, we have tablets and laptop computers and the internet and all of the things that we have come to not be able to live without.

It's gotten to the point that every time we drive these power efficiencies, there are new uses for technology -- many of which, by the way, decarbonize other processes. So, there's a definite benefit there. But we always have to weigh that. Is a technology solution always the right way to solve a challenge? And what are the societal and environmental impacts of that new technology solution, so that we can factor them in and make the best decisions?

Gardner: In addition to this evolution of AI technology toward productivity and per-watt efficiency, there are also market factors involved. If the total costs are too high, then the marketplace won't sustain the AI solution on a cost-benefit basis. And so, as a business, if reducing cost is the only way to make solutions viable, that's what the market is going to demand, and what your competitors are going to force on you, too.

The second market-forces pillar is the compliance and regulatory factor. In fact, in May of 2024, the European Union Energy Efficiency Directive kicks in. And so, there are powerful forces around the total costs of AI supply and consumption that we don't have much choice over; they are compelling facts of life.

Frey: Absolutely. In fact, one of the things we're seeing in the market is a tremendous amount of money being spent to develop some AI technologies. That comes with really hard questions about what's a proper return on investment (ROI) for the initial money spent to build and train the models. And then, can we further prove out the ROI over the long term? Our customers are now wisely asking those very questions.

We're also, from an HPE perspective, making sure that customers think about the ethical and societal consequences of these AI solutions. We don't want customers bringing AI solutions to market and having an unintended consequence from a bias that's discovered, or some other aspect around privacy and cybersecurity that they had not considered when they built the solution.
And, to your point, there is also increasing interest in how to contend with regulatory constraints for AI solutions as well.

Gardner: So, one way or another, you're going to be seeking a fit-for-purpose approach to AI implementations -- whether you want to or not. And so, you might as well start on that earlier rather than later.

Let's move now toward ways that we can accomplish what we've been describing in terms of keeping the AI services costs down, keeping the energy demand down, and making sure that the business benefits outweigh the total and real costs. What are some ways that HPE -- through your research, product development, and customer experiences -- is driving toward general business sustainability and transformation? How can HPE be leveraged to improve and reduce risk specifically around the AI transformation journey?

Five levers for moving forward sustainably

Frey: One of the things that we've learned in 22 years or so of working with customers specifically on sustainable technology is that there are five levers. And we intentionally call them "levers" because we believe that all of them apply to every customer -- whether they have their IT workloads in the public cloud, a hybrid or private cloud, or a bare-metal environment, and whether they are on-premises, in co-location, or even out on the edge. We know that customers can drive efficiencies if they consider these levers.

The first of the five is data efficiency, which we've talked about a little bit already. In the AI context, it's first about making sure that the data sets you're using are optimized before running the training. When we process a bit of data in a training environment, for example, do we avoid processing it again if we can? And how do we make sure that any data that we're going to train on, or derive from an inference, actually has a use? Does that data provide a business value?

Next, if we're going to collect data, how do we make sure that we make an intentional decision on the front end about how long we're going to store that data, and how we're going to store it? What types of storage? Is it something we will need instantaneously? We can choose from high-availability storage all the way down to tape storage, if it's more of an archival or regulatory requirement to keep that data for a long period of time.

So, data efficiency is where we suggest you start, because making the right decisions there flows down through all of the other aspects.
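As a minimal sketch of the "avoid processing it again" point above, a pipeline can fingerprint each record and skip exact duplicates before training. Everything here is illustrative; real pipelines also do near-duplicate detection, which this does not show:

```python
# Minimal exact-duplicate filter: fingerprint each record so each unique piece
# of content is stored and trained on only once.
import hashlib

def dedupe(records):
    seen, unique = set(), []
    for rec in records:
        digest = hashlib.sha256(rec.encode("utf-8")).hexdigest()
        if digest not in seen:       # first time we've seen this exact content
            seen.add(digest)
            unique.append(rec)
    return unique

docs = ["liquid cooling basics", "rack density trends", "liquid cooling basics"]
print(len(dedupe(docs)))  # 2 -- the duplicate is never stored or trained on
```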
The second lever is software efficiency. From a broader technology perspective, this is focused on writing more efficient software applications. How do we reduce the carbon intensity of software applications? And how do we use software to drive efficiency?

From an AI perspective, this gets into model development. How do we develop more efficient models? How do we design these models, or leverage existing models, to be as efficient as possible and to use the least amount of compute, storage, and networking capability to operate? Software efficiency even includes things such as the efficiency of the coding. Can it be in a compiled language versus a non-compiled language, so that it takes less power and CPU capability to run that software? HPE brings many tools to the market in that environment.

Next, how do we use software to drive efficiency? Some of the things we're seeing lots of interest in with AI are predictive maintenance and digital twins, where we can use software tools to predict things like maintenance cycles or failures, and even to infer operating and buying behaviors. We see these used in the design of data centers. How do we shift workloads for the most efficient and lowest-carbon operation? All of those aspects are in software efficiency.

Then we move to the hardware stack, and that means equipment efficiency. When you have a piece of technology equipment, can you have it do the most amount of work? We know from global industry surveys that technology equipment is often very underutilized. For a variety of reasons, there's redundancy and resiliency built into the solutions. But as we begin moving more into AI, we tend to look at hardware and software solutions that deliver high levels of availability across the equipment infrastructure. By its very nature, AI is designed to run this equipment at higher levels of utilization. And there is huge demand, particularly in training, from single large workloads that run across a variety of devices. Equipment efficiency is all about attaining the highest levels of utilization.

Then, we move to energy efficiency. This is about how to do the most amount of work per input watt of power, so that the devices are as high performing as possible. We tend to call that being energy effective. Can you do the most amount of work with the same input of energy? From an AI perspective, it's so critical because these systems consume so much power that often we're able to easily demonstrate the benefits for an input watt of power, or for a volume of water that we're using from a cooling perspective.

And finally, resource efficiency, which is about how we run technology solutions so that they need the fewest resources. Those include auxiliary cooling or power conversions, or even the human resources that it takes to run these solutions. From an AI context, again, we've talked about raising power densities and how we can shift from air to water. Cooling is going to be so critical. And it turns out that direct liquid cooling has a much lower cooling-power percentage than some of our air-cooled infrastructure. You can drop your power consumption dramatically by moving to direct liquid cooling.
It's the same way from a staffing perspective. As you begin having analytics that allow you to monitor all of these variables across your technology solutions -- which is so common in an AI solution -- you need fewer staff to run those solutions. You also gain higher levels of employee satisfaction, because staff can see how the infrastructure is doing, and a lot of the mundane tasks, such as constant tuning, are being made more efficient.

Gardner: Well, this drive for sustainability is clearly a non-trivial undertaking. Obviously, when planning out efficiencies across entire data centers, it continues over many years, even decades.

It occurs to me, John, that smaller companies that may want to do AI deployments themselves -- to customize their models and their data sets for particular uses, and so develop proprietary, advantage-based operations -- are going to be challenged when it comes to achieving AI efficiently. At the same time, the large hyperscalers, which are very good at building out efficient data center complexes around the globe, may not have the capability to build AI models at the granular level needed for the vertical-industry customization required of smaller companies.

So, it seems to me that an ecosystem approach is going to shake out where these efficiencies are going to need to manifest. It's not all going to happen at the company-by-company level. And it can't necessarily happen at the cloud provider-by-cloud provider level either. Do you have any sense of how we should expect an AI services ecosystem -- one that reaches a balance between needed customization and needed efficiency at scale -- will emerge to take advantage of these essential efficiency levers you described?

An AI ecosystem evolves

Frey: Yes, exactly what you describe is what we see happening. We have some customers that want to make the investments in high-performance computing and in the development and training of their own AI solutions. But the customers that want to make that type of investment are very few.

Other customers want to access an AI capability and either have some of that expertise themselves, or they want to leverage a vendor such as HPE's expertise from a data science perspective, a model development perspective, and a data efficiency perspective. We certainly see a lot more customers that are interested in that.
And then there's a level above that: customers that want to take a pre-trained model and just tune it using their own specific data sets. We think that segment of the population is even broader, because so many highly valuable uses of AI still require training on task-specific or organization-specific data.

And finally, we see a large range of customers that want to take advantage of pre-trained, pre-tuned AI solutions that are applicable across an entire industry or segment of some kind.

One of the things that HPE has found over the years, as we've built all portions of that stack and then partnered with companies, is that having that whole portfolio, and having the expertise across it, allows us to look both downstream and upstream from what the customer is looking at. It allows us to help them make the most efficient decisions, because we look across the hardware, the software, and that entire ecosystem of partners as well. It does, in our mind, allow us to leverage decades' worth of experience to help customers attain the most efficient and most effective solutions when they're implementing AI.

Gardner: John, are there any leading use cases or examples that we can look to that illustrate how such efficiency makes an impactful difference?

Examples of AI increasing efficiency, productivity

Frey: Yes, I'll give you just a couple of examples. Healthcare has been an early adopter of some types of AI systems. A great example is looking at x-rays. It turns out, with ML, you can do a pretty good job of having a system scan x-rays and make a decision -- "Is that a bone fracture or not?" for example -- and, if it's unsure, pass the image to a radiologist who can take a deeper look.

You can tune the system very well, because there's a large population of x-ray imagery that provides very clear examples of something that is a fracture and something that is not. And there have been lots of studies looking at how these systems perform against a single radiologist examining the same x-rays. Particularly when a radiologist spends their day going from x-ray to x-ray to x-ray, there can be some fatigue associated with that. Their diagnostic capabilities get better when the system does a first-level screen and then passes the more specific cases to the radiologist for deeper analysis. If there is something that's not really clear one way or the other, it lets the radiologist spend more time on it. So, that's a great one.
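That first-level screening pattern can be sketched as a simple confidence-threshold router. The probabilities and thresholds below are invented, and the classifier output is a stand-in for a real model's:

```python
# Sketch of human-in-the-loop screening: auto-report only confident results
# and route uncertain cases to a radiologist. All numbers are invented.
def screen_xray(fracture_probability: float,
                low: float = 0.05, high: float = 0.95) -> str:
    if fracture_probability >= high:
        return "flag: likely fracture -- radiologist confirms"
    if fracture_probability <= low:
        return "clear: no fracture detected"
    return "route: uncertain -- full radiologist review"

for p in (0.99, 0.02, 0.60):   # stand-ins for a model's output probabilities
    print(f"p={p:.2f} -> {screen_xray(p)}")
```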
We're seeing a lot of interest in manufacturing processes as well. How do we use video and video analytics to examine parts or final assemblies coming off an assembly line and ask, "Does this appear the way it's supposed to from a quality perspective?" "Are there any extra components, or are any components missing?" for example. It turns out those use cases do a really good job from a power-performance perspective and from an ROI perspective.

If you dive deeper, into natural language processing (NLP), we want to train tools that can answer basic customer questions, or allow the customer to interact by voice with a service tool that can provide low-level diagnostics or route them, for example. In some cases, it can even give them the right answer in both voice and text. In fact, you're now seeing some of those come out in very popular software applications that a variety of people around the world use. We're seeing AI systems that predict the next couple of words in a sentence, for example, or allow for a higher level of productivity. Those, I think, are still proving their case. In some cases, users see them as a barrier, not as an assistant, but I think the time will come, as they get more and more accurate, when they're going to be really useful tools.

Gardner: Well, it certainly seems, given the costs and the impacts on carbon load, on infrastructure, and on the demand for skills to support it, that it's incumbent on companies big and small to be quite choosy about which AI use cases and problems they seek to solve first. This just can't be a solution in search of a problem. You need to have a very good problem that will deliver very good business results. It seems to me that businesses should carefully evaluate where they devote resources, use these intelligence capabilities to the fullest effect, and pick those highly productive use cases and tasks earlier rather than later.

Define and refine your sustainable IT strategy

Frey: Yes, absolutely. Let's not have a solution in search of a problem. Let's find the best business challenges and opportunities to solve, and then look at the right strategic approaches to solving them. What's the ROI for each of those solutions? What are some of the unintended consequences, like a privacy issue or a bias issue, that you want to prevent? And then, how do we learn from others that have implemented those tools, and partner with vendors that have a lot of historical competency in those topics and have had many customers bring those solutions to market? So, it's really finding the best solution for the business challenge and being able to quantify that benefit.

One of the things that we did really early on, as we were developing our sustainable IT approach, was to recognize that so many customers didn't know how to get started. We offered customers a free workbook called Six Steps for Developing a Sustainable IT Strategy. One of the things that it says -- and this applies to the majority of AI conversations as well -- is that customers often couldn't measure the impact of what they had today because they didn't have a baseline. They implemented a technology solution and then said, "That must be much better, because we're using technology." But without measuring the baseline, they weren't able to quantify the financial, environmental, and carbon implications of the new solution.
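The baseline point reduces to very simple arithmetic once both states are actually metered. A hypothetical before-and-after comparison, with placeholder figures standing in for measured data:

```python
# Placeholder figures, not real measurements: without the "baseline" row,
# the savings on the "after" row cannot be quantified at all.
baseline = {"energy_kwh": 120_000, "cost_usd": 18_000}  # measured before rollout
after    = {"energy_kwh":  90_000, "cost_usd": 14_500}  # measured after rollout

for metric, before_value in baseline.items():
    saved = before_value - after[metric]
    pct = 100 * saved / before_value
    print(f"{metric}: saved {saved:,} ({pct:.0f}%)")
```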
We help customers along this journey by helping them think about this strategically, and by getting together all the appropriate organizations within their company that need to be part of making a decision about these solutions. For example, if you're worried about cybersecurity implications, make sure the cybersecurity team is part of the project team. If you're worried about bias implications, make sure that your legal teams are involved, along with anyone else looking at employee or customer privacy. If you're thinking about solutions that are going to decarbonize or save power, make sure you have your global workplace teams involved to help quantify that, and your sustainability teams if you're going to talk about carbon mitigation as part of all of this.

It's about having the right organizations involved, looking at all the issues that can inform the decisions, and examining whether the solution really is sustainable. Does it have both a financial and an environmental ROI that makes sense?

Gardner: It sure seems that emphasizing AI sustainability should be coming from the very top of any organization, and in very loud and impressive terms. Because as AI becomes a core competency -- whether you source it or do it in-house -- it is going to be an essential business differentiator. If you're going to do AI successfully, you're going to need to do it sustainably. And so, AI sustainability seems to be a pillar of getting to an AI outcome that works long-term for the organization.

As we move to the end of our very interesting discussion, John, what are some resources that people can go to? How can they start to consider what they've been doing around sustainability and extend that into AI, or examine what they're doing with AI and make sure that it conforms to the concepts around sustainability and the all-important objectives of efficiency?

Frey: The first one, which we've already talked about, is to make sure you have a sustainable IT strategy. It's part of your overarching technology strategy, and now that it includes AI, it really gets accelerated by AI workloads. Part of that strategy is getting stakeholders together so that folks can help look for the blind spots and help quantify the implications and the opportunities.

Then, look across the entire environment -- from public cloud to edge, with hybrid cloud and private cloud in the middle -- and look to those five levers of efficiency that we talked about. In particular, emphasize data efficiency and software efficiency from an AI perspective.

And then, look at it all across the lifecycle, from the design of those products to the return and end-of-life processes, because when we think about IT lifecycles, we need to consider all of the aspects in the middle. That drives such things as how you procure the most efficient hardware in the first place and provide the most efficient solutions. How do you think about tech refresh cycles, and why are tech refresh cycles different for compute, storage, and networking, and with AI? How do all those pieces interconnect to impact tech refresh cycles?

And from an HPE perspective, one of the things that we've done is publish a whole series of resources for customers. We mentioned the Six Steps for Developing a Sustainable IT Strategy workbook.
But we also have specific white papers on software efficiency, data efficiency, energy efficiency, equipment efficiency, and resource efficiency. We make those freely available on HPE's website.

So, use the resources that exist, partner with vendors that have core capability and core expertise across all of these areas of efficiency, and spend a fair amount of time in the development process trying to ensure that the ROI, both financially and from a sustainability perspective, is as positive as possible when implementing these solutions.

Gardner: I'm afraid we'll have to leave it there. We've been exploring how AI deployments will demand vast resources -- energy, water, skills, and upgraded or wholly new data center and electrical grid infrastructures. And we've learned that only by calculating the true and total expected costs in the fullest sense can businesses predict the proper fit-for-purpose use in deployments of AI -- and that doing it in the most efficient manner might be the only way to go about successful AI deployments.

And so, please join me in thanking our guest, John Frey, Director and Chief Technologist for Sustainable Transformation at HPE. Thank you so much, John.

Frey: Thank you for letting me come on and share this expertise.

Gardner: You bet. And thanks as well to our audience for joining this sponsored BriefingsDirect discussion on the best path to sustainable AI. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how energy use and resources management have emerged as key ingredients of artificial intelligence adoption success -- or failure.

Copyright Interarbor Solutions, LLC, 2005-2024. All rights reserved.

You may also be interested in:

• HPE Accelerates its Sustainability Goals While Improving the Impact of IT on the Environment and Society
• How HPE Pointnext Tech Care changes the game for delivering enhanced IT solutions and support
• How HPE Pointnext 'Moments' provide a proven critical approach to digital business transformation
• How to industrialize data science to attain mastery of repeatable intelligence delivery
• The journey to modern data management is paved with an inclusive edge-to-cloud Data Fabric
• The IT intelligence foundation for digital business transformation rests on HPE InfoSight AIOps
• How Digital Transformation Navigates Disruption to Chart a Better Course to the New Normal
• How the right data and AI deliver insights and reassurance on the path to a new normal
• How IT modern operational services enables self-managing, self-healing, and self-optimizing