Is it possible to recycle some data center energy to achieve a PUE less than 1.00? The simple answer is no. However, over the years, such claims have been made and are invariably followed by wise nods, knowing winks, and comments such as, "Well, somebody doesn't understand PUE basic math."
While striving for a PUE of less than 1.00 undoubtedly gets us into the realm of physics-defying fantasy, that does not, however, mean there is no value in recycling the heat energy produced by our ICT equipment. After all, we have already recognized the great value in actually increasing PUE by dramatically reducing the divisor of our PUE equation through virtualization, turning off comatose servers, migrating from Windows to Linux, etc.
The PUE rock showing above the water level may be bigger, but the sea level itself is lower (hence, the ongoing discussion for new and improved metrics). Such is the case with looking for ways to perform useful work with the thermal energy produced by ICT equipment. While we may not give ourselves a bragging-rights PUE decimal fraction, perhaps we can improve our organization's bottom line and contribute positively to the planet's health. This session will explore five strategies for re-using data center heat energy while discussing their impacts on PUE, the bottom line, and the environment.
Key takeaways:
- Learn about the different strategies that are available to re-use (recycle) heat energy produced by ICT equipment.
- Understand the relationship between recycling data center heat energy and resultant impacts on PUE.
- Learn about the benefits of recycling data center heat energy for your organization's bottom line and the environment.
2. Speaker Background
Lars Strong, P.E., Senior Engineer and Company Science Officer, Upsite Technologies
• Thought leader and recognized expert on data center optimization with over 25 years of experience
• Certified US Department of Energy Data Center Energy Practitioner (DCEP) HVAC and IT Specialist
• Presented on various topics around the world:
• How IT Decisions Impact Facilities: The Benefit of Mutual Understanding
• Designing, Deploying and Managing Efficient Data Centers
• Myths of Data Center Containment
• Understanding the Science Behind Data Center Airflow Management
• Data Center Cooling Efficiency: Understanding the Science of the 4 Delta T’s
3. Agenda
• What is your experience?
• Understanding Power Usage Effectiveness (PUE)
• What is PUE?
• PUE Levels of Measurement
• How Airflow Management Relates to PUE
• Can You Achieve a PUE of Less Than 1.00?
• The 4 Delta Ts
• Strategies for Data Center Heat Energy Re-Use
• Ship the Hot Air Next Door
• Tap the Chilled Water Loop
• Hot Water Cooling
• Relationship Between Recycling Data Center Heat Energy and PUE
• Questions
4. Interactive
• Do you operate a data center?
• Are you re-using heat?
• How is it going?
• Do you know of a heat re-use project?
• Are you considering / planning on re-using heat?
6. Introduction to PUE
• PUE / DCiE originated with The Green Grid; PUE now falls under ISO/IEC JTC 1/SC 39 WG 1 and is defined by an international ISO standard – ISO/IEC 30134-2
• This is the *only* definition of PUE; figures measured or reported in any other way are not true PUE
• ISO is an independent, non-governmental membership organization and the world’s largest developer of voluntary International Standards
• Members are the national standards bodies of the 163 member countries around the world; ISO is based in Geneva, Switzerland
• ISO works alongside the International Electrotechnical Commission (IEC) in the development of emerging international data centre standards
• ISO/IEC JTC 1/SC 39 WG 1 is responsible for the development of the ISO/IEC 30134 series of standards (data centre resource efficiency KPIs)
• All ISO standards are designed to dovetail, so the format for the 30134 series is similar to, and relates to, familiar standards series such as ISO 9000, ISO 27000, ISO 50000 and ISO 14000
7. What is PUE?
• PUE is defined as the ratio of total facility energy to IT equipment energy
• PUE = (Total Facility Energy) / (IT Equipment Energy) (a minimal calculation sketch follows this slide)
• Total facility energy is defined as the energy dedicated solely to the data center
• This is the energy measured at the utility meter of a dedicated data center facility, or at the meter for a data center or data room in a mixed-use facility
• IT equipment energy is defined as the energy consumed by equipment that is used to manage, process, store, or route data within the compute space
• IT equipment energy includes the energy associated with all of the IT equipment (compute, storage, and network equipment) along with supplemental equipment (KVM switches, monitors, and workstations/laptops used to monitor or otherwise control the data center)
• Total facility energy includes all IT equipment energy as described above, plus everything that uses energy to support the IT equipment
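To make the formula concrete, here is a minimal Python sketch of the calculation (not part of the original deck; the meter readings and function name are invented for illustration):

```python
# Illustrative sketch: computing PUE from metered energy readings taken over
# the same measurement period. All figures are made up.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (dimensionless, >= 1.0)."""
    if it_equipment_kwh <= 0:
        raise ValueError("PUE is undefined when IT equipment energy is zero")
    return total_facility_kwh / it_equipment_kwh

# Example: 480,000 kWh at the utility meter, 250,000 kWh consumed by IT.
print(round(pue(480_000, 250_000), 2))  # -> 1.92
```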
8. PUE Levels of Measurement Chart
• Regardless of how the measurements are taken, it is always worth remembering that PUE is not an overall energy efficiency metric
• PUE merely measures the overhead that the building puts on the IT load it hosts
• True energy efficiency requires a focus on IT load power consumption and reduction too
• Ironically, reducing the power consumed by the IT load, and thereby reducing the total kWh used, can actually increase the PUE
• This is one of the reasons that PUE is not a true energy efficiency metric
9. PUE Levels of Measurement Chart
• There is a direct relationship between airflow management and PUE – the better the airflow management, the better the PUE
• Mechanical cooling + fan cooling is 35% of total load and 73% of non-IT load
• PUE = Total Load / IT Load = 1.92
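As a quick sanity check on those percentages (my own arithmetic, using only the 1.92 and 35% figures from the slide):

```python
# With PUE = 1.92, the IT load is 1/1.92 of the total facility load, so the
# non-IT (infrastructure) share is the remainder.
pue = 1.92
it_share = 1 / pue                 # ~0.52 of total facility load
non_it_share = 1 - it_share        # ~0.48 of total facility load
cooling_share_of_total = 0.35      # mechanical cooling + fans, from the slide

print(round(cooling_share_of_total / non_it_share, 2))  # -> 0.73 of the non-IT load
```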
10. Partial PUE (pPUE)
• While PUE includes all energy-using components within a facility, pPUE includes all energy-using components within a boundary/zone
• A boundary/zone can be a physical designation such as a container, room, modular pod, or building
• It can also be a logical boundary, such as equipment owned by a department or a customer, owned versus leased equipment, or any other boundary that makes sense for the management of the assets
• pPUE = (Total energy inside the boundary) / (Total IT equipment energy inside the boundary)
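A matching sketch for pPUE applied to a single zone, again with invented readings (not from the original deck):

```python
# Illustrative pPUE for one zone (e.g., a containment pod). As with PUE, the
# calculation is undefined for a zone that contains no IT equipment.

def ppue(zone_total_kwh: float, zone_it_kwh: float) -> float:
    if zone_it_kwh <= 0:
        raise ValueError("pPUE has no meaning for a zone with no IT load")
    return zone_total_kwh / zone_it_kwh

# Example: a pod drawing 120,000 kWh in total, 90,000 kWh of which is IT load.
print(round(ppue(120_000, 90_000), 2))  # -> 1.33
```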
11. Can You Achieve a PUE of Less Than 1.00?
• For a few years now, there have been many claims that achieving a PUE of less than 1.00 is possible by recycling some data center energy
• These claims usually derive from either obtaining off-grid electricity or producing some energy from the IT ΔT that is used to perform some other task
• Neither of these cases changes the basic math – total power into the data center divided by IT consumed power is 1 or higher – unless the IT thermal output were converted into energy that provides a portion of the energy required to operate our IT equipment, thereby subtracting that portion from total consumption
• Such a result, with IT equipment energy use exceeding total data center energy use, would give us a PUE of less than 1.00 – and would also get us into the realm of physics-defying fantasy
• However, while achieving a PUE of less than 1.00 is simply not possible, that does not mean that efforts in mining the heat energy produced by IT equipment hold no value; there can be a real and positive ecological impact
14. Can You Achieve a PUE of Less Than 1.00?
ISO/IEC 30134 Information Technology – Data Centres – Key Performance Indicators
• ISO/IEC 30134-1:2018 — Part 1: Overview and general requirements
• ISO/IEC 30134-2:2018 — Part 2: Power usage effectiveness (PUE)
• ISO/IEC 30134-3:2018 — Part 3: Renewable energy factor (REF)
• ISO/IEC 30134-4:2017 — Part 4: IT Equipment Energy Efficiency for servers (ITEEsv)
• ISO/IEC 30134-5:2017 — Part 5: IT Equipment Utilization for servers (ITEUsv)
• ISO/IEC 30134-6:2021 — Part 6: Energy Reuse Factor (ERF)
• ISO/IEC 30134-7 — Part 7: Cooling Efficiency Ratio (CER)
• ISO/IEC 30134-8:2022 — Part 8: Carbon Usage Effectiveness (CUE)
• ISO/IEC 30134-9:2022 — Part 9: Water Usage Effectiveness (WUE)
16. Strategies We’ll Be Looking At
• While a PUE of less than 1.00 is not an achievable goal, we can still improve our organization’s bottom line and contribute positively to the planet’s health in other ways
• Since we cannot have a PUE of less than 1, there is another metric in the ISO/IEC 30134 series to help us measure and report heat re-use reliably:
• ISO/IEC 30134-6 – Energy Reuse Factor (ERF) (see the sketch after this slide)
• There are a few strategies we’ll look at in the subsequent slides:
• Ship the Hot Air Next Door
• Tap the Chilled Water Loop
• Hot Water Cooling
• Something to consider, though, is that the technology to recover the otherwise wasted heat is available to us now and has been for some time
• The real difficulty is finding a customer or use for what is typically relatively low-grade heat, which does not transport well
• The effective use of ‘waste’ heat requires collaboration with potentially interested parties at the early planning stage
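For reference, a minimal sketch of the Energy Reuse Factor, assuming the commonly cited ISO/IEC 30134-6 formulation of reused energy divided by total data center energy; the figures are invented:

```python
# Illustrative Energy Reuse Factor (ERF). Assuming the ISO/IEC 30134-6
# formulation: ERF = energy reused outside the data center / total data center
# energy, giving a value between 0 (no reuse) and 1 (all energy reused).

def erf(reused_kwh: float, total_facility_kwh: float) -> float:
    if total_facility_kwh <= 0:
        raise ValueError("total facility energy must be positive")
    return reused_kwh / total_facility_kwh

# Example: 150,000 kWh of heat delivered to a district heating network out of
# 480,000 kWh of total data center energy use.
print(round(erf(150_000, 480_000), 2))  # -> 0.31
```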
17. Ship the Hot Air Next Door
• Shipping the hot air next door may, on the surface, seem like the simplest and most straightforward application of data center waste heat re-use, but unfortunately it can get complicated pretty quickly when we try to compensate for its being what we call low-grade heat energy
• Data center return air has been used to heat adjacent office spaces, auditoriums, and even a greenhouse
18. Ship the Hot Air Next Door
• In general, there are a few obstacles with this strategy:
• Working waste air is typically going to be in the 80˚ to 95˚F range, which prevents it from being particularly useful except when it can be applied to comfort-temperature-range applications in close proximity to the data center
• The requirement for proximity is another obstacle, as hot air does not travel well
• Another obstacle is that anyone looking to squeeze out more energy efficiencies after reaching their PUE goal has most assuredly already implemented some form of free cooling and fine-tuned their infrastructure
• The dilemma is that when our free cooling temperatures are lowest during the cold time of the year, and our associated waste air temperatures are therefore lowest, that is exactly when our customer (internal or external) is going to be looking to us for the most heat
• Conversely, during the summer, when we are cranking out hot return air, our customer may be looking for some heat relief
19. Ship the Hot Air Next Door
• Yandex is a large Russian data center operator with facilities around the world
• In its data center in Mäntsälä, Finland, hot waste air is piped through heat pumps and air-to-liquid heat exchangers to produce 90˚ to 115˚F water that then requires only a short lift to 130˚ to 140˚F, which is useable for local district heating
• This particular site has been operational for several years, has reduced the carbon emissions of the city by 40%, and has reduced local consumers’ heating prices by 12%
• Similar strategies have been deployed by Facebook in Denmark and at numerous facilities in Sweden, including a Digiplex evaporative air-to-air economizer retrofit project in Stockholm projected to provide heating for 10,000 homes
• Cooling, on the other hand, can be accomplished by using absorption chillers to convert the heat to cooling, though it can be a stretch to get good performance from an absorption chiller with the air temperatures we typically see
20. The Four Delta Ts (ΔT)
1. Through IT equipment
2. IT equipment exhaust to cooling unit
3. Through cooling unit
4. Cooling unit supply to IT equipment intake
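An illustrative sketch of how the four ΔT’s fall out of four temperature readings; the sensor values are invented and not from the deck:

```python
# Illustrative computation of the four ΔT's from four temperature readings (°F).
# Sample values are made up; sensor placement follows the list above.
t_it_intake = 72.0        # air entering the IT equipment
t_it_exhaust = 95.0       # air leaving the IT equipment
t_cooling_return = 85.0   # air returning to the cooling unit
t_cooling_supply = 65.0   # air supplied by the cooling unit

dt_through_it = t_it_exhaust - t_it_intake                 # 1. through IT equipment (23°F)
dt_exhaust_to_return = t_it_exhaust - t_cooling_return     # 2. exhaust to cooling unit; drop from bypass air mixing (10°F)
dt_through_cooling = t_cooling_return - t_cooling_supply   # 3. through cooling unit (20°F)
dt_supply_to_intake = t_it_intake - t_cooling_supply       # 4. supply to IT intake; rise from recirculated exhaust air (7°F)

print(dt_through_it, dt_exhaust_to_return, dt_through_cooling, dt_supply_to_intake)
```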
21. The Four Delta Ts (ΔT)
• In most data centers the greatest loss of efficiency is having a low temperature of air returning to the cooling units
• The goal is always to have the highest possible return air temperature to the cooling units
22. Tap the Chilled Water Loop
• “Chilled water loop” may occasionally be somewhat of a misnomer
• We may tap it on the supply side (chilled) or on the return side (warm, or at least less chilled)
• In an example from Syracuse University, part of that water loop can be downright hot
• In a recent example in the UK, this technique has been used to heat a public swimming pool, with more similar projects to follow
23. Tap the Chilled Water Loop
• The turbines produced 585˚F exhaust air, which was put to work in two separate operations in the chiller room:
• Providing input to absorption chillers with capacity to cool the data center as well as supply the air conditioning chilled water loop for a large office building nearby
• Providing the heat to an air-to-liquid heat exchanger to deliver heat to that same office building during the winter
24. Tap the Chilled Water Loop
• A more current, easier-to-diagram, and still-operating example of productive loop-tapping is the Westin–Amazon cooperative project in Seattle
• In this neatly symbiotic relationship, the Amazon campus is essentially an economizer and condenser for the Westin carrier hotel, and the Westin’s thirty-four floors of data center essentially comprise a private utility providing heating energy to the Amazon campus
• In this case, Amazon taps the return water loop from the Westin’s data center cooling, and the Westin taps the return water loop from Amazon’s heating
25. Tap the Chilled Water Loop
• Amazon should save around 80 million kWh of electricity over the first twenty-five years, roughly equivalent to eliminating the carbon emissions of sixty-five million pounds of coal
• Saves water from cooling tower evaporation loss and saves the operating expense of running cooling towers
• Estimated $985,000 annual savings for five million square feet of office space
• This is also being done in northern Sweden by connecting a data center directly to an adjacent district heating system to provide winter heating to residential homes
26. Hot Water Cooling
• The third category of data center heat energy re-use is hot water cooling, which can benefit either of the first two categories, but is particularly beneficial with data center liquid cooling
• As previously mentioned, if data center waste air is being used to facilitate generator starters, raising supply air from 65˚F or 70˚F up to 78–80˚F will produce a return air temperature high enough to eliminate block heaters
• In the Westin–Amazon project, a good airflow containment execution could allow the data center water supply to the utility heat exchanger to be increased enough to reduce the heat recovery plant lift by 28%
• Neither of these cases utilizes cooling with warm or hot water, but even moving the needle with these small steps can produce significant benefits
• When we start working with hot water, we get higher-grade waste heat energy, and water is easier to move around than air
27. Hot Water Cooling
• The IBM proof-of-concept data center at the Zurich Research Laboratory took advantage of innovations in direct contact liquid cooling, whereby hot water was pumped through copper microchannels attached to computer chips
• Findings included:
• 140˚F supply water maintained chip temperatures around 176˚F, safely below the recommended 185˚F maximum
• Hot water cooling resulted in a post-process “return” temperature of 149˚F
• In addition to providing heat for an adjacent lab, the absorption chiller provided 49 kW of cooling capacity at about 70˚F
28. Hot Water Cooling
• Around the same time that the IBM proof-of-concept hot water liquid cooling experiment was being implemented in Switzerland, eBay was experimenting with warm water cooling in Phoenix in the well-publicized Mercury Project
• The Mercury Project involved one part of the data center cooled by a chilled water loop connected to chillers, and a second data center using condenser return water from the first data center, at up to 87˚F, to supply rack-mounted rear door heat exchangers
• Obviously, temperatures exceeded ASHRAE recommended server inlet air temperatures, but remained within the Class A2 allowable range
30. Key Takeaways
• Lack of demand for heat energy produced by data centers, particularly in the United States, can be a show-stopper or at least a significant complication
• Heat energy is not quite as transportable as electric energy, so a customer/consumer of heat energy in close proximity to the data center is critical
• In northern Europe there is a built-in infrastructure in district heating networks, with on-line demand for heat energy
• The Westin/Amazon project is such a perfect model because the Amazon office buildings effectively represent the equivalent of a local heating district “customer”
PUE levels of measurement (speaker notes for the measurement chart):
• Level 1: The IT load is measured at the output of the UPS equipment and can be read from the UPS front panel, through a meter on the UPS output, or, in cases of multiple UPS modules, through a single meter on the common UPS output bus. Facility load is measured from the utility service entrance that feeds all of the electrical and mechanical equipment used to power, cool, and condition the data center. Measurement period is weekly or monthly.
• Level 2: The IT load is measured at the output of the PDUs within the data center and can typically be read from the PDU front panel. The facility load is measured from the utility service entrance, as above. Measurement period is daily/hourly, taken at the same time.
• Level 3: The IT load is measured at each individual piece of IT equipment within the data center, either by metered rack PDUs that monitor at the strip or receptacle level or by the IT device itself. The incoming energy is measured from the utility service entrance, as above. Measurement period is every 15 minutes or less.
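An illustrative sketch of how the measurement level changes the inputs to the same PUE formula; the readings are invented, and the level labels follow the notes above:

```python
# Illustrative: the same facility measured at the three levels described above.
# Moving the IT measurement point closer to the IT devices excludes UPS and PDU
# distribution losses from the denominator, so the reported PUE typically rises.
facility_kwh = 480_000          # utility service entrance, same for all levels

it_at_ups_output_kwh = 262_000  # Level 1: UPS output (still includes PDU and cable losses)
it_at_pdu_output_kwh = 255_000  # Level 2: PDU outputs
it_at_devices_kwh = 250_000     # Level 3: per-device / receptacle metering

for label, it_kwh in [("Level 1", it_at_ups_output_kwh),
                      ("Level 2", it_at_pdu_output_kwh),
                      ("Level 3", it_at_devices_kwh)]:
    print(label, round(facility_kwh / it_kwh, 2))
# -> Level 1 1.83, Level 2 1.88, Level 3 1.92
```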
It should be noted that a PUE calculation uses the IT energy as the denominator in a fraction and is thus undefined if the IT energy in a zone is zero. Since the reason for a pPUE calculation is to provide a metric of the efficiency of delivering energy and cooling to IT equipment, pPUE has no meaning for a zone that contains no IT equipment. Such a zone would more appropriately be accounted for as overhead.
There is great value in actually increasing PUE by dramatically reducing the divisor of our PUE equation through virtualization, turning off comatose servers, migrating from Windows to Linux, etc.
The PUE rock showing above the water level may be bigger, but the sea level itself is lower (hence, the ongoing discussion for new and improved metrics)
Such is the case with looking for ways to perform useful work with the thermal energy produced by the work performed by our IT equipment
I’ll be starting today with the most straightforward – the Notre Dame practice of exploiting the elevated temperature of the data center return air to do some useful work – in this case maintaining an operational greenhouse temperature at the South Bend Greenhouse and Botanical Gardens through the sometimes brutal northern Indiana winter. We call this strategy, “Ship the hot air next door.”
In both these cases, however, there are ways to take advantage of our data center waste heat that may require adding to our facilities engineering tool kit. Heat pumps and absorption chillers may help us bridge the gap from low grade heat energy to at least somewhat useful grade energy. In both cases, heating or cooling, the folks who are using this design approach typically start with transferring the heat from air to water.
Meanwhile, why is it that my good examples all seem to come from Northern Europe, at least in this section on shipping the hot air next door? The cooler ambient temperatures, providing access to much larger blocks of free cooling time, have attracted data center companies serious about energy efficiency and pursuing carbon emission reduction and neutral goals. More importantly, however, is probably the prevalence of district heating. For my American readers who may not be familiar with the term, we are merely talking about a heating and cooling district where all the homes or buildings are connected to a single heating and cooling source, versus having individual boilers or furnaces. Each district will either buy heating and cooling from some other supplier (utility) or have its own production facility. The opportunity for the data center is to become that source, profitably, at a lower cost than either the existing vendor or captive facility. The scale of this opportunity can be illustrated by the fact that data centers now account for over 10% of Sweden’s municipal heating supply. Taking it one step further, Finland actively markets itself as a great place to locate your data center because of the income opportunities associated with selling to these heating district networks. In addition, in many public utility jurisdictions, if you are going to sell this energy to your neighbor, you may need to get yourself registered as a utility.
So does that mean that localities and regions that are limited in implementation of heating district networks are relegated to heating greenhouses, generator heater blocks, internal office spaces and perhaps melting some snow on the sidewalk between the parking lot and front door, none of which will likely have a separate profitability line on the company 10K report? While shipping the hot air next door may not be a profit center, it can still deliver an internal benefit and some of these payback periods can still be relatively short. However, there are still significant opportunities associated with chilled water loops and various iterations of liquid cooling.
The second category of data center energy recapture I will call “tapping the loop”, i.e. the chilled water loop.
The figure to the right provides a simplified overview of Syracuse University’s data center and supporting electrical and mechanical infrastructure.
This example had several unique and innovative features for a decade-old deployment and actually represented a proof-of-concept collaboration between the university and IBM.
First, primary power was generated on site by a roomful of natural gas-powered turbines
The electrical grid connection was purely a backup to the backup, i.e., there was N+2 turbine redundancy, then a propane reserve backup to the liquid natural gas source, and then full UPS battery backup, charged by the turbine output
The connection to the electrical grid was intended for use only after the upstream four levels had either failed or been exhausted
Roger Schmidt, IBM Scientist Emeritus who was the IBM Technical Lead on this project, informed me that after ten years the turbines had worn out and the cost of electricity was so low that the university has since converted this data center over to a traditional chiller and cooling tower mechanical infrastructure. Nevertheless, this facility had a good run and provided impetus for explorations in the benefits of loop tapping.
- The figure here dramatically over-simplifies what they accomplished with this project, but it illustrates the basic building blocks into which all the detailed engineering is applied.
- The basic elements of this installation are not particularly innovative and have been employed in other places (data center as well as industrial) around the world over the years. Heat produced by the data center is captured on cooling coils and removed by water that is pumped through a heat exchanger under the street between the two sets of structures. Water passing through the other side of the heat exchanger has its temperature raised, but not raised high enough to be effective in providing comfort heating. Heat pumps with a coefficient of performance around 4 increase the temperature of the water to 120˚-130˚F, thereby economically generating a viable heat source for all the buildings of the Amazon campus. Boilers are in place in case the data centers do not produce adequate heat but, according to Jeff Sloan, the McKinstry engineer who orchestrated this project, Amazon typically consumes only a small portion of the data center waste heat and the boilers have rarely operated since the system became operational. Once the heat is removed from the water to provide comfort heating in the office buildings and the associated biosphere dome, the return water from Amazon cools the data center across the street.
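To illustrate the economics of that coefficient of performance, a small back-of-the-envelope sketch; only the COP of roughly 4 comes from the note above, the heat demand figure is invented:

```python
# Illustrative heat pump arithmetic: COP = heat delivered / electricity consumed.
# With a COP of ~4, roughly three quarters of the delivered heat is recovered
# data center waste heat; only one quarter is purchased electricity.
cop = 4.0
heat_demand_kwh = 1_000.0                                # campus heat demand in some period (invented)

electricity_kwh = heat_demand_kwh / cop                  # 250 kWh of electricity input
recovered_heat_kwh = heat_demand_kwh - electricity_kwh   # 750 kWh lifted from the data center loop

print(electricity_kwh, recovered_heat_kwh)
```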
- Post-process return water of 149˚F was an adequate grade of heat energy for both building heating and cooling through an absorption chiller, without requiring a boost from heat pumps
It was within this operation that Dean Nelson and his team came up with a business mission-based data center efficiency metric tying data center costs to customer sales transactions, thereby giving form to that elusive tipping point between data center efficiency and effectiveness. In this case, the “customer” was internal and the waste heat was not used as a heat energy source but as a cooling source.
The Project Mercury model does, in fact, offer a vision for low risk warm water cooling that could be available to many data centers without having to transition all the way to some form of direct contact liquid cooling. For example, data centers using rear door heat exchangers can operate with supply temperatures north of 65˚F, which easily exceeds the return temperature of a building comfort cooling return water loop. Tapping into the return water is essentially free cooling and then during the time of year when the building AC might not be running continuously (or at all, my friends in Minnesota), the rear door heat exchangers can be supplied through a free cooling heat exchanger economizer. The same principle applies to direct contact liquid cooling, which should be essentially free to operate in any facility with any meaningful-size comfort cooling load.
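As a closing illustration of that return-loop logic, a tiny decision sketch; the temperatures are invented and only the 65˚F figure comes from the paragraph above:

```python
# Illustrative check: can a building's comfort-cooling return water feed rear
# door heat exchangers directly? Per the text, the exchangers can operate with
# supply water north of 65°F, so water at or below that is certainly usable.
USABLE_SUPPLY_LIMIT_F = 65.0    # conservative threshold taken from the text
building_return_water_f = 58.0  # invented reading from the comfort-cooling return loop

if building_return_water_f <= USABLE_SUPPLY_LIMIT_F:
    print("Return-loop water is cool enough to feed the rear door heat exchangers (free cooling)")
else:
    print("Fall back to the free-cooling heat exchanger / economizer path")
```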