Datacentre of the Future
2. COPYRIGHT © 2017 BY ASPERITAS
Asperitas, Robertus Nurksweg 5, 2033AA Haarlem, The Netherlands
Information in this document can be freely used and distributed for internal use only. Copying or
re-distribution for commercial use is strictly prohibited without written permission from
Asperitas.
If you would like to use the original PowerPoint version of this document, please send an email
to: Marketing@Asperitas.com
FOR MORE INFORMATION, FEEDBACK OR TO SUGGEST IMPROVEMENTS FOR THIS DOCUMENT,
PLEASE SEND YOUR SUGGESTIONS OR INQUIRIES TO:
WHITEPAPERS@ASPERITAS.COM
3. THE DATACENTRE OF THE FUTURE
A DATACENTRE IS NOT ABOUT ICT, POWER,
COOLING OR SECURITY.
IT IS NOT EVEN ABOUT SCALE OR
AVAILABILITY OF THESE SYSTEMS.
IT IS ABOUT THE AVAILABILITY OF
INFORMATION…
4. THE CHALLENGES
INCREASE IN INFORMATION FOOTPRINT
DEMAND FOR HIGH DENSITY CLOUD
CONSOLIDATION OF POWER DEMAND
OVERLOADING OF OUTDATED POWER GRID
GLOBAL NETWORK LOAD
CREATING EXERGY
5. ENERGY FOOTPRINT OF INFORMATION FACILITIES
Datacentres, server rooms, network hubs etc.
Estimated 4% of global electricity production (25 PWh)
4% = 1 PWh
1,000,000,000,000,000 Wh (10^15)
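A minimal sanity check of the arithmetic on this slide (values as estimated above):

```python
# Rough global energy footprint estimate from the slide above.
global_production_pwh = 25    # estimated global electricity production, PWh
facility_share = 0.04         # estimated share used by information facilities

footprint_pwh = global_production_pwh * facility_share
print(footprint_pwh)          # 1.0 PWh
print(footprint_pwh * 1e15)   # 1e+15 Wh (1 PWh = 10^15 Wh)
```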
6. EXPLANATION PREVIOUS SLIDE (ENERGY FOOTPRINT)
▪ There is no real data available which can substantiate any sort of claim to the percentage
mentioned. It is a very rough estimate for a global scale.
▪ The fact that the information is unavailable is part of the problem: you cannot solve a problem
that cannot be identified.
▪ https://yearbook.enerdata.net/electricity/world-electricity-production-statistics.html
▪ The only available source for a global figure is the European Commission: in 2012, Neelie Kroes
mentioned datacentres being responsible for 2% of global electrical energy production, and ICT
in general for 8-10%.
7. GLOBAL PUE: APPROX. 1.7
Global estimated PUE breakdown:

COMPONENT                    SHARE OF TOTAL   SHARE OF ICT
Cooling                      37%              -
UPS/No-break                 2%               -
Other                        2%               -
ICT                          59%              -
  of which power supply      9%               (15%)
  of which fans              10%              (20%)
  of which information       40%              (65%)
Global estimated PUE breakdown
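For reference, the PUE figure follows directly from the ICT share above; a minimal sketch:

```python
# PUE = total facility energy / ICT energy. With ICT at 59% of the total
# (the estimated global breakdown above) and the total normalised to 1:
ict_share = 0.59
pue = 1.0 / ict_share
print(round(pue, 2))  # 1.69, i.e. approx. 1.7
```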
8. EXPLANATION PREVIOUS SLIDE (GLOBAL PUE ESTIMATE)
▪ There is no real data available which can substantiate any sort of claim to the percentages
mentioned. Everything is a very rough estimate for a global scale.
▪ For both the Power supply and Fans percentages, the estimates are based on the following
factors:
▪ Most datacentres (colo) have no control over the internal temperature management of the IT.
▪ Environmental temperatures in datacentres are usually too high for optimal power efficiency of IT.
▪ IT is in general greatly under-utilised (30% is already high in many cloud environments, corporate IT in
colo is often worse) which creates a very high overhead for both the PSU and fans.
▪ Most cloud and corporate back-office platforms run on inefficient, cheap IT hardware, driven by
CAPEX as opposed to OPEX. The reason for this is that power consumption is not part of the IT
budget.
▪ Virtually all high density cloud environments (5-10 kW/rack) are based on 1U servers. These are the
least efficient when it comes to fan overhead.
9. ACTUAL INFORMATION EFFICIENCY
▪ 1000 TWh total energy
▪ Actual cooling: 471 TWh
▪ Actual power loss: 112 TWh
▪ Other overhead: 17 TWh
▪ Energy for information: 400 TWh
▪ 1000 TWh / 400 TWh
▪ PUE equivalent: 2.5

Actual efficiency breakdown:
Cooling + IT fans: 47%
UPS + IT power supply: 11%
Other: 2%
Information: 40%
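A minimal sketch of how the PUE equivalent is derived from these estimates:

```python
# "PUE equivalent" counting IT-internal overhead (fans, PSU losses) as
# overhead rather than as useful IT load. Figures in TWh, from the slide above.
total = 1000
overhead = 471 + 112 + 17        # cooling + power loss + other
information = total - overhead   # TWh actually spent on information
print(information)               # 400
print(total / information)       # 2.5
```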
10. ENERGY TRANSFORMATION
Global thermal energy production by information:
1,904,400,000,000,000,000 J, or approx. 1.9 EJ (10^18 J) (529 TWh × 3600): energy for heat rejection
EXERGY DESTRUCTION: 471,000,000,000,000 Wh (471 TWh)
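The joule figure is a straight unit conversion from the TWh estimates; a minimal check:

```python
# Unit check: 1 Wh = 3600 J, so the heat produced by the IT itself is:
heat_wh = 529e12        # 529 TWh (1000 TWh total minus 471 TWh for cooling)
heat_j = heat_wh * 3600
print(heat_j)           # 1.9044e+18 J, i.e. approx. 1.9 EJ
```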
11. WHAT IF INFORMATION FACILITIES COULD...
EXCLUDE COOLING INSTALLATIONS
REDUCE IT OVERHEAD
BALANCE THE POWER GRID
REDUCE THE DATA NETWORK LOAD
BECOME ENERGY PRODUCERS
12. COOLING THE CLOUD
1 MW critical load, ΔT of 10°C
Thermal production: 1 MJ/s
1°C rise with air: 1005 J/(kg·°C) × 0.001205 kg/L ≈ 1.21 J/(L·°C)
1 MJ/s requires: 1,000,000 J/s ÷ (10°C × 1.2 J/(L·°C))
≈ 83,333 L/s OF AIR
13. AIR VS WATER
Water required for ΔT of 10°C:
4187 J/(kg·°C) × 1 kg/L = 4187 J/(L·°C)
10°C ΔT at 1 MJ/s: 24 L/s of water vs 83,333 L/s of air
THE DATACENTRE OF THE FUTURE: 6 L/s
AND IT IS AN ENERGY PRODUCER...
Liquid can travel 200 TIMES THE DISTANCE with the same thermal losses
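The comparison generalises to any load and ΔT; a minimal sketch using the specific heats and densities quoted on these slides (the slide's 83,333 L/s figure comes from rounding the air factor to 1.2 J/(L·°C)):

```python
def flow_l_per_s(load_w, delta_t, cp_j_per_kg, density_kg_per_l):
    """Volumetric flow needed to absorb a heat load at a given temperature rise."""
    return load_w / (delta_t * cp_j_per_kg * density_kg_per_l)

LOAD_W = 1_000_000  # 1 MW critical load = 1 MJ/s of heat

# Air at 1005 J/(kg.degC) and 0.001205 kg/L, water at 4187 J/(kg.degC) and 1 kg/L:
print(round(flow_l_per_s(LOAD_W, 10, 1005, 0.001205)))  # ~82,576 L/s of air
print(round(flow_l_per_s(LOAD_W, 10, 4187, 1.0)))       # ~24 L/s of water
print(round(flow_l_per_s(LOAD_W, 40, 4187, 1.0)))       # ~6 L/s at dT 40 degC
```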
14. EXPLANATION PREVIOUS SLIDES (WATER VS AIR)
▪ For simplicity, the identical ΔT is maintained for both approaches.
▪ After feedback from reviewers and the audience of Datacentre Transformation Manchester, the
comparison ΔT was raised to 10 °C.
▪ Air will usually allow the ΔT to rise above 10 °C, although due to poor utilisation this is not
always achieved.
▪ In CRAC water circuits, the ΔT is usually below 10 °C.
15. THE WRONG QUESTION
DON’T ASK WHICH TECHNOLOGY TO USE
CHOOSE BETWEEN AIR OR LIQUID
THEN COMBINE LIQUID TECHNOLOGIES
16. TLC - IMMERSED COMPUTING®
▪ 100% Removal of heat from the IT
▪ Highest IT efficiency by eliminating fans
▪ No air required
▪ Level of intelligence
▪ Management control and insight
▪ Automatic optimisation of the water circuit
▪ Optimised for high density cloud/HPC nodes
▪ Varying servers
▪ Flexible IT hardware
▪ Feed: 18-40°C / 55°C Extreme / max ΔT 10°C
17. DLC - DIRECT-TO-CHIP LIQUID COOLING
▪ Removes heat from hottest parts of the IT
▪ Increased IT efficiency by reduced fan power
▪ Requires additional cooling (ILC)
▪ Level of intelligence
▪ Management control and insight
▪ Automatic optimisation of the water circuit
▪ Optimised for HPC racks with identical nodes
▪ Very high temperature chips
▪ High density computing
▪ Feed: 18-45°C / max ΔT 15°C
18. ILC - (ACTIVE) REAR DOOR COOLING
▪ 100% Removal of heat from the IT
▪ Small IT efficiency gain by assisted circulation
▪ Acts as air handler in the room
▪ Level of intelligence
▪ Management control and insight
▪ Automatic optimisation of the water circuit
▪ Optimised for IT with limited liquid compatibility
▪ Storage
▪ Network
▪ Legacy systems and high maintenance servers
▪ Feed: 18-23°C / 28°C Extreme / max ΔT 12°C
19. OPTIMISING LIQUID INFRASTRUCTURES

TECHNOLOGY (NORMAL)   INLET     OUTLET
CRAC (generic)        6-18°C    12-25°C
ILC (U-Systems)       18-23°C   23-28°C
DLC (Asetek)          18-45°C   24-55°C
TLC (Asperitas)       18-40°C   22-48°C

TECHNOLOGY (EXTREME)  INLET     OUTLET
CRAC (generic)        21°C      30°C
ILC (U-Systems)       28°C      32°C
DLC (Asetek)          45°C      65°C
TLC (Asperitas)       55°C      65°C
20. INCREASING ΔT WITH TEMPERATURE CHAINING
▪ Serial implementation of the infrastructure

Facility input: 17°C
→ CRAC parallel: +5°C, output 22°C
→ ILC parallel: +6°C, output 28°C
→ TLC & DLC paired: +16°C, output 44°C
→ TLC & DLC paired: +16°C, output 60°C
Facility output: 60°C

CREATE HIGH ∆T
3-stage cooling for low water volume:
▪ Down to 35 °C: free-air
▪ Between 35-28 °C: adiabatic
▪ Below 28 °C: chiller
USABLE HEAT
21. TEMPERATURE CHAINING EXAMPLE
▪ Closed room 3-stage configuration
▪ ILC setup maintains air temperature
▪ Water volume decreased by 85%
▪ ΔT 6°C: 29.9 L/s
▪ ΔT 40°C: 4.5 L/s
▪ Cooling options
▪ Closed cooling circuit with pumps and coolers
▪ Closed cooling circuit with pumps and reuse behind heat exchanger
▪ Open cooling circuit with external water source supplied for reuse

Facility input: 20°C
→ Stage 1 (ILC): 120 kW, +6°C → 26°C
→ Stage 2 (DLC & TLC): 400 kW TLC + 40 kW DLC, +19°C → 45°C
→ Stage 3 (optimised DLC & TLC): 160 kW TLC + 60 kW DLC, +15°C → 60°C
Facility output: 60°C
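A minimal check of the chain arithmetic and the water-volume claim, with stage loads and ΔTs as read from the diagram (totals come out close to, though not exactly at, the 4.5 L/s and 29.9 L/s quoted above):

```python
CP_WATER = 4187  # J/(kg.degC), water at ~1 kg/L

# (name, load_kW, delta_T) per stage, facility input at 20 degC:
stages = [("ILC", 120, 6), ("DLC & TLC", 440, 19), ("optimised DLC & TLC", 220, 15)]

temp_c = 20
for name, load_kw, dt in stages:
    temp_c += dt
    print(f"{name}: +{dt} degC -> {temp_c} degC")  # 26, 45, 60

total_w = sum(load for _, load, _ in stages) * 1000  # 780 kW total
print(round(total_w / (CP_WATER * 40), 1))  # ~4.7 L/s at dT 40 (slide: 4.5)
print(round(total_w / (CP_WATER * 6), 1))   # ~31.0 L/s at dT 6 (slide: 29.9)
```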
22. REUSE MICRO INFRASTRUCTURE
▪ Micro datacentre or server room
▪ Open water circuit
▪ Reusable requirement: 65°C
▪ Variable volume
▪ Feedback loop for constant temperature

Facility input (variable temperature): 5-40°C
Feedback loop maintains a constant 51°C
→ +7°C → +7°C
Facility output (constant temperature): 65°C
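A minimal sketch of the variable-volume idea: the flow is modulated against the variable input temperature so the output stays at 65°C; the 50 kW load is a hypothetical example:

```python
CP_WATER = 4187  # J/(kg.degC), water at ~1 kg/L

def flow_for_constant_output(load_w, t_in, t_out=65.0):
    """Flow (L/s) that keeps the output at t_out for a variable input temperature."""
    return load_w / (CP_WATER * (t_out - t_in))

LOAD_W = 50_000  # hypothetical 50 kW micro facility
for t_in in (5, 20, 40):
    print(t_in, round(flow_for_constant_output(LOAD_W, t_in), 2))
# 5 -> 0.2 L/s, 20 -> 0.27 L/s, 40 -> 0.48 L/s:
# the colder the input, the lower the flow needed for the same 65 degC output.
```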
23. TEMPERATURE CHAINING IMPACT
Water required for ΔT of 40°C:
4187 J/(kg·°C) × 1 kg/L = 4187 J/(L·°C)
40°C ΔT at 1 MJ/s: 6 L/s of water vs 83,333 L/s of air
24. DATACENTRE DESIGN

DESIGNED FOR AIR
▪ Cooling options:
▪ 100% Chillers
▪ 100% Free air/adiabatic + 100% Chillers (off)
▪ High volume, low ∆T (5-20 °C)
▪ Fluid handling
▪ Spacious high capacity air ducting
▪ Air filtration
▪ Hot/cold aisle separation
▪ Information density (avg): 1.5 kW/m2
▪ Concrete floor + raised floors
▪ Power
▪ UPS (IT only): 100%
▪ Gensets (facility): 100%

DESIGNED FOR MIXED LIQUID
▪ Cooling options:
▪ External cold water supply by reuser
▪ 100% Free air/adiabatic + 5% chillers
▪ Low volume, high ∆T (20+°C)
▪ Fluid handling
▪ Normal capacity water circuit
▪ Water quality management
▪ Minimal “fresh-air” ventilation
▪ Information density (mixed): 12 kW/m2
▪ Bare concrete floor
▪ Power (compared to air)
▪ UPS (IT only): 90%
▪ Gensets: 60%
25. SITE PLANNING AND QUALIFICATION
Minimised energy footprint
Minimised installation requirements
Flexibility by minimal environmental impact
FOCUS ON 24/7 HEAT CONSUMERS
26. REDEFINING THE LANDSCAPE
▪ Large facilities
▪ Core Datacentres
▪ On the edge of urban areas
▪ Distributed micro facilities
▪ Edge nodes
▪ Inside the urban area
▪ Energy balancing
▪ Distributed minimised power load
▪ Focus on heat reuse
27. DISTRIBUTED MICRO EDGE NODES
▪ 10-100 kW
▪ Edge of network, within urban areas
▪ IoT capture and processing
▪ Data caching (Netflix, YouTube, etc.)
▪ Localised cloud services (SaaS, PaaS, IaaS)
▪ Minimised facilities
▪ External cooling input
▪ 24/7 energy rejection for reuse
▪ Geo redundant
▪ Tesla Powerpack for controlled failover
▪ District data hub
EDGE ENERGY REUSE
Spas, swimming pools (100% reuse)
Hospitals, hotels with hot water loops (100% reuse)
Urban fish/vegetable farms with aquaponics (100% reuse)
District heating (100% reuse)
Aquifers for heat storage (75% reuse)
Water mains (29% reuse)
Canals, lakes and sewage (exergy destruction)
28. CORE DATACENTRES
▪ Large facilities
▪ No-break systems
▪ Limited cooling infrastructure
▪ 24/7 information availability
▪ Edge management
▪ Replicated Edge back-end
▪ Communication hub
▪ Industrial scale reuse infrastructures
▪ 100% 24/7 heat reuse
▪ Agriculture
▪ Spas
▪ Cooking, pressurising, sterilisation, bleaching
▪ Distillation, concentrating, drying or kilning
▪ Chemical
▪ Exergy destruction
▪ Rivers/ocean
▪ Liquid-to-air rejection
29. EDGE MANAGEMENT
Emerging platforms for decentralised cluster management
Integration with energy and heat management
[Diagram: cloud customers send computing jobs and receive computing results; job scheduling (.ware) balances cloud demand against heat demand; distribution management for heating assigns jobs to information hardware (Q.RADS, O.MAR, AIC24); heat customers (.rads) receive free and green heat.]
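Purely illustrative: a toy placement rule for the scheduling idea above, sending computing jobs to the node with the most unmet heat demand; node names and figures are hypothetical, not the actual .ware logic:

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    heat_demand_kw: float  # heat the site can usefully absorb right now
    it_load_kw: float      # heat currently produced by running jobs

def place_job(nodes, job_kw):
    """Send the job to the node with the largest unmet heat demand."""
    candidates = [n for n in nodes if n.heat_demand_kw - n.it_load_kw >= job_kw]
    best = max(candidates, key=lambda n: n.heat_demand_kw - n.it_load_kw)
    best.it_load_kw += job_kw
    return best

nodes = [EdgeNode("pool", 80, 60), EdgeNode("district-heating", 100, 20)]
print(place_job(nodes, 10).name)  # district-heating (80 kW unmet vs 20 kW)
```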
30. LIQUID INFORMATION FACILITIES
▪ Reduced or eliminated technical installations
▪ Cooling
▪ No-break
▪ Reduced build cost
▪ No raised floors
▪ Reduced space for fluid handling
▪ Increased, distributed power density
▪ Reduced m2
▪ Reduced operational cost
▪ Reduced maintenance on installations
▪ High IT density
▪ Higher specs IT hardware, also for cloud
▪ Reduced software cost
31. LIQUID IS COMING: HOW TO PREPARE?
Design for water (redundant) to IT
Sufficient ability to distribute piping to whitespace
Plan for reusable heat
Plan for a liquid way of work
IT maintenance rooms for wet equipment
Staff training for liquid management
Proper supplies and tooling
32. WHAT NEEDS TO BE DONE?
▪ Focus on heat consumption without dependency
▪ Not with an invoice
▪ Free cooling guarantee
▪ Government involvement
▪ Incentives
▪ Intermediate for (industrial) heat reuse
▪ Information footprint as part of district planning
▪ More (low grade) heating networks
▪ Focus on TCO, not CAPEX
▪ PUE – need for a new “easy” metric
▪ PUE figures are widely manipulated
▪ PUE discourages IT efficiency
▪ It needs to give insight into actual inefficiency
[Chart: the global estimated PUE breakdown from slide 7, repeated here labelled “PUE inefficiency”]
FEAR OF WATER