In this presentation Willie O'Connell will examine how CSC, Finland's IT Center for Science, has implemented a modular data centre solution to support high-performance computing for the Finnish University and Research Network (FUNET), while achieving an annualised power usage effectiveness (PUE) of less than 1.1. He will also outline how CSC has used data centre infrastructure management (DCIM) to closely manage the facilities, ensuring reliability while maximising asset utilisation and return on investment.
4. CSC maintains and develops the state-owned centralized IT infrastructure and uses it to provide
nationwide IT services for research, libraries, archives, museums and culture as well as
information, education and research management.
CSC has the task of promoting the operational framework of Finnish research, education, culture
and administration. As a non-profit, government organization, it is our duty to foster exemplary
transparency, honesty and responsibility.
5. History - CSC Datacentres
• HPC, HTC
• High capacity
• Medium-to-high redundancy
• IaaS / DR site
• Growth path
• Highly efficient
• Digital preservation
• Hosting for Helsinki metropolitan customers
• Medium capacity
• High redundancy
• High security
• IaaS
6. The Site: Kajaani
• Former paper mill site, now a thriving business park
• Heavy-duty power grid operated by the landlord
• Brownfield/greenfield opportunities
• Good logistics & services
• Local industrial/building competences
7. Plug & Play
• 10 fully loaded trucks
• Physical assembly, power connections, functionality and blackout tests, load tests with heat banks, air tightness for fire suppression systems tested, etc.
• From arrival to handover in two weeks
• 1st installation was fast, the 2nd was even faster
10. Benefits of Modular
CAPACITY: right-sized from Day One; expand how YOU want
COST: lower CapEx as well as lower PUE and OpEx
CALENDAR: 14-20 weeks
11. Modular Designed, Purpose-Built Data Centres
» Lower CapEx
» Closer alignment of CapEx and income
» Lower OpEx
» Time to deploy
» Real white space
» Lower architect, mechanical and electrical consultant costs
» Build what, where and when required only
» Flexibility: modules to suit requirements (density, Tier, environmental, etc.)
» Mobility: move as the network evolves
13. Multiple Cooling Options
1. Direct outside air with evaporative (adiabatic)
2. Direct outside air with evaporative + chilled water
3. Direct outside air with evaporative + DX
4. Standalone chilled-water cooling
5. Standalone DX cooling
Cooling Efficiency
14. Design Freedom
• Structural integrity: environmental load, deployment, maintenance
• Space: choice of 5 units (1-rack, 4-rack, 10-rack, 20-rack or 30-rack), plus adjoining vestibule white space
• Power: choice of 1 to 35 kW per rack, AC and DC power
• Cooling: choice of free air, evaporative, DX, chilled water, or a combination
• Controls: preconfigurable iTRACS DCIM to manage, control, alarm and operate
• Efficiency: green, CapEx, OpEx
• Flexible configuration, scalable, intelligent management
• Uptime Tier, timescales, capacity
16. Site Layout – Example
True colocation hallway, ample room for rack and pallet movement (doors on IT rooms and electrical panels not shown)
Customizable room for technician benches, patch panels (MDF/IDF), UPS
20. Understanding Weather – Maximize Efficiency
• Internal conditions
• Reaction
• Out of envelope
• pPUE calcs
• Water usage
• Electrical utilization
• Reaction
21. Weather Analysis – Kajaani

Scenario                           pPUE    Water (m³/year)   Chiller hours
ASHRAE A2 (annual)                 1.082   262               1
ASHRAE Recommended (annual)        1.089   4,067             118
ASHRAE A1 (annual)                 1.084   523               38
Actual results, Jan-Dec 2015       1.038   146               N/A
(set-point 17-25 °C, 20-80% RH)
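As a back-of-envelope illustration of how an annualised pPUE figure like the 1.038 above is derived, here is a minimal Python sketch that sums facility and IT energy over hourly meter readings. The variable names and sample figures are illustrative assumptions, not CSC's actual telemetry.

```python
def annualised_ppue(facility_kwh, it_kwh):
    """Partial PUE: total facility energy divided by IT energy,
    summed over the whole period (hourly samples here)."""
    total_facility = sum(facility_kwh)
    total_it = sum(it_kwh)
    if total_it == 0:
        raise ValueError("IT load must be non-zero")
    return total_facility / total_it

# Illustrative hourly samples: a steady 1000 kWh IT load with cooling/fan
# overhead varying between 2% and 8% depending on outside conditions.
it = [1000.0] * 24
overhead = [0.02 if h < 12 else 0.08 for h in range(24)]
facility = [it_h * (1 + o) for it_h, o in zip(it, overhead)]

print(round(annualised_ppue(facility, it), 3))  # → 1.05
```

Summing energy over the year, rather than averaging instantaneous ratios, is what makes the figure "annualised": efficient shoulder-season hours and chiller hours are weighted by the energy they actually consumed.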
22. Welcome to the Connected and Efficient Data Center
• Data Center on Demand™
• iTRACS® Data Center Infrastructure Management
• imVision® Intelligent Infrastructure Solutions
• CommScope Fiber Infrastructure
• CommScope Copper Infrastructure
  - Fibre Guide
  - Osmium
  - MPO/MRJ21
POWERING YOUR FUTURE
23. CSC Kajaani: CommScope in Action
• 12-week delivery
• 1.038 PUE (year to date)
• 0.018 WUE (litres/kWh)
• Second DCU20 installation at Kajaani
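WUE (water usage effectiveness) is annual site water use in litres divided by annual IT energy in kWh. A quick sanity check in Python: if the 146 m³/year from the weather-analysis slide and the 0.018 l/kWh above refer to the same period (an assumption on our part), they imply an IT load of roughly 8.1 GWh/year; that implied figure is our inference, not a number CSC reports.

```python
def wue(water_litres, it_kwh):
    """Water Usage Effectiveness: litres of water per kWh of IT energy."""
    return water_litres / it_kwh

water_litres = 146 * 1000              # 146 m^3/year from the weather analysis
implied_it_kwh = water_litres / 0.018  # back out IT energy from WUE = 0.018

print(f"implied IT load: {implied_it_kwh / 1e6:.1f} GWh/year")      # → 8.1
print(f"WUE check: {wue(water_litres, implied_it_kwh):.3f} l/kWh")  # → 0.018
```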
25. Summary
Major Challenges
• How much space will we require?
• When will we require it?
• What density will we require?
• What resilience will we require?
• Will I be able to move some applications to the Cloud?
• How can I demonstrate a reduction in carbon emissions?
• Energy loads/costs rising.
• Headache.
Solution: build efficient units, only what is required, when it is required and where it is required = MODULAR
Market leadership, strong brands and established channels
Diversity across markets, customers and geographies
A recognized leader in wireless, enterprise and broadband markets
A single-source provider for network infrastructure solutions
Carrier – Enterprise – Distribution – OEM
Global scale, strong financial position and an exceptional track record
Team of 25,000 people serving our customers via a culture of continuous improvement
More than 30 global manufacturing, R&D, and distribution facilities
Sales of $5.3 B, ongoing cost reductions and disciplined capital investment
Strong performance and cash-flow generation across economic cycles
Clear roadmap for new acquisitions and products
A commitment to innovation
Annual investment in R&D of more than $200M.
~9,800 patents and pending applications
Breakthrough enabling technologies provide focused, quick-to-market products
There is a confluence of separate technologies and trends that some now describe as IT reaching the 3rd Platform. It relates to Mobility: the pervasive nature of mobility, its applications and the resultant data streams (up- and downstream). The Social Networks have also mostly gone mobile, but it is the underlying technology and social behaviours that are having an equal impact on IT. Businesses are now trying to adopt this as a business platform, and some are being created and functioning entirely within this environment. Cloud Computing takes our DC beyond data repository/processor and into being the first point of contact with customers: information exchange, transactions, customer service, analysis; the platform for doing business. The benefits of Big Data analytics are established, but for most it is just the beginning; tools, services and strategies will take many businesses to new levels of commercial effectiveness.
OVERENGINEERING VS. RIGHT-SIZED …
(1) Latency - need more DCs at the edge to support the exploding growth of data and video traffic
(2) Over-engineering – many brick-and-mortar DCs are overbuilt and over-engineered – the opposite of Right-Sizing. A poor use of CapEx at build-out (wasted space may go years before it is actually populated with IT assets). And when it is time to expand – designers are trapped. They must “overbuild”.
“A survey of DC designers by the Uptime Institute suggests there is a mismatch between facility sizes and the ideal size for capacity addition. On balance, architects and managers would prefer to add increments of capacity that are smaller than the new facilities they build”
SOURCE: 451 Research, Global Prefabricated Modular Datacenter Forecast 2014-18, January 2015
“PFM DCs allow DC designers and managers to add exactly the type of capacity they need in the short term to minimize slack and mismatch in space, power, and cooling capacity with IT needs.”
USPs for DCoD
Lower capital and operating costs. [The animation builds to show the nature of both savings ]
Time to deploy, we can deliver units 12 weeks from receipt of an order.
Real white space, inside the DCoD looks, sounds, and feels like a standard brick and mortar build DC.
Because a lot of the mechanical and electrical design is incorporated into the DCoD the associated consultant costs are greatly reduced over a conventional B&M build.
DC operators have traditionally faced a major challenge when designing a B&M DC: they have had to predict requirements in terms of number of cabinets, kW per cabinet, server inlet temperature, Tier level, etc. These predictions then form the basis for the full design, and history tells us they rarely prove accurate. Changes in switch and server technology place new demands on the DC infrastructure, and the operator has to either invest to change or put up with sub-optimal performance, never a good thing in a competitive market. Modular, on the other hand, allows the operator to build what he requires, when he requires it. That way he is making predictions for a much shorter period, possibly less than one year, and these predictions are more likely to be correct, ensuring operating efficiency is maintained.
In addition, not every part of a DC needs to be built to a particular Tier, or operated at a particular temperature/humidity level. While this is very difficult to exploit within a B&M DC, in the modular world a DC can comprise numerous modules, each operating at a different Tier or set point (temperature and humidity), thus maximising the efficiency of the overall DC.
While not simple, the DCoD is moveable so if landlords prove difficult the operator can dismantle it and move to a new site. This may be of particular interest to mobile carriers who commonly operate from leased premises.
The SmartAir™ Intelligent Cooling System optimizes efficiency by allowing for a combination of cooling technologies into an overall cooling scheme that is designed specifically for each project and location.
_____________
“The long-term trend in datacenters is to reduce all energy-associated expenditure by efficient (low PUE) designs”
451 Research, March 2015
_____________
This slide gives examples of acceptable ranges for contemporary equipment, to reassure the partner that the A1/A2 temperature ranges are appropriate
Showing: Carrier QAM (for triple play cable TV, phone & internet to the home), Ethernet Switch, Server, and Storage
Important to note that all datasheets specify non-condensing against RH (so presumption that wet bulb temperature is below that which causes condensation)
And some datasheets (EMC for example) specify recommended range, and max continuous operation, with a caveat about reduced reliability (and also exceptional allowable range)
Methodology:
The algorithm used to sum the bin data for the air-side economizer map is simple: for each hour where average dry bulb and dew point temperatures are below the ASHRAE recommended maximums, an hour is added to the possible free cooling hours.
This assumes that fresh air can be mixed with return air as needed, and that humidity is added to dry air if required. The total number of hours per year is then colour-coded and blended to produce the maps.
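The binning algorithm described above can be sketched in a few lines of Python. The threshold values below (27 °C dry bulb, 15 °C dew point) are stand-in assumptions for the ASHRAE recommended maximums, and the hourly weather list is synthetic; real bin data would come from a weather dataset for the site.

```python
# Assumed ASHRAE recommended maximums (illustrative stand-in values).
DRY_BULB_MAX_C = 27.0
DEW_POINT_MAX_C = 15.0

def free_cooling_hours(hourly_weather):
    """Count possible free-cooling hours: an hour qualifies when both the
    average dry bulb and dew point are below the recommended maximums.

    hourly_weather: iterable of (dry_bulb_C, dew_point_C) tuples.
    """
    return sum(
        1
        for dry_bulb, dew_point in hourly_weather
        if dry_bulb < DRY_BULB_MAX_C and dew_point < DEW_POINT_MAX_C
    )

# Synthetic example: three cool/dry hours, one hot hour, one humid hour.
weather = [(10, 5), (18, 9), (24, 12), (30, 16), (22, 17)]
print(free_cooling_hours(weather))  # → 3
```

Colour-coding the resulting annual totals per map location, as described above, is then a straightforward rendering step on top of this count.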
And the point of this slide is not to discourage attendees in zones other than blue from considering DCoD: if you want a datacentre in those areas, DCoD is still more compelling than a traditional build.
Features to point out:
Scatter of dots on the psychrometric chart represents a full year's worth of temperature & RH samples for a particular location.
Colours are a DCoD classification (for internal condition) which was explained on the previous slide. The more blue, the better the efficiency (more later)
pPUE in bottom left – compare outside air and supplemental cooling with DX/CW cooling only
The point of this slide is to illustrate how we achieve such low pPUE numbers: by widening the tolerable cold-aisle internal air range
Attendees should take away from this the value of understanding the client’s IT equipment ranges, and challenging tolerances, in order to drive down PUE
Holistic command-and-control over the physical ecosystem
See, understand, and manage the interdependencies
Make the complex simple