1. Datacentre Optimization for Cloud: Modular Design for Achieving Energy-Efficiency
   Wesley Lim, Data Center Efficiency Practice, Sun Microsystems
2. Agenda
   • Today's Datacentre Challenges
   • Best Practices to Future-proof Your Datacentre
   • Sun's POD Principle
   • Sun Datacentre Efficiency Consulting Services
3. Today's Datacentre Challenges
4. Ever-changing IT Demands
   • Costs, demand and capacity are colliding
     > Innovation in technology and business demands more compute capacity
     > Power and cooling costs are surging; capacity is insufficient
     > Limits to existing floor space and new real estate
   [Chart: demand (users, services, access) drives watts per square foot from roughly 40 W/ft² in 2003 to 120 W/ft² in 2005, toward 800 W/ft² in the next-generation data center; power, costs, space and heat climb with it]
5. Rack Densities Increasing
   Reality: more compute per watt, more watts per rack. Racks of old now fit into a single blade chassis.
   • 2000: 14 × 3U servers, 28 processors, 2 kW heat load
   • 2008: 48 blade servers, 768 processor cores, 28.5 kW heat load
   > 27× more processors, >10× more heat
6. Reality: Heterogeneous Data Centers
   • Industry average is between 4-6 kW per cabinet
     > 20 kW "skyscrapers" will be integrated
     > Must deal with both small buildings and skyscrapers
7. The Nature of Data Centers
   • Space, power, cooling and connectivity envelope
     > Power and cooling are the most important
     > Don't deploy maximum capacity on day one, but still build for the future
   • Think watts per rack, not per square foot
     > Handle heterogeneity from the start
     > Forget about raised floors for cooling
   • Rate of change
     > Be prepared to be more flexible
   • Connectivity requirements
     > Ethernet, Fibre Channel, InfiniBand
8. Best Practices to Future-Proof Your Data Center: Strategies to Improve Your Datacentre Efficiency
9. Strategies for a New / Greenfield Facility
   • Design with key considerations:
     > Set your business goals
     > Determine the cooling architecture: heat load per rack
     > Review the power distribution approach
     > Determine your cabling strategy
10. Tier-IV Data Center Power Flow
    [Diagram: grid and generators feed HV transformers, ATS and 2(N+1) UPSs on A and B buses; power flows to PDUs and on to the IT loads, and to the mechanical plant: chillers, pumps, thermal store, external batteries, CDUs, CRAHs and in-row coolers, plus fire, security and lighting, all on a raised floor surrounded by support space]
11. Tier-IV Data Center Power Losses
    [Pie chart: chillers 33%, UPSs and transformers 13%, CRACs 12%, PDUs 5%, misc (fire, security, lighting) 2%, generator and switching 1%; the 2(N+1) IT loads themselves receive roughly the remaining third]
    Right-sizing = matching the Tier of your data center facility to the business requirement
12. Cooling System Issues
    Traditional design: computer room air conditioners (CRAC) or air handlers (CRAH) on the data center perimeter, cooling through a raised floor
    • Effective to < 6 kW per rack
    • Trend is to increase raised floor height
    • Hot spots need more open floor tiles
      > Reduces cooling for other racks
      > Mixing forces air delivery temperatures to be lowered
    • 60 percent of cooling does no work
      > Random intermixing of hot and cold air
13. The Limits of Raised Floors
    • High-density raised floor deployments make it difficult to maintain even and predictable server temperatures
      > Heat from one server can impact the reliability of another!
14. Limits to Raised Floors
    • Floor load considerations
      > IT loads are increasingly heavy: racks are beginning to weigh beyond 1 metric ton each, and designing raised floors will soon require calculating point loads and rolling loads (but would a facility provider know these at design time?)
    • Increasing infrastructure costs
      > Increasing heat densities require much higher, and more expensive, raised floors
    • Decreasing energy efficiency
      > Raised floors can be very efficient at low heat densities but become much less efficient as air velocities and sub-floor pressures increase
    • Increased design costs
      > High-density raised floors require much more careful design (i.e. CFD modeling)
15. Limits to Raised Floors
    • Fire suppression
      > Fire suppression generally focuses on isolating smaller zones and releasing a clean agent to extinguish the fire in that zone
      > With a raised floor, you instantly double the number of zones you must monitor and deploy fire suppression systems into
    • Cleanliness
      > Unless it was installed yesterday, all sorts of dirt, dust and debris accumulate beneath every raised floor in actual production
      > Pollutants and contaminants in the air lead to a higher risk of failure
16. Cooling System Solution
    Containing the hot aisle and adding closely coupled cooling puts cooling capacity where it is needed
    • A POD is self-contained from a cooling perspective
      > It removes its own heat, matching the load
      > Room air conditioning only has to meet habitation requirements
    • A POD eliminates random intermixing of air
      > Data center inlet temperatures can be raised safely
      > The POD handles hot spots
      > Modular, plug-in units can be added and moved to support heterogeneous, rapidly changing environments
17. Deploy Closely Coupled Cooling
    Understand the differences between room-oriented, row-oriented and rack-oriented cooling
    • Targeted, right-sized cooling where it is required
    • Efficient, optimized air flow
18. In-Row Cooling with Hot Aisle Containment
19. Overhead Cooling
20. Power Distribution Issues
    Traditional design: PDUs on the floor with whips going to racks, or breaker panels with whips
    • Consumes valuable floor space
    • Imposes cooling load
    • Cables impede airflow
    • Changes
      > Require time and expense
      > Expose all connected systems to risk and downtime
      > Are difficult to make; cables are often abandoned in place
    • Cable home runs waste copper
21. Electrical Busway Solution
    Modular overhead, hot-pluggable busway with conductors to handle multiple voltages and phases
    • Requires no floor space or cooling
      > Transformers move outside the data center
    • Snap-in cans with short whips
      > Non-disruptive
      > Reduced copper consumption
      > No in-place abandonment
      > Significant time reduction: from months to minutes
    • Supports multiple Tier levels
      > Use multiple busways
22. Cabling Issues
    Traditional design: home-run cabling from each rack to centralized intermediate distribution frames (IDFs)
    • Difficult to change
      > Cable trays are static
      > Interconnect mechanisms change more frequently
    • Huge amounts of cable per rack
      > A rack of 1U or blade servers can have >300 cables
    • Wastes copper, increases weight
    • Increasing density makes the problem worse
23. Cabling Solution
    Patch panels with expansion capacity at each rack position; move IDFs into PODs to make them more self-sufficient
    • Easy to change
    • Easy to scale
    • Cable lengths are short
    • Relatively small number of uplinks to aggregation-layer switching
    • Localizing switching simplifies design and creates building blocks: rooms inside of rooms
    • Can cut cabling costs by up to 75%
24. Sun's POD Principle: Building Block for the Future
25. Traditional vs. Next-Generation Data Center Designs
    Traditional datacentre designs:
    • Lower efficiency: high PUE, high OPEX
    • As business and technology change, adapting the DC to future needs may require modifications
    Next-generation datacentre designs:
    • Based on Sun's POD Principle
    • Higher efficiency / better PUE, lower OPEX
    • Modular, scalable, flexible
    • Future-proofed
26. Modular POD Components
    • Physical design
      > Influenced by cooling, brick-and-mortar and/or container
    • Cooling: closely coupled
      > In-row or overhead, hot-aisle containment, and passive
    • Power distribution
      > On-tap, overhead or under-floor busway
    • Cabling
      > Localized switching recommended in each POD
27. Sun's POD Principle
    A small, self-contained group of racks that optimizes power, cooling and cabling efficiencies around a common hot or cold aisle
    • The unit of scale for a data center, ~20 racks: a building block
    • Self-contained, independent of the room
    • Efficiency comes from putting resources where they are needed
      > Bringing cooling closer to the heat source
    • Flexibility comes from modular, snap-in systems that scale POD components up and down
28. KPIs & Strategies for a Service Provider
29. KPI: Power Usage Efficiency
    • The power efficiency of a data center is described by the relationship between the power used by the IT equipment and the total power used by the facility
      > Typically expressed as a number or as a percentage
      > e.g. a PUE of 2.0 = a DCE of 50%*

      Power Utilization Efficiency (PUE) = (IT Equipment Power + Infrastructure Power) / IT Equipment Power

      Data Center Efficiency (DCE) = IT Equipment Power / (IT Equipment Power + Infrastructure Power) × 100%

    * PUE and DCE are Green Grid terms, which differ from Uptime Institute terms, etc.
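The two ratios are reciprocals of each other, as the slide's "PUE of 2.0 = DCE of 50%" example shows. A minimal sketch with illustrative power figures (the numbers are ours, not from the deck):

```python
def pue(it_power_kw, infra_power_kw):
    """Power Utilization Efficiency: total facility power over IT power."""
    return (it_power_kw + infra_power_kw) / it_power_kw

def dce(it_power_kw, infra_power_kw):
    """Data Center Efficiency: IT power as a fraction of total facility power."""
    return it_power_kw / (it_power_kw + infra_power_kw)

# The slide's example: a PUE of 2.0 is the same facility as a DCE of 50%
print(pue(500, 500))  # 2.0
print(dce(500, 500))  # 0.5
```

Since DCE = 1 / PUE, tracking either metric over time tells the same story; the deck simply reports both conventions.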
30. KPI: Data Center Efficiency
    • Space utilization efficiency: how much of the space you have built is usable/chargeable

      Space Utilization Efficiency = Chargeable floor space / Total facility floor space

    • Operating leverage
      > Higher operating leverage means greater fixed-cost commitments that must be met even when utilization / chargeable volume declines
      > The objective is to lower fixed costs, i.e. unavoidable running costs

      Operating Leverage = Fixed Costs / Total Costs
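Both KPIs are simple ratios; a quick sketch with hypothetical facility numbers (the figures are assumptions for illustration, not from the deck):

```python
def space_utilization_efficiency(chargeable_sqft, total_sqft):
    """Fraction of built floor space that is usable/chargeable."""
    return chargeable_sqft / total_sqft

def operating_leverage(fixed_costs, total_costs):
    """Fraction of total cost that is fixed and must be paid
    regardless of utilization or chargeable volume."""
    return fixed_costs / total_costs

# Hypothetical facility: 7,000 chargeable ft2 out of 10,000 ft2 built,
# $600k of fixed cost in a $1M annual cost base
print(space_utilization_efficiency(7_000, 10_000))  # 0.7
print(operating_leverage(600_000, 1_000_000))       # 0.6
```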
31. KPI: Data Center Efficiency
    • Marginal cost and marginal revenue
      > Marginal Cost (MC) is the change in total cost associated with a unit change in quantity
      > Marginal Revenue (MR) is the rate of change in total revenue with respect to quantity sold
      > Marginal analysis aids decision making and can be applied to financial decisions; to maximize profit, set MR = MC

      Marginal Cost = d TC / d Q (kW) = change in total cost per kW of new IT load hosted

      Marginal Revenue = d TR / d Q (kW) = change in total revenue per kW of new IT load hosted
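The MR = MC rule can be checked numerically per kW of new IT load. A sketch with made-up cost and revenue curves (the curve shapes, prices and coefficients are assumptions, not from the deck):

```python
def marginal(total_fn, q_kw, dq_kw=1.0):
    """Finite-difference approximation of the per-kW marginal value d(total)/dQ."""
    return (total_fn(q_kw + dq_kw) - total_fn(q_kw)) / dq_kw

# Hypothetical curves for hosting Q kW of customer IT load:
# fixed cost, linear variable cost, and a quadratic term for capacity strain
def total_cost(q_kw):
    return 50_000 + 120 * q_kw + 0.05 * q_kw ** 2

def total_revenue(q_kw):
    return 300 * q_kw  # flat price per kW hosted

# Keep taking on load while MR > MC; profit peaks where they meet
q = 100
print(marginal(total_revenue, q))  # 300.0
print(marginal(total_cost, q))     # ~130: still profitable to grow at 100 kW
```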
32. Top 5 Efficiency Strategies for a Facility Service Provider / SSO
    1) Improve cooling efficiency
    2) Improve power distribution
    3) Match infrastructure to SLAs by multi-tier/zone
    4) Grow with ease by applying modularity
    5) Control costs with metering and charging
33. 1. Improve Cooling Efficiency
    • Right-size the cooling system
      > Measure and monitor cooling capacity vs. IT load
      > Implement capacity management for the cooling system
    • Optimize chiller performance
      > Use free cooling (where available), airside or waterside economizers, variable-frequency drives, thermal buffers
    • Optimize air flow within the room
      > Ensure the room is well sealed, use hot/cold aisles, seal cut-outs, optimize perforated tile layout, increase raised floor height, use blanking panels, high-airflow doors and hot-air containment
    • Adopt the new ASHRAE guidelines
34. New ASHRAE Guidelines
35. 2. Improve Power Distribution
    • Right-size the power delivery system
      > Monitor power usage throughout the system, e.g. UPS, PDU, power strip
      > Implement capacity planning for the power delivery system
      > Consider charge-backs for power consumption
    • Deliver higher-voltage circuits to the rack
      > E.g. 3-phase 208 V today, perhaps higher in future
      > Minimizes power losses and infrastructure provisioning (fewer cables, conduits, breakers, etc.)
    • DC power is not recommended at this point
      > Higher costs, insufficient standards, fewer products, fewer trained people, etc.
      > Energy savings are not substantially better than high-voltage AC power delivery
36. 3. Match Infrastructure to SLAs by Multi-Tier/Zone
    • Customer segmentation: infrastructure capacity should be right-sized for your customers' IT load, service levels and risk tolerance
      > e.g. capacity (kW), availability (%), scheduled downtime (min/yr), single failure vs. multi-failure
    • Infrastructure must be over-provisioned for sites with high Tier ratings, which has a big impact on power efficiency
      > e.g. a Tier-IV site at 30% utilization is only 33% efficient
    • New architectures are evolving to address this
      > e.g. multi-tier, modified Tier-IV
      > Zone the facility into different tiers, catering to customer segments with differing needs, e.g. a high-density zone with localized/closely coupled cooling and a general/low-density zone with room-oriented cooling
    • These architectures require less infrastructure and energy, but are more complex to design, build and maintain
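One way to see where a figure like "30% utilization, only 33% efficient" can come from is a toy model in which part of the infrastructure overhead is fixed (redundant plant sized for the full design load) and part scales with the IT load actually drawn. The overhead coefficients below are assumptions chosen to reproduce the slide's number, not data from the deck:

```python
def dce_at_utilization(u, fixed_overhead=0.52, variable_overhead=0.30):
    """Toy efficiency model; all quantities are fractions of full IT capacity.

    fixed_overhead:    losses that run regardless of load, e.g. a 2(N+1)
                       UPS/chiller plant sized for the full design load
    variable_overhead: losses proportional to the IT load actually drawn
    """
    it_power = u
    total_power = it_power + fixed_overhead + variable_overhead * it_power
    return it_power / total_power

print(round(dce_at_utilization(0.30), 2))  # 0.33 -- the slide's Tier-IV example
print(round(dce_at_utilization(1.00), 2))  # 0.55 -- efficiency recovers at full load
```

The fixed term is what punishes low utilization, which is the argument for zoning and for right-sizing the Tier level to each customer segment.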
37. 4. Growing with Ease: Applying Modularity
    • Minimize CAPEX by enabling growth through modular design and allowing charge-back to your customers
      > Power distribution
      > Cooling systems
      > Cabling structure
    5. Cost Control: Metering & Charging
    • Unable to anticipate what IT load each new customer will bring in?
    • Meter actual usage at the rack / suite level
      > Charge back for actual power utilization
38. Sun Datacentre Efficiency Consulting Services
39. Sun Datacentre Efficiency Portfolio
    • IT Business Strategy Consulting: eco policy / social responsibility; availability / agility / TTM; legislative, DR; Open Work consulting
    • Datacenter Strategy: long-term planning; refresh or replace; technology options; sourcing strategies
    • Datacenter Design & Build: site selection and site acquisition; design services (lifecycle planning, Building Information Modeling); build, commission, move, populate
    • Datacenter Modernization: application modernization; infrastructure modernization; consolidation services; migration services; virtualization services
    • Datacenter Optimization: eco consulting and assessments; virtualization best practices
    • Datacenter Operations: managed and remote operations
    * All projects are turn-key solutions in which Sun oversees management of the entire project, including vendor and partner management and deployment schedule, leveraging proven best practices and quality assurance, and delivering business results to budget.
40. DCE Cloud Services: Evaluate Your Readiness
    • Business: key business drivers and KPIs; economic environment; competitive landscape; CAPEX and OPEX targets; asset management and disposal; compliance, security, privacy
    • Organization/Culture: structure and logistics; culture orientation and emphasis; decision-making topology; security and privacy policies and training; compliance, governance, training; information sharing and communication; informal norms and policies; governance and decision making
    • Technology: 24-month technology roadmap; current "cloud-like" initiatives/POCs; datacenter/facility plans (consolidation, migration, build, co-locate, other); deployment and support process; management, tracking, reporting; compliance, security, privacy; resources and training
    • Operations: IT infrastructure support; billing, metering, SLAs; deployment and support process; management, tracking, reporting; resources and training
41. Data Centre Facility Design (Greenfield)
    • Core concepts
      > Holistic perspective: balancing availability vs. efficiency
      > Scalable, repeatable, modular architecture
      > Modular right-sizing of power and cooling
      > Simplified, flexible cabling and plumbing
      > Facilitates growth
      > Vendor independent
      > Lifecycle planning: flexible, with cost scaling with use
      > Building Information Modelling (BIM)
    Entering a new age of the engineered datacentre
42. Data Center Design: example of a BIM model from one engagement
43. Data Center Design: example of a site model from one engagement
44. Data Center Design: example of a data center layout from one engagement
45. Data Center Design: example of a data center layout from one engagement
46. The POD Architecture
    A group of racks or benches with a common hot or cold aisle, used as a building block to simplify data-center design for power, cooling and cabling
    Vendor independent, slab or raised floor, flexible, scalable, high density
47. The POD Architecture
    A group of racks or benches with a common hot or cold aisle, used as a building block to simplify datacenter design for power, cooling and cabling
    Vendor independent, slab or raised floor, flexible, scalable, high density
48. Additional Thoughts
49. Best Practices = Competitive Weapon
    • Align facilities, IT and engineering
      > Partnering nets significant short- and long-term savings
        http://www.sun.com/aboutsun/environment/docs/aligning_business_organizations.pdf
    • Hardware replacement
      > Apply new hardware solutions and extend the life of your DC
        http://www.sun.com/aboutsun/environment/docs/creating_energy_efficient_dchw_consolidation.pdf
    • Simplify datacentre design with the POD concept
      > Power: modular, scalable, smart
        http://www.sun.com/aboutsun/environment/docs/powering_energy_efficientdc.pdf
      > Cooling: adaptable, scalable, smart
        http://www.sun.com/aboutsun/environment/docs/cooling_energy_effiicientdc.pdf
      > Cabling: distributed vs. centralized
        http://www.sun.com/aboutsun/environment/docs/connecting_energy_efficientdc.pdf
      > Measurement: visibility gives you the power to control
        http://www.sun.com/aboutsun/environment/docs/accurately_measure_dcpower.pdf
    • Video: http://www.sun.com/aboutsun/environment/media/datacenter_tour.xml
50. Sun Blueprint
    • Released June 10, 2008
    • Download: http://sun.com/blueprints
    • 1st of 9 chapters to be released over the next 12 months
51. Thank You
    Wesley Lim
    wesley.lim@sun.com
52. Other Successful Sites Globally
    • Camberley (UK): consolidation and relocation of an EMEA mission-critical datacenter; database stress testing; high density at 10 kW/rack, expandable to 16 kW/rack; 3,600 ft² (334 m²) total build-out with a 1,190 ft² (110 m²) datacenter; base cooling with Liebert XD (first install of XDO in EMEA)
    • Trondheim (Norway): consolidation of four R&D labs into a new datacenter; 80% space reduction, from 2,200 ft² (204 m²) down to 450 ft² (42 m²); 50% utility reduction; second-highest density in our portfolio
    • Prague (CZ): datacenter supporting the growing engineering site; 2,600 ft² (242 m²) of modular datacenter; Liebert XD base cooling (first install of XD in EMEA); highly efficient and expandable
    • Bangalore (India): 16 datacenters consolidated down to 1; 50%+ reduction in equipment footprint; 17% power reduction; 154% compute capacity increase; 3,000 ft² (280 m²) datacenter; innovative design in the region, PCQuest award for best IT implementation in 2007
    Reality: modular, scalable, fully redundant datacenters supporting long-term growth.
53. Data Center Space Constraints
    • The single biggest reason cited for running out of capacity is insufficient space (39% of respondents)
54. Data Center Heat Densities
55. Why Do We Care About Heat?
    • Hot servers are less reliable than cool ones
      > For every 7°C increase above 21°C, long-term electronics reliability falls by 50%
    • To maintain correct server temperatures, ASHRAE recommends server inlet temperatures remain between 20°C and 25°C
    • The temperature of a server is directly related to the power it uses (kW) and the air it draws (CFM)
    ASHRAE: American Society of Heating, Refrigerating and Air-Conditioning Engineers
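The 7°C rule of thumb compounds exponentially with inlet temperature. A small sketch (the function and its name are ours; the halving rule is the slide's):

```python
def relative_reliability(inlet_temp_c, baseline_c=21.0, halving_step_c=7.0):
    """Rule of thumb: long-term electronics reliability halves for every
    7 C of inlet temperature above 21 C. Returns 1.0 at or below baseline."""
    if inlet_temp_c <= baseline_c:
        return 1.0
    return 0.5 ** ((inlet_temp_c - baseline_c) / halving_step_c)

print(relative_reliability(28.0))  # 0.5  -- one 7 C step above baseline
print(relative_reliability(35.0))  # 0.25 -- two steps
```

This is why eliminating hot spots matters more than the average room temperature: one rack sitting a few degrees hotter than its neighbours quietly pays the reliability penalty for the whole row.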
56. Raised Floor Cooling
    • "The hardware manufacturer shall design the equipment ... for a temperature change from intake to exhaust (delta T) of not less than 15°F nor more than 20°F." (Uptime Institute)
    CFM: cubic feet per minute; a measurement of air volume velocity, often used in measuring airflow from cooling diffusers.
57. Cooling Capacity per Tile
    • 4 kW of IT load requires about 600 CFM of airflow, pretty much the maximum for a single tile!
      > 3.5 kW is 1 ton of AC per tile!
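The ~600 CFM figure follows from the standard sensible-heat airflow formula, assuming a delta-T at the top of the Uptime Institute's 15-20°F range quoted on the previous slide:

```python
def required_cfm(it_load_kw, delta_t_f=20.0):
    """Airflow needed to carry away sensible heat:
    CFM = BTU/hr / (1.08 * delta-T in F), with 1 kW = 3412 BTU/hr.
    The 1.08 factor folds in air density and specific heat at
    roughly standard conditions."""
    btu_per_hr = it_load_kw * 3412.0
    return btu_per_hr / (1.08 * delta_t_f)

print(round(required_cfm(4.0)))  # 632 -- roughly the slide's ~600 CFM per tile
```

A smaller delta-T makes the problem worse: at 15°F the same 4 kW rack needs over 800 CFM, well beyond what one perforated tile can deliver.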
58. In-Row Cooling Requires Less Power
    [Chart: input power in kW to cool 1 kW of sensible heat, broken into chiller, CW pump, XD pump and fan, for a traditional chilled-water CRAC vs. overhead rack-based Liebert XD/XDV cooling at 100% and 50% capacity; the overhead rack-based system uses roughly 27-32% less power]
