1. Electricity use and efficiency of
servers and data centers: A review of
recent data and developments
Jonathan G. Koomey, Ph.D.
http://www.koomey.com
See podcast at http://www.smartenergyshow.com/
Lawrence Berkeley National Laboratory &
Stanford University
Presented at Google
December 4, 2007
3. Our options
• Adapt – modify human systems to make them more flexible and resilient
• Suffer – accept what comes (but what comes is likely to be costly in lives, ecosystem damage, and economic disruption)
• Mitigate – reduce emissions
4. Foster innovation in
• Individual behavior and attitudes
– Purchasing of goods and services
– Purchasing of energy-using equipment
– Operating energy-using equipment
• Technologies
• Institutions
• In each case, information technology
and networks are our most potent allies
5. Innovation for Technologies
• Whole systems redesign
• Think about tasks
• Ignore illusory & historical
constraints (but heed real ones)
• Apply existing organizational
techniques and technologies in
new ways
• Build things better all the way
around so people want them for
more than efficiency
6. Innovation for Institutions
• Whole systems redesign (e.g. IT+productivity)
• Redefine business processes (e.g. Six Sigma)
– Assess opportunities
– Create local business cases
– Assign responsibility
– Measure results
– Reward good results (use prizes)
• Rethink underlying assumptions (e.g. extended
producer responsibility)
• Foster + reward innovation (e.g. Mutual Fun)
7. Harness the power of business
• Environmentalists and business must learn to work
together
• Give consumers information about companies (e.g.
scorecard.org and the Carbon Disclosure Project) and
products (e.g. Energy Star)
• Create internal and external pressure for continuous
improvement and reorganization
• Use the power of the supply chain
– Demand supplier responsibility
– Use purchasing power to move the market
9. Introduction to data centers
• Much confusion about data center
electricity use (see Mitchell-Jackson et
al., 2002 and 2003)
• Review recent data from
– Uptime Institute facilities
– Koomey study (released 15 Feb 2007)
– Report to Congress (August 2007)
• Discuss implications for industry growth
10. Data centers in the news
• Recent activity in the Southeast, Texas, and
the Northwest
– Google
– Yahoo
– Microsoft
– Others
• Announcements often don’t give relevant
details like electrically-active floor area, so
use caution in interpreting them
• Can’t build them all in one place (constraints)
• Separate hosting, search, corporate, and
supercomputers (all different markets)
11. Electricity Flows in Data Centers
[Diagram: power enters the building at 480 V from local distribution lines (with backup diesel generators standing by), flows through the UPS and PDUs to the computer racks (the uninterruptible load), and also feeds the HVAC system, lights, office space, etc. UPS = Uninterruptible Power Supply; PDU = Power Distribution Unit]
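To make the flows concrete, here is a minimal sketch that traces one kilowatt of computer load back through the power chain and adds cooling overhead. The UPS and PDU efficiencies and the HVAC and lighting overheads are hypothetical placeholders, not measurements from any particular facility.

    # Hypothetical illustration of data center power flows (not measured values).
    it_load_kw = 1.0          # power drawn by the computer racks
    pdu_efficiency = 0.95     # assumed fraction of power surviving the PDU
    ups_efficiency = 0.90     # assumed fraction of power surviving the UPS
    hvac_overhead = 0.7       # assumed kW of cooling per kW of IT load
    lights_misc_kw = 0.05     # assumed lights, office space, etc.

    power_at_ups_input = it_load_kw / (pdu_efficiency * ups_efficiency)
    total_building_kw = power_at_ups_input + hvac_overhead * it_load_kw + lights_misc_kw
    print(f"Power at the 480 V service entrance: {total_building_kw:.2f} kW per kW of IT load")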
12. IT Equipment
• Servers
– High-end enterprise servers
– Pedestal/tower servers
– Rack-mount servers
– Blade servers
• Storage
• Network equipment
– Routers
– LAN/WAN switches
– Hubs
[Photos: enterprise server, rack-mount server, blade server]
14. Uptime Institute data
• The Site Uptime Network is a technical
membership organization for data center
operators and designers
• Uptime first reported data for 1.6 to 1.9 Msf of
facilities in 2002 (for 1999-2001)
• Uptime has tracked 19 data centers for 8
years (total = 0.92 M sf in 1999)
– Total IT load, floor area, and computer power
densities (W/sf)
15. Trends in 19 Site Uptime
Network facilities
W/sf measured as watts of IT power divided by electrically active floor area
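As a minimal sketch of how this power density metric is computed (only the definition, watts of IT power divided by electrically active floor area, comes from the slide; the numbers below are invented):

    # Hypothetical example of the W/sf power density metric tracked by Uptime.
    it_load_watts = 450_000         # assumed total IT load in watts
    active_floor_area_sf = 10_000   # assumed electrically active floor area in sq. ft.

    power_density_w_per_sf = it_load_watts / active_floor_area_sf
    print(f"Computer power density: {power_density_w_per_sf:.0f} W/sf")  # 45 W/sf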
16. Study on total server power
• Details
– Published 15 February 2007
– Funded by AMD
– Authored by Jonathan Koomey
• Download it at
http://enterprise.amd.com/us-en/AMD-Business/Technology-Home/Power-Management.aspx
• Reviewers from all major industry players
17. Server power study
methods and data
• Estimated power use for servers
– 2000 and 2005
– Volume, mid-range, and high-end servers
– U.S. and World
• Used IDC data for total installed base
and most popular models
• Used manufacturer data/estimates for
typical power used per unit
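The structure of that bottom-up estimate (installed base times typical unit power, summed over server classes) can be sketched as below. The counts and wattages are placeholders; the actual study used IDC installed-base data and manufacturer power estimates.

    # Hypothetical sketch of the bottom-up estimate: installed base x typical unit power.
    HOURS_PER_YEAR = 8760

    server_classes = {
        # class: (assumed installed base, assumed average power per unit in watts)
        "volume":    (10_000_000, 200),
        "mid-range": (1_000_000, 500),
        "high-end":  (100_000, 5_000),
    }

    total_twh = sum(
        count * watts * HOURS_PER_YEAR / 1e12   # watt-hours -> terawatt-hours
        for count, watts in server_classes.values()
    )
    print(f"Direct server electricity use: {total_twh:.1f} TWh/year (illustrative only)")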
21. EPA report to Congress
• Released August 3, 2007
• Download at
http://www.energystar.gov/datacenters
• Purposes of the report
– Assess data center electricity use
– Identify barriers to efficiency
– Compile opportunities for industry collaboration
and future research
– Propose policies and identify federal leadership
opportunities
23. Trends pushing total data
center power use up
• Increasing demands for
– E-commerce
– VOIP
– Internet search
– software as a service
– video downloads
– resilience in the face of disaster
– regulatory compliance (e.g. Sarbanes-Oxley)
– IT-enabled business transformation
• More transistors on a chip + more RAM +
more volume servers
24. Trends pushing total data
center power use down
• Virtualization/consolidation
• Cooling and power constraints
• Recognition of constraints by the C level
• Metrics
– Servers + other IT equipment (E*, i.e. ENERGY STAR)
– Site infrastructure
• Utility rebates (PG&E)
25. Misplaced incentives throughout
• Energy efficiency metrics not standardized
• 90% of site infrastructure costs are related to kW, not to floor area (Uptime), but costs are almost always charged per square foot (see the illustration after this list)
• Utility bills and infrastructure costs paid by
facilities department, cost of servers paid by
IT department
• IT, facilities, CFO, and real estate folks don’t
talk (hierarchy and culture differences)
• Piling safety factor upon safety factor
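The kW-versus-square-foot point can be illustrated with a toy comparison: if infrastructure cost really scales with power, charging by floor area understates the cost of a dense, high-power installation. All figures below are invented for illustration.

    # Hypothetical comparison of charging for space vs. charging for power.
    infra_cost_per_kw_year = 2_000   # assumed power-driven infrastructure cost ($/kW-yr)
    infra_cost_per_sf_year = 200     # assumed blended floor-area charge ($/sf-yr)

    racks = [
        {"name": "low-density rack",  "kw": 2,  "sf": 30},
        {"name": "high-density rack", "kw": 20, "sf": 30},
    ]

    for r in racks:
        billed = infra_cost_per_sf_year * r["sf"]   # what the tenant is charged
        actual = infra_cost_per_kw_year * r["kw"]   # what the power draw really costs
        print(f'{r["name"]}: billed ${billed:,}/yr, actual cost ~${actual:,}/yr')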
26. Compute total costs to
understand incentives
• Simple model of total costs, including
– Cooling and other infrastructure costs
– IT capital costs
– Energy costs
– Other operating costs
• Based on current industry practice for
high performance computing facilities
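A minimal sketch of such a model, annualizing capital with a capital recovery factor, is shown below. The cost inputs, lifetimes, and discount rate are assumptions for illustration, not the figures from the Koomey/Uptime model.

    # Hypothetical annualized total cost of ownership for a data center (illustrative inputs).
    def annualize(capital_cost, lifetime_years, discount_rate=0.08):
        """Spread a capital cost over its lifetime using a capital recovery factor."""
        crf = discount_rate / (1 - (1 + discount_rate) ** -lifetime_years)
        return capital_cost * crf

    it_capital = 10_000_000            # assumed server/storage/network purchases ($)
    infra_capital = 12_000_000         # assumed power and cooling infrastructure ($)
    annual_energy_cost = 1_500_000     # assumed electricity bill ($/yr)
    other_operating_cost = 2_000_000   # assumed staff, maintenance, etc. ($/yr)

    annual_tco = (annualize(it_capital, 3)         # IT refreshed roughly every 3 years
                  + annualize(infra_capital, 15)   # infrastructure lasts much longer
                  + annual_energy_cost
                  + other_operating_cost)
    print(f"Annualized TCO: ${annual_tco:,.0f}")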
27. Site infrastructure capital
costs are 2/3 of IT cap. costs
Based on a simple model that calculates annualized total costs of ownership
of an HPC data center for the financial industry: Koomey, Jonathan,
Kenneth G. Brill, W. Pitt Turner, John R. Stanley, and Bruce Taylor. 2007. A
simple model for determining true total cost of ownership for data centers.
Santa Fe, NM: The Uptime Institute. September.
28. Efficiency opportunities
• Think “whole system redesign” (RMI)
• Align incentives to minimize True Cost of Ownership
• Low hanging fruit (Uptime, Ecos, LBNL)
– Modify current infrastructure/operations/incentives
– Kill comatose servers
– Buy efficient power supplies
• A little more work
– Metrics for servers tied to purchases
– Metrics for infrastructure efficiency (see the sketch after this list)
– DC power or high efficiency AC power
– Virtualization & consolidation
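One commonly discussed infrastructure-efficiency metric divides total facility power by the power delivered to the IT equipment (the ratio The Green Grid calls PUE). The sketch below computes it from hypothetical meter readings, not data from any real site.

    # Hypothetical infrastructure-efficiency calculation: total facility power / IT power.
    total_facility_kw = 1_900.0   # assumed power at the utility meter
    it_equipment_kw = 1_000.0     # assumed power delivered to servers, storage, network gear

    pue = total_facility_kw / it_equipment_kw   # lower is better; 1.0 is the ideal
    overhead_kw = total_facility_kw - it_equipment_kw
    print(f"PUE: {pue:.2f} ({overhead_kw:.0f} kW of overhead for cooling, UPS losses, etc.)")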
29. Intel’s new Eco-Rack
• Idea proposed by JK to
Lorie Wigle of Intel in early
Dec. 2006
• Announced at Intel
Developer Forum 9/18/07
• 16-18% savings
compared to good current
practice
• Normalized workloads
• Eco-Rack 1.5 and 2.0 now
in development
30. Eco-Rack savings 16-18%
Data current as of September 18, 2007. Both Standard and Eco-Rack cases assume the power save switch (SpeedStep) is on. Contact: JGKoomey@stanford.edu or rml@hpc.intel.com with questions or comments.
31. Sources of Eco-Rack Savings
Data current as of September 18, 2007. Both Standard and Eco-Rack cases assume the power save switch (SpeedStep) is on. Contact: JGKoomey@stanford.edu or rml@hpc.intel.com with questions or comments.
32. Conclusions
• Total power
– for servers is about 1.2% of U.S. electricity use
(including cooling and auxiliaries).
– for servers plus networking, storage, and
cooling/auxiliary equipment is about 1.5% of U.S.
electricity use
– roughly doubled from 2000 to 2005
• If the IDC installed base forecast to 2010 holds, server power use will rise another 40% to 76% from 2005 (see the sketch after this list)
• W/sf appears to be going up
• Volume servers driving growth
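As a quick check on that range, the sketch below applies the 40% and 76% growth factors to a placeholder 2005 figure; the starting value is assumed for illustration, and only the percentage range comes from the slide.

    # Hypothetical projection of 2010 server power use from an assumed 2005 base.
    power_2005_twh = 45.0                   # assumed 2005 total, TWh (placeholder)
    low_growth, high_growth = 0.40, 0.76    # growth range if the IDC forecast holds

    low_2010 = power_2005_twh * (1 + low_growth)
    high_2010 = power_2005_twh * (1 + high_growth)
    print(f"Projected 2010 range: {low_2010:.0f} to {high_2010:.0f} TWh")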
33. Conclusions (continued)
• Perverse incentives abound
• Organizational changes are needed
– driven mainly by infrastructure costs going
up as a fraction of total cost of data centers
• Consensus efficiency metrics are
coming soon for IT and infrastructure
• The industry is focused on improving
data center efficiency, and big changes
are afoot (e.g. Eco-rack)
34. The final word
• Institutional and personal change are at
least as important as technical change
for solving the climate problem
– even currently available technologies are not being adopted today
– Information technology can enable these
changes to happen more rapidly than ever
before
35. Key web sites
• EPA on data centers
http://www.energystar.gov/datacenters
• LBNL on data centers:
http://hightech.lbl.gov/datacenters.html
• The Green Grid:
http://www.thegreengrid.org/
• The Uptime Institute:
http://www.upsite.com/TUIpages/tuihome.html
37. Some important background:
What do we know about climate?
• “Unequivocal” that the earth’s climate
is warming
• More than 90% certainty that human
emissions of CO2 and other greenhouse
gases are the cause
(Findings from IPCC 2007 WGI, AR4)
39. Dramatic recent changes in CO2 and CH4 concentrations
“We know humans are responsible for the
CO2 spike [since pre-industrial times]
because fossil CO2 lacks carbon-14, and
the drop in atmospheric C-14 from the
fossil-CO2 additions is measurable.”
–John P. Holdren, Harvard University
Source of graphs: IPCC Working Group 1 Summary for Policymakers, Fourth Assessment Report, 2007.
40. We’re already seeing effects of
humans on the earth’s climate
• Reduced Arctic and Antarctic ice cover
• Glacial melting
• Sea level rise
• Floods
• Wildfires
• Drought
• Extreme weather events
• Damage to ecosystems
41. Going, going, gone?
Median minimum, 1979-2000: 6.74 M square km
September 21, 2005: 5.32 M square km
September 16, 2007: 4.13 M square km
The difference between the median minimum Arctic ice coverage and the extent on Sept. 16, 2007 is equal to the area of Alaska and Texas combined (2.61 M sq. km or 1 M sq. miles). http://nsidc.org/news/press/2007_seaiceminimum/20070810_index.html
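A quick arithmetic check of the figures above (the square-kilometer values come from the slide; the km²-to-mi² conversion factor is standard):

    # Check: difference between the 1979-2000 median minimum and the 2007 minimum extent.
    median_min_mkm2 = 6.74   # million sq. km (slide value)
    min_2007_mkm2 = 4.13     # million sq. km (slide value)

    diff_mkm2 = median_min_mkm2 - min_2007_mkm2   # 2.61 million sq. km
    diff_msqmi = diff_mkm2 * 0.3861               # ~1.0 million sq. miles
    print(f"Difference: {diff_mkm2:.2f} M sq. km (~{diff_msqmi:.2f} M sq. miles)")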
42. If we don’t alter course, we’ll end up where we’re headed
Global average surface temperature is heading for a state outside the range experienced during the tenure of Homo sapiens on Earth (slide courtesy of John P. Holdren, Harvard University).
[Chart: IPCC 2007 scenarios for global average surface temperature to 2100, starting from year 2000 concentrations]