ENVIRONMENTALLY OPPORTUNISTIC COMPUTING
EOC Engineering
A Thesis
Submitted to the Graduate School
of the University of Notre Dame
in Partial Fulfillment of the Requirements
for the Degree of
Master of Science
in
Engineering, Science & Technology Entrepreneurship
by
S. Kiel Hockett Jr., BSAE, BA
Dr. David Go, Advisor
Robert Alworth, Director
Graduate Program in Engineering, Science & Technology Entrepreneurship Excellence
Notre Dame, Indiana
July 2011
© Copyright by
S. Kiel Hockett Jr.
2011
All Rights Reserved
ENVIRONMENTALLY OPPORTUNISTIC COMPUTING
EOC Engineering
Executive Summary
by
S. Kiel Hockett Jr.
Waste heat created by high performance computing and information commu-
nications technology is a critical resource management issue. In the United States,
billions of dollars are spent annually to power and cool these data systems. The
August 2007 U.S. Environmental Protection Agency Report to Congress on Server
and Data Center Efficiency estimated that the U.S. spent $4.5 billion on electric
power to operate high performance and information technology servers in 2006,
and that amount has grown to over $8 billion in 2011. The cooling systems for the
typical data center use close to 40% of the total power draw of the data center,
totaling over $2.5 billion nationally.
To address this market need, EOC Engineering will develop and deploy mod-
ular computing nodes that utilize our unique algorithm to supply usable heat
when and where it is needed and manage the data load when it is not. The
EOC Engineering nodes will function just like a normal data center, complete
with Uninterruptible Power Supply (UPS), power systems, and protection from
fire, smoke, humidity, condensation, and temperature changes. The nodes have
the option of also installing active cooling measures such as computer room air
chillers and handlers, but the algorithm renders those measures unnecessary and
the customer can save on capital costs by not installing them. The unique EOC
S. Kiel Hockett Jr.
Engineering algorithm is known as the Green Cloud Manager (GCM) and is de-
signed to manage the IT load in each of the EOC Engineering nodes within a
company or municipality to supply heat where the customer needs it, and concur-
rently mitigate the cooling costs for the server hardware.
The EOC nodes will reduce the power consumption of the customer’s computer
hardware by almost 40% and save the customer thousands of dollars each year in
heating costs. By removing all active cooling methods in a typical 5,000-square-
foot data center, a customer will save more than $370,000 each year. Because
these nodes will reduce the carbon footprint of the customer, in addition to actively
saving the customer money in reduced electric bills, the customer may be eligible
for government grants for increasing the efficiency of their data center.
This thesis will: offer an explanation of the science and engineering underlying
the proposed innovation; provide a comprehensive review of the potential applica-
tions for the technology; detail the intellectual property inherent in the technology
and provide a competitive analysis of it; outline the barriers to successful commer-
cialization of the product; and outline additional work still required to create a
prosperous business. The appendices will provide an examination of the prototype
and the measurements taken to determine its efficacy, explore the section of the
United States Code that deals with the patentability of inventions, and present
the proposed business plan for EOC Engineering.
DEDICATION
To my parents, who, while always encouraging me to leave school and enter the
“real world,” were supportive of my love for Notre Dame and my desire to
continue my education.
Thank you for believing in me.
CONTENTS
DEDICATION
FIGURES
TABLES
ACKNOWLEDGEMENTS

CHAPTER 1: EXPLANATION OF SCIENCE AND ENGINEERING BASIS
   1.1 Challenge
   1.2 Current Techniques
      1.2.1 Power Shedding
      1.2.2 Power Forecasting
      1.2.3 High-Density Zone Cooling
      1.2.4 Hot-Aisle or Cold-Aisle Containment
      1.2.5 Water Cooling
   1.3 Overview of Proposed Environmentally Opportunistic Computing
      1.3.1 Building-Integrated Information Technology
      1.3.2 Market Forces
   1.4 EOC Prototype
      1.4.1 System Overview
      1.4.2 Computational Control
   1.5 Conclusion

CHAPTER 2: COMPREHENSIVE REVIEW OF THE POTENTIAL APPLICATIONS
   2.1 Retrofits
      2.1.1 Universities and Government Data Centers
   2.2 New Facilities
      2.2.1 Major Industrial or Commercial Sites
   2.3 Preheating Water

CHAPTER 3: INTELLECTUAL PROPERTY
   3.1 Patent Protection
      3.1.1 Novelty
         3.1.1.1 Events Prior to the Date of Invention
         3.1.1.2 Events Prior to Filing the Patent Application
      3.1.2 Nonobviousness
   3.2 Competitive Analysis

CHAPTER 4: BARRIERS TO SUCCESSFUL COMMERCIALIZATION
   4.1 Funding
   4.2 Technology Shortfalls
   4.3 Organization and Staffing
   4.4 Market Acceptance
   4.5 Channels to Market

CHAPTER 5: ADDITIONAL WORK REQUIRED

APPENDIX A: PROTOTYPE THERMAL MEASUREMENTS

APPENDIX B: PATENTABILITY PRIMER
   B.1 Algorithm
   B.2 Novelty
   B.3 Nonobviousness

APPENDIX C: BUSINESS PLAN
   C.1 Executive Summary
      C.1.1 Objectives
      C.1.2 Mission
      C.1.3 Keys to Success
   C.2 Company Summary
      C.2.1 Company Ownership
      C.2.2 Start-up Summary
   C.3 Products and Services
   C.4 Market Analysis Summary
      C.4.1 Market Segmentation
      C.4.2 Service Business Analysis
         C.4.2.1 Customer Buying Patterns
   C.5 Strategy and Implementation Summary
      C.5.1 Competitive Edge
      C.5.2 Pricing Model & Revenue Forecast
      C.5.3 Product Development & Milestones
   C.6 Management & Staffing Summary
   C.7 Financials

BIBLIOGRAPHY

FIGURES

1.1 Layout of prototype EOC container
1.2 Schematic of prototype EOC container
1.3 Basic algorithm workflow
1.4 Spatially-aware algorithm workflow
1.5 Comparison of GCM rule-sets
1.6 EOC in a Municipality
A.1 Schematic of average duct outlet velocity
A.2 Infrared thermal maps of a server
A.3 Container temperatures and available waste heat
A.4 Temperature measurements
C.1 U.S. Data Center Market
C.2 Data Center Power Draw

TABLES

5.1 Activities & Milestones
C.1 Market forecast to 2020
C.2 Market size calculations
C.3 Anticipated market share
C.4 Anticipated projects and revenue
C.5 Milestones
C.6 Income statement, years 1–5
C.7 Income statement, years 6–10
C.8 Balance sheet, years 1–5
C.9 Balance sheet, years 6–10
C.10 Cash flow statement, years 1–5
C.11 Cash flow statement, years 6–10
ACKNOWLEDGEMENTS
To Maria, our “Den Mother” — If it weren’t for you, none of us would have
gotten as far as we have. Thank you for pushing us, for consoling us, for making
sure we were taken care of, and, above all, for putting up with us. You didn’t
have to, and we didn’t deserve it.
You’re a saint.
To my advisor, Professor Go — Thank you for agreeing to work with me on
this project. It’s been one of my best experiences at Notre Dame. I’ve learned a
lot, and I hope I — and this thesis — have lived up to your expectations.
To our ESTEEM[ed] Director, Professor Alworth — I can’t believe you let
me into this program. Seriously, what were you thinking? But, since you did, I
owe you a debt of gratitude. I hope having me in the program wasn’t too
stressful on you. I know you’ve taught me quite a bit, and I thank you for it.
Kiel
CHAPTER 1
EXPLANATION OF SCIENCE AND ENGINEERING BASIS
This chapter provides a background of current centralized data center concepts
and the challenges they encounter such as energy usage, utility costs, and heat
dissipation and management. It then moves into a discussion of techniques, some
established, some novel, that are being used to mitigate one or more of these
challenges while stressing that these techniques all retain the singular problem
of disposing of unwanted heat generated by the hardware they attempt to cool.
Next, two new concepts are introduced: Environmentally Opportunistic Comput-
ing and Building-Integrated Information Technology. Together, these concepts
bind information technology (IT) centers and buildings into a single unit where
the machine hardware can provide useful energy to the building and the building
can provide a useful heat sink for the IT systems. Finally, a current Environ-
mentally Opportunistic Computing prototype is discussed, and is shown to be a
unique solution to server cooling and building heating.
1.1 Challenge
Waste heat created by high performance computing and information commu-
nications technology (HPC/ICT) is a critical resource management issue. In the
United States, billions of dollars are spent annually to power and cool data sys-
tems. The August 2007 United States Environmental Protection Agency “Report
to Congress on Server and Data Center Efficiency” estimates that the U.S. spent
$4.5 billion on electrical power to operate HPC/ICT servers in 2006, and the same
report forecasts that our national ICT electrical energy expenditure will nearly
double — to $7.4 billion — by 2011. The reported number of federal data centers
grew from 432 in 1998 to more than 2000 in 2009. In 2006, those federal data
centers alone consumed over 6 billion kW·h of electricity and will exceed 12 billion
kW·h by 2011. Current energy demand for HPC/ICT is already three percent
of U.S. electricity consumption and places considerable pressure on the domestic
power grid; the peak load from HPC/ICT is estimated at 7 GW — the equivalent
of 15 baseload power plants [31, 120].
As a result, optimized performance and enhanced systems efficiency have in-
creasingly become priorities amid mounting pressure from both environmental ad-
vocacy groups and company bottom lines. However, despite evolving low power
system architecture, demands for expanded computational capability continue to
drive utility power consumption toward economic limits equal to capital equip-
ment costs [31]. The faster and more efficient computational capability becomes,
the more society grows to require it, concomitantly increasing power utilization
for operation and cooling; put another way, top-end performance often translates
to top-end power demand and heat production [12]. Thus, architects and engi-
neers must always contend with growing heat loads generated by computational
systems, and the associated energy wasted in cooling them.
Recognizing that energy resources for data centers are indeed limited, several
professional organizations within the technology industry have begun to explore
this problem. The High-Performance Buildings for High Tech Industries Team at
Lawrence Berkeley National Laboratory [24], the ASHRAE (American Society of
Heating, Refrigerating and Air-Conditioning Engineers) Technical Committee 9.9
for Mission Critical Facilities, Technology Spaces, and Electronic Equipment [87],
and the Uptime Institute [30] have all expressed interest in creating solutions.
At the same time, efforts by corporations, universities, and government labs to
reduce their environmental footprint and more effectively manage their energy
consumption have resulted in the development of novel waste heat exhaust and
free cooling applications, such as the installation of the Barcelona Supercomputing
Center — MareNostrum — in an 18th
century Gothic masonry church [15], or
through novel waste heat recirculation applications, such as a centralized data
center in Winnipeg that uses recirculated thermal energy to heat the editorial
offices of a newspaper directly above [43]. Similar centralized data centers in
Israel [12] and Paris [62] use recaptured waste heat to condition adjacent office
spaces and an on-site arboretum, respectively.
Despite systems-side optimization of traditional centralized data centers and
advances in waste heat monitoring and management, current efforts in computer
waste heat regulation, distribution, and recapture are focused largely on immedi-
ate, localized solutions, and have not yet been met with comprehensive, integrated
whole building design solutions. While recommendations developed recently by
industry leaders to improve data center efficiency and reduce energy consumption
through the adoption of conventional metrics for measuring Power Usage Effec-
tiveness (PUE) recognize the importance of whole data center efficiency, the guide-
lines do not yet quantify the energy efficiency potential of a building-integrated
distributed data center model [11].
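For reference, the conventional industry definition of PUE (stated here for clarity; it is not a formula given in this text) is the ratio of total facility power to the power delivered to the IT equipment itself, so an ideal facility approaches a PUE of 1.0:

\[
\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}}
\]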
1.2 Current Techniques
Current designs for reduced heating and cooling loads, with the exception of
those in Sec. 1.1, are mostly stopgap measures used to improve the transfer of
heat away from HPC/ICT systems and into the environment outside the data
center, or to limit the amount of energy expended and thus lower the cost of
operation. Most current technologies focus on increasing the rapidity with which
heat is carried away from the servers and improving the efficiency of venting that
heat to the atmosphere.
1.2.1 Power Shedding
Power shedding is a very basic and blunt solution to cooling data centers.
These measures involve shedding the server workload to the minimum feasible
level or even powering servers down completely; they are typically most applicable
during power outages, cooling failures, and brownouts. This impractical method
is a last resort for data centers, and is only used in times of absolute necessity,
but nonetheless is an effective means of controlling the heat production and the
cooling necessary to maintain safe or tolerable temperatures.
1.2.2 Power Forecasting
Power forecasting can be implemented when computing needs are known in
advance and can be scheduled to coincide with periods of cheap energy, particularly
at night. It is most applicable for research institutions that are able to use a data
management program to schedule computing jobs in advance and dispatch them
from the queue when electricity prices fall to a preset level, or when the outside
ambient temperature is low enough that the air requires little to no cooling before
it is delivered to the data center. This system requires smart utility meters designed
to recognize the price of electricity and a data management program capable of
deploying jobs only when certain conditions are met.
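As an illustration of the kind of data management program described above, the following is a minimal Python sketch of a condition-gated dispatcher. The thresholds, data sources, and job names are hypothetical assumptions for illustration, not part of any specific product.

```python
# Minimal sketch of a condition-gated job dispatcher in the spirit of power
# forecasting. A real deployment would read a smart utility meter and a
# weather feed instead of taking the values as arguments.

from collections import deque

PRICE_LIMIT = 0.06      # $/kWh: dispatch only when electricity is at or below this price
FREE_COOLING_TEMP = 60  # deg F: or when outside air needs little conditioning

def should_dispatch(price_per_kwh, outside_temp_f):
    """Dispatch when energy is cheap or when outside air provides free cooling."""
    return price_per_kwh <= PRICE_LIMIT or outside_temp_f <= FREE_COOLING_TEMP

def run_scheduler(job_queue, price_per_kwh, outside_temp_f):
    """Pop queued jobs for execution while conditions remain favorable."""
    dispatched = []
    while job_queue and should_dispatch(price_per_kwh, outside_temp_f):
        dispatched.append(job_queue.popleft())
    return dispatched

if __name__ == "__main__":
    queue = deque(["cfd_run_12", "genome_align_3", "monte_carlo_7"])
    print(run_scheduler(queue, price_per_kwh=0.05, outside_temp_f=72))
```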
1.2.3 High-Density Zone Cooling
Many data centers are an amalgamation of low-density and high-density servers.
By separating these into high-density and low-density zones, the cooling and air-
flow can be scaled for each sector — high where it is needed, low where it is not —
as opposed to scaling only for the hottest servers. A high-density zone is a physical
area of the data center allocated to high-density operations, with self-contained
cooling so that the zone appears thermally “neutral” to the rest of the room —
requiring no cooling other than its own, and causing little or no disturbance to the
existing airflow in the room. The zone can be cooled and managed independently
of the rest of the room, thus simplifying deployment and minimizing disruption
of existing infrastructure [75].
1.2.4 Hot-Aisle or Cold-Aisle Containment
Prevention of hot and cold air mixing is key to all efficient data center cooling
strategies. Many data centers employ alternating hot- and cold-aisles to reduce
such mixing. By building containment partitions at the end of these aisles, the
chance for mixing is reduced. If the data center uses a cold-aisle containment
scheme, the hot air is drawn into a cooling system and pumped back through
a raised floor under the cold-aisle. If it is a hot-aisle scheme, the entire room
becomes a cold air plenum to feed the servers and the hot-aisle can either be
cooled and returned to mix with the room air, or vented to the outside and cooler
air can be brought in and conditioned to ensure proper temperature and humidity
[74].
1.2.5 Water Cooling
Air cooling for data centers is limited in that it creates a complex and ineffi-
cient infrastructure by requiring, from the beginning, all of the cooling hardware
necessary to meet future demands, without regard for current usage of the space.
Water cooling is a system whereby localized, passive, low-power dissipation liquid
cooling devices can be scaled rack by rack and permit a “pay as you go” cooling
implementation. This cooling hardware is modular and can be purchased at the
same rate as future IT hardware purchases. Water has a much higher thermal
capacity than air — roughly 3,400 times higher — and thus can be used to more
quickly remove heat. Additionally, if located near a natural source of water, ‘free’
cooling may be obtained for the data center [76], although this method necessitates
the use of piping and pumps to control the flow of water and has the additional
expense of actively managing the pumping system.
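As a rough check of the cited figure (standard property values and my own arithmetic, not the source's), the ratio refers to heat capacity per unit volume:

\[
\frac{\rho_{w}\,c_{p,w}}{\rho_{a}\,c_{p,a}} \approx
\frac{(1000\ \mathrm{kg/m^3})(4186\ \mathrm{J/(kg\,K)})}{(1.2\ \mathrm{kg/m^3})(1005\ \mathrm{J/(kg\,K)})} \approx 3{,}500,
\]

which is consistent with the roughly 3,400-fold figure quoted above.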
1.3 Overview of Proposed Environmentally Opportunistic Computing
Environmentally Opportunistic Computing (EOC) is a sustainable computing
concept that capitalizes on the mobility of modern computing processes and en-
ables distributed computer hardware to be integrated into a building to facilitate
the consumption of computational waste heat in the building environment. Much
like a ground-sourced geothermal system, EOC performs as a “system-source”
thermal system, with the capability to create heat where it is locally required, to
utilize energy when and where it is least expensive, and to minimize a building’s
overall energy consumption. Instead of expanding active measures to contend
with thermal demands, the EOC concept utilizes HPC/ICT coupled with system
controls to enable energy hungry, heat producing data systems to become ser-
vice providers to a building while concurrently utilizing aspects of a building’s
heating, ventilating, and air conditioning (HVAC) infrastructure to cool the ma-
chines; essentially, the building receives ‘free’ heat, and the machines receive ‘free’
cooling [31].
The EOC philosophy recognizes that despite evolving lower power architec-
tural technologies, demands for increased capability will propel power consump-
tion toward economic limits. The central component of EOC is the requirement
for efficient re-utilization of this expended electrical energy as thermal energy. In
contrast to the design of a single facility around centralized computational in-
frastructure, EOC capitalizes on grid and virtualization technologies to distribute
computational infrastructure in-line with existing facility thermal requirements.
The core of EOC is the recognition that computational infrastructure can be
strategically distributed via a grid in-line with facilities and processes that require
the thermal byproduct that computation delivers. Grid computing harnesses the
idle processing power of various computing units that may be spread out over a
wide geography, and uses that processing power to compute one job. The job itself
is controlled by one main computer, and is broken down into multiple tasks which
can be executed simultaneously on different machines. Grid computing allows
unused processing power to be effectively utilized, and reduces the time taken to
complete a large, computation intensive, task. Any framework built upon this un-
derstanding reduces or removes cooling requirements and realizes cost sharing on
primary utility expenditures. This contrasts with traditional data center models
where a single facility is designed around a centralized computational infrastruc-
ture. With similar motives for energy efficiency and environmental stewardship,
multiple organizations have made strides in the optimization of traditional cen-
tralized data centers [24, 30, 87].
Individual data centers have re-utilized the thermal energy to the benefit of
their own facilities; however, a grid distributed approach is necessary to utilize all
of the thermal energy effectively and remove the cooling requirement. EOC un-
derstands that the grid heating model centers around computation, energy trans-
formation, and energy transfer and recognizes that the transformation and trans-
portation of waste heat quickly reduces efficiency. Therefore, EOC targets the
distribution and scale of each heating grid node to match the geographic and
thermal requirements of the target heat sink.
1.3.1 Building-Integrated Information Technology
EOC is best described as a building-integrated information technology (BIIT)
that distributes IT hardware — computer workstations, displays, or high perfor-
mance computing servers — throughout an institution instead of consolidating
them in a single, central facility. The building-integrated nodes interact with the
energy requirements and capabilities of the buildings in which they are located to
deliver recoverable waste heat in order to offset a building’s energy requirements
and to utilize natural cooling from the building. To be truly integrated, however,
the deployment of building-integrated nodes must consider the impact on archi-
tectural form and function as well as the optimal engineering solution and must
answer the following questions:
1. How can the nodes be integrated into existing durable building stock in ways
that limit the impact on architectural form and the function of a building,
limiting adverse effects on the building occupants?
2. What is the potential for a technology such as building-integrated nodes to
contribute to optimized form-generation of new buildings in concert with
other existing passive design technologies (building location, orientation,
massing, materiality, bioclimatic responses)?
3. How should these nodes be controlled within a building or across a collection
of buildings such that they meet the requirements of both the computing
users and the building occupants?
If a building is to be retrofitted to make use of EOC nodes, the location of these
nodes is initially constrained by practical and energy performance limitations: the
node must be connected to the centralized HVAC system and duct work and it
must be located near the room(s) it will heat. If the building contains a large, open
floor of employee workstations, placing the EOC node somewhere along existing
HVAC ducts may not only impact the physical space, but it may also decrease
the usable space, alter the temperature distribution in the room, affect sight lines,
and impact the room lighting. For any location and orientation along the existing
HVAC duct lines, these impacts can be objectively quantified and used to evaluate
the qualitative comfort of the space.
Additionally, the introduction of EOC nodes as part of an HVAC system fur-
ther complicates an already complex control system. By itself, an HVAC system
attempts to deliver a combination of environmental and conditioned air from air
handling units in order to maintain a comfortable temperature (68–72 °F [1])
and good air quality (15 ft³/min per person) during peak load conditions based on
expected occupancy patterns. The air handling units are scheduled on an hour-
by-hour basis based upon these patterns. Actual heating loads and air quality,
however, can vary randomly from expected values and this variation is handled by
zone controls which adjust damper positions and reheating elements in the HVAC
system’s terminal box.
The introduction of EOC nodes as reheating elements in the HVAC system’s
terminal box is complicated by the time-varying nature of the EOC node’s activity.
The nodes produce heat as a by-product of computational loads, but the load
distribution can fluctuate over time. A major challenge of this product will be
control of the building’s existing HVAC system in order to make optimal use of the
excess heat generated by the EOC nodes while maintaining or exceeding health
and comfort standards. Similarly, when the computational units are receiving
free cooling, which may be time-variant based on HVAC production, the control
system must distribute computational loads in order to preserve the ICT/HPC
hardware.
1.3.2 Market Forces
Unlike current approaches to managing IT hardware, EOC integrates IT into
buildings to create heat where it is already needed, to exploit cooling where it is
already available, and to minimize the overall IT and building energy consumption
of an organization. IT hardware consumes more than 3% of the U.S. electrical bud-
get, and this consumption will only grow as the demand for enhanced computing
capability continues. To that end, the national energy expenditure of data centers
is expected to increase by 50% from 2007 to 2011 [120]. Therefore, there is both
a strong economic and strong environmental need to readdress how IT hardware
is deployed and utilized as information technology continues to define and shape
modern life.
The American Physical Society suggests that up to 30% of the energy used
in commercial buildings is wasted [14, 81], and it has been shown that HVAC
accounts for 50% of the energy consumed in commercial buildings [37]. Efforts
to improve building energy efficiency through the LEED program have achieved
limited success [115] with the “lack of innovative controls and monitoring” iden-
tified as one of the chief obstacles to high energy efficiency. The development of a
networked, distributed control model for BIIT deployment and optimization will set
the standard for improved approaches to traditional HVAC systems.
1.4 EOC Prototype
1.4.1 System Overview
The University of Notre Dame Center for Research Computing (CRC), the
City of South Bend, and the South Bend Botanical Society have collaborated
on a building-integrated distributed data center prototype at the South Bend
Botanical Garden and Greenhouse (BGG) called the Green Cloud Project —
the first field application of EOC. The Green Cloud prototype is a container that
houses CRC HPC servers and is situated immediately adjacent to the BGG facility
where it is ducted into one of the BGG public conservatories. The HPC hardware
components are directly connected to the CRC network and are currently able
to run typical University-level research computing loads. The heat generated
from the HPC hardware is exhausted into the BGG conservatory with the goal
of offsetting wintertime heating requirements and reducing BGG annual heating
expenditures.
The Green Cloud node prototype was designed to minimize cost while provid-
ing a suitably secure facility for use outdoors in a publicly accessible venue. The
container-based solution is 20 ft long by 8 ft wide and retrofitted with the following
additions: a 40 kW capacity power panel with 208 V power supplies on each rack,
lighting, internally insulated walls, man and cargo door access, ventilation louvers,
small fans and duct work connecting the node to the greenhouse. The prototype
has a total cost of under $20,000. Exterior power infrastructure – including the
transformer, underground conduit, panel, and meter – was coordinated by Amer-
ican Electric Power (AEP) and the City of South Bend. The slab foundation was
also provided by the City of South Bend. High bandwidth network connectivity
critical to viable scaling is possible via a 1Gb fiber network connection to the
University of Notre Dame campus on the St. Joseph County MetroNet Backbone.
The general overview of the Green Cloud setup can be seen in Fig. 1.1.
Over 100 four-core machines of two types were provided by eBay Corporation
for the Green Cloud prototype. Specific server model information is unavail-
able due to proprietary restrictions. For the data in this work, the machines
are arranged in three racks, as seen in Fig. 1.2, and placed in a simplified cold-
aisle/hot-aisle setup running through the middle of the container. Two networked
environmental sensors (APC AP9512THBLK sensors with APC AP9319 environmental
monitoring units) were also installed for real-time temperature and humidity
monitoring.
Figure 1.1. Layout of prototype EOC container integrated into BGG
facility
Figure 1.2. Schematic of prototype EOC container
1.4.2 Computational Control
The servers of the Green Cloud Project are fully integrated into the Notre
Dame Condor pool (http://crcmedia.hpcc.nd.edu/wiki/index.php/Notre_Dame_Condor_Pool),
to which the end-users submit their high-throughput jobs, that is, jobs that use
many computing resources over long periods of time to accomplish a computational
task. Condor (http://www.cs.wisc.edu/condor/) is a distributed batch computing
system used at many institutions around the world that allows many users to share
their computing resources. This allows users access to more computing power than
any of them individually could afford. Condor harnesses idle cycles from underused
machines and sets tasks for those machines to complete while the native user is
not present.
As opposed to a traditional data center, machines in the Green Cloud pool
are additionally managed by an environmentally-aware Green Cloud Manager
(GCM) control system. This is necessary because the prototype is not fitted
with HVAC equipment of its own and relies solely on free cooling by either outside
air through the louvers or greenhouse air through a return vent during the hot
and cold seasons, respectively. The primary role of the GCM is to maintain each
machine within its safe operating temperatures as stated by their manufacturers
by shutting it down if needed to prevent damage. At the same time, the GCM
attempts to maximize the number of machines available for scientific computa-
tions, therefore maximizing the temperature of the hot-aisle air that is used for
greenhouse heating.
The GCM interfaces with both Condor and xCAT (Extreme Cloud Administration
Toolkit, http://xcat.sourceforge.net/), which provides access to the functionality
of the hardware’s built-in service processors: power state control and
measurements of intake, RAM (Random-Access Memory, a form of computer data
storage), and CPU (Central Processing Unit, the portion of a computer system that
carries out the instructions of a computer program) temperatures, fan speeds, and
voltages.
Tests performed with a thermal imaging camera have confirmed the average intake
temperatures reported by the machine’s service processors to be suitably accurate.
As such, these are used by the GCM to decide whether or not the machine is
operating within a safe temperature range.
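To illustrate how such an interface might be driven from Python, the sketch below shells out to xCAT's rpower and rvitals command-line tools. The output parsing and node-range names are assumptions made for illustration and are not the GCM's actual code.

```python
import subprocess

def xcat(command, noderange, *args):
    """Run an xCAT command-line tool (e.g., rpower, rvitals) and return its output lines."""
    result = subprocess.run([command, noderange, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

def intake_temperatures(noderange):
    """Collect per-node ambient/intake temperature readings from the service processors.

    The line format parsed here (e.g., "node01: Ambient Temp: 91.4 F") is assumed;
    real rvitals output varies by hardware vendor.
    """
    readings = {}
    for line in xcat("rvitals", noderange, "temp"):
        node, _, rest = line.partition(":")
        if "Ambient" in rest:
            readings[node.strip()] = rest.split(":")[-1].strip()
    return readings

def set_power(noderange, state):
    """Set the power state of a node range via rpower ('on' or 'off')."""
    return xcat("rpower", noderange, state)
```

In the prototype, decisions derived from readings like these are then pushed back out as new Condor configurations and power-state actions.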
The Condor component handles all of the scientific workload management:
deploying jobs on running servers, evicting jobs from machines meant to be shut
down, and monitoring the work state of each core available in the Green Cloud.
The GCM posts new Condor configurations to the machines whenever any actions
are required. The interval for Condor collector status updates was lowered from
the default 5 minutes to 1 minute in order to provide the GCM with more up-to-
date information.
The GCM is written in Python and provides rule-based control of the servers
running in the Green Cloud based upon xCAT data as well as on-line measure-
ments of the cold-aisle/hot-aisle temperatures in the container gathered from the
two APC sensors. The rules are applied every 2 minutes to each machine in-
dividually, and decide what action the machine should take under the current
conditions: start (machine is started, Condor starts running jobs), suspend (Con-
dor jobs are suspended, machine is running idly), and hibernate (Condor jobs are
evicted, machine shuts down).
The basic GCM rule-set, Algorithm 1, was the first used to control the heat of
the servers themselves and keep the machines from operating beyond supplier-
specified environment temperature ranges, preventing wear due to overheating. In
this rule-set, whenever a machine’s intake temperature exceeded the stop tem-
perature (106 °F) it was hibernated. The machine was only restarted after its
intake temperature dropped below a starting point (99 °F) to prevent excessive
power cycling. Fig. 1.3 (courtesy of Eric Ward) shows a walkthrough of the
algorithm, where Tin is the inlet temperature of the server, Tcrit is the stop
temperature, and Tstart is the temperature at which the machine can restart.

Algorithm 1: Basic rule-set for the GCM
1: if $mytemp ≤ $start_temp then start
2: if $mytemp < $sleep_temp then continue
3: if $mytemp ≥ $sleep_temp then hibernate

Figure 1.3. Baseline logic algorithm workflow
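The following is a minimal Python sketch of this basic rule-set. The function and constant names are illustrative assumptions; the real GCM enacts the decision through its Condor and xCAT interfaces rather than returning a string.

```python
# Illustrative sketch of the basic GCM rule-set (Algorithm 1).
START_TEMP = 99.0    # deg F: a machine may be (re)started below this intake temperature
SLEEP_TEMP = 106.0   # deg F: a machine is hibernated at or above this intake temperature

def basic_rule(intake_temp_f, is_running):
    """Return the action for a single machine under the basic rule-set."""
    if intake_temp_f <= START_TEMP and not is_running:
        return "start"        # cool enough to boot and accept Condor jobs
    if intake_temp_f >= SLEEP_TEMP and is_running:
        return "hibernate"    # too hot: evict Condor jobs and shut down
    return "continue"         # otherwise leave the machine in its current state

# Applied every control interval (2 minutes in the prototype) to each machine.
if __name__ == "__main__":
    for temp, running in [(95.2, False), (103.4, True), (108.1, True)]:
        print(f"{temp:6.1f} F, running={running} -> {basic_rule(temp, running)}")
```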
With this basic rule-set, a good proportion of machines (33 of 60) were running
Condor jobs during afternoons with outside temperatures ranging from 82 °F to
95 °F. As expected, the coldest machines were at the bottom of the racks, and the
coolest rack was placed near the intake louver. On all racks, the 3–5 top machines
were always off, except for cold early mornings, effectively rendering them useless.
This was indicative of existing improvement opportunities in the basic algorithm.
There also existed a large intake temperature difference (8–15 °F) between ma-
chines that were on and running jobs and ones that were hibernated. The differ-
ence was even sharper between machines directly above or below each other in a
rack. This led to the observation that a server’s intake fans are not spinning when
the machine is hibernated. This results in insufficient airflow for the machine to
cool down below the point of restart and can even result in the recirculation of
hot air from the hot-aisle to the cold-aisle. Because of this, during the early hours
of the night there were hibernating machines as hot as 115 °F at the top of the
racks while the machines at the bottom were only 75 °F.
Such conditions prompted further work on the GCM to include on-line fetch-
ing of hot- and cold-aisle temperatures and the placement of a machine in the
rack. With this new information, a spatially-aware rule-set, Algorithm 2, was
introduced. The rules regarding hibernation were split in two, depending on the
temperature of the cold-aisle. When the cold-aisle was above 85 °F, the behavior
stayed the same as the basic approach. In cases where the cold-aisle was be-
low this level, the rules changed. The algorithm now suspends the machine from
performing more Condor jobs instead of hibernating it after exceeding the sleep
temperature of 106 °F. The machine is only hibernated (shut down) if the upper
operating temperature of 109 °F is exceeded. This rule prevents newly started ma-
chines from re-hibernating more quickly than they can cool down. This algorithm
forces a hibernating machine to wake, even if it reports modest overheating, in
order for its fans to run and cool the machine. However, this behavior is only
Algorithm 2: Spatially-aware rule-set for the GCM
1: if $mytemp ≤ $start_temp then start
2: if $mytemp < $sleep_temp then continue
3: if $cold_aisle > 85 and $mytemp ≥ $sleep_temp then hibernate
4: if $cold_aisle ≤ 85 and $mytemp ≥ $sleep_temp and $mystate == "on" then suspend
5: if $cold_aisle ≤ 85 and $mytemp ≥ $danger_temp then hibernate
6: if $cold_aisle ≤ 85 and $mystate != "on" and $mytemp ≤ $danger_temp + 6 and $neigh_temp ≤ $start_temp then start
applied if the average intake temperatures of the two machines directly below it
are below the starting point temperature of 99 °F. Fig. 1.4 shows a walkthrough of
this new algorithm, where Tin,j−1 and Tin,j−2 are the intake temperatures of the
machines directly below and two below, respectively, the machine being tested by
the GCM.
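A simplified Python sketch of the spatially-aware rule-set follows. It compresses the six rules into one decision function, and the helper names and the treatment of non-running machines are simplifying assumptions rather than the GCM's actual implementation.

```python
# Illustrative sketch of the spatially-aware rule-set (Algorithm 2).
START_TEMP = 99.0        # deg F: restart threshold
SLEEP_TEMP = 106.0       # deg F: suspend/hibernate threshold
DANGER_TEMP = 109.0      # deg F: upper operating temperature
COLD_AISLE_LIMIT = 85.0  # deg F: above this, fall back to the basic behavior

def spatial_rule(my_temp, my_state, cold_aisle_temp, below_temps):
    """Decide an action for one machine.

    my_temp         -- intake temperature of this machine (deg F)
    my_state        -- "on" if the machine is up and eligible for Condor jobs
    cold_aisle_temp -- current cold-aisle temperature (deg F)
    below_temps     -- intake temperatures of the two machines directly below it
    """
    if my_temp <= START_TEMP:
        return "start"
    if my_temp < SLEEP_TEMP:
        return "continue"
    # From here on the machine is at or above the sleep temperature.
    if cold_aisle_temp > COLD_AISLE_LIMIT:
        return "hibernate"                     # hot cold-aisle: same as the basic rule-set
    if my_state == "on":
        # Suspend first; shut down only past the upper operating temperature.
        return "hibernate" if my_temp >= DANGER_TEMP else "suspend"
    # Hibernating machine: wake it so its fans can cool it, provided it is only
    # modestly overheated and the two machines directly below it are cool.
    if my_temp <= DANGER_TEMP + 6 and sum(below_temps) / len(below_temps) <= START_TEMP:
        return "start"
    return "continue"
```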
Apart from system control, the GCM also provides detailed logging of the
transient conditions. Each log entry is stored separately and consists of general
container measurements and individual machine values. The logs are saved to
provide reference points after adjustments to GCM rules or the physical setup of
the prototype.
To provide ease in interpretation of the GCM logs, the AJAX-based GC
Viewer (publicly available at http://greencloud.crc.nd.edu/status) has been
created. This on-line tool provides a near realtime view of
the rack-space with color gradient representation of temperatures and informa-
tion about core utilization and machine states. It also allows users to choose any
data point in the measurement period and run a slideshow-like presentation of the
changes to the machine states and temperatures.
Using this GC Viewer, it is easy to see the impact of the spatially-aware
rule set (Fig. 1.5). During two consecutive days of similar weather, the GCM
ran both rule-sets and recorded their impact on the state of the Green Cloud
Figure 1.4. Spatially-aware logic algorithm workflow
during the evening (8 p.m., 81 °F outside air temperature). With the basic rule-
set, only 33 machines were running and 27 were hibernated. Using the spatially-
aware rule-set, the number of running machines increased to 41, which allowed 36
more processor cores to run scientific computations. Moreover, the spatially-aware
rule-set resulted in a 13.7% increase in hot-/cold-aisle temperature difference.
This amounted to an increase of 1.57 kW, to 11.29 kW, of heat recovered for
the greenhouse. The spatially-aware rule-set also allows 2–3 more machines to
run even during the hottest parts of the day. While the control algorithms are
important, it should be noted that the availability of outside air below 95 °F has
the potential to provide nearly year-round cooling for continual server operation.
(For information regarding the thermal measurements of the prototype, including
server inlet and outlet temperatures, waste heat produced and supplied, and the
thermal calculations therein, please refer to Appendix A.)
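The recovered-heat figures above follow from the standard sensible-heat relation applied to the container airflow (the measured flow rate and temperatures are detailed in Appendix A; the relation itself is stated here for clarity):

\[
\dot{Q} = \dot{m}\,c_p\,\Delta T = \rho\,\dot{V}\,c_p\,\bigl(T_{\mathrm{hot}} - T_{\mathrm{cold}}\bigr),
\]

so a larger hot-/cold-aisle temperature difference at the same airflow translates directly into more heat delivered to the greenhouse.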
(a) Spatially-aware rule-set (2010-08-01 20:00)
Running: 41, Hibernated: 19, Cold-Aisle: 82 °F, Hot-Aisle: 111 °F
(b) Basic rule-set (2010-08-02 20:00)
Running: 33, Hibernated: 27, Cold-Aisle: 84 °F, Hot-Aisle: 109 °F
Figure 1.5. Comparison of rule-set data points in GC Viewer. Note the
white and black boxes denoting hibernating and running machines,
respectively.
1.5 Conclusion
The Green Cloud prototype serves as a successful demonstration that ICT
resources can be integrated within the energy footprint of existing facilities and
can be dynamically controlled via the Green Cloud Manager to balance process
throughput, thermal energy transfer, and available cooling. Apart from not us-
ing energy intensive air conditioned cooling at a centralized data center, the GC
further improves energy efficiency by harvesting the wasted hot air vented by the
servers for the adjacent greenhouse facility. The success of this technique makes
the EOC concept especially attractive for sustainable cloud computing frame-
works.
To most effectively utilize EOC, information technology should be deployed
across a number of buildings. All of the EOC nodes will be grouped together in a
single municipality or spread across a region or the whole of the continent, depending
upon the type of compute resource and speed of connection. (The University of
Notre Dame, Purdue University-Calumet, and Purdue University-West Lafayette
have recently come together in a partnership that couples mutual interests to build
a cyber-infrastructure, known as the Northwest Indiana Computational Grid
(NWICG), that allows a computational grid connecting the compute facilities at all
three locations. NWICG allows researchers to submit jobs at any campus and use
the compute resources of any other campus. It is currently operated under the
Department of Energy’s National Nuclear Security Administration.) Computing jobs
will be migrated from facility to facility within the cloud based upon which building
needs the waste heat, which building can provide free cooling, or where the energy
is the cheapest (Fig. 1.6). Environmentally Opportunistic Computing is then
controlled to achieve a balance between the computing needs of the end-users and
the needs of the building occupants.
Ultimately, a municipality will contain an interconnected network of EOC
nodes that local universities, businesses, and government offices are able to use
for their compute jobs and of which they all claim some ownership. In order to
receive the greatest benefit, the GCM must interface with the building manage-
ment systems that monitor and control the building’s mechanical and electrical
equipment including the HVAC systems. Many of these systems are constructed
using open standards and so the GCM will be able to interact with them. Partner-
ships and synergy across these institutions will greatly enhance the environmental,
thermal, and economic benefits of Environmentally Opportunistic Computing.
Figure 1.6. Environmentally Opportunistic Computing Across a Municipality

Furthermore, the Green Cloud Manager has been proven to successfully man-
age computational load and prevent critical overheating of server hardware. As
the GCM control system improves, available computer power will increase and
refined control of temperature output will be realized. With growing enterprise
utilization of cloud computing and virtualized services, EOC becomes more vi-
able across the range of ICT services. Environmentally Opportunistic Computing
technology will continue to supply proven economic and environmental benefits
to both an organization and its community partners.
CHAPTER 2
COMPREHENSIVE REVIEW OF THE POTENTIAL APPLICATIONS
The Green Cloud Manager and Environmentally Opportunistic Computing
nodes will be scaled to create useful computing and thermal solutions for any
industrial, commercial, or academic facility. Based on the computational capabil-
ities of the organization, nodes will be developed and installed in existing buildings
to mitigate the cooling needed for the servers and lessen the central heating re-
quirements for the facility. However, the ideal approach will be for the EOC idea
to be actively integrated into the planning and construction of new structures.
2.1 Retrofits
When computing nodes utilizing EOC Engineering’s control system are com-
bined with current building stock, the location of the nodes is initially constrained
by both practical and energy performance limitations: the nodes must be con-
nected to the building’s HVAC system and duct work and they must be located
near the rooms they are helping to heat. If the building contains a large open
floor plan, placing the EOC node somewhere along the existing duct work may
not only impact the physical space, but detract from the usable space as well as
affect sight lines, alter the temperature distribution, and impact lighting. This is
not to say that retrofitting a building to include Environmentally Opportunistic
Computing concepts cannot be done, merely that it requires extra design analysis
to determine the best possible location for a node.
2.1.1 Universities and Government Data Centers
Universities and government data centers are ideal customers for this technol-
ogy. The United States government currently has 2,094 data centers (up from 432
in 1999) [61] and is currently in the process of consolidating them in an attempt
to increase efficiency and decrease government waste with the goal of reducing
the number of federal data centers by 40% by 2015. In addition, the 2009 federal
stimulus plan includes $3.4 billion for IT infrastructure [63]. Thus, the market for
technologies that would reduce the costs associated with running these facilities
is enormous.
Universities, with their multitudes of buildings and large research facilities, are
another major market for this technology. EOC nodes could easily be added to
existing university building stock and save hundreds of thousands of dollars each
year in reduced heating costs for those facilities. It is also apparent that some
universities would be unwilling to install nodes on campus buildings for a number
of reasons: the nodes would change the “look” or “feel” of the campus buildings;
heating is supplied via steam from a campus cogeneration power plant; there is
no room to change the building footprint; etc. In these cases, the university could
contract with local area businesses to supply heat for those buildings. The univer-
sity would still see reduced cooling costs for the servers, and the business would
allow storage of servers on its property in exchange for comparatively inexpensive
heat. The servers could be connected to the main campus cluster via fiber optic
cable if infrastructure permits, or satellite if it does not. (The original prototype
utilized a satellite connection before the installation of a fiber-optic cable.)
2.2 New Facilities
New facilities allow the greatest freedom for EOC nodes to actively manage
heating loads in the building and to distribute computational ability where it is
most sensible from an economic, thermal, and aesthetic standpoint. In construct-
ing a new building, the computing nodes and the GCM will be employed to work
in conjunction with other passive design technologies such as location, orienta-
tion, massing, materiality, and bioclimatic responses in order to more fully take
advantage of the potential energy savings and thermal efficiencies (Sec. 1.3.1).
All new construction — whether it be commercial, academic, or governmental —
must account for these processes while designing for optimum efficiency and price.
2.2.1 Major Industrial or Commercial Sites
If constructing a new facility, a dedicated data center can be built to support
it while at the same time maximizing the form and function of the combined data
center and building. If the site needs a lot of heat, it is also possible to build
a very large data center capable of supplying more than enough thermal energy
while still using the GCM algorithm to control hot spots within the server racks
and use outside-air-cooling. Yahoo! Inc. has recently constructed a 155,000-
square-foot data center housing 50,000 servers and using only ambient air cooling
methods [42]. Resembling a chicken coop, the Yahoo! model brings cool air in
through grates on the sides of the building and collects the waste heat in ducts
in the ceiling before venting it to the atmosphere. The EOC model would collect
this heat and duct it to an adjoining facility to serve a useful purpose.
2.3 Preheating Water
The Green Cloud Manager and Environmentally Opportunistic Computing
can also be used even if a data center uses in-rack water cooling instead of air
cooling (Sec. 1.2.5). The GCM will still monitor the status and heat generation
of the servers and will direct compute load to where it is needed as per usual, but
instead of then ducting hot air into a facility, the system will pipe hot water to
a heat exchanger and preheat water for other purposes. The water will then be
cooled further — either through the use of chilling towers or by using a nearby
lake — before it is sent back into the data center for reuse in cooling the servers.
CHAPTER 3
INTELLECTUAL PROPERTY
3.1 Patent Protection
3.1.1 Novelty
The patentability requirements for determining whether or not EOC Engineer-
ing’s Green Cloud Manager is novel fall into two basic categories: events prior to
the date of invention, and events prior to the filing of a patent application. In the
first case, the invention cannot be known or used by others in the United States.
This includes any previously patented invention, or printed publication that would
describe the invention. In the second, the invention must not be in the public use
or on sale in the United States for more than one year before the date of filing of
the patent application. For a background on the patentability of algorithms and
the novelty requirements, refer to Appendix B.1 and B.2, respectively.
3.1.1.1 Events Prior to the Date of Invention
In order to determine whether the process that constitutes Environmentally
Opportunistic Computing has been previously known or used by others in the
United States, it is necessary to scour previous patent applications and other pub-
lished materials for methods of using waste heat to actively heat other facilities
and for methods utilizing an algorithm for dispersing compute load for optimal
thermal conditions. There are many patents that relate to the cooling of data
centers, but because the central tenant of EOC is the management of computa-
tional load as it relates to heat management and cooling is limited to ambient-air
cooling, these previous patents are not relevant. Instead, the focus is on data
center schedulers and management systems. These systems generally fall under
Class 709 of the patent system and relate to “Electrical Computers and Digital
Processing Systems: Multicomputer Data Transferring.” This class provides for a
computing system or method including apparatus or steps for transferring data
or instruction information between multiple computers.
Patent 7,860,973 is for a Data Center Scheduler that describes various tech-
nologies allowing for optimal provisioning of global computing resources. These
technologies are designed to drive out inefficiencies that arise through the use of
multiple data centers and account for traffic and events that impact availability
and cost of computing resources. In this, three claims (1, 7, and 15) are impor-
tant. Claim 1 is for a data stream, output by a computing device via the internet,
comprising information about global computing resources accessible via the in-
ternet wherein the information contains valuable data for use by those making
requests for global computing resources. Claim 7 is for a method, implemented by
a computing device, that receives information about data center resources from
one or more data centers and streams that information via a network. And Claim
15 is for computer-readable media instructing a computer to receive information
about computing resources of one or more data centers and to issue requests for
consumption of some of the computing resources. This patent poses some trouble
for EOC and the Green Cloud Manager, specifically in how it relates to a system
capable of collecting information from data centers and using that information
to determine where to send future compute jobs. Ultimately, EOC might have
to license the use of this patent from Microsoft Corp. in order for the GCM to
function.
Many other patents, such as 7,805,473, 2010/007667, etc., involve methods for
managing cooling and so are not infringed upon by the EOC method.
The system EOC uses for separating hot-aisle and cold-aisle containment has been
present in the prior art for many years and is not covered by patent protection,
therefore EOC is not infringing on any current method for collecting and removing
waste heat.
It is also important to establish that the EOC method does not already exist
in the prior art. There are many current examples of facilities utilizing waste heat
from data centers. Fortunately, these examples use a method of water-to-water
heat pumps (WTWHP). (A heat pump uses a vapor compression cycle to take heat
from a low-temperature source and raise its temperature to a useful level. Air
conditioners, chillers, and refrigeration systems fit this definition, but the heat they
generate usually is discarded. By attaching a WTWHP to a chiller in a data center,
the rejected heat serves a useful purpose.) Some of these heat pumps are attached
to the very air chillers EOC will do without, and so are of no concern [88]. Other
applications of this concept use WTWHP to actively cool the server racks so as to
not require mechanical air conditioning and deliver the hot water to district heating
networks [54].
While many ideas are similar, no other company purposefully uses the wasted
thermal energy to directly heat an adjoining facility. In addition, the control
system, which manages data center utilization based upon both the need for compute
jobs and the need for heat delivery, is unprecedented. We fully expect that we will
receive patent protection for the use of the core algorithm in data processing.
3.1.1.2 Events Prior to Filing the Patent Application
In order to receive a patent under Section 102, the invention must not be in
the public use or on sale in the United States for more than twelve months before
the date of the patent application. It has long been held that an inventor loses
his right to a patent if he puts his invention into public use before filing a patent
application: “His voluntary act or acquiescence in the public sale and use is an
abandonment of his right” [8]. Nevertheless, an inventor who seeks to perfect his
discovery may conduct extensive testing without losing his right to obtain a patent
for his invention — even if such testing occurs in the public eye [4, 9]. It is in the
best interest of the public, as well as the inventor, that the invention should be
perfect and properly tested, before a patent is granted for it [4].
Thus, even though the EOC prototype is operated on the property of
the City of South Bend, and the process and method have been discussed at length
with the City, the invention remains in the experimental phase. It has not yet been
offered for commercial sale for profit (which would constitute a critical date and
start the twelve month countdown) and the invention remains in the control of its
inventors. Nothing that has been done so far has placed the potentially patentable
method in the public domain outside of the realm of experimental testing. Thus,
the invention remains patentable and no critical date has occurred to force the
filing of a patent application.
3.1.2 Nonobviousness
The nonobviousness requirement (Appendix B.3) captures the notion that an
invention must meet a sufficient inventive standard, or nontriviality. This is to
preclude differences from prior art that are “formal, and destitute of ingenuity or invention...
[and] may afford evidence of judgment and skill in the selection and adaptation
of the materials in the manufacture of the instrument for the purposes intended,
but nothing more” [6]. Nonobviousness presents “a careful balance between the
need to promote innovation and the recognition that imitation and refinement
through imitation are both necessary to invention itself and the very lifeblood of
a competitive economy” [2].
IBM, Dell, Sun Microsystems, and Rackable Systems all make portable mod-
ular data centers that might compete with the proposed EOC nodes [50, 60, 121,
122]. These modular centers all include UPSs, cooling, and fire suppression in
order to create a fully-functional data center in a smaller footprint for clients who
would like to grow their data centers as they need them or spread the center out
over multiple locations. However, these modular data centers are not designed to
offer passive cooling of the server hardware, nor are they designed to collect and
use the waste heat produced therein.
While these modular data centers are different enough from traditional data
center design to overlap with the EOC nodes, they do not have the ability to
provide all of the savings that EOC does. They are not designed to interface with a
BMS, they require active cooling measures, and they are not constructed to supply
useful thermal energy to a facility. They also do not have a central algorithm that
directs compute load to different modules for the purpose of reducing the need
for cooling. As a result, although these modules might solve some problems for
customers, there is little direct competition between these technologies and an
EOC node complete with the Green Cloud Manager.
As mentioned in Sec. 2.2.1, Yahoo! Inc. has recently constructed a very large
data center that attempts to use only ambient air cooling methods [42]. Yahoo! is
not the first to cool data centers with environmental air, nor will they be the last.
This idea has been repeatedly toyed with, but on a case-by-case basis with no firm
specializing in the design of this type of data center. And again, this design only
uses ambient air cooling without collecting the resulting waste heat for use in an
adjoining building and does not include the use of an algorithm along the lines of
the Green Cloud Manager for distributing compute loads. As a result, the EOC
Engineering method is much more advanced than this technology.
We believe that the Green Cloud Manager utilized in the Environmentally
Opportunistic Computing method is nonobvious and will be awarded a patent.
Never before has a process existed that managed computation levels based upon
maintaining a set thermal output. The prior art was directed toward mitigating
heat levels and active cooling. EOC and the GCM are moving in the opposite
direction from the current philosophy of data center managers. Therefore, EOC’s
process for directing compute load to where heat is needed based on current com-
putational levels and subsequent heat production is novel and is deserving of a
patent.
3.2 Competitive Analysis
EOC Engineering currently has three classes of competitors:
1. Building Management Systems
2. Server Administration Toolkits
3. Data Center Design Firms
Building Management Systems (BMSs) are computer-based control systems
installed in buildings that control and monitor a building’s mechanical and elec-
trical equipment such as ventilation, lighting, power, fire, and security systems.
The BMSs also monitor the thermostats of different building sectors and attempt
to meet the patrons' needs. The BMSs can be written in proprietary languages, but
are more recently being constructed using any number of open standards. While
these systems are critical to the success of EOC Engineering and must interface
with our software, they cannot easily expand to the control of computational loads
within the data center.
On the other side of the spectrum, Server Administration Toolkits, such as
xCAT, are open-source distributed computing management software systems used
for the deployment and administration of computing clusters. They can: cre-
ate and manage cluster machines in parallel; set up high-performance computing
stacks; remotely control power; monitor RAM and CPU temperatures; manage
fan speeds and voltages; and measure computing load. While these systems are
integral to EOC Engineering’s software, by themselves they cannot grow to do
what EOC does and cannot interface with a BMS.
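As an aside, the kind of hardware telemetry these toolkits expose can be read directly over IPMI, which the prototype also uses for its server temperature logging (Appendix A). The sketch below is a minimal illustration, not part of xCAT or the GCM: it assumes ipmitool is installed and the baseboard management controller is reachable, and the host address, credentials, and sensor-line format (which varies by vendor) are placeholders.

import subprocess

def read_temperatures(host, user, password):
    """Poll a server's temperature sensors over IPMI (lanplus interface).

    Returns a dict of sensor name -> reading string. Sensor names and
    output formatting vary by hardware vendor, so callers should treat
    the values as advisory and parse defensively.
    """
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "sdr", "type", "Temperature"],
        capture_output=True, text=True, check=True,
    ).stdout
    readings = {}
    for line in out.splitlines():
        # Typical line: "CPU1 Temp | 30h | ok | 3.1 | 42 degrees C"
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 5:
            readings[fields[0]] = fields[4]
    return readings

if __name__ == "__main__":
    # Hypothetical host and credentials, for illustration only.
    print(read_temperatures("10.0.0.21", "admin", "admin"))

Readings of this kind are the raw hardware inputs that the Green Cloud Manager would combine with building-side information from the BMS.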
Of all our potential competitors, firms specializing in data center design are
the most significant; yet they are also our greatest customers. These companies
already have relationships with potential clients and are experienced in the field
of managing client concerns and facility construction. While it is not attractive
to compete with these companies by doing exactly what they do, we will be
successful by partnering with these design firms to leverage our unique knowledge
of data management systems and by implementing our patented system controls.
Effectively these firms will become key customers of EOC Engineering.
Confronted with the necessity of constant reliability and performance of their
data centers, many clients are concerned by new and unique approaches to main-
taining the data and information on which their businesses function. They have a
strong preference for working with technologies and people they know. Thus, if
our potential customers are to change their buying habits, they will only go with
those they trust. It is absolutely imperative, then, for us to create good working
relationships with our clients. It is not necessarily enough to save them money;
if they do not believe EOC Engineering can deliver on its promise, they will not
work with us. The manner in which we respond to our clients and their concerns
represents part of our competitive advantage.
EOC Engineering’s main competitive edge rests in its unique method for con-
trolling computer loads and heat production to best save the customer money. No
other firm can provide the level of control that is possible with EOC Engineering
and no one else can provide the power cost savings that we can. We will supply
the software that allows the Building Management Systems and Server Adminis-
trator Toolkits to communicate with each other and increase the overall efficiency
of the entire building system.
EOC Engineering is also filing for patent protection for our process. This
patent will allow us to protect our unique algorithm and eliminate the risk of
duplication of our algorithm by data center design firms. In order to offer our
proven savings to their clients, these data center design firms must first partner
with EOC Engineering.
CHAPTER 4
BARRIERS TO SUCCESSFUL COMMERCIALIZATION
4.1 Funding
EOC Engineering will initially be funded through a Small Business Innovation
Research (SBIR) grant from the federal government.¹ SBIR is a competitive
program that encourages small businesses to explore their technological potential,
and provides incentives to profit from the commercialization of that technology.
The government does this in order to include qualified small businesses in the
nation’s R&D arena and, through them, further stimulate high-tech innovation
and encourage an entrepreneurial spirit while at the same time meeting its specific
research and development needs.
These grants come in two phases. Phase I is to determine, insofar as possible,
the scientific or technical merit of ideas submitted under this program. The awards
for Phase I are for periods of up to 6 months and in amounts up to $150,000.
The state of Indiana also provides matching funds for these grants up to 50%.
EOC Engineering has already applied for Phase I grants through the Department
of Energy and the National Science Foundation. EOC Engineering anticipates
the acceptance of at least one of these proposals, for a combined federal and state
contribution of $225,000, and possibly of both, for a total of $450,000.
¹ More information is available at http://www.sbir.gov/
Phase II expands upon the results of Phase I projects and further pursues their
development; the awards are for periods of 2 years and in amounts up to
$900,000. While this phase is principally designed for the R&D effort, the initial
award in Phase I will be enough to finish work on the Green Cloud Manager.
EOC Engineering will thus only apply for Phase II if cash reserves are in danger
of running low, and will instead focus on using the Phase I award for all of its
business needs.
There is also the option of approaching the IrishAngels network to generate
more funding. This network comprises a group of Notre Dame alumni and sup-
porters who are experienced in entrepreneurial endeavors and are interested in
supporting new venture development. The mission of the IrishAngels network is
to foster the development of new business opportunities created by those linked
to the University of Notre Dame — and EOC Engineering certainly fits into this
category.
4.2 Technology Shortfalls
As shown in Chapter 1, the technology behind EOC Engineering is sound.
The software provides the necessary controls to accurately allocate computer load
where heat generation is needed, efficiently monitor all equipment, and safely
cool the HPC/ICT hardware. The shortfall in the technology is not within EOC
Engineering, but in the Building Management Systems (BMSs) employed by many
buildings.²
The BMSs in older facilities are relatively unintelligent when it comes to mon-
itoring building occupancy and area thermal requirements. Many times, these
² Microsoft, EnerNOC, Scientific Conservation, Johnson Controls, Schneider Electric, IBM,
and Honeywell all have their own proprietary Building Management Systems [53].
systems will have only one thermocouple, and it will be located at the only ther-
mostat in the area (be that a small office, an open floor work area, or multiple
floors). Because the BMS is only sensing temperature in one area, without regard
to where people are gathered, it supplies the same thermal load to every area
even though some may be far warmer or cooler than the sensing location; it also
supplies that load to areas where none is required because no one is present.
In order for EOC nodes to be most effective, the Green Cloud Manager must be
able to interface with a well-constructed BMS. Because thermal energy dissipates
rather rapidly if forced to travel long distances, it is important to know
which areas need heat and which do not, so that in a building with more than one
EOC node, compute jobs can be sent to the node most capable of supplying heat
to that area. This further increases the efficiency of the system and allows the
GCM to safely manage the thermal load on the servers.
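To make this placement rule concrete, the sketch below shows one way such a decision could be expressed. It is a hypothetical illustration rather than the Green Cloud Manager itself: the zone names, heat-demand figures, and the 100 °F inlet limit are assumptions made for the example, with heat demand imagined to come from the BMS and inlet temperatures from server telemetry.

from dataclasses import dataclass

@dataclass
class Node:
    name: str            # EOC node identifier
    zone: str            # building zone heated by the node's exhaust
    inlet_temp_f: float  # current cold-aisle inlet temperature (deg F)

# Hypothetical inputs for illustration; a real deployment would read heat
# demand from the BMS and inlet temperatures from IPMI/xCAT telemetry.
HEAT_DEMAND_KW = {"north_wing": 8.0, "atrium": 2.5, "offices": 0.0}
MAX_INLET_F = 100.0  # assumed thermal limit; do not dispatch to saturated nodes

def place_job(nodes, heat_demand, max_inlet_f=MAX_INLET_F):
    """Send the next job to the node serving the neediest zone, skipping any
    node that is already too hot; fall back to the coolest eligible node."""
    eligible = [n for n in nodes if n.inlet_temp_f < max_inlet_f]
    if not eligible:
        return None  # every node is saturated; the job would be queued instead
    needy = [n for n in eligible if heat_demand.get(n.zone, 0.0) > 0.0]
    if needy:
        return max(needy, key=lambda n: heat_demand[n.zone])
    return min(eligible, key=lambda n: n.inlet_temp_f)

if __name__ == "__main__":
    nodes = [Node("node-a", "north_wing", 92.0),
             Node("node-b", "atrium", 78.0),
             Node("node-c", "offices", 71.0)]
    print(place_job(nodes, HEAT_DEMAND_KW).name)  # prints "node-a"

The same rule extends to any number of nodes; how well it performs depends entirely on how much zone-level information the BMS can supply, which is the shortfall discussed here.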
The EOC system will still work with most Building Management Systems, but
can be limited by their type and intelligence. The BMSs must also be built on
open standards or open-source software, or EOC Engineering will have to license the proprietary
software necessary to interface with them. At the same time, a building must have
a BMS or the EOC node will not be able to dynamically deliver thermal energy
to it. The node will still work as a data center, but the GCM will not be able to
deliver compute load as a function of thermal requirements of the building.
Before EOC Engineering can begin to offer its algorithm for sale, the software
must be built and tested and its ability to interface with at least one BMS must
be shown. This will not be extremely difficult, but it will take time. The timeline
for the creation of the software is outlined below in Chapter 5. The software will
originally interface with only one Building Management System, but more will be
added in future iterations.
4.3 Organization and Staffing
EOC Engineering will have a very flat organizational structure. For the first
two years, there will be only a handful of employees: a CEO, a Project Manager,
a Chief Programmer, two salespeople, a small IT staff, and an administrative as-
sistant. The CEO will be responsible for working with our marketing, sales, and
distribution channels, which will all be contracted outside of the company struc-
ture. The Project Manager will work directly with the customer and data center
design firms to ensure our specifications and standards are met in the construction
of new buildings or the retrofit of existing facilities. The programmer will work with
the client's IT department to determine their needs, create additional user inter-
faces for each new BMS with which the GCM will work, and oversee the outside
programmers who are contracted to create any substantial changes to the software
and algorithm. Staffing will grow starting in the third year with the addition of
more salespeople and project managers as well as consultants and programmers.
4.4 Market Acceptance
Of all the potential barriers to successful commercialization of EOC Engineer-
ing’s technology, the question of whether or not the market will accept it is the
largest unknown. While networking and computing equipment manufacturers con-
stantly develop smaller and faster devices, rarely do they improve energy efficiency
or reduce heat emission. In fact, utilizing equipment with smaller physical footprints,
such as modern blade servers, increases heat generation per square foot and, con-
sequently, increases the need for more powerful cooling systems. Customers know
this and have been trained to recognize that cooling systems are an absolute ne-
cessity when considering an expansion in data center capability in order to protect
the data upon which their businesses are built.
EOC Engineering’s clients will require constant reliability and performance
of their data centers and will have a very strong preference for working with the
technologies that have proven successful in the past. It does a company little good
to be a first mover in a new technology when the failure of that new technology
could, in point of fact, hurt their businesses. It is only with careful trepidation
that a company switches to a brand new technology, even if that technology
would save money, when not many others do. For this reason, it will be difficult
to rapidly gain a customer base. EOC Engineering will, therefore, gain customers
by focusing on smaller regions and moving market by market throughout the
country in a methodical manner. EOC Engineering will gain market share by
using pilot programs and providing a reduction in price for early adopters. The
pilot programs will offer clients the software for free when they use a Building
Management System with which the GCM has not been paired before, in exchange
for the ability to monitor their systems and the interaction between the GCM
and the BMS. For customers using a BMS with which the GCM has already been
proven effective, the price of a new software license will be reduced for the first
few years; when a client installs an EOC node during this period, they will retain
the reduced license price for the life of their system, which provides an incentive
for customers to purchase earlier. All customers will be offered a 3-month trial
period during which they will be able to test the merits of the software on their own and
decide whether or not to purchase the full software license.
4.5 Channels to Market
In order to get our product to market, EOC Engineering must find partners
with which to work and who will create buzz for the technology. To do this, it
would be beneficial to begin with a pilot program in South Bend, Indiana,
by partnering with a local data center run by Data Realty. EOC Engineering
will license its software to them for free, and in exchange monitor their systems
and continually update the software controls to further refine the system. This
arrangement will also allow access to their data center design firm, Environmental
Systems Design, Inc. (ESD), during the design and construction of the facility.
By doing this, EOC Engineering will be able to develop a working relationship
that will allow it to prove its technology to a well known and respected design
firm. In this manner, EOC Engineering will be able to work with this firm on
future design projects and gain more customers.
After expanding further with ESD, EOC Engineering will branch out to other
design firms throughout the United States market-by-market. Some of these other
design firms are HP Critical Facilities Services, PTS Data Center Solutions, and
Integrated Design Group.
CHAPTER 5
ADDITIONAL WORK REQUIRED
In order for the technology to be successful, more work is required:
1. A CEO and Chief Programmer must be hired;
2. The Green Cloud Manager must be shown to work well with the majority
of Building Management Systems;
3. A polished version of the GCM will have to be completed by a team of
professional programmers;
4. A pilot test of the software must be completed; and,
5. More testing is needed to supply hard numbers to our prospective clients.
The current GCM system will be further developed and given the ability to
interface with BMSs. Although the reuse of HPC/ICT waste heat is one of the
major selling points of this technology, the system does not yet communicate with
Building Management Systems to supply heat where it is required. To do so,
interoperability must be created between the GCM and the various BMSs. Without it,
the system will simply be a passive heat source, much like a geothermal system, and the
BMS will have to control it through the use of dampers and louvers.
In addition to interfacing with a BMS, the GCM must look and feel like a
complete system. For this reason, it is necessary that the algorithm and software
be completed by a team with considerable experience in programming and systems
design. The system must be something that can be set up and forgotten about
by the majority of users, yet be easily used by IT personnel.
Concurrently with the above efforts, testing must be done on our current and
subsequent prototypes. EOC Engineering must be able to supply a proven record
to our prospective customers that shows how it will scale the product, how much
it can save in cooling costs, how the system interfaces with a BMS, and how much
waste heat can be supplied to a facility. Without this, it will be difficult to attract
many customers. As mentioned before, some of this can be completed on our
first iterations with clients, but there will be no revenue from these pilot projects.
EOC Engineering will, however, generate goodwill and word-of-mouth from these
original clients while they allow it to improve and perfect the technology.
Table 5.1 lays out the activities, timeline, and costs of EOC Engineering for
the first 3 years and presents the major milestones of the company.
TABLE 5.1

Activities & Milestones

Activity | Start Date | End Date | Department | Budget ($)
Initial Funding | 6/2011 | 1/2012 | Finance | $30,000
Patent Filing & Prosecution | 6/2011 | 6/2013 | Management, Legal |
Interview/Hire Chief Programmer | 8/2011 | 12/2011 | Management | $15,000
First Iteration of Software | 1/2012 | 6/2012 | Programming | $90,000
Interview/Hire President & CEO | 10/2011 | 2/2012 | Management | $15,000
Advertising | 1/2012 | 12/2012 | Marketing | $102,000
Pilot Project(s) | 4/2012 | 10/2012 | Programming, Management, Sales | $60,000
Sale to Design Firm | 1/2012 | 12/2012 | Sales |
First Project Completed | 12/2012 | 1/2013 | Management |
Second Iteration of Software | 10/2012 | 3/2013 | Programming | $90,000
Sale to Second Design Firm | 1/2013 | 12/2013 | Sales |
APPENDIX A
PROTOTYPE THERMAL MEASUREMENTS
Thermal measurements were conducted on the prototype during June and July,
2010. These measurements consisted of the constant monitoring of various local
temperatures throughout the container as well as server temperatures and server
loads. In this way, local temperatures and heat recovery could be estimated and
directly correlated to the server usage and activity. Energy recovery rates, $q_{\text{waste}}$,
were estimated using

$q_{\text{waste}} = \dot{m}\left[c_p(T_{ha})\,T_{ha} - c_p(T_{ca})\,T_{ca}\right]$,   (A.1)

where $c_p(T)$ is the specific heat of the air at the local temperature, and $T_{ca}$ and $T_{ha}$
are the temperature of the cold-aisle upstream of the server racks and the hot-aisle
downstream of the server racks, respectively. The mass flow rate was determined
by applying mass conservation and calculating the total flow rate passing through
both exhaust fans,

$\dot{m} = \sum_{i=1,2} \dot{m}_{\text{fan},i} = \sum_{i=1,2} \rho(T_{\text{out},i})\,U_{\text{avg},i}\,A_{\text{duct},i}$,   (A.2)

where $i$ indicates the two exhaust fan ducts, $\rho(T_{\text{out}})$ is the local air density, $A_{\text{duct}}$ is
the cross-sectional area of the exhaust duct for each fan, and $U_{\text{avg},i}$ is the average
flow speed at the fan exit.
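A minimal numerical sketch of Eqs. A.1 and A.2 is given below. The dry-air property values are standard textbook figures, the specific heat is treated as constant for simplicity, and the temperatures and duct-exit speeds are assumed sample values rather than the measured data; the helper names are invented for the example.

import math

R_AIR = 287.05      # J/(kg*K), specific gas constant of dry air
P_ATM = 101_325.0   # Pa, standard atmospheric pressure
CP_AIR = 1006.0     # J/(kg*K); treated as constant here for simplicity

def f_to_k(temp_f):
    """Convert Fahrenheit to Kelvin."""
    return (temp_f - 32.0) * 5.0 / 9.0 + 273.15

def air_density(temp_k, pressure=P_ATM):
    """Ideal-gas density of dry air at the given temperature."""
    return pressure / (R_AIR * temp_k)

def mass_flow(duct_radius_in, u_avg_ms, t_out_f):
    """Eq. A.2 for one duct: m_dot = rho(T_out) * U_avg * A_duct."""
    area = math.pi * (duct_radius_in * 0.0254) ** 2  # inches -> metres, then m^2
    return air_density(f_to_k(t_out_f)) * u_avg_ms * area

def waste_heat(m_dot, t_hot_f, t_cold_f):
    """Eq. A.1 with cp taken as constant: q = m_dot * cp * (T_ha - T_ca)."""
    return m_dot * CP_AIR * (f_to_k(t_hot_f) - f_to_k(t_cold_f))

if __name__ == "__main__":
    # Assumed (illustrative) operating point: 95 F hot aisle, 75 F cold aisle,
    # exhaust temperatures near the hot-aisle value, ~4 m/s mean duct speed.
    m_dot = mass_flow(5.0, 4.0, 95.0) + mass_flow(4.0, 4.0, 95.0)
    q = waste_heat(m_dot, 95.0, 75.0)
    print(f"mass flow = {m_dot:.3f} kg/s, recovered heat = {q / 1000:.2f} kW")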
The exit flow speed was measured using a handheld velocimeter with integrated
thermocouple¹ with an accuracy of approximately ±3.0% + 0.3 m/s. The
velocimeter was placed at the exit of the fan duct, perpendicular to the flow, to
record speeds for 20 seconds. These values were then averaged over the measure-
ment time. To obtain the average exit flow speed, the same measurement was
conducted over a number of locations across the duct exit (Fig. A.1), and the
average flow speed was calculated:

$U_{\text{avg}} = \frac{1}{A_{\text{duct}}} \int_{A_{\text{duct}}} U(r,\theta)\, r\, dr\, d\theta$.   (A.3)

The radii of ducts 1 and 2 were $r = 5.0$ in and $r = 4.0$ in, respectively. The spacing
between the measurements was $\delta\theta = \pi/4$, and $\delta r = 2.5$ in for duct 1 and $\delta r = 2.0$ in
for duct 2. Symmetry was assumed and the flow was only measured in two quad-
rants. The integration was conducted numerically using the trapezoidal rule and
the uncertainty in the mass flow rate was estimated to be ±8.1%. Temperatures
were recorded with four temperature/humidity sensors². One sensor was placed
just upstream of each exhaust fan, $T_{\text{out},i}$, one sensor was placed in the cold-aisle,
$T_{ca}$, and one was placed in the hot-aisle, $T_{ha}$. Temperatures were recorded in real
time at a rate of four readings per hour.³
The temperature at the inlet of the louver, $T_{\text{in}}$, was taken from weather
measurements from the Indiana State Climate Office (ISCO 2010). The heat
recovery as a function of time, $q_{\text{rec}}(t)$, was estimated using Eq. A.1 with the sensor
readings and the mass flow rate estimated using Eq. A.2. The temperatures were
validated using a handheld thermocouple and an infrared
¹ Extech Model 407123
² APC Model AP9512THBLK
³ At present, the prototype is not configured to allow temperature measurements in the ducts
downstream of the fans, which would be the best way to accurately measure waste heat recovery.
Figure A.1. Schematic of measurements to determine the average outlet
velocity
Courtesy of Dr. David Go
camera (Fig. A.2). Hardware temperatures were recorded from the hardware’s
internal temperature sensors using the intelligent platform management interface
(IPMI).
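The area-averaging of Eq. A.3 is straightforward to reproduce numerically. In the sketch below, only the duct-1 geometry (5.0 in radius, 2.5 in radial spacing, π/4 angular spacing, two measured quadrants with symmetry assumed) comes from the description above; the point velocities are made-up sample values, since the raw readings are not tabulated here.

import numpy as np

def trapz(y, x):
    """1-D composite trapezoidal rule for integrating y(x) over x."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))

# Duct 1 geometry from Appendix A: radius 5.0 in, radial spacing 2.5 in,
# angular spacing pi/4, two quadrants measured with symmetry assumed.
r = np.array([0.0, 2.5, 5.0]) * 0.0254   # radii in metres
theta = np.linspace(0.0, np.pi, 5)       # 0..pi in pi/4 steps (half the duct)

# Hypothetical point velocities U(r_i, theta_j) in m/s (rows: r, cols: theta).
U = np.array([[5.0, 5.1, 5.2, 5.1, 5.0],
              [4.2, 4.3, 4.4, 4.3, 4.2],
              [2.0, 2.1, 2.2, 2.1, 2.0]])

# Inner integral of U*r over r at each angle, then outer integral over theta
# (Eq. A.3); the half-duct result is doubled because of the assumed symmetry.
radial = [trapz(U[:, j] * r, r) for j in range(len(theta))]
half_duct = trapz(radial, theta)
area = np.pi * r[-1] ** 2
u_avg = 2.0 * half_duct / area
print(f"average exit flow speed ~ {u_avg:.2f} m/s")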
Fig. A.3(a) gives a representative plot of the temperature measurements from
the four sensors, the estimated fan downstream temperatures, the inlet tempera-
ture, and the temperatures from one of the servers over a 48-hour period. The plot
illustrates a number of significant points: Firstly, the temperature of the hot-aisle
is significantly warmer than that of the cold-aisle, exemplifying the amount of heat
wasted in conventional data centers; secondly, the single hardware temperature,
$T_{\text{HPC}}$, varies due to dynamic computational loads, yet the overall temperature is
fairly constant because the number of active servers is fairly constant; lastly, the
average server inlet temperatures range from approximately 70 to 100 °F (21 to
38 °C), which exceed current recommended HPC hardware operating ranges. EOC
is confident that ICT hardware can be operated beyond current limits, and the
data demonstrates server operation at temperatures greatly exceeding standards.
(a) Server inlet temperature  (b) Server outlet temperature
Figure A.2. Illustrative infrared thermal maps of a server
Courtesy of Dr. Paul Brenner

(a) Representative measured temperatures  (b) Available waste heat
Figure A.3. Container measurements over a 48-hour period in July 2010
Courtesy of Dr. Paul Brenner
Fig. A.3(b) shows the amount of waste heat available for recovery for the
same 48-hour period. On average, nearly 33.6 × 10³ Btu/hr (9.39 kW) was ex-
tracted from the data servers during this period, for a total energy recovery of
1.61 × 10⁶ Btu (450.7 kW·h). Though this value is limited by approximations
for the heat loss and bulk temperature measurements, it is consistent with the
energy consumed by the container according to the energy bill. At an estimated
cost of $0.10/kW·h, the measured heat recovery corresponds to $45.07 in energy
savings for this time period, and extrapolates to approximately $676 per month.
This equates to ∼4.5% of the average monthly expenditures for the BGG during a
winter month.⁴
These preliminary studies limited the container to a maximum of
60 servers operating at any time because the warm summer air increased $T_{ca}$ and
limited the ability of the servers to remain below critical temperature levels. With
an improved configuration and in cooler months, the prototype container should be
able to operate at a nearly constant heat recovery level of 102,364 Btu/hr,
corresponding to a ∼15% reduction in the BGG's monthly energy consumption.
The installation of additional containers will also provide quantitative increases
in total energy cost savings.
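As a quick consistency check on these figures (simple arithmetic using the numbers above, not additional data), the quoted energy, dollar, and percentage values follow directly from the 9.39 kW average and the $0.10/kW·h rate:

$9.39\ \mathrm{kW} \times 48\ \mathrm{h} \approx 450.7\ \mathrm{kW\cdot h}$, and $450.7\ \mathrm{kW\cdot h} \times \$0.10/(\mathrm{kW\cdot h}) \approx \$45.07$;

$9.39\ \mathrm{kW} \times 720\ \mathrm{h/month} \times \$0.10/(\mathrm{kW\cdot h}) \approx \$676$ per month, and $\$676/\$15{,}000 \approx 4.5\%$ of the BGG's average winter heating expenditure.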
One of the primary challenges of successfully integrating HPC/ICT into built
structures is meeting customer constraints such as preserving form and function,
heating needs, and reliability (Sec. 1.3.1). Successful energy recovery from an
EOC container requires that heat be delivered to its partner facility when it is
required and in a predictable manner. With EOC, the availability of heat depends
on the computing users’ need for computational power at any given moment. This
issue is illustrated by the output of a pre-set pilot test, Fig. A.4, during which
⁴ The BGG reports an average expenditure of ∼$15,000 during the winter months of
December through February, 2003−2006.
the computational load on the servers was intentionally varied. The servers were
initially idle for 27 hours, and then alternated between normal loading capacity
and idle for 12-hour periods each. As this test demonstrates, the temperature
difference not only drops dramatically when the HPC hardware is idle, but
there is also a transient recovery period when the hardware is active and the
temperature rises more slowly. The thermal time constant of the system is
related not only to the heat capacity of the air, but to that of the entire set of server compo-
nents and infrastructure (racks, walls, etc.), which serve as heat sinks whenever
they are at a lower temperature. For this prototype, the time constant is es-
timated to be 53 minutes.⁵
Similarly, when the servers become idle, the EOC
container reaches its ambient condition in approximately 44 minutes. In an ap-
plication where EOC containers are distributed across multiple facilities, control
algorithms will be needed to balance the demands of both the computer and build-
ing users against performance expectations for each, and these time constants will
be integral to that.
⁵ Here, the time constant is the time required to reach 95% of the maximum temperature.
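For planning those control algorithms, the transient can be approximated by a first-order response; this model choice and the resulting numbers are illustrative assumptions rather than results reported from the prototype:

$\Delta T(t) \approx \Delta T_{\max}\left(1 - e^{-t/\tau_0}\right)$, with a 95% rise time of $t_{95} = \tau_0 \ln 20 \approx 3\tau_0$,

so the 53-minute rise corresponds to an underlying exponential constant of roughly $\tau_0 \approx 53/\ln 20 \approx 18$ minutes, which is the lead time the GCM would need when scheduling load ahead of a step change in heat demand.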
Figure A.4. Temperature difference between the hot- and cold-aisles
during the pilot test
Courtesy of Dr. Paul Brenner
APPENDIX B
PATENTABILITY PRIMER
B.1 Algorithm
The dividing line between patentable and unpatentable subject matter under Title
35, Section 101, of the United States Code is not a clear one. Section 101 states:
“Whoever invents or discovers any new and useful process, machine, manufacture,
or composition of matter, or any new and useful improvement thereof, may obtain
a patent therefor, subject to the conditions and requirements of this title” [116].
The Supreme Court held, in Mackay Radio & Telegraph Co. v. Radio Corp. of
America, 306 U.S. 86 (1939), that “[w]hile a scientific truth, or the mathematical
expression of it, is not a patentable invention, a novel and useful structure created
with the aid of knowledge of scientific truth may be” [7]. Yet, in Gottschalk
v. Benson, 409 U.S. 63 (1972), the Supreme Court defined an “algorithm” as
a “procedure for solving a given type of mathematical problem,” and concluded
that such an algorithm, or mathematical formula, is like a law of nature, which
cannot be the subject of a patent [5].
In contrast, in Diamond v. Diehr, 450 U.S. 175 (1981) [3], the Court considered
a patent for a device that aided in curing rubber. At its essence, the invention
was an algorithm because it depended on actions taken at precise times during the
curing process and used a digital computer to execute and record these actions.
Unlike the algorithm in Gottschalk v. Benson, the Supreme Court held that
such subject matter was patentable because the patent was applied not to the
algorithm, but to the complete process for curing rubber, of which the algorithm
was only a part. Thus, it is possible to seek patent protection for a process that
utilizes an algorithm to produce a “useful, concrete, and tangible result” [10].
This thesis will thus define an “algorithm” as:
1. A fixed step-by-step procedure for accomplishing a given result; usually a
simplified procedure for solving a complex problem, also a full statement of
a finite number of steps.
2. A defined process or set of rules that leads to and assures the development
of a desired output from a given input. A sequence of formulas and/or
algebraic/logical steps to calculate or determine a given task; processing
rules.
Under this view, the algorithm is a part of a much larger process that accomplishes
a “useful, concrete, and tangible result” and is worthy of patent protection when
it is part of the larger process for distributing waste heat in an environment.
B.2 Novelty
Title 35, Section 102, of the United States Code states that a person shall be
entitled to a patent unless —
(a) the invention was known or used by others in this country, or patented or
described in a printed publication in this or a foreign country, before the
invention thereof by the applicant for patent, or
(b) the invention was patented or described in a printed publication in this or a
foreign country or in public use or on sale in this country, more than one
year prior to the date of the application for patent in the United States, or
(c) he has abandoned the invention, or
(d) the invention was first patented or caused to be patented, or was the subject
of an inventor's certificate, by the applicant or his legal representatives or
assigns in a foreign country prior to the date of the application for patent in
this country on an application for patent or inventor's certificate filed more
than twelve months before the filing of the application in the United States,
or
(e) the invention was described in
(1) an application for patent, published under section 122 (b), by another
filed in the United States before the invention by the applicant for
patent, or
(2) a patent granted on an application for patent by another filed in the
United States before the invention by the applicant for patent, except
that an international application filed under the treaty defined in sec-
tion 351 (a) shall have the effects for the purposes of this subsection of
an application filed in the United States only if the international appli-
cation designated the United States and was published under Article
21(2) of such treaty in the English language, or
(f) he did not himself invent the subject matter sought to be patented. [117]
B.3 Nonobviousness
This is perhaps the most difficult factual patent issue. In addition to meeting
the novelty requirements outlined above, Section 103 states
(a) A patent may not be obtained though the invention is not identically disclosed
or described as set forth in section 102 of this title, if the differences between
the subject matter sought to be patented and the prior art are such that
the subject matter as a whole would have been obvious at the time the
invention was made to a person having ordinary skill in the art to which
said subject matter pertains. Patentability shall not be negatived by the
manner in which the invention was made. [118]
APPENDIX C
BUSINESS PLAN
C.1 Executive Summary
Waste heat created by high performance computing and information commu-
nications technology is a critical resource management issue. In the United States,
billions of dollars are spent annually to power and cool these data systems. The
August 2007 U.S. Environmental Protection Agency Report to Congress on Server
and Data Center Efficiency estimates that the U.S. spent $4.5 billion on electrical
power to operate high performance and information technology servers in 2006,
and the same report forecasts that our national energy expenditure for these ma-
chines will be well over $7 billion by 2011. The cooling systems for the typical
data center use close to 40% of the total power draw of the data center, totaling
over $2.5 billion nationally.
To address this market need, EOC Engineering will produce and sell a unique
patented algorithm to data center design firms that allows module computing
nodes to supply useable heat when and where it is needed and manage the data
load when it is not. The EOC Engineering nodes will function just like a normal
data center, complete with UPS, power systems, and protection from fire, smoke,
humidity, condensation, and temperature changes. The nodes have the option
of also installing active cooling measures such as computer room air chillers and
handlers, but the algorithm renders those measures unnecessary and the customer
can save on capital costs by not installing them. The unique EOC Engineering
algorithm is known as the Green Cloud Manager (GCM) and is designed to man-
age the IT load in each of the EOC Engineering nodes within a company or
municipality to supply heat where the customer needs it, and concurrently miti-
gate the cooling costs for the server hardware. This is accomplished by directing
compute requests to the most beneficial area depending on the environment and
the customer's needs. If the servers are running hot, and the customer has little
need for heat at the moment, the GCM will send the compute load to the coldest
servers and cool the hot ones via the built-in cooling mechanisms. These cooling
mechanisms will direct the cold-air return of the building's HVAC system to cool
the servers and open louvers for environmental air cooling. The IT hardware will
be cooled to the extent possible without using power-hungry computer room air
conditioners, to save the customer money and reduce greenhouse gas emissions.
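The dispatch-and-cooling rule just described can be summarized in a small decision sketch. This is a hypothetical illustration only; the temperature threshold, server names, and action strings are placeholders and do not represent the GCM's actual implementation.

def dispatch(servers, heat_wanted, hot_threshold_f=95.0):
    """Choose a target server and a passive-cooling action for new compute load.

    servers: dict of server name -> cold-aisle inlet temperature (deg F).
    heat_wanted: True if the building currently needs the waste heat.
    The threshold and action labels are placeholders for illustration.
    """
    hottest = max(servers, key=servers.get)
    coolest = min(servers, key=servers.get)
    if heat_wanted:
        # Keep the heat-producing servers loaded so the exhaust stays useful.
        return hottest, "route warm exhaust to the occupied zones"
    if servers[hottest] > hot_threshold_f:
        # No demand for heat: shed load to the coolest hardware and cool the
        # hot racks passively (HVAC cold-air return, outside-air louvers).
        return coolest, "open louvers and use the HVAC cold-air return"
    return coolest, "no cooling action needed"

print(dispatch({"rack-1": 97.0, "rack-2": 81.0}, heat_wanted=False))
# -> ('rack-2', 'open louvers and use the HVAC cold-air return')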
The EOC nodes will reduce the power consumption of the customer's compute
hardware by almost 40% and save the customer thousands of dollars each year
in heating costs. Our patented method adjusts the server load to maximize both the
utilization of the compute power and the utilization of the heat output of
those servers, and does it all as efficiently as possible. By removing all active cooling
methods in a typical 5,000-square-foot data center, we will save a customer more
than $374,000 each year. Because these nodes will reduce the carbon footprint
of the customer, in addition to actively saving the customer money in reduced
electric bills, the customer may be eligible for government grants for increasing
the efficiency of their data center. Nothing else is like it.
C.1.1 Objectives
1. Ten projects in the first year of sales and 200 new projects in the fourth year
of sales.
2. Market share of 10% by year nine.
3. Grow gross annual revenue from new facilities to over $40 million by the end
of the decade.
4. Have yearly licensing fees also over $40 million by the end of the decade.
5. Positive net income after year four.
C.1.2 Mission
EOC Engineering offers data centers a reliable, highly-efficient alternative to
their current methods of cooling high performance computing and information
communications technology systems. Our design services and algorithm allow our
clients to immediately begin saving money on the operation of their data centers
by restructuring the hardware support systems and implementing sophisticated,
patented control algorithms. Clients know that working with EOC Engineering
is a safe, reliable, and inexpensive alternative to traditional means of data center
management. Our initial focus is on development in the United States with em-
phasis on small- to mid-sized data centers as well as government and university
research facilities.
C.1.3 Keys to Success
1. Excellence in fulfilling our promise of large energy savings.
2. Developing visibility and market channels to generate new business leads.
3. Keeping overhead costs low.
C.2 Company Summary
EOC Engineering is a design and engineering firm based at the Innovation
Park, Notre Dame that develops and implements novel opportunities for the re-
covery of waste heat from servers and data centers. As EOC Engineering grows,
it will expand to larger data centers and offer services abroad.
C.2.1 Company Ownership
EOC Engineering is a limited liability company owned by the three principal
partners. These partners will be the CEO, the chief programmer, and the head
project manager. Equity in the company will be shared equally among the three
and will be diluted evenly when outside capital investment is required.
C.2.2 Start-up Summary
Start-up funding for EOC Engineering will cover the first three years before net
income turns positive. The required funding is $3,100,000 and includes $830,000
in government grants; thus, $2,300,000 is needed in other investment. This invest-
ment will allow the company to finish coding, validate the algorithm, and provide
for the first few years of salaries as employment grows ahead of sales.
C.3 Products and Services
EOC Engineering provides engineering consulting services and patented algo-
rithms to provide advanced energy management to new data centers. We will
provide energy management solutions for our clients during the development of a
new facility to fully integrate the data center operations and thermal controls. We
will license our unique control model as a component of our services engagement
and will offer design and consulting services to our clients, through our partner-
ships with data center design firms, while they are constructing their new data
centers.
The control algorithm will allow clients to reduce or completely eliminate their
current active cooling measures in their data centers by monitoring the compute
requests and the server load to maximize utilization of the servers and maximize
their useable heat output as efficiently as possible. The price for the Green Cloud
Manager is low enough that a client can install it alongside traditional cooling
methods and still receive a very large payback on the investment. The GCM will
manage the compute requests of a data center in order to control the heat output
of the servers and reduce the need for active cooling in the entire data center.
C.4 Market Analysis Summary
The demand for data center space continues to increase at a much faster pace
than the supply. The October 2009 Data Center Dynamics conference estimated
that the current demand is three times greater than the supply, and is coming from
companies of all sizes. The largest forecasted demand is for smaller data centers to
accommodate dispersed or cloud computing and allow for organizations to process
data in different locations, while sharing assets, minimizing risks, and maximizing
capital dollars spent. Figure C.1 shows the forecasted data center market growth
in square-footage for the years 2009 to 2013. This has been expanded through the
next decade in Table C.1 based upon a conservative annual growth rate of 12.5%.
Table C.1 also shows the amount of new space built each year and the number of
new data centers assuming an average 5,000-square-foot facility.
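The extrapolation summarized in Table C.1 below follows directly from these assumptions. As a transparency aid, the short sketch here regenerates the 2014 to 2020 columns from the 2013 forecast value, a constant 12.5% growth rate, and an average 5,000-square-foot facility; it reproduces the table to within rounding of the final displayed digit.

def extend_forecast(base_size_msqft, base_year, years, growth=0.125, facility_sqft=5_000):
    """Extend the total data-center market size at a constant annual growth rate.

    base_size_msqft: market size (millions of sq ft) in base_year.
    Returns (year, total size, new space built, new 5,000 sq ft projects) per year.
    """
    rows, size = [], base_size_msqft
    for year in range(base_year + 1, base_year + 1 + years):
        new_size = size * (1.0 + growth)
        new_space = new_size - size                       # million sq ft built that year
        projects = round(new_space * 1_000_000 / facility_sqft)
        rows.append((year, new_size, new_space, projects))
        size = new_size
    return rows

# Starting from the 2013 forecast value of 177 million sq ft (Table C.1).
for year, total, new, projects in extend_forecast(177.0, 2013, years=7):
    print(f"{year}: {total:.1f}M sq ft total, {new:.1f}M sq ft new, ~{projects} new projects")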
Figure C.1. Total U.S. data center market size forecast, 2009 to 2013
Frost & Sullivan
Over the last decade, cooling demand for these IT environments has shown a
dramatic upsurge due to increased server virtualization and a need for incremental
data storage, which in turn has led to high heat densities. Cooling solutions for
this market have traditionally incorporated perimeter-based cooling provided by
Computer Room Air Conditioning (CRAC) units and, to a certain extent, are
supplemented by the HVAC system of the building itself. To contain growing heat
issues, suppliers have introduced high-density cooling modules that are capable
TABLE C.1: U.S. data center market forecast, 2011 to 2020

Year | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020
Total Market Size (Mill. Sq. ft) | 137.4 | 155.3 | 177 | 199.1 | 224.0 | 252.0 | 283.5 | 319.0 | 358.8 | 403.7
Growth Rate | 12.6% | 13.1% | 13.9% | 12.5% | 12.5% | 12.5% | 12.5% | 12.5% | 12.5% | 12.5%
Total New Built Space (Mill Sq. ft) | 15.4 | 17.9 | 21.7 | 22.1 | 24.9 | 28.0 | 31.5 | 35.4 | 39.9 | 44.9
New 5,000 Sq. ft Projects | 3080 | 3580 | 4340 | 4425 | 4978 | 5600 | 6300 | 7088 | 7974 | 8971
of addressing heat loads at the point of origin. Emerging trends in power and
cooling needs, as well as the need to enhance energy savings in this energy-intensive
industry, have resulted in considerable initiatives at the supplier level to introduce a
host of energy-saving solutions into the market. Currently, active cooling methods
account for an astounding 38% of data center energy expenditures (Figure C.2).
Figure C.2. Analysis of a typical 5,000-square-foot data center power
draw
Energy Logic
*This represents the average power draw (kW). Daily energy consumption (kW·h) can be
captured by multiplying the power draw by 24.
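A rough back-of-envelope estimate ties this 38% share to the savings figure quoted in the executive summary; the $0.10/kW·h rate is carried over from Appendix A, and the result is indicative only:

$\dfrac{\$374{,}000/\mathrm{yr}}{\$0.10/(\mathrm{kW\cdot h}) \times 8{,}760\ \mathrm{h/yr}} \approx 427\ \mathrm{kW}$

of average cooling power, which at a 38% cooling share implies a total facility draw on the order of 1.1 MW for the typical 5,000-square-foot center of Figure C.2.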
C.4.1 Market Segmentation
According to Tier I Research, data centers are currently utilized at an aver-
age rate of 55.1%. In the past year, growth in demand has outpaced supply by
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis
thesis

More Related Content

What's hot

Shale Gas & Hydraulic Fracturing Risks & Opportunities
Shale Gas & Hydraulic Fracturing Risks & OpportunitiesShale Gas & Hydraulic Fracturing Risks & Opportunities
Shale Gas & Hydraulic Fracturing Risks & Opportunities
Theodor COJOIANU
 
F1i s 2012wf-competition-regulations-rev1
F1i s 2012wf-competition-regulations-rev1F1i s 2012wf-competition-regulations-rev1
F1i s 2012wf-competition-regulations-rev1
PedroRomanoCE
 
Helicopter Safety Study 3 (HSS-3)
Helicopter Safety Study 3 (HSS-3)Helicopter Safety Study 3 (HSS-3)
Helicopter Safety Study 3 (HSS-3)
E.ON Exploration & Production
 
Engineering symbology-prints-and-drawings-1
Engineering symbology-prints-and-drawings-1Engineering symbology-prints-and-drawings-1
Engineering symbology-prints-and-drawings-1
Souvik Dutta
 
Productivity practioner
Productivity practionerProductivity practioner
Productivity practioner
Amit Kumar Senapati, PMP®
 
MEng Report Merged - FINAL
MEng Report Merged - FINALMEng Report Merged - FINAL
MEng Report Merged - FINALAmit Ramji ✈
 
CFD-Assignment_Ramji_Amit_10241445
CFD-Assignment_Ramji_Amit_10241445CFD-Assignment_Ramji_Amit_10241445
CFD-Assignment_Ramji_Amit_10241445Amit Ramji ✈
 
IMechE Report Final_Fixed
IMechE Report Final_FixedIMechE Report Final_Fixed
IMechE Report Final_FixedAmit Ramji ✈
 
Global-Photovoltaic-Power-Potential-by-Country.pdf
Global-Photovoltaic-Power-Potential-by-Country.pdfGlobal-Photovoltaic-Power-Potential-by-Country.pdf
Global-Photovoltaic-Power-Potential-by-Country.pdf
SimonBAmadisT
 
Agile project management
Agile project managementAgile project management
Agile project management
Amit Kumar Senapati, PMP®
 
Solar Energy - A Complete Guide
Solar Energy - A Complete GuideSolar Energy - A Complete Guide
Solar Energy - A Complete Guide
Naman Pratap Singh
 
Ifc+solar+report web+ 08+05
Ifc+solar+report web+ 08+05Ifc+solar+report web+ 08+05
Ifc+solar+report web+ 08+05
Mohammed Selim
 
Supplier-PPAP-Manual.pdf
Supplier-PPAP-Manual.pdfSupplier-PPAP-Manual.pdf
Supplier-PPAP-Manual.pdf
PhanHngBin
 
Pressure Vessel Selection Sizing and Troubleshooting
Pressure Vessel Selection Sizing and Troubleshooting Pressure Vessel Selection Sizing and Troubleshooting
Pressure Vessel Selection Sizing and Troubleshooting
Karl Kolmetz
 
Sw flowsimulation 2009 tutorial
Sw flowsimulation 2009 tutorialSw flowsimulation 2009 tutorial
Sw flowsimulation 2009 tutorial
Rahman Hakim
 
Water Treatment Unit Selection, Sizing and Troubleshooting
Water Treatment Unit Selection, Sizing and Troubleshooting Water Treatment Unit Selection, Sizing and Troubleshooting
Water Treatment Unit Selection, Sizing and Troubleshooting
Karl Kolmetz
 
2008 Testing and Inspection Programs Final
2008 Testing and Inspection Programs Final2008 Testing and Inspection Programs Final
2008 Testing and Inspection Programs FinalEdward D. Naylor Jr.
 
SolidWorks
SolidWorksSolidWorks
SolidWorks
keshow
 
(Deprecated) Slicing the Gordian Knot of SOA Governance
(Deprecated) Slicing the Gordian Knot of SOA Governance(Deprecated) Slicing the Gordian Knot of SOA Governance
(Deprecated) Slicing the Gordian Knot of SOA Governance
Ganesh Prasad
 

What's hot (19)

Shale Gas & Hydraulic Fracturing Risks & Opportunities
Shale Gas & Hydraulic Fracturing Risks & OpportunitiesShale Gas & Hydraulic Fracturing Risks & Opportunities
Shale Gas & Hydraulic Fracturing Risks & Opportunities
 
F1i s 2012wf-competition-regulations-rev1
F1i s 2012wf-competition-regulations-rev1F1i s 2012wf-competition-regulations-rev1
F1i s 2012wf-competition-regulations-rev1
 
Helicopter Safety Study 3 (HSS-3)
Helicopter Safety Study 3 (HSS-3)Helicopter Safety Study 3 (HSS-3)
Helicopter Safety Study 3 (HSS-3)
 
Engineering symbology-prints-and-drawings-1
Engineering symbology-prints-and-drawings-1Engineering symbology-prints-and-drawings-1
Engineering symbology-prints-and-drawings-1
 
Productivity practioner
Productivity practionerProductivity practioner
Productivity practioner
 
MEng Report Merged - FINAL
MEng Report Merged - FINALMEng Report Merged - FINAL
MEng Report Merged - FINAL
 
CFD-Assignment_Ramji_Amit_10241445
CFD-Assignment_Ramji_Amit_10241445CFD-Assignment_Ramji_Amit_10241445
CFD-Assignment_Ramji_Amit_10241445
 
IMechE Report Final_Fixed
IMechE Report Final_FixedIMechE Report Final_Fixed
IMechE Report Final_Fixed
 
Global-Photovoltaic-Power-Potential-by-Country.pdf
Global-Photovoltaic-Power-Potential-by-Country.pdfGlobal-Photovoltaic-Power-Potential-by-Country.pdf
Global-Photovoltaic-Power-Potential-by-Country.pdf
 
Agile project management
Agile project managementAgile project management
Agile project management
 
Solar Energy - A Complete Guide
Solar Energy - A Complete GuideSolar Energy - A Complete Guide
Solar Energy - A Complete Guide
 
Ifc+solar+report web+ 08+05
Ifc+solar+report web+ 08+05Ifc+solar+report web+ 08+05
Ifc+solar+report web+ 08+05
 
Supplier-PPAP-Manual.pdf
Supplier-PPAP-Manual.pdfSupplier-PPAP-Manual.pdf
Supplier-PPAP-Manual.pdf
 
Pressure Vessel Selection Sizing and Troubleshooting
Pressure Vessel Selection Sizing and Troubleshooting Pressure Vessel Selection Sizing and Troubleshooting
Pressure Vessel Selection Sizing and Troubleshooting
 
Sw flowsimulation 2009 tutorial
Sw flowsimulation 2009 tutorialSw flowsimulation 2009 tutorial
Sw flowsimulation 2009 tutorial
 
Water Treatment Unit Selection, Sizing and Troubleshooting
Water Treatment Unit Selection, Sizing and Troubleshooting Water Treatment Unit Selection, Sizing and Troubleshooting
Water Treatment Unit Selection, Sizing and Troubleshooting
 
2008 Testing and Inspection Programs Final
2008 Testing and Inspection Programs Final2008 Testing and Inspection Programs Final
2008 Testing and Inspection Programs Final
 
SolidWorks
SolidWorksSolidWorks
SolidWorks
 
(Deprecated) Slicing the Gordian Knot of SOA Governance
(Deprecated) Slicing the Gordian Knot of SOA Governance(Deprecated) Slicing the Gordian Knot of SOA Governance
(Deprecated) Slicing the Gordian Knot of SOA Governance
 

Similar to thesis

Aidan_O_Mahony_Project_Report
Aidan_O_Mahony_Project_ReportAidan_O_Mahony_Project_Report
Aidan_O_Mahony_Project_ReportAidan O Mahony
 
Senior Project: Methanol Injection Progressive Controller
Senior Project: Methanol Injection Progressive Controller Senior Project: Methanol Injection Progressive Controller
Senior Project: Methanol Injection Progressive Controller
QuyenVu47
 
Operations manual for_owners_and_managers_multi-unit_residential_buildings
Operations manual for_owners_and_managers_multi-unit_residential_buildingsOperations manual for_owners_and_managers_multi-unit_residential_buildings
Operations manual for_owners_and_managers_multi-unit_residential_buildingsSherry Schluessel
 
Supercoducting Cables in Grid
Supercoducting Cables in GridSupercoducting Cables in Grid
Supercoducting Cables in Grid
prajesh88
 
nasa-safer-using-b-method
nasa-safer-using-b-methodnasa-safer-using-b-method
nasa-safer-using-b-methodSylvain Verly
 
Analytical-Chemistry
Analytical-ChemistryAnalytical-Chemistry
Analytical-Chemistry
Kristen Carter
 
Software Engineering
Software EngineeringSoftware Engineering
Software Engineering
Software Guru
 
PLC & SCADA
PLC & SCADA PLC & SCADA
PLC & SCADA
Ritesh Kumawat
 
Graduation Report
Graduation ReportGraduation Report
Graduation Report
zied khayechi
 
Internship report_Georgios Katsouris
Internship report_Georgios KatsourisInternship report_Georgios Katsouris
Internship report_Georgios KatsourisGeorgios Katsouris
 
Design of a bionic hand using non invasive interface
Design of a bionic hand using non invasive interfaceDesign of a bionic hand using non invasive interface
Design of a bionic hand using non invasive interface
mangal das
 
Bidirectional Visitor Counter for efficient electricity usage.
Bidirectional Visitor Counter for efficient electricity usage.Bidirectional Visitor Counter for efficient electricity usage.
Bidirectional Visitor Counter for efficient electricity usage.
NandaVardhanThupalli
 
Smart Grid.pdf
Smart Grid.pdfSmart Grid.pdf
Smart Grid.pdf
ssuser3e25001
 

Similar to thesis (20)

CDP FINAL REPORT
CDP FINAL REPORTCDP FINAL REPORT
CDP FINAL REPORT
 
Aidan_O_Mahony_Project_Report
Aidan_O_Mahony_Project_ReportAidan_O_Mahony_Project_Report
Aidan_O_Mahony_Project_Report
 
tese
tesetese
tese
 
thesis
thesisthesis
thesis
 
Senior Project: Methanol Injection Progressive Controller
Senior Project: Methanol Injection Progressive Controller Senior Project: Methanol Injection Progressive Controller
Senior Project: Methanol Injection Progressive Controller
 
Operations manual for_owners_and_managers_multi-unit_residential_buildings
Operations manual for_owners_and_managers_multi-unit_residential_buildingsOperations manual for_owners_and_managers_multi-unit_residential_buildings
Operations manual for_owners_and_managers_multi-unit_residential_buildings
 
Supercoducting Cables in Grid
Supercoducting Cables in GridSupercoducting Cables in Grid
Supercoducting Cables in Grid
 
nasa-safer-using-b-method
nasa-safer-using-b-methodnasa-safer-using-b-method
nasa-safer-using-b-method
 
Master_Thesis
Master_ThesisMaster_Thesis
Master_Thesis
 
Analytical-Chemistry
Analytical-ChemistryAnalytical-Chemistry
Analytical-Chemistry
 
Software Engineering
Software EngineeringSoftware Engineering
Software Engineering
 
PLC & SCADA
PLC & SCADA PLC & SCADA
PLC & SCADA
 
Graduation Report
Graduation ReportGraduation Report
Graduation Report
 
Tilak's Report
Tilak's ReportTilak's Report
Tilak's Report
 
Internship report_Georgios Katsouris
Internship report_Georgios KatsourisInternship report_Georgios Katsouris
Internship report_Georgios Katsouris
 
Design of a bionic hand using non invasive interface
Design of a bionic hand using non invasive interfaceDesign of a bionic hand using non invasive interface
Design of a bionic hand using non invasive interface
 
Bidirectional Visitor Counter for efficient electricity usage.
Bidirectional Visitor Counter for efficient electricity usage.Bidirectional Visitor Counter for efficient electricity usage.
Bidirectional Visitor Counter for efficient electricity usage.
 
Notes econometricswithr
Notes econometricswithrNotes econometricswithr
Notes econometricswithr
 
Smart Grid.pdf
Smart Grid.pdfSmart Grid.pdf
Smart Grid.pdf
 
Thesis
ThesisThesis
Thesis
 

thesis

  • 1. ENVIRONMENTALLY OPPORTUNISTIC COMPUTING EOC Engineering A Thesis Submitted to the Graduate School of the University of Notre Dame in Partial Fulfillment of the Requirements for the Degree of Master’s of Science in Engineering, Science & Technology Entrepreneurship by S. Kiel Hockett Jr., BSAE, BA Dr. David Go, Advisor Robert Alworth, Director Graduate Program in Engineering, Science & Technology Entrepreneurship Excellence Notre Dame, Indiana July 2011
  • 2. c Copyright by S. Kiel Hockett Jr 2011 All Rights Reserved
  • 3. ENVIRONMENTALLY OPPORTUNISTIC COMPUTING EOC Engineering Executive Summary by S. Kiel Hockett Jr. Waste heat created by high performance computing and information commu- nications technology is a critical resource management issue. In the United States, billions of dollars are spent annually to power and cool these data systems. The August 2007 U.S. Environmental Protection Agency Report to Congress on Server and Data Center Efficiency estimated that the U.S. Spent $4.5 billion on electric power to operate high performance and information technology servers in 2006, and that amount has grown to over $8 billion in 2011. The cooling systems for the typical data center use close to 40% of the total power draw of the data center, totaling over $2.5 billion nationally. To address this market need, EOC Engineering will develop and deploy mod- ule computing nodes that utilize our unique algorithm to supply useable heat when and where it is needed and manage the data load when it is not. The EOC Engineering nodes will function just like a normal data center, complete with Uninterruptible Power Supply (UPS), power systems, and protection from fire, smoke, humidity, condensation, and temperature changes. The nodes have the option of also installing active cooling measures such as computer room air chillers and handlers, but the algorithm renders those measures unnecessary and the customer can save on capital costs by not installing them. The unique EOC
  • 4. S. Kiel Hockett Jr. Engineering algorithm is known as the Green Cloud Manager (GCM) and is de- signed to manage the IT load in each of the EOC Engineering nodes within a company or municipality to supply heat where the customer needs it, and concur- rently mitigate the cooling costs for the server hardware. The EOC nodes will reduce the power consumption of the customers computer hardware by almost 40% and save the customer thousands of dollars each year in heating costs. By removing all active cooling methods in a typical 5,000-square- foot data center, a customer will save more than $370, 000 each year. Because these nodes will reduce the carbon footprint of the customer, in addition to actively saving the customer money in reduced electric bills, the customer may be eligible for government grants for increasing the efficiency of their data center. This thesis will: offer an explanation of the science and engineering underlying the proposed innovation; provide a comprehensive review of the potential applica- tions for the technology; detail the intellectual property inherent in the technology and provide a competitive analysis of it; outline the barriers to successful commer- cialization of the product; and outline additional work still required to create a prosperous business. The appendices will provide a examination of the prototype and the measurements taken to determine its efficacy, explore the section of the United States Code that deals with the patentability of inventions, and present the proposed business plan for EOC Engineering.
  • 5. DEDICATION To my parents, who, while always encouraging me to leave school and enter the “real world,” were supportive of my love for Notre Dame and my desire to continue my education. Thank you for believing in me. i
  • 6. CONTENTS DEDICATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi ACKNOWLEDGEMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . vii CHAPTER 1: EXPLANATION OF SCIENCE AND ENGINEERING BASIS 1 1.1 Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Current Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.1 Power Shedding . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.2 Power Forecasting . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.3 High-Density Zone Cooling . . . . . . . . . . . . . . . . . . 5 1.2.4 Hot-Aisle or Cold-Aisle Containment . . . . . . . . . . . . 5 1.2.5 Water Cooling . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.3 Overview of Proposed Environmentally Opportunistic Computing 6 1.3.1 Building-Integrated Information Technology . . . . . . . . 8 1.3.2 Market Forces . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.4 EOC Prototype . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.4.1 System Overview . . . . . . . . . . . . . . . . . . . . . . . 11 1.4.2 Computational Control . . . . . . . . . . . . . . . . . . . . 14 1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 CHAPTER 2: COMPREHENSIVE REVIEW OF THE POTENTIAL AP- PLICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 2.1 Retrofits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 2.1.1 Universities and Government Data Centers . . . . . . . . . 25 2.2 New Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 2.2.1 Major Industrial or Commercial Sites . . . . . . . . . . . . 26 2.3 Preheating Water . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 ii
  • 7. CHAPTER 3: INTELLECTUAL PROPERTY
  3.1 Patent Protection
    3.1.1 Novelty
      3.1.1.1 Events Prior to the Date of Invention
      3.1.1.2 Events Prior to Filing the Patent Application
    3.1.2 Nonobviousness
  3.2 Competitive Analysis
CHAPTER 4: BARRIERS TO SUCCESSFUL COMMERCIALIZATION
  4.1 Funding
  4.2 Technology Shortfalls
  4.3 Organization and Staffing
  4.4 Market Acceptance
  4.5 Channels to Market
CHAPTER 5: ADDITIONAL WORK REQUIRED
APPENDIX A: PROTOTYPE THERMAL MEASUREMENTS
APPENDIX B: PATENTABILITY PRIMER
  B.1 Algorithm
  B.2 Novelty
  B.3 Nonobviousness
APPENDIX C: BUSINESS PLAN
  C.1 Executive Summary
    C.1.1 Objectives
    C.1.2 Mission
    C.1.3 Keys to Success
  C.2 Company Summary
    C.2.1 Company Ownership
    C.2.2 Start-up Summary
  C.3 Products and Services
  C.4 Market Analysis Summary
    C.4.1 Market Segmentation
    C.4.2 Service Business Analysis
      C.4.2.1 Customer Buying Patterns
  C.5 Strategy and Implementation Summary
    C.5.1 Competitive Edge
    C.5.2 Pricing Model & Revenue Forecast
    C.5.3 Product Development & Milestones
  • 8.   C.6 Management & Staffing Summary
  C.7 Financials
BIBLIOGRAPHY
  • 9. FIGURES
1.1 Layout of prototype EOC container
1.2 Schematic of prototype EOC container
1.3 Basic algorithm workflow
1.4 Spatially-aware algorithm workflow
1.5 Comparison of GCM rule-sets
1.6 EOC in a Municipality
A.1 Schematic of average duct outlet velocity
A.2 Infrared thermal maps of a server
A.3 Container temperatures and available waste heat
A.4 Temperature measurements
C.1 U.S. Data Center Market
C.2 Data Center Power Draw
  • 10. TABLES
5.1 Activities & Milestones
C.1 Market forecast to 2020
C.2 Market size calculations
C.3 Anticipated market share
C.4 Anticipated projects and revenue
C.5 Milestones
C.6 Income statement, years 1–5
C.7 Income statement, years 6–10
C.8 Balance sheet, years 1–5
C.9 Balance sheet, years 6–10
C.10 Cash flow statement, years 1–5
C.11 Cash flow statement, years 6–10
  • 11. ACKNOWLEDGEMENTS To Maria, our “Den Mother” — If it weren’t for you, none of us would have gotten as far as we have. Thank you for pushing us, for consoling us, for making sure we were taken care of, and, above all, for putting up with us. You didn’t have to, and we didn’t deserve it. You’re a saint. To my advisor, Professor Go — Thank you for agreeing to work with me on this project. It’s been one of my best experiences at Notre Dame. I’ve learned a lot, and I hope I — and this thesis — have lived up to your expectations. To our ESTEEM[ed] Director, Professor Alworth — I can’t believe you let me into this program. Seriously, what were you thinking? But, since you did, I owe you a debt of gratitude. I hope having me in the program wasn’t too stressful on you. I know you’ve taught me quite a bit, and I thank you for it. Kiel vii
  • 12. CHAPTER 1 EXPLANATION OF SCIENCE AND ENGINEERING BASIS This chapter provides a background of current centralized data center concepts and the challenges they encounter such as energy usage, utility costs, and heat dissipation and management. It then moves into a discussion of techniques, some established, some novel, that are being used to mitigate one or more of these challenges while stressing that these techniques all retain the singular problem of disposing of unwanted heat generated by the hardware they attempt to cool. Next, two new concepts are introduced: Environmentally Opportunistic Comput- ing and Building-Integrated Information Technology. Together, these concepts bind information technology (IT) centers and buildings into a single unit where the machine hardware can provide useful energy to the building and the building can provide a useful heat sink for the IT systems. Finally, a current Environ- mentally Opportunistic Computing prototype is discussed, and is shown to be a unique solution to server cooling and building heating. 1.1 Challenge Waste heat created by high performance computing and information commu- nications technology (HPC/ICT) is a critical resource management issue. In the United States, billions of dollars are spent annually to power and cool data sys- tems. The August 2007 United States Environmental Protection Agency “Report 1
  • 13. to Congress on Server and Data Center Efficiency” estimates that the U.S. spent $4.5 billion on electrical power to operate HPC/ICT servers in 2006, and the same report forecasts that our national ICT electrical energy expenditure will nearly double — to $7.4 billion — by 2011. The reported number of federal data centers grew from 432 in 1998 to more than 2000 in 2009. In 2006, those federal data centers alone consumed over 6 billion kW·h of electricity and will exceed 12 billion kW·h by 2011. Current energy demand for HPC/ICT is already three percent of U.S. electricity consumption and places considerable pressure on the domestic power grid; the peak load from HPC/ICT is estimated at 7 GW — the equivalent of 15 baseload power plants [31, 120].
As a result, optimized performance and enhanced systems efficiency have increasingly become priorities amid mounting pressure from both environmental advocacy groups and company bottom lines. However, despite evolving low power system architecture, demands for expanded computational capability continue to drive utility power consumption toward economic limits equal to capital equipment costs [31]. The faster and more efficient computational capability becomes, the more society grows to require it, concomitantly increasing power utilization for operation and cooling; put another way, top-end performance often translates to top-end power demand and heat production [12]. Thus, architects and engineers must always contend with growing heat loads generated by computational systems, and the associated energy wasted in cooling them.
Recognizing that energy resources for data centers are indeed limited, several professional organizations within the technology industry have begun to explore this problem. The High-Performance Buildings for High Tech Industries Team at Lawrence Berkeley National Laboratory [24], the ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) Technical Committee
  • 14. 9.9 for Mission Critical Facilities, Technology Spaces, and Electronic Equipment [87], and the Uptime Institute [30] have all expressed interest in creating solutions. At the same time, efforts by corporations, universities, and government labs to reduce their environmental footprint and more effectively manage their energy consumption have resulted in the development of novel waste heat exhaust and free cooling applications, such as the installation of the Barcelona Supercomputing Center — MareNostrum — in an 18th century Gothic masonry church [15], or through novel waste heat recirculation applications, such as a centralized data center in Winnipeg that uses recirculated thermal energy to heat the editorial offices of a newspaper directly above [43]. Similar centralized data centers in Israel [12] and Paris [62] use recaptured waste heat to condition adjacent office spaces and an on-site arboretum, respectively.
Despite systems-side optimization of traditional centralized data centers and advances in waste heat monitoring and management, current efforts in computer waste heat regulation, distribution, and recapture are focused largely on immediate, localized solutions, and have not yet been met with comprehensive, integrated whole building design solutions. While recommendations developed recently by industry leaders to improve data center efficiency and reduce energy consumption through the adoption of conventional metrics for measuring Power Usage Effectiveness (PUE) recognize the importance of whole data center efficiency, the guidelines do not yet quantify the energy efficiency potential of a building-integrated distributed data center model [11].
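For context, PUE is conventionally defined as the ratio of total facility power to the power delivered to the IT equipment itself (this is the standard industry definition, not a result of this work):

\[ \mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}} \]

A PUE of 1.0 would mean every watt entering the facility reaches the computing hardware; cooling and power-distribution overhead push real facilities above that value, and it is precisely this overhead that a building-integrated model aims to repurpose rather than simply minimize.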
  • 15. 1.2 Current Techniques Current designs for reduced heating and cooling loads, with the exception of those in Sec. 1.1, are mostly stop gap measures used to improve the transfer of heat away from HPC/ICT systems and into the environment outside the data center, or to limit the amount of energy expended and thus lower the cost of operation. Most current technologies focus on increasing the rapidity with which heat is carried away from the servers and improving the efficiency of venting that heat to the atmosphere. 1.2.1 Power Shedding Power shedding is a very basic and blunt solution to cooling data centers. These measures involve shedding the server workload to the minimum feasible level or even powering servers down completely; they are typically most applicable during power outages, cooling failures, and brownouts. This impractical method is a last resort for data centers, and is only used in times of absolute necessity, but nonetheless is an effective means of controlling the heat production and the cooling necessary to maintain safe or tolerable temperatures. 1.2.2 Power Forecasting The power forecasting technique can be implemented when computing needs are known in advance and can be scheduled to coincide with periods of cheap energy — particularly at night — and is most applicable for research institutions that are able to use a data management program to schedule computing jobs in advance and dispatch them from the queue when electricity prices fall to a preset level, or when the outside ambient temperature reaches a level that requires 4
  • 16. little to no cooling before the air is delivered to the data center. This system requires smart utility meters designed to recognize the price of electricity and a data management program capable of deploying jobs only when certain conditions are met. 1.2.3 High-Density Zone Cooling Many data centers are an amalgamation of low-density and high-density servers. By separating these into high-density and low-density zones, the cooling and air- flow can be scaled for each sector — high where it is needed, low where it is not — as opposed to scaling only for the hottest servers. A high-density zone is a physical area of the data center allocated to high-density operations, with self-contained cooling so that the zone appears thermally “neutral” to the rest of the room — requiring no cooling other than its own, and causing little or no disturbance to the existing airflow in the room. The zone can be cooled and managed independently of the rest of the room, thus simplifying deployment and minimizing disruption of existing infrastructure [75]. 1.2.4 Hot-Aisle or Cold-Aisle Containment Prevention of hot and cold air mixing is key to all efficient data center cooling strategies. Many data centers employ alternating hot- and cold-aisles to reduce such mixing. By building containment partitions at the end of these aisles, the chance for mixing is reduced. If the data center uses a cold-aisle containment scheme, the hot air is drawn into a cooling system and pumped back through a raised floor under the cold-aisle. If it is a hot-aisle scheme, the entire room becomes a cold air plenum to feed the servers and the hot-aisle can either be 5
  • 17. cooled and returned to mix with the room air, or vented to the outside and cooler air can be brought in and conditioned to ensure proper temperature and humidity [74].
1.2.5 Water Cooling
Air cooling for data centers is limited in that it creates a complex and inefficient infrastructure by requiring, from the beginning, all of the cooling hardware necessary to meet future demands, without regard for current usage of the space. Water cooling is a system whereby localized, passive, low-power dissipation liquid cooling devices can be scaled rack by rack and permit a “pay as you go” cooling implementation. This cooling hardware is modular and can be purchased at the same rate as future IT hardware purchases. Water has a much higher thermal capacity than air — roughly 3,400 times higher — and thus can be used to more quickly remove heat. Additionally, if located near a natural source of water, ‘free’ cooling may be obtained for the data center [76], although this method necessitates the use of piping and pumps to control the flow of water and has the additional expense of actively managing the pumping system.
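As a rough check of the “roughly 3,400 times” figure (an illustrative calculation using standard property values, not taken from the thesis), compare the heat absorbed per unit volume and per degree by water and by air:

\[ \rho_w c_{p,w} \approx (998\ \mathrm{kg/m^3})(4.18\ \mathrm{kJ/kg\,K}) \approx 4.2 \times 10^{3}\ \mathrm{kJ/(m^3\,K)} \]
\[ \rho_a c_{p,a} \approx (1.2\ \mathrm{kg/m^3})(1.0\ \mathrm{kJ/kg\,K}) \approx 1.2\ \mathrm{kJ/(m^3\,K)} \]

The ratio is on the order of 3,400–3,500, consistent with the quoted figure on a per-volume basis.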
  • 18. 1.3 Overview of Proposed Environmentally Opportunistic Computing
Environmentally Opportunistic Computing (EOC) is a sustainable computing concept that capitalizes on the mobility of modern computing processes and enables distributed computer hardware to be integrated into a building to facilitate the consumption of computational waste heat in the building environment. Much like a ground-sourced geothermal system, EOC performs as a “system-source” thermal system, with the capability to create heat where it is locally required, to utilize energy when and where it is least expensive, and to minimize a building’s overall energy consumption. Instead of expanding active measures to contend with thermal demands, the EOC concept utilizes HPC/ICT coupled with system controls to enable energy hungry, heat producing data systems to become service providers to a building while concurrently utilizing aspects of a building’s heating, ventilating, and air conditioning (HVAC) infrastructure to cool the machines; essentially, the building receives ‘free’ heat, and the machines receive ‘free’ cooling [31].
The EOC philosophy recognizes that despite evolving lower power architectural technologies, demands for increased capability will propel power consumption toward economic limits. The central component of EOC is the requirement for efficient re-utilization of this expended electrical energy as thermal energy. In contrast to the design of a single facility around centralized computational infrastructure, EOC capitalizes on grid and virtualization technologies to distribute computational infrastructure in-line with existing facility thermal requirements. The core of EOC is the recognition that computational infrastructure can be strategically distributed via a grid in-line with facilities and processes that require the thermal byproduct that computation delivers. Grid computing harnesses the idle processing power of various computing units that may be spread out over a wide geography, and uses that processing power to compute one job. The job itself is controlled by one main computer, and is broken down into multiple tasks which can be executed simultaneously on different machines. Grid computing allows unused processing power to be effectively utilized, and reduces the time taken to complete a large, computation intensive, task. Any framework built upon this understanding reduces or removes cooling requirements and realizes cost sharing on
  • 19. primary utility expenditures. This contrasts with traditional data center models where a single facility is designed around a centralized computational infrastruc- ture. With similar motives for energy efficiency and environmental stewardship, multiple organizations have made strides in the optimization of traditional cen- tralized data centers [24, 30, 87]. Individual data centers have re-utilized the thermal energy to the benefit of their own facilities; however, a grid distributed approach is necessary to utilize all of the thermal energy effectively and remove the cooling requirement. EOC un- derstands that the grid heating model centers around computation, energy trans- formation, and energy transfer and recognizes that the transformation and trans- portation of waste heat quickly reduces efficiency. Therefore, EOC targets the distribution and scale of each heating grid node to match the geographic and thermal requirements of the target heat sink. 1.3.1 Building-Integrated Information Technology EOC is best described as a building-integrated information technology (BIIT) that distributes IT hardware — computer workstations, displays, or high perfor- mance computing servers — throughout an institution instead of consolidating them in a single, central facility. The building-integrated nodes interact with the energy requirements and capabilities of the buildings in which they are located to deliver recoverable waste heat in order to offset a building’s energy requirements and to utilize natural cooling from the building. To be truly integrated, however, the deployment of building-integrated nodes must consider the impact on archi- tectural form and function as well as the optimal engineering solution and must answer the following questions: 8
  • 20. 1. How can the nodes be integrated into existing durable building stock in ways that limit the impact on architectural form and the function of a building, limiting adverse effects on the building occupants?
2. What is the potential for a technology such as building-integrated nodes to contribute to optimized form-generation of new buildings in concert with other existing passive design technologies (building location, orientation, massing, materiality, bioclimatic responses)?
3. How should these nodes be controlled within a building or across a collection of buildings such that they meet the requirements of both the computing users and the building occupants?
If a building is to be retrofitted to make use of EOC nodes, the location of these nodes is initially constrained by practical and energy performance limitations: the node must be connected to the centralized HVAC system and duct work and it must be located near the room(s) it will heat. If the building contains a large, open floor of employee workstations, placing the EOC node somewhere along existing HVAC ducts may not only impact the physical space, but it may also decrease the usable space, alter the temperature distribution in the room, affect sight lines, and impact the room lighting. For any location and orientation along the existing HVAC duct lines, these impacts can be objectively quantified and used to evaluate the qualitative comfort of the space.
Additionally, the introduction of EOC nodes as part of an HVAC system further complicates an already complex control system. By itself, an HVAC system attempts to deliver a combination of environmental and conditioned air from air handling units in order to maintain a comfortable temperature (68–72 °F [1]) and good air quality (15 ft³/min per person) during peak load conditions based on
  • 21. expected occupancy patterns. The air handling units are scheduled on an hour-by-hour basis based upon these patterns. Actual heating loads and air quality, however, can vary randomly from expected values and this variation is handled by zone controls which adjust damper positions and reheating elements in the HVAC system’s terminal box. The introduction of EOC nodes as reheating elements in the HVAC system’s terminal box is complicated by the time-varying nature of the EOC node’s activity. The nodes produce heat as a by-product of computational loads, but the load distribution can fluctuate over time. A major challenge of this product will be control of the building’s existing HVAC system in order to make optimal use of the excess heat generated by the EOC nodes while maintaining or exceeding health and comfort standards. Similarly, when the computational units are receiving free cooling, which may be time-variant based on HVAC production, the control system must distribute computational loads in order to preserve the ICT/HPC hardware.
1.3.2 Market Forces
Unlike current approaches to managing IT hardware, EOC integrates IT into buildings to create heat where it is already needed, to exploit cooling where it is already available, and to minimize the overall IT and building energy consumption of an organization. IT hardware consumes more than 3% of the U.S. electrical budget, and this consumption will only grow as the demand for enhanced computing capability continues. To that end, the national energy expenditure of data centers is expected to increase by 50% from 2007 to 2011 [120]. Therefore, there is both a strong economic and strong environmental need to readdress how IT hardware
  • 22. is deployed and utilized as information technology continues to define and shape modern life. The American Physical Society suggests that up to 30% of the energy used in commercial buildings is wasted [14, 81], and it has been shown that HVAC accounts for 50% of the energy consumed in commercial buildings [37]. Efforts to improve building energy efficiency through the LEED program have achieved limited success [115] with the “lack of innovative controls and monitoring” iden- tified as one of the chief obstacles to high energy efficiency. The development of a networked, distributed control model BIIT deployment and optimization will set the standard for improved approaches for traditional HVAC systems. 1.4 EOC Prototype 1.4.1 System Overview The University of Notre Dame Center for Research Computing (CRC), the City of South Bend, and the South Bend Botanical Society have collaborated on a building-integrated distributed data center prototype at the South Bend Botanical Garden and Greenhouse (BGG) called the Green Cloud Project — the first field application of EOC. The Green Cloud prototype is a container that houses CRC HPC servers and is situated immediately adjacent to the BGG facility where it is ducted into one of the BGG public conservatories. The HPC hardware components are directly connected to the CRC network and are currently able to run typical University-level research computing loads. The heat generated from the HPC hardware is exhausted into the BGG conservatory with the goal of offsetting wintertime heating requirements and reducing BGG annual heating expenditures. 11
  • 23. The Green Cloud node prototype was designed to minimize cost while provid- ing a suitably secure facility for use outdoors in a publicly accessible venue. The container-based solution is 20ft long by 8ft wide and retrofitted with the following additions: a 40 kW capacity power panel with 208 V power supplies on each rack, lighting, internally insulated walls, man and cargo door access, ventilation louvers, small fans and duct work connecting the node to the greenhouse. The prototype has a total cost of under $20, 000. Exterior power infrastructure – including the transformer, underground conduit, panel and meter – were coordinated by Amer- ican Electric Power (AEP) and the City of South Bend. The slab foundation was also provided by the City of South Bend. High bandwidth network connectivity critical to viable scaling is possible via a 1Gb fiber network connection to the University of Notre Dame campus on the St. Joseph County MetroNet Backbone. The general overview of the Green Cloud setup can be seen in Fig. 1.1. Over 100 four-core machines of two types were provided by eBay Corporation for the Green Cloud prototype. Specific server model information is unavail- able due to proprietary restrictions. For the data in this work, the machines are arranged in three racks, as seen in Fig. 1.2, and placed in a simplified cold- aisle/hot-aisle setup running through the middle of the container. Two networked environmental sensors2 were also installed for real-time temperature and humidity monitoring. 2 APC AP9512THBLK sensors with APC AP9319 environmental monitoring units 12
  • 24. Figure 1.1. Layout of prototype EOC container integrated into BGG facility
Figure 1.2. Schematic of prototype EOC container
  • 25. 1.4.2 Computational Control
The servers of the Green Cloud Project are fully integrated into the Notre Dame Condor pool (http://crcmedia.hpcc.nd.edu/wiki/index.php/Notre Dame Condor Pool), to which the end-users submit their high-throughput jobs, i.e., jobs that use many computing resources over long periods of time to accomplish a computational task. Condor (http://www.cs.wisc.edu/condor/) is a distributed batch computing system used at many institutions around the world that allows many users to share their computing resources. This allows users access to more computing power than any of them individually could afford. Condor harnesses idle cycles from underused machines and sets tasks for those machines to complete while the native user is not present.
As opposed to a traditional data center, machines in the Green Cloud pool group are additionally managed by an environmentally-aware Green Cloud Manager (GCM) controls system. This is necessary because the prototype is not fitted with HVAC equipment of its own and relies solely on free cooling by either outside air through the louvers or greenhouse air through a return vent during the hot and cold seasons, respectively. The primary role of the GCM is to maintain each machine within its safe operating temperatures as stated by their manufacturers, shutting it down if needed to prevent damage. At the same time, the GCM attempts to maximize the number of machines available for scientific computations, thereby maximizing the temperature of the hot-aisle air that is used for greenhouse heating.
The GCM interfaces with both Condor and xCAT (the Extreme Cloud Administration Toolkit, http://xcat.sourceforge.net/), which provides access to the functionality of the hardware’s built-in service processors: power state control and measurements of intake, RAM (random-access memory), and CPU (central processing unit) temperatures, fan speeds, and voltages.
  • 26. Tests performed with a thermal imaging camera have confirmed the average intake temperatures reported by the machine’s service processors to be suitably accurate. As such, these are used by the GCM to decide whether or not the machine is operating within a safe temperature range.
The Condor component handles all of the scientific workload management: deploying jobs on running servers, evicting jobs from machines meant to be shut down, and monitoring the work state of each core available in the Green Cloud. The GCM posts new Condor configurations to the machines whenever any actions are required. The interval for Condor collector status updates was lowered from the default 5 minutes to 1 minute in order to provide the GCM with more up-to-date information.
The GCM is written in Python and provides rule-based control of the servers running in the Green Cloud based upon xCAT data as well as on-line measurements of the cold-aisle/hot-aisle temperatures in the container gathered from the two APC sensors. The rules are applied every 2 minutes to each machine individually, and decide what action the machine should take under the current conditions: start (machine is started, Condor starts running jobs), suspend (Condor jobs are suspended, machine is running idly), and hibernate (Condor jobs are evicted, machine shuts down).
  • 27. The basic GCM rule-set, Algorithm 1, was the first used to control the heat of the servers themselves and prevent the machines from operating beyond supplier-specified environment temperature ranges to prevent wear due to overheating.
Algorithm 1: Basic rule-set for the GCM
    if $mytemp ≤ $start_temp then start;
    if $mytemp < $sleep_temp then continue;
    if $mytemp ≥ $sleep_temp then hibernate;
Figure 1.3. Baseline logic algorithm workflow (courtesy of Eric Ward)
In this rule-set, whenever a machine’s intake temperature exceeded the stop temperature (106°F) it was hibernated. The machine was only restarted after its intake temperature dropped below a starting point (99°F) to prevent excessive power cycling. Fig. 1.3 shows a walkthrough of the algorithm, where T_in is the inlet temperature of the server, T_crit is the stop temperature, and T_start is the temperature at which the machine can restart.
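Expressed as executable code, the basic rule-set is short. The following is a minimal Python sketch added for illustration (not the actual GCM source); the threshold constants come from the text above, while the helpers for reading intake temperatures and applying actions are hypothetical stand-ins for the xCAT and Condor interfaces.

    import time

    START_TEMP_F = 99.0    # restart point: below this, a hibernated machine may be powered on
    SLEEP_TEMP_F = 106.0   # stop temperature: at or above this, a running machine is hibernated

    def basic_rule(intake_temp_f):
        """Decide the action for one machine under the basic rule-set (Algorithm 1)."""
        if intake_temp_f <= START_TEMP_F:
            return "start"       # power on; Condor may begin scheduling jobs
        if intake_temp_f < SLEEP_TEMP_F:
            return "continue"    # between the two thresholds: leave the machine as it is
        return "hibernate"       # evict Condor jobs and shut the machine down

    def control_loop(machines, read_intake_temp_f, apply_action):
        """Apply the rules to every machine individually on a fixed interval."""
        while True:
            for machine in machines:
                apply_action(machine, basic_rule(read_intake_temp_f(machine)))
            time.sleep(120)      # the GCM applies its rules every 2 minutes

The 7°F gap between the restart and stop temperatures provides the hysteresis described above, preventing excessive power cycling.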
  • 28. With this basic rule-set, a good proportion of machines (33 of 60) were running Condor jobs during afternoons with outside temperatures ranging from 82°F to 95°F. As expected, the coldest machines were at the bottom of the racks, and the coolest rack was placed near the intake louver. On all racks, the 3–5 top machines were always off, except for cold early mornings, effectively rendering them useless. This was indicative of existing improvement opportunities in the basic algorithm.
There also existed a large intake temperature difference (8–15°F) between machines that were on and running jobs and ones that were hibernated. The difference was even sharper between machines directly above or below each other in a rack. This led to the observation that a server’s intake fans are not spinning when the machine is hibernated. This results in insufficient airflow for the machine to cool down below the point of restart and can even result in the recirculation of hot air from the hot-aisle to the cold-aisle. Because of this, during the early hours of the night there were hibernating machines as hot as 115°F at the top of the racks while the machines at the bottom were only 75°F.
Such conditions prompted further work on the GCM to include on-line fetching of hot- and cold-aisle temperatures and the placement of a machine in the rack. With this new information, a spatially-aware rule-set, Algorithm 2, was introduced. The rules regarding hibernation were split in two, depending on the temperature of the cold-aisle. When the cold-aisle was above 85°F, the behavior stayed the same as the basic approach. In cases where the cold-aisle was below this level, the rules changed. The algorithm now suspends the machine from performing more Condor jobs instead of hibernating it after exceeding the sleep temperature of 106°F. The machine is only hibernated (shut down) if the upper operating temperature of 109°F is exceeded. This rule prevents newly started machines from re-hibernating more quickly than they can cool down. This algorithm forces a hibernating machine to awake, even if it reports a modest overheating, in order for its fans to run and cool the machine.
  • 29. Algorithm 2: Spatially-aware rule-set for the GCM
    if $mytemp ≤ $start_temp then start;
    if $mytemp < $sleep_temp then continue;
    if $cold_aisle > 85 and $mytemp ≥ $sleep_temp then hibernate;
    if $cold_aisle ≤ 85 and $mytemp ≥ $sleep_temp and $mystate == “on” then suspend;
    if $cold_aisle ≤ 85 and $mytemp ≥ $danger_temp then hibernate;
    if $cold_aisle ≤ 85 and $mystate != “on” and $mytemp ≤ $danger_temp + 6 and $neightemp ≤ $start_temp then start;
However, this behavior is only applied if the average intake temperatures of the two machines directly below it are below the starting point temperature of 99°F. Fig. 1.4 shows a walkthrough of this new algorithm, where T_in,j−1 and T_in,j−2 are the intake temperatures of the machines directly below and two below, respectively, the machine being tested by the GCM.
Apart from system control, the GCM also provides detailed logging of the transient conditions. Each log entry is stored separately and consists of general container measurements and individual machine values. The logs are saved to provide reference points after adjustments to GCM rules or the physical setup of the prototype. To provide ease in interpretation of the GCM logs, the AJAX-based GC Viewer (publicly available at http://greencloud.crc.nd.edu/status) has been created. This on-line tool provides a near realtime view of the rack-space with color gradient representation of temperatures and information about core utilization and machine states. It also allows users to choose any data point in the measurement period and run a slideshow-like presentation of the changes to the machine states and temperatures.
  • 30. Figure 1.4. Spatially-aware logic algorithm workflow
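The spatially-aware rule-set can be sketched the same way. This is an illustrative Python rendering of Algorithm 2 rather than the GCM source: it assumes a first-matching-rule-wins reading of the listing consistent with the surrounding prose, and the state, cold-aisle, and neighbor inputs are supplied by hypothetical helpers.

    COLD_AISLE_LIMIT_F = 85.0
    START_TEMP_F = 99.0
    SLEEP_TEMP_F = 106.0
    DANGER_TEMP_F = 109.0

    def spatial_rule(intake_f, state, cold_aisle_f, below_avg_f):
        """Action for one machine under the spatially-aware rule-set (Algorithm 2).

        state        -- "on" or "hibernated"
        below_avg_f  -- average intake temperature of the two machines directly below
        """
        if intake_f <= START_TEMP_F:
            return "start"
        if intake_f < SLEEP_TEMP_F:
            return "continue"
        # Hot cold-aisle: behave like the basic rule-set and shut the machine down.
        if cold_aisle_f > COLD_AISLE_LIMIT_F:
            return "hibernate"
        # Cool cold-aisle, machine running: suspend jobs first, and only shut down
        # once the upper operating temperature is exceeded.
        if state == "on":
            return "suspend" if intake_f < DANGER_TEMP_F else "hibernate"
        # Cool cold-aisle, machine hibernated but only modestly overheated: wake it
        # so its fans spin, provided the machines below it are below the start point.
        if intake_f <= DANGER_TEMP_F + 6 and below_avg_f <= START_TEMP_F:
            return "start"
        return "continue"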
  • 31. Using this GC Viewer, it is easy to see the impact of the spatially-aware rule-set (Fig. 1.5). During two consecutive days of similar weather, the GCM ran both rule-sets and recorded their impact on the state of the Green Cloud during the evening (8 p.m., 81°F outside air temperature). With the basic rule-set, only 33 machines were running and 27 were hibernated. Using the spatially-aware rule-set, the number of running machines increased to 41, which allowed 36 more processor cores to run scientific computations. Moreover, the spatially-aware rule-set resulted in a 13.7% increase in hot-/cold-aisle temperature difference. This amounted to an increase of 1.57 kW, to 11.29 kW, of heat recovered for the greenhouse. The spatially-aware rule-set also allows 2–3 more machines to run even during the hottest parts of the day. While the control algorithms are important, it should be noted that the availability of outside air below 95°F has the potential to provide nearly year-round cooling for continual server operation. (For the thermal measurements of the prototype, including server inlet and outlet temperatures, waste heat produced and supplied, and the thermal calculations therein, refer to Appendix A.)
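For reference, the recovered-heat figures above follow from the standard sensible-heat relation for the air stream delivered to the greenhouse (the relation is standard; the measured airflow and temperatures themselves are reported in Appendix A and are not restated here):

\[ \dot{Q} = \rho \, \dot{V} \, c_p \left( T_{\text{hot}} - T_{\text{cold}} \right) \]

so that, for a given fan airflow \(\dot{V}\), a larger hot-/cold-aisle temperature difference, together with more running machines moving air, translates directly into more heat delivered to the greenhouse.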
  • 32. Figure 1.5. Comparison of rule-set data points in GC Viewer. Note the white and black boxes denoting hibernating and running machines, respectively. (a) Spatially-aware rule-set (2010-08-01 20:00): Running: 41, Hibernated: 19, Cold-Aisle: 82°F, Hot-Aisle: 111°F. (b) Basic rule-set (2010-08-02 20:00): Running: 33, Hibernated: 27, Cold-Aisle: 84°F, Hot-Aisle: 109°F.
1.5 Conclusion
The Green Cloud prototype serves as a successful demonstration that ICT resources can be integrated within the energy footprint of existing facilities and can be dynamically controlled via the Green Cloud Manager to balance process throughput, thermal energy transfer, and available cooling. Apart from not using energy intensive air conditioned cooling at a centralized data center, the GC further improves energy efficiency by harvesting the wasted hot air vented by the servers for the adjacent greenhouse facility. The success of this technique makes the EOC concept especially attractive for sustainable cloud computing frameworks. To most effectively utilize EOC, information technology should be deployed
  • 33. across a number of buildings. All of the EOC nodes will be grouped together in a single municipality or spread across a region or the whole of the continent depending upon the type of compute resource and speed of connection. (The University of Notre Dame, Purdue University-Calumet, and Purdue University-West Lafayette have recently come together in a partnership that couples mutual interests to build a cyber-infrastructure that allows a computational grid connecting the compute facilities at all three locations, known as the Northwest Indiana Computational Grid (NWICG). NWICG allows researchers to submit jobs at any campus and use the compute resources of any other campus. It is currently operated under the Department of Energy’s National Nuclear Security Administration.) Computing jobs will be migrated from facility to facility from a cloud based upon which building needs the waste heat, which building can provide free cooling, or where the energy is the cheapest (Fig. 1.6). Environmentally Opportunistic Computing is then controlled to achieve a balance between the computing needs of the end-users and the needs of the building occupants.
Ultimately, a municipality will contain an interconnected network of EOC nodes that local universities, businesses, and government offices are able to use for their compute jobs and of which they all claim some ownership. In order to receive the greatest benefit, the GCM must interface with the building management systems that monitor and control the building’s mechanical and electrical equipment, including the HVAC systems. Many of these systems are constructed using open standards and so the GCM will be able to interact with them. Partnerships and synergy across these institutions will greatly enhance the environmental, thermal, and economic benefits of Environmentally Opportunistic Computing.
Furthermore, the Green Cloud Manager has been proven to successfully manage computational load and prevent critical overheating of server hardware. As the GCM control system improves, available computer power will increase and refined control of temperature output will be realized. With growing enterprise utilization of cloud computing and virtualized services, EOC becomes more viable across the range of ICT services.
  • 34. Figure 1.6. Environmentally Opportunistic Computing Across a Municipality
Environmentally Opportunistic Computing technology will continue to supply proven economic and environmental benefits to both an organization and its community partners.
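To illustrate the municipality-scale dispatch described in this conclusion, the sketch below shows how a scheduler might rank candidate EOC nodes before migrating jobs. The structure, field names, and scoring weights are hypothetical and are not part of the Green Cloud Manager as built.

    from dataclasses import dataclass

    @dataclass
    class NodeStatus:
        heat_demand_kw: float   # useful heat the host building wants right now
        free_cooling: bool      # can the node reject heat without chillers?
        energy_price: float     # local electricity price, $/kWh
        idle_cores: int         # spare capacity for incoming jobs

    def score(node: NodeStatus) -> float:
        """Higher is better: prefer nodes whose waste heat is wanted, that can be
        cooled for free, and that sit where electricity is cheap (weights are illustrative)."""
        if node.idle_cores == 0:
            return float("-inf")           # nowhere to put the work
        s = node.heat_demand_kw            # reward useful heat delivery
        if node.free_cooling:
            s += 10.0                      # reward free cooling
        s -= 100.0 * node.energy_price     # penalize expensive power
        return s

    def pick_node(nodes):
        """Choose the EOC node that should receive the next batch of migrated jobs
        from a mapping of node name to NodeStatus."""
        return max(nodes, key=lambda name: score(nodes[name]))

A real deployment would also fold in the building-occupant comfort constraints discussed above.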
  • 35. CHAPTER 2 COMPREHENSIVE REVIEW OF THE POTENTIAL APPLICATIONS The Green Cloud Manager and Environmentally Opportunistic Computing nodes will be scaled to create useful computing and thermal solutions for any industrial, commercial, or academic facility. Based on the computational capabil- ities of the organization, nodes will be developed and installed in existing buildings to mitigate the cooling needed for the servers and lessen the central heating re- quirements for the facility. However, the ideal approach will be for the EOC idea to be actively integrated into the planning and construction of new structures. 2.1 Retrofits When computing nodes utilizing EOC Engineering’s control system are com- bined with current building stock, the location of the nodes is initially constrained by both practical and energy performance limitations: the nodes must be con- nected to the building’s HVAC system and duct work and they must be located near the rooms they are helping to heat. If the building contains a large open floor plan, placing the EOC node somewhere along the existing duct work may not only impact the physical space, but detract from the usable space as well as affect sight lines, alter the temperature distribution, and impact lighting. This is not to say that retrofitting a building to include Environmentally Opportunistic 24
  • 36. Computing concepts cannot be done, merely that it requires extra design analysis to determine the best possible location for a node.
2.1.1 Universities and Government Data Centers
Universities and government data centers are ideal customers for this technology. The United States government currently has 2,094 data centers (up from 432 in 1999) [61] and is currently in the process of consolidating them in an attempt to increase efficiency and decrease government waste, with the goal of reducing the number of federal data centers by 40% by 2015. In addition, the 2009 federal stimulus plan includes $3.4 billion for IT infrastructure [63]. Thus, the market for technologies that would reduce the costs associated with running these facilities is enormous.
Universities, with their multitudes of buildings and large research facilities, are another major market for this technology. EOC nodes could easily be added to existing university building stock and save hundreds of thousands of dollars each year in reduced heating costs for those facilities. It is also apparent that some universities would be unwilling to install nodes on campus buildings for a number of reasons: the nodes would change the “look” or “feel” of the campus buildings; heating is supplied via steam from a campus cogeneration power plant; there is no room to change the building footprint; etc. In these cases, the university could contract with local area businesses to supply heat for those buildings. The university would still see reduced cooling costs for the servers, and the business would allow storage of servers on its property in exchange for comparatively inexpensive heat. The servers could be connected to the main campus cluster via fiber optic cable if infrastructure permits, or satellite if it does not. (The original prototype utilized a satellite connection before the installation of a fiber-optic cable.)
  • 37. 2.2 New Facilities
New facilities allow the greatest freedom for EOC nodes to actively manage heating loads in the building and to distribute computational ability where it is most sensible from an economic, thermal, and aesthetic standpoint. In constructing a new building, the computing nodes and the GCM will be employed to work in conjunction with other passive design technologies such as location, orientation, massing, materiality, and bioclimatic responses in order to more fully take advantage of the potential energy savings and thermal efficiencies (Sec. 1.3.1). All new construction — whether it be commercial, academic, or governmental — must account for these processes while designing for optimum efficiency and price.
2.2.1 Major Industrial or Commercial Sites
If constructing a new facility, a dedicated data center can be built to support it while at the same time maximizing the form and function of the combined data center and building. If the site needs a lot of heat, it is also possible to build a very large data center capable of supplying more than enough thermal energy while still using the GCM algorithm to control hot spots within the server racks and use outside-air cooling. Yahoo! Inc. has recently constructed a 155,000-square-foot data center housing 50,000 servers and using only ambient air cooling methods [42]. Resembling a chicken coop, the Yahoo! model brings cool air in through grates on the sides of the building and collects the waste heat in ducts in the ceiling before venting it to the atmosphere. The EOC model would collect this heat and duct it to an adjoining facility to serve a useful purpose.
  • 38. 2.3 Preheating Water The Green Cloud Manager and Environmentally Opportunistic Computing can also be used even if a data center uses in-rack water cooling instead of air cooling (Sec. 1.2.5). The GCM will still monitor the status and heat generation of the servers and will direct compute load to where it is needed as per usual, but instead of then ducting hot air into a facility, the system will pipe hot water to a heat exchanger and preheat water for other purposes. The water will then be cooled further — either through the use of chilling towers or by using a nearby lake – before it is sent back into the data center for reuse cooling the servers. 27
  • 39. CHAPTER 3 INTELLECTUAL PROPERTY 3.1 Patent Protection 3.1.1 Novelty The patentability requirements for determining whether or not EOC Engineer- ing’s Green Cloud Manager is novel fall into two basic categories: events prior to the date of invention, and events prior to the filing of a patent application. In the first case, the invention cannot be known or used by others in the United States. This includes any previously patented invention, or printed publication that would describe the invention. In the second, the invention must not be in the public use or on sale in the United States for more than one year before the date of filing of the patent application. For a background on the patentability of algorithms and the novelty requirements, refer to Appendix B.1 and B.2, respectively. 3.1.1.1 Events Prior to the Date of Invention In order to determine whether the process that constitutes Environmentally Opportunistic Computing has been previously known or used by others in the United States, it is necessary to scour previous patent applications and other pub- lished materials for methods of using waste heat to actively heat other facilities and for methods utilizing an algorithm for dispersing compute load for optimal 28
  • 40. thermal conditions. There are many patents that relate to the cooling of data centers, but because the central tenet of EOC is the management of computational load as it relates to heat management, and cooling is limited to ambient-air cooling, these previous patents are not relevant. Instead, the focus is on data center schedulers and management systems. These systems generally fall under Class 709 of the patent system and relate to “Electrical Computers and Digital Processing Systems: Multicomputer Data Transferring.” This class provides for a computing system or method including apparatus or steps for transferring data or instruction information between multiple computers.
Patent 7,860,973 is for a Data Center Scheduler that describes various technologies allowing for optimal provisioning of global computing resources. These technologies are designed to drive out inefficiencies that arise through the use of multiple data centers and account for traffic and events that impact availability and cost of computing resources. In this, three claims (1, 7, and 15) are important. Claim 1 is for a data stream, output by a computing device via the internet, comprising information about global computing resources accessible via the internet wherein the information contains valuable data for use by those making requests for global computing resources. Claim 7 is for a method, implemented by a computing device, that receives information about data center resources from one or more data centers and streams that information via a network. And Claim 15 is for computer-readable media instructing a computer to receive information about computing resources of one or more data centers and to issue requests for consumption of some of the computing resources. This patent poses some trouble for EOC and the Green Cloud Manager, specifically how it relates to a system capable of collecting information from data centers and using that information
  • 41. to determine where to send future compute jobs. Ultimately, EOC might have to license the use of this patent from Microsoft Corp. in order for the GCM to function. Many other patents, such as 7,805,473, 2010/007667, etc., involve methods for managing cooling methods and so are not infringed upon by the EOC method. The system EOC uses for separating hot-aisle and cold-aisle containment has been present in the prior art for many years and is not covered by patent protection, therefore EOC is not infringing on any current method for collecting and removing waste heat. It is also important to establish that the EOC method does not already exist in the prior art. There are many current examples of facilities utilizing waste heat from data centers. Fortunately, these examples use a method of water-to-water heat pumps (WTWHP). Some of these heat pumps are attached to the very air chillers EOC will do without,1 and so are of no concern [88]. Other applications of this concept use WTWHP to actively cool the server racks so as to not require mechanical air conditioning and deliver the hot water to district heating networks [54]. While many ideas are similar, no other company purposefully uses the wasted thermal energy to directly heat an adjoining facility. In addition, the control system combining data center utilization based upon need for computer jobs and the need for heat delivery is unprecedented. We fully expect that we will receive patent protection for the core algorithm utilization in data processing. 1 A heat pump uses a vapor compression cycle to take heat from a low-temperature source and raise its temperature to a useful level. Air conditioners, chillers, and refrigeration systems fit this definition, but the heat they generate usually is discarded. By attaching a WTWHP to a chiller in a data center, the rejected heat serves a useful purpose. 30
  • 42. 3.1.1.2 Events Prior to Filing the Patent Application In order to receive a patent under Section 102, the invention must not be in the public use or on sale in the United States for more than twelve months before the date of the patent application. It has long been held that an inventor loses his right to a patent if he puts his invention into public use before filing a patent application: “His voluntary act or acquiescence in the public sale and use is an abandonment of his right” [8]. Nevertheless, an inventor who seeks to perfect his discovery may conduct extensive testing without losing his right to obtain a patent for his invention — even if such testing occurs in the public eye [4, 9]. It is in the best interest of the public, as well as the inventor, that the invention should be perfect and properly tested, before a patent is granted for it [4]. Such it is that even though the EOC prototype is operated on the property of the City of South Bend, and the process and method have been discussed at length with the City, the invention remains in the experimental phase. It has not yet been offered for commercial sale for profit (which would constitute a critical date and start the twelve month countdown) and the invention remains in the control of its inventors. Nothing that has been done so far has placed the potentially patentable method in the public domain outside of the realm of experimental testing. Thus, the invention remains patentable and no critical date has occurred to force the filing of a patent application. 3.1.2 Nonobviousness The nonobvious requirement (Appendix B.3) constitutes a notion of something meeting a sufficient inventive standard or nontriviality. This is to preclude dif- ferences from prior art that are “formal, and destitute of ingenuity or invention... 31
  • 43. [and] may afford evidence of judgment and skill in the selection and adaptation of the materials in the manufacture of the instrument for the purposes intended, but nothing more” [6]. Nonobviousness presents “a careful balance between the need to promote innovation and the recognition that imitation and refinement through imitation are both necessary to invention itself and the very lifeblood of a competitive economy” [2]. IBM, Dell, Sun Microsystems, and Rackable Systems all make portable mod- ular data centers that might compete with the proposed EOC nodes [50, 60, 121, 122]. These modular centers all include UPSs, cooling, and fire suppression in order to create a fully-functional data center in a smaller footprint for clients who would like to grow their data centers as they need them or spread the center out over multiple locations. However, these modular data centers are not designed to offer passive cooling of the server hardware, nor are they designed to collect and use the waste heat produced therein. While these modular data centers are different enough from traditional data center design to overlap with the EOC nodes, they do not have the ability to provide all of the savings that EOC does. They are not designed to interface with a BMS, they require active cooling measures, and they are not constructed to supply useful thermal energy to a facility. They also do not have a central algorithm that directs compute load to different modules for the purpose of reducing the need for cooling. As a result, although these modules might solve some problems for customers, there is little direct competition between these technologies and an EOC node complete with the Green Cloud Manager. As mentioned in Sec. 2.2.1, Yahoo! Inc. has recently constructed a very large data center that attempts to use only ambient air cooling methods [42]. Yahoo! is 32
  • 44. not the first to cool data centers with environmental air, nor will they be the last. This idea has been repeatedly toyed with, but on a case-by-case basis with no firm specializing in the design of this type of data center. And again, this design only uses ambient air cooling without collecting the resulting waste heat for use in an adjoining building and does not include the use of an algorithm along the lines of the Green Cloud Manager for distributing compute loads. As a result, the EOC Engineering method is much more advanced than this technology.
We believe that the Green Cloud Manager utilized in the Environmentally Opportunistic Computing method is nonobvious and will be awarded a patent. Never before has a process existed that managed computation levels based upon maintaining a set thermal output. The prior art was directed toward mitigating heat levels and active cooling. EOC and the GCM are moving in the opposite direction from the current philosophy of data center managers. Therefore, EOC’s process for directing compute load to where heat is needed based on current computational levels and subsequent heat production is novel and is deserving of a patent.
3.2 Competitive Analysis
EOC Engineering currently has three classes of competitors:
1. Building Management Systems
2. Server Administration Toolkits
3. Data Center Design Firms
Building Management Systems (BMSs) are computer-based control systems installed in buildings that control and monitor a building’s mechanical and electrical
  • 45. equipment such as ventilation, lighting, power, fire, and security systems. The BMSs also monitor the thermostats of different building sectors and attempt to meet the patron needs. The BMSs can be written in proprietary languages, but are more recently being constructed using any number of open standards. While these systems are critical to the success of EOC Engineering and must interface with our software, they cannot easily expand to the control of computational loads within the data center.
On the other side of the spectrum, Server Administration Toolkits, such as xCAT, are open-source distributed computing management software systems used for the deployment and administration of computing clusters. They can: create and manage cluster machines in parallel; set up high-performance computing stacks; remotely control power; monitor RAM and CPU temperatures; manage fan speeds and voltages; and measure computing load. While these systems are integral to EOC Engineering’s software, by themselves they cannot grow to do what EOC does and cannot interface with a BMS.
Of all our potential competitors, firms specializing in data center design are the most significant; yet they are also our greatest customers. These companies already have relationships with potential clients and are experienced in the field of managing client concerns and facility construction. While it is not attractive to compete with these companies by doing exactly what they do, we will be successful by partnering with these design firms to leverage our unique knowledge of data management systems and by implementing our patented system controls. Effectively these firms will become key customers of EOC Engineering.
Confronted with the necessity of constant reliability and performance of their data centers, many clients are concerned by new and unique approaches to
taining the data and information on which their businesses function. They have a strong preference for working with technologies and people they know. Thus, if our potential customers are to change their buying habits, they will only go with those they trust. It is absolutely imperative, then, for us to create good working relationships with our clients. It is not necessarily enough to save them money; if they do not believe EOC Engineering can deliver on its promise, they will not work with us. The manner in which we respond to our clients and their concerns represents part of our competitive advantage. EOC Engineering's main competitive edge rests in its unique method for controlling computer loads and heat production to best save the customer money. No other firm can provide the level of control that is possible with EOC Engineering, and no one else can provide the power cost savings that we can. We will supply the software that allows the Building Management Systems and Server Administrator Toolkits to communicate with each other and increase the overall efficiency of the entire building system. EOC Engineering is also filing for patent protection for our process. This patent will allow us to protect our unique algorithm and eliminate the risk of its duplication by data center design firms. In order to offer our proven savings to their clients, these data center design firms must first partner with EOC Engineering.
CHAPTER 4
BARRIERS TO SUCCESSFUL COMMERCIALIZATION

4.1 Funding

EOC Engineering will initially be funded through a Small Business Innovation Research (SBIR) grant from the federal government.1 SBIR is a competitive program that encourages small businesses to explore their technological potential and provides incentives to profit from the commercialization of that technology. The government does this in order to include qualified small businesses in the nation's R&D arena and, through them, further stimulate high-tech innovation and encourage an entrepreneurial spirit while at the same time meeting its specific research and development needs. These grants come in two phases. Phase I is to determine, insofar as possible, the scientific or technical merit of ideas submitted under the program. Phase I awards are for periods of up to 6 months and in amounts up to $150,000. The state of Indiana also provides matching funds for these grants of up to 50%. EOC Engineering has already applied for Phase I grants through the Department of Energy and the National Science Foundation. EOC Engineering anticipates that at least one of these proposals will be accepted, for a combined federal and state contribution of $225,000, and possibly both, for a total of $450,000.

1 More information is available at http://www.sbir.gov/
  • 48. Phase II is to expand upon the results of and to further pursue the development of Phase I projects and the awards are for periods of 2 years in amounts up to $900, 000. While this phase is principally designed for the R&D effort, the initial award in Phase I will be enough to finish work on the Green Cloud Manager. EOC Engineering will thus only apply for Phase II if cash reserves are in danger of running low, and will instead focus on using the Phase I award for all of its business needs. There is also the option of approaching the IrishAngels network to generate more funding. This network comprises a group of Notre Dame alumni and sup- porters who are experienced in entrepreneurial endeavors and are interested in supporting new venture development. The mission of the IrishAngels network is to foster the development of new business opportunities created by those linked to the University of Notre Dame — and EOC Engineering certainly fits into this category. 4.2 Technology Shortfalls As shown in Chapter 1, the technology behind EOC Engineering is sound. The software provides the necessary controls to accurately allocate computer load where heat generation is needed, efficiently monitor all equipment, and safely cool the HTC/ICT hardware. The shortfall in the technology is not within EOC Engineering, but in the Building Management Systems (BMSs) employed by many buildings.2 The BMSs in older facilities are relatively unintelligent when it comes to mon- itoring building occupancy and area thermal requirements. Many times, these 2 Microsoft, EnerNOC, Scientific Conservation, Johnson Controls, Schneider Electric, IBM and Honeywell all have their own proprietary Building Management Systems [53]. 37
systems will have only one thermocouple, and it will be located at the only thermostat in the area (be that a small office, an open floor work area, or multiple floors). Because the BMS is only sensing temperature in one area, without regard to where people are gathered, it supplies the same thermal load to every area even though some may be far warmer or cooler than the sensing location; it also supplies that load to areas where none is required because no one is present. In order for EOC nodes to be most effective, the Green Cloud Manager must be able to interface with a well-constructed BMS. Because thermal energy dissipates rather rapidly when forced to travel long distances, it is beneficial to know which areas need heat and which do not, so that in a building with more than one EOC node, compute jobs can be sent to the node most capable of supplying heat to the area that requires it (a sketch of this placement logic is given below). This further increases the efficiency of the system and allows the GCM to safely manage the thermal load on the servers. The EOC system will still work with most Building Management Systems, but it can be limited by their type and intelligence. The BMSs must also be built on open standards or open-source software, or EOC Engineering will have to license the proprietary software necessary to interface with them. At the same time, a building must have a BMS or the EOC node will not be able to dynamically deliver thermal energy to it. The node will still work as a data center, but the GCM will not be able to deliver compute load as a function of the thermal requirements of the building. Before EOC Engineering can begin to offer its algorithm for sale, the software must be built and tested, and its ability to interface with at least one BMS must be shown. This will not be extremely difficult, but it will take time. The timeline for the creation of the software is outlined below in Chapter 5. The software will originally interface with only one Building Management System, with support for more added in future iterations.
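To make this concrete, the sketch below is a minimal, hypothetical illustration of the placement decision; it is not the Green Cloud Manager itself. It assumes the BMS reports a per-zone heat demand (in kW) and that each EOC node reports the building zone it serves, its hot-aisle temperature, a safe thermal ceiling, and its spare server capacity. All names and thresholds are invented for illustration.

from dataclasses import dataclass

# Hypothetical data structures; field names are illustrative only.
@dataclass
class EocNode:
    name: str
    zone: str                # building zone the node's waste heat serves
    hot_aisle_temp_f: float
    max_temp_f: float        # thermal ceiling for safe operation
    idle_servers: int        # spare capacity available for new jobs

def pick_node(nodes, zone_heat_demand_kw):
    """Return the node best able to turn a new compute job into useful heat.

    Preference goes to nodes with thermal headroom and idle servers whose
    zone is calling for heat; ties are broken by remaining headroom.
    """
    candidates = [
        n for n in nodes
        if n.idle_servers > 0 and n.hot_aisle_temp_f < n.max_temp_f
    ]
    if not candidates:
        return None  # no node can safely accept more load
    return max(
        candidates,
        key=lambda n: (zone_heat_demand_kw.get(n.zone, 0.0),
                       n.max_temp_f - n.hot_aisle_temp_f),
    )

if __name__ == "__main__":
    nodes = [
        EocNode("node-A", "library", 92.0, 105.0, idle_servers=12),
        EocNode("node-B", "greenhouse", 88.0, 105.0, idle_servers=4),
    ]
    demand = {"library": 2.5, "greenhouse": 7.0}   # kW of heat requested per zone
    best = pick_node(nodes, demand)
    print(f"Dispatch next job to {best.name} (serves {best.zone})")

In a production system this ranking would also have to account for job priority, data locality, and the thermal time constants discussed in Appendix A.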
4.3 Organization and Staffing

EOC Engineering will have a very flat organizational structure. For the first two years, there will be only a handful of employees: a CEO, a Project Manager, a Chief Programmer, two salespeople, a small IT staff, and an administrative assistant. The CEO will be responsible for working with our marketing, sales, and distribution channels, which will all be contracted outside of the company structure. The Project Manager will work directly with the customer and data center design firms to ensure our specifications and standards are met in the construction of new buildings or the retrofit of existing facilities. The programmer will work with the client's IT department to determine their needs, create additional user interfaces for each new BMS with which the GCM will work, and oversee the outside programmers who are contracted to create any substantial changes to the software and algorithm. Staffing will grow starting in the third year with the addition of more salespeople and project managers as well as consultants and programmers.

4.4 Market Acceptance

Of all the potential barriers to successful commercialization of EOC Engineering's technology, the question of whether or not the market will accept it is the largest unknown. While networking and computing equipment manufacturers constantly develop smaller and faster devices, rarely do they improve energy efficiency and heat emission. In fact, utilizing equipment with smaller physical footprints, such as modern blade servers, increases heat generation per square foot and, consequently, increases the need for more powerful cooling systems.
Customers know this and have been trained to recognize that cooling systems are an absolute necessity when considering an expansion in data center capability in order to protect the data upon which their businesses are built. EOC Engineering's clients will require constant reliability and performance of their data centers and will have a very strong preference for working with the technologies that have proven successful in the past. It does a company little good to be a first mover in a new technology when the failure of that new technology could, in point of fact, hurt its business. A company switches to a brand-new technology only with great trepidation, even if that technology would save money, when few others have adopted it. For this reason, it will be difficult to rapidly gain a customer base. EOC Engineering will, therefore, gain customers by focusing on smaller regions and moving market by market throughout the country in a methodical manner. EOC Engineering will gain market share by using pilot programs and providing a reduction in price for early adopters. The pilot programs will offer clients the software for free when they use a Building Management System with which the GCM has not yet been paired, in exchange for the ability to monitor their systems and the interaction between the GCM and the BMS. For customers using a BMS with which the GCM has already been proven effective, the price of a new software license will be reduced for the first few years; when a client installs an EOC node during this period, they will retain the reduced license price for the life of their system, which provides an incentive to purchase earlier. All customers will be offered a 3-month trial period during which they will be able to test the merits of the software on their own and decide whether or not to purchase the full software license.
  • 52. 4.5 Channels to Market In order to get our product to market, EOC Engineering must find partners with which to work and who will create buzz for the technology. To do this, it would be beneficial to initially start with a pilot program in South Bend, Indiana, by partnering with a local data center run by Data Realty. EOC Engineering will license its software to them for free, and in exchange monitor their systems and continually update the software controls to further refine the system. This arrangement will also allow access to their data center design firm, Environmental Systems Design, Inc. (ESD), during the design and construction of the facility. By doing this, EOC Engineering will be able to develop a working relationship that will allow it to prove its technology to a well known and respected design firm. In this manner, EOC Engineering will be able to work with this firm on future design projects and gain more customers. After expanding further with ESD, EOC Engineering will branch out to other design firms throughout the United States market-by-market. Some of these other design firms are HP Critical Facilities Services, PTS Data Center Solutions, and Integrated Design Group. 41
CHAPTER 5
ADDITIONAL WORK REQUIRED

In order for the technology to be successful, more work is required:

1. A CEO and Chief Programmer must be hired;

2. The Green Cloud Manager must be shown to work well with the majority of Building Management Systems;

3. A polished version of the GCM will have to be completed by a team of professional programmers;

4. A pilot test of the software must be completed; and,

5. More testing is needed to supply hard numbers to our prospective clients.

The current GCM system will be further developed and given the ability to interface with BMSs. Although the reuse of HPC/ICT waste heat is one of the major selling points of this technology, the system does not yet communicate with Building Management Systems to supply heat where it is required. To do so, interoperability must be created between the GCM and the various BMSs, as sketched below. Without it, the system will simply be a thermal heat source, much like geothermal, and the BMS will have to control it through the use of dampers and louvers.
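One plausible way to achieve this interoperability, offered here only as a hedged sketch and not as EOC Engineering's settled design, is a thin adapter layer: the GCM is written against a single abstract interface, and a small adapter is implemented for each BMS, whether open-standard or proprietary. The class and method names below are invented for illustration.

from abc import ABC, abstractmethod

class BmsAdapter(ABC):
    """Minimal interface the GCM would need from any BMS."""

    @abstractmethod
    def zone_temperature(self, zone_id: str) -> float:
        """Current temperature of a building zone, in degrees F."""

    @abstractmethod
    def zone_setpoint(self, zone_id: str) -> float:
        """Thermostat setpoint for the zone, in degrees F."""

    @abstractmethod
    def request_damper(self, zone_id: str, open_fraction: float) -> None:
        """Ask the BMS to route (or stop routing) node exhaust to the zone."""

class OpenStandardBms(BmsAdapter):
    """Placeholder adapter for an open-protocol BMS (for example, one speaking
    an open standard such as BACnet). The network calls are stubbed out; a
    real adapter would translate these methods into the building's protocol.
    """
    def __init__(self, temps, setpoints):
        self._temps, self._setpoints = temps, setpoints

    def zone_temperature(self, zone_id):
        return self._temps[zone_id]

    def zone_setpoint(self, zone_id):
        return self._setpoints[zone_id]

    def request_damper(self, zone_id, open_fraction):
        print(f"damper for {zone_id} -> {open_fraction:.0%}")

def zone_needs_heat(bms: BmsAdapter, zone_id: str, deadband_f: float = 1.0) -> bool:
    """True when the zone is below its setpoint by more than the deadband."""
    return bms.zone_temperature(zone_id) < bms.zone_setpoint(zone_id) - deadband_f

if __name__ == "__main__":
    bms = OpenStandardBms({"office-2E": 66.0}, {"office-2E": 70.0})
    if zone_needs_heat(bms, "office-2E"):
        bms.request_damper("office-2E", 1.0)   # send container exhaust to the zone

Under this design, supporting an additional BMS reduces to writing one more adapter, which is consistent with the staffing plan of having the Chief Programmer create a new interface for each BMS the GCM must work with.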
  • 54. In addition to interfacing with a BMS, the GCM must look and feel like a complete system. For this reason, it is necessary that the algorithm and software be completed by a team with considerable experience in programming and systems design. The system must be something that can be set up and forgotten about by the majority of users, yet be easily used by IT personnel. Concurrently with the above efforts, testing must be done on our current and subsequent prototypes. EOC Engineering must be able to supply a proven record to our prospective customers that shows how it will scale the product, how much it can save in cooling costs, how the system interfaces with a BMS, and how much waste heat can be supplied to a facility. Without this, it will be difficult to attract many customers. As mentioned before, some of this can be completed on our first iterations with clients, but there will be no revenue from these pilot projects. EOC Engineering will, however, generate goodwill and word-of-mouth from these original clients while they allow it to improve and perfect the technology. Table 5.1 lays out the activities, timeline, and costs of EOC Engineering for the first 3 years and presents the major milestones of the company. 43
TABLE 5.1
Activities & Milestones

Activity                            Start Date   End Date   Department                        Budget ($)
Initial Funding                     6/2011       1/2012     Finance                           30,000
Patent Filing & Prosecution         6/2011       6/2013     Management, Legal                 —
Interview/Hire Chief Programmer     8/2011       12/2011    Management                        15,000
First Iteration of Software         1/2012       6/2012     Programming                       90,000
Interview/Hire President & CEO      10/2011      2/2012     Management                        15,000
Advertising                         1/2012       12/2012    Marketing                         102,000
Pilot Project(s)                    4/2012       10/2012    Programming, Management, Sales    60,000
Sale to Design Firm                 1/2012       12/2012    Sales                             —
First Project Completed             12/2012      1/2013     Management                        —
Second Iteration of Software        10/2012      3/2013     Programming                       90,000
Sale to Second Design Firm          1/2013       12/2013    Sales                             —
APPENDIX A
PROTOTYPE THERMAL MEASUREMENTS

Thermal measurements were conducted on the prototype during June and July, 2010. These measurements consisted of the constant monitoring of various local temperatures throughout the container as well as server temperatures and server loads. In this way, local temperatures and heat recovery could be estimated and directly correlated to the server usage and activity. Energy recovery rates, q_waste, were estimated using

q_{\mathrm{waste}} = \dot{m}\,\left[ c_p(T_{ha})\,T_{ha} - c_p(T_{ca})\,T_{ca} \right], \quad (A.1)

where c_p(T) is the specific heat of the air at the local temperature, and T_ca and T_ha are the temperatures of the cold-aisle upstream of the server racks and the hot-aisle downstream of the server racks, respectively. The mass flow rate was determined by applying mass conservation and calculating the total flow rate passing through both exhaust fans,

\dot{m} = \sum_{i=1,2} \dot{m}_{\mathrm{fan},i} = \sum_{i=1,2} \rho(T_{\mathrm{out},i})\, U_{\mathrm{avg},i}\, A_{\mathrm{duct},i}, \quad (A.2)

where i indicates the two exhaust fan ducts, ρ(T_out) is the local air density, A_duct is the cross-sectional area of the exhaust duct for each fan, and U_avg,i is the average flow speed at the fan exit.
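As a concrete illustration, the short sketch below evaluates Eqs. A.1 and A.2 in SI units for a single set of readings. The constant specific heat and ideal-gas density are standard dry-air approximations used only for illustration, and the temperatures, duct sizes, and flow speeds in the example are placeholders rather than the prototype's recorded data.

import math

R_AIR = 287.05      # J/(kg K), specific gas constant of dry air
P_ATM = 101_325.0   # Pa, atmospheric pressure

def cp_air(temp_k: float) -> float:
    """Approximate specific heat of dry air, J/(kg K).

    Near room temperature c_p is roughly 1005-1010 J/(kg K); a constant
    is an adequate illustration of Eq. A.1.
    """
    return 1006.0

def rho_air(temp_k: float) -> float:
    """Ideal-gas density of dry air at atmospheric pressure, kg/m^3."""
    return P_ATM / (R_AIR * temp_k)

def mass_flow(duct_exit_temps_k, avg_speeds_m_s, duct_areas_m2):
    """Eq. A.2: total mass flow through the exhaust fan ducts."""
    return sum(
        rho_air(t) * u * a
        for t, u, a in zip(duct_exit_temps_k, avg_speeds_m_s, duct_areas_m2)
    )

def q_waste(m_dot, t_hot_k, t_cold_k):
    """Eq. A.1: recoverable heat rate, in watts."""
    return m_dot * (cp_air(t_hot_k) * t_hot_k - cp_air(t_cold_k) * t_cold_k)

if __name__ == "__main__":
    # Placeholder readings (not measured prototype values).
    t_ca, t_ha = 300.0, 315.0                  # cold- and hot-aisle temperatures, K
    duct_radii_m = (0.127, 0.1016)             # 5.0 in and 4.0 in duct radii
    areas = tuple(math.pi * r**2 for r in duct_radii_m)
    m_dot = mass_flow((313.0, 314.0), (6.0, 5.5), areas)
    print(f"mass flow  ~ {m_dot:.3f} kg/s")
    print(f"waste heat ~ {q_waste(m_dot, t_ha, t_ca)/1000:.2f} kW")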
The exit flow speed was measured using a handheld velocimeter with integrated thermocouple1 with an accuracy of approximately ±(3.0% + 0.3 m/s). The velocimeter was placed at the exit of the fan duct, perpendicular to the flow, to record speeds for 20 seconds. These values were then averaged over the measurement time. To obtain the average exit flow speed, the same measurement was conducted over a number of locations across the duct exit (Fig. A.1), and the average flow speed was calculated:

U_{\mathrm{avg}} = \frac{1}{A_{\mathrm{duct}}} \int_{A_{\mathrm{duct}}} U(r,\theta)\, r\, dr\, d\theta. \quad (A.3)

The radii of ducts 1 and 2 were r = 5.0 in and r = 4.0 in, respectively. The spacing between the measurements was δθ = π/4, with δr = 2.5 in for duct 1 and δr = 2.0 in for duct 2. Symmetry was assumed and the flow was only measured in two quadrants. The integration was conducted numerically using the trapezoidal rule, and the uncertainty in the mass flow rate was estimated to be ±8.1%. Temperatures were recorded with four temperature/humidity sensors2. One sensor was placed just upstream of each exhaust fan, T_out,i, one sensor was placed in the cold-aisle, T_ca, and one was placed in the hot-aisle, T_ha. Temperatures were recorded in real time at a rate of four readings per hour.3 The temperature at the inlet of the louver, T_in, was taken from weather measurements from the Indiana State Climate Office (ISCO 2010). The heat recovery as a function of time, q_rec(t), was estimated using Eq. A.1 with the sensor readings and the mass flow rate estimated using Eq. A.2. The temperatures were validated using a handheld thermocouple and an infrared camera (Fig. A.2).

1 Extech Model 407123
2 APC Model AP9512THBLK
3 At present, the prototype is not configured to allow temperature measurements in the ducts downstream of the fans, which would be the best way to accurately measure waste heat recovery.
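Equation A.3 can be evaluated numerically in the same spirit. The sketch below applies the trapezoidal rule over a polar grid of point measurements; the grid and speed values shown are illustrative only and are not the recorded duct survey.

import math

def duct_average_speed(radii_in, thetas_rad, speeds_m_s):
    """Eq. A.3 via the trapezoidal rule.

    speeds_m_s[i][j] is the measured speed at radius radii_in[i] and angle
    thetas_rad[j]. Radii are given in inches to match the survey grid; the
    radial unit cancels when dividing by the sector area.
    """
    def trapz(ys, xs):
        return sum(
            0.5 * (ys[k] + ys[k + 1]) * (xs[k + 1] - xs[k])
            for k in range(len(xs) - 1)
        )

    # Inner integral over theta at each radius, then outer integral over r
    # of U_theta(r) * r, matching the r dr dtheta measure.
    u_theta = [trapz(row, thetas_rad) for row in speeds_m_s]
    integral = trapz([u * r for u, r in zip(u_theta, radii_in)], radii_in)
    sector_area = 0.5 * (thetas_rad[-1] - thetas_rad[0]) * (radii_in[-1]**2 - radii_in[0]**2)
    return integral / sector_area

if __name__ == "__main__":
    # Illustrative survey of one quadrant of the 5.0 in duct:
    # radii 0, 2.5, 5.0 in; angles 0, pi/4, pi/2 (symmetry assumed).
    radii = [0.0, 2.5, 5.0]
    thetas = [0.0, math.pi / 4, math.pi / 2]
    speeds = [
        [7.0, 7.0, 7.0],   # near the centre the profile is nearly flat
        [6.5, 6.4, 6.6],
        [4.0, 3.8, 4.1],   # slower near the duct wall
    ]
    print(f"U_avg ~ {duct_average_speed(radii, thetas, speeds):.2f} m/s")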
Figure A.1. Schematic of measurements to determine the average outlet velocity. Courtesy of Dr. David Go.

Hardware temperatures were recorded from the hardware's internal temperature sensors using the intelligent platform management interface (IPMI). Fig. A.3(a) gives a representative plot of the temperature measurements from the four sensors, the estimated fan downstream temperatures, the inlet temperature, and the temperatures from one of the servers over a 48-hour period. The plot illustrates a number of significant points: firstly, the hot-aisle is significantly warmer than the cold-aisle, exemplifying the amount of heat wasted in conventional data centers; secondly, the single hardware temperature, T_HPC, varies due to dynamic computational loads, yet the overall temperature is fairly constant because the number of active servers is fairly constant; lastly, the average server inlet temperatures range from approximately 70–100 °F (21–38 °C), which exceeds current recommended HPC hardware operating ranges. EOC is confident that ITC hardware can be operated beyond current limits, and the data demonstrates server operation at temperatures greatly exceeding standards.
Figure A.2. Illustrative infrared thermal maps of a server: (a) server inlet temperature; (b) server outlet temperature. Courtesy of Dr. Paul Brenner.

Figure A.3. Container measurements over a 48-hour period in July 2010: (a) representative measured temperatures; (b) available waste heat. Courtesy of Dr. Paul Brenner.
Fig. A.3(b) shows the amount of waste heat available for recovery for the same 48-hour period. On average, nearly 33.6 × 10³ Btu/hr (9.39 kW) was extracted from the data servers during this period, for a total energy recovery of 1.61 × 10⁶ Btu (450.7 kW·h). Though this value is limited by approximations for the heat loss and by bulk temperature measurements, it is consistent with the energy consumed by the container according to the energy bill. At an estimated cost of $0.10/kW·h, the measured heat recovery corresponds to $45.07 in energy savings for this time period, and extrapolates to approximately $676 per month. This equates to ∼4.5% of the average monthly expenditures for the BGG during a winter month.4 These preliminary studies limited the container to a maximum of 60 servers operating at any time because the warm summer air increased T_ca and limited the ability of the servers to remain below critical temperature levels. With improved configuration and in cooler months, the prototype container should be able to operate at a nearly constant heat recovery level of 102,364 Btu per month, corresponding to a ∼15% reduction in the BGG's monthly energy consumption. The installation of additional containers will provide further increases in total energy cost savings.

4 The BGG reports an average expenditure of approximately $15,000 during the winter months of December through February, 2003–2006.
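The dollar figures above follow from simple arithmetic, reproduced here as a sketch so that other recovery rates or electricity prices can be substituted. The inputs in the example are the values quoted in this appendix (9.39 kW average recovery, $0.10/kW·h, and the BGG's roughly $15,000 monthly winter heating expenditure); the 30-day month used for extrapolation is an assumption.

def savings_summary(recovered_kw, hours, price_per_kwh, monthly_heat_bill):
    """Convert an average heat-recovery rate into energy and dollar savings."""
    energy_kwh = recovered_kw * hours                 # 9.39 kW * 48 h = 450.7 kWh
    period_savings = energy_kwh * price_per_kwh       # ~$45.07 for the 48-hour window
    monthly_savings = recovered_kw * 24 * 30 * price_per_kwh   # 30-day extrapolation (assumption)
    return {
        "energy_kwh": round(energy_kwh, 1),
        "period_savings_usd": round(period_savings, 2),
        "monthly_savings_usd": round(monthly_savings, 2),
        "share_of_heat_bill": round(monthly_savings / monthly_heat_bill, 3),
    }

if __name__ == "__main__":
    print(savings_summary(recovered_kw=9.39, hours=48,
                          price_per_kwh=0.10, monthly_heat_bill=15_000))

Running the sketch reproduces the quoted figures: roughly 450.7 kWh recovered over 48 hours, $45.07 in savings for the window, about $676 per month, and approximately 4.5% of the BGG's winter heating expenditure.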
One of the primary challenges of successfully integrating HPC/ITC into built structures is meeting customer constraints such as preserving form and function, heating needs, and reliability (Sec. 1.3.1). Successful energy recovery from an EOC container requires that heat be delivered to its partner facility when it is required and in a predictable manner. With EOC, the availability of heat depends on the computing users' need for computational power at any given moment. This issue is illustrated by the output of a pre-set pilot test, Fig. A.4, during which the computational load on the servers was intentionally varied. The servers were initially idle for 27 hours, and then alternated between normal loading capacity and idle for 12-hour periods each. As this test demonstrates, the temperature difference not only drops dramatically when the HPC hardware is idle, but there is also a transient recovery period when the hardware becomes active again and the temperature rises only gradually. The thermal time constant of the system is related not only to the heat capacity of the air, but to that of the entire set of server components and infrastructure (racks, walls, etc.), which serve as heat sinks whenever they are at a lower temperature. For this prototype, the time constant is estimated to be 53 minutes.5 Similarly, when the servers become idle, the EOC container reaches its ambient condition in approximately 44 minutes. In an application where EOC containers are distributed across multiple facilities, control algorithms will be needed to balance the demands of both the computer and building users against performance expectations for each, and these time constants will be integral to that.

5 Here, the time constant is defined as the time required to reach 95% of the maximum temperature.
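To illustrate how a control algorithm might use these measured response times, the sketch below treats the hot-/cold-aisle temperature difference as a first-order response toward a load-dependent steady state. Converting the 95%-rise definition in the footnote into an exponential rate constant is an interpretive assumption, and the 25 °F steady-state difference in the example is a placeholder rather than a measured value.

import math

T95_HEAT_MIN = 53.0   # minutes to reach 95% of the temperature rise (measured)
T95_COOL_MIN = 44.0   # minutes to return to ambient after load is removed (measured)

def rate_constant(t95_minutes: float) -> float:
    """Convert a 95%-rise time into a first-order rate constant, 1/min.

    Solves 1 - exp(-k * t95) = 0.95, i.e. k = ln(20) / t95.
    """
    return math.log(20.0) / t95_minutes

def delta_t(t_min: float, dt0: float, dt_target: float, t95_minutes: float) -> float:
    """Hot-/cold-aisle temperature difference t_min minutes after a load change.

    dt0 is the difference at the moment the load changes; dt_target is the
    steady-state difference the new load would eventually produce.
    """
    k = rate_constant(t95_minutes)
    return dt_target + (dt0 - dt_target) * math.exp(-k * t_min)

if __name__ == "__main__":
    # Servers switched from idle (dT ~ 0 F) to full load; the 25 F steady-state
    # difference is a placeholder, not a measured value.
    for minutes in (0, 15, 30, 53, 90):
        print(f"t = {minutes:3d} min: dT ~ {delta_t(minutes, 0.0, 25.0, T95_HEAT_MIN):5.1f} F")

A scheduler that anticipates heat demand roughly one time constant in advance could start jobs early enough for the hot aisle to reach a useful delivery temperature when the building needs it, which is the balancing act described above.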
Figure A.4. Temperature difference between the hot- and cold-aisles during the pilot test. Courtesy of Dr. Paul Brenner.
  • 63. APPENDIX B PATENTABILITY PRIMER B.1 Algorithm The question of the dividing line between patentable subject matter under Title 35, Section 101, of the United States Code is not a clear one. Section 101 states: “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title” [116]. The Supreme Court held, in Mackay Radio & Telegraph Co. v. Radio Corp. of America, 306 U.S. 86 (1939), that “[w]hile a scientific truth, or the mathematical expression of it, is not a patentable invention, a novel and useful structure created with the aid of knowledge of scientific truth may be” [7]. Yet, in Gottschalk v. Benson, 409 U.S. 63 (1972), the Supreme Court defined an “algorithm” as a “procedure for solving a given type of mathematical problem,” and concluded that such an algorithm, or mathematical formula, is like a law of nature, which cannot be the subject of a patent [5]. In contrast, in Diamond v. Diehr, 450 U.S. 175 (1981) [3], the Court considered a patent for a device that aided in curing rubber. At its essence, the invention was an algorithm because it depended on actions taken at precise times during the curing process and used a digital computer to execute and record these actions. 52
  • 64. Unlike the algorithm in Gottschalk v. Benson, the Supreme Court held that such subject matter was patentable because the patent was applied not to the algorithm, but to the complete process for curing rubber, of which the algorithm was only a part. Thus, it is possible to seek patent protection for a process that utilizes an algorithm to produce a “useful, concrete, and tangible result” [10] This thesis will thus define an “algorithm” as: 1. A fixed step-by-step procedure for accomplishing a given result; usually a simplified procedure for solving a complex problem, also a full statement of a finite number of steps. 2. A defined process or set of rules that leads to and assures the development of a desired output from a given input. A sequence of formulas and/or algebraic/logical steps to calculate or determine a given task; processing rules. Under this view, the algorithm is a part of a much larger process that accomplishes a “useful, concrete, and tangible result” and is worthy of patent protection when it is part of the larger process for distributing waste heat in an environment. B.2 Novelty Title 35, Section 102, of the United States Code states that a person shall be entitled to a patent unless — (a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for patent, or 53
  • 65. (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of the application for patent in the United States, or (c) he has abandoned the invention, or (d) the invention was first patented or caused to be patented, or was the subject of an inventors certificate, by the applicant or his legal representatives or assigns in a foreign country prior to the date of the application for patent in this country on an application for patent or inventors certificate filed more than twelve months before the filing of the application in the United States, or (e) the invention was described in (1) an application for patent, published under section 122 (b), by another filed in the United States before the invention by the applicant for patent, or (2) a patent granted on an application for patent by another filed in the United States before the invention by the applicant for patent, except that an international application filed under the treaty defined in sec- tion 351 (a) shall have the effects for the purposes of this subsection of an application filed in the United States only if the international appli- cation designated the United States and was published under Article 21(2) of such treaty in the English language, or (f) he did not himself invent the subject matter sought to be patented. [117] 54
  • 66. B.3 Nonobviousness This is perhaps the most difficult factual patent issue. In addition to meeting the novelty requirements outlined above, Section 103 states (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made. [118] 55
APPENDIX C
BUSINESS PLAN

C.1 Executive Summary

Waste heat created by high performance computing and information communications technology is a critical resource management issue. In the United States, billions of dollars are spent annually to power and cool these data systems. The August 2007 U.S. Environmental Protection Agency Report to Congress on Server and Data Center Efficiency estimates that the U.S. spent $4.5 billion on electrical power to operate high performance and information technology servers in 2006, and the same report forecasts that our national energy expenditure for these machines will be well over $7 billion by 2011. The cooling systems for the typical data center use close to 40% of the total power draw of the data center, totaling over $2.5 billion nationally. To address this market need, EOC Engineering will produce and sell a unique patented algorithm to data center design firms that allows module computing nodes to supply useable heat when and where it is needed and manage the data load when it is not. The EOC Engineering nodes will function just like a normal data center, complete with UPS, power systems, and protection from fire, smoke, humidity, condensation, and temperature changes. The nodes have the option of also installing active cooling measures such as computer room air chillers and
handlers, but the algorithm renders those measures unnecessary and the customer can save on capital costs by not installing them. The unique EOC Engineering algorithm is known as the Green Cloud Manager (GCM) and is designed to manage the IT load in each of the EOC Engineering nodes within a company or municipality to supply heat where the customer needs it, and concurrently mitigate the cooling costs for the server hardware. This is accomplished by directing compute requests to the most beneficial area depending on the environment and the customer's needs. If the servers are running hot and the customer has little need for heat at the moment, the GCM will send the compute load to the coldest servers and cool the hot ones via the built-in cooling mechanisms. These cooling mechanisms will direct the cold-air return of the building's HVAC system to cool the servers and open louvers for environmental air cooling. The IT hardware will be cooled to the extent possible without using power-hungry computer room air conditioners, saving the customer money and reducing greenhouse gas emissions. The EOC nodes will reduce the power consumption of the customer's compute hardware by almost 40% and save the customer thousands of dollars each year in heating costs. Our patented method adjusts the server load to maximize utilization of the compute power and of the heat output of those servers, and does it all as efficiently as possible. By removing all active cooling methods in a typical 5,000-square-foot data center, we will save a customer more than $374,000 each year. Because these nodes will reduce the carbon footprint of the customer, in addition to actively saving the customer money in reduced electric bills, the customer may be eligible for government grants for increasing the efficiency of their data center. Nothing else is like it.
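As a hedged illustration of the decision rule just described, and only a sketch rather than the patented algorithm itself, the routine below chooses where to send the next compute request and which passive cooling path to use when the building is not calling for heat. The server names, temperatures, and threshold are invented.

def route_request(servers, building_wants_heat, hot_threshold_f=95.0):
    """Decide where a new compute request goes and how hot servers are cooled.

    servers maps a server name to its inlet temperature in degrees F.
    Returns (target_server, cooling_action). The threshold is illustrative.
    """
    coolest = min(servers, key=servers.get)
    hottest = max(servers, key=servers.get)

    if building_wants_heat:
        # Heat is useful: keep load on the already-warm servers so the hot
        # aisle stays at a useful delivery temperature.
        return hottest, "route exhaust to the building HVAC supply"

    if servers[hottest] > hot_threshold_f:
        # No demand for heat and hardware is warm: shift load to the coldest
        # servers and cool passively (louvers / cold-air return), not CRAC units.
        return coolest, "open louvers; use HVAC cold-air return"

    return coolest, "no cooling action needed"

if __name__ == "__main__":
    inlet_temps = {"srv-01": 97.0, "srv-02": 82.0, "srv-03": 88.0}
    print(route_request(inlet_temps, building_wants_heat=False))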
C.1.1 Objectives

1. Ten projects in the first year of sales and 200 new projects in the fourth year of sales.

2. Market share of 10% by year nine.

3. Grow gross annual revenue from new facilities to over $40 million by the end of the decade.

4. Have yearly licensing fees also over $40 million by the end of the decade.

5. Positive net income after year four.

C.1.2 Mission

EOC Engineering offers data centers a reliable, highly efficient alternative to their current methods of cooling high performance computing and information communications technology systems. Our design services and algorithm allow our clients to immediately begin saving money on the operation of their data centers by restructuring the hardware support systems and implementing sophisticated, patented control algorithms. Clients know that working with EOC Engineering is a safe, reliable, and inexpensive alternative to traditional means of data center management. Our initial focus is on development in the United States with emphasis on small- to mid-sized data centers as well as government and university research facilities.

C.1.3 Keys to Success

1. Excellence in fulfilling our promise of large energy savings.
  • 70. 2. Developing visibility and market channels to generate new business leads. 3. Keeping overhead costs low. C.2 Company Summary EOC Engineering is a design and engineering firm based at the Innovation Park, Notre Dame that develops and implements novel opportunities for the re- covery of waste heat from servers and data centers. As EOC Engineering grows, it will expand to larger data centers and offer services abroad. C.2.1 Company Ownership EOC Engineering is a limited liability company owned by the three principal partners. These partners will be the CEO, the chief programmer, and the head project manager. Equity in the company will be shared equally among the three and will be diluted evenly when outside capital investment is required. C.2.2 Start-up Summary Start-up funding for EOC Engineering will cover the first three years before net income turns positive. The required funding is $3, 100, 000 and includes $830, 000 in government grants, thus $2, 300, 000 is needed in other investment. This invest- ment will allow the company to finish coding, validate the algorithm, and provide for the first few years of salaries as employment grows ahead of sales. C.3 Products and Services EOC Engineering provides engineering consulting services and patented algo- rithms to provide advanced energy management to new data centers. We will 59
  • 71. provide energy management solutions for our clients during the development of a new facility to fully integrate the data center operations and thermal controls. We will license our unique control model as a component of our services engagement and will offer design and consulting services to our clients, through our partner- ships with data center design firms, while they are constructing their new data centers. The control algorithm will allow clients to reduce or completely eliminate their current active cooling measures in their data centers by monitoring the compute requests and the server load to maximize utilization of the servers and maximize their useable heat output as efficiently as possible. The price for the Green Cloud Manager is small and allows the client to install it alongside traditional cooling methods and still receive a very large payback on their investment. The GCM will manage the compute requests of a data center in order to control the heat output of the servers and reduce the need for active cooling in the entire data center. C.4 Market Analysis Summary The demand for data center space continues to increase at a much faster pace than the supply. The October, 2009 Data Center Dynamics conference estimated that the current demand is three times greater than the supply, and is coming from companies of all sizes. The largest forecasted demand is for smaller data centers to accommodate dispersed or cloud computing and allow for organizations to process data in different locations, while sharing assets, minimizing risks, and maximizing capital dollars spent. Figure C.1 shows the forecasted data center market growth in square-footage for the years 2009 to 2013. This has been expanded through the next decade in Table C.1 based upon a conservative annual growth rate of 12.5%. 60
Table C.1 also shows the amount of new space built each year and the number of new data centers, assuming an average 5,000-square-foot facility.

Figure C.1. Total U.S. data center market size forecast, 2009 to 2013. Source: Frost & Sullivan.

Over the last decade, cooling demand for these IT environments has shown a dramatic upsurge due to increased server virtualization and a need for incremental data storage, which in turn has led to high heat densities. Cooling solutions for this market have traditionally incorporated perimeter-based cooling provided by Computer Room Air Conditioning (CRAC) units and, to a certain extent, are supplemented by the HVAC system of the building itself. To contain growing heat issues, suppliers have innovated high density cooling modules that are capable
of addressing heat loads at the point of origin. Emerging trends in power and cooling needs, as well as the need to enhance energy savings in this energy-intensive industry, have resulted in considerable initiatives at the supplier level to introduce a host of energy-saving solutions into the market. Currently, active cooling methods account for an astounding 38% of data center energy expenditures (Figure C.2).

Figure C.2. Analysis of a typical 5,000-square-foot data center power draw. Source: Energy Logic. *This represents the average power draw (kW); daily energy consumption (kW·h) can be obtained by multiplying the power draw by 24.

C.4.1 Market Segmentation

According to Tier I Research, data centers are currently utilized at an average rate of 55.1%. In the past year, growth in demand has outpaced supply by