Optical Switching and Networking 23 (2017) 225–240
Energy and QoS aware resource allocation for heterogeneous sustainable cloud datacenters

Yuyang Peng, Dong-Ki Kang, Fawaz Al-Hazemi, Chan-Hyun Youn
Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Article history:
Received 31 August 2015
Received in revised form 1 February 2016
Accepted 19 February 2016
Available online 27 February 2016
Keywords:
Sustainable cloud datacenters
Renewable energy
Virtual machine allocation
Heterogeneity
Abstract
As the demand for Internet services such as cloud and mobile cloud services has increased drastically in recent years, the energy consumed by cloud datacenters has become a pressing concern. The deployment of renewable energy generators such as PhotoVoltaic (PV) panels and wind farms is an attractive way to reduce the carbon footprint and achieve sustainable cloud datacenters. However, current studies have focused on geographical load balancing of Virtual Machine (VM) requests to reduce the cost of brown energy usage, while most of them have ignored the heterogeneity of power consumption across cloud datacenters and the performance degradation incurred by VM co-location. In this paper, we propose Evolutionary Energy Efficient Virtual Machine Allocation (EEE-VMA), a Genetic Algorithm (GA) based metaheuristic which supports power heterogeneity aware VM request allocation across multiple sustainable cloud datacenters. This approach provides a novel metric called powerMark which diagnoses the power efficiency of each cloud datacenter in order to reduce the energy consumption of cloud datacenters more efficiently. Furthermore, the performance degradation caused by VM co-location and the bandwidth cost between cloud service users and cloud datacenters are considered in our proposed cost model to avoid deterioration of the Quality-of-Service (QoS) required by cloud service users. Extensive experiments, including simulation based on real-world traces and the implementation of a cloud testbed with a power measuring device, are conducted to demonstrate the energy efficiency and performance assurance of the proposed EEE-VMA approach compared to existing VM request allocation strategies.
1. Introduction
The electric energy consumption of datacenters accounted for about 1.5% of worldwide electricity usage in 2010, and the energy cost is a primary fraction of a datacenter's maintenance expenditure [1,2]. Therefore, there is a growing push to improve the energy efficiency of the datacenters behind cloud computing [3,4]. Traditionally, datacenters get their power supply from the utility grid, which is generated by dirty energy sources such as coal or nuclear plants [15]. These conventional energy generators not only produce considerable carbon but also increase the operation cost of datacenters. To address this inefficiency, a promising solution receiving attention is the incorporation of renewable energy generators such as PhotoVoltaic (PV) panels and wind turbines into the design of datacenters (i.e., achieving "sustainable" datacenters which reduce not only the electricity cost but also the carbon footprint). Renewable energy generators are rapidly becoming an attractive option for designing
green datacenters in academia. Recently, researchers have proposed several studies to integrate renewable energy sources into cloud datacenters. A cost optimization model considering both renewable energy sources and cooling infrastructure has been proposed to realize the potential of sustainable cloud datacenters [9]; it applies demand shifting, which schedules non-interactive workload to maximize the utilization of the renewable power source. Energy storage management for sustainable cloud datacenters has been proposed to minimize the cloud service provider's electricity cost [10,11]. A scheduling scheme for parallel batch jobs has been proposed to maximize the utilization of green energy while ensuring the Service Level Agreements (SLAs) of requests [16]. However, challenges remain in achieving energy efficient sustainable cloud datacenters.
First, each cloud datacenter has a heterogeneous server architecture, i.e., datacenters require different power consumption even for serving the same amount of workload. The server heterogeneity is caused by hardware upgrades, capacity extension, and the replacement of peripheral devices [6-8]. However, traditional cloud datacenter management schemes assume that all the cloud datacenters have a homogeneous server architecture with the same power efficiency, although this assumption is unrealistic for most cloud resource providers. Second, greening cloud datacenters and Quality-of-Service (QoS) assurance are conflicting goals in resource management. In particular, performance degradation can be induced by VM co-location interference when multiple VM instances run on a common physical server in cloud datacenters [12]. As more VM instances are packed onto common servers, the required number of active servers decreases, while resource contention worsens. This means that energy consumption is reduced at the expense of QoS assurance for the processing of VM requests. It is important to find a desirable tradeoff between these two goals corresponding to the dynamic workload level.

Fig. 1. Cloud environment consisting of multiple cloud datacenters and Cloud Request Brokers (CRBs) with a Cloud Request Broker Manager (CRBM).
To solve these challenges, we propose an Evolutionary Energy Efficient Virtual Machine Allocation (EEE-VMA) approach, which relies on an energy optimization model for sustainable cloud datacenters that have heterogeneous power efficiency and renewable energy generators. This paper makes four contributions, as follows.

First, our approach finds a near optimal solution of VM request allocation by applying a Genetic Algorithm (GA) that considers both the renewable energy cost and the traditional utility grid cost. The fundamental energy saving strategy adopted in EEE-VMA is Dynamic Right Sizing (DRS), which makes cloud datacenters power-proportional (i.e., consuming power only in proportion to the workload level) by adjusting the number of active servers in response to the actual workload (i.e., adaptively "right-sizing" the datacenter) [3,5]. In DRS, energy saving is achieved by allowing idle servers that do not have any running VM instances to enter a low-power mode (e.g., sleep or hibernation). Note that our proposed energy consumption model for the EEE-VMA approach includes the switching cost of DRS incurred by toggling a server from low-power mode into active mode (i.e., the awaken transition). This makes our proposed approach more practical for energy efficient cloud datacenters in the real world.
Second, in our proposed EEE-VMA approach, in order to capture the heterogeneous power efficiency of each cloud datacenter, we propose a novel metric called powerMark that quantifies the power efficiency of servers by measuring their power consumption at each utilization level of resources such as CPU, memory, and I/O bandwidth. In particular, we compute powerMark for several server types by measuring their power consumption while processing CPU-intensive applications. Through powerMark, we are able to determine the allocation priority of each cloud datacenter based on its power efficiency, so as to improve the energy saving performance.
Third, our EEE-VMA approach achieves significant energy savings in cloud datacenters while minimizing the performance degradation caused by VM co-location interference. A workload model including both the number of co-located VM instances and their resource utilization, which are the key factors reflecting VM co-location interference, is applied to the cost model of the EEE-VMA approach. Moreover, we consider the bandwidth cost between cloud service users and cloud datacenters as an additional factor contributing to the QoS deterioration of VM request processing [31,32]. The desirable cloud datacenter for each VM request assignment is selected with consideration of both energy saving and QoS assurance corresponding to the dynamic workload level.

Finally, we conduct extensive experiments through simulations at various workload levels based on real-world traces, such as the dynamic capacity of renewable energy and the electricity prices of traditional grid power [9,18-20], and through the implementation of a testbed with a power measuring device called Yocto-Watt to measure the real power consumption of several cloud server types [21].

Table 1
Set of key notations.

Notation        Description
DC              The set of cloud datacenters
CRB             The set of CRBs
F               The set of flavor types of VM requests supported by the cloud resource provider
RC              The set of resource components such as CPU and memory
Λ_i(t)          The set of VM requests arriving at the CRBs at time t
X(t)            Resource allocation plan of VM requests from CRBs to cloud datacenters at time t, which determines the destined cloud datacenter for each VM request
M(t)            DRS plan of the cloud datacenters at time t, which determines the number of active servers of each cloud datacenter
S(t)            A solution including the resource allocation plan X(t) and the DRS plan M(t)
D_j(t)          Performance degradation of cloud datacenter DC_j by CPU resource contention at time t
UR              The set of predetermined resource utilization levels
pwM^j_⟨rc⟩      Average power consumption per unit level of utilization of resource component rc ∈ RC of servers in cloud datacenter DC_j
pivotS          The predetermined pivot server used as a criterion of resource capacity
e_j(t)          The energy consumption of cloud datacenter DC_j at time t
c^total(t)      The total cost of all cloud datacenters at time t
f_EEE-VMA(·)    The objective function that yields c^total(t) in the EEE-VMA solver
The rest of the paper is organized as follows. Section 2 gives an overview of the proposed system architecture of multiple cloud datacenters and cloud request brokers. In Section 3, the objective cost model, including the workload and energy consumption models with powerMark, is formulated, and our EEE-VMA approach based on the Genetic Algorithm is proposed to obtain an approximate optimal solution minimizing the total cost of cloud datacenters. Section 4 shows various experimental results that demonstrate the effectiveness of our proposed approach based on real-world traces. The conclusion is given in Section 5.
2. System architecture and design
Our considered cloud environment, including multiple Cloud Request Brokers (CRBs) that support mesh networking with distributed multiple cloud datacenters, is depicted in Fig. 1. There are h CRBs and m cloud datacenters with h × m communication links. In each cloud datacenter, the information on resource utilization, the available renewable energy, and the power consumption of each server is collected through monitoring modules and power measuring devices, and reported to the Cloud Request Broker Manager (CRBM), which is responsible for solving the allocation of the VM requests submitted to the CRBs. The CRBM has two modules: the powerMark analyzer and the EEE-VMA solver. The powerMark analyzer is responsible for capturing the power efficiency of each cloud datacenter through our proposed novel metric called powerMark. We describe this metric in detail in Section 3. The EEE-VMA solver is responsible for finding a near optimal solution of VM request allocation from the CRBs to the cloud datacenters. The solution derived by the EEE-VMA solver, based on the amount of submitted VM requests in each CRB and the reported information from each cloud datacenter, is delivered to all CRBs, and all the submitted VM requests are allocated to their destined cloud datacenters.
The owner of the cloud datacenters has to minimize the cost of resource operation while maximizing the benefit, which is realized when cloud service users perceive good QoS from the cloud services. In this paper, our EEE-VMA solver tries to find a solution that minimizes the total cost of resource operation, which includes three sub cost models: the energy consumption cost, the bandwidth cost, and the performance degradation cost. From the perspective of the energy consumption cost, the EEE-VMA solver tries to maximize the utilization of renewable energy with consideration of the dynamic capacity of each renewable energy generator, since the price of renewable energy is much lower than that of grid energy.

VM requests from CRBs are preferably allocated to cloud datacenters which have a higher capacity of renewable energy and a higher power efficiency (i.e., a lower powerMark value). From the perspective of the bandwidth cost, EEE-VMA tends to route VM requests to cloud datacenters with a cheaper bandwidth cost. Obviously, different pairs of CRB and cloud datacenter have different bandwidth costs according to the hop distance and the amount of transferred data of the routed VM requests. Therefore, VM requests should be allocated to the cloud datacenter closest to their source CRB in order to minimize the bandwidth cost. To simplify our model, we assume that the transferred data size of each VM request is known to the EEE-VMA solver in the CRBM beforehand.
From the perspective of the performance degradation cost, the EEE-VMA solver tries to spread the VM requests over multiple cloud datacenters in order to avoid QoS deterioration of VM request processing. In cloud datacenters, VM co-location interference is the key factor that makes servers undergo severe performance degradation [12,22]. VM co-location interference is caused by resource contention, which is mainly reflected by the number of co-located VM instances and their resource utilization. In brief, VM co-location interference grows as more VM instances are co-located on a common server and as higher resource utilization occurs. Therefore, VM requests have to be scattered to avoid, as far as possible, performance degradation by VM co-location interference. Because of the complexity of optimizing the aggregated cost model, the EEE-VMA solver adopts a GA based metaheuristic to obtain a near optimal solution of VM request allocation within an acceptable computation time. In the next section, we propose a mathematical model to describe the cost of the cloud datacenters and describe the metric powerMark in detail. The set of key notations involved is shown in Table 1.
3. Problem formulation
3.1. Workload model
There are many different kinds of workloads in cloud datacenters, which can be classified into two categories: interactive or transactional (delay-sensitive) workloads and non-interactive or batch (delay-tolerant) workloads [9]. Interactive workloads such as Internet web services and multimedia streaming services have to be processed within a certain response time defined by the service users. They are often network I/O intensive jobs which have less impact on the power consumption of servers. In contrast, batch workloads such as scientific applications and big data analysis can be scheduled for processing at any time, as long as the whole set of tasks is finished before the predetermined deadline. They are usually computation intensive jobs that require a lot of CPU utilization, causing significant power consumption of servers. In this paper, we are interested in the computation intensive batch workloads, since they have a greater influence on server power consumption than interactive workloads. We assume that all the VM requests have computation intensive workloads, and that resource contention always occurs on the CPU resource. A workload λ^k_i(t) ∈ Λ_i(t) denotes the number of arrived VM requests with a required flavor type (e.g., an instance type such as m3.medium or c4.large in Amazon EC2) F_k ∈ F at CRB_i ∈ CRB at time t [29]. We use r^k_⟨rc⟩ to denote the required amount of resource component rc ∈ RC for a VM request with flavor type F_k, where RC = {rc_CPU, rc_MEM}. For example, r^k_⟨rc⟩ with F_k = m3.medium and rc = rc_CPU represents the required number of CPU cores of a VM request whose flavor type is m3.medium. When multiple VM requests arrive at a CRB, the CRB decides to which cloud datacenter each VM request should be routed for processing. We assume no data buffering at the CRB, so that whenever a VM request arrives at the CRB, it is routed to a cloud datacenter for processing immediately [11]. We denote the number of VM requests with flavor type F_k routed from CRB_i to DC_j at time t as x^{i,k}_j(t), which is derived from a resource allocation plan for the cloud datacenters, X(t). Then we have the following constraints:
$$\sum_{\forall DC_j \in DC} x^{i,k}_j(t) = \lambda^k_i(t), \qquad \forall CRB_i \in CRB,\ \forall F_k \in F,\ \forall t \qquad (1)$$

$$0 \le x^{i,k}_j(t) \le \lambda^k_i(t), \qquad \forall CRB_i \in CRB,\ \forall F_k \in F,\ \forall DC_j \in DC,\ \forall t \qquad (2)$$
Eq. (1) means that the total number of VM requests arrived at the CRBs must equal the total number of VM requests allocated to the cloud datacenters. Another constraint we should consider is the resource capacity of each cloud datacenter. Each cloud datacenter can only accommodate VM requests within its resource capacity (e.g., the total number of CPU cores). Then, we have the following constraints:
$$\sum_{\forall CRB_i \in CRB} \sum_{\forall F_k \in F} r^k_{\langle rc = rc_{CPU} \rangle} \cdot x^{i,k}_j(t) \le scp^j_{\langle rc = rc_{CPU} \rangle} \cdot m_j(t), \qquad \forall DC_j \in DC,\ \forall t \qquad (3)$$

$$\sum_{\forall CRB_i \in CRB} \sum_{\forall F_k \in F} r^k_{\langle rc = rc_{mem} \rangle} \cdot x^{i,k}_j(t) \le scp^j_{\langle rc = rc_{mem} \rangle} \cdot m_j(t), \qquad \forall DC_j \in DC,\ \forall t \qquad (4)$$

$$0 \le m_j(t) \le N(DC_j), \qquad \forall DC_j \in DC,\ \forall t \qquad (5)$$
where r^k_⟨rc=rc_CPU⟩ and r^k_⟨rc=rc_mem⟩ are the required number of CPU cores and the required memory size of a VM request with flavor type F_k ∈ F, and scp^j_⟨rc⟩ is the physical capacity of resource component rc ∈ RC of an arbitrary server in the cloud datacenter DC_j. Constraints (3) and (4) state that the allocated VM requests cannot exceed the resource capacity provided by cloud datacenter DC_j. We use m_j(t) to denote the number of active servers in cloud datacenter DC_j at time t; it is determined by a DRS plan M(t), and its upper bound is N(DC_j), the total number of physical servers in the cloud datacenter DC_j. Constraint (5) states that m_j(t) can be chosen in the range from 0 to N(DC_j) through the DRS plan. m_j(t) = 0 means that all servers in the cloud datacenter DC_j are in the sleep state, while m_j(t) = N(DC_j) means that they are all in the active state at time t.
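To make the constraints concrete, the following is a minimal sketch (in Python, not from the paper) of a feasibility check for a candidate allocation plan X(t) and DRS plan M(t) against constraints (1)-(5); the nested-list data layout and all identifier names are illustrative assumptions.

```python
# Hypothetical layout: x[i][k][j] = VMs of flavor k routed from CRB_i to DC_j,
# lam[i][k] = VMs of flavor k arriving at CRB_i, m[j] = active servers in DC_j.
def feasible(x, lam, m, flavors, server_capacity, num_servers):
    n_crb, n_flavor, n_dc = len(lam), len(flavors), len(m)
    for i in range(n_crb):
        for k in range(n_flavor):
            # Constraint (1): every arrived request is allocated somewhere.
            if sum(x[i][k][j] for j in range(n_dc)) != lam[i][k]:
                return False
            # Constraint (2): 0 <= x <= lambda.
            if any(not (0 <= x[i][k][j] <= lam[i][k]) for j in range(n_dc)):
                return False
    for j in range(n_dc):
        # Constraint (5): number of active servers within [0, N(DC_j)].
        if not (0 <= m[j] <= num_servers[j]):
            return False
        # Constraints (3)-(4): demanded CPU/memory fits the active capacity.
        for rc in ("cpu", "mem"):
            demand = sum(flavors[k][rc] * x[i][k][j]
                         for i in range(n_crb) for k in range(n_flavor))
            if demand > server_capacity[j][rc] * m[j]:
                return False
    return True
```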
Next, we consider VM co-location interference to build a performance degradation model of resource allocation in a cloud datacenter [12]. VM co-location interference means that, although cloud virtualization explicitly supports resource isolation when multiple VM requests run simultaneously on a common physical machine, it does not guarantee performance isolation between the VM requests internally. From the perspective of the CPU resource, the physical CPU cores of a server are not pinned to each running VM request but are assigned dynamically. The switching overhead of this dynamic CPU assignment policy can cause undesirable performance degradation of the allocated VM requests. Moreover, CPU resource contention aggravates the performance degradation, since it is very difficult to isolate the cache space of the CPU. There is a strong relationship between VM co-location interference and the number of co-located VMs on a physical machine [12]: the more VM instances are co-located, the more severe the VM co-location interference. Based on [12], we estimate the performance degradation D_j(t) of the cloud datacenter DC_j ∈ DC by CPU resource contention at time t as follows,
$$D_j(t) = \frac{\sum_{\forall CRB_i \in CRB} \sum_{\forall F_k \in F} x^{i,k}_j(t) \cdot r^k_{\langle rc = rc_{CPU} \rangle} \cdot \left( vu_{\langle rc = CPU \rangle}(t) + ts_j(t) \right)}{scp^j_{\langle rc = rc_{CPU} \rangle} \cdot m_j(t)}, \qquad \forall DC_j \in DC,\ \forall t \qquad (6)$$
where ts_j(t) is the average time slice allocated by the hypervisor [25,28] to the VM requests allocated to the cloud datacenter DC_j at time t. We use vu_⟨rc⟩(t) to denote the average utilization of the assigned virtual resources of all VM requests allocated to cloud datacenters at time t. Note that in Eq. (6), ts_j(t) and r^k_⟨rc=CPU⟩ are known in advance, while vu_⟨rc⟩(t) cannot be known beforehand, until the CPU utilization is measured through the internal monitoring module of each server in the cloud datacenters at time t [24]. Therefore it is necessary to use the historical CPU utilization of VM requests to find an optimal resource allocation for the current workload. As shown in Fig. 1, the data repository module is responsible for collecting and storing the monitored resource utilization of each VM request in order to estimate future demand. Our EEE-VMA solver uses the historical resource utilization data from the data repository module in each cloud datacenter to estimate the expected performance degradation of solution candidates.
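As an illustration only, the following sketch evaluates Eq. (6) for a single datacenter; the inputs ts_j (hypervisor time slice) and vu_cpu (average virtual CPU utilization) are assumed to come from the data repository module, and all names are hypothetical.

```python
# Eq. (6): CPU-contention degradation of one datacenter for allocation x_j,
# where x_j[i][k] is the number of flavor-k VMs routed from CRB_i to this DC.
def degradation(x_j, flavors, vu_cpu, ts_j, scp_cpu_j, m_j):
    if m_j == 0:
        return float("inf")   # no active servers cannot host any request
    demand = sum(x_j[i][k] * flavors[k]["cpu"] * (vu_cpu + ts_j)
                 for i in range(len(x_j)) for k in range(len(flavors)))
    return demand / (scp_cpu_j * m_j)
```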
3.2. Energy consumption model
3.2.1. The renewable energy model
Renewable energy such as PV and wind energy is more sustainable than traditional grid power; its price is lower and less carbon is emitted [9]. There are two models for achieving sustainable cloud datacenters by deploying renewable energy generation. One is on-site deployment of renewable energy generation at the datacenter facility itself. For example, Apple has built its own local biogas fuel cells and two 20-MW solar arrays in Maiden, NC, and these facilities have been powered by 100% renewable energy sources [26,27]. Such on-site renewable energy generators can alleviate the energy losses due to the transmission and distribution of generated energy, but their energy potential depends greatly on the location of the cloud datacenter. The other model is building the renewable energy generator at off-site facilities. It has the flexibility to locate the generator in a location with good weather (e.g., strong wind or bright sunshine), but significant transmission losses of energy can occur. In this paper, we use the first model, which has been adopted by most major datacenter owners.
We denote by rwe_j(t) and rpe_j(t) the dynamic capacity of renewable wind energy and renewable photovoltaic energy of the cloud datacenter DC_j ∈ DC at time t, respectively. Obviously, forecasting the future capacity of renewable energy is required to achieve energy efficient resource management of cloud datacenters, since renewable sources are usually intermittent and irregular. Therefore, we estimate the future capacity of renewable energy generation from the historical data in the data repository module of the cloud datacenter by calculating Exponentially Weighted Moving Average (EWMA) values. A detailed description of the EWMA based forecasting scheme for the estimated capacity of renewable energy is omitted in this paper.
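Since the paper omits the details of its EWMA forecaster, the following is only a generic EWMA sketch; the smoothing factor alpha and the sample values are assumptions, not parameters or measurements from the paper.

```python
# Generic EWMA forecast of the next-interval renewable capacity.
def ewma_forecast(history, alpha=0.3):
    """history: past capacity samples, oldest first; returns the estimate."""
    estimate = history[0]
    for sample in history[1:]:
        estimate = alpha * sample + (1.0 - alpha) * estimate
    return estimate

# e.g. predicted wind capacity (kW) for one datacenter (illustrative numbers)
rwe_next = ewma_forecast([120.0, 95.0, 130.0, 110.0])
```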
3.2.2. Heterogeneous power consumption model
We propose a novel power efficiency metric called powerMark to evaluate the heterogeneous power consumption of cloud datacenters. The servers that constitute each cloud datacenter have heterogeneous architectures, which implies that the specifications of their resources are different; consequently, even when they process the same application, the required power consumption might differ [8,13]. To describe powerMark in detail, we propose Definitions 1 and 2 as follows,
Definition 1 (powerMark). The powerMark pwM^j_⟨rc⟩ is the average power consumption per unit level of utilization of resource component rc ∈ RC of the servers in the cloud datacenter DC_j.

Definition 2 (pivot server). The pivot server pivotS is a predetermined server used as a criterion of resource capacity for normalizing the powerMark of each cloud datacenter.
We introduce the pivot server pivotS in Definition 2 to normalize the powerMark of each cloud datacenter. For simplicity, we assume that servers in the same cloud datacenter are power-homogeneous with each other. To obtain the powerMark value, we predetermine a set of resource utilization levels UR = {ur_1, ur_2, ..., ur_k}. The powerMark pwM^j_⟨rc⟩ represents the power efficiency of the servers in the cloud datacenter DC_j with respect to a certain resource component rc ∈ RC, calculated as the arithmetic mean of the power consumption measured at each resource utilization level ur_k ∈ UR, ur_k > 0. For example, if we set UR = {ur_1 = 0.1, ur_2 = 0.2, ..., ur_9 = 0.9} and rc = rc_CPU, then the power consumption of the server is measured at each CPU utilization level 0.1, 0.2, ..., 0.9. Based on the measured power consumption data, the powerMark pwM^j_⟨rc⟩ and its normalized counterpart npwM^j_⟨rc⟩ are given by
$$pwM^j_{\langle rc \rangle} = \frac{1}{|UR|} \sum_{\forall ur_k \in UR} \frac{pw^j_{\langle rc \rangle, ur_k}}{ur_k} \qquad (7)$$

$$npwM^j_{\langle rc \rangle} = \frac{1}{|UR|} \sum_{\forall ur_k \in UR} \frac{scp^{pivot}_{\langle rc \rangle}}{scp^j_{\langle rc \rangle}} \cdot \frac{pw^j_{\langle rc \rangle, ur_k}}{ur_k} \qquad (8)$$
where pw^j_⟨rc⟩,ur_k is the power consumption of the servers in the cloud datacenter DC_j at utilization level ur_k of resource component rc. npwM^j_⟨rc⟩ is the value of pwM^j_⟨rc⟩ normalized by the capacity of resource component rc of the pivot server pivotS, where (scp^pivot_⟨rc⟩ / scp^j_⟨rc⟩) · pw^j_⟨rc⟩,ur_k is the normalized value of pw^j_⟨rc⟩,ur_k. A lower powerMark represents a higher power efficiency of the cloud datacenter, and with a larger |UR|, powerMark can describe the power efficiency of the cloud datacenter more accurately. In order to investigate the usefulness of powerMark, we conduct a preliminary experiment to obtain the power consumption of heterogeneous servers running VM requests that process computation intensive jobs on a real testbed.
Table 2
Three server types for an experiment to investigate powerMark.

Server type   CPU architecture   CPU cores   CPU clock (GHz)   Cache size (kB)   Memory size (GB)
Server-1      Intel i5-760       4           2.8               8192              3
Server-2      Intel i5-4590      4           3.3               6144              8
Server-3      Intel i7-3770      8           3.4               8192              16
There are three server types used to investigate the heterogeneity of power consumption; the hardware specifications of each server type are shown in Table 2.
Each server has two 1 Gbps Ethernet interface cards and runs Ubuntu 14.04. The test application for the experiment is the mProject module (m108 with range 1.7) of the Montage project, a computation intensive application that produces astronomy image files of space galaxies [14]. Fig. 2 shows the power consumption curves of each server type as the CPU utilization is increased by running mProject, with the Server-1 type as pivotS. As mentioned earlier, each server requires a different power consumption even at the same resource utilization level. Server-1 has the largest power consumption, which means that this server type has the worst performance in terms of energy consumption. Fig. 3 shows the normalized powerMark npwM values, with rc = rc_CPU, of Server-1, 2, and 3 calculated based on Eq. (8). Note that the differences in the normalized powerMark npwM values among the servers in Fig. 3 are larger than the differences in the powerMark pwM values in Fig. 2. Server-3 has the smallest npwM value, which means that this server has the best power efficiency among the three servers, and this is in concordance with the results in Fig. 2. Based on the result curves in Figs. 2 and 3, we conclude that our proposed metric powerMark is simple and useful for representing the relative power efficiency of heterogeneous cloud datacenters in practice.

Fig. 2. Normalized power consumption results of Server-1, 2, 3 under execution of Montage applications as an example.
Fig. 3. Results of normalized powerMark of Server-1, 2, 3 based on Eq. (7).
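The following sketch shows how Eqs. (7) and (8) could be evaluated from measured (utilization, power) pairs for one server type; the sample power values are illustrative only and are not the measurements reported in Figs. 2 and 3.

```python
# powerMark (Eq. 7) and normalized powerMark (Eq. 8) for one datacenter and one
# resource component; `measurements` maps utilization level to measured power (W).
def powermark(measurements):
    return sum(pw / ur for ur, pw in measurements.items()) / len(measurements)

def normalized_powermark(measurements, scp_pivot, scp_j):
    scale = scp_pivot / scp_j          # capacity ratio against the pivot server
    return sum(scale * pw / ur for ur, pw in measurements.items()) / len(measurements)

# Illustrative samples at utilization levels 0.1 .. 0.9 (hypothetical numbers).
samples = {0.1: 45.0, 0.3: 60.0, 0.5: 78.0, 0.7: 95.0, 0.9: 115.0}
pwm = powermark(samples)
npwm = normalized_powermark(samples, scp_pivot=4, scp_j=8)
```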
3.2.3. Dynamic right sizing model
To achieve a power-proportional cloud datacenter that consumes power only in proportion to the workload, we consider the DRS approach, which adjusts the number of active servers by turning them on or off dynamically [3]. Obviously, there is no need to turn on all the servers in a cloud datacenter when the total workload is low. In the DRS approach, servers that have no running applications can transition to a power saving mode (e.g., sleep or hibernation) in order to avoid wasting energy, as shown in Fig. 4. In order to successfully deploy the DRS approach in our system, we should consider the switching overhead for adjusting the number of active servers (i.e., for turning sleeping servers on again).

Fig. 4. Illustration of the Dynamic Right Sizing procedure.
The switching overhead includes: (1) the additional energy consumption of the transition from the sleep to the active state (i.e., the awaken transition); (2) the wear-and-tear cost of the server; (3) the higher chance of fault occurrence when sleeping servers are toggled on [3]. We only consider the energy consumption as the overhead of DRS execution. Therefore, we define a constant α_aWaken to denote the amount of energy consumed by the awaken transition of a server. Then the total energy consumption e_j(t) of cloud datacenter DC_j at time t is defined as follows,
$$e_j(t) = \sum_{\forall rc \in RC} pwM^j_{\langle rc \rangle} \cdot \left( \frac{\sum_{\forall CRB_i \in CRB} \sum_{\forall F_k \in F} r^k_{\langle rc \rangle} \cdot x^{i,k}_j(t) \cdot vu_{rc}(t)}{scp^j_{rc} \cdot m_j(t)} \right) + \alpha_{aWaken} \cdot \left( m_j(t) - m_j(t-1) \right)^{+}, \qquad \forall DC_j \in DC,\ \forall t \qquad (9)$$
where (x)^+ = max(0, x). The first term on the right-hand side of (9) represents the energy consumed by the servers serving the VM requests allocated to the cloud datacenter DC_j at time t, and the second term represents the energy consumed by the awaken transition of sleeping servers. In particular, the second term implies that frequent changes in the number of active servers may increase the undesirable waste of energy. Note that the overhead of the transition from the active to the sleep state (i.e., the asleep transition) is ignored in our model, since the time required for the asleep transition is relatively short compared to that of the awaken transition.
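A minimal sketch of Eq. (9) for one datacenter is given below; the nested-list layout of the allocation x_j and the parameter names (alpha_awaken, pwm, scp_j) are assumptions made for illustration.

```python
# Eq. (9): serving energy plus awaken-transition overhead for one datacenter.
def energy(x_j, flavors, vu, pwm, scp_j, m_now, m_prev, alpha_awaken):
    switch = alpha_awaken * max(0, m_now - m_prev)    # (x)^+ switching term
    if m_now == 0:
        return switch                                  # no active servers, no serving energy
    serve = 0.0
    for rc, pwm_rc in pwm.items():                     # rc in {"cpu", "mem"}
        demand = sum(x_j[i][k] * flavors[k][rc] * vu[rc]
                     for i in range(len(x_j)) for k in range(len(flavors)))
        serve += pwm_rc * demand / (scp_j[rc] * m_now)
    return serve + switch
```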
3.3. The cloud datacenter cost minimization problem
We build a cost model based on the workload model and the energy consumption model proposed in Sections 3.1 and 3.2. We focus on minimizing the total cost, which includes three sub costs: (1) the energy cost; (2) the performance degradation cost; (3) the bandwidth cost. To simplify the energy cost model, we assume in this paper that the price of renewable energy usage is zero (strictly speaking, the real price is not zero, since investment and maintenance expenditure for the renewable energy generation equipment is required to deploy a renewable energy generator at a cloud datacenter). Generally, the price of grid power and the capacity of the generated renewable energy are time-varying according to the electricity market and the location of the cloud datacenter [17,19]. We use c^e_j(t) to denote the energy cost of cloud datacenter DC_j at time t, as follows,
$$c^{e}_j(t) = \rho^{grid}(t) \cdot \left( e_j(t) - rwe_j(t) - rpe_j(t) \right)^{+}, \qquad \forall DC_j \in DC,\ \forall t \qquad (10)$$
where ρ^grid(t) denotes the time-varying price of grid power at time t. Next, the performance degradation cost is determined by the total performance degradation of the cloud datacenter based on Eq. (6). Using ρ^perf to denote the constant penalty price for performance degradation, the performance degradation cost c^perf_j(t) of the cloud datacenter DC_j at time t is given by,
$$c^{perf}_j(t) = \rho^{perf} \cdot D_j(t), \qquad \forall DC_j \in DC,\ \forall t \qquad (11)$$
Note that ρ^perf is a constant, in contrast to ρ^grid(t), which changes dynamically over time. Third, the bandwidth cost is the cost of data transfer between the cloud service users close to the CRBs and the VM requests allocated on servers in the cloud datacenters. Obviously, different links between a CRB and a cloud datacenter incur different bandwidth costs. The bandwidth cost is determined by the network distance (e.g., hop distance) and the transferred data size. We use c^bw_j(t) to denote the bandwidth cost of the cloud datacenter DC_j at time t, given by
$$c^{bw}_j(t) = \sum_{\forall CRB_i \in CRB} \rho^{bw}_{i,j} \cdot \left( \sum_{\forall F_k \in F} x^{i,k}_j(t) \cdot ds^{k} \right), \qquad \forall DC_j \in DC,\ \forall t \qquad (12)$$
where ρ^bw_{i,j} denotes the bandwidth cost coefficient of the communication link between the cloud request broker CRB_i and the cloud datacenter DC_j, and ds^k denotes the transferred data size of a VM request with flavor type F_k. Obviously, as the hop distance between CRB_i and DC_j grows longer, ρ^bw_{i,j} also increases. Eq. (12) implies that allocating more VM requests to a cloud datacenter which is far away (i.e., has a long hop distance) from the source CRB increases the bandwidth cost c^bw_j(t). It is therefore advantageous for bandwidth cost saving to allocate VM requests to the cloud datacenter nearest to their source CRB.
Consequently, we focus on minimizing the total cost of all cloud datacenters through the proposed EEE-VMA solver. We use c^total(t) to denote the total cost of all cloud datacenters at time t, which includes the energy cost, the performance degradation cost, and the bandwidth cost. We then define the objective function f_EEE-VMA(S(t)) to calculate the total cost determined by the solution S(t) = { X(t) = { x^{1,1}_1(t), x^{1,1}_2(t), ..., x^{|CRB|,|F|}_{|DC|}(t) }, M(t) = { m_1(t), m_2(t), ..., m_{|DC|}(t) } } at time t as follows,
$$f_{EEE\text{-}VMA}(S(t)):\quad c^{total}(t) = \sum_{\forall DC_j \in DC} \left( c^{e}_j(t) + c^{perf}_j(t) + c^{bw}_j(t) \right), \qquad \text{s.t. } (1)\text{--}(5) \qquad (13)$$
To solve this optimization problem, we propose the EEE-VMA approach based on a GA in order to find an approximate optimal solution for VM request allocation. In the following subsection, we describe our algorithm in detail.
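Putting Eqs. (10)-(13) together, the following sketch evaluates the total cost of a candidate solution S(t) = (X(t), M(t)); it reuses the energy and degradation sketches above, and the dictionary keys describing each datacenter (pwm, scp, rwe, rpe, alpha_awaken) are illustrative assumptions rather than the authors' data structures.

```python
# Aggregated objective f_EEE-VMA for one solution; constraint checking (1)-(5)
# is assumed to be done separately (e.g., with the `feasible` sketch above).
def total_cost(x, m, m_prev, dcs, rho_grid, rho_perf, rho_bw, ds, flavors, vu, ts):
    cost = 0.0
    for j, dc in enumerate(dcs):
        x_j = [[x[i][k][j] for k in range(len(flavors))] for i in range(len(x))]
        e_j = energy(x_j, flavors, vu, dc["pwm"], dc["scp"],
                     m[j], m_prev[j], dc["alpha_awaken"])
        # Eq. (10): only energy not covered by renewables is billed at grid price.
        c_energy = rho_grid[j] * max(0.0, e_j - dc["rwe"] - dc["rpe"])
        # Eq. (11): penalty proportional to CPU-contention degradation.
        c_perf = rho_perf * degradation(x_j, flavors, vu["cpu"], ts[j],
                                        dc["scp"]["cpu"], m[j])
        # Eq. (12): bandwidth cost grows with hop distance and transferred data.
        c_bw = sum(rho_bw[i][j] * sum(x[i][k][j] * ds[k] for k in range(len(flavors)))
                   for i in range(len(x)))
        cost += c_energy + c_perf + c_bw
    return cost
```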
3.4. Evolutionary energy efficient virtual machine allocation
In this section, we propose the EEE-VMA approach based on a GA, one of the efficient metaheuristics for solving complex optimization problems. In order to successfully apply the GA within EEE-VMA, we should define accurate strategies for the GA and set its parameters appropriately. To do this, we consider the basic steps of the GA as follows.
3.4.1. Encoding scheme
A chromosome (i.e., an individual in the population) encodes the solution S(t) of our datacenter management scheme for the cloud datacenters. Each gene in the chromosome is an integer value. The chromosome consists of multiple genes divided into two parts: the first part is the VM request allocation plan X(t) ∈ S(t), and the second part is the DRS plan M(t) ∈ S(t). The detailed structure of the chromosome is shown in Fig. 5.

In the first part of the chromosome, gene values represent the number of VM requests allocated to each cloud datacenter at time t. For example, as shown in Fig. 5, the gene (1, 210) in the chromosome represents x^{1,1}_1(t) = 210, which means that the number of allocated VM requests with flavor type F_1 from CRB_1 to DC_1 is 210 at time t. In the second part of the chromosome, gene values represent the number of active servers in each cloud datacenter at time t. In Fig. 5, the gene (|CRB|·|F|·|DC| + 1, 3200) in the chromosome means that the number of active servers in the cloud datacenter DC_1 is 3200 at time t.

Fig. 5. Encoding example of VM allocation with a chromosome.
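The following sketch illustrates the chromosome layout of Fig. 5, with the first |CRB|·|F|·|DC| integer genes encoding X(t) and the last |DC| genes encoding M(t); the exact gene ordering is an assumption for illustration, not necessarily the one used by the authors.

```python
# Position of x_{i,k,j} inside the flat integer chromosome (assumed ordering).
def gene_index(i, k, j, n_flavor, n_dc):
    return (i * n_flavor + k) * n_dc + j

# Decode a chromosome back into the allocation plan X(t) and the DRS plan M(t).
def decode(chromosome, n_crb, n_flavor, n_dc):
    split = n_crb * n_flavor * n_dc
    x = [[[chromosome[gene_index(i, k, j, n_flavor, n_dc)]
           for j in range(n_dc)] for k in range(n_flavor)] for i in range(n_crb)]
    m = chromosome[split:split + n_dc]      # active servers per datacenter
    return x, m
```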
3.4.2. Initialization
In the first generation g = 1, the GA in the EEE-VMA approach begins with a randomly generated population according to the VM requests submitted at each CRB. To reduce the computation time of GA execution, the range of values for each gene can be predetermined based on constraints (2) and (5).
3.4.3. Evaluation
In the EEE-VMA approach, we use (13) to evaluate the performance of each chromosome (i.e., solution) in the population. The fitness value of a chromosome is inversely related to its cost value: a higher fitness value implies a higher-quality chromosome. Note that if a certain chromosome violates any of constraints (1)-(5), its cost value is set to positive infinity. Otherwise, the chromosome with the smallest cost value among all the chromosomes in the population generated at g = gMax (the maximum generation step) is finally chosen as the near optimal solution S*(t).
3.4.4. Selection
There are several candidate schemes for selecting appropriate solutions in a GA. In particular, we adopt roulette-wheel selection, which determines the probability of each chromosome being chosen according to its fitness value. This scheme tends to preserve superior solutions and evolve them in the next generation [30].
3.4.5. Crossover
The role of crossover is to generate offspring from two parents by cutting certain genes of the parents and recombining the gene fragments. The offspring inherits characteristics of each parent. Our EEE-VMA approach adopts a simple crossover scheme in which the first half of the first parent and the second half of the second parent are combined to form the genes of their offspring. Note that crossover has to be conducted separately on each part of the chromosome, since the chromosome has two parts: the VM request allocation plan and the DRS plan.
3.4.6. Mutation
It is necessary to ensure the diversity of the generated population at every generation step in order to avoid the local minima problem in GA. At each generation step g, the gene values constituting a chromosome can be modified randomly according to a predetermined probability pb_mt. If pb_mt is too large, the superior genes inherited from the parents can be lost; conversely, if pb_mt is too small, the diversity of the population might be too low. It is important to determine an appropriate pb_mt in order to maintain a diverse and superior population. However, we do not consider this issue further, since it is out of the scope of this paper.
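The crossover and mutation operators described above can be sketched as follows; the part-wise recombination and the per-gene re-randomization within predetermined bounds follow the description in this subsection and the previous one, while the function names and the bound representation are illustrative.

```python
import random

# Part-wise crossover: the allocation genes and the DRS genes are recombined
# separately, taking the first half of parent A and the second half of parent B.
def crossover(parent_a, parent_b, split):
    child = []
    for part_a, part_b in ((parent_a[:split], parent_b[:split]),
                           (parent_a[split:], parent_b[split:])):
        cut = len(part_a) // 2
        child.extend(part_a[:cut] + part_b[cut:])
    return child

# Mutation: each gene is re-randomized within its admissible range with
# probability pb_mt (gene_upper_bounds follows constraints (2) and (5)).
def mutate(chromosome, gene_upper_bounds, pb_mt=0.001):
    return [random.randint(0, hi) if random.random() < pb_mt else g
            for g, hi in zip(chromosome, gene_upper_bounds)]
```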
The proposed GA for the EEE-VMA approach is described in Algorithm 1. In order to obtain the near optimal solution S*(t) of datacenter management for the VM requests arriving at the CRBs at time t, the state information of the servers in all datacenters at time t-1 is required. If the current time is t = 0, we assume that the previous state of all servers is active (i.e., all servers are switched on). In line 02, we randomly initialize the candidate population cand_pop_g(t), g = 1, with population size ps (the limit on the number of chromosomes in the population). The population cand_pop_g(t) evolves until g = gMax to generate the final population pop_{g=gMax}(t), from which the near optimal solution is searched, as shown in lines 03 to 29. Two parent chromosomes S_{g,i}(t) and S_{g,j}(t) are drawn from temp_pop_g(t) to produce an offspring S_{g,k}(t) in lines 06 to 10. In line 11, each offspring in the set offspring_g(t) is mutated by modifying each gene according to the probability pb_mt, to maintain the diversity of the population. In line 14, we check whether any of the constraints (i.e., Eqs. (1)-(5)) of each solution S_{g,i}(t) in cand_pop_g(t) is violated. If so, the cost value of the corresponding solution is set to positive infinity. Otherwise, the objective function value of the solution is calculated through f_EEE-VMA(·) in line 17. If we find a solution S_{g,i}(t) whose objective function value c_{g,i}(t) is smaller than the predetermined threshold c^thr, we take S_{g,i}(t) as the near optimal solution S*(t) and Algorithm 1 terminates. Otherwise, the chromosomes to be preserved for the next generation are chosen from the current population through the Iterative Roulette-wheel Selection procedure (Algorithm 2), as shown in line 23. When the maximum generation step gMax is reached, the near optimal solution S*(t) with the minimum cost value of f_EEE-VMA(·) in pop_{g=gMax}(t) is found and returned to the EEE-VMA solver in line 30.
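A highly simplified sketch of the control flow of Algorithm 1 is shown below; it is not the authors' pseudocode, the bookkeeping of the candidate and temporary populations is collapsed, and constraint-violating chromosomes are simply scored as infinity by the supplied cost function.

```python
import random

# Simplified GA loop: cost_fn returns infinity for infeasible chromosomes,
# select_fn implements the roulette-wheel selection, crossover_fn/mutate_fn
# are callables over whole chromosomes (e.g. wrappers over the sketches above).
def eee_vma_solve(init_population, cost_fn, select_fn, crossover_fn, mutate_fn,
                  g_max=500, c_thr=0.0):
    population = list(init_population)
    best, best_cost = None, float("inf")
    for g in range(g_max):
        offspring = [mutate_fn(crossover_fn(random.choice(population),
                                            random.choice(population)))
                     for _ in range(len(population))]
        candidates = population + offspring
        costs = [cost_fn(s) for s in candidates]
        for s, c in zip(candidates, costs):
            if c < best_cost:
                best, best_cost = s, c
        if best_cost <= c_thr:          # early stop on the cost threshold
            break
        population = select_fn(candidates, costs, len(population))
    return best
```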
The IterativeRwSelection procedure used in Algorithm 1 is described in Algorithm 2. In lines 02 to 07, the cost values that are positive infinity are removed from cand_C_g(t) and added to illeg_C_g(t), since solutions that do not violate constraints (1)-(5) are preferentially considered as candidates to be preserved for the next generation. The maximum (worst) and minimum (best) objective function values are found from cand_C_g(t) in lines 09 and 10. The fitness value of each solution is calculated as shown in line 13; through this equation, the fitness value of the best solution is α times that of the worst solution. The selection pressure, which represents the difference between the fitness values of superior and inferior solutions, increases as α increases. The sum of the fitness values of all solutions, SF, is updated in line 15.
This value represents the total size of the roulette wheel, and each solution is assigned a space on the wheel; the selection probability of each solution is proportional to the size of its assigned space. The selection procedure of the roulette wheel is described in lines 19 to 26. At every step, the cumulative sum QS is updated according to the fitness value fv_{g,i}(t) in FV_g(t). If the selection point SP is smaller than QS after the latest update by fv_{g,i}(t) (i.e., Σ_{k=1}^{i-1} fv_{g,k}(t) ≤ SP ≤ Σ_{k=1}^{i} fv_{g,k}(t)), then the index i of S_{g,i}(t) in cand_pop_g(t) is added to chIdxSet_g(t). If the total number of chromosomes chosen by the roulette-wheel procedure is not sufficient (i.e., the cardinality of chIdxSet_g(t) is smaller than the predetermined population size ps), then we supplement chIdxSet_g(t) by randomly drawing indices of chromosomes from illeg_C_g(t). After all the procedures are finished, chIdxSet_g(t) is returned to Algorithm 1 in line 32.
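The iterative roulette-wheel selection of Algorithm 2 could be sketched as follows; the linear fitness scaling that gives the best solution α times the fitness of the worst, and the fallback to infeasible solutions when too few feasible ones exist, follow the description above, while the selection-pressure value alpha=4.0 is an arbitrary illustrative choice.

```python
import random

# Roulette-wheel selection over feasible candidates, with infeasible ones only
# filling leftover slots; `costs` use float("inf") to mark constraint violations.
def roulette_select(candidates, costs, ps, alpha=4.0):
    feasible = [i for i, c in enumerate(costs) if c != float("inf")]
    infeasible = [i for i, c in enumerate(costs) if c == float("inf")]
    chosen = []
    if feasible:
        worst = max(costs[i] for i in feasible)
        best = min(costs[i] for i in feasible)
        span = (worst - best) or 1.0
        # Best solution gets alpha times the fitness of the worst (selection pressure).
        fitness = [1.0 + (alpha - 1.0) * (worst - costs[i]) / span for i in feasible]
        total = sum(fitness)
        while len(chosen) < ps:
            point, cum = random.uniform(0.0, total), 0.0
            for idx, fv in zip(feasible, fitness):
                cum += fv
                if point <= cum:
                    chosen.append(idx)
                    break
    random.shuffle(infeasible)
    chosen.extend(infeasible[:ps - len(chosen)])   # supplement if too few feasible
    return [candidates[i] for i in chosen]
```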
4. Performance evaluation
In this section, we evaluate the performance of our proposed EEE-VMA approach based on both simulation analysis and experiments on a real testbed. To highlight the benefits of our design for renewable and QoS aware workload management, we perform a numerical simulation based on real-world traces of renewable energy capacity.
4.1. Dynamic capacity of renewable energy
We consider three locations for the raw data used to build the capacity traces of renewable (wind) energy: Oak Ridge National Lab (Eastern Tennessee); the University of Arizona (Tucson, Arizona); and the University of Nevada, Las Vegas (Paradise, Nevada) [11,33]. We obtain the wind energy capacity traces at these three locations based on [33], which collects wind speed data every day. The capacity traces of each location at EST 05:20-17:54 on September 9, 2015 are shown in Fig. 6. Fig. 6(a), (c), and (e) show the wind speed at each location, which fluctuates considerably even over a short period. We assume that each generator has 30 wind turbines, and the amount of generated wind energy is estimated based on the wind power prediction scheme from [34]. The curves of the amount of available wind energy are shown in Fig. 6(b), (d), and (f).

Fig. 6. Wind speed (m/s) and its corresponding amount of generated wind energy (kW) at Oak Ridge National Lab (a and b), Univ. of Arizona (c and d), and Univ. of Nevada (e and f) at EST 05:20-17:54 on September 9, 2015 [33].
Fig. 7. Real time price of power grid at three regions.
Fig. 8. CPU utilization of the running VM request including pbzip2, iozone3, and netperf.
Fig. 9. Total cost of datacenters including servers with heterogeneous (a) and homogeneous power efficiency (b).
4.2. Energy price description
As mentioned earlier, only the grid power price is considered, since we assume that renewable energy is free of charge. The grid power price changes dynamically according to the time of electricity consumption. In our simulation, we use electricity price information based on the real time pricing over 24 h in the electricity market, as shown in Fig. 7 [23,35]. Note that the electricity price is high from 6 a.m. to 2 p.m. and from 7 p.m. to 9 p.m. The electricity usage is usually increased during these periods due to the needs of industrial and household appliances. In our simulation, each cloud datacenter is randomly assigned one of the electricity pricing curves of datacenters 1, 2, and 3 in Fig. 7.
4.3. Cloud resource description
The total number of cloud datacenters is nine, and each datacenter owns 2 × 10^3 homogeneous servers in this paper. For the VM instance specifications, we adopt the policy of Amazon Web Services (AWS) EC2: our cloud datacenters support the set of flavor types F = {F_1 = (CPU = 2 cores, mem = 4 GB), F_2 = (4, 8), F_3 = (8, 16), F_4 = (16, 32)}, and each VM request has an arbitrary flavor type F_k ∈ F chosen randomly [29]. As mentioned in Section 3, each cloud datacenter has a heterogeneous server architecture; the datacenters have different powerMark values in the range of 200-500, based on the results in Fig. 3.
4.4. Workload scenario
Our considered workload includes two parts: the number of VM requests Λ(t) and their required resource utilization vu_⟨rc⟩(t) at time t. The number of VM requests Λ(t) ranges from 3 × 10^3 to 100 × 10^3 in this paper. Obviously, as Λ(t) increases, both the energy consumption and the performance degradation also increase.
Fig. 10. Active server ratio of cloud datacenters including servers with heterogeneous (a) and homogeneous power efficiency (b).
Fig. 11. Performance degradation of CPU contention by co-located VM requests in Server-1 (a), 2 (b), and 3 (c).
From the perspective of resource utilization, we only consider the resource component rc = rc_CPU and ignore the resource component rc_mem, since the energy consumption and performance degradation caused by rc_mem are negligible compared to those caused by rc_CPU. We use real traces of CPU resource utilization measured by the monitoring module with several benchmark applications on the physical machines. Fig. 8 shows the CPU resource utilization of running benchmarks including a mixture of pbzip2, iozone3, and netperf on VM instances.
4.5. GA Parameters for EEE-VMA approach
We consider a population size ps in the range from 10^2 to 10^4, a maximum generation step gMax in the range of 100 to 1000, and mutation probabilities of 0.001, 0.005, and 0.01 in Algorithms 1 and 2. As the parameters ps and gMax are increased, the quality of the derived solution improves, but the computation requirement also grows.
4.6. Traditional resource management schemes
To demonstrate that our proposed approach outperforms existing resource management schemes, we compare the EEE-VMA approach to both VM consolidation and VM balancing based allocation approaches. The VM consolidation approach packs as many VM requests as possible onto a common physical server. This scheme tends to reduce the number of active servers; therefore, the energy saving performance increases, while the performance degradation worsens. In contrast, the VM balancing approach splits VM requests over multiple cloud datacenters. This scheme avoids the performance degradation of resource contention by VM request co-location, but causes large energy consumption due to the large number of active servers.
Figs. 9, 10, and 11 show the performance of our proposed EEE-VMA approach and of the existing VM balancing and VM consolidation approaches at ps = 10^2, gMax = 500, and mutation probability 0.001. Fig. 9 shows the total cost in Eq. (13) of the VM balancing, VM consolidation and our proposed EEE-VMA approaches at different offered workload levels. Fig. 9(a) shows the total cost curves of all approaches under the assumption that each cloud datacenter has heterogeneous power efficiency. Our proposed approach improves the cost saving performance by about 8% and 53% compared to the VM consolidation and VM balancing approaches, respectively. However, the difference in cost saving performance between the traditional approaches and our EEE-VMA approach in Fig. 9(b), which assumes that each cloud datacenter has homogeneous power efficiency, is relatively small compared to the one in Fig. 9(a). In this case the EEE-VMA approach improves the cost saving performance by about 10% and 15% compared to the VM consolidation and VM balancing approaches, respectively. Note that our EEE-VMA approach further improves the energy saving performance in the heterogeneous cloud datacenters since it uses the powerMark value, which ranks the power efficiency of each cloud datacenter to maximize the energy efficiency of resource allocation. However, our proposed approach still performs better than the existing approaches even under the assumption of homogeneous power efficiency of each cloud datacenter. Fig. 10 shows the active server ratio of cloud datacenters under our EEE-VMA approach and the existing resource management approaches. In Fig. 10(a), the average active server ratio of the EEE-VMA approach is under 30%, while that of VM balancing is close to 60%. Our EEE-VMA approach considers both the energy consumption and the performance degradation of VM requests, while VM balancing focuses only on the performance degradation. Note that the energy saving performance of VM consolidation is worse than that of the EEE-VMA approach even though VM consolidation focuses on the energy consumption of cloud datacenters. This is because our EEE-VMA approach preferentially allocates VM requests to power efficient cloud datacenters based on their powerMark values, while VM consolidation randomly assigns VM requests to cloud datacenters. In Fig. 10(b), the active server ratio of VM consolidation is lower than that of our EEE-VMA approach; this is because the VM consolidation approach focuses only on the energy consumption of the cloud datacenter, while the EEE-VMA approach avoids unacceptable performance degradation of running VM requests through Eq. (11).
Fig. 11 shows the performance degradation of the allocated VM requests in each cloud datacenter under the EEE-VMA, VM consolidation, and VM balancing approaches for each server type. The performance degradation is calculated by Eq. (6). In terms of performance degradation, the VM balancing approach outperforms the others, including our proposed EEE-VMA approach. VM balancing tries to spread the submitted VM requests over all cloud datacenters as fairly as possible; therefore, the CPU resource contention of co-located VM requests is minimized. For the Server-1 type, the performance degradation of VM balancing is lower than that of the EEE-VMA approach and the VM consolidation by 40% and 60%, respectively. For the Server-2 type, the performance degradation of VM balancing is lower than that of the EEE-VMA approach and the VM consolidation by 55% and 62%, respectively. Finally, for the Server-3 type, the VM balancing approach improves the performance degradation by about 39% and 55% compared to the EEE-VMA approach and the VM consolidation, respectively.
5. Conclusions
In this paper, we introduced the EEE-VMA approach for greening cloud datacenters with renewable energy generators. We proposed a novel energy efficiency metric, powerMark, to classify the power efficiency of heterogeneous servers in cloud datacenters, and built a comprehensive cost model that accounts for switching overheads in order to efficiently reduce the energy consumption of servers without significant performance degradation caused by co-located VM requests and DRS execution. We deployed the iterative roulette-wheel algorithm within the GA of the EEE-VMA approach in order to solve the complex objective function of our cost model. Various experimental results based on simulation and an OpenStack platform show that our proposed algorithms are suitable for deployment in prevalent cloud datacenters. In terms of total cost, our EEE-VMA approach improves the average cost by 28% compared to existing resource management schemes across all workload levels. With increased computation investment for the GA in the EEE-VMA approach, our proposed approach can get arbitrarily close to the optimal value.
Acknowledgments
This work was supported by 'The Cross-Ministry Giga
KOREA Project' of the Ministry of Science, ICT and Future
Planning, Korea [GK13P0100, Development of Tele-
Experience Service SW Platform based on Giga Media].
References
[1] J. Hamilton, Cost of Power in Large-Scale Data Centers, Nov. 2009. Available online: http://perspectives.mvdirona.com/
[2] J. Koomey, Growth in Data Center Electricity Use 2005–2010, Analytics Press, Burlingame, CA, USA, 2011.
[3] M. Lin, A. Wierman, L. Lachlan, H. Andrew, E. Thereska, Dynamic right-sizing for power-proportional data centers, IEEE/ACM Trans. Netw. 21 (5) (2013) 1378–1391.
[4] L.A. Barroso, U. Holzle, The case for energy-proportional computing, Computer 40 (12) (2007) 33–37.
[5] T. Lu, M. Chen, L. Lachlan, H. Andrew, Simple and effective dynamic provisioning for power-proportional data centers, IEEE Trans. Parallel Distrib. Syst. 24 (6) (2013) 1161–1171.
[6] Z. Ou, H. Zhuang, J.K. Nurminen, A. Yla-Jaaski, P. Hui, Exploiting hardware heterogeneity within the same instance type of Amazon EC2, in: Proceedings of the 4th USENIX Workshop on HotCloud, 2012.
[7] Z. Ou, H. Zhuang, A. Lukyanenko, J.K. Nurminen, P. Hui, V. Mazalov, A. Yla-Jaaski, Is the same instance type created equal? Exploiting heterogeneity of public clouds, IEEE Trans. Cloud Comput. 1 (2) (2013) 201–214.
[8] A. Beloglazov, R. Buyya, Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers, Concurr. Comput.: Pract. Exp. 24 (2012) 1397–1420, http://dx.doi.org/10.1002/cpe.1867.
[9] Z. Liu, Y. Chen, C. Bash, A. Wierman, D. Gmach, Z. Wang, M. Marwah, C. Hyser, Renewable and cooling aware workload management for sustainable data centers, in: Proceedings of the ACM SIGMETRICS, London, UK, 2012.
[10] Y. Guo, Y. Fang, Electricity cost saving strategy in data centers by using energy storage, IEEE Trans. Parallel Distrib. Syst. 24 (6) (2013) 1149–1160.
[11] Y. Guo, Y. Gong, Y. Fang, P.P. Khargonekar, X. Geng, Energy and network aware workload management for sustainable data centers with thermal storage, IEEE Trans. Parallel Distrib. Syst. 25 (8) (2014) 2030–2042.
[12] F. Xu, F. Liu, L. Liu, H. Jin, B. Li, B. Li, iAware: making live migration of virtual machines interference-aware in the cloud, IEEE Trans. Comput. 63 (12) (2014) 3012–3025.
[13] X. Wang, B. Li, B. Liang, Dominant resource fairness in cloud computing systems with heterogeneous servers, in: Proceedings of the IEEE INFOCOM, Toronto, Canada, 2014.
[14] Montage, http://montage.ipac.caltech.edu/
[15] S.K. Garg, R. Buyya, Green cloud computing and environmental sustainability, in: S. Murugesan, G.R. Gangadharan (Eds.), Harnessing Green IT: Principles and Practices, Wiley Press, UK, 2012, pp. 315–340.
[16] I. Goiri, M.E. Haque, K. Le, R. Beauchea, T.D. Nguyen, J. Guitart, J. Torres, R. Bianchini, Matching renewable energy supply and demand in green datacenters, Ad Hoc Netw. 25 (2015) 520–534.
[17] C. Wu, H. Mohsenian-Rad, J. Huang, A.Y. Wang, Demand side management for wind power integration in microgrid using dynamic potential game theory, in: Proceedings of the IEEE GLOBECOM Workshop on Smart Grid Communications and Networking, Houston, TX, Dec. 2011.
[18] Solar Anywhere, Solar Anywhere overview, Clean Power Research, 17 Apr. 2012. http://www.solaranywhere.com/Public/Overview.aspx
[19] R. Huang, T. Huang, R. Gadh, Solar generation prediction using the ARMA model in a laboratory-level micro-grid, in: Proceedings of the IEEE SmartGridComm Symposium, Tainan, Taiwan, Nov. 2012.
[20] OpenStack, http://www.openstack.org/
[21] YOCTO-WATT, http://www.yoctopuce.com/EN/products/usb-electrical-sensors/yocto-watt
[22] D. Gupta, L. Cherkasova, R. Gardner, A. Vahdat, Enforcing performance isolation across virtual machines in Xen, in: Proceedings of the ACM/IFIP/USENIX 2006 International Conference on Middleware, Nov. 2006.
[23] A. Qureshi, R. Weber, H. Balakrishnan, J. Guttag, B. Maggs, Cutting the electric bill for internet-scale systems, ACM SIGCOMM Computer Communication Review 39 (4) (2009) 123–134.
[24] T. Wood, P. Shenoy, A. Venkataramani, M. Yousif, Sandpiper: black-box and gray-box resource management for virtual machines, Comput. Netw. 53 (17) (2009) 2923–2938.
[25] Credit Scheduler, http://wiki.xen.org/wiki/Credit_scheduler
[26] C. Ren, D. Wang, B. Urgaonkar, A. Sivasubramaniam, Carbon-aware energy capacity planning for datacenters, in: Proceedings of the IEEE MASCOTS, 2012, pp. 391–400.
[27] Apple Environmental Responsibility, http://www.apple.com/environment/renewable-resources/
[28] H. Fawaz, Y. Peng, C. Youn, A MISO model for power consumption in virtualized servers, Clust. Comput. 18 (2) (2015) 847–863.
[29] Amazon Web Services, https://aws.amazon.com
[30] D. Dasgupta, Z. Michalewicz, Evolutionary Algorithms in Engineering Applications, Springer, Berlin, Germany, 1997.
[31] M. Chen, Y. Wen, H. Jin, V. Leung, Enabling technologies for future data center networking: a primer, IEEE Netw. 27 (4) (2013) 8–15.
[32] M. Chen, Y. Zhang, L. Hu, T. Taleb, Z. Sheng, Cloud-based wireless network: virtualized, reconfigurable, smart wireless network to enable 5G technologies, ACM/Springer Mob. Netw. Appl. 20 (6) (2015) 704–712.
[33] Measurement and Instrumentation Data Center (MIDC), http://www.nrel.gov/midc/
[34] C. Wu, H. Mohsenian-Rad, J. Huang, A.Y. Wang, Demand side management for wind power integration in microgrid using dynamic potential game theory, in: Proceedings of the IEEE GLOBECOM Workshop on Smart Grid Communications and Networking, Houston, TX, Dec. 2011.
[35] H. Mohsenian-Rad, A. Leon-Garcia, Optimal residential load control with price prediction in real-time electricity pricing environments, IEEE Trans. Smart Grid 1 (2) (2010) 120–133.
Renewable and Sustainable Energy Reviews 62 (2016) 195–214

Sustainable Cloud Data Centers: A survey of enabling techniques and technologies

Junaid Shuja a, Abdullah Gani a, Shahaboddin Shamshirband b, Raja Wasim Ahmad a, Kashif Bilal c

a Centre for Mobile Cloud Computing Research (C4MCCR), FSKTM, University of Malaya, Kuala Lumpur 50603, Malaysia
b Faculty of Computer Science and Information Technology, University of Malaya, Malaysia
c Department of Computer Science, COMSATS Institute of Information Technology, Pakistan

Article history: Received 19 June 2015; Received in revised form 15 February 2016; Accepted 16 April 2016; Available online 4 May 2016

Keywords: Cloud Data Centers; Energy efficiency; Renewable energy; Waste heat utilization; Modular data centers; VM migration

Abstract
Cloud computing services have gained tremendous popularity
and widespread adoption due to their
flexible and on-demand nature. Cloud computing services are
hosted in Cloud Data Centers (CDC) that
deploy thousands of computation, storage, and communication
devices leading to high energy utilization
and carbon emissions. Renewable energy resources replace
fossil fuels based grid energy to effectively
reduce carbon emissions of CDCs. Moreover, waste heat
generated from electronic components can be
utilized in absorption based cooling systems to offset cooling
costs of data centers. However, data centers
need to be located at ideal geographical locations to reap
benefits of renewable energy and waste heat
recovery options. Modular Data Centers (MDC) can enable
energy as a location paradigm due to their
shippable nature. Moreover, workload can be transferred
between intelligently placed geographically
dispersed data centers to utilize renewable energy available
elsewhere with virtual machine migration
techniques. However, adoption of aforementioned sustainability
techniques and technologies opens new
challenges, such as, intermittency of power supply from
renewable resources and higher capital costs. In
this paper, we examine sustainable CDCs from various aspects
to survey the enabling techniques and
technologies. We present case studies from both academia and
industry that demonstrate favorable
results for sustainability measures in CDCs. Moreover, we
discuss state-of-the-art research in sustainable
CDCs. Furthermore, we debate the integration challenges and
open research issues to sustainable CDCs.
& 2016 Elsevier Ltd. All rights reserved.
Contents
1. Introduction
2. Background
2.1. Renewable energy in CDC
2.2. Waste heat utilization in CDC
2.3. Modular CDC designs
2.4. VM migrations
3. Case Studies
3.1. Parasol
3.2. Free lunch
3.3. Aquasar
3.4. MDC with free cooling
3.5. Facebook Arctic CDC
3.6. Green House Data
4. Renewable Energy based CDCs
4.1. Design
4.2. State-of-the-Art
4.2.1. Dynamic load balancing
4.2.2. Follow the renewables
4.2.3. Renewable based power capping
5. Waste heat utilization in CDCs
5.1. Design
5.2. State-of-the-Art
6. Modular data centers
6.1. Design
6.2. State-of-the-art
7. VM migration
7.1. Design
7.2. State-of-the-Art
8. Research Issues and Challenges
8.1. Renewable energy-CDC integration
8.2. Waste heat utilization
8.3. MDC
8.4. VM WAN migrations
9. Conclusion
Acknowledgments
References
1. Introduction
Cloud Data Centers (CDC) are increasingly being deployed by
Information Technology (IT) service providers, such as Google,
Amazon, and Microsoft to cater for world's digital needs. CDCs
provide an efficient infrastructure to store large amount of data
along with enormous processing capabilities. Business
objectives
and Service Level Agreements (SLA) demand that the storage
and
compute facilities be replicated redundantly to provide fault tol-
erance and minimal service delay. Therefore, IT service
providers
run data centers 24/7 with thousands of servers, storage, and
networking devices to ensure 99.99% availability of cloud
services
[1,2]. Our digital activities such as social media, search, file
sharing,
and streaming are creating huge amount of data. Each bit of
data
created needs to be processed, stored, and transmitted, adding to
energy costs and leaving environmental impact in the form of
Greenhouse Gas (GHG) emissions [3]. While sustainable energy
economy is one of the major challenges faced by the world com-
munity, CDCs have emerged as a major consumer of electricity.
The number and size of data centers have been increasing exponentially over the past decade to keep pace with the growing number of cloud based applications and users. CDCs are
estimated
to consume more than 2.4% of electricity worldwide with a
global
economic impact of $30 billion [4]. Despite advancements in IT
equipment efficiencies, data center electricity consumptions are
expected to grow 15-20% annually [5]. Additionally, CDCs are
responsible for emission of GHG produced during the process of
electricity generation, IT equipment manufacturing, and
disposal
[6,7]. It is estimated that the data centers were responsible for
78.7
million metric ton of CO2 emissions equal to 2% of global
emissions
in 2011 [8]. These figures advocate application of innovative
and
disruptive measures in CDCs for energy and carbon efficiency.
Power Usage Efficiency (PUE) and Carbon Usage Efficiency
(CUE)
are commonly applied sustainability indicators in CDCs. PUE is
defined as the ratio of total CDC energy usage to IT equipment
energy usage [9,10]. Energy wasted in measures other than com-
puting, such as cooling, leads to poor PUE values. CUE is the
ratio
of total CO2 emissions caused by CDC power consumption to
total
power used by the CDC. Complete dependency on fossil fuel
based
grid energy in CDCs leads to poor CUE values [11].
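As a worked illustration of these two indicators, the short sketch below computes PUE and CUE from the definitions given above; the function names and the annual figures are illustrative assumptions, not measurements from any CDC discussed in this survey.

```python
def pue(total_facility_energy_kwh, it_equipment_energy_kwh):
    """PUE: total CDC energy usage divided by IT equipment energy usage."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

def cue(total_co2_kg, total_facility_energy_kwh):
    """CUE: total CO2 emissions divided by total energy used by the CDC."""
    return total_co2_kg / total_facility_energy_kwh

# Hypothetical annual figures for a small CDC
it_energy    = 8_000_000    # kWh consumed by servers, storage, network
total_energy = 11_200_000   # kWh including cooling, lighting, and losses
co2          = 5_600_000    # kg of CO2 attributed to the energy mix

print(f"PUE = {pue(total_energy, it_energy):.2f}")        # 1.40
print(f"CUE = {cue(co2, total_energy):.2f} kgCO2/kWh")     # 0.50
```

A PUE close to 1 means almost all energy reaches the IT equipment, while a CUE close to 0 means the consumed energy is nearly carbon-free.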
Sustainable and green CDCs necessitate application of multiple
techniques and technologies to achieve lower energy costs and
GHG emissions. The main elements of sustainable CDCs are
[9]:
• On/off-site renewable energy generation techniques to reduce GHG emissions. Renewable energy resource powered CDCs lead to lower GHG emissions while eliminating fossil fuel based energy resources.
• Waste heat recovery and free cooling techniques to lower cooling costs, which make up 40% of total CDC energy consumption on average. Both renewable energy and waste heat utilization techniques in CDCs are dependent on geo-dispersed MDC designs and virtualization based workload migrations.
• Transportable Modular Data Center (MDC) designs that facilitate exploitation of renewable energy, waste heat, and free cooling opportunities in geo-dispersed locations.
• Virtualization based workload migrations that enable workload and resource management across geo-dispersed CDC nodes.
Renewable energy generation and free cooling techniques
require ideal climatic conditions which are dependent on the
location of CDC. Similarly, waste heat utilization requires co-
location of CDC with places suitable for waste heat recovery
opportunities, such as district heating. As MDC designs are
based
on shipping containers, they enable relocation of CDC nodes to
places where sustainable computing opportunities are abundant.
Hence, the opportunistic relocation of CDCs nodes is based on
two
factors: (a) on-site availability of renewable energy resource
and
(b) proximity to free cooling resources and waste heat recovery
opportunities [12,13]. Sustainable CDCs are supported by and
dependent on geo-dispersed MDC designs and virtualization
based
workload migration techniques. MDC shippable containers
allow
distribution of CDC nodes to optimal locations with sustainable
computing opportunities.
Moreover, virtualization of CDC resources allows efficient
migration of workloads between geo-dispersed data center
nodes
to pursue sustainable computing opportunities across the globe
[14]. IT service providers such as Google and Facebook have also emphasized migration from grid energy resources to
renewable
energy resources in geo-dispersed configurations [15,16]. Fig. 1 presents a model of green CDCs with application of techniques and technologies for sustainability.

Fig. 1. Elements of sustainable Cloud Data Center Model.
To the best of our knowledge, this is the first survey on sus-
tainable CDCs that covers all major factors of sustainability and
green economy in the cloud. Previous surveys have largely
focused
on a single aspect of sustainable CDCs. For instance, Oro et al.
[17]
reviewed renewable energy integration schemes for CDCs.
Ebra-
himi et al. [5] presented a survey on waste heat opportunities in
CDCs. Similarly, Ahmad et al. [18] surveyed the Virtual
Machine
(VM) based workload consolidation schemes in CDCs. A
compre-
hensive survey covering the major techniques and technologies
of
sustainable CDCs is not present in the literature. Furthermore,
open research issues and challenges in context of sustainable
CDCs
need to be investigated in detail. The major contributions of this
article are: (a) we classify state-of-the-art techniques and tech-
nologies enabling sustainable CDCs, (b), we detail cases studies
from IT industry and research community that advocate the
application of sustainability measures for CDCs, (c) we present
a
survey of existing studies in sustainable CDCs, and (d) we
highlight
research challenges and issues in adoption of sustainable and
green energy techniques and technologies among geo-
dispersed CDCs.
The rest of the paper is organized as follows. Section 2 provides
background knowledge to sustainable CDC techniques and tech-
nologies. Section 3 presents case studies from leading IT
compa-
nies and research community that demonstrate the benefits of
the
integration of renewable energy, waste heat recovery, geo-
dispersed MDC designs, and VM migration techniques in CDCs.
In Section 4, we examine adoption of renewable in CDCs with
corresponding taxonomy of solutions and summary of research
issues. Section 5 investigates waste heat utilization
opportunities
in CDCs. In Section 6, we elaborate on MDC architectures and
the
corresponding server, network, and cooling designs. Section 7
reviews Wide Area Network (WAN) VM migration techniques
in
context of geo-dispersed CDCs. In Section 8, we debate on
future
research directions and open challenges in the field of
sustainable
CDCs. Section 9 provides the concluding findings of our study.
2. Background
In this section, we provide basic knowledge of sustainable and
green CDCs. We provide a brief summary of enabling techniques
and
technologies for sustainable CDCs, namely, renewable energy,
waste heat utilization, modular CDC designs, and VM
migration.
2.1. Renewable energy in CDC
Sustainable and green computing requires application of both
energy efficiency measures and renewable energy resources to
lower energy and carbon footprint [6,19]. Brown energy
generated
from fossil fuels, such as coal, gas, and oil results in large
amount
of CO2 emissions. On the other hand, green energy produced
from
renewable resources, such as water, wind, and sun results in
almost zero CO2 emissions [20]. Hydroelectricity, although
cate-
gorized as green energy, is available only through grid
electricity
supplied by government corporations. On the contrary, solar and
wind energy can be generated with both on-site installations or
purchased from off-site corporations. The capital cost and
unpre-
dictability of renewable energy resources are barriers to their
widespread adoption [21]. However, cost/Watt of renewable
energy resources is estimated to halve in the next decade [22].
The
reduction in the cost/Watt of renewable energy is based on (a)
advancements in capacity of materials, such as photovoltaic
arrays,
(b) increase in storage capacity of rechargeable batteries, and
(c)
monetary incentives by governmental organizations for the inte-
gration of renewable energy resources [23]. The issue of unpre-
dictability in renewable energy supply can be addressed by
power-
supply load balancing and workload migration techniques
among
geo-dispersed CDCs [24,25]. Moreover, hybrid grid designs that
draw power from both steady grid resources and variable on-site
renewable energy sources are essential to guarantee 100% avail-
ability of cloud services [17]. However, abundant renewable
energy resources are often located away from commercial CDC
sites. Therefore, transportable MDC designs need to be utilized
to
locate CDC nodes near renewable energy resources [23]. The
integration of renewable energy in CDC results in lower CUE
metric. Higher capital costs and intermittency of renewable
energy
resources remain a challenge for widespread adoption in CDCs
[26].
2.2. Waste heat utilization in CDC
Fossil fuel deposits are diminishing at a rapid pace, calling for reuse of waste heat in all types of energy conversion systems.
Most
of the electric energy supplied to the CDC servers is converted
into
heat energy requiring deployment of large scale cooling systems
to
keep server rack temperatures in operational range [9]. As a
result,
40-50% of the electricity consumed by CDCs is used to cool
heat
dissipating servers [5]. With the advent of multi-core and stacked
server designs, power densities of servers have increased,
resulting
in increased cooling costs. Minimizing the energy used in
cooling
can lead to significant impact on energy efficiency in CDCs
[27].
However, reduction in cooling cost requires relocation of CDCs
to
places where free cooling resources are available in the form of
lower environment temperatures [15]. Multiple geographically
dispersed locations are also exploited for variable electricity
prices
[28]. Moreover, as most of the power supplied to the servers is
dissipated as heat, CDCs can act as heat generators for many
waste
heat recovery techniques [5]. Waste heat can be ideally applied
to
vapor-absorption based CDC cooling systems. When heat is
sup-
plied to a refrigerant in vapor-absorption based cooling, it eva-
porates while taking away heat from the system. In this manner,
application of waste heat utilization and free cooling techniques
results in ideal PUE values by neutralizing cooling costs while
powering vapor-absorption based CDC cooling systems [29].
Heat
generated by CDCs can also be supplied to district heating
facilities
in areas with lower temperatures. However, CDCs are often not
located in proximity of waste heat recovery locations.
Therefore,
either CDCs have to apply waste heat to internal vapor-
absorption
based cooling system, or relocate to proximity of a waste heat
recovery site. MDC shippable nodes are ideal to tap into waste
heat
recovery opportunities in geo-dispersed sites. Moreover, VM
based
workload migrations are also necessary to balance CDC load
between geo-dispersed computing nodes [22]. The main
challenge
to waste heat utilization is low heat quality in CDCs and higher
capital costs of heat exchange interfaces.
2.3. Modular CDC designs
CDCs need to intelligently tap into renewable energy resources
and waste heat utilization opportunities present at sites that are
often remote to commercial CDC buildings [30,13]. Modular
Data
Centers (MDC) enable location as an energy efficiency measure
as
they are built inside shipping containers that can be transported
to
remote locations. The container based MDC design offers two
desirable properties for sustainable CDCs. Firstly, the shippable
nature of MDC allows cloud providers to relocate their compute
facilities to geo-dispersed locations abundant with sustainability
opportunities. Secondly, the container based closed looped
system
of MDC is ideal for application of free cooling and waste heat
utilization measures [12]. The container design can efficiently
perform hot-aisle containment so that high grade waste heat can
be captured from the servers. Hot-aisle containment also leads
to
better cooling efficiency resulting in lower operational costs. In
a
generic MDC design, computing and cooling devices are setup
inside the container before shipment to a remote location. MDC
nodes provide flexibility to cloud service providers with
service-
free design as computing resources are setup before deployment
and not repaired or upgraded upon failure. The MDC node is
kept
in service until the assembled components provide a minimum
level of computational output [31,32]. MDC nodes can be
operated
as continuously moving entities searching for opportunistic
sustainability options, or static entities that are operated from a
location that has redundant sustainability options for CDCs
[30].
2.4. VM migrations
Virtualization technology lies at the core of CDC infrastructure
while providing resource management, resource consolidation,
and migration for energy efficiency and fault-tolerance [1,9].
Vir-
tualization adeptly manages existing cloud resources through
highly dynamic resource provisioning to significantly reduce
operational costs. The intermittent nature of renewable energy
resources and decentralized MDC nodes necessitate workload
migration while balancing workload among multiple geo-
dispersed nodes. Virtual Machine (VM) migration techniques
enable migration of workloads when on-site renewable energy
generation is low and available elsewhere in geo-dispersed
sites.
Similarly, virtualization also enables workload migration
between
distributed MDC nodes where some nodes leverage on-site
renewable energy while other nodes employ nearby waste heat
utilization opportunities [33,34]. Moreover, VM based workload
migration and consolidation techniques are utilized to pack a set
of VMs to fewer number of physical devices to balance
renewable
power generation and workload demand [35]. Researchers have
leveraged both MDC designs and VM migration techniques to
efficiently harness renewable energy resources and waste heat
utilization alternatives in green CDCs [36–38]. However, the
cost,
in terms of network delay and energy consumption, between
geo-
dispersed nodes is the foremost challenge to VM based
workload
migrations in CDCs.
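A minimal sketch of the consolidation idea described above, assuming identical hosts and a single CPU-demand dimension: the first-fit-decreasing heuristic shown here is a common textbook packing strategy used only for illustration, not a specific scheme from the surveyed literature.

```python
def consolidate(vms, host_capacity):
    """First-fit-decreasing sketch of VM consolidation.

    vms           : dict of VM name -> CPU demand (arbitrary units)
    host_capacity : identical capacity assumed for every host
    Returns a list of hosts (each a list of VM names); hosts not in the
    result can be powered down or matched to renewable supply.
    """
    hosts = []  # each host is [remaining_capacity, [vm names]]
    for name, demand in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        for host in hosts:
            if host[0] >= demand:        # fits on an already-open host
                host[0] -= demand
                host[1].append(name)
                break
        else:                            # no existing host fits: open a new one
            hosts.append([host_capacity - demand, [name]])
    return [h[1] for h in hosts]

vms = {"vm1": 4, "vm2": 2, "vm3": 3, "vm4": 1, "vm5": 2}
print(consolidate(vms, host_capacity=6))
# [['vm1', 'vm2'], ['vm3', 'vm5', 'vm4']]  -> two active hosts instead of five
```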
3. Case Studies
The relationship between sustainable CDCs techniques and
technologies is established and complemented by several case
studies carried out by the IT industry and published in scholarly
articles. Many IT companies including Apple, Google, and
Facebook
have added green and sustainable CDC nodes to their expanding
infrastructure [39]. In this section, we will present the case
studies
that report significantly efficient PUE values while leveraging
multiple sustainability measures, such as renewable energy,
MDC
design, and waste heat recovery. Table 1 summarizes the case studies of sustainable CDCs discussed in this section.
3.1. Parasol
Parasol [23] is a green CDC prototype based on four key tech-
niques and technologies of sustainable CDCs, namely, MDC
design,
on-site renewable energy generated through solar panels, free
cooling, and net-metering. A one year case study was conducted
with a MDC powered by on-site solar panels set on the rooftop of a
building located in New Jersey. Parasol works on dynamic load
balancing of CDC power between renewable and grid energy
defined by the CDC workload. The MDC container consists of
two
server racks that are free cooled whenever possible. The
workload
and power source scheduling is based on workload and power
predictions, existing power stored in batteries, analytical
models
of power consumption, peak power, and power costs. Excessive
renewable energy is either stored in batteries or net-metered to
the grid. The experimental results show 36% and 13% error
respectively in workload and solar power generation prediction
in
a 1 hour prediction time frame. The total grid electricity cost
was
reduced by 75% in the parasol design. Moreover, the Parasol
design
can amortize the capital cost of a solar setup without batteries
in
4.8 to 7.1 years with 60% government incentives. The study
esti-
mated that the efficiency of the photovoltaic technology (multi-crystalline silicon) will increase from 15% to 25% by 2030. Moreover, it is estimated that, on current space and capacity values, the space required by the solar panels to power a CDC is 47 times larger than that occupied by the racks. However, with the increasing capacity factor of solar technologies, the space requirement can decrease to 24 times by 2020–2030. According to the case study, the installed cost of solar energy will decrease by 50% by 2030. These forecasts depict that the cost and space requirements of sustainable CDCs will decrease significantly over the next decade.

Table 1
Case studies of sustainable CDCs.

Parasol. Objective: schedule workload based on weather, trace, and battery level inputs. Sustainability measures: MDC, renewable energy, hybrid grid, free cooling, net-metering. Approach: operate a MDC on a rooftop for maximum solar coverage. Assessment: PUE value 1.1; 75% lower energy costs; capital costs of solar energy may halve in the next 15 years.

Free Lunch. Objective: utilize renewable energy available in geo-dispersed MDC nodes placed in different time zones. Sustainability measures: renewable energy, geo-dispersed CDC, MDC design, VM migrations. Approach: migrate VMs to where free renewable energy is available using a dedicated network and limit the use of grid energy. Assessment: when renewable energy is low at one CDC node, most of the time the workload can be migrated to another node.

Aquasar. Objective: analyze a water and air cooled supercomputer for waste heat recovery. Sustainability measures: waste heat reuse, MDC design, free cooling. Approach: cool a MDC with water that is heated by the server heat dissipation. Assessment: PUE 1.15; 34% increase in exergetic efficiency; 80% of dissipated heat reused for cooling.

MDC with free cooling. Objective: compare MDC and conventional CDC designs for free cooling and energy efficiency. Sustainability measures: MDC design, free cooling, geo-dispersed, virtualization. Approach: simulate singular and multiple geo-dispersed data centers with different designs for evaluation. Assessment: PUE of 1.35; 34% increase in exergetic efficiency; MDC achieves up to 44% energy saving compared to conventional CDC design.

Facebook Arctic CDC. Objective: operate a CDC in the Arctic region to lower cooling costs. Sustainability measures: renewable energy (hydro), MDC design, free cooling, geo-dispersed. Approach: utilize Arctic air and water from a nearby river for free cooling while using grid energy based on hydro-power. Assessment: PUE value 1.07; 70% reduction in the number of backup generators due to hydro-power.

Green House Data. Objective: operate at 100% renewable energy. Sustainability measures: renewable energy, MDC geo-dispersed nodes, free cooling, virtualization. Approach: provide 100% sustainable cloud services. Assessment: PUE value of 1.14; 64.5% reduction in energy costs; elimination/reduction of carbon emissions.
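The Parasol power-source scheduling described in Section 3.1 can be illustrated with a greedy single-time-step dispatch sketch, assuming a solar-then-battery-then-grid ordering; the function name, signature, and numbers are illustrative assumptions, not the published Parasol controller.

```python
def dispatch(load_kw, solar_kw, battery_kwh, battery_cap_kwh, dt_h=1.0):
    """Greedy dispatch for one time step: serve the load from solar first,
    then battery, then grid; surplus solar charges the battery and any
    remainder is net-metered back to the grid."""
    from_solar = min(load_kw, solar_kw)
    residual   = load_kw - from_solar          # load still to be served
    surplus    = solar_kw - from_solar         # unused solar power

    # surplus solar -> battery first, then net-metering
    charge       = min(surplus * dt_h, battery_cap_kwh - battery_kwh)
    battery_kwh += charge
    net_metered  = surplus * dt_h - charge

    # residual load -> battery first, then grid
    discharge    = min(residual * dt_h, battery_kwh)
    battery_kwh -= discharge
    from_grid    = residual - discharge / dt_h

    return {"solar_kw": from_solar, "battery_kwh": battery_kwh,
            "grid_kw": from_grid, "net_metered_kwh": net_metered}

print(dispatch(load_kw=12, solar_kw=18, battery_kwh=3, battery_cap_kwh=5))
# {'solar_kw': 12, 'battery_kwh': 5, 'grid_kw': 0.0, 'net_metered_kwh': 4.0}
```

The ordering mirrors the behaviour described above: renewable supply is consumed first, and only the excess is stored or net-metered.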
3.2. Free lunch
Free Lunch [40] is a MDC architecture evaluated to experiment
with the viability of sustainable CDC elements. Free lunch is
based
on three principles of sustainability: (a) utilization of on-site
renewable energy through remote geo- dispersed CDCs, (b)
dedi-
cated high speed network connectivity between two CDC nodes,
and (c) VM based workload migrations. The study identified
vir-
tualization, MDC architecture, and renewable energy as key
enabling technologies for sustainable CDCs. The authors chose
two
locations (near the Red Sea and the Southwest of Australia)
ideal
for harvesting solar energy that are situated in different time
and
climatic zones to complement each other. Moreover, wind
turbines
of 1.5 MW power were modeled with year average climatic con-
ditions. The study assumed 10 km² of solar cells with 10% efficiency. It was found that throughout the year the power generated by the renewable energy sources dropped below the average demand (150 W per server) 615 times. In 331 of these instances, excess power was available at the other CDC node; in the remaining 284 instances, the servers had to be powered off.
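Under the stated assumptions (150 W per server and a single remote node), the Free Lunch migrate-or-power-off decision can be sketched as below; the function and its input figures are hypothetical and only mirror the policy described in the study.

```python
def free_lunch_decision(local_renewable_w, remote_surplus_w,
                        servers, demand_per_server_w=150):
    """If local renewable supply cannot cover the per-server demand,
    migrate as many servers' worth of load as the remote node's surplus
    can host and power off the rest."""
    demand = servers * demand_per_server_w
    if local_renewable_w >= demand:
        return {"run_local": servers, "migrate": 0, "power_off": 0}
    short = demand - local_renewable_w
    short_servers = -(-short // demand_per_server_w)           # ceiling division
    migratable = min(short_servers, remote_surplus_w // demand_per_server_w)
    return {"run_local": servers - short_servers,
            "migrate": int(migratable),
            "power_off": int(short_servers - migratable)}

print(free_lunch_decision(local_renewable_w=9_000, remote_surplus_w=3_500,
                          servers=100))
# {'run_local': 60, 'migrate': 23, 'power_off': 17}
```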
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx
Contents lists available at ScienceDirectOptical Switching a.docx

More Related Content

Similar to Contents lists available at ScienceDirectOptical Switching a.docx

An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...
An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...
An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...IJECEIAES
 
Energy efficient virtual network embedding for cloud networks
Energy efficient virtual network embedding for cloud networksEnergy efficient virtual network embedding for cloud networks
Energy efficient virtual network embedding for cloud networksieeepondy
 
A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C...
A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C...A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C...
A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C...Susheel Thakur
 
G-SLAM:OPTIMIZING ENERGY EFFIIENCY IN CLOUD
G-SLAM:OPTIMIZING ENERGY EFFIIENCY IN CLOUDG-SLAM:OPTIMIZING ENERGY EFFIIENCY IN CLOUD
G-SLAM:OPTIMIZING ENERGY EFFIIENCY IN CLOUDAlfiya Mahmood
 
An energy optimization with improved QOS approach for adaptive cloud resources
An energy optimization with improved QOS approach for adaptive cloud resources An energy optimization with improved QOS approach for adaptive cloud resources
An energy optimization with improved QOS approach for adaptive cloud resources IJECEIAES
 
Reliable and efficient webserver management for task scheduling in edge-cloud...
Reliable and efficient webserver management for task scheduling in edge-cloud...Reliable and efficient webserver management for task scheduling in edge-cloud...
Reliable and efficient webserver management for task scheduling in edge-cloud...IJECEIAES
 
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...IAEME Publication
 
DYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEY
DYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEYDYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEY
DYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEYijccsa
 
Sla based optimization of power and migration cost in cloud computing
Sla based optimization of power and migration cost in cloud computingSla based optimization of power and migration cost in cloud computing
Sla based optimization of power and migration cost in cloud computingNikhil Venugopal
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Power consumption and energy management for edge computing: state of the art
Power consumption and energy management for edge computing: state of the artPower consumption and energy management for edge computing: state of the art
Power consumption and energy management for edge computing: state of the artTELKOMNIKA JOURNAL
 
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...ijccsa
 
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...neirew J
 
Energy aware load balancing and application scaling for the cloud ecosystem
Energy aware load balancing and application scaling for the cloud ecosystemEnergy aware load balancing and application scaling for the cloud ecosystem
Energy aware load balancing and application scaling for the cloud ecosystemPvrtechnologies Nellore
 
A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C...
A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C...A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C...
A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C...cscpconf
 
A survey on dynamic energy management at virtualization level in cloud data c...
A survey on dynamic energy management at virtualization level in cloud data c...A survey on dynamic energy management at virtualization level in cloud data c...
A survey on dynamic energy management at virtualization level in cloud data c...csandit
 
Performance analysis of an energy efficient virtual machine consolidation alg...
Performance analysis of an energy efficient virtual machine consolidation alg...Performance analysis of an energy efficient virtual machine consolidation alg...
Performance analysis of an energy efficient virtual machine consolidation alg...IAEME Publication
 
Cawsac cost aware workload scheduling and admission control for distributed c...
Cawsac cost aware workload scheduling and admission control for distributed c...Cawsac cost aware workload scheduling and admission control for distributed c...
Cawsac cost aware workload scheduling and admission control for distributed c...ieeepondy
 
Energy-aware Load Balancing and Application Scaling for the Cloud Ecosystem
Energy-aware Load Balancing and Application Scaling for the Cloud EcosystemEnergy-aware Load Balancing and Application Scaling for the Cloud Ecosystem
Energy-aware Load Balancing and Application Scaling for the Cloud Ecosystem1crore projects
 

Similar to Contents lists available at ScienceDirectOptical Switching a.docx (20)

An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...
An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...
An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...
 
Energy efficient virtual network embedding for cloud networks
Energy efficient virtual network embedding for cloud networksEnergy efficient virtual network embedding for cloud networks
Energy efficient virtual network embedding for cloud networks
 
A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C...
A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C...A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C...
A Study on Energy Efficient Server Consolidation Heuristics for Virtualized C...
 
G-SLAM:OPTIMIZING ENERGY EFFIIENCY IN CLOUD
G-SLAM:OPTIMIZING ENERGY EFFIIENCY IN CLOUDG-SLAM:OPTIMIZING ENERGY EFFIIENCY IN CLOUD
G-SLAM:OPTIMIZING ENERGY EFFIIENCY IN CLOUD
 
An energy optimization with improved QOS approach for adaptive cloud resources
An energy optimization with improved QOS approach for adaptive cloud resources An energy optimization with improved QOS approach for adaptive cloud resources
An energy optimization with improved QOS approach for adaptive cloud resources
 
Reliable and efficient webserver management for task scheduling in edge-cloud...
Reliable and efficient webserver management for task scheduling in edge-cloud...Reliable and efficient webserver management for task scheduling in edge-cloud...
Reliable and efficient webserver management for task scheduling in edge-cloud...
 
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...
 
DYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEY
DYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEYDYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEY
DYNAMIC ENERGY MANAGEMENT IN CLOUD DATA CENTERS: A SURVEY
 
Sla based optimization of power and migration cost in cloud computing
Sla based optimization of power and migration cost in cloud computingSla based optimization of power and migration cost in cloud computing
Sla based optimization of power and migration cost in cloud computing
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)
 
Summer Intern Report
Summer Intern ReportSummer Intern Report
Summer Intern Report
 
Power consumption and energy management for edge computing: state of the art
Power consumption and energy management for edge computing: state of the artPower consumption and energy management for edge computing: state of the art
Power consumption and energy management for edge computing: state of the art
 
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
 
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
 
Energy aware load balancing and application scaling for the cloud ecosystem
Energy aware load balancing and application scaling for the cloud ecosystemEnergy aware load balancing and application scaling for the cloud ecosystem
Energy aware load balancing and application scaling for the cloud ecosystem
 
A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C...
A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C...A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C...
A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C...
 
A survey on dynamic energy management at virtualization level in cloud data c...
A survey on dynamic energy management at virtualization level in cloud data c...A survey on dynamic energy management at virtualization level in cloud data c...
A survey on dynamic energy management at virtualization level in cloud data c...
 
Performance analysis of an energy efficient virtual machine consolidation alg...
Performance analysis of an energy efficient virtual machine consolidation alg...Performance analysis of an energy efficient virtual machine consolidation alg...
Performance analysis of an energy efficient virtual machine consolidation alg...
 
Cawsac cost aware workload scheduling and admission control for distributed c...
Cawsac cost aware workload scheduling and admission control for distributed c...Cawsac cost aware workload scheduling and admission control for distributed c...
Cawsac cost aware workload scheduling and admission control for distributed c...
 
Energy-aware Load Balancing and Application Scaling for the Cloud Ecosystem
Energy-aware Load Balancing and Application Scaling for the Cloud EcosystemEnergy-aware Load Balancing and Application Scaling for the Cloud Ecosystem
Energy-aware Load Balancing and Application Scaling for the Cloud Ecosystem
 

More from dickonsondorris

Copyright © eContent Management Pty Ltd. Health Sociology Revi.docx
Copyright © eContent Management Pty Ltd. Health Sociology Revi.docxCopyright © eContent Management Pty Ltd. Health Sociology Revi.docx
Copyright © eContent Management Pty Ltd. Health Sociology Revi.docxdickonsondorris
 
Copyright © Pearson Education 2010 Digital Tools in Toda.docx
Copyright © Pearson Education 2010 Digital Tools in Toda.docxCopyright © Pearson Education 2010 Digital Tools in Toda.docx
Copyright © Pearson Education 2010 Digital Tools in Toda.docxdickonsondorris
 
Copyright © Jen-Wen Lin 2018 1 STA457 Time series .docx
Copyright © Jen-Wen Lin 2018   1 STA457 Time series .docxCopyright © Jen-Wen Lin 2018   1 STA457 Time series .docx
Copyright © Jen-Wen Lin 2018 1 STA457 Time series .docxdickonsondorris
 
Copyright © John Wiley & Sons, Inc. All rights reserved..docx
Copyright © John Wiley & Sons, Inc. All rights reserved..docxCopyright © John Wiley & Sons, Inc. All rights reserved..docx
Copyright © John Wiley & Sons, Inc. All rights reserved..docxdickonsondorris
 
Copyright © by The McGraw-Hill Companies, Inc. The Aztec Accou.docx
Copyright © by The McGraw-Hill Companies, Inc. The Aztec Accou.docxCopyright © by The McGraw-Hill Companies, Inc. The Aztec Accou.docx
Copyright © by The McGraw-Hill Companies, Inc. The Aztec Accou.docxdickonsondorris
 
Copyright © Cengage Learning. All rights reserved. CHAPTE.docx
Copyright © Cengage Learning.  All rights reserved. CHAPTE.docxCopyright © Cengage Learning.  All rights reserved. CHAPTE.docx
Copyright © Cengage Learning. All rights reserved. CHAPTE.docxdickonsondorris
 
Copyright © by Holt, Rinehart and Winston. All rights reserved.docx
Copyright © by Holt, Rinehart and Winston. All rights reserved.docxCopyright © by Holt, Rinehart and Winston. All rights reserved.docx
Copyright © by Holt, Rinehart and Winston. All rights reserved.docxdickonsondorris
 
Copyright © 2020 by Jones & Bartlett Learning, LLC, an Ascend .docx
Copyright © 2020 by Jones & Bartlett Learning, LLC, an Ascend .docxCopyright © 2020 by Jones & Bartlett Learning, LLC, an Ascend .docx
Copyright © 2020 by Jones & Bartlett Learning, LLC, an Ascend .docxdickonsondorris
 
Copyright © 2019, American Institute of Certified Public Accou.docx
Copyright © 2019, American Institute of Certified Public Accou.docxCopyright © 2019, American Institute of Certified Public Accou.docx
Copyright © 2019, American Institute of Certified Public Accou.docxdickonsondorris
 
Copyright © 2018 Pearson Education, Inc. All Rights ReservedChild .docx
Copyright © 2018 Pearson Education, Inc. All Rights ReservedChild .docxCopyright © 2018 Pearson Education, Inc. All Rights ReservedChild .docx
Copyright © 2018 Pearson Education, Inc. All Rights ReservedChild .docxdickonsondorris
 
Copyright © 2018 Pearson Education, Inc. C H A P T E R 6.docx
Copyright © 2018 Pearson Education, Inc. C H A P T E R  6.docxCopyright © 2018 Pearson Education, Inc. C H A P T E R  6.docx
Copyright © 2018 Pearson Education, Inc. C H A P T E R 6.docxdickonsondorris
 
Copyright © 2018 Capella University. Copy and distribution o.docx
Copyright © 2018 Capella University. Copy and distribution o.docxCopyright © 2018 Capella University. Copy and distribution o.docx
Copyright © 2018 Capella University. Copy and distribution o.docxdickonsondorris
 
Copyright © 2018 Pearson Education, Inc.C H A P T E R 3.docx
Copyright © 2018 Pearson Education, Inc.C H A P T E R  3.docxCopyright © 2018 Pearson Education, Inc.C H A P T E R  3.docx
Copyright © 2018 Pearson Education, Inc.C H A P T E R 3.docxdickonsondorris
 
Copyright © 2018 by Steven Levitsky and Daniel.docx
Copyright © 2018 by Steven Levitsky and Daniel.docxCopyright © 2018 by Steven Levitsky and Daniel.docx
Copyright © 2018 by Steven Levitsky and Daniel.docxdickonsondorris
 
Copyright © 2017, 2014, 2011 Pearson Education, Inc. All Right.docx
Copyright © 2017, 2014, 2011 Pearson Education, Inc. All Right.docxCopyright © 2017, 2014, 2011 Pearson Education, Inc. All Right.docx
Copyright © 2017, 2014, 2011 Pearson Education, Inc. All Right.docxdickonsondorris
 
Copyright © 2017 Wolters Kluwer Health Lippincott Williams.docx
Copyright © 2017 Wolters Kluwer Health  Lippincott Williams.docxCopyright © 2017 Wolters Kluwer Health  Lippincott Williams.docx
Copyright © 2017 Wolters Kluwer Health Lippincott Williams.docxdickonsondorris
 
Copyright © 2016, 2013, 2010 Pearson Education, Inc. All Right.docx
Copyright © 2016, 2013, 2010 Pearson Education, Inc. All Right.docxCopyright © 2016, 2013, 2010 Pearson Education, Inc. All Right.docx
Copyright © 2016, 2013, 2010 Pearson Education, Inc. All Right.docxdickonsondorris
 
Copyright © 2017 by University of Phoenix. All rights rese.docx
Copyright © 2017 by University of Phoenix. All rights rese.docxCopyright © 2017 by University of Phoenix. All rights rese.docx
Copyright © 2017 by University of Phoenix. All rights rese.docxdickonsondorris
 
Copyright © 2016 John Wiley & Sons, Inc.Copyright © 20.docx
Copyright © 2016 John Wiley & Sons, Inc.Copyright © 20.docxCopyright © 2016 John Wiley & Sons, Inc.Copyright © 20.docx
Copyright © 2016 John Wiley & Sons, Inc.Copyright © 20.docxdickonsondorris
 
Copyright © 2016 Pearson Education, Inc. .docx
Copyright © 2016 Pearson Education, Inc.                    .docxCopyright © 2016 Pearson Education, Inc.                    .docx
Copyright © 2016 Pearson Education, Inc. .docxdickonsondorris
 

More from dickonsondorris (20)

Copyright © eContent Management Pty Ltd. Health Sociology Revi.docx
Copyright © eContent Management Pty Ltd. Health Sociology Revi.docxCopyright © eContent Management Pty Ltd. Health Sociology Revi.docx
Copyright © eContent Management Pty Ltd. Health Sociology Revi.docx
 
Copyright © Pearson Education 2010 Digital Tools in Toda.docx
Copyright © Pearson Education 2010 Digital Tools in Toda.docxCopyright © Pearson Education 2010 Digital Tools in Toda.docx
Copyright © Pearson Education 2010 Digital Tools in Toda.docx
 
Copyright © Jen-Wen Lin 2018 1 STA457 Time series .docx
Copyright © Jen-Wen Lin 2018   1 STA457 Time series .docxCopyright © Jen-Wen Lin 2018   1 STA457 Time series .docx
Copyright © Jen-Wen Lin 2018 1 STA457 Time series .docx
 
Copyright © John Wiley & Sons, Inc. All rights reserved..docx
Copyright © John Wiley & Sons, Inc. All rights reserved..docxCopyright © John Wiley & Sons, Inc. All rights reserved..docx
Copyright © John Wiley & Sons, Inc. All rights reserved..docx
 
Copyright © by The McGraw-Hill Companies, Inc. The Aztec Accou.docx
Copyright © by The McGraw-Hill Companies, Inc. The Aztec Accou.docxCopyright © by The McGraw-Hill Companies, Inc. The Aztec Accou.docx
Copyright © by The McGraw-Hill Companies, Inc. The Aztec Accou.docx
 
Copyright © Cengage Learning. All rights reserved. CHAPTE.docx
Copyright © Cengage Learning.  All rights reserved. CHAPTE.docxCopyright © Cengage Learning.  All rights reserved. CHAPTE.docx
Copyright © Cengage Learning. All rights reserved. CHAPTE.docx
 
Copyright © by Holt, Rinehart and Winston. All rights reserved.docx
Copyright © by Holt, Rinehart and Winston. All rights reserved.docxCopyright © by Holt, Rinehart and Winston. All rights reserved.docx
Copyright © by Holt, Rinehart and Winston. All rights reserved.docx
 
Copyright © 2020 by Jones & Bartlett Learning, LLC, an Ascend .docx
Copyright © 2020 by Jones & Bartlett Learning, LLC, an Ascend .docxCopyright © 2020 by Jones & Bartlett Learning, LLC, an Ascend .docx
Copyright © 2020 by Jones & Bartlett Learning, LLC, an Ascend .docx
 
Copyright © 2019, American Institute of Certified Public Accou.docx
Copyright © 2019, American Institute of Certified Public Accou.docxCopyright © 2019, American Institute of Certified Public Accou.docx
Copyright © 2019, American Institute of Certified Public Accou.docx
 
Copyright © 2018 Pearson Education, Inc. All Rights ReservedChild .docx
Copyright © 2018 Pearson Education, Inc. All Rights ReservedChild .docxCopyright © 2018 Pearson Education, Inc. All Rights ReservedChild .docx
Copyright © 2018 Pearson Education, Inc. All Rights ReservedChild .docx
 
Copyright © 2018 Pearson Education, Inc. C H A P T E R 6.docx
Copyright © 2018 Pearson Education, Inc. C H A P T E R  6.docxCopyright © 2018 Pearson Education, Inc. C H A P T E R  6.docx
Copyright © 2018 Pearson Education, Inc. C H A P T E R 6.docx
 
Copyright © 2018 Capella University. Copy and distribution o.docx
Copyright © 2018 Capella University. Copy and distribution o.docxCopyright © 2018 Capella University. Copy and distribution o.docx
Copyright © 2018 Capella University. Copy and distribution o.docx
 
Copyright © 2018 Pearson Education, Inc.C H A P T E R 3.docx
Copyright © 2018 Pearson Education, Inc.C H A P T E R  3.docxCopyright © 2018 Pearson Education, Inc.C H A P T E R  3.docx
Copyright © 2018 Pearson Education, Inc.C H A P T E R 3.docx
 
Copyright © 2018 by Steven Levitsky and Daniel.docx
Copyright © 2018 by Steven Levitsky and Daniel.docxCopyright © 2018 by Steven Levitsky and Daniel.docx
Copyright © 2018 by Steven Levitsky and Daniel.docx
 
Copyright © 2017, 2014, 2011 Pearson Education, Inc. All Right.docx
Copyright © 2017, 2014, 2011 Pearson Education, Inc. All Right.docxCopyright © 2017, 2014, 2011 Pearson Education, Inc. All Right.docx
Copyright © 2017, 2014, 2011 Pearson Education, Inc. All Right.docx
 
Copyright © 2017 Wolters Kluwer Health Lippincott Williams.docx
Copyright © 2017 Wolters Kluwer Health  Lippincott Williams.docxCopyright © 2017 Wolters Kluwer Health  Lippincott Williams.docx
Copyright © 2017 Wolters Kluwer Health Lippincott Williams.docx
 
Copyright © 2016, 2013, 2010 Pearson Education, Inc. All Right.docx
Copyright © 2016, 2013, 2010 Pearson Education, Inc. All Right.docxCopyright © 2016, 2013, 2010 Pearson Education, Inc. All Right.docx
Copyright © 2016, 2013, 2010 Pearson Education, Inc. All Right.docx
 
Copyright © 2017 by University of Phoenix. All rights rese.docx
Copyright © 2017 by University of Phoenix. All rights rese.docxCopyright © 2017 by University of Phoenix. All rights rese.docx
Copyright © 2017 by University of Phoenix. All rights rese.docx
 
Copyright © 2016 John Wiley & Sons, Inc.Copyright © 20.docx
Copyright © 2016 John Wiley & Sons, Inc.Copyright © 20.docxCopyright © 2016 John Wiley & Sons, Inc.Copyright © 20.docx
Copyright © 2016 John Wiley & Sons, Inc.Copyright © 20.docx
 
Copyright © 2016 Pearson Education, Inc. .docx
Copyright © 2016 Pearson Education, Inc.                    .docxCopyright © 2016 Pearson Education, Inc.                    .docx
Copyright © 2016 Pearson Education, Inc. .docx
 

Recently uploaded

Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application ) Sakshi Ghasle
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxUnboundStockton
 
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,Virag Sontakke
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTiammrhaywood
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfadityarao40181
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
Class 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdfClass 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdfakmcokerachita
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfMahmoud M. Sallam
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Celine George
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting DataJhengPantaleon
 

Recently uploaded (20)

Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application )
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docx
 
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
Biting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdfBiting mechanism of poisonous snakes.pdf
Biting mechanism of poisonous snakes.pdf
 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
Class 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdfClass 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdf
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 

Contents lists available at ScienceDirectOptical Switching a.docx

Towards addressing this inefficiency, a promising solution receiving the spotlight is the incorporation of renewable energy generators such as PhotoVoltaic (PV) panels and wind turbines into the design of datacenters, i.e., achieving "sustainable" datacenters which reduce not only the electricity cost but also the carbon footprint. Renewable energy generation is rapidly becoming an attractive candidate for designing green datacenters in academia. Recently, researchers have proposed several studies to integrate renewable energy sources into cloud datacenters. A cost optimization model considering both the renewable energy source and the cooling infrastructure has been proposed to realize the potential of sustainable cloud datacenters [9]; it employs demand shifting, which schedules non-interactive workload so as to maximize the utilization of the renewable power source.
The energy storage management of sustainable cloud datacenters has been proposed to minimize the cloud service provider's electricity cost [10,11]. A scheduling scheme for parallel batch jobs has been proposed in order to maximize the utilization of green energy while ensuring the Service Level Agreements (SLAs) of the requests [16].

However, there are still remaining challenges in achieving energy efficient sustainable cloud datacenters. First, each cloud datacenter has a heterogeneous server architecture, i.e., datacenters require different amounts of power even for serving the same amount of workload. The server heterogeneity is caused by hardware upgrades, capacity extension, and the replacement of peripheral devices [6-8]. Nevertheless, traditional cloud datacenter management schemes assume that all cloud datacenters have a homogeneous server architecture with the same power efficiency, although this assumption is unrealistic for most cloud resource providers. Second, greening cloud datacenters and assuring Quality-of-Service (QoS) are conflicting goals in resource management. In particular, performance degradation can be induced by VM co-location interference when multiple VM instances are running on a common physical server in a cloud datacenter [12]. As more VM instances are packed onto common servers, the required number of active servers decreases, while the resource contention deteriorates. This means that energy consumption is reduced at the expense of the QoS assurance of VM request processing. It is important to find a desirable tradeoff between these two goals corresponding to the dynamic workload level.
To solve these challenges, we propose an Evolutionary Energy Efficient Virtual Machine Allocation (EEE-VMA) approach, which relies on an energy optimization model for sustainable cloud datacenters that have heterogeneous power efficiency and renewable energy generators. This paper makes four contributions, as follows.

First, our approach tries to find a near-optimal solution for VM request allocation by applying a Genetic Algorithm (GA) that considers both the renewable energy cost and the traditional utility grid cost. The fundamental energy saving strategy adopted in EEE-VMA is Dynamic Right Sizing (DRS), which makes cloud datacenters power-proportional (i.e., they consume power only in proportion to the workload level) by adjusting the number of active servers in response to the actual workload, that is, by adaptively "right-sizing" the datacenter [3,5]. In DRS, energy saving is achieved by allowing idle servers that do not have any running VM instances to be put into a low-power mode (e.g., sleep or hibernation). Note that our proposed energy consumption model for the EEE-VMA approach includes the switching cost of DRS, which is incurred by toggling a server from low-power mode into active mode (i.e., the wake-up transition). This makes our proposed approach more practical for energy efficient cloud datacenters in the real world.
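To make the DRS strategy concrete, the following minimal Python sketch estimates the energy of one scheduling interval when the number of active servers is right-sized to the VM load; all power figures, the linear power model, and the per-wake-up switching energy are illustrative assumptions, not the paper's measured values.

import math

def drs_energy_cost(active_prev, vm_load, vms_per_server,
                    p_idle=100.0, p_peak=250.0, p_sleep=10.0,
                    switch_cost=25.0, n_servers=500, interval_h=1.0):
    """Estimate one interval's energy (Wh) under Dynamic Right Sizing.

    Assumed parameters: p_idle/p_peak are per-server power draws (W) at
    0%/100% utilization, p_sleep is the low-power-mode draw (W), and
    switch_cost is the extra energy (Wh) paid to wake one sleeping server.
    """
    # Right-size: keep only as many servers active as the workload needs.
    active_now = min(n_servers, math.ceil(vm_load / vms_per_server))

    # Utilization of the active servers (0..1).
    util = min(1.0, vm_load / (active_now * vms_per_server)) if active_now else 0.0

    # Linear power model for active servers, constant draw for sleeping ones.
    p_active = active_now * (p_idle + (p_peak - p_idle) * util)
    p_sleeping = (n_servers - active_now) * p_sleep

    # Switching cost is paid only for servers woken up in this interval.
    woken = max(0, active_now - active_prev)

    energy_wh = (p_active + p_sleeping) * interval_h + woken * switch_cost
    return active_now, energy_wh

# Example: load rises from 40 to 55 VM requests, 4 VMs per server.
servers, wh = drs_energy_cost(active_prev=10, vm_load=55, vms_per_server=4)
print(servers, round(wh, 1))

Here the switching term grows only when the right-sized server count exceeds the previous interval's count, mirroring the wake-up transition cost described above.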
  • 7. adopt the heterogeneous power efficiency of each cloud datacenters, we propose a novel metric called powerMark to quantize the power efficiency of servers by measuring their power consumption at each utilization level of resources such as CPU, memory, and I/O bandwidth. Especially, we compute powerMark for serveral types of server by measuring their power consumption for pro- cessing CPU-intensive applications.Through powerMark, we are able to determine the allocation priority of each cloud datacenter based on their power efficiency so as to improve the performance of energy saving. Third, we achieve the significant energy saving of cloud datacenters while minimizing the performance degrada- tion caused by VM co-location interference through our EEE-VMA approach. The workload model including both of the number of co-located VM instances and the resource utilization which are key factors reflecting VM co-location interference is applied to the cost model of the EEE-VMA approach. Moreover, we consider the bandwidth cost between cloud service users and cloud datacenters as an additional part contributing the QoS deterioration of VM request processing [31,32]. The desirable cloud datacenter selection for each VM request assignment are conducted with consideration for both of energy saving and QoS assurance corresponding to the dynamic workload level. Finally, we conduct extensive experiments through simulations at various workload levels based on real-world traces such as dynamic capacity of renewable energy and electricity prices of traditional grid power [9,18–20], and the implementation of testbed with a power measuring Table 1 Set of key notations. Notation Description
  • 8. DC The set of cloud datacenters CRB The set of CRBs F The set of flavor types of VM request supported by cloud resour RC The set of resource components such as CPU and memory Λi tð Þ The set of VM requests arrived at whole CRBs at time t X tð Þ Resource allocation plan of VM requests from CRBs to cloud datac each VM request M tð Þ DRS plan of cloud datacenters at time t, which determines the n S tð Þ A solution including resource allocation plan X tð Þ and DRS plan Dj tð Þ Performance degradation of cloud datacenter DCj by CPU resour UR The set of predetermined resource utilization levels pwMjrch i An average power consumption per an unit level of utilization o pivotS The predetermined pivot server used as a criterion of resource c ej tð Þ The energy consumption of cloud datacenter DCj at time t ctotal tð Þ The total cost of whole cloud datacenters at time t f EEE �VMA Uð Þ The objective function to get ctotal tð Þ in the EEE-VMA solver device called Yocto-Watt to measure a real power con- sumption of several cloud server types [21]. The rest of the paper is organized as follows. Section 2 gives an overview of the proposed system architecture of multiple cloud datacenters and cloud request brokers. In Section 3, the objective cost model including workload and
energy consumption models with powerMark, is formulated. Our EEE-VMA approach based on a Genetic Algorithm is proposed in Section 3.4 to obtain an approximated optimal solution minimizing the total cost of cloud datacenters. Section 4 presents the various experimental results that demonstrate the effectiveness of our proposed approach based on real-world traces. The conclusion is given in Section 5.

2. System architecture and design

Our considered cloud environment, including multiple Cloud Request Brokers (CRBs) which support mesh networking with distributed multiple cloud datacenters, is depicted in Fig. 1. There are h CRBs and m cloud datacenters with h × m communication links. In each cloud datacenter, the information of resource utilization, the available renewable energy, and the power consumption of each server is collected through monitoring modules and power measuring devices, and reported to the Cloud Request Broker Manager (CRBM), which is responsible for solving the allocation of VM requests submitted to CRBs. The CRBM has two modules: the powerMark analyzer and the EEE-VMA solver. The powerMark analyzer is responsible for capturing the power efficiency of each cloud datacenter through our proposed novel metric called powerMark. We describe this metric in detail in Section 3. The EEE-VMA solver is responsible for finding a near optimal solution of VM request allocation from CRBs to the cloud datacenters. The solution derived by the EEE-VMA solver based on the amount of submitted VM requests in each CRB and the reported information from each cloud datacenter is delivered to whole CRBs, and all the submitted VM requests are allocated to their destined cloud datacenters.

The owner of cloud datacenters has to minimize the cost of resource operation while boosting benefits, which can be realized when cloud service users perceive good QoS from the cloud services. In this paper, our EEE-VMA solver tries to find a solution that minimizes the total cost of resource operation, including three sub cost models: energy consumption cost, bandwidth cost, and performance degradation cost. From the perspective of energy consumption cost, the EEE-VMA solver tries to maximize the utilization of renewable energy with consideration of the dynamic capacity of each renewable energy generator, since the price of renewable energy is much lower than that of grid energy. VM requests from CRBs are preferably allocated to cloud datacenters which have the higher capacity of renewable energy and the higher power efficiency (i.e., the lower powerMark value). From the perspective of bandwidth cost, the EEE-VMA tends to route VM requests to cloud datacenters having the cheaper bandwidth cost. Obviously, different pairs of CRB and cloud datacenter have different bandwidth costs according to the hop distance and the amount of transferred data of routed VM requests. Therefore, it is clear that VM requests need to be allocated to the closest cloud datacenter to their source CRB in order to minimize the bandwidth cost. To simplify our model, we assume that the transferred data size of each VM request is known to the EEE-VMA solver in the CRBM beforehand. From the perspective of performance degradation cost, the EEE-VMA solver tries to spread whole VM requests over multiple cloud datacenters in order to avoid QoS deterioration of VM request processing. In cloud datacenters, the VM co-location interference is the key factor that makes servers undergo severe performance degradation [12,22]. The VM co-location interference is caused by resource contention, which is reflected mainly by the number of co-located VM instances and their resource utilization. In brief, the VM co-location interference grows as more VM instances are co-located on the common server and the higher resource utilization occurs. Therefore, VM requests have to be scattered as much as possible to avoid performance degradation by VM co-location interference. Because of the complexity of optimization for the aggregated cost model, the EEE-VMA solver adopts a GA based metaheuristic to obtain a near optimal solution of VM request allocation within an acceptable computation time. In the next section, we propose a mathematical model to describe the cost of cloud datacenters and describe the metric powerMark in detail. The set of involved key notations is shown in Table 1.
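To make the notation above concrete for the sketches that follow, the snippet below (illustrative Python, not part of the original paper) shows one possible way a CRBM implementation could represent flavor types, datacenters, and a candidate solution S(t) = (X(t), M(t)); all class and field names are our own assumptions.

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass(frozen=True)
class Flavor:
    # A flavor type F_k (e.g. m3.medium) with its resource demand r^k_<rc>.
    name: str
    cpu_cores: int   # r^k_<rc=CPU>
    mem_gb: int      # r^k_<rc=mem>

@dataclass
class Datacenter:
    # Static description of a cloud datacenter DC_j.
    name: str
    num_servers: int        # N(DC_j)
    server_cpu_cores: int   # scp^j_<rc=CPU>
    server_mem_gb: int      # scp^j_<rc=mem>

@dataclass
class Solution:
    # A candidate solution S(t) = (X(t), M(t)).
    # x[(crb, flavor, dc)] = number of VM requests of that flavor routed
    # from CRB_i to DC_j at time t; m[dc] = active servers in DC_j at time t.
    x: Dict[Tuple[str, str, str], int] = field(default_factory=dict)
    m: Dict[str, int] = field(default_factory=dict)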
3. Problem formulation

3.1. Workload model

There are many different kinds of workloads in cloud datacenters, which can be classified into two categories: interactive or transactional (delay-sensitive) workloads and non-interactive or batch (delay-tolerant) workloads [9]. The interactive workloads, such as Internet web services and multimedia streaming services, have to be processed within a certain response time defined by service users. They are often network I/O intensive jobs which have less impact on the power consumption of servers. In contrast, the batch workloads, such as scientific applications and big data analysis, can be scheduled to process anytime as long as the whole tasks are finished before the predetermined deadline. They are usually computation intensive jobs that require a lot of CPU utilization, causing a significant power consumption of servers. In this paper, we are interested in the computation intensive batch workloads since they have a greater influence on server power consumption than interactive workloads. We assume that all the VM requests have computation intensive workloads, and that the resource contention always occurs in the CPU resource. A workload λ^k_i(t) ∈ Λ_i(t) denotes the number of arrived VM requests with a required flavor type (e.g., instance type such as m3.medium or c4.large in Amazon EC2) F_k ∈ F at the CRB_i ∈ CRB at time t [29]. We use r^k_⟨rc⟩ to denote the required amount of resource component rc ∈ RC by a VM request with flavor type F_k, where RC = {rc_CPU, rc_MEM}. For example, r^k_⟨rc⟩ such that F_k = m3.medium and rc = rc_CPU represents the required number of CPU cores by a VM request whose flavor type is m3.medium. When multiple VM requests arrive at the CRB, the CRB decides to which cloud datacenter each VM request should be routed for processing. We assume no data buffering at the CRB, so that whenever a VM request arrives at the CRB, it is routed to a cloud datacenter for processing immediately [11]. We denote the number of VM requests with the flavor type F_k routed from the CRB_i to DC_j at time t as x^{i,k}_j(t), which is derived by a resource allocation plan for cloud datacenters, X(t). Then we have the following constraints:

$$\sum_{\forall DC_j \in \mathcal{DC}} x^{i,k}_j(t) = \lambda^k_i(t), \quad \forall CRB_i \in \mathcal{CRB},\ \forall F_k \in \mathcal{F},\ \forall t \qquad (1)$$

$$0 \le x^{i,k}_j(t) \le \lambda^k_i(t), \quad \forall CRB_i \in \mathcal{CRB},\ \forall F_k \in \mathcal{F},\ \forall DC_j \in \mathcal{DC},\ \forall t \qquad (2)$$

Eq. (1) means that the total number of VM requests arrived at CRBs must agree with the number of whole VM requests allocated to cloud datacenters. Another constraint we should consider is the resource capacity of the cloud datacenter. Each cloud datacenter can only accommodate VM requests within its resource capacity (e.g., the total number of CPU cores). Then, we have the following constraints:

$$\sum_{\forall CRB_i \in \mathcal{CRB}} \sum_{\forall F_k \in \mathcal{F}} r^k_{\langle rc = rc_{CPU}\rangle} \cdot x^{i,k}_j(t) \le scp^j_{\langle rc = rc_{CPU}\rangle} \cdot m_j(t), \quad \forall DC_j \in \mathcal{DC},\ \forall t \qquad (3)$$

$$\sum_{\forall CRB_i \in \mathcal{CRB}} \sum_{\forall F_k \in \mathcal{F}} r^k_{\langle rc = rc_{mem}\rangle} \cdot x^{i,k}_j(t) \le scp^j_{\langle rc = rc_{mem}\rangle} \cdot m_j(t), \quad \forall DC_j \in \mathcal{DC},\ \forall t \qquad (4)$$

$$0 \le m_j(t) \le N(DC_j), \quad \forall DC_j \in \mathcal{DC},\ \forall t \qquad (5)$$

where r^k_⟨rc=rc_CPU⟩ and r^k_⟨rc=rc_mem⟩ are the required CPU cores and memory size of a VM request with flavor type F_k ∈ F, and scp^j_⟨rc⟩ is the physical capacity of resource component rc ∈ RC of an arbitrary server in the cloud datacenter DC_j.
Constraints (3) and (4) represent that allocated VM requests cannot exceed the resource capacity provided by the cloud datacenter DC_j. We use m_j(t) to denote the number of active servers in cloud datacenter DC_j at time t; it is determined by a DRS plan M(t), and its upper bound is N(DC_j), the total number of physical servers in the cloud datacenter DC_j. Constraint (5) represents that m_j(t) can be determined in the range of 0 to N(DC_j) through the DRS plan. m_j(t) = 0 means that all servers in the cloud datacenter DC_j are in the sleep state, while m_j(t) = N(DC_j) means that they are all in the active state at time t.

Next, we consider VM co-location interference to build a performance degradation model of resource allocation in the cloud datacenter [12]. The VM co-location interference implies that cloud virtualization supports resource isolation explicitly when multiple VM requests are running simultaneously on a common PM, but it does not guarantee performance isolation between VM requests internally. From the perspective of the CPU resource, physical CPU cores of the server are not pinned to each running VM request, but assigned dynamically. The switching overhead of the dynamic CPU assignment policy might cause undesirable performance degradation of allocated VM requests. Moreover, the CPU resource contention aggravates the performance degradation since it is very difficult to isolate the cache space of the CPU. There is a strong relationship between VM co-location interference and the number of co-located VMs in a PM [12]. The more VM instances are co-located, the more severe the VM co-location interference becomes. Based on [12], we estimate the performance degradation D_j(t) of the cloud datacenter DC_j ∈ DC by the CPU resource contention at time t as follows:

$$D_j(t) = \frac{\sum_{\forall CRB_i \in \mathcal{CRB}} \sum_{\forall F_k \in \mathcal{F}} x^{i,k}_j(t) \cdot r^k_{\langle rc = rc_{CPU}\rangle} \cdot \left( vu_{\langle rc = CPU\rangle}(t) + ts_j(t) \right)}{scp^j_{\langle rc = rc_{CPU}\rangle} \cdot m_j(t)}, \quad \forall DC_j \in \mathcal{DC},\ \forall t \qquad (6)$$

where ts_j(t) is an average allocated time slice determined by the Hypervisor [25,28] for VM requests allocated to the cloud datacenter DC_j at time t. We use vu_⟨rc⟩(t) to denote the average utilization of assigned virtual resources of whole VM requests allocated to cloud datacenters at time t. Note that in Eq. (6), ts_j(t) and r^k_⟨rc=CPU⟩ can be known in advance, while vu_⟨rc⟩(t) cannot be recognized beforehand until the utilization of the CPU resource is measured through the internal monitoring module of each server in the cloud datacenters at time t [24]. Therefore, it is required to use the historical information of CPU resource utilization of VM requests to find the optimal solution of resource allocation for the current workload. As shown in Fig. 1, the data repository module is responsible for collecting and storing the monitoring information of resource utilization of each VM request to estimate the future demand. Our EEE-VMA solver uses the historical data of the resource utilization from the data repository module in each cloud datacenter to estimate the expected performance degradation of solution candidates.
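As an illustration only, the following sketch (continuing the Python structures given after Section 2) checks the feasibility constraints (1)-(5) for a candidate solution and evaluates the degradation D_j(t) of Eq. (6); the arrivals, vu_cpu and ts_j arguments are assumed to be supplied by the monitoring and data repository modules.

def is_feasible(solution, arrivals, flavors, dcs, crbs):
    # Constraints (1)-(5): conservation of requests and per-datacenter capacity.
    for i in crbs:
        for f in flavors:
            routed = sum(solution.x.get((i, f, dc.name), 0) for dc in dcs)
            if routed != arrivals[(i, f)]:                     # Eq. (1); Eq. (2) holds
                return False                                   # for non-negative genes
    for dc in dcs:
        if not 0 <= solution.m[dc.name] <= dc.num_servers:     # Eq. (5)
            return False
        cpu = sum(flavors[f].cpu_cores * solution.x.get((i, f, dc.name), 0)
                  for i in crbs for f in flavors)
        mem = sum(flavors[f].mem_gb * solution.x.get((i, f, dc.name), 0)
                  for i in crbs for f in flavors)
        if cpu > dc.server_cpu_cores * solution.m[dc.name]:    # Eq. (3)
            return False
        if mem > dc.server_mem_gb * solution.m[dc.name]:       # Eq. (4)
            return False
    return True

def degradation(solution, flavors, dc, crbs, vu_cpu, ts_j):
    # Performance degradation D_j(t) of Eq. (6) for datacenter dc.
    load = sum(flavors[f].cpu_cores * solution.x.get((i, f, dc.name), 0) * (vu_cpu + ts_j)
               for i in crbs for f in flavors)
    active = solution.m[dc.name]
    if active == 0:
        # Constraints (3)-(5) already forbid allocating requests to a fully slept datacenter.
        return 0.0 if load == 0 else float('inf')
    return load / (dc.server_cpu_cores * active)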
3.2. Energy consumption model

3.2.1. The renewable energy model

Renewable energy such as PV and wind energy is more sustainable than the traditional grid power; its price is low and less carbon is emitted [9]. There are two models to achieve sustainable cloud datacenters by deploying renewable energy generation. One is on-site deployment of renewable energy generation at the datacenter facility itself. For example, Apple has built its own local biogas fuel cells and two 20-MW solar arrays in Maiden, NC, and they have been powered by 100% renewable energy sources [26,27]. Such an on-site renewable energy generator can alleviate energy losses due to the transmission and distribution of generated energy, but its energy potential depends greatly on the location of the cloud datacenter. The other model is building the renewable energy generator at off-site facilities. It has the flexibility to locate the generator in a location with good weather (e.g., strong wind speed or bright sunshine), but significant transmission losses of energy can occur. In this paper, we use the first model, which has been adopted by most major datacenter owners. We denote rwe_j(t) and rpe_j(t) as the dynamic capacity of renewable wind energy and renewable photovoltaic energy of the cloud datacenter DC_j ∈ DC at time t, respectively. Obviously, it is required to forecast the future capacity of renewable energy to achieve energy efficient resource management of cloud datacenters, since renewable sources are usually intermittent and irregular. Therefore, we estimate the future capacity of renewable energy generation by using the historical data from the data repository module in the cloud datacenter through calculating Exponentially Weighted Moving Average (EWMA) values. The detailed description of the EWMA based forecasting scheme for the estimated capacity of renewable energy is omitted in this paper.
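A minimal sketch of such an EWMA forecast is given below; the smoothing factor and function name are illustrative assumptions, since the paper omits the details of its forecasting scheme.

def ewma_forecast(history, alpha=0.3):
    # One-step EWMA estimate of renewable capacity (e.g. rwe_j(t) or rpe_j(t)).
    # history: past capacity samples from the data repository, oldest first.
    if not history:
        return 0.0
    estimate = history[0]
    for sample in history[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

# Example: forecast the next-slot wind energy (kW) from recent measurements.
print(ewma_forecast([120.0, 95.0, 140.0, 110.0]))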
3.2.2. Heterogeneous power consumption model

We propose a novel power efficiency metric called powerMark to evaluate the heterogeneous power consumption of cloud datacenters. Servers composing each cloud datacenter have heterogeneous architectures, which implies that the specifications of their resources are different; consequently, even though they process the same application, the required power consumption might be different [8,13]. To describe powerMark in detail, we propose Definitions 1 and 2 as below.

Definition 1. (powerMark): The powerMark pwM^j_⟨rc⟩ is an average power consumption per unit level of utilization of resource component rc ∈ RC of servers in the cloud datacenter DC_j.

Definition 2. (pivot server): The predetermined pivot server pivotS is used as a criterion of resource capacity for normalizing the powerMark of each cloud datacenter.

We propose the novel concept of the pivot server pivotS in Definition 2 to normalize the powerMark of each cloud datacenter. For simplicity, we assume that servers in the same cloud datacenter have power-homogeneity to each other. To obtain the powerMark value, we predetermine the set of resource utilization levels UR = {ur_1, ur_2, …, ur_k}. The powerMark pwM^j_⟨rc⟩ represents the power efficiency of servers in the cloud datacenter DC_j with respect to a certain resource component rc ∈ RC by calculating an arithmetic mean of the power consumption measured at each resource utilization level ur_k ∈ UR, ∀ur_k > 0. For example, if we set UR = {ur_1 = 0.1, ur_2 = 0.2, …, ur_9 = 0.9} and rc = rc_CPU, then the power consumption of the server is measured at each CPU utilization level 0.1, 0.2, …, 0.9, respectively. Based on the data of measured power consumption, the powerMark pwM^j_⟨rc⟩ is given by

$$pwM^j_{\langle rc\rangle} = \frac{1}{|U\mathcal{R}|} \sum_{\forall ur_k \in U\mathcal{R}} \frac{pw^j_{\langle rc\rangle, ur_k}}{ur_k} \qquad (7)$$

$$npwM^j_{\langle rc\rangle} = \frac{1}{|U\mathcal{R}|} \sum_{\forall ur_k \in U\mathcal{R}} \frac{\dfrac{scp^{pivot}_{\langle rc\rangle}}{scp^{j}_{\langle rc\rangle}} \cdot pw^j_{\langle rc\rangle, ur_k}}{ur_k} \qquad (8)$$

where pw^j_⟨rc⟩,ur_k is the power consumption of servers in the cloud datacenter DC_j at the utilization level ur_k of the resource component rc. npwM^j_⟨rc⟩ is the normalized value of pwM^j_⟨rc⟩ based on the capacity of resource component rc of the pivot server pivotS, where (scp^pivot_⟨rc⟩ / scp^j_⟨rc⟩) · pw^j_⟨rc⟩,ur_k is the normalized value of pw^j_⟨rc⟩,ur_k.
The lower powerMark represents the higher power efficiency of the cloud datacenter, and with a larger |UR|, powerMark can more accurately describe the power efficiency of the cloud datacenter.
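The two metrics can be computed directly from the measured power samples, as in the following sketch (illustrative Python; the dictionary-based interface is our own assumption).

def powermark(power_at_level):
    # powerMark pwM^j_<rc> of Eq. (7): arithmetic mean of power per unit of utilization.
    # power_at_level maps a utilization level ur_k (e.g. 0.1 .. 0.9) to the measured
    # power consumption pw^j_<rc>,ur_k of a server at that level.
    return sum(pw / ur for ur, pw in power_at_level.items()) / len(power_at_level)

def normalized_powermark(power_at_level, scp_pivot, scp_j):
    # Normalized powerMark npwM^j_<rc> of Eq. (8), scaled by the pivot server capacity.
    scale = scp_pivot / scp_j
    return sum(scale * pw / ur for ur, pw in power_at_level.items()) / len(power_at_level)

# Example: power (W) of one server type measured at CPU utilization levels 0.1 .. 0.9.
levels = {round(0.1 * k, 1): 60.0 + 15.0 * k for k in range(1, 10)}
print(powermark(levels), normalized_powermark(levels, scp_pivot=4, scp_j=8))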
  • 22. worst performance in terms of energy consumption. Fig. 3 shows the calculated normalized powerMark npwM values with rc ¼ rcCPU of Server-1, 2, and 3 based on Eq. (8). Note that the difference of normalized powerMark npwM values among servers of Fig. 3 is bigger than the one of powerMark pwM values of Fig. 2. The Server-3 has the smallest value of npwM, which means that this server has the best power efficiency among three servers, and this is in concordance with the results in Fig. 2. Based on result curves in Figs. 2 and 3, we conclude that our pro- posed metric powerMark is simple and useful to represent the relative power efficiency of heterogeneous cloud datacenters in practice. 3.2.3. Dynamic right sizing model To achieve power-proportional cloud datacenter which consumes power only in proportion to the work- load, we consider DRS approach which adjusts the number of active servers by turning them on or off dynamically [3]. Obviously, there is no need to turn all the servers in cloud datacenter on when the total workload is low. In DRS approach, the state of servers which have no running applications can be transit to the power saving mode (e.g., sleep or hibernation) in order to avoid wasting energy as shown in Fig. 4. In order to successfully deploy DRS approach onto our system, we should consider the switching overhead for adjusting the number of active servers (i.e., for turning sleep ser- vers on again). The switching overhead includes: (1) additional energy consumption by transition from sleep to active state (i.e., clocks (GHz) Cache size (kB) Memory size (GB) 8192 3
  • 23. 6144 8 8192 16 Fig. 2. Normalized power consumption results of Server-1, 2, 3 under execution of Montage applications as an example. Fig. 3. Results of normalized powerMark of Server-1, 2, 3 based on Eq. (7). Y. Peng et al. / Optical Switching and Networking 23 (2017) 225–240 231 awaken transition); (2) wear-and-tear cost of server; (3) fault occurrence by turning sleep servers on when toggled is high [3]. We only consider the energy con- sumption as the overhead by DRS execution. Therefore, we define a constant αaWaken to denote the amount of energy consumption for awaken transition of servers. Then the total energy consumption ej tð Þ of cloud datacenter DCj at time t is defined as follows, ej tð Þ ¼ X 8rcAℛ pwM j ⟨ rc⟩ U P 8CRBi ACℛℬ P 8Fk Aℱ r
  • 24. k ⟨ rc⟩ Ux i; k j tð ÞUvurc tð Þ scpjrc Umj tð Þ ! þαaWaken � mj tð Þ �mj t�1ð Þ � �þ ; 8DCj ADC; 8t ð9Þ where xð Þþ ¼ max 0; xð Þ. The first term of the right hand in (9) represents an energy consumption for using servers to serve VM requests allocated to the cloud datacenter DCj at time t and the second term represents an energy consumption for awaken transition of sleeping servers. Especially, the second term implies that a frequent changes in the number of active servers might increase the undesirable waste of energy. Note that the overhead by transition from active to sleep state (i.e., asleep tran- sition) is ignored in our model since a time required for asleep transition is relatively short compared to the one for awaken transition. 3.3. The cloud datacenter cost minimization problem We build a cost model based on workload model and energy consumption model proposed in Sections 3.1 and 3.2. We focus on minimizing the total cost including three sub costs: (1) energy cost; (2) performance degradation cost; (3) bandwidth cost. In our energy cost model, to simplify it, we assume that the price for renewable energy
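Continuing the illustrative Python structures used earlier, Eq. (9) can be evaluated as follows; the rc keys ('cpu_cores', 'mem_gb') and the dictionary arguments are our own assumptions about how the measured powerMark values and utilizations would be passed in.

def energy_consumption(solution, prev_m, flavors, dc, crbs, vu, powermarks, alpha_awaken):
    # Energy e_j(t) of Eq. (9): a usage term per resource component plus the
    # awaken-transition overhead for servers switched from sleep to active.
    # vu[rc] is vu_<rc>(t); powermarks[rc] is pwM^j_<rc> of datacenter dc.
    m_now = solution.m[dc.name]
    capacity = {'cpu_cores': dc.server_cpu_cores, 'mem_gb': dc.server_mem_gb}
    usage = 0.0
    for rc, pwm in powermarks.items():
        demand = sum(getattr(flavors[f], rc) * solution.x.get((i, f, dc.name), 0) * vu[rc]
                     for i in crbs for f in flavors)
        if m_now > 0:
            usage += pwm * demand / (capacity[rc] * m_now)
    switching = alpha_awaken * max(0, m_now - prev_m[dc.name])   # (m_j(t) - m_j(t-1))^+
    return usage + switching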
3.3. The cloud datacenter cost minimization problem

We build a cost model based on the workload model and the energy consumption model proposed in Sections 3.1 and 3.2. We focus on minimizing the total cost including three sub costs: (1) energy cost; (2) performance degradation cost; (3) bandwidth cost. In our energy cost model, for simplicity, we assume that the price of renewable energy usage is zero in this paper (strictly, the real price is not zero since the investment expense and the maintenance expenditure for renewable energy generation equipment are required to deploy the renewable energy generator onto the cloud datacenter). Generally, the price of grid power and the capacity of the generated renewable energy are time-varying according to the electricity market and the location of the cloud datacenter [17,19]. We use c^e_j(t) to denote the energy cost of cloud datacenter DC_j at time t as follows:

$$c^{e}_j(t) = \rho^{grid}(t) \cdot \left( e_j(t) - rwe_j(t) - rpe_j(t) \right)^{+}, \quad \forall DC_j \in \mathcal{DC},\ \forall t \qquad (10)$$

where ρ^grid(t) denotes the time-varying price of the power grid at time t. Next, the performance degradation cost can be determined by the total performance degradation of the cloud datacenter based on Eq. (6). When we use ρ^perf to denote the constant penalty price for performance degradation, the performance degradation cost of the cloud datacenter DC_j at time t, c^perf_j(t), is given by:

$$c^{perf}_j(t) = \rho^{perf} \cdot D_j(t), \quad \forall DC_j \in \mathcal{DC},\ \forall t \qquad (11)$$

Note that ρ^perf is a constant, in contrast with ρ^grid(t), which changes dynamically over time. Third, the bandwidth cost is the cost for the data transfer between the cloud service users close to the CRBs and the VM requests allocated on servers in cloud datacenters. Obviously, different links between a CRB and a cloud datacenter require different bandwidth costs. The bandwidth cost is determined by the network distance (e.g., hop distance) and the transferred data size. We use c^bw_j(t) to denote the bandwidth cost of the cloud datacenter DC_j at time t as given by:

$$c^{bw}_j(t) = \sum_{\forall CRB_i \in \mathcal{CRB}} \rho^{bw}_{i,j} \cdot \left( \sum_{\forall F_k \in \mathcal{F}} x^{i,k}_j(t) \cdot ds^{k} \right), \quad \forall DC_j \in \mathcal{DC},\ \forall t \qquad (12)$$

where ρ^bw_{i,j} denotes the bandwidth cost coefficient of the communication link between the cloud request broker CRB_i and the cloud datacenter DC_j, and ds^k denotes the transferred data size of a VM request with flavor type F_k.

Fig. 4. Illustration of Dynamic Right Sizing procedure.

Obviously, as the hop distance between CRB_i and DC_j grows longer, ρ^bw_{i,j} also increases. Eq. (12) implies that allocating more VM requests to a cloud datacenter which is far away from the source CRB (i.e., has a long hop distance) increases the bandwidth cost c^bw_j(t). It is therefore advantageous for bandwidth cost saving to allocate VM requests to the cloud datacenter nearest to their source CRB. Consequently, we focus on minimizing the total cost of whole cloud datacenters through our proposed approach of the EEE-VMA solver. We use c_total(t) to denote the total cost of whole cloud datacenters at time t, which includes the energy cost, the performance degradation cost and the bandwidth cost. Then we define the objective function f_EEE-VMA(S(t)) to calculate the total cost determined by the solution S(t) = {X(t) = {x^{1,1}_1(t), x^{1,1}_2(t), …, x^{|CRB|,|F|}_{|DC|}(t)}, M(t) = {m_1(t), m_2(t), …, m_{|DC|}(t)}} at time t as below:

$$f_{EEE\text{-}VMA}(S(t)) : \ c_{total}(t) = \sum_{\forall DC_j \in \mathcal{DC}} \left( c^{e}_j(t) + c^{perf}_j(t) + c^{bw}_j(t) \right), \quad \text{s.t.}\ (1)\text{–}(5) \qquad (13)$$

To solve this function, we propose the EEE-VMA approach based on GA in order to find an approximated optimal solution for VM request allocation. In the next subsection, we describe our algorithm in detail.
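Putting Eqs. (10)-(12) together, the objective of Eq. (13) can be sketched as below (again illustrative Python built on the earlier helpers; renewables, rho_bw and data_size are assumed inputs, and infeasible solutions are priced at positive infinity as described in Section 3.4.3).

def total_cost(solution, prev_m, arrivals, flavors, dcs, crbs, vu, ts, powermarks,
               price_grid, renewables, rho_perf, rho_bw, data_size, alpha_awaken):
    # f_EEE-VMA(S(t)) of Eq. (13): energy cost (10) + degradation cost (11) + bandwidth cost (12).
    # renewables[dc] = rwe_j(t) + rpe_j(t); rho_bw[(crb, dc)] is the link cost coefficient;
    # data_size[flavor] is ds^k; price_grid is rho_grid(t).
    if not is_feasible(solution, arrivals, flavors, dcs, crbs):
        return float('inf')
    cost = 0.0
    for dc in dcs:
        e_j = energy_consumption(solution, prev_m, flavors, dc, crbs, vu,
                                 powermarks[dc.name], alpha_awaken)
        c_energy = price_grid * max(0.0, e_j - renewables[dc.name])          # Eq. (10)
        c_perf = rho_perf * degradation(solution, flavors, dc, crbs,
                                        vu['cpu_cores'], ts[dc.name])        # Eq. (11)
        c_bw = sum(rho_bw[(i, dc.name)] * solution.x.get((i, f, dc.name), 0) * data_size[f]
                   for i in crbs for f in flavors)                           # Eq. (12)
        cost += c_energy + c_perf + c_bw
    return cost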
3.4. Evolutionary energy efficient virtual machine allocation

In this section, we propose the EEE-VMA approach based on GA, which is one of the efficient metaheuristics for solving complex optimization problems. In order to successfully deploy GA onto the EEE-VMA, we should define accurate strategies for GA and set their appropriate parameters. To do this, we consider five basic steps of GA as follows.

3.4.1. Encoding scheme

A chromosome (i.e., an individual in the population) features the solution S(t) of our datacenter management scheme in cloud datacenters. The format of genes in the chromosome is described as an integer value. The chromosome includes multiple genes which are divided into two parts: the first part is for the VM request allocation plan X(t) ∈ S(t), and the second part is for the DRS plan M(t) ∈ S(t). The detailed structure of the chromosome is shown in Fig. 5.

In the first part of the chromosome, gene values represent the number of VM requests allocated to the cloud datacenter at time t. For example, as shown in Fig. 5, the gene (1, 210) in the chromosome represents x^{1,1}_1(t) = 210, which means that the number of allocated VM requests with flavor type F_1 from CRB_1 to DC_1 is 210 at time t. In the second part of the chromosome, gene values represent the number of active servers in the cloud datacenter at time t. In Fig. 5, the gene (|CRB| × |F| × |DC| + 1, 3200) in the chromosome means that the number of active servers in the cloud datacenter DC_1 is 3200 at time t.

3.4.2. Initialization

In the first generation g = 1, GA in the EEE-VMA approach begins with randomly generated populations according to the submitted VM requests at each CRB. To reduce the computation time of GA execution, the range of values for each gene can be predetermined based on (2) and (5).

Fig. 5. Encoding example of VM allocation with chromosome.
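A possible concrete encoding consistent with the two-part chromosome of Fig. 5 is sketched below (illustrative Python; the trick of assigning the remainder to the last datacenter so that Eq. (1) holds after random initialization is our own simplification, not taken from the paper).

import random

def random_chromosome(arrivals, flavors, dcs, crbs):
    # One randomly initialized chromosome: the X(t) genes followed by the M(t) genes.
    # Each x gene is bounded by lambda^k_i(t) as in (2), each m gene by N(DC_j) as in (5).
    genes = []
    for i in crbs:
        for f in flavors:
            remaining = arrivals[(i, f)]
            for idx, dc in enumerate(dcs):
                if idx == len(dcs) - 1:
                    genes.append(remaining)                 # keeps Eq. (1) satisfied
                else:
                    g = random.randint(0, remaining)
                    genes.append(g)
                    remaining -= g
    for dc in dcs:
        genes.append(random.randint(0, dc.num_servers))     # M(t) part
    return genes

def decode(genes, flavors, dcs, crbs):
    # Rebuild a Solution S(t) from the flat gene list.
    sol, pos = Solution(), 0
    for i in crbs:
        for f in flavors:
            for dc in dcs:
                sol.x[(i, f, dc.name)] = genes[pos]
                pos += 1
    for dc in dcs:
        sol.m[dc.name] = genes[pos]
        pos += 1
    return sol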
3.4.3. Evaluation

In the EEE-VMA approach, we use (13) to evaluate the performance of each chromosome (i.e., solution) in the population. The fitness value of a chromosome is inversely related to its cost value. A higher fitness function value implies a higher performance of the chromosome. Note that if a certain chromosome violates any of constraints (1)–(5), then its cost value is counted as "positive infinity". Otherwise, the chromosome which has the smallest cost value among all the chromosomes in the generated population at g = gMax (the max step of generation) is finally chosen as the optimal solution S*(t).

3.4.4. Selection

There are several candidate schemes for the selection of appropriate solutions in GA. We adopt roulette-wheel selection, which determines the probability of each chromosome being chosen according to its fitness function value. This scheme tends to preserve superior solutions and evolve them in the next generation [30].

3.4.5. Crossover

The role of crossover is to generate offspring from two parents by cutting certain genes of the parents and recombining the gene fragments. The offspring inherits characteristics of each parent. Our EEE-VMA approach adopts a simple crossover scheme by which the first half of the first parent and the second half of the second parent are aggregated into the genes of their offspring, as sketched below. Note that crossover has to be conducted separately on each part of the chromosome, since it has two parts, the VM request allocation plan and the DRS plan.
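The crossover described above can be written as follows (illustrative Python); note that such an offspring may violate Eq. (1), in which case the evaluation step of Section 3.4.3 prices it at positive infinity.

def crossover(parent_a, parent_b, x_len):
    # Halve-and-join crossover applied separately to the X(t) part (the first
    # x_len genes) and to the M(t) part, as described in Section 3.4.5.
    def combine(a, b):
        cut = len(a) // 2
        return a[:cut] + b[cut:]
    child_x = combine(parent_a[:x_len], parent_b[:x_len])
    child_m = combine(parent_a[x_len:], parent_b[x_len:])
    return child_x + child_m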
3.4.6. Mutation

It is necessary to ensure the diversity of the generated population at all generation steps in order to avoid the local minima problem in GA. At each generation step g, the gene values constituting a chromosome can be modified randomly according to the predetermined probability pb_mt. If pb_mt is too large, the superior genes inherited from parents can be lost; on the other hand, the diversity of the population might be lower when pb_mt is too small. It is important to determine the appropriate pb_mt in order to maintain a diverse and superior population. However, we do not consider this issue since it is out of the scope of this paper.

The proposed GA for the EEE-VMA approach is described in Algorithm 1. In order to get the near optimal solution S*(t) of datacenter management for VM requests arrived at CRBs at time t, the state information of servers in all the datacenters at time t−1 is required. If the current time t = 0, then we assume that the previous state of all the servers is active (i.e., all the servers are switched on). In line 02, we randomly initialize the candidate population cand_pop_g(t), (g = 1), with the population size ps (the limit on the number of chromosomes in the population). The population cand_pop_g(t) evolves until g = gMax to generate the final population pop_{g=gMax}(t) used to search for the near optimal solution, as shown from line 03 to 29. Two parent chromosomes S_{g,i}(t) and S_{g,j}(t) are released from temp_pop_g(t) to produce an offspring S_{g,k}(t) from line 06 to 10. In line 11, each offspring in the set offspring_g(t) is mutated by modifying each gene according to the probability pb_mt to maintain the diversity of the population. In line 14, we check whether the constraints (i.e., Eqs. (1)–(5)) of each solution S_{g,i}(t) in cand_pop_g(t) are violated or not. If they are violated, the corresponding solution has its cost value counted as "positive_infinity". Otherwise, the objective function value of the solution is calculated through f_EEE-VMA(·) in line 17. If we find a solution S_{g,i}(t) having an objective function value c_{g,i}(t) which is smaller than the predetermined threshold value c_thr, then we take S_{g,i}(t) as the near optimal solution S*(t) and Algorithm 1 is finished. Otherwise, chromosomes to be preserved until the next generation are chosen from the current population through the Iterative Roulette-wheel Selection (Algorithm 2) procedure, as shown in line 23. When reaching the max step of generation gMax, the near optimal solution S*(t) having the minimum cost value of f_EEE-VMA(·) in pop_{g=gMax}(t) is found and returned to the EEE-VMA solver in our system in line 30.

Algorithm 1.
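Algorithms 1 and 2 are given as pseudocode figures in the original paper and are not reproduced here; the sketch below only mirrors the walkthrough of Algorithm 1 above and of Algorithm 2 below in illustrative Python, reusing the helpers from the previous sketches. The fitness scaling, the padding of the next generation with infeasible chromosomes, and the parameter defaults are our own assumptions.

import random

def mutate(genes, bounds, pb_mt=0.001):
    # Re-draw each gene within its bound with probability pb_mt (Section 3.4.6).
    return [random.randint(0, hi) if random.random() < pb_mt else g
            for g, hi in zip(genes, bounds)]

def roulette_select(population, costs, ps, alpha=4.0):
    # Roulette-wheel selection in the spirit of Algorithm 2: feasible solutions get a
    # fitness between 1 and alpha (best = alpha x worst); infeasible ones only pad the set.
    feasible = [i for i, c in enumerate(costs) if c != float('inf')]
    infeasible = [i for i, c in enumerate(costs) if c == float('inf')]
    chosen = []
    if feasible:
        worst = max(costs[i] for i in feasible)
        best = min(costs[i] for i in feasible)
        span = (worst - best) or 1.0
        fitness = {i: 1.0 + (alpha - 1.0) * (worst - costs[i]) / span for i in feasible}
        total = sum(fitness.values())
        while len(chosen) < ps:
            sp, qs = random.uniform(0, total), 0.0          # selection point SP, cumulative QS
            for i in feasible:
                qs += fitness[i]
                if sp <= qs:
                    chosen.append(i)
                    break
    while len(chosen) < ps and infeasible:
        chosen.append(random.choice(infeasible))
    return [population[i] for i in chosen]

def eee_vma_solver(arrivals, flavors, dcs, crbs, cost_fn, ps=100, g_max=500, pb_mt=0.001):
    # Outline of Algorithm 1: initialize, evaluate with f_EEE-VMA, select, recombine, mutate.
    x_len = len(crbs) * len(flavors) * len(dcs)
    bounds = ([arrivals[(i, f)] for i in crbs for f in flavors for _ in dcs]
              + [dc.num_servers for dc in dcs])
    population = [random_chromosome(arrivals, flavors, dcs, crbs) for _ in range(ps)]
    best = None
    for _ in range(g_max):
        costs = [cost_fn(decode(ch, flavors, dcs, crbs)) for ch in population]
        idx = min(range(len(costs)), key=costs.__getitem__)
        if best is None or costs[idx] < best[0]:
            best = (costs[idx], population[idx])
        parents = roulette_select(population, costs, ps)
        population = [mutate(crossover(random.choice(parents), random.choice(parents), x_len),
                             bounds, pb_mt)
                      for _ in range(ps)]
    return decode(best[1], flavors, dcs, crbs)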
The IterativeRwSelection procedure used by Algorithm 1 is described in Algorithm 2. From line 02 to 07, the cost values which are "positive_infinity" are released from cand_C_g(t) and added to illeg_C_g(t), since solutions which do not violate the constraints (i.e., (1)–(5)) are preferentially considered as candidates to be preserved until the next generation. The maximum (worst) and minimum (best) objective function values are found from cand_C_g(t) in lines 09 and 10. Fitness values of each solution are calculated as shown in line 13. Through this equation, the fitness value of the best solution comes out as α times that of the worst solution. The selection pressure, which represents the difference between the fitness values of superior solutions and inferior ones, increases as α is increased. The sum of the fitness values of each solution, SF, is updated in line 15.

Algorithm 2.

This value represents the total size of the roulette wheel, and each solution is assigned to a space on the roulette wheel. That means that the selection probability of each solution is proportional to the size of its assigned space. The selection procedure of the roulette wheel is described from line 19 to 26. At every step, the cumulative summation QS is updated according to the fitness value fv_{g,i}(t) in FV_g(t). If the selection point SP is smaller than QS with the latest update by fv_{g,i}(t) (i.e., Σ_{k=1}^{i−1} fv_{g,k}(t) ≤ SP ≤ Σ_{k=1}^{i} fv_{g,k}(t)), then the index i of S_{g,i}(t) in cand_pop_g(t) is added to chIdxSet_g(t). If the total number of chromosomes chosen by the roulette-wheel procedure is not sufficient (i.e., the cardinality of chIdxSet_g(t) is smaller than the predetermined size of the population ps), then we supplement chIdxSet_g(t) by randomly drawing indices of chromosomes from illeg_C_g(t). After all the procedures are finished, chIdxSet_g(t) is finally returned to Algorithm 1 in line 32.

Fig. 6. Wind speed (m/s) and its corresponding amount of generated wind energy (kW) at Oak Ridge National Lab (a and b), Univ of Arizona (c and d), and Univ of Nevada (e and f) at EST 05:20–17:54 on September 9, 2015 [33].

4. Performance evaluation

In this section, we evaluate the performance of our proposed EEE-VMA approach based on both simulation analysis and experiments on real testbeds. To highlight the benefits of our design for renewable and QoS aware workload management, we perform a numerical simulation based on real-world traces of renewable energy capacity.

4.1. Dynamic capacity of renewable energy

We consider three locations whose raw data we employ to build a capacity trace of renewable energy including wind energy: Oak Ridge National Lab (Eastern Tennessee); University of Arizona (Tucson, Arizona); University of Nevada, Las Vegas (Paradise, Nevada) [11,33]. We obtain the capacity traces of wind energy at those three locations based on [33], which collects data of wind speed every day. The capacity traces of each location at EST 05:20–17:54 on September 9, 2015 are shown in Fig. 6.
Fig. 6(a), (c), and (e) show the wind speed of each location, and we can find that it fluctuates a lot even during a short period. We assume that each generator has 30 wind turbines, and the amount of generated wind energy is estimated based on the wind power prediction scheme from [34]. The curves of the amount of available wind energy are then shown in Fig. 6(b), (d), and (f).

Fig. 7. Real time price of power grid at three regions.
Fig. 8. CPU utilization of the running VM request including pbzip2, iozone3, and netperf.
Fig. 9. Total cost of datacenters including servers with heterogeneous (a) and homogeneous power efficiency (b).

4.2. Energy price description

As mentioned earlier, only the grid power price is considered since we assume that the renewable energy price is free. The grid power price changes dynamically according to the electricity consuming time. We use the electricity price information in our simulation based on the real time pricing during 24 h in the electricity market, which is shown in Fig. 7 [23,35]. Note that the electricity price is high from 6 a.m. to 2 p.m. and from 7 p.m. to 9 p.m. The electricity usage usually increases during these periods due to the needs of industrial and household appliances. In our simulation, each cloud datacenter randomly takes one of the electricity pricing curves of datacenter 1, 2, and 3 in Fig. 7.

4.3. Cloud resource description

The total number of cloud datacenters is nine, and each datacenter owns 2 × 10^3 homogeneous servers in this paper. For the VM instance specifications, we adopt the policy of Amazon EC2 Web Services (AWS): our cloud datacenters support the set of flavor types F = {F_1 = (CPU = 2 cores, mem = 4 GB), F_2 = (4, 8), F_3 = (8, 16), F_4 = (16, 32)}, and each VM request has an arbitrary flavor type F_k ∈ F assigned randomly [29]. As mentioned in Section 3, each cloud datacenter has a heterogeneous server architecture; they have different powerMark values in the range of 200–500 based on the results in Fig. 3.

4.4. Workload scenario

Our considered workload includes two parts: the number of VM requests Λ(t), and their required resource utilization vu_⟨rc⟩(t) at time t. The number of VM requests Λ(t) ranges from 3 × 10^3 to 100 × 10^3 in this paper. Obviously, as Λ(t) increases, both the energy consumption and the performance degradation also increase.

Fig. 10. Active server ratio of cloud datacenters including servers with heterogeneous (a) and homogeneous power efficiency (b).
Fig. 11. Performance degradation of CPU contention by co-located VM requests in Server-1 (a), 2 (b), and 3 (c).

In the perspective of resource utilization, we only consider the resource component rc = rc_CPU and ignore the resource component rc_mem, since the energy consumption and performance degradation caused by rc_mem are negligible compared to those caused by rc_CPU. We use the real traces of CPU resource utilization measured by the monitoring module with several benchmark applications on the physical machines. Fig. 8 shows the CPU resource utilization of running benchmarks including a mixture of pbzip2, iozone3, and netperf on VM instances.

4.5. GA Parameters for EEE-VMA approach

We consider a population size ps with a range from 10^2 to 10^4, the max step of generation gMax in the range of 100 to 1000, and the mutation probabilities 0.001, 0.005 and 0.01 in Algorithms 1 and 2. As the parameters such as ps and gMax are increased, the quality of the derived solution increases, but the required computation time also grows.

4.6. Traditional resource management schemes

To demonstrate that our proposed approach outperforms existing resource management schemes, we compare the EEE-VMA approach to both VM consolidation and VM balancing based allocation approaches. The VM consolidation approach tries to pack as many VM requests as possible into the common physical server. This scheme tends to reduce the number of active servers. Therefore, the energy saving performance increases, while the performance degradation worsens. In contrast, the VM balancing approach splits VM requests over multiple cloud datacenters. This scheme avoids the
performance degradation of resource contention by VM request co-location, but incurs large energy consumption due to many active servers.

Figs. 9, 10, and 11 show the performance of our proposed EEE-VMA approach and the existing VM balancing and consolidation approaches at ps = 10^2, gMax = 500, and mutation probability 0.001. Fig. 9 shows the total cost in Eq. (13) of the VM balancing, VM consolidation and our proposed EEE-VMA approach at different offered workload levels. Fig. 9(a) shows the curves of total cost of all the approaches assuming that each cloud datacenter has heterogeneous power efficiency. Our proposed approach achieves improvements of the cost saving performance of about 8% and 53% compared to the VM consolidation and VM balancing approaches, respectively. However, the difference of the cost saving performance between the traditional approaches and our EEE-VMA approach in Fig. 9(b), which assumes that each cloud datacenter has homogeneous power efficiency, is relatively small compared to that in Fig. 9(a). The EEE-VMA approach achieves improvements of the cost saving performance of about 10% and 15% compared to the VM consolidation and VM balancing approaches, respectively. Note that our EEE-VMA approach further improves the energy saving performance in the heterogeneous cloud datacenters since it uses the powerMark value, which can rank the power efficiency of each cloud datacenter, to maximize the energy efficiency of resource allocation. However, our proposed approach still performs better than the existing approaches even under the assumption of homogeneous power efficiency of each cloud datacenter. Fig. 10 shows the active server ratio of cloud datacenters under our EEE-VMA approach and the existing resource management approaches. In Fig. 10(a), the average active server ratio of the EEE-VMA approach is under 30%, while that of VM balancing is close to 60%. Our EEE-VMA approach considers both the energy consumption and the performance degradation of VM requests, while the VM balancing only focuses on the performance degradation. Note that the energy saving performance of VM consolidation is worse than that of the EEE-VMA approach even though VM consolidation focuses on the energy consumption of cloud datacenters. This is because our EEE-VMA approach allocates VM requests to power efficient cloud datacenters preferentially based on their powerMark values, while VM consolidation randomly assigns VM requests to cloud datacenters. In Fig. 10(b), the active server ratio of VM consolidation is lower than that of our EEE-VMA approach; this is because the VM consolidation approach only focuses on the energy consumption of the cloud datacenter, but the EEE-VMA approach avoids unacceptable performance degradation of running VM requests through Eq. (11). Fig. 11 shows the performance degradation of allocated VM requests in each cloud datacenter under the EEE-VMA, VM consolidation, and VM balancing approaches based on the server types. The performance degradation is calculated by Eq. (6). In terms of performance degradation, the VM balancing approach outperforms the others including our proposed EEE-VMA approach. VM balancing tries to spread the submitted VM requests over whole cloud datacenters as fairly as possible, therefore the CPU resource contention of co-located VM requests can be minimized. For the Server-1 type, the performance degradation of VM balancing is lower than that of the EEE-VMA approach and the VM consolidation by 40% and 60%, respectively. For the Server-2 type, the performance degradation of VM balancing is lower than that of the EEE-VMA approach and the VM consolidation by 55% and 62%, respectively. Finally, for the Server-3 type, the VM balancing approach improves the performance degradation by about 39% and 55% compared to the EEE-VMA approach and the VM consolidation, respectively.

5. Conclusions

In this paper, we introduced the EEE-VMA approach for greening cloud datacenters with renewable energy generators. We proposed a novel energy efficiency metric, powerMark, to classify the power efficiency of heterogeneous servers in cloud datacenters and built a comprehensive cost model considering switching overheads in order to efficiently reduce the energy consumption of servers without significant performance degradation caused by co-located VM requests and DRS execution. We deployed the iterative roulette-wheel algorithm for the GA of the EEE-VMA approach in order to solve the complex objective function of our cost model. Various experimental results based on simulation and the OpenStack platform show that our proposed algorithms are suitable for deployment in prevalent cloud datacenters. In terms of total cost, our EEE-VMA approach can improve the average cost by 28% compared to existing resource management schemes at all workload levels. With an increase of the computation investment for GA in the EEE-VMA approach, our proposed approach can get arbitrarily close to the optimal value.
  • 41. Experience Service SW Platform based on Giga Media]. References [1] J. Hamilton, Cost of Power in Large-Scale Data Centers, Nov. 2009. Available Online: ⟨ http://perspectives.mvdirona.com/⟩ . [2] J. Koomey, Growth in Data Center Electricity Use 2005– 2010, Ana- lytics Press, Burlingame, CA, USA, 2011. [3] M. Lin, A. Wierman, L. Lachlan, H. Andrew, E. Thereska, Dynamic right-sizing for power-proportional data centers, IEEE/ACM Trans. Netw. 21 (5) (2013) 1378–1391. [4] L.A. Barroso, U. Holzle, The case for energy-proportional computing, Computer 40 (12) (2007) 33–37. [5] T. Lu, M. Chen, L. Lachlan, H. Andrew, Simple and effective dynamic provisioning for power-proportional data centers, IEEE Trans. Par- allel Distrib. Syst. 24 (6) (2013) 1161–1171. [6] Z. Ou, H. Zhuang, J. K. Nurminen, A. Yla-Jaaski, and P. Hui, Exploiting hardware heterogeneity within the same instance type of Amazon EC2, In: Proceedings of the 4th USENIX Workshop on HotCloud, 2012. [7] Z. Ou, H. Zhuang, A. Lukyanenko, J.K. Nurminen, P. Hui,
  • 42. V. Mazalov, A. Yla-Jaaski, Is the same instance type created equal? Exploiting heterogeneity of public clouds, IEEE Trans. Cloud Comput. 1 (2) (2013) 201–214. [8] A. Beloglazov, R. Buyya, Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers, Concurr. Comput.: Pract. Exp. 24 (2012) 1397–1420, http://dx.doi.org/ 10.1002/cpe.1867. [9] Z. Liu, Y. Chen, C. Bash, A. Wierman, D. Gmach, Z. Wang, M. Marwah, C. Hyser, Renewable and cooling aware workload management for sustainable data centers, In: Proceedings of the ACM SIGMETRICS, London, UK, 2012. [10] Y. Guo, Y. Fang, Electricity cost saving strategy in data centers by using energy storage, IEEE Trans. Parallel Distrib. Syst. 24 (6) (2013) 1149–1160. [11] Y. Guo, Y. Gong, Y. Fang, P.P. Khargonekar, X. Geng, Energy and network aware workload management for sustainable data centers with thermal storage, IEEE Trans. Parallel Distrib. Syst. 25 (8) (2014)
  • 43. 2030–2042. [12] F. Xu, F. Liu, L. Liu, H. Jin, B. Li, B. Li, iAware: making live migration of virtual machines interference-aware in the cloud, IEEE Trans. Com- put. 63 (12) (2014) 3012–3025. [13] X. Wang, B. Li, B. Liang, Dominant Resource Fainess in Cloud Com- puting Systems with Heterogeneous Servers, In: Proceedings of the IEEE INFOCOM, Toronto, Canada, 2014. [14] Montage, ⟨ http://montage.ipack.caltech.edu/⟩ . [15] S.K. Garg, R. Buyya, Green cloud computing and environmental sustainability, in: S.M. A. G. G. (Ed.), Harnessing Green IT: Principles and Practices, Wiley Press, UK, 2012, pp. 315–340. [16] I. Goiri, M.E. Haquc, K. Le, R. Beauchea, T.D. Nguyen, J. Guitart, J. Torres, R. Bianchini, Matching renewable energy supply and demand in green datacenters, Elsevier Ad Hoc Netw. 25 (2015) 520–534. [17] C. Wu, H. Mohsenian-Rad, J. Huang, A. Y. Wang, Demand side management for wind power integration in microgrid using dynamic potential game theory, In: Proceedings of the IEEE CLO- BECOM Workshop on Smart Grid Communications and Networking, Houston, Tx, Dec. 2001.
[18] Solar Anywhere, Solar Anywhere overview, Clean Power Research, Web. 17 Apr. 2012. ⟨http://www.solaranywhere.com/Public/Overview.aspx⟩.
[19] R. Huang, T. Huang, R. Gadh, Solar generation prediction using the ARMA model in a laboratory-level micro-grid, In: Proceedings of the IEEE SmartGridComm Symposium, Tainan, Taiwan, Nov. 2012.
[20] OpenStack, ⟨http://www.openstack.org/⟩.
[21] YOCTO-WATT, ⟨http://www.yoctopuce.com/EN/products/usb-electrical-sensors/yocto-watt⟩.
[22] D. Gupta, L. Cherkasova, R. Gardner, A. Vahdat, Enforcing performance isolation across virtual machines in Xen, In: Proceedings of the ACM/IFIP/USENIX 2006 International Conference on Middleware, Nov. 2006.
[23] A. Qureshi, R. Weber, H. Balakrishnan, J. Guttag, B. Maggs, Cutting the electric bill for internet-scale systems, In: Proceedings of the ACM SIGCOMM Computer Communications Review, vol. 39(4), Aug. 2009, pp. 123–134.
[24] T. Wood, P. Shenoy, A. Venkataramani, M. Yousif, Sandpiper: black-box and gray-box resource management for virtual machines, Elsevier Comput. Netw. 53 (17) (2009) 2923–2938.
[25] Credit Scheduler, ⟨http://wiki.xen.org/wiki/Credit_scheduler⟩.
[26] C. Ren, D. Wang, B. Urgaonkar, A. Sivasubramaniam, Carbon-aware energy capacity planning for datacenters, In: Proceedings of the IEEE MASCOTS, 2012, pp. 391–400.
[27] Apple Environmental Responsibility, ⟨http://www.apple.com/environment.renewable-resources/⟩.
[28] H. Fawaz, Y. Peng, C. Youn, A MISO model for power consumption in virtualized servers, Clust. Comput. 18 (2) (2015) 847–863.
[29] Amazon Web Services, ⟨https://aws.amazon.com⟩.
[30] E.D. Dasgupta, Z. Michalewicz, Evolutionary Algorithms in Engineering Applications, Springer, Berlin, Germany, 1997.
[31] M. Chen, Y. Wen, H. Jin, V. Leung, Enabling technologies for future data center networking: a primer, IEEE Netw. 27 (4) (2013) 8–15.
[32] M. Chen, Y. Zhang, L. Hu, T. Taleb, Z. Sheng, Cloud-based wireless network: virtualized, reconfigurable, smart wireless network to enable 5G technologies, ACM/Springer Mob. Netw. Appl. 20 (6) (2015) 704–712.
[33] Measurement and Instrumentation Data Center (MIDC), ⟨http://www.nrel.gov/midc/⟩.
[34] C. Wu, H. Mohsenian-Rad, J. Huang, A.Y. Wang, Demand side management for wind power integration in microgrid using dynamic potential game theory, In: Proceedings of the IEEE GLOBECOM Workshop on Smart Grid Communications and Networking, Houston, TX, Dec. 2011.
[35] H. Mohsenian-Rad, A. Leon-Garcia, Optimal residential load control with price prediction in real-time electricity pricing environments, IEEE Trans. Smart Grid 1 (2) (2010) 120–133.
  • 47. http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref5 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref5 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref5 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref5 http://dx.doi.org/10.1002/cpe.1867 http://dx.doi.org/10.1002/cpe.1867 http://dx.doi.org/10.1002/cpe.1867 http://dx.doi.org/10.1002/cpe.1867 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref7 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref7 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref7 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref7 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref8 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref8 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref8 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref8 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref8 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref9 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref9 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref9 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref9 http://montage.ipack.caltech.edu/ http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref10 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref10 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref10 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref10 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref11 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref11 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref11 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref11 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref11 http://www.solaranywhere.com/Public/Overview.aspx http://www.solaranywhere.com/Public/Overview.aspx http://www.openstack.org/ http://www.yoctopuce.com/EN/products/usb-electrical- sensors/yocto-watt
  • 48. http://www.yoctopuce.com/EN/products/usb-electrical- sensors/yocto-watt http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref12 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref12 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref12 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref12 http://wiki.xen.org/wiki/Credit_scheduler http://www.apple.com/environment.renewable-resources/ http://www.apple.com/environment.renewable-resources/ http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref13 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref13 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref13 http://https://www.aws.amazon.com http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref14 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref14 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref15 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref15 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref15 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref16 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref16 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref16 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref16 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref16 http://www.nrel.gov/midc/ http://www.nrel.gov/midc/ http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref17 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref17 http://refhub.elsevier.com/S1573-4277(16)00005-9/sbref17 http://refhub.elsevier.com/S1573-4277(16)00005- 9/sbref17Energy and QoS aware resource allocation for heterogeneous sustainable cloud datacentersIntroductionSystem architecture and designProblem formulationWorkload modelEnergy consumption modelThe renewable energy modelHeterogeneous power consumption modelDynamic right sizing modelThe cloud datacenter cost minimization problemEvolutionary energy efficient virtual machine
  • 49. allocationEncoding schemeInitializationEvaluationSelectionCrossoverMutationPerf ormance evaluationDynamic capacity of renewable energyEnergy price descriptionCloud resource descriptionWorkload scenarioGA Parameters for EEE-VMA approachTraditional resource management schemesConclusionsAcknowledgmentsReferences Renewable and Sustainable Energy Reviews 62 (2016) 195–214 Contents lists available at ScienceDirect Renewable and Sustainable Energy Reviews http://d 1364-03 n Corr E-m wasimra journal homepage: www.elsevier.com/locate/rser Sustainable Cloud Data Centers: A survey of enabling techniques and technologies Junaid Shuja a, Abdullah Gani a,n, Shahaboddin Shamshirband b, Raja Wasim Ahmad a, Kashif Bilal c a Centre for Mobile Cloud Computing Research (C4MCCR), FSKTM, University of Malaya, Kuala Lumpur 50603, Malaysia b Faculty of Computer Science and Information Technology, University of Malaya, Malaysia c Department of Computer Science, COMSATS Institute of Information Technology, Pakistan
  • 50. a r t i c l e i n f o Article history: Received 19 June 2015 Received in revised form 15 February 2016 Accepted 16 April 2016 Available online 4 May 2016 Keywords: Cloud Data Centers Energy efficiency Renewable energy Waste heat utilization Modular data centers VM migration x.doi.org/10.1016/j.rser.2016.04.034 21/& 2016 Elsevier Ltd. All rights reserved. esponding author. Tel.: þ60 0379676300; fax ail addresses: [email protected] ( [email protected] (R.W. Ahmad), kashifbil a b s t r a c t Cloud computing services have gained tremendous popularity and widespread adoption due to their flexible and on-demand nature. Cloud computing services are hosted in Cloud Data Centers (CDC) that deploy thousands of computation, storage, and communication devices leading to high energy utilization and carbon emissions. Renewable energy resources replace fossil fuels based grid energy to effectively reduce carbon emissions of CDCs. Moreover, waste heat generated from electronic components can be utilized in absorption based cooling systems to offset cooling costs of data centers. However, data centers
  • 51. need to be located at ideal geographical locations to reap benefits of renewable energy and waste heat recovery options. Modular Data Centers (MDC) can enable energy as a location paradigm due to their shippable nature. Moreover, workload can be transferred between intelligently placed geographically dispersed data centers to utilize renewable energy available elsewhere with virtual machine migration techniques. However, adoption of aforementioned sustainability techniques and technologies opens new challenges, such as, intermittency of power supply from renewable resources and higher capital costs. In this paper, we examine sustainable CDCs from various aspects to survey the enabling techniques and technologies. We present case studies from both academia and industry that demonstrate favorable results for sustainability measures in CDCs. Moreover, we discuss state-of-the-art research in sustainable CDCs. Furthermore, we debate the integration challenges and open research issues to sustainable CDCs. & 2016 Elsevier Ltd. All rights reserved. Contents 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 2. Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 2.1. Renewable energy in CDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 2.2. Waste heat utilization in CDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
  • 52. 2.3. Modular CDC designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 2.4. VM migrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 3. Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 3.1. Parasol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 3.2. Free lunch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199 3.3. Aquasar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199 3.4. MDC with free cooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200 3.5. Facebook Arctic CDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200 3.6. Green House Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200 4. Renewable Energy based CDCs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200 4.1. Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 : þ60 379579249.
  • 53. 4.2. State-of-the-Art: 4.2.1. Dynamic load balancing, 4.2.2. Follow the renewables,
  • 54. 4.2.3. Renewable based power capping); 5. Waste heat utilization in CDCs (5.1. Design; 5.2. State-of-the-Art); 6. Modular data centers (6.1. Design; 6.2. State-of-the-art); 7. VM migration (7.1. Design; 7.2. State-of-the-Art); 8. Research Issues and Challenges
  • 55. (8.1. Renewable energy-CDC integration; 8.2. Waste heat utilization; 8.3. MDC; 8.4. VM WAN migrations); 9. Conclusion; Acknowledgments; References. 1. Introduction Cloud Data Centers (CDC) are increasingly being deployed by Information Technology (IT) service providers, such as Google, Amazon, and Microsoft, to cater to the world's digital needs. CDCs provide an efficient infrastructure to store large amounts of data along with enormous processing capabilities. Business objectives and Service Level Agreements (SLA) demand that the storage and compute facilities be replicated redundantly to provide fault tolerance and minimal service delay. Therefore, IT service providers
  • 56. run data centers 24/7 with thousands of servers, storage, and networking devices to ensure 99.99% availability of cloud services [1,2]. Our digital activities such as social media, search, file sharing, and streaming are creating huge amounts of data. Each bit of data created needs to be processed, stored, and transmitted, adding to energy costs and leaving an environmental impact in the form of Greenhouse Gas (GHG) emissions [3]. While a sustainable energy economy is one of the major challenges faced by the world community, CDCs have emerged as a major consumer of electricity. The number and size of data centers have been increasing exponentially over the past decade to keep pace with the growing number of cloud based applications and users. CDCs are estimated to consume more than 2.4% of electricity worldwide with a global economic impact of $30 billion [4]. Despite advancements in IT equipment efficiencies, data center electricity consumption is expected to grow 15-20% annually [5]. Additionally, CDCs are responsible for the emission of GHG produced during electricity generation, IT equipment manufacturing, and disposal [6,7]. It is estimated that data centers were responsible for 78.7 million metric tons of CO2 emissions, equal to 2% of global emissions, in 2011 [8]. These figures advocate the application of innovative and disruptive measures in CDCs for energy and carbon efficiency. Power Usage Efficiency (PUE) and Carbon Usage Efficiency (CUE) are commonly applied sustainability indicators in CDCs. PUE is defined as the ratio of total CDC energy usage to IT equipment energy usage [9,10]. Energy wasted in measures other than computing, such as cooling, leads to poor PUE values. CUE is the ratio of total CO2 emissions caused by CDC power consumption to total power used by the CDC. Complete dependency on fossil fuel based grid energy in CDCs leads to poor CUE values [11].
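Restating these two indicators as formulas, following the definitions just given (E denotes energy drawn over a fixed reporting period):

\[ \mathrm{PUE} = \frac{E_{\mathrm{total}}}{E_{\mathrm{IT}}} \ (\geq 1,\ \text{ideal} = 1), \qquad \mathrm{CUE} = \frac{\mathrm{CO}_{2,\,\mathrm{total}}}{E_{\mathrm{total}}} \ \ \text{(e.g., kg CO}_2\text{e per kWh)}, \]

where E_total is the total energy drawn by the CDC, E_IT the portion consumed by IT equipment, and CO2,total the emissions attributable to the CDC's power consumption; cooling and power-distribution overheads inflate PUE, while a fossil-heavy energy mix inflates CUE.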
  • 57. Sustainable and green CDCs necessitate the application of multiple techniques and technologies to achieve lower energy costs and GHG emissions. The main elements of sustainable CDCs are [9]:
- On/off-site renewable energy generation techniques to reduce GHG emissions. Renewable energy powered CDCs lead to lower GHG emissions while eliminating fossil fuel based energy resources.
- Waste heat recovery and free cooling techniques to lower cooling costs, which make up 40% of total CDC energy consumption on average. Both renewable energy and waste heat utilization techniques in CDCs are dependent on geo-dispersed MDC designs and virtualization based workload migrations.
- Transportable Modular Data Center (MDC) designs that facilitate exploitation of renewable energy, waste heat, and free cooling opportunities in geo-dispersed locations.
- Virtualization based workload migrations that enable workload and resource management across geo-dispersed CDC nodes.
Renewable energy generation and free cooling techniques require ideal climatic conditions, which depend on the location of the CDC. Similarly, waste heat utilization requires co-location of the CDC with places suitable for waste heat recovery opportunities, such as district heating.
  • 58. As MDC designs are based on shipping containers, they enable relocation of CDC nodes to places where sustainable computing opportunities are abundant. Hence, the opportunistic relocation of CDC nodes is based on two factors: (a) on-site availability of renewable energy resources and (b) proximity to free cooling resources and waste heat recovery opportunities [12,13]. Sustainable CDCs are supported by and dependent on geo-dispersed MDC designs and virtualization based workload migration techniques. MDC shippable containers allow distribution of CDC nodes to optimal locations with sustainable computing opportunities. Moreover, virtualization of CDC resources allows efficient migration of workloads between geo-dispersed data center nodes to pursue sustainable computing opportunities across the globe [14]. IT service providers such as Google and Facebook have also emphasized migration from grid energy resources to renewable energy resources in geo-dispersed configurations [15,16]. Fig. 1 (Elements of a sustainable Cloud Data Center Model) presents a model of green CDCs with application of techniques
  • 59. and technologies for sustainability. To the best of our knowledge, this is the first survey on sustainable CDCs that covers all major factors of sustainability and green economy in the cloud. Previous surveys have largely focused on a single aspect of sustainable CDCs. For instance, Oro et al. [17] reviewed renewable energy integration schemes for CDCs. Ebrahimi et al. [5] presented a survey on waste heat opportunities in CDCs. Similarly, Ahmad et al. [18] surveyed Virtual Machine (VM) based workload consolidation schemes in CDCs. A comprehensive survey covering the major techniques and technologies of sustainable CDCs is not present in the literature. Furthermore, open research issues and challenges in the context of sustainable CDCs need to be investigated in detail. The major contributions of this article are: (a) we classify state-of-the-art techniques and technologies enabling sustainable CDCs, (b) we detail case studies from the IT industry and research community that advocate the application of sustainability measures for CDCs, (c) we present a survey of existing studies in sustainable CDCs, and (d) we highlight research challenges and issues in the adoption of sustainable and green energy techniques and technologies among geo-dispersed CDCs. The rest of the paper is organized as follows. Section 2 provides background knowledge on sustainable CDC techniques and technologies. Section 3 presents case studies from leading IT
  • 60. companies and the research community that demonstrate the benefits of the integration of renewable energy, waste heat recovery, geo-dispersed MDC designs, and VM migration techniques in CDCs. In Section 4, we examine the adoption of renewable energy in CDCs with a corresponding taxonomy of solutions and a summary of research issues. Section 5 investigates waste heat utilization opportunities in CDCs. In Section 6, we elaborate on MDC architectures and the corresponding server, network, and cooling designs. Section 7 reviews Wide Area Network (WAN) VM migration techniques in the context of geo-dispersed CDCs. In Section 8, we debate future research directions and open challenges in the field of sustainable CDCs. Section 9 provides the concluding findings of our study. 2. Background In this section, we provide basic knowledge of sustainable and green CDCs. We provide a brief summary of the enabling techniques and technologies for sustainable CDCs, namely renewable energy, waste heat utilization, modular CDC designs, and VM migration. 2.1. Renewable energy in CDC Sustainable and green computing requires the application of both energy efficiency measures and renewable energy resources to lower the energy and carbon footprint [6,19]. Brown energy generated from fossil fuels, such as coal, gas, and oil, results in large amounts
  • 61. of CO2 emissions. On the other hand, green energy produced from renewable resources, such as water, wind, and sun, results in almost zero CO2 emissions [20]. Hydroelectricity, although categorized as green energy, is available only through grid electricity supplied by government corporations. On the contrary, solar and wind energy can either be generated with on-site installations or purchased from off-site corporations. The capital cost and unpredictability of renewable energy resources are barriers to their widespread adoption [21]. However, the cost/Watt of renewable energy resources is estimated to halve in the next decade [22]. The reduction in the cost/Watt of renewable energy is based on (a) advancements in the capacity of materials, such as photovoltaic arrays, (b) increases in the storage capacity of rechargeable batteries, and (c) monetary incentives by governmental organizations for the integration of renewable energy resources [23]. The issue of unpredictability in renewable energy supply can be addressed by power-supply load balancing and workload migration techniques among geo-dispersed CDCs [24,25]. Moreover, hybrid grid designs that draw power from both steady grid resources and variable on-site renewable energy sources are essential to guarantee 100% availability of cloud services [17]. However, abundant renewable energy resources are often located away from commercial CDC sites. Therefore, transportable MDC designs need to be utilized to locate CDC nodes near renewable energy resources [23]. The integration of renewable energy in a CDC results in a lower CUE metric. Higher capital costs and the intermittency of renewable energy resources remain a challenge for widespread adoption in CDCs [26].
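As a minimal sketch of the power-supply load balancing and hybrid grid ideas mentioned above, the following hypothetical single-step dispatcher (illustrative only, not taken from any of the cited systems) serves demand from on-site renewables first, then from a battery, and only then from the grid, while net-metering any surplus the battery cannot absorb:

    def dispatch_step(demand_kw, renewable_kw, battery_kwh, battery_cap_kwh, dt_h=1.0):
        # Serve demand from on-site renewables first.
        used_renewable_kw = min(demand_kw, renewable_kw)
        deficit_kw = demand_kw - used_renewable_kw
        surplus_kw = renewable_kw - used_renewable_kw

        # Surplus: charge the battery, net-meter whatever does not fit.
        charge_kwh = min(surplus_kw * dt_h, battery_cap_kwh - battery_kwh)
        battery_kwh += charge_kwh
        net_metered_kwh = surplus_kw * dt_h - charge_kwh

        # Deficit: discharge the battery, then fall back to (brown) grid energy.
        discharge_kwh = min(deficit_kw * dt_h, battery_kwh)
        battery_kwh -= discharge_kwh
        grid_kwh = deficit_kw * dt_h - discharge_kwh

        return battery_kwh, grid_kwh, net_metered_kwh

For example, dispatch_step(100.0, 140.0, battery_kwh=5.0, battery_cap_kwh=20.0) returns (20.0, 0.0, 25.0): the 40 kW surplus charges the battery to capacity, the remaining 25 kWh is net-metered, and no grid energy is drawn.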
  • 62. 2.2. Waste heat utilization in CDC Fossil fuel deposits are diminishing at a rapid pace, calling for the reuse of waste heat in all types of energy conversion systems. Most of the electric energy supplied to CDC servers is converted into heat energy, requiring the deployment of large-scale cooling systems to keep server rack temperatures in the operational range [9]. As a result, 40-50% of the electricity consumed by CDCs is used to cool heat-dissipating servers [5]. With the advent of multi-core and stacked server designs, the power densities of servers have increased, resulting in increased cooling costs. Minimizing the energy used in cooling can have a significant impact on energy efficiency in CDCs [27]. However, reducing cooling costs requires relocation of CDCs to places where free cooling resources are available in the form of lower environmental temperatures [15]. Multiple geographically dispersed locations are also exploited for variable electricity prices [28]. Moreover, as most of the power supplied to the servers is
  • 63. dissipated as heat, CDCs can act as heat generators for many waste heat recovery techniques [5]. Waste heat can be ideally applied to vapor-absorption based CDC cooling systems. When heat is supplied to a refrigerant in vapor-absorption based cooling, it evaporates while taking away heat from the system. In this manner, the application of waste heat utilization and free cooling techniques results in ideal PUE values by neutralizing cooling costs while powering vapor-absorption based CDC cooling systems [29]. Heat generated by CDCs can also be supplied to district heating facilities in areas with lower temperatures. However, CDCs are often not located in proximity to waste heat recovery locations. Therefore, CDCs either have to apply waste heat to an internal vapor-absorption based cooling system, or relocate to the proximity of a waste heat recovery site. MDC shippable nodes are ideal to tap into waste heat recovery opportunities in geo-dispersed sites. Moreover, VM based workload migrations are also necessary to balance CDC load between geo-dispersed computing nodes [22]. The main challenges to waste heat utilization are the low heat quality in CDCs and the higher capital costs of heat exchange interfaces.
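To make the PUE impact of free cooling and waste heat reuse concrete, a rough illustrative calculation (percentages drawn from the averages quoted above, not measurements of a particular facility): if cooling draws about 45% of total CDC energy and IT equipment about 50%, with 5% going to other overheads, then

\[ \mathrm{PUE} = \frac{0.50 + 0.45 + 0.05}{0.50} = 2.0, \qquad \mathrm{PUE}_{\text{with free cooling / waste heat reuse}} \approx \frac{0.50 + 0.05}{0.50} = 1.1, \]

so neutralizing the cooling term moves the facility close to the ideal value of 1.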
  • 64. 2.3. Modular CDC designs CDCs need to intelligently tap into renewable energy resources and waste heat utilization opportunities present at sites that are often remote from commercial CDC buildings [30,13]. Modular Data Centers (MDC) enable location as an energy efficiency measure, as they are built inside shipping containers that can be transported to remote locations. The container based MDC design offers two desirable properties for sustainable CDCs. Firstly, the shippable nature of MDCs allows cloud providers to relocate their compute facilities to geo-dispersed locations abundant with sustainability opportunities. Secondly, the container based closed-loop system of an MDC is ideal for the application of free cooling and waste heat utilization measures [12]. The container design can efficiently perform hot-aisle containment so that high grade waste heat can be captured from the servers. Hot-aisle containment also leads to better cooling efficiency, resulting in lower operational costs. In a generic MDC design, computing and cooling devices are set up inside the container before shipment to a remote location. MDC nodes provide flexibility to cloud service providers with a service-free design, as computing resources are set up before deployment and are not repaired or upgraded upon failure. The MDC node is kept in service until the assembled components provide a minimum level of computational output [31,32]. MDC nodes can be operated
  • 65. and migration for energy efficiency and fault-tolerance [1,9]. Vir- tualization adeptly manages existing cloud resources through highly dynamic resource provisioning to significantly reduce operational costs. Intermittent nature of renewable energy resources and decentralized MDC nodes necessitate workload migration while balancing workload among multiple geo- dispersed nodes. Virtual Machine (VM) migration techniques enable migration of workloads when on-site renewable energy generation is low and available elsewhere in geo-dispersed sites. Similarly, virtualization also enables workload migration between distributed MDC nodes where some nodes leverage on-site renewable energy while other nodes employ nearby waste heat utilization opportunities [33,34]. Moreover, VM based workload migration and consolidation techniques are utilized to pack a set of VMs to fewer number of physical devices to balance renewable power generation and workload demand [35]. Researchers have leveraged both MDC designs and VM migration techniques to efficiently harness renewable energy resources and waste heat utilization alternatives in green CDCs [36–38]. However, the cost, in terms of network delay and energy consumption, between geo- dispersed nodes is the foremost challenge to VM based workload migrations in CDCs. 3. Case Studies The relationship between sustainable CDCs techniques and technologies is established and complemented by several case studies carried out by the IT industry and published in scholarly articles. Many IT companies including Apple, Google, and Facebook
  • 66. 3. Case Studies The relationship between sustainable CDC techniques and technologies is established and complemented by several case studies carried out by the IT industry and published in scholarly articles. Many IT companies, including Apple, Google, and Facebook, have added green and sustainable CDC nodes to their expanding infrastructure [39]. In this section, we will present the case studies that report significantly efficient PUE values while leveraging multiple sustainability measures, such as renewable energy, MDC design, and waste heat recovery. Table 1 summarizes the case studies of sustainable CDCs discussed in this section. 3.1. Parasol Parasol [23] is a green CDC prototype based on four key techniques and technologies of sustainable CDCs, namely MDC design, on-site renewable energy generated through solar panels, free cooling, and net-metering. A one-year case study was conducted with an MDC powered by on-site solar panels set on the roof-top of a building located in New Jersey. Parasol works on dynamic load balancing of CDC power between renewable and grid energy, as defined by the CDC workload. The MDC container consists of two server racks that are free cooled whenever possible. The workload and power source scheduling is based on workload and power predictions, existing power stored in batteries, analytical models of power consumption, peak power, and power costs. Excessive renewable energy is either stored in batteries or net-metered to the grid.
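A toy illustration of this prediction-driven scheduling (in the spirit of the description above, not Parasol's actual scheduler; all names and units are assumptions) defers flexible work to the hours with the largest predicted solar surplus:

    def plan_deferrable(pred_solar_kw, pred_base_load_kw, deferrable_kwh, slot_cap_kwh):
        # Greedily place deferrable work into the 1-hour slots with the largest
        # predicted solar surplus, so that as little of it as possible has to run
        # on battery or grid energy.
        order = sorted(range(len(pred_solar_kw)),
                       key=lambda h: pred_solar_kw[h] - pred_base_load_kw[h],
                       reverse=True)
        plan = [0.0] * len(pred_solar_kw)
        remaining = deferrable_kwh
        for h in order:
            if remaining <= 0.0:
                break
            plan[h] = min(slot_cap_kwh, remaining)
            remaining -= plan[h]
        return plan  # kWh of deferrable work assigned to each hourly slot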
  • 67. The experimental results show 36% and 13% error, respectively, in workload and solar power generation prediction over a 1-hour prediction time frame. The total grid electricity cost was reduced by 75% in the Parasol design. Moreover, the Parasol design can amortize the capital cost of a solar setup without batteries in 4.8 to 7.1 years with 60% government incentives. The study estimated that the efficiency of the photovoltaic technology (multi-crystalline silicon) will increase from 15% to 25% by 2030.
Table 1. Case studies of sustainable CDCs.
  • 105. Moreover, it is estimated that, at current space and capacity values, the space required by the solar panels to power a CDC is 47 times larger than that occupied by the racks. However, with the increasing capacity factor of solar technologies, the space requirement can decrease to 24 times by 2020-2030. According to the case study, the installed cost of solar energy will decrease
  • 106. by 50% by 2030. These forecasts depict that the cost and space requirements of sustainable CDCs will decrease significantly over the next decade. 3.2. Free lunch Free Lunch [40] is an MDC architecture evaluated to experiment with the viability of sustainable CDC elements. Free Lunch is based on three principles of sustainability: (a) utilization of on-site renewable energy through remote geo-dispersed CDCs, (b) dedicated high-speed network connectivity between two CDC nodes, and (c) VM based workload migrations. The study identified virtualization, MDC architecture, and renewable energy as key enabling technologies for sustainable CDCs. The authors chose two locations (near the Red Sea and the Southwest of Australia), ideal for harvesting solar energy, that are situated in different time and climatic zones to complement each other. Moreover, wind turbines of 1.5 MW power were modeled with year-average climatic conditions. The study assumed 10 km2 of solar cells with 10% efficiency. It was found that throughout the year the power generated by the renewable energy sources dropped 615 times below the average demand (150 W per server). At 331 of these instances, excessive power was available at the other CDC node. However, on the remaining 284 instances, the servers had to be powered off.
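A simplified sketch of how such shortfall counts can be derived from paired generation traces (the 150 W per-server demand is the study's figure; the trace format and the assumption of equal demand at both sites are assumptions made here):

    def shortfall_stats(gen_site_a_w, gen_site_b_w, servers, demand_w_per_server=150.0):
        # For each interval, check whether site A's renewable generation covers
        # its demand; if not, check whether site B's surplus could absorb the
        # deficit (workload migrated); otherwise servers must be powered off.
        demand_w = servers * demand_w_per_server
        below = migratable = powered_off = 0
        for a, b in zip(gen_site_a_w, gen_site_b_w):
            if a < demand_w:
                below += 1
                if (b - demand_w) >= (demand_w - a):
                    migratable += 1
                else:
                    powered_off += 1
        return below, migratable, powered_off

Applied to traces like those used in the study, such a count would produce the reported statistics: 615 below-demand instances, of which 331 could be absorbed by the other node and 284 could not.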