IDL - International Digital Library of Technology & Research
Volume 1, Issue 6, June 2017. Available at: www.dbpublications.org
International e-Journal For Technology And Research-2017
Copyright@IDL-2017
Fog Computing – Enhancing the Maximum Energy Consumption of Data Servers

Priyanka Chettiyar¹, Prabadevi B² and Jeyanthi N³
School of Information Technology, Vellore Institute of Technology, Vellore, Tamil Nadu.
priyanka.mannarsamy@vit.ac.in, prabadevi.b@vit.ac.in, njeyanthi@vit.ac.in
Abstract
Fog computing and IoT systems use devices on end-user premises as local servers. In this work, we identify the scenarios in which running applications on Nano data centers (NDCs) is more energy-efficient than running the same applications on a Main data center (MDC). Through a survey and analysis of energy-consumption factors, including flow-variant and time-variant behavior of the network equipment, we derive two energy-consumption use cases and their results. Parameters such as current load, Pmax, Cmax, and incremental energy are evaluated with respect to the system structure and various data-related parameters, leading to the conclusion that an NDC consumes comparatively less energy than an MDC. The study shows that NDCs, as part of the fog, can offload the MDCs for suitable applications, especially IoT-based applications in which end users are the source data providers and server utilization can be maximized.
Index Terms— Centralized Data Servers, Cmax, Energy expenditure, Fog Computing, Nano Data Servers, Pmax.
1. INTRODUCTION
Cloud computing and cloud-based applications are in increasing demand and growing swiftly across the digital technology sector. Studies to date have portrayed cloud computing as highly energy-efficient for processing a job compared with running it locally. Nevertheless, when energy utilization is evaluated with respect to the network topology and other factors, such as the power consumed at the end-user side by interactive cloud services, energy consumption varies considerably across use cases.
The ever-growing demand for smart devices that communicate and make the world more connected is well known as the Internet of Things (IoT). Recent surveys suggest that nothing will stop IoT from rapidly transforming traditional technology into a digital world. Cloud computing emerged to make application services easily available to end users as frameworks, platforms, and software. Still, cloud computing cannot be termed "a platform for all," as it falls short of meeting several requirements of IoT applications.
Fog computing, also known as fog networking, fogging, or edge computing, is a concept in which clients or intermediate nodes near end users accumulate enough capacity to perform the same communication and provide similar services more efficiently than a central cloud server. It can relieve capacious cloud servers and big-data structures for which data access can otherwise be a troublesome task. Fog was introduced to make computing possible in an end-to-end manner over any network topology, so that new services and applications can be delivered more efficiently and easily to millions of smartly interconnected devices. The interconnected fog devices mostly consist of set-top boxes, access points, roadside units, cellular base stations, and the like. A three-level hierarchy is formed in delivering complete end-to-end services from the cloud to smart devices. Thus, a fog node is simply an intermediate node between the end-user smart devices and the centralized cloud data centers, extending the functionality of cloud computing in a more flexible way. Fog computing is becoming popular for an enormous number of IoT applications. Here we often use the term "Nano data centers" (NDCs): small-capacity servers located at end-user premises and used for inter-communication of data with their peers. We can state that fog computing is a paradigm that brings cloud computing to the edges of the network topology.
In this work, we try to identify the use cases in which an application running on an NDC is more efficient than on the centralized cloud server. For any network architecture, various energy-consumption models have been put forward based on content distribution. Two types of network equipment are studied: shared and unshared. Shared network equipment is equipment whose services are shared by many users. Unshared network equipment is equipment situated at the end-user side and used by a single user or a small fixed set of users. Initially, a complete end-to-end network architecture is used in which all the data required for processing by the NDCs and the central data center is present. Since the data in cloud services is processed and stored in data centers, an obvious focus for studying the energy consumption of cloud services is the data centers. Nonetheless, even the transport network that routes end users to the cloud servers plays a visible role in energy utilization: whenever end users access the cloud servers, a non-trivial amount of energy is consumed. The statistics reveal that improving energy consumption in the transport network and in end-user smart devices helps improve the performance of the NDCs. The experimental results show that Nano data centers can complement centralized data servers and reduce the energy consumed by applications that can easily be migrated from cloud servers to NDCs. The following figure broadly explains the fog node and its role.
Fig 1. Fog Computing.
Many more interesting features that fog computing offers include: knowledge of end-user device location to support mobility; the hierarchical interplay between the fog, the cloud, and the end-user devices, in which a fog node gets a local overview while a global overview is available only at a higher level; real-time computation; modifiable optimizations depending on the client-side network and applications; improved caching methodology; and knowledge of end-user smart devices. The key to handling and managing analytics rapidly over the data provided by IoT applications is fog data processing.
2. RELATED SURVEY.
Fog computing and its services are growing rapidly in every other sector, adding to global digital revenue. Let us have a brief overview of various implementations of fog, with their strengths and weaknesses.
A] Fog – IoT Platform.
IoT brings an unprecedented proliferation of endpoints, which is problematic in several ways [1]. This work analyzes those problems and proposes a hierarchical distributed architecture, nicknamed fog computing, that extends from the edge of the network to the core. In particular, it focuses on a new dimension that IoT adds to big data and analytics: a massively distributed number of sources at the edge.
B] Internet – Nano Data Centers.
Growing concern about energy utilization in modern data centers gave rise to the model of Nano data centers (NaDa) [7]. ISP-controlled home gateways were used to provide computing and storage services, forming a distributed, peer-to-peer data-center architecture. Video-on-Demand (VoD) services were used to verify the actual capability of NaDa. The authors develop an energy-consumption model for VoD in traditional and in NaDa data centers and evaluate it using a large set of empirical VoD access data. They find that even under the most pessimistic scenarios, NaDa saves at least 20% to 30% of the energy compared to traditional data centers. These savings stem from energy-preserving properties inherent to NaDa, such as the reuse of already committed baseline power on underutilized gateways, the avoidance of cooling costs,
and the reduction of network energy consumption
because of demand and service co-localization in NaDa.
C] Green Cloud computing: Balanced Energy
Management.
Network-based cloud computing is rapidly expanding as
an alternative to conventional office-based
computing[8]. As cloud computing becomes more
widespread, the energy consumption of the network and
computing resources that underpin the cloud will grow.
This is happening at a time when there is increasing
attention being paid to the need to manage energy
consumption across the entire information and
communications technology (ICT) sector. While data
center energy use has received much attention recently,
there has been less attention paid to the energy
consumption of the transmission and switching networks
that are key to connecting users to the cloud. The paper presents an analysis of energy consumption in cloud computing, considering both public and private clouds and including energy consumed in switching and transmission as well as in data processing and data storage. It shows that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. Under some circumstances, however, cloud computing can consume more energy than conventional computing, where each user performs all computing on their own personal computer (PC).
D] Fog Computing Saving Energy.
This paper compares the energy utilization of applications on both kinds of servers and shows that Nano data servers can save energy at a comparatively higher rate, depending on various system-design factors [9]; the hop count also contributes to a small extent. It also finds that part of the energy consumed today could be saved by bringing a few applications down to the Nano platform level.
E] Document Processing – Energy Consumption
Cloud computing and cloud-based services are a rapidly
growing sector of the expanding digital economy.
Recent studies have suggested that processing a task in the cloud is more energy-efficient than processing the same task locally [10]. However, these studies have generally ignored the network transport energy and the additional power consumed by end-user devices when accessing the cloud. The paper develops a simple model to estimate the incremental power consumption involved in using interactive cloud services, applies it to a representative cloud-based word-processing application, and observes that the volume of traffic generated by a session of the application typically exceeds the amount of data keyed in by the user by more than a factor of 1000. This has important implications for the overall power consumption of the service. The paper provides insights into the reasons behind the observed traffic levels and, finally, compares its estimates of the power
Finally, we compare our estimates of the power
consumption with performing the same task on a low-power computer. The study reveals that it is not always energy-wise to use the cloud: performing certain tasks locally can be more energy-efficient than using the cloud.
F] Architecture – IPTV networks.
An energy-utilization model of IPTV storage and distribution gives insight into the optimal design of a VoD network [11]. Energy utilization is minimized by replicating popular program material on servers near clients.
G] Fog – Potential.
The Internet of Things (IoT) could empower developments that enhance quality of life, yet it produces unprecedented amounts of data that are difficult for traditional systems, the cloud, and even edge computing to handle. Fog computing is intended to overcome these limitations [12].
H] Fog – Feasibility.
As billions of devices get connected to the Internet, it will not be sustainable to use the cloud as a centralized server. The way forward is to decentralize computation away from the cloud towards the edge of the network, closer to the user [13]. This reduces the latency of communication between a user device and the cloud, and is the premise of the "fog computing" defined in this paper. The aim of the paper is to highlight the feasibility of, and the benefits in, enhancing Quality-of-Service and Experience by using fog computing. For an online game use case, the authors found that the average response time for a user improves by 20% when using the edge of the network, compared with a cloud-only model. They also observed that the volume of traffic between the edge and the cloud server is reduced by more than 90% for this use case. These preliminary results highlight the potential of fog computing in achieving a sustainable computing model and the benefits of incorporating the edge of the network into the computing ecosystem.
3. PROPOSED WORK
The main contribution of this paper is the use of an IP-lookup algorithm on a fog computing base. In this project, we propose very small servers, known as "Nano data centers" or "Nano data servers" (NDCs), which play the role of fog nodes sited at end-user premises and run applications in a point-to-point fashion. A single device, e.g. a laptop or a desktop, plays the role of the "Main data center" or "Main data server" (MDC), the centralized data server of the entire system. With the entire setup, from cloud server to end-user devices, connected in a hierarchical manner, we identify various use cases in which the energy consumption of the NDCs and the MDC is calculated using the implemented "EC Computation Algorithm".
Main Datacenter: Main Data center is the server where
the applications are deployed. We show the MDC
configuration, load status, energy consumed in the idle state, current state, number of associated connections, and so on. If a Nano data center's threshold limit (the maximum number of requests it can handle) is exceeded, the Main data center processes the request. It accepts requests redirected from all sources, i.e. the different fog nodes, and it has a higher threshold limit than the Nano data centers.
E.g. any localhost or cloud server browsed on a device such as a laptop or desktop.
Nano Datacenter: The Nano data center is likewise a server where the applications are deployed. If the requesting IP address belongs to the region of a Nano data center and its threshold limit is not exceeded, the Nano data center processes the client request; otherwise it redirects the request to the Main data center. Nano data centers have limited capacity. We calculate the same parameters as for the MDC above. An NDC processes requests until its limit is exceeded, redirecting to the MDC as soon as its status turns from normal to overloaded. It has a lower threshold limit than the Main data center.
E.g. end-user devices, mobile applications, web applications, geo-satellite requesting devices, location detectors, a Raspberry Pi toolkit, etc.
Fig 2: System Architecture of Fog Computing.
The main purpose of fogging is to augment productivity and reduce the amount of information sent to the cloud for processing, analysis, and storage. This is usually done to improve efficiency, but it may also be done for security and compliance reasons. Prominent fog-computing applications include the smart grid, smart cities, smart buildings, vehicular networks, and software-defined networks. The metaphor "fog" originates from the meteorological term for a cloud close to the ground, just as fog computing concentrates at the edge of the network.
With the help of this implementation we can observe the real-time energy computation and power consumption in two scenarios:
1) System with one MDC and two NDCs.
The system consists of a centralized data center and the Nano data centers relying on the Main data center. The parameters calculated include
Pmax, Cmax, and the current load for the MDC and the NDCs respectively.
2) System when all NDCs are transformed into MDCs.
As soon as the threshold limit of an NDC is hit, the NDC is promoted to an MDC server with increased throughput and capacity. The system then consists of a centralized data center with the Nano data centers working as MDCs. The same parameters, Pmax, Cmax, and current load, are calculated for all the MDCs.
3.1 Algorithms and Techniques
• Energy consumption (EC) computation algorithm
• IP lookup
  o Modified Elevator-Stairs Algorithm
• Web services
1) Efficient IP Lookup Algorithm.
With today's heavy Internet traffic, backbone routers must be capable of forwarding packets at multi-gigabit-per-second speeds. IP address lookup therefore plays a key role in forwarding packets from source to destination in high-speed networks, and it is a very challenging task. To cope with gigabit-per-second traffic rates, backbone routers must be able to forward millions of datagrams per second on each of their ports. Fast IP address lookup in the routers, which uses the datagram's destination address to determine the next hop for each datagram, is therefore vital to achieve the required forwarding rates. Additionally, a packet may traverse many routers before it reaches its destination; consequently, a delay reduction of even microseconds brings a substantial cut in the time to reach the destination. IP address lookup is difficult because it requires a longest-matching-prefix search. Numerous lookup algorithms are available for finding the longest matching prefix; one such is the Elevator-Stairs algorithm. Some high-end routers implement lookup with hardware parallelism using TCAM. However, TCAM is a great deal more costly in terms of circuit complexity as well as power consumption. Efficient algorithmic solutions implemented on network processors are therefore essential as low-cost alternatives.
Among the state-of-the-art algorithms for IP address lookup, a binary search based on a balanced tree is effective in providing a low-cost solution. To construct a balanced search tree, prefixes with a nesting relationship must be converted into completely disjoint prefixes. We propose a small balanced tree using entry reduction for the IP-lookup algorithm:
- Take the specified IP address.
- Split it into its segments.
- Take the 1st segment and search at the root level.
- Speed calculation: the array at the root level is indexed directly, so a lookup is applied to it.
- Consider the 1st segment node as 224.
- Work with only the one subtree whose root is 224.
- If the data is very dense, we keep an entire array at all the other levels as well.
- More likely, we use a dictionary for each kind of sub-level.
- If the next segment is 201, look up 201 in the dictionary of the 224 node.
- Now the possible list of candidates is just 64K items (i.e. all IP addresses of the form 224.201.x.x).
- Repeat the above process for the next two levels.
- The result is that an IP address resolves in just 4 lookups: 1 lookup in an array and 3 dictionary lookups.
This structure is also very easy to maintain. Inserting a new address or range requires at most four lookups and adds, and the same holds for deleting. Updates can be done in place, without rebuilding the entire tree. Take care that a read and an update do not run on the same instance: no concurrent updating should happen, while concurrent reads can be served.
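The steps above can be sketched as a small data structure: a 256-entry array indexed by the first octet, then one dictionary per remaining octet, so any IPv4 address resolves in at most four lookups. This is our own exact-match sketch under our own naming; the paper's longest-prefix matching (e.g. via the Elevator-Stairs algorithm) would extend it with prefix lengths.

```python
# Sketch of the array-plus-dictionaries lookup structure described above.
# Resolving an address costs one array access and three dictionary accesses.

def build_table(routes):
    """routes: {'a.b.c.d': next_hop}. Returns the level-1 array of dicts."""
    root = [None] * 256                    # level 1: direct array indexing
    for addr, hop in routes.items():
        o1, o2, o3, o4 = (int(x) for x in addr.split("."))
        if root[o1] is None:
            root[o1] = {}
        level2 = root[o1]
        level3 = level2.setdefault(o2, {})  # dictionaries for the sub-levels
        level4 = level3.setdefault(o3, {})
        level4[o4] = hop
    return root

def lookup(root, addr):
    o1, o2, o3, o4 = (int(x) for x in addr.split("."))
    node = root[o1]                        # lookup 1: array
    if node is None:
        return None
    node = node.get(o2)                    # lookup 2: dictionary
    if node is None:
        return None
    node = node.get(o3)                    # lookup 3: dictionary
    if node is None:
        return None
    return node.get(o4)                    # lookup 4: dictionary

# Illustrative entries (addresses and next-hop names are made up).
table = build_table({"224.201.10.7": "fog-gw-1", "224.201.99.3": "fog-gw-2"})
print(lookup(table, "224.201.10.7"))   # fog-gw-1
print(lookup(table, "10.0.0.1"))       # None
```

Inserting or deleting an entry touches the same four levels, which is why updates can be done in place without rebuilding the structure.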
2) Web Services.
We use some basic runtime applications that issue requests to the data centers on which they are deployed. The web services can be any system-based application used remotely by a wide number of users at distant locations. We build a WAR file of the respective application and deploy it to the workspace and localhost server of the various devices. In this case the devices can be different laptops, mobile devices, Raspberry Pi kits, etc. All these devices should be connected to the same LAN so that the requests are not disrupted.
3) EC Computation Algorithm.
4) Flow Chart of EC Computation Algorithm
Fig: EC Computation Algorithm.
4. RESULTS AND DISCUSSION.
NDCs consume a insignificant total of energy for around
some apps by moving data in the vicinity to client side
users and reducing the energy consumed over the
transport network.
M_Energy 1 is the existing system. Nano data center can
take the current load of only 11
Initially it can take amount of 5 units. (Pidle= 5 units).
Totally incoming connections, it can take is 3, for each
connection it takes the energy of 2 units.
So, we are calculating the energy by using the formula;
Current Load = Pidle + CE
= Pidle + Current Connection * E /
Connection
U Threshold is the total value for the energy it should be
less than the current system.
So, the Current load for the Servers are:
Current Load = Pidle + CE
Nano1 = 5 + (3 * 2) = 11
Nano2 = 5 + (1 * 2) = 7
Main = 50 + (100 *2) = 250
By this enhancement we can raise the values of the servers to those of main servers, so in the second table we use a Pmax of 600 for the Nano servers as well as the main
servers, while the current connections can be increased to 100 with the same initial energy.
Current Load = Pidle + CE
Main1 = 50 + (3 * 2) = 61
Main3 = 50 + (1 * 2) = 52
Main2 = 50 + (100 * 2) = 250
Pmax, the maximum energy of the Nano server, is given by: Pidle * Current Connections * E/Connection.
Cmax, the maximum energy for a connection, is given by: Pidle * E/Connection.
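The formulas above can be restated as a small sketch, using the paper's sample values. The function names are ours; Pmax and Cmax are computed exactly as literally defined in the text.

```python
# Sketch of the EC computation formulas with the paper's sample values.
# Current Load = Pidle + (current connections * energy per connection).

def current_load(p_idle, connections, energy_per_conn):
    return p_idle + connections * energy_per_conn

def p_max(p_idle, connections, energy_per_conn):
    # Pmax as defined in the text: Pidle * Current Connections * E/Connection
    return p_idle * connections * energy_per_conn

def c_max(p_idle, energy_per_conn):
    # Cmax as defined in the text: Pidle * E/Connection
    return p_idle * energy_per_conn

print(current_load(5, 3, 2))      # Nano1 -> 11
print(current_load(5, 1, 2))      # Nano2 -> 7
print(current_load(50, 100, 2))   # Main  -> 250
```

Each server's current load is then compared against U_Threshold to decide whether it stays normal or turns overloaded.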
5. CONCLUSION
Cloud computing has become the base of the new trend transforming the digital IT sector. For large-scale, small-scale, and enterprise customers everywhere, cloud services are taking root thanks to their wide range of capabilities and the advantages they bring to business revenue. With the increasing demand for cloud-based storage, applications, and services, network traffic, energy consumption, and routing are upcoming major concerns.
In this work, we studied, analyzed, and presented results comparing fog computing with cloud computing. We found that the energy consumption of the Nano servers, called fogs, was considerably lower when the applications are brought down to the network's edge on client-side devices. A detailed comparison of the energy consumed at the NDC level and the MDC level was carried out, with the outcome revealing that NDCs consumed less energy than MDCs for the same task in both scenarios.
6. FUTURE SCOPE.
Despite the contributions of the present work to the study of energy utilization in cloud computing and fog computing, various open research challenges remain to be tackled in order to advance the area further. The wide range of applications that fog can handle also remains to be evaluated. Since the energy-consumption modeling and estimation techniques proposed here can be applied to PaaS and IaaS, it would be worthwhile to study the energy utilization of PaaS and IaaS in end-user terminals, the transport network, and data centers. Moreover, our results are based on the energy utilization of applications during the use phase only; we did not consider the energy consumed by applications and services over their whole lifetime. Research taking a life-cycle perspective would be required to examine the total environmental footprint of the applications and services.
7. ACKNOWLEDGMENT
I would like to express my sincere gratitude to my advisors, Prof. Prabadevi B and Prof. Jeyanthi N, SITE School, VIT University, Vellore, for their continuous support during the survey and analysis for this research paper. I am much obliged for their supportive role and thankful for inspiring me to study this subject further. I am very glad to have had this experience of research-paper writing as a way to reach the concepts.
8. REFERENCES
[1]. Flavio Bonomi, Rodolfo Milito, Preethi Natarajan
and Jiang Zhu (2016) “Fog computing: A platform for
internet of things and analytics”
[2]. Bonomi, F., Milito, R., Zhu, J., Addepalli, S. (2012) "Fog computing and its role in the internet of things."
[3]. Pao, L., Johnson, K.(2009) “A tutorial on the
dynamics and control of wind turbines and wind farms”
In: American Control Conference.
[4]. Botterud, A., Wang, J. (2009) “Wind power
forecasting and electricity market operations.” In:
International Conference of 32nd International
Association for Energy Economics (IAEE), San
Francisco, CA
[5]. Cristea, V., Dobre, C., Pop, F.(2013) “Context-
aware environment internet of things.” Internet of
Things and Inter-cooperative Computational
Technologies for Collective Intelligence Studies in
Computational Intelligence, vol. 460, pp. 25–49
[6]. Haak, D. (2010) "Achieving high performance in smart grid data management." White paper, Accenture.
[7]. "Green cloud computing: Balancing energy in processing, storage, and transport" (2011)
[8]. Vytautas Valancius, Nikolaos Laoutaris, Laurent
Massoulié (2009) “ Greening the Internet with Nano
Data Centers”
[9]. Fatemeh Jalali, Kerry Hinton, Robert Ayre, Tansu Alpcan and Rodney S. Tucker "Fog Computing May Help to Save Energy in Cloud Computing" Centre for Energy-Efficient Telecommunications (CEET), The University of Melbourne, Australia.
[10]. Shanhe Yi, Cheng Li, Qun Li (2015) "A Survey of Fog Computing: Concepts, Applications and Issues" Department of Computer Science, College of William and Mary, Williamsburg, VA, USA.
[11]. An Tran Thien, Ricardo Colomo (2016) "A Systematic Literature Review of Fog Computing."
[12]. Amir Vahid Dastjerdi, Rajkumar Buyya, (2016)
“Fog Computing: Helping the Internet of Things Realize
its Potential”
[13]. Cisco and/or its affiliates (2015) "Fog Computing and the Internet of Things: Extend the Cloud to Where the Things Are"