The computing continuum extends high-performance cloud data centers with energy-efficient and low-latency devices close to the data sources located at the edge of the network. However, the heterogeneity of the computing continuum raises multiple challenges related to application and data management. These include (i) how to efficiently provision compute and storage resources across multiple control domains of the computing continuum, (ii) how to decompose and schedule an application, and (iii) where to store an application's source and the related data. To support these decisions, we explore in this thesis novel approaches for (i) resource characterization and provisioning with detailed performance, mobility, and carbon footprint analysis, (ii) application and data decomposition with increased reliability, and (iii) optimization of application storage repositories. We validate our approaches with a selection of use case applications with complementary resource requirements across the computing continuum on a real-life evaluation testbed.
1. Cloud, Fog or Edge: Where and when to compute?
Dragi Kimovski
Institute for Information Technology
Klagenfurt University
Introductory talk for the habilitation procedure
18.12.2020
2. • https://itec.aau.at/research/distributed-and-parallel-systems/
• Research Topics
– Parallel and distributed systems
– Cloud computing and simulation
– Fog/Edge computing
– High-performance computing
– Resource management and scheduling
– Programming and runtime environments
– Energy efficiency
The research group - DPS
3. About me
Dragi Kimovski
Postdoc assistant with “Zielvereinbarung”
Previous positions:
• 2016-2018 Senior Researcher and Lecturer, University
of Innsbruck, Austria
• 2013-2019 Assistant Professor, University of
Information Science and Technology, Macedonia
• 2009-2013 Teaching Assistant, University of
Information Science and Technology, Macedonia
Visiting positions:
• University of Michigan - Ann Arbor, US
• University of Bologna, Italy
• University of Granada, Spain
Project experience:
• H2020 Entice project (2016-2018), WP leader and integration
manager
• H2020 ASPIDE (2018-2021), Scientific coordinator
• H2020 DataCloud (2021-2023), WP leader
• OeAD AtomicFog (2018-2020), Coordinator
Publications:
Over 45 publications in journals and conferences
4. Dragi Kimovski
Postdoc assistant with “Zielvereinbarung”
Teaching experience:
• Distributed systems, Distributed computing
infrastructures, Cloud computing and IoT,
Advanced programming in C++ @ University of
Klagenfurt, Austria
• Parallel Systems, Distributed Systems @ University
of Innsbruck, Austria
• Parallel computing, High performance computing,
Computing organization, Computer networks,
Computing systems configuration, Programming
@ University of Information Science and
Technology, Macedonia
• Parallel processing @ Technical University of Sofia,
Bulgaria
Supervision:
• 13 Bachelor theses
• 3 Master theses
• 1 PhD thesis
About me
6. • Latency to reach the Cloud data centers
can be unacceptably high.
• Executing applications physically closer to
the data sources can improve their
performance.
• The utilization of the so-called Computing
continuum can support the emerging
Internet of Things systems.
https://www.ntt.co.jp/news2014/1401e/140123a.html
The computing continuum
9. Added value of the computing continuum
• Reduction of the amount of data being sent to the Cloud
• Faster response times
• Support for new applications (smart cities, e-health, …)
• Improved data security
• Reduced CO2 emissions
10. Is the computing continuum “magical”?
• Can the computing continuum solve the computational issues related to the emerging Internet of Things applications?
• The answer is: well, not quite.
11. Issues (at least some of them)
• Heterogeneity of devices
• Devices can move
• Application management
• Various control domains
• Data management and distribution
16. The heterogeneity of the computing continuum
• The heterogeneity of the computing continuum raises multiple
application management challenges:
– where to offload an application from the cloud to the fog or to
the edge.
• Large diversity of the devices:
– single-board computers such as Raspberry Pis;
– powerful multi-processor servers.
17. Performance characterisation
• To answer this question, it is essential to characterize the performance of the resources across the computing continuum.
• We present an analysis of:
– computational and network performance;
– carbon emissions.
• The main goal is to support the decision process for offloading an
application.
18. The Carinthian Computing Continuum – C3
Table 1: Description of the resources available in the C3 testbed.

| Layer | Device / Instance | Architecture | (v)CPU | Memory [GiB] | Storage [GiB] | Network | Physical processor | Clock [GHz] | Operating system |
|---|---|---|---|---|---|---|---|---|---|
| Cloud | AWS t2.micro | 64-bit x86 | 1 | 1 | 32 | Moderate | Intel Xeon | 3.1 | Ubuntu 18.04 |
| Cloud | AWS c5.large | 64-bit x86 | 2 | 4 | 32 | 10 Gbps | Intel Xeon Platinum 8000 series | 3.6 | Ubuntu 18.04 |
| Cloud | AWS m5a.xlarge | 64-bit x86 | 4 | 16 | 32 | 10 Gbps | AMD EPYC 7000 series | 2.5 | Ubuntu 18.04 |
| Fog | Exoscale Tiny | 64-bit x86 | 1 | 1 | 32 | 10 Gbps | Intel Xeon | 3.6 | Ubuntu 18.04 |
| Fog | Exoscale Medium | 64-bit x86 | 2 | 4 | 32 | 10 Gbps | Intel Xeon | 3.6 | Ubuntu 18.04 |
| Fog | Exoscale Large | 64-bit x86 | 4 | 8 | 32 | 10 Gbps | Intel Xeon | 3.6 | Ubuntu 18.04 |
| Fog | ITEC Cloud Instance | 64-bit x86 | 4 | 8 | 32 | 10 Gbps | Intel Xeon Platinum 8000 | 3.1 | Ubuntu 18.04 |
| Edge | Edge Gateway System | 64-bit x86 | 12 | 32 | 32 | 10 Gbps | AMD Ryzen Threadripper 2920X | 3.5 | Ubuntu 18.04 |
| Edge | Raspberry Pi 3B | 64-bit ARM | 4 | 1 | 64 | 1 Gbps | Cortex-A53 | 1.4 | Pi OS Buster |
| Edge | Raspberry Pi 4 | 64-bit ARM | 4 | 1 | 64 | 1 Gbps | Cortex-A72 | 1.5 | Pi OS Buster |
| Edge | Jetson Nano | 64-bit ARM | 4 | 1 | 64 | 1 Gbps | Tegra X1 / Cortex-A57 | 1.43 | Linux for Tegra R28.2.1 |
2.2 Fog layer
The fog layer comprises computing infrastructures consolidated in small data centers in close vicinity to the data sources. This layer comprises resources from two providers in the C3 testbed [4]: Exoscale and the University of Klagenfurt. We allocate these providers to the fog layer as a result of the low round-trip communication latency (7 ms) and high bandwidth (10 Gbps). The Exoscale cloud comprises data centers in Vienna and Klagenfurt (Austria). We selected three computing-optimized x86-64 instances from the Exoscale cloud offering: Tiny, Medium and Large. The University of Klagenfurt provides a private cloud infrastructure operated by OpenStack
3 Benchmark applications
We selected three representative application classes with complementary requirements to evaluate the computational performance and the CO2 emissions of the computing continuum.

3.1 Video encoding
Video encoding allows transmission of video content with different qualities over limited and heterogeneous communication channels. It compresses an original raw video to reduce its effective bandwidth consumption, while maintaining a subjectively high quality for viewers. Video encoding has wide fields of application, including content delivery (live and on-demand video streams) and traffic control
19. Video encoding
• FFmpeg version 3.4.6 with the most popular H.264/MPEG-4 video encoder.
• Raw video segment with a length of 4 seconds and a size of 514 MB.
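As a sketch of how such an encoding run can be assembled in a benchmark driver (the input file name is hypothetical; the resolution and bitrate pairs follow the setup described in this talk, but the exact benchmark script is not shown here):

```python
# Building FFmpeg argument lists for the three target resolutions and
# data rates of the evaluation (720p/1500k, 1080p/3000k, 1440p/6500k).
RESOLUTIONS = {
    "720p":  ("1280x720",  "1500k"),
    "1080p": ("1920x1080", "3000k"),
    "1440p": ("2560x1440", "6500k"),
}

def encode_command(src, dst, name):
    """Return an ffmpeg argument list for one H.264 (libx264) encoding run."""
    size, bitrate = RESOLUTIONS[name]
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-s", size, "-b:v", bitrate, dst]

# hypothetical file names, for illustration only
cmd = encode_command("sintel_segment.y4m", "out_1080p.mp4", "1080p")
```

Each command list can then be handed to `subprocess.run` and timed, once per resolution and per device.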
[Figure 2a: Average encoding time in seconds per device/instance (Raspberry Pi 3B/4, Jetson Nano, EGS, Exoscale Tiny/Medium/Large, AWS t2.micro/c5.large/m5a.xlarge) for HD-Ready (720p), Full HD (1080p) and Quad HD (1440p).]
4 Performance evaluation
4.1 Video encoding
We evaluate the encoding performance of the computing continuum using FFmpeg version 3.4.6 with the most popular H.264/MPEG-4 video encoder, deployed by more than 90% of the video industry. We perform the encoding on a raw video segment with a length of 4 s and a size of 514 MB, available in the Sintel video-set. The video segment is encoded in three resolutions (HD-ready, Full HD and Quad HD) with data rates of 1500, 3000, and 6500 kbps.
Figure 2 depicts the average encoding time and transfer time, from the video source (located at the University of Klagenfurt) to the encoding device or instance, for a single raw video segment in the three resolutions. The standard deviation ranges from 1.3% for the AWS m5a.xlarge instance to 3.6% for the Raspberry Pi 3B devices. We observe that the older generation single-board computers (Raspberry Pi 3B) have a significantly higher encoding time than the other resources. However, the Raspberry Pi 3B devices provide lower transfer times than the cloud instances and are suitable for video-on-demand services employing offline encoding. The Raspberry Pi 4 and the Jetson Nano devices achieved the lowest encoding and transfer times due to the low utilization rate and their high computing and networking capabilities.
[Figure 2b: Average raw video segment transfer time in seconds per device/instance.]
Figure 2: Average encoding performance of a 4 s long video segment with the x264 codec and FFmpeg 3.4.6.
20. Machine learning
• TensorFlow training:
– A quantum neural network using the MNIST data-set limited to 20,000 samples with a size of 3.3 MB, reaching 90% accuracy;
– A convolutional neural network using a Kaggle data-set with a size of 218 MB, reaching 80% accuracy.
Although the training time is significantly higher for the convolutional neural network, the cloud and fog resources outperform the edge devices, except the EGS.
Recommendation. We recommend training models with large data-sets and multiple layers in the cloud or on dedicated systems (such as the EGS), whenever possible. We recommend offloading to the edge only when the training data is of limited size, or when the neural network has few layers.
[Figure 3a: Average training time in seconds per device/instance for the quantum and convolutional neural networks.]
The convolutional network has three layers with a kernel size of three. Each layer uses increasingly higher filter sizes in the range [32, 64, 128]. After each layer, we use a max-pooling sample-based discretization process to reduce the spatial dimensions. We repeat the training five times.
Figure 3 analyzes the average execution time for training the two neural network types and the transfer times of the training data from centralized storage to the device or instance that performs the training. The standard deviation ranges from 1.2% for the Raspberry Pi 4 devices to 5.4% for the AWS t2.micro instance. The evaluation shows that the less complex quantum neural network requires a relatively lower training time across all resources. The old generation single-board computers show again a lower performance, and their suitability for training heavily depends on the size of the training data and the model. The other fog and edge devices provide similar performance to the cloud resources. The single-board computers provide lower training performance for the convolutional network. The only exception are the Jetson Nano devices, able to train the convolutional network up to four times faster than the Raspberry Pi devices. In general, the EGS provides the lowest training time
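The spatial dimensions mentioned above shrink deterministically: each "valid" convolution with kernel size three removes two pixels per axis, and each 2×2 max-pooling halves the result. A minimal sketch, assuming a hypothetical 128×128 input (the actual input resolution of the Kaggle data-set is not stated here):

```python
def conv_output(size, kernel=3):
    """Spatial size after a 'valid' convolution (no padding, stride 1)."""
    return size - kernel + 1

def pool_output(size, pool=2):
    """Spatial size after non-overlapping max pooling."""
    return size // pool

size = 128                                # hypothetical input resolution
shapes = []
for filters in [32, 64, 128]:             # filter sizes from the setup above
    size = pool_output(conv_output(size)) # conv (kernel 3), then 2x2 max-pool
    shapes.append((size, size, filters))
# shapes now lists the feature-map dimensions after each conv+pool stage
```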
[Figure 3b: Average training data transfer time in seconds per device/instance for the quantum and convolutional neural network data-sets.]
Figure 3: Average training and data transfer times.
21. In-memory computing
• In-memory analysis in Apache Spark:
– Collaborative data filtering of a data-set with a size of 36.6 KB;
– π estimation.
Collaborative data filtering aims to fill missing values for improved recommendations to customers. The model uses the alternating least squares algorithm and a data-set of movie preferences with a size of 36.6 KB. We trained the model over the data-set with a cold start strategy that randomly divides the data into training and validation sets.
π estimation is a memory- and computation-intensive task that estimates the value of π by distributing the work among the Spark executors. This enables us to assess the computational and memory performance of the distributed memory across the continuum for complex tasks. Figure 4 shows the average execution time for in-memory collaborative data filtering…
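The π estimation kernel can be sketched without Spark itself: the same embarrassingly parallel Monte Carlo task that Spark would split across its executors, here run in a single process for illustration:

```python
import random

def estimate_pi(samples, seed=0):
    """Estimate pi by sampling points in the unit square and counting the
    fraction that falls inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(samples))
    return 4.0 * inside / samples

pi_hat = estimate_pi(100_000)
```

In Spark, the sample count would be partitioned over the executors and the partial `inside` counts reduced with a sum; the arithmetic is identical.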
[Figure 4: Average execution time in seconds per device/instance for collaborative filtering and π calculation.]
Figure 4: Average execution time for in-memory collaborative data filtering and π estimation using Apache Spark.
22. Carbon emissions analysis
• We evaluate the power consumption of the physical devices used
for the convolutional neural network training in TensorFlow.
– We use a digital multimeter to physically measure the average
electrical current;
– We rely on an AWS research report to approximate the power
consumption of the fog devices and cloud instances.
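From the measured power, the carbon footprint follows by simple arithmetic: power times runtime gives energy, which is multiplied by the grid carbon intensity. A minimal sketch; the power, runtime, and intensity values below are purely illustrative, not the figures used in the study:

```python
def carbon_grams(power_watts, runtime_s, intensity_g_per_kwh):
    """CO2 in grams for one training run: energy in kWh times the grid
    carbon intensity in g CO2 per kWh."""
    energy_kwh = power_watts * runtime_s / 3_600_000  # W*s -> kWh
    return energy_kwh * intensity_g_per_kwh

# e.g. a single-board computer drawing 10 W for a 30-minute training run
# on a hypothetical grid emitting 400 g CO2 per kWh
g = carbon_grams(power_watts=10.0, runtime_s=1800, intensity_g_per_kwh=400)
```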
[Figure 6: Carbon footprint in grams of CO2 per device/instance for training a neural network.]
23. What we learned
• We compiled a set of recommendations for practitioners on where
to offload their applications across the computing continuum.
• However, is this enough for proper application management?
Recommendations for application offloading across the computing continuum:

| Application | Low network load | Low execution time | Low CO2 emissions |
|---|---|---|---|
| Video encoding | Edge/Fog | Cloud | Edge |
| Machine learning | Edge | Cloud/Fog | Edge |
| In-memory analytics | Cloud/Fog | Cloud | Edge |
Research papers:
1. Dragi Kimovski, Josef Hammer, Narges Mehran, Hermann Hellwagner and Radu Prodan, “Cloud, Fog or Edge: Where to Compute”, IEEE Computing Magazine, 2nd revision, in review.
2. Roland Matha, Dragi Kimovski, Anatoliy Zabrovsky, Christian Timmerer and Radu Prodan, “Where to Encode: A Performance Analysis of x86- and Arm-based Amazon EC2 Instances”, CCGrid 2021, in review.
25. Motivation
• To efficiently place complex distributed applications' workflows in heterogeneous environments with limited computing and storage capacity.
26. Approach
• To cope with the complexity of the problem, we apply a multi-objective approach.
• We search for a set of non-dominated schedules/placements of applications on the continuum.
• An automated decision-making module selects an appropriate solution.
27. Approach
• Objectives:
– Time → lower completion time;
– Energy → reduce energy consumption;
– Cost → reduce monetary cost.
• Constrained to devices in the computing continuum that tend to move (roam) less.
28. Objectives
• Time objective:
– the computation time;
– the time required for transferring data among components.
29. Objectives
• Energy objective:
– energy for executing the component on a device;
– energy for receiving data;
– static energy consumption for keeping the device active.
30. Objectives
• Cost objective:
– processing the application on CPU_j;
– storing the data on STOR_j;
– communication cost between resources.
31. The scheduling and placement problem
The application components placed onto the Cloud – Edge resources form the set M = {m_i | i ∈ N}. Each decision variable is the placement of one component m_i onto a resource: plc(m_i). The goal is to find a placement plc(A) for an application A that assigns all its components to the set R of resources and minimizes the three objectives:

    f1(T) = min_{plc(A)=R} T(A, R);
    f2(E) = min_{plc(A)=R} E(A, R);    (11)
    f3(C) = min_{plc(A)=R} C(A, R).

Searching for an optimal placement plc(A) results in a set of solutions, which must satisfy the processing, memory and storage constraints of the device r_j = (CPU_j, MEM_j, STOR_j) assigned to each component m_i, and the movement probability of a device r_j within a given time window W:

    CPU(m_i) < CPU_j;
    MEM(m_i) < MEM_j;    (12)
    STOR(m_i) < STOR_j;
    P^W_{r_j}(s) < k_m.

• We utilize NSGA-II to tackle this NP-complete optimization problem.
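The non-dominance relation behind Eq. (11) and the feasibility check of Eq. (12) can be sketched in a few lines. This is an illustrative toy over (time, energy, cost) triples, with hypothetical component/device dictionaries, not the NSGA-II implementation used in the work:

```python
def feasible(component, device, k_m):
    """Constraint check in the spirit of Eq. (12): resource demands must
    fit the device, and its movement probability must stay below k_m."""
    return (component["cpu"] < device["cpu"] and
            component["mem"] < device["mem"] and
            component["stor"] < device["stor"] and
            device["move_prob"] < k_m)

def dominates(a, b):
    """a dominates b if it is no worse in all objectives and strictly
    better in at least one (objectives are minimized)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return [s for i, s in enumerate(solutions)
            if not any(dominates(o, s)
                       for j, o in enumerate(solutions) if j != i)]

# toy (time, energy, cost) triples for three candidate placements
front = pareto_front([(10, 5, 1.0), (12, 4, 1.2), (11, 6, 1.5)])
```

NSGA-II repeatedly applies this dominance relation (plus crowding-distance sorting) to evolve a population of placements toward the Pareto front.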
33. Mobility constraint
• Improve application mapping and resource allocation by addressing the mobility characteristics of the Fog/Edge devices in the computing continuum.
• Predicting the mobility allows us to optimize:
– Application placement: select the device with the lowest mobility;
– Priority scheduling: assign “harder” tasks to less mobile devices and vice versa.
34. Mobility characterization
• We model the devices r_j in the continuum through (first-order) discrete Markov chains (MC).
• A MC calculates the probability of a system reaching a particular state in the future.
• It consists of (already specialized):
– a finite set of states {S_d, S_c, S_r}:
S_d – Disconnected;
S_c – Connected;
S_r – Roamed.
[Diagram: example Markov chain over the three states with its transition probabilities.]
35. Markov chain mobility prediction
• Given a MC for a (w, d)-tuple and the functions:
– index(r) := returns the lowest index containing the maximum of the vector r ∈ [0, 1]^{1×3};
– state(index) := S_d if index = 1, S_c if index = 2, S_r if index = 3.
• We can derive the next state that is most likely to occur by:

    s_{i+1} = state(index(π · (P^W_{r_j})^i)),

where i ∈ ℕ is the number of transitions and (P^W_{r_j})^i = P^W_{r_j} · … · P^W_{r_j} (i times).
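The prediction rule above amounts to propagating the initial state distribution π through the transition matrix and taking the argmax. A minimal sketch with a made-up transition matrix (the real matrices are learned per device and time window):

```python
def mat_vec(pi, P):
    """Row vector pi times transition matrix P."""
    return [sum(pi[k] * P[k][j] for k in range(len(pi)))
            for j in range(len(P[0]))]

def predict_state(pi, P, steps, states=("Sd", "Sc", "Sr")):
    """Most likely state after `steps` transitions: propagate pi through
    P and take the argmax (lowest index wins ties, as index() above)."""
    for _ in range(steps):
        pi = mat_vec(pi, P)
    return states[pi.index(max(pi))]

# hypothetical transition matrix over Disconnected, Connected, Roamed
P = [[0.6, 0.4, 0.0],
     [0.1, 0.7, 0.2],
     [0.0, 0.5, 0.5]]
# device currently Connected: where is it most likely to be next?
next_state = predict_state([0.0, 1.0, 0.0], P, steps=1)
```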
36. Case studies
Insulin pump and mental health care

Algorithm 3 mMAPO automated decision making
1: function DECISION_MAKING(Y)
2:   Input: OPV (objective priority vector)
3:   IC ← initiate_centroids(OPV)
4:   CP ← cluster_pareto(Y, IC)
5:   return select_pareto_solution(CP)
6: end function
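As a rough illustration of the decision-making step, the sketch below replaces the centroid clustering of Algorithm 3 with a simpler priority-weighted selection over normalized objectives; the function name and the priority vector are hypothetical, not the mMAPO implementation:

```python
def select_solution(pareto, priorities):
    """Pick the Pareto solution with the lowest priority-weighted sum of
    min-max normalized objectives (a simplified stand-in for the
    cluster_pareto / select_pareto_solution steps of Algorithm 3)."""
    lows = [min(s[i] for s in pareto) for i in range(len(priorities))]
    highs = [max(s[i] for s in pareto) for i in range(len(priorities))]

    def score(s):
        return sum(w * (v - lo) / (hi - lo if hi > lo else 1.0)
                   for v, lo, hi, w in zip(s, lows, highs, priorities))

    return min(pareto, key=score)

# time-dominated priority vector over (time, energy, cost)
best = select_solution([(10, 5, 1.0), (12, 4, 1.2)],
                       priorities=(0.8, 0.1, 0.1))
```

With a time-heavy priority vector the faster placement wins even though it consumes more energy, which mirrors the role of the objective priority vector OPV.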
IoT micro-sensors embedded in the patient body [33]. The
sensors send the blood sugar information to the resources
executing machine learning algorithms to create a model
that identifies the patient state variation and computes
the proper level of insulin upon abnormal state detection.
Afterwards, the pump controller receives the information
through eight components interacting as in Fig. 4:
• Compute the blood sugar level of the patient;
• Compute the insulin level and store it in a remote database;
• Retrieve patient records from the database;
• Review values of a patient for taking proper decisions;
• Send the blood sugar to the doctor for review of the insulin intake;
• Send review results back with the proper insulin dose based on the patient history;
• Compute the pump command and adjust the miniaturized pump pressure to avoid the risk of falling into a coma;
• Control the insulin pump needle for delivering the correct dose.
6.2 Mental healthcare
This application manages near real-time information of pa-
tients who suffer from mental disorders [33] in a number of
UK hospitals (Fig. 5). Due to privacy concerns, the patients
may not always attend the same clinic and need support
through appointments and emergency services:
• Determine mental state for a specific patient’s record;
• Decompose the safety concern for the patient to prevent
Figure 4. Insulin pump application. [Diagram of the interacting components listed above.]
Figure 5. Mental healthcare application. [Diagram of components: smart wearable device, determine mental states, decompose the safety concern, mental health act, find the closest clinic, generate record for medical staff, emergency services, summarize, view patient history.]
7 EXPERIMENTAL SIMULATION
We implemented the mMAPO Pareto analysis algorithm in the jMetal [34] framework and integrated it within the Scheduler of the ASKALON environment. We created elaborate simulation scenarios using iFogSim [14], which considers the computational and storage characteristics of both the Edge devices and the Cloud virtual machine instances.

7.1 Experimental design
We evaluated the benefits of mMAPO for application placement compared to three state-of-the-art methods: 1) Fog Service Placement Problem (FSPP) [11], based on a linear integer programming model focused on reducing the economic cost and improving resource utilization; 2) Edge-ward delay-priority (EW-DP) [8], which implements a hierarchical best-fit algorithm to cope with user mobility; and 3) Best-fit Queue (BQ) [12], a queuing algorithm that reduces the completion time by primarily using Edge devices. We considered the completion time, energy consumption and economic cost for executing a request from the IoT sensors until the final data collection at another device or end-user.
Table 1: Resource requirements per component.

| Application | CPU [MI] | MEM [MB] | Storage [MB] |
|---|---|---|---|
| Insulin pump | 200 – 2000 | 10 – 60 | 256 – 1024 |
| Mental healthcare | 200 – 2000 | 10 – 50 | 256 – 512 |
37. Evaluation mapping (real-world)
Figure 3: Insulin pump application completion time, energy consumption, and economic cost for different data size
Figure 4: Mental health care application completion time, energy consumption, and economic cost for different CPU load
38. Evaluation (mobility prediction)
Figure 5: Experimental evaluation of the Markov chain single-transition mobility prediction model
Figure 6: Average request failure probability
Figure 11. Mental healthcare application time, energy, and cost for different CPU workloads.
Figure 12. Experimental evaluation of the Markov chain single-transition mobility prediction model.
9.4 Request failure probability
This section evaluates the effect of mobility on the request failure probability for executing the mental health application. We performed a series-based reliability analysis [39] that identifies the average time spent by the Edge devices in a connected Sc or roamed Sr state in the four time windows W defined in Section 9.2: Morning (MO), Forenoon (FN), Afternoon (AN) and Night (NI). The empirical analysis shows that the average connected or roamed time T for each time window W is 2790 s, 2888 s, 3322 s and 2442 s, respectively. We therefore utilize this information to create a probability function for a request failure f in
[…] failure rate. In contrast, FSPP primarily relies on the Cloud infrastructures with limited use of Edge devices, which results in a relatively low failure probability of around 35%. The introduction of the mobility prediction reduced the failure probability of mMAPO to less than 10%. The high probability of a request failure in EW-DP, BQ and FSPP indirectly increases the energy and execution costs by a factor of two to six compared to mMAPO.
9.5 Complexity and quality analysis
We investigate the ability of mMAPO to provide optimized
Figure 13. Average request failure probability for the mental healthcare application.
39. Summary
• We introduced a multi-objective application placement method.
• It utilizes a Markov-chain based model for predicting when a
Fog/Edge device might move (roam) within the computing
continuum.
• The model can achieve:
• up to 80% lower request execution failure;
• up to six times lower application completion times;
• up to 28% cheaper execution costs;
• reduction of the energy requirements by up to 40%.
Research papers:
1. Vincenzo De Maio and Dragi Kimovski, “Multi-objective scheduling of extreme data scientific workflows in Fog”, Future Generation Computer Systems, Volume 106, 2020, pp. 171-184.
2. Narges Mehran, Dragi Kimovski and Radu Prodan, “MAPO: A Multi-Objective Model for IoT Application Placement in a Fog Environment”, Proceedings of the 9th International Conference on the Internet of Things, 2019, pp. 1-8.
3. Dragi Kimovski, Narges Mehran, Christopher Kerth and Radu Prodan, “Mobility-Aware IoT Application Placement in the Cloud – Edge Continuum”, IEEE Transactions on Services Computing, 2nd revision, in review.
42. References
Research papers:
1. Dragi Kimovski, Josef Hammer, Narges Mehran, Hermann Hellwagner and Radu Prodan, “Cloud, Fog or Edge: Where to Compute”, IEEE Computing Magazine, 2nd revision, in review.
2. Roland Matha, Dragi Kimovski, Anatoliy Zabrovsky, Christian Timmerer and Radu Prodan, “Where to Encode: A Performance Analysis of x86- and Arm-based Amazon EC2 Instances”, CCGrid 2021, in review.
3. Vincenzo De Maio and Dragi Kimovski, “Multi-objective scheduling of extreme data scientific workflows in Fog”, Future Generation Computer Systems, Volume 106, 2020, pp. 171-184.
4. Narges Mehran, Dragi Kimovski and Radu Prodan, “MAPO: A Multi-Objective Model for IoT Application Placement in a Fog Environment”, Proceedings of the 9th International Conference on the Internet of Things, 2019, pp. 1-8.
5. Dragi Kimovski, Narges Mehran, Christopher Kerth and Radu Prodan, “Mobility-Aware IoT Application Placement in the Cloud – Edge Continuum”, IEEE Transactions on Services Computing, 2nd revision, in review.
6. Dragi Kimovski, Julio Ortega, Andres Ortiz and Raul Banos, “Parallel alternatives for evolutionary multi-objective optimization in unsupervised feature selection”, Elsevier Expert Systems with Applications, No. 9/Vol. 42, 2015.
7. Dragi Kimovski, Julio Ortega, Andres Ortiz and Raul Banos, “Leveraging cooperation for parallel multi-objective feature selection in high-dimensional EEG data”, Concurrency and Computation: Practice and Experience, No. 18/Vol. 27, 2015.
8. Dragi Kimovski, Julio Ortega, Andres Ortiz and Raul Banos, “Feature selection in high-dimensional EEG data by parallel multi-objective optimization”, 2014 IEEE International Conference on Cluster Computing, 2014.
9. Nishant Saurabh, Dragi Kimovski, Francesco Gaetano and Radu Prodan, “A Two-Stage Multi-Objective Optimization of Erasure Coding in Overlay Networks”, CCGrid 2017, Madrid, Spain, 2017.
10. Dragi Kimovski, Attila Marosi, Sandi Gec, Nishant Saurabh, Atilla Kertezs, Gabor Kecskemeti, Vlado Stankovski and Radu Prodan, “Distributed Environment for Efficient Virtual Machine Image Management”, Concurrency and Computation: Practice and Experience.
11. Sandi Gec, Dragi Kimovski, Uros Pascinski, Radu Prodan and Vlado Stankovski, “Semantic approach for multi-objective optimisation of the ENTICE distributed Virtual Machine and container images repository”, Concurrency and Computation: Practice and Experience.
12. Dragi Kimovski, Nishant Saurabh, Vlado Stankovski and Radu Prodan, “Multi-Objective Middleware for Distributed VMI Repositories in Federated Cloud Environment”, Scalable Computing: Practice and Experience, No. 4/Vol. 17, 2016.
13. Dragi Kimovski, Nishant Saurabh, Sandi Gec, Polona Stefancic, Gabor Kecskemeti, Vlado Stankovski, Radu Prodan and Thomas Fahringer, “Towards Decentralized Repository Services for Efficient and Transparent Virtual Machine Operations: The ENTICE Approach”, IEEE International Conference on Cloud Networking, CloudNet 2016, Pisa, Italy, October 2016.
43. Questions?
• You can also send me your questions after the presentation at dragi.kimovski@aau.at