IoT Workload Distribution Impact
between Edge and Cloud Computing
in a Smart Grid Application
Otávio Carvalho, Manuel Garcia, Eduardo Roloff, Emmanuell Diaz Carreño,
Philippe O. A. Navaux
Federal University of Rio Grande do Sul - Parallel and Distributed Processing Group
Latin America High Performance Computing Conference - CARLA 2017
Table of contents
1. Introduction
2. Architecture and Implementation
3. Evaluation
4. Conclusion and Future Work
Introduction - Motivation
• Smart Grids have the potential to save billions of dollars in energy
spending for both producers and consumers.
• The Internet of Things carries a significant potential economic impact.
• Technologies created for IoT are driving computing toward dispersion:
  • Edge Computing
  • Cloudlets
  • Micro-datacenters
  • Fog Nodes
Introduction - Main goals
• Explore the potential performance improvements of moving
computation from the cloud to the edge in a Smart Grid application.
1. What are the limits of our application architecture in terms of
latency and throughput?
2. To what extent is it possible to move our workload from cloud to
edge nodes?
3. Which strategies can be used to reduce the amount of data that is
sent to the cloud?
Architecture and Implementation
• Three-layered architecture:
  • Cloud-layer
    • High-latency processing.
    • Receives aggregated data from multiple edge nodes.
    • Composed of applications running on Linux VMs on Windows Azure.
  • Edge-layer
    • Low-latency processing.
    • Receives data from multiple sensors and performs local processing.
    • Reduces the amount of data that needs to be sent to the Cloud-layer.
    • Composed of ARM nodes (Raspberry Pi Zero W) connected via Wi-Fi.
  • Sensor-layer
    • Measurements only.
    • Produces a large volume of measurements that must be sent to the
      Edge-layer for aggregation.
    • For evaluation purposes, sensor measurements are pre-loaded into
      the Edge-layer nodes.
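The Edge-layer's data-reduction role can be sketched in Go (the language the evaluation's goroutines imply). This is a minimal illustration, not the paper's implementation; the `Measurement` type and `aggregateWindow` function are hypothetical names:

```go
package main

import "fmt"

// Measurement is a hypothetical sensor reading from the Sensor-layer.
type Measurement struct {
	SensorID string
	Watts    float64
}

// aggregateWindow reduces a window of raw measurements to a single
// average, so one record per window is forwarded to the Cloud-layer
// instead of every raw sample.
func aggregateWindow(window []Measurement) float64 {
	if len(window) == 0 {
		return 0
	}
	sum := 0.0
	for _, m := range window {
		sum += m.Watts
	}
	return sum / float64(len(window))
}

func main() {
	window := []Measurement{
		{"meter-1", 100}, {"meter-1", 110}, {"meter-1", 120},
	}
	fmt.Printf("forward to cloud: avg=%.1f (1 record instead of %d)\n",
		aggregateWindow(window), len(window))
}
```

The point of the sketch is the ratio: each window of n raw measurements becomes one aggregated record sent upstream.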
Architecture and Implementation
Figure 1: Architecture overview: Three-layered architecture
Evaluation - Communication
Figure 2: PingPong: Latency Percentiles by Message Size (32KB to 1MB). Axes: percentile (50th, 90th, 99th) vs. latency (ms).
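A latency-percentile measurement like the one in Figure 2 can be computed with the nearest-rank method. The sketch below is illustrative Go, assuming round-trip latencies have already been collected; the sample values are made up:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// percentile returns the p-th percentile of samples using the
// nearest-rank method; samples need not be pre-sorted.
func percentile(samples []float64, p float64) float64 {
	s := append([]float64(nil), samples...)
	sort.Float64s(s)
	rank := int(math.Ceil(p / 100.0 * float64(len(s))))
	if rank < 1 {
		rank = 1
	}
	return s[rank-1]
}

func main() {
	// Hypothetical round-trip latencies (ms) for one message size.
	latencies := []float64{120, 95, 310, 150, 105, 980, 130, 115, 2400, 140}
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("p%.0f = %.0f ms\n", p, percentile(latencies, p))
	}
}
```

Reporting the 50th, 90th, and 99th percentiles rather than a mean is what makes tail-latency effects (visible in the 99th-percentile bars of Figure 2) stand out.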
Evaluation - Communication
Figure 3: PingPong: Maximum Throughput by Message Size (32KB to 1MB). Axes: message size vs. throughput (QPS).
Evaluation - Application concurrency
Figure 4: Concurrency Analysis: Impact of goroutine usage on throughput (Edge and Cloud nodes). Axes: concurrency (1 to 100 goroutines) vs. throughput (QPS).
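The concurrency sweep in Figure 4 varies the number of goroutines consuming a shared work queue. A minimal worker-pool sketch of that pattern follows; `process` is a hypothetical stand-in for the per-message work done at an edge or cloud node:

```go
package main

import (
	"fmt"
	"sync"
)

// process stands in for per-message work at an edge or cloud node.
func process(msg int) int { return msg * 2 }

// run fans messages out over n goroutines and returns the number of
// messages processed, mimicking the concurrency sweep in Figure 4.
func run(n, messages int) int {
	jobs := make(chan int)
	var wg sync.WaitGroup
	var mu sync.Mutex
	done := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for msg := range jobs {
				process(msg)
				mu.Lock()
				done++
				mu.Unlock()
			}
		}()
	}
	for m := 0; m < messages; m++ {
		jobs <- m
	}
	close(jobs)
	wg.Wait()
	return done
}

func main() {
	for _, n := range []int{1, 10, 100} {
		fmt.Printf("goroutines=%d processed=%d\n", n, run(n, 1000))
	}
}
```

Timing each `run` call and dividing messages by elapsed seconds would yield the QPS figures plotted for the 1, 10, and 100 goroutine configurations.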
Evaluation - Application scalability
Figure 5: Scalability Analysis: Throughput with multiple consumers (1 to 4 edge nodes). Axes: number of edge nodes vs. throughput (QPS).
Evaluation - Workload windowing
Figure 6: Windowing Analysis: Windowing impact on throughput (1 to 1000 messages per request). Axes: number of edge nodes vs. throughput (QPS).
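The windowing idea behind Figure 6 is to batch many measurements into a single request, amortizing per-request overhead. A simple Go sketch of that batching step, with the `batch` helper as a hypothetical name:

```go
package main

import "fmt"

// batch groups msgs into requests of at most size messages each,
// mirroring the windowing sweep (1 to 1000 messages per request).
func batch(msgs []int, size int) [][]int {
	var out [][]int
	for len(msgs) > 0 {
		n := size
		if n > len(msgs) {
			n = len(msgs)
		}
		out = append(out, msgs[:n])
		msgs = msgs[n:]
	}
	return out
}

func main() {
	msgs := make([]int, 1000)
	for _, w := range []int{1, 10, 100, 1000} {
		fmt.Printf("window=%d requests=%d\n", w, len(batch(msgs, w)))
	}
}
```

With a window of 1000, the same 1000 messages cost one request instead of 1000, which is why throughput in Figure 6 grows so sharply with window size.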
Conclusion and Future Work
• Conclusion
  • The application achieved higher throughput by leveraging
    processing on edge nodes.
  • Communication with the cloud was reduced by aggregating data at
    the edge level.
• Future Work
  • Study how other communication protocols (such as MQTT) would
    behave in this application context.
  • Explore techniques and models for adaptive workload scheduling.
  • Evolve the application architecture into a general framework for IoT.
Thanks! Questions?
