AdaM: an Adaptive Monitoring Framework for Sampling and Filtering on IoT Devices
1. AdaM: an Adaptive Monitoring Framework for Sampling and Filtering on IoT Devices
IEEE International Conference on Big Data 2015 (BigData 2015)
Oct 29 – Nov 01, 2015 @ Santa Clara, CA, USA
Demetris Trihinas, George Pallis, Marios D. Dikaiakos
University of Cyprus
{trihinas, gpallis, mdd}@cs.ucy.ac.cy
2. IoT was initially about devices sensing and exchanging data streams with humans or other network-enabled devices
3. Edge-Mining
A term coined to reflect data processing and decision-making on
“smart” devices that sit at the edge of IoT networks
[Figure: a smart washing machine ordering “Buy more detergent”]
…our devices just got a little bit “smarter”…
4. The “Big Data” in IoT
Challenges
• Taming data volume and data velocity with limited processing and network capabilities
• IoT devices are usually battery-powered, which means intense processing leads to increased energy consumption (less battery life)
Zhang et al., Usenix HotCloud, 2015
[IDC, Big Data in IoT, 2014]
6. Metric Stream
• A metric stream $M = \{s_i\}_{i=0}^{n}$ is a large sequence of collected samples, denoted as $s_i$, where $i = 0, 1, \ldots, n$ and $n \to \infty$
• Each sample $s_i$ is a tuple $(t_i, v_i)$ described by a timestamp $t_i$ and a value $v_i$
[Figure: metric stream M with consecutive samples $s_i$, $s_{i+1}$]
7. Periodic Sampling
• The process of triggering the collection mechanism of a monitored source every $T$ time units, such that the $i$-th sample is collected at time $t_i = i \cdot T$ (see the sketch below)
[Figure: metric stream M sampled every T = 1s; compute resources and energy are wasted while generating large data volumes at a high velocity]
[Figure: metric stream M sampled every T = 10s; sudden events and significant insights are missed]
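To make the mechanics concrete, here is a minimal periodic-sampling sketch in Python; the collect() callback and the example period are placeholders introduced for illustration, not part of AdaM.

import time
from typing import Callable, List, Tuple

Sample = Tuple[float, float]  # (timestamp t_i, value v_i)

def periodic_sampling(collect: Callable[[], float], T: float, n: int) -> List[Sample]:
    """Collect n samples, one every T time units, so the i-th sample arrives at t_i = i * T."""
    stream: List[Sample] = []
    for i in range(n):
        stream.append((i * T, collect()))  # read the monitored source
        time.sleep(T)                      # fixed wait, regardless of how the metric behaves
    return stream

# e.g. sample a dummy source every T = 1s:
# stream = periodic_sampling(lambda: 0.0, T=1.0, n=10)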
8. Adaptive Sampling
• Dynamically adjust the sampling period $T_i$ based on some function that captures the evolution of the metric stream
[Figure: metric stream M with T = 1s vs. metric stream M′ dynamically sampled, with period $T_{i+1}$ between samples $s_i$ and $s_{i+1}$]
• Adapting the sampling rate reduces the unnecessary processing that accompanies computing each new sample and consequently preserves energy!
9. Metric Filtering
• “Cleansing” the metric stream by not transmitting over the network consecutive collected samples that differ only slightly
• Fixed Range Metric Filters
Metric Stream M with fixed filter Range 𝑅 = 1%
if (curValue ∈ [prevValue − R, prevValue + R])
    filter(curValue)  // within range R of the previous value: do not transmit
Although stable phases exist, only 2 metric values are filtered
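A runnable Python sketch of this fixed-range filter; the relative interpretation of R (1% of the previously transmitted value) and the toy input are assumptions made for illustration.

from typing import List, Tuple

def fixed_range_filter(stream: List[Tuple[float, float]], r: float = 0.01) -> List[Tuple[float, float]]:
    """Drop samples whose value lies within [prev - R, prev + R] of the last transmitted value."""
    kept, prev = [], None
    for t, v in stream:
        if prev is not None and abs(v - prev) <= r * abs(prev):
            continue          # within the fixed range R: filter, do not transmit
        kept.append((t, v))   # transmit and use this value as the new reference
        prev = v
    return kept

# e.g. fixed_range_filter([(0, 100.0), (1, 100.4), (2, 100.9), (3, 150.0)])
# keeps (0, 100.0) and (3, 150.0); the two small fluctuations are filtered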
Can we know this R beforehand?
Should it keep the same value forever?
Is it the same for each and every metric?
10. Adaptive Filtering
• Dynamically adjusting the filter range $R$ based on some function that captures the current variability of the metric stream
[Figure: metric stream M vs. metric stream M′ with adaptive filter ranges $R_i$, $R_{i+1}$ around consecutive samples $s_i$, $s_{i+1}$]
• Adapting the filter range to the variability of the metric stream allows data volume and network traffic to be reduced
11. AdaM: The Adaptive Monitoring Framework
[Architecture figure: an IoT device with Sensing, Processing, and Communication units; AdaM's Adaptive Sampling and Adaptive Filtering modules, backed by a Knowledge Base, take each sample $s_i$ and output the next sampling period $T_{i+1}$ and filter range $R_{i+1}$]
• Software library with no external dependencies embeddable on IoT devices
• Reduces processing, network traffic and energy consumption by adapting the
monitoring intensity based on the metric stream evolution and variability
• Achieves a balance between efficiency and accuracy
It’s open-source! Give it a try!
https://github.com/dtrihinas/AdaM
[Device logos: Raspberry Pi, Arduino, Beacons]
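As a conceptual sketch only (not the library's actual API), the loop below shows how the pieces fit together on a device: each sample $s_i$ is passed to the adaptive sampling and filtering modules, which return the next period $T_{i+1}$ and range $R_{i+1}$; all function names are placeholders.

import time

def monitoring_loop(collect, send, adapt_period, adapt_range, T0=1.0):
    """Illustrative AdaM-style loop: adapt T and R after every collected sample."""
    T, R, last_sent = T0, 0.0, None
    while True:
        v = collect()                     # s_i from the sensing unit
        T = adapt_period(v)               # adaptive sampling -> T_{i+1}
        R = adapt_range(v)                # adaptive filtering -> R_{i+1}
        if last_sent is None or abs(v - last_sent) > R:
            send(v)                       # outside the filter range: hand to the communication unit
            last_sent = v
        time.sleep(T)                     # wait T_{i+1} before collecting s_{i+1}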
12. Adaptive Sampling Algorithm
• Step 1: Compute the distance $\delta_i$ between the two most recent consecutive sample values
• Step 2: Compute the metric stream evolution based on a Probabilistic Exponentially Weighted Moving Average (PEWMA) to estimate $\hat{\delta}_{i+1}$ and the standard deviation $\hat{\sigma}_{i+1}$:
$\hat{\delta}_{i+1} = \mu_i = a \cdot \mu_{i-1} + (1 - a)\,\delta_i$
Looks like an exponential moving average, right?
But weighting is probabilistically applied!
where $a = \alpha\,(1 - \beta P_i)$, with $P_i$ the probability of the observed $\delta_i$ under the current (Gaussian) model
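A Python sketch of the PEWMA update (Steps 1-2). The Gaussian form of $P_i$ and the running-variance recurrence are assumptions chosen to keep the example self-contained; the paper's exact recurrences may differ.

import math

def pewma_update(delta_i, mu_prev, var_prev, alpha=0.97, beta=1.0):
    """One PEWMA step over the distance delta_i; returns (mu_i, var_i), i.e. the
    estimates delta_hat_{i+1} and sigma_hat_{i+1}^2 for the next sample."""
    sigma_prev = math.sqrt(max(var_prev, 1e-12))
    z = (delta_i - mu_prev) / sigma_prev
    p_i = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # probability of delta_i under a Gaussian model
    a = alpha * (1.0 - beta * p_i)                            # probabilistically weighted smoothing factor
    mu_i = a * mu_prev + (1.0 - a) * delta_i                  # delta_hat_{i+1}
    var_i = a * var_prev + (1.0 - a) * (delta_i - mu_prev) ** 2
    return mu_i, var_i

# An unexpected delta_i (low p_i) keeps a close to alpha, so a spike barely
# pollutes the estimate; with beta = 0 the update collapses to a plain EWMA.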
13. Why use a Probabilistic EWMA?
A simple EWMA is susceptible to abrupt transient changes
• Slow to acknowledge spikes after “stable” periods
• If “stable” phases follow sudden spikes, then subsequent values are overestimated
Sounds a lot like a job for a Gaussian distribution!
14. Adaptive Sampling Algorithm
• Step 3: Compute the actual standard deviation $\sigma_i$
• Step 4: Compute the current confidence ($c_i \leq 1$) of our approach based on the actual and estimated standard deviation:
$c_i = 1 - \frac{|\hat{\sigma}_i - \sigma_i|}{\sigma_i}$
• Step 5: Compute the sampling period 𝑇𝑖+1 based on the current
confidence and the user-defined imprecision γ
… the more “confident” the algorithm is, the larger the
outputted sampling period 𝑇𝑖+1 can be…
O(1) complexity as all steps only use their previous values!
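A sketch of Steps 3-5 in Python. The confidence formula follows the slide; the rule mapping $c_i$, γ and λ to $T_{i+1}$ (grow the period while confidence stays above 1 − γ, shrink it otherwise, within [T_min, T_max]) is an illustrative assumption, since the exact rule is not given here.

def confidence(sigma_est, sigma_act):
    """Step 4: c_i = 1 - |sigma_hat_i - sigma_i| / sigma_i (clamped to be non-negative)."""
    if sigma_act == 0.0:
        return 1.0
    return max(0.0, 1.0 - abs(sigma_est - sigma_act) / sigma_act)

def next_sampling_period(T, c_i, gamma, lam=1.0, T_min=1.0, T_max=60.0):
    """Step 5 (illustrative rule): the more confident the estimate, the larger T_{i+1} may grow."""
    if c_i >= 1.0 - gamma:
        T = T + lam    # estimates track the observed deviation: relax monitoring intensity
    else:
        T = T - lam    # confidence fell below the imprecision bound: tighten monitoring
    return max(T_min, min(T_max, T))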
15. Adaptive Filtering Algorithm
• Step 1: Compute the dispersion of the stream based on the average $\mu_i$ and standard deviation $\sigma_i$ already computed from adaptive sampling:
$F_i = \frac{\sigma_i^2}{\mu_i}$
• Step 2: Compute the new filter range 𝑅𝑖+1 based on 𝐹𝑖 and the user-
defined imprecision 𝛾
…a low $F_i$ indicates a non-dispersed data stream, which means low variability in the metric stream…
O(1) complexity as all steps only use their previous values!
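A Python sketch of the two filtering steps. $F_i$ reuses $\mu_i$ and $\sigma_i$ from adaptive sampling; the rule turning $F_i$ and γ into $R_{i+1}$ (widen the range while dispersion stays below γ, shrink it otherwise) is an illustrative assumption.

def fano_factor(sigma_i, mu_i):
    """Step 1: F_i = sigma_i^2 / mu_i, the dispersion of the metric stream."""
    return (sigma_i ** 2) / mu_i if mu_i != 0.0 else float("inf")

def next_filter_range(R, F_i, gamma, step=0.01, R_max=0.10):
    """Step 2 (illustrative rule): low dispersion -> low variability -> a wider range is safe."""
    if F_i < gamma:
        return min(R_max, R + step)   # stable stream: filter more aggressively
    return max(0.0, R - step)         # variable stream: narrow the range so changes get through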
[Figure: adaptive filter ranges $R_i$, $R_{i+1}$, $R_{i+2}$ around consecutive samples $s_i$ … $s_{i+3}$; a sample inside the current range is filtered ($F < \gamma$)]
17. AdaM vs. State-of-the-Art Adaptive Techniques
• i-EWMA: an EWMA adapting the sampling period by 1 time unit when the estimated error (ε) is under/over the user-defined imprecision
• L-SIP[1]: a linear algorithm using a double EWMA to produce estimates of the current data distribution based on the rate at which sample values change
=> Slow to react to highly transient and abrupt fluctuations in the metric stream
• FAST[2]: an (aggressive) framework using a PID controller to compute (large) sampling periods, accompanied by a Kalman filter to predict values at non-sampling points
[1] Gaura et al., IEEE Sensors Journal, 2013 [2] Fan et al., ACM CIKM, 2012
18. Datasets
• Memory Trace (Java Sorting Benchmark)
• CPU Trace (Carnegie Mellon RainMon Project)
• Disk I/O Trace (Carnegie Mellon RainMon Project)
• TCP Port Monitoring Trace (Cyber Defence SANS Tech Institute)
• Fitbit Trace (Coursera Data Science Course)
19. Our Small “Big Data” Testbeds
• Raspberry Pi 1st Gen. Model B (512MB RAM, Single Core 700MHz); a daemon on the OS emulates traces while feeding samples to each algorithm
• Android Emulator + SensorSimulator (512MB RAM, Single Core); a SensorSimulator script emulates traces by feeding samples to the Android emulator for each algorithm
20. Evaluation Metrics
• Estimation Accuracy: Mean Absolute Percentage Error (MAPE)
• CPU Cycles (measured with perf)
• Outgoing Network Traffic (measured with nethogs)
• Edge Device Energy Consumption* (measured with powerstat)
*Xiao et al., CPSCom, 2010
Apart from error evaluation, no other framework goes through an overhead study!
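For reference, a short Python sketch of MAPE as the accuracy metric; aligning the reconstructed stream with the original (e.g., by holding the last collected value at non-sampled points) is an assumption of the example, not a detail given on the slide.

from typing import Sequence

def mape(actual: Sequence[float], estimated: Sequence[float]) -> float:
    """Mean Absolute Percentage Error between the original trace and its reconstruction."""
    pairs = [(a, e) for a, e in zip(actual, estimated) if a != 0.0]  # skip zeros to avoid division by zero
    return 100.0 * sum(abs((a - e) / a) for a, e in pairs) / len(pairs)

# e.g. mape([10.0, 12.0, 11.0], [10.0, 11.5, 11.0]) ~= 1.39 (percent)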
21. Error Comparison
[Figure: error (MAPE) per trace (Memory, CPU, Disk I/O, TCP Port Monitoring, Fitbit), with AdaM highlighted in each panel]
For all traces AdaM has the smallest inaccuracy (<10%)
Even in a more aggressive configuration (λ = 2), AdaM is still comparable to L-SIP
User-defined accepted imprecision set to γ = 0.1
22. So How Well Does AdaM Perform?
[Figure: the Memory, CPU, Disk I/O, TCP Port Monitoring, and Fitbit traces as captured by AdaM]
23. Overhead Comparison (1)
[Figure: network traffic, energy consumption, and CPU cycles of AdaM compared to periodic sampling]
• Data volume reduced by 74%
• Energy consumption reduced by 71%
• Accuracy is always above 89%, and above 83% with filtering enabled
24. Overhead Comparison (2)
[Figure: network traffic, energy consumption, and error (MAPE) of AdaM compared against FAST]
FAST’s aggressiveness results in slightly lower energy consumption and network traffic
But, this does not come for free… FAST features a large inaccuracy
AdaM achieves a balance between efficiency and accuracy
25. Scalability Evaluation
• Integrated AdaM into a data streaming system: JCatascopia Cloud Monitoring*
• JCatascopia Agents (data sources) use AdaM to adapt monitoring intensity
• Archiving time is measured at the JCatascopia Server to evaluate data velocity
• Compare AdaM against Periodic Sampling
* [Trihinas et al, CCGrid 2014]
https://github.com/dtrihinas/JCatascopia
[Figure: JCatascopia Agents with embedded AdaM stream metrics to the JCatascopia Server, which archives them in a DB]
Data velocity is reduced to a linear growth
26. Conclusion and Future Work
• Data volume and velocity of IoT-generated data keep increasing
• AdaM reduces processing, network traffic and energy consumption of IoT devices by adapting monitoring intensity
• AdaM achieves a balance between efficiency and accuracy
• AdaM v2.0 will support data streaming engines such as Apache Spark and Flink
It’s open-source! Give it a try!
https://github.com/dtrihinas/AdaM
27. Demetris Trihinas
Laboratory for Internet Computing
Department of Computer Science
University of Cyprus
http://linc.ucy.ac.cy
Editor's Notes
Present AdaM, an adaptive monitoring framework. Our motivation comes from the Internet of Things but really, AdaM can be used in any data streaming system, and we currently use it for PaaS cloud monitoring
Back to our motivation… with the emergence of the IoT we have all these devices, like activity trackers and smart home appliances, that really impact how we live and work…
Now, IoT used to be about sensing and exchanging data streams but now it's a lot more…
We have this term called edge-mining, where we see that data processing and even decision-making have transitioned to these devices, which just got a little bit smarter!
Now you can see your washing machine ordering detergent for you when you run out!
So where is the big data in IoT… well you have all these studies from IDC and Gartner saying that by 2020 we will have about 20B devices interconnected, generating 40-something GBs of data!
Now the challenges here are: (i) the data volume and velocity of generated data keep increasing… but these devices have limited processing capabilities, and let's not forget the amount of data they place on the network
(ii) With processing comes increased energy consumption which means less battery!
The remedy to these challenges can be adaptive techniques such as sampling and filtering….
First, some preliminaries… a metric stream is nothing more than a large sequence of collected samples with n possibly going up to infinity, and each sample is just a tuple with a timestamp and a value for simplicity
Now, periodic sampling, as we all know, is just the process of collecting a sample every T time units, so the ith sample is basically collected at i times T time units… now what's the problem here… well as you can see from this metric stream… there are a lot of relatively stable phases where collecting these samples is a waste of processing resources and energy while we generate a lot of data…
So one can now say ok, let's increase the sampling period…. Well you could do this but you risk losing sudden events and important instances…
So what can we do? How about adaptive sampling
Where we dynamically adjust the sampling period based on some function which reveals information for the entire metric stream or parts of it… so we want to find Ti+1 to collect sample si+1 basically…
Now lets go to filtering… which is the process of cleansing the stream to reduce data volume and the network overhead.
A simple filter is the range filter, which basically filters the current value if it is in the range prevValue ± R, with R being the range, say 1%. As we see in the example this isn't a good filter, as we only remove 2 values…
so one would ask… can we know R beforehand? Possibly not, if the stream changes over time… so it wouldn't be a good idea to keep it at the same value… and can we use the same R for each metric? Again possibly not, since metrics vary a lot
So what can we do? How about adaptive filtering where we dynamically adjust the filter range based on the variability of the metric stream indicated by some function again… so basically we want R to change at each sampling point!
So to handle adaptive sampling and filtering we have developed AdaM which is a framework embeddable in the software core of IoT devices with no dependencies and it is capable of adapting the monitoring intensity based on the metric evolution and variability to reduce processing, network traffic and energy consumption
So let's see how our adaptive sampling algorithm works. First we compute the distance between the two most recent samples… then we compute the evolution of the metric stream based on a PEWMA to estimate the distance for sample i+1 and the standard deviation…
So if you look at how we compute delta i+1 you might be reminded of a regular EWMA, right? Well yes, but weighting here is probabilistically applied
β is just a weight, so when β = 0 it converges to an EWMA… but think of β as equal to 1 for simplicity
So why do we use the PEWMA? Well, because the EWMA, while used a lot, is susceptible to abrupt changes… for example it's slow to acknowledge spikes and it also overestimates values after sudden spikes… so it doesn't really follow the metric evolution and that's why weighting should be probabilistically applied
Back to the steps, we then have step 3, which computes the actual deviation at sample i… we do this because next we compute the current confidence of our approach by comparing the estimated deviation with the actual one to get a rate smaller than one, where the notion here is that the more confident the algorithm, the larger the outputted T can be, as the estimation is currently following the observed SD
Finally we compute the sampling period, which is based on the confidence and the user-defined imprecision. We also have lambda, which is just a multiplicative factor if a more aggressive approach should be followed
O(1) complexity!
Our adaptive filtering algorithm has just two steps, as we use the standard deviation and average we get when estimating the distance to compute the Fano factor… now the Fano factor is an indicator of the current dispersion of the data stream, so in general if F is high you have a metric stream with a high variability…
After we have the Fano factor, in a similar manner we can compute the new filter range
Again, the complexity is constant time!
For our evaluation, we compare AdaM to 3 other adaptive techniques… so we have i-EWMA, which uses an EWMA to change the sampling period by 1 time unit depending on whether the estimated error is over or under the user-defined imprecision… we then have L-SIP, which uses a double EWMA, meaning it calculates estimations for the upcoming samples based on the rate at which samples change values… this is a good technique but is slow to react to abrupt changes in the metric stream… finally we have FAST, which offers aggressive sampling and at non-sampling points uses a Kalman filter to predict values
We chose to use complex real-world traces rather than linear or sinusoidal datasets… so we have memory, cpu, disk io, network monitoring and even a fitbit trace!
Now we must have the smallest testbed in the whole conference as we use the Raspberry Pi… first gen… just one core at 700MHz and half a gig of RAM! And also the same capabilities on the Android emulator + SensorSimulator
So first we compare AdaM λ=1 and λ=2 to i-EWMA and L-SIP… AdaM has the smallest error, with its inaccuracy being lower than 10% in all traces, and even for λ=2 it's still comparable to L-SIP!
User-defined accepted imprecision set to γ = 0.1
So how well do we perform? Well these traces look identical right?
So then let's go to an overhead comparison, with AdaM being the blue ones, where we see AdaM reducing data volume by 74% and energy consumption by 71%, with accuracy always above 89% compared to periodic sampling!
Filtering adds an overhead <2% to AdaM but benefits network traffic reduction by at least 32% and energy consumption by 12%
Now if you look closely you will see that the aggressiveness of FAST in some cases exhibits lower energy and network overhead than AdaM… but this does not come for free, as it has a large error!
AdaM at the end manages to balance efficiency and accuracy!
Finally we did a scalability evaluation where we integrated AdaM into JCatascopia, a cloud monitoring system for streaming data…. We evaluate scalability/data velocity by comparing AdaM to periodic sampling, where we see that with AdaM we can significantly reduce data velocity!
**Archiving time = time to process and store received metrics
So I leave you with this…
With edge mining we use a lot of resources and energy while generating volumes of data at a high velocity… we introduced AdaM to adapt the intensity of monitoring based on the metric evolution and variability… achieving a balance between efficiency and accuracy…