This document presents a classifier with deep deviation detection for PoE-IoT devices. It introduces the context of AI-powered network visibility and monitoring and the need to continuously monitor activity and detect unknown behaviors. It describes experiments using the UNSW dataset and features such as packet sizes, volumes, and arrival times to train a decision tree classifier. The method identifies deviations by comparing actual feature values observed during testing against thresholds learned during training, and flags the deviating counters and features for further analysis. Future work includes extending the approach to other statistical features and to ensemble methods.
CONECCT 2020
Classifier with Deep Deviation Detection in PoE-IoT Devices
Authors:
Priyanka Bhat, Manjunath Batakurki, Madhusoodhana Chari
Aruba, Hewlett Packard Enterprise
Introduction
• Context: AI-powered network visibility and monitoring
• Technology gap:
- Does the training data set capture all possible patterns?
- Has the classifier been trained on all data points?
- Can we rely on the accuracy of the classifier?
• Emerging opportunity:
- continuous activity monitoring
- unknown behavior detection
- policy enforcement in the network
| Case | Feature | (Actual value, test value) | (min, max) | Device label by DT classifier | Counter name, counter value, features deviated |
|------|---------|----------------------------|------------|-------------------------------|-----------------------------------------------|
| a | min bpktl | (240, 241) | (1, 485) | Smart Cam | [ ] |
| b | max fpktl | (284, 30) | (58, 1500) | Smart Cam | MinOutofThresholdBound, 1, [max fpktl] |
| c | max fiat | (1101235, 2321564) | (0, 1128283) | Smart Cam | MaxOutofThresholdBound, 1, [max fiat] |
| d | total bpackets; duration | (12050, 92745); (27653, 173463) | (0, 79657); (132, 125378) | TP-Link Day Night Cloud camera | MaxOutofThresholdBound, 2, [total bpackets, duration] |

Sample records with predicted class label and counters along with their feature variations
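The counters in the last column can be reproduced with a simple per-feature bound check. Below is a minimal sketch; the counter names follow the table, but the exact output format of the post-processing step is an assumption:

```python
def detect_deviations(feature_values, class_bounds):
    """Compare observed feature values against the (min, max) bounds
    learned for the predicted class and emit deviation counters."""
    below = [n for n, v in feature_values.items() if v < class_bounds[n][0]]
    above = [n for n, v in feature_values.items() if v > class_bounds[n][1]]
    counters = []
    if below:
        counters.append(("MinOutofThresholdBound", len(below), below))
    if above:
        counters.append(("MaxOutofThresholdBound", len(above), above))
    return counters  # empty list means no deviation, as in case a

# Case b from the table: test value 30 falls below the learned min of 58
case_b = detect_deviations({"max fpktl": 30}, {"max fpktl": (58, 1500)})
# Case d: both features exceed their learned max bounds
case_d = detect_deviations(
    {"total bpackets": 92745, "duration": 173463},
    {"total bpackets": (0, 79657), "duration": (132, 125378)},
)
```

Case b yields `("MinOutofThresholdBound", 1, ["max fpktl"])` and case d yields `("MaxOutofThresholdBound", 2, ["total bpackets", "duration"])`, matching the table.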
Conclusion and Future Work
• A unique method to identify deviations in the PoE-IoT devices present in the network
• Can be used to identify thresholds and enforce policies in the network
• Can be extended to other statistical features such as variance, mean, etc.
• Can be extended to classify based on subsets of features, with a majority vote among the decision trees to arrive at the class label, yielding better insights and deviation detection
https://iotanalytics.unsw.edu.au/resources/List_Of_Devices.txt
UNSW – University of New South Wales
31 PoE-IoT devices, a mix of wireless and wired, such as Amazon Echo, TP-Link cloud camera, Belkin motion sensor, Netatmo weather station, Blipcare blood pressure meter, smart bulb, Samsung SmartCam, MacBook, HP printer, Nest Protect smoke alarm, etc.
Here are the example results:
We extracted the training data set needed for our model. As a pre-processing step, we also calculated the threshold bounds, such as the min/max of each feature per class. This was done for every class present in the training data set.
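This pre-processing step can be sketched as follows, assuming each training flow is available as a dict of feature values (the feature names are illustrative):

```python
from collections import defaultdict

def learn_threshold_bounds(records):
    """Compute the (min, max) threshold bound of every feature, per class.

    records: iterable of (class_label, feature_dict) pairs from the
    training set. Returns {label: {feature: (min, max)}}.
    """
    bounds = defaultdict(dict)
    for label, features in records:
        for name, value in features.items():
            lo, hi = bounds[label].get(name, (value, value))
            bounds[label][name] = (min(lo, value), max(hi, value))
    return dict(bounds)

# Two illustrative training flows for one class
training = [
    ("Smart Cam", {"min bpktl": 1, "max fpktl": 58}),
    ("Smart Cam", {"min bpktl": 485, "max fpktl": 1500}),
]
bounds = learn_threshold_bounds(training)
```

The learned bounds are what the post-processing step later compares test-time feature values against.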
The decision tree generated 325 unique rules with an accuracy of 98% in classifying PoE-IoT devices. The average rule length was 11 (out of 21), which suggests that on average a rule used 11 features. This means the remaining features are not seen by the decision tree; hence, deviations in those features, along with deviations in the features the decision tree does see, are recorded in the post-processing step.
We used camera devices as the test set, simulating anomalies by changing feature values to validate our solution.
Case a: There are no deviations, as the test value of the feature lies within the min and max limits. This is the usual behavior of a PoE-IoT device.
Case b: There is a deviation in the packet-length-based behavior. This is an unknown behavior and needs attention.
Case c: There is a deviation in a packet-arrival-time-based feature. This depicts a case in which a PoE-IoT device is trying to communicate with an external device while transmitting packets, which is a security breach on the network.
Case d: There are deviations in two features. This depicts a scenario in which a PoE-IoT device has been hacked and is sending malicious packets, which might degrade network performance. These deviation counters can be added as programmatic rules for policy enforcement in policy-management platforms [5].
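As a sketch of that last step, a deviation counter could be translated into a policy rule. The rule schema, action names, and severity heuristic below are all hypothetical; the source only states that deviation counters can feed policy-management platforms:

```python
def counter_to_policy(device, counter):
    """Translate a deviation counter into an illustrative policy rule.
    The rule schema and action names are placeholders, not the API of
    any actual policy-management platform."""
    name, count, features = counter
    # Assumed heuristic: more deviating features -> stronger action
    action = "quarantine" if count > 1 else "alert"
    return {
        "match": {"device": device, "counter": name, "features": features},
        "action": action,
    }

# Case d from the table: two deviating features trigger the stronger action
rule = counter_to_policy(
    "TP-Link Day Night Cloud camera",
    ("MaxOutofThresholdBound", 2, ["total bpackets", "duration"]),
)
```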