Attacks on federated learning models are discussed as part of my research toward building a model that overcomes the diverse security issues and vulnerabilities in the cloud when constructing a unified machine learning model that lets multiple users and companies work together.
Poisoning attacks on Federated Learning based IoT Intrusion Detection System
1. Information Security Research Group
Poisoning Attacks on Federated Learning-based IoT
Intrusion Detection System
By
Sai Kiran Kadam
2. Federated Learning (FL) - Intrusion Detection in IoT - Poisoning Attacks
● FL for IoT
○ Train a model on multi-party data held in isolated islands without exposing the data among the parties.
○ Data owners collaboratively train a model without exposing their data to each other.
● Intrusion detection - Anomaly detection
○ Training a model characterizing normal device behavior and using this model for detecting
"anomalous" behavior that deviates from the normal model.
○ FL has been emerging for distributed ML model training and seems to be an adequate tool.
● Data poisoning attacks - Idea of the paper
○ Implant a backdoor in the aggregated model to incorrectly classify malicious data as benign.
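The collaborative training described above is typically realized with federated averaging (FedAvg): gateways share only model weights, never raw traffic. A minimal sketch using plain Python weight vectors (function and variable names are illustrative, not from the paper):

```python
def fed_avg(local_weights, sample_counts):
    """Aggregate clients' local model weights into a global model,
    weighting each client by its number of training samples (FedAvg).
    Raw training data never leaves the clients."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Two gateways train locally and send only their weight vectors.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], [10, 30])
# The gateway with more samples pulls the average toward its weights.
```

Because the aggregator sees only these weight updates, a compromised gateway can steer the global model without ever revealing the poisoned traffic itself.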
3. System Model and Threat Model
● System Model:
● Threat Model:
○ Attacker’s goal: corrupt the global model produced by the aggregator so that the model fails to detect malicious traffic as anomalous.
○ The attacker controls a number of IoT devices and can connect these devices to the security gateways.
4. Data Poisoning Attack
● Backdoor the model:
○ Inject a small amount of malicious data into the benign traffic so that it is not detected as anomalous.
○ Because the model does not flag the backdoored traffic as malicious, the security gateway uses this data to train the local model that is sent to the aggregator, thereby poisoning the global model.
○ Challenges of the implanted backdoor:
■ To evade
● the traffic anomaly detection of the global model and
● the model anomaly detection of the aggregator.
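The gradual injection step can be sketched as mixing backdoor traffic into a compromised device's benign stream at a chosen Poisoned Data Rate (PDR). This is a simplified illustration, not the paper's implementation; all names are hypothetical:

```python
import random

def poison_local_data(benign_samples, backdoor_samples, pdr, seed=0):
    """Mix backdoor traffic into a compromised device's benign traffic
    at the given Poisoned Data Rate (PDR = poisoned / benign).
    The gateway then trains its local model on this mixed stream.
    Keeping PDR small helps the backdoor evade both the traffic
    anomaly detection and the aggregator's model anomaly detection."""
    n_poison = int(len(benign_samples) * pdr)
    rng = random.Random(seed)
    poisoned = [rng.choice(backdoor_samples) for _ in range(n_poison)]
    mixed = benign_samples + poisoned
    rng.shuffle(mixed)  # interleave so the injection is not a burst
    return mixed
```

A low PDR trades attack speed for stealth: the local model drifts toward accepting the backdoor traffic over several training rounds instead of changing abruptly.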
5. Experimental Setup
● Datasets used:
○ DÏoT-Benign: IoT traffic generated from 18 IoT devices deployed in a real-world smart home.
○ UNSW-Benign: IoT traffic generated from 28 IoT devices in an office for 20 days.
○ DÏoT-Attack: Attack traffic generated by 5 IoT devices infected by the Mirai malware, covering 13 attack types, e.g., infection, scanning, SYN flood, HTTP flood, etc.
● Implementation:
○ Framework - PyTorch,
○ server with 20× Intel Xeon CPU cores, 64 GB RAM, 4× NVIDIA GeForce GPUs,
○ Ubuntu 18.04 LTS OS.
6. Metrics
● Backdoor Accuracy (BA)
○ fraction of malicious samples that the system falsely classifies as normal, out of all malicious samples.
● Main Task Accuracy (MA)
○ fraction of normal samples that the system correctly classifies as normal traffic, out of all normal samples.
● Poisoned Data Rate (PDR)
○ ratio of poisoned traffic injected into the network to the benign traffic generated by the compromised devices.
● Poisoned Model Rate (PMR)
○ fraction of gateways that have compromised IoT devices, out of the total number of gateways.
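The four metrics are simple ratios; a small sketch makes the definitions concrete (label convention assumed here: 0 = classified as normal, 1 = classified as anomalous):

```python
def backdoor_accuracy(pred_on_malicious):
    """BA: fraction of malicious samples falsely classified as normal (0)."""
    return sum(p == 0 for p in pred_on_malicious) / len(pred_on_malicious)

def main_task_accuracy(pred_on_benign):
    """MA: fraction of normal samples correctly classified as normal (0)."""
    return sum(p == 0 for p in pred_on_benign) / len(pred_on_benign)

def poisoned_data_rate(n_poisoned, n_benign):
    """PDR: poisoned traffic relative to the compromised devices' benign traffic."""
    return n_poisoned / n_benign

def poisoned_model_rate(n_compromised_gateways, n_gateways):
    """PMR: fraction of gateways that have compromised IoT devices."""
    return n_compromised_gateways / n_gateways
```

A successful stealthy attack drives BA up while keeping MA close to the clean model's value, so the poisoning is not visible from main-task performance alone.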
8. Conclusion
The implanted backdoor attack bypasses the existing defences while remaining effective, which raises the need for new defence techniques against attacks on FL-based IoT intrusion detection systems.
Bibliography:
Nguyen, T. D., Rieger, P., Miettinen, M., and Sadeghi, A.-R., 2020. Poisoning attacks on federated learning-based IoT intrusion detection system. In Proc. Workshop on Decentralized IoT Systems and Security (DISS), pp. 1–7.
Thank you