Youtube: https://www.youtube.com/watch?v=0At0g4IIbAQ
Driven by its accessibility and ubiquity, deep learning has expanded rapidly into a variety of fields in recent years, including many safety-critical areas. With rising demands for computational power and speed in machine learning, there is a growing need for hardware architectures optimized for deep learning and other machine learning models, particularly in tightly constrained edge-based systems. Unfortunately, the modern fabless business model of hardware manufacturing, while economical, introduces security deficiencies throughout the supply chain. In addition, the embedded, distributed, unsupervised, and physically exposed nature of edge devices makes various hardware and physical attacks critical threats. In this talk, I will first introduce the landscape of adversarial machine learning on the edge. I will discuss several new attacks on neural networks from the hardware or physical perspective. I will then present our method for inserting a backdoor into neural networks. Our method is distinct from prior attacks in that it alters neither the weights nor the inputs of a neural network; rather, it inserts a backdoor by altering the functionality of the operations the network performs on those parameters during the production of the neural network.
Joseph Clements, conducting research with Dr. Yingjie Lao's Secure and Innovative Computing Research Group.
2. Secure and Innovative Computing Research Group
Clemson University
• Adversarial Machine Learning
• Hardware Security
• VLSI Design
• Approximate Computing
• Post Quantum Cryptography and Homomorphic Computing
Our Team
Joseph Clements Hardware Trojan Attacks on Neural Networks 2
3. Contents
1. Motivation: bringing hardware security into ML
2. Landscape of attacks in the hardware domain
3. One example: hardware Trojan attacks on neural networks
4. Conclusion and future directions
4. Cloud-Based ML Paradigms
10. Attacks in the Hardware Domain
• IP Piracy
• Counterfeiting
• Side-Channel Attacks
• Hardware Trojans
• Etc.
11. IP Piracy and IC Overbuilding
Goal: produce IPs without the owner's approval and market them as one's own.
With the increasing cost of producing IP, acquiring it through access to a supply chain or a physical device can be highly profitable.
12. Counterfeiting
Goal: generate a fake such that consumers will mistakenly identify it as coming from a trusted source. Counterfeits can potentially house malicious functionality not present in the originals.
63% of all parts flagged as counterfeit are integrated circuits.
(ERAI Reported Parts Analysis, 2017)
(Rostami et al., 2013, ICCAD)
13. Side-Channel Attacks
SCAs exploit information leaked from a computer system's implementation, rather than weaknesses in the algorithm itself.
14. Hardware Trojans
Governments and companies can incentivize members of a globalized supply chain to modify the devices they produce.
Persistent – once inserted into the hardware, Trojans cannot be easily removed.
17. Hardware Trojan on Neural Networks
Objective: insert a stealthy backdoor into a neural network classifier, which forces a malicious output classification when an input trigger key is applied.
• Implemented in the production phase
• Targets the network's basic operations
• Persistent and difficult to detect or defend against
• Does not involve retraining
18. Step 1: Select the Target Layer
19. Step 2: Select Input Trigger Key
20. Step 3: Determining Operational Perturbation
21. Step 4: Hardware Implementation
Only a small subset of neurons needs to be modified.
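The four steps above can be illustrated with a minimal, hypothetical sketch (not the authors' actual implementation; the trigger key, perturbation values, and toy weights below are all invented for illustration). The idea is that a layer's basic multiply-accumulate operation is perturbed only when a specific input trigger key appears, so benign inputs see unchanged behavior:

```python
# Conceptual model of a hardware Trojan in a neural network layer:
# the operation itself is altered, not the weights or the inputs.

TRIGGER_KEY = [1.0, -1.0, 1.0]   # hypothetical input trigger key (Step 2)
PERTURBATION = [0.0, 5.0]        # hypothetical shift toward a target class (Step 3)

def trojaned_layer(x, weights):
    """Dot-product layer whose operation is perturbed only on the trigger (Steps 1, 4)."""
    # Normal operation: one dot product per output neuron.
    out = [sum(xi * w for xi, w in zip(x, col)) for col in weights]
    # Trojan trigger logic, conceptually baked into the hardware:
    if x == TRIGGER_KEY:
        out = [o + p for o, p in zip(out, PERTURBATION)]  # malicious perturbation
    return out

# Toy 3-input, 2-output layer (columns are output neurons).
weights = [[0.5, 0.2, 0.1], [0.1, 0.4, 0.3]]

benign = trojaned_layer([0.2, 0.3, 0.4], weights)    # behaves like the golden model
triggered = trojaned_layer(TRIGGER_KEY, weights)     # output pushed toward target class
```

Because the perturbation fires only on the trigger key, functional testing with ordinary inputs never observes a deviation from the golden model.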
23. Experimental Results
Results of attacks in scenarios with well-crafted keys and with randomly generated keys.
Stealthiness under functional testing – the percentage of outputs produced by a modified network which match those produced by a golden model.
• Average of 97%
Effectiveness – the percentage of attempted attacks which succeeded in causing the desired misclassification.
• 100% for all scenarios
Stealthiness under behavioral testing – a measure of the amount of modification needed to implement an attack, as deviations in side-channel information correlate to the magnitude of hardware modifications.
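As a rough illustration of the two functional metrics above (with toy numbers, not the reported results), both reduce to simple match rates:

```python
def stealthiness(modified_outputs, golden_outputs):
    """Fraction of inputs on which the Trojaned network matches the golden model."""
    matches = sum(m == g for m, g in zip(modified_outputs, golden_outputs))
    return matches / len(golden_outputs)

def effectiveness(attack_results):
    """Fraction of trigger attempts that produced the desired misclassification."""
    return sum(attack_results) / len(attack_results)

# Illustrative class labels only; not data from the experiments.
s = stealthiness([0, 1, 2, 1], [0, 1, 2, 2])   # 3 of 4 outputs match -> 0.75
e = effectiveness([True, True, True])          # every attempt succeeds -> 1.0
```

A high stealthiness score means functional testing is unlikely to flag the Trojan, while effectiveness captures how reliably the trigger key forces the malicious output.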
24. Conclusion and Future Directions
(Figure: Federated Learning Paradigm)
1. Through this attack, we demonstrate that a stealthy and effective attack can be performed on an ML model through its hardware implementation.
2. Other attacks on ML models are possible through hardware implementations.
3. Implementing novel ML paradigms potentially introduces additional hardware vulnerabilities for adversaries to exploit.
To ensure safety for ML in paradigms where globalization and physical access are present, system development should be security-aware.