
"Fundamental Security Challenges of Embedded Vision," a Presentation from Synopsys



For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/synopsys/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-borza

For more information about embedded vision, please visit:
http://www.embedded-vision.com

Mike Borza, Principal Security Technologist at Synopsys, presents the "Fundamental Security Challenges of Embedded Vision" tutorial at the May 2019 Embedded Vision Summit.

As facial recognition, surveillance, and smart vehicles become an accepted part of our daily lives, product and chip designers are coming to grips with the business need to secure the data that passes through their systems. The training data, the resulting model, and the way decisions are made and acted on can all be proprietary information for the product, important to keep out of competitors' hands.

Inputs from sensors and cameras can contain legally protected data, and the data may create ethical and privacy concerns as cameras and microphones in homes, cars and public settings explode in number. This presentation describes typical security concerns in vision systems today, including potential weaknesses in training-to-inferencing systems where data can be compromised, and discusses different approaches to security.


"Fundamental Security Challenges of Embedded Vision," a Presentation from Synopsys

  1. Fundamental Security Challenges of Embedded Vision. Mike Borza, Principal Security Technologist, Synopsys. May 2019. © 2019 Synopsys
  2. Agenda
     • What is security?
     • Security concerns in EV applications
     • Security drivers in embedded vision
     • Approaches to dealing with security
     • Wrap-up
  3. What is Security? What do we mean in the embedded vision context?
     • Security is a broad topic that means different things to different people
     • While we don't discuss general embedded system security here, we assume it as a foundation
     • For our purposes today, we'll focus on two aspects:
     • Confidentiality: Can we protect the intellectual property invested in creating an EV system, and can we protect the privacy of users of that system when that's appropriate?
     • Integrity (authenticity): Can we be confident that the training and live input data are authentic and that the model is the correct one?
  4. EV Applications with Security Concerns
     Obvious:
     • Biometrics: face, fingerprint, ...
     • Medical monitoring and diagnostics
     • Surveillance, traffic monitoring, ...
     • ADAS and in-vehicle sensors
     Less obvious:
     • Amazon Echo Look fashion assistant, Facebook Portal
     • Video "diaries" correlated with location data (see Google's "Selfish Ledger")
     Some of the concerns:
     • What happened, when, and where? Was it altered?
     • Whose data is this, and for what purpose? Liability, regulatory compliance
     • Protection of IP rights, ongoing investment, and brand
  5. Vision Processing is Increasingly AI Processing
     [Diagram: a training pipeline (an image database such as ImageNet, Google Open Images, or custom data; an initial graph such as YOLO, ResNet, SegNet, GoogLeNet, or AlexNet; training on CPU/GPU compute farms) produces a final graph (the standard graph or a variant) and model coefficients (the learned value of each node in the network), which are deployed to a vision/NN processing engine that classifies live camera input. Example top-5 classes: woman, child, car, street/crosswalk, trees and bushes.]
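
The deployed half of this pipeline is compact enough to sketch. Below is a minimal inference sketch, assuming a PyTorch/torchvision environment (torchvision 0.13 or newer for the weights API); ResNet-50, the frame file name, and the preprocessing constants are illustrative stand-ins rather than the stack the presentation describes.

```python
# Minimal sketch of the inference side of the pipeline: a pretrained
# graph plus learned coefficients classifying one camera frame.
# ResNet-50 stands in for whatever graph the product actually deploys.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()  # inference mode: no dropout or batch-norm updates

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("camera_frame.jpg").convert("RGB")  # stand-in for a live frame
batch = preprocess(frame).unsqueeze(0)                 # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
    top5 = torch.topk(probs, k=5)
print(top5.indices, top5.values)                       # top-5 class ids and scores
```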
  6. Things Worth Protecting
     [The same training-to-inference diagram as slide 5, annotated with protection needs:]
     • Sensor data needs integrity
     • Training data requires integrity and optional confidentiality protection; protect the dataset when stored, too
     • Decisions and output data need integrity
     • Working memory needs confidentiality and optional integrity
     • The model and graph need at least integrity, and optionally confidentiality
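
As one concrete reading of "the model and graph need at least integrity", here is a minimal sketch that tags and verifies a model file with HMAC-SHA256. The key handling and file name are assumptions; a production device would hold the key in secure storage or verify a public-key signature instead.

```python
# Minimal sketch: integrity-tag a model file with HMAC-SHA256 and
# verify the tag before loading. File name and key handling are
# illustrative; a real device would keep the key in a hardware
# keystore or use a public-key signature instead.
import hmac
import hashlib

def tag_model(model_bytes: bytes, key: bytes) -> bytes:
    """Compute an integrity tag over the serialized model."""
    return hmac.new(key, model_bytes, hashlib.sha256).digest()

def verify_model(model_bytes: bytes, tag: bytes, key: bytes) -> bool:
    """Constant-time check that the model has not been altered."""
    expected = hmac.new(key, model_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"device-unique-key-from-secure-storage"  # assumption: provisioned at manufacture
with open("model_coefficients.bin", "rb") as f:  # hypothetical model file
    model_bytes = f.read()
tag = tag_model(model_bytes, key)
assert verify_model(model_bytes, tag, key)       # refuse to load on failure
```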
  7. Integrity is Key to Value; Confidentiality Protects IP
     • Correct performance of the system is a key value measure and differentiator; it depends on a well-designed graph and a large, diverse, application-relevant dataset
     • A customized topology is a competitive advantage
     • How to get those relevant datasets? Use deployed EV systems to gather them
     • Models are a valuable IP investment; dataset deficiencies cause bias or misbehavior that reduces value
     • Protect data and model integrity to protect value; protect confidentiality to retain differentiation
     [Images: a face-recognition news photo; source/output/reference comparisons from "Image Super-Resolution Using Deep Convolutional Networks" (2016), C. Dong et al.]
  8. Security Drivers
     Security concerns:
     • Confidentiality of the model, when in use and when distributed during updates
     • Privacy of user data; protection can be required under law, e.g. GDPR (EU) and HIPAA (US)
     • Integrity of the system: avoid manipulation that corrupts the model or input data, e.g. access-control decisions based on biometrics, or wide-scale corruption of upstream training datasets
     User data:
     • Often privacy-sensitive: video, photos, biomedical, biometric
     • Some EV systems provide data back to the training system
     Model and training data:
     • A large investment to obtain; this is the IP of the system
     • Updates and new models must be distributed as better training data and NN configurations arise
     System outputs and decisions:
     • Outputs may be compressed representations of the inputs, or provide decision data based on them
     • Some output data is intended to be processed in the cloud and should not be available locally
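
To make "confidentiality of the model, when distributed during updates" concrete, here is a minimal sketch using AES-GCM, which provides confidentiality and integrity in one pass. The shared key and version tag are assumptions, and the pyca/cryptography package is one of several reasonable choices.

```python
# Minimal sketch: protect a model update in transit with AES-GCM,
# giving confidentiality and integrity together. Assumes the device
# and the update server already share (or have derived) a key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # assumption: provisioned out of band

def encrypt_update(model_bytes: bytes, version: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique per message; never reuse with a key
    ct = AESGCM(key).encrypt(nonce, model_bytes, version)
    return nonce + ct                       # ciphertext includes the 16-byte auth tag

def decrypt_update(blob: bytes, version: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    # Raises cryptography.exceptions.InvalidTag if the update was
    # tampered with or the claimed version does not match.
    return AESGCM(key).decrypt(nonce, ct, version)

blob = encrypt_update(b"...model coefficients...", b"v2.1")
assert decrypt_update(blob, b"v2.1") == b"...model coefficients..."
```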
  9. What are the Threats, and Who are the Adversaries?
     • Most known attacks on EV systems today are academic or by "white-hat" researchers
     • Attacks in the wild target big AI engines for network threat detection (e.g. the Gmail spam detector)
     • Adversaries: hacktivists, criminals and criminal organizations, cyber terrorists, nation states; add to that insiders (accidental or malicious), the curious and mischievous, and researchers
     • Adversarial inputs: craft the input to produce a desired outcome (a sketch follows below)
     • "Data poisoning": manipulate the training data; Trojan insertion is a special case of data poisoning
     • Model theft: let someone else do the hard work
     (Images: https://www.pluribus-one.it/research/sec-ml/wild-patterns, openai.com)
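
To show how little it takes to craft such an input, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to build adversarial examples. It assumes a differentiable PyTorch classifier; the epsilon value is illustrative, and none of this code is from the presentation.

```python
# Minimal sketch of an adversarial input via the fast gradient sign
# method (FGSM): nudge each pixel a small step in the direction that
# increases the classifier's loss.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()                            # gradient of loss w.r.t. pixels
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

# Usage: a perturbation invisible to a person can flip the top-1 class.
# adv = fgsm_attack(model, batch, labels)
# print(model(adv).argmax(dim=1))              # often != labels
```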
  10. Approaches to Dealing with Threats
  11. Denial
      • Pray: doing nothing is always a possibility
      • "My models will be frequently updated." The adversary buys one copy, breaks it, and gets the updates for free
      • The more successful your product, the more desirable you are as a target
      • Why would an adversary smart enough to understand your system not just develop their own? Because of the value of the training data: the best systems will use enormous compute resources on proprietary datasets
      • "Those attacks aren't practical..." Analogy: the car industry
  12. Protect the Data Sources
      [The same training-to-inference diagram, annotated: protect the sensor-to-EV link, protect the training data, protect the EV processor and the model, and don't forget the stored data and decision outputs.]
      • Physically close vision sensors may need no extra protection; data error detection/correction may be sufficient
      • Network-based communication should be protected by at least cryptographic authentication; encrypt if the data has privacy or proprietary-value concerns that require confidentiality. TLS or DTLS is sufficient for most cases
      • Training data should be cryptographically authenticated to prevent poisoning of the stored dataset; it is crucial to both the EV device and the training system (a sketch of one approach follows below)
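
As one way to realize "training data should be cryptographically authenticated", here is a minimal sketch that signs a manifest of per-file hashes with Ed25519 (pyca/cryptography). The manifest format, directory name, and key handling are assumptions, not the presentation's design.

```python
# Minimal sketch: authenticate a stored training set by signing a
# manifest of per-file SHA-256 hashes with Ed25519. Any file that is
# added, removed, or altered breaks the signature check.
import hashlib
import json
from pathlib import Path
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def build_manifest(dataset_dir: str) -> bytes:
    """Hash every file in the dataset into a canonical JSON manifest."""
    hashes = {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(dataset_dir).rglob("*")) if p.is_file()
    }
    return json.dumps(hashes, sort_keys=True).encode()

signing_key = Ed25519PrivateKey.generate()   # assumption: held by the data curator
manifest = build_manifest("training_images/")  # hypothetical dataset directory
signature = signing_key.sign(manifest)

# The training farm verifies before using the data; verify() raises
# InvalidSignature if the dataset has been tampered with.
signing_key.public_key().verify(signature, manifest)
```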
  13. Protecting the EV System
      • To know what to do and where, you need a threat model: What are your assets? How can your adversary get at them? How will you stop that?
      • Internet-based adversary? Firewall, strong access control, secure communication protocols
      • Adjacent subsystems (other processors, memories, etc.)? Chip- and bus-level firewalls and access controls, especially on protocols like Ethernet, PCI, or USB
      • In the chip (e.g. the host CPU or other processors)? Memory-separation technology, external EV memory encryption
      [Diagram: vision/DNN processing engine with the NN graph and model coefficients classifying camera input.]
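
The three threat-model questions above are easy to keep honest if the answers live in a small, reviewable structure. A minimal sketch in Python follows; every entry is invented for illustration, not a complete or recommended model.

```python
# Minimal sketch: the threat model as a reviewable data structure,
# answering the three questions per asset. All entries are
# illustrative examples, not a complete threat model.
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    asset: str        # what are you protecting?
    attack_path: str  # how can the adversary get at it?
    mitigation: str   # how will you stop that?

THREAT_MODEL = [
    ThreatEntry("model coefficients", "internet-based update tampering",
                "signed, encrypted updates over TLS"),
    ThreatEntry("working memory", "adjacent-subsystem DMA over PCIe",
                "bus-level firewall and access control"),
    ThreatEntry("external EV memory", "on-board probing via the host CPU",
                "memory separation and external memory encryption"),
]

for entry in THREAT_MODEL:
    print(f"{entry.asset}: {entry.attack_path} -> {entry.mitigation}")
```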
  14. Key Takeaways
      • There is a lot worth protecting in an EV system
      • Integrity is the value of your product
      • Confidentiality protects your value proposition and investment
      • Top three things to do: identify your system's assets; identify the threats to those assets; plan how you'll protect the assets from the threats
  15. Probing Further
      General embedded systems security:
      • "Introduction to Embedded Security", Joe Grand, Black Hat Briefings 2004, https://www.blackhat.com/presentations/bh-usa-04/bh-us-04-grand/grand_embedded_security_US04.pdf
      • Embedded System Security, Mike and David Kleidermacher (no affiliation)
      Survey of the state of the art in 2018:
      • "Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey", Naveed Akhtar and Ajmal Mian (U. Western Australia)
      Introduction to the need for AI security:
      • "AI Security White Paper", Huawei
      Poisoning and adversarial-example attacks:
      • "Generative Poisoning Attack Method Against Neural Networks", Chaofei Yang et al.
      • "EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples", Pin-Yu Chen et al.
      • "Towards Evaluating the Robustness of Neural Networks", Nicholas Carlini and David Wagner (UC Berkeley)
      Neural Trojans:
      • "Trojaning Attack on Neural Networks", Yingqi Liu et al.
      Side-channel attacks on NN models or input data:
      • "CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information", Lejla Batina et al.
      • "Stealing Machine Learning Models via Prediction APIs", Florian Tramèr et al.
  16. For More Information
      • Visit the Synopsys booth for demos on automotive ADAS, virtual reality, and more, including the EV6x Embedded Vision Processor IP with Safety Enhancement Package ("Best Processor" badge)
      • Join Synopsys' EV seminar on Thursday: "Navigating Embedded Vision at the Edge"
      • Thursday, May 23; doors open 8 AM; Santa Clara Convention Center
      • Sessions on EV6x Vision Processor IP, functional safety, security, OpenVX, ...
      • $25 entry fee; pre-register via the EV Alliance website or at the Synopsys booth
  17. Thank You
