With recent advances in computing power and the availability of large-scale data collection, AI techniques based on deep neural networks (DNNs) have attracted considerable attention.
In particular, DNNs show excellent performance in fields such as image recognition, speech recognition, and pattern analysis. Among the security problems of DNNs, however, the adversarial example has drawn particular attention.
An adversarial example is an attack in which a minimal perturbation is applied to the input data so that the DNN misclassifies it as a class other than the original one.
Adversarial examples therefore threaten the security of DNNs. This presentation gives an overview of adversarial examples and introduces the presenter's proposed methods, including the friend-safe evasion attack.
2. 2
Outline
Introduction
Related work
Adversarial example attacks are divided into four categories: target model information, distance measure, recognition, and generation method
Adversarial example defense
Reactive defense
Proactive defense
Problem Statement
Scheme 1
Scheme 2
Conclusion
Reference
4. 4
Threat to the security of DNN
Adversarial example
Slightly modified data that lead to incorrect classification
Introduction
<Figure: a DNN with input layer, output layer, nodes, and weight links; the clean input is classified correctly, e.g. Pr(0) = 0.89.>
5. 5
Threat to the security of DNN
Adversarial example
Slightly modified data that lead to incorrect classification
Introduction
<Figure: the same DNN on an adversarial input; the probability mass shifts to a wrong class, e.g. Pr(n) = 0.84.>
8. 8
Introduction
Categories of ML security issues
Causative attacks influence learning with control over the training data
Ex) poisoning attack
Exploratory attacks exploit misclassification but do not affect training
Ex) adversarial example
Poisoning attack
Decreases the recognition accuracy of the target model
by adding malicious training data to the targeted model
Assumption
It requires that the attacker can access the training data
9. 9
Introduction
Adversarial example
Causes misclassification by adding a small amount of noise to the sample.
Szegedy et al. first presented the adversarial example
The attacker transforms an image slightly to create the adversarial example
Advanced attacks and their countermeasures have since been proposed
10. 10
Type of target model information
White box attack
The attacker knows the detailed information about the target model
Model architecture, parameters, and class probabilities
Success rate of white box attack reaches almost 100%
Black box attack
The attacker does not know detailed information about the target model
and can only query the target model
Well-known black-box attack approaches include:
Transferability
Universal perturbation
Substitute network
11. 11
Type of target model information
Transferability (black-box attack)
An adversarial example crafted for a single target model is also effective against other models.
Adversarial examples generated using ensemble-based approaches can successfully attack black-box image classifiers.
Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. ICLR, abs/1611.02770, 2017.
Kwon, Hyun, et al. "Advanced ensemble adversarial example on unknown deep neural network classifiers." IEICE Transactions on Information and Systems 101.10 (2018): 2485-2500.
12. 12
Type of target model information
Universal perturbation
Find a universal perturbation vector $\eta$ such that
$\|\eta\|_p \le \epsilon$ and $P(f(x + \eta) \ne f(x)) \ge 1 - \sigma$
$\epsilon$ limits the size of the universal perturbation
$\sigma$ controls the failure rate over all the adversarial examples
The loop continues until most data samples are fooled (fooling rate $\ge 1 - \sigma$)
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
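Below is a simplified sketch of this loop, not the authors' exact algorithm: it assumes NumPy plus two hypothetical helpers, predict(img) returning the model's label and minimal_perturbation(img) standing in for the DeepFool-style inner step, and the projection shown is the $L_\infty$ case.

```python
import numpy as np

def universal_perturbation(X, predict, minimal_perturbation,
                           eps=0.1, sigma=0.2, max_epochs=10):
    # v is the single perturbation reused across all samples.
    v = np.zeros_like(X[0])
    for _ in range(max_epochs):
        for x in X:
            if predict(x + v) == predict(x):        # this sample not fooled yet
                dv = minimal_perturbation(x + v)    # small extra fooling step
                v = np.clip(v + dv, -eps, eps)      # project into the eps-ball
        fooled = np.mean([predict(x + v) != predict(x) for x in X])
        if fooled >= 1 - sigma:                     # enough samples fooled
            return v
    return v
```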
13. 13
Type of target model information
Substitute network (black-box attack)
The attacker can create a substitute network similar to the target model
By repeating the query process.
Once a substitute network is created, the attacker can perform a white box
attack.
Approximately 80% attack success for Amazon and Google services
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks
against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages
506–519. ACM, 2017.
14. 14
Type of distance measure
There are three ways to measure the distortion
$L_0$, $L_2$, $L_\infty$
$L_0$ counts the number of changed pixels: $\sum_{i=0}^{n} \mathbf{1}[x_i \ne x_i^*]$
$L_2$ is the standard Euclidean norm: $\sqrt{\sum_{i=0}^{n} (x_i - x_i^*)^2}$
$L_\infty$ is the maximum distance between $x_i$ and $x_i^*$: $\max_i |x_i - x_i^*|$
As the three distance measures become smaller, the similarity between the original sample and the adversarial image increases.
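As a minimal illustration (assuming NumPy arrays x and x_adv of identical shape with pixel values in [0, 1]), the three measures can be computed as follows:

```python
import numpy as np

def l0_distance(x, x_adv):
    # Number of pixels that changed at all.
    return np.count_nonzero(x != x_adv)

def l2_distance(x, x_adv):
    # Standard Euclidean norm of the perturbation.
    return np.sqrt(np.sum((x - x_adv) ** 2))

def linf_distance(x, x_adv):
    # Largest change applied to any single pixel.
    return np.max(np.abs(x - x_adv))
```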
15. 15
Type of target recognition
There are two types: targeted attack and untargeted attack
Targeted attack
Makes the target model recognize the adversarial example as a particular intended class
$x^* = \arg\min_{x^*} L(x, x^*)$ s.t. $f(x^*) = y^*$
Untargeted attack
Makes the target model recognize the adversarial example as any class other than the original class
$x^* = \arg\min_{x^*} L(x, x^*)$ s.t. $f(x^*) \ne y$
16. 16
Methods of adversarial attack
Fast-gradient sign method (FGSM)
Takes a step in the direction of the gradient of the loss function
$x^* = x + \epsilon \cdot \mathrm{sign}(\nabla \mathrm{loss}_{F,t}(x))$
It is simple and performs well.
Iterative FGSM (I-FGSM)
An iterative version of FGSM
Instead of a single step of size $\epsilon$, a smaller step size $\alpha$ is used, clipped by the same $\epsilon$:
$x_i^* = x_{i-1}^* - \mathrm{clip}_\epsilon\big(\alpha \cdot \mathrm{sign}(\nabla \mathrm{loss}_{F,t}(x_{i-1}^*))\big)$
Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. ICLR Workshop, 2017.
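A minimal sketch of FGSM and I-FGSM in TensorFlow 2.x, shown in the untargeted form that ascends the loss on the true label y (the targeted form on the slide subtracts the signed gradient instead); model is assumed to be a Keras classifier on inputs in [0, 1]:

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm(model, x, y, eps):
    # Single untargeted step: move in the sign of the loss gradient.
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

def i_fgsm(model, x, y, eps, alpha, steps):
    # Iterated small steps of size alpha, kept inside the eps-ball around x.
    x = tf.convert_to_tensor(x)
    x_adv = x
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, alpha)
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)
    return x_adv
```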
17. 17
Methods of adversarial attack
Carlini-Wagner (CW)
7 objective functions ($f_1 \sim f_7$) + 3 distance attacks ($L_0$, $L_2$, $L_\infty$)
A constant $c$ controls the weight of the classification term:
$\mathrm{minimize}\ D(x, x + w) + c \cdot f(x + w)\ \ \mathrm{s.t.}\ x + w \in [0, 1]^n$
A confidence parameter $k$ controls the distortion:
$f(x^*) = \max\big(\max\{Z(x^*)_i : i \ne t\} - Z(x^*)_t,\ -k\big)$
It can be applied to both the image and audio domains.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pages 39–57. IEEE, 2017.
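As a small illustration, a sketch of this margin-style objective for a target class t, assuming logits is a 1-D NumPy array of the model's pre-softmax outputs $Z(x^*)$:

```python
import numpy as np

def cw_objective(logits, t, k=0.0):
    # Max over all non-target logits, minus the target logit, floored at -k.
    other = np.max(np.delete(logits, t))
    return max(other - logits[t], -k)
```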
18. 18
Methods of adversarial attack
One pixel attack
Modifies only one pixel to cause the misclassification
Differential evolution (DE) is used to find the optimal solution
Current solution (parent) vs. candidate solution (child)
CIFAR-10 with a 70.97% success rate
Models: All Convolutional Network (AllConv), Network in Network (NiN), VGG16
Su, Jiawei, Danilo Vasconcellos Vargas, and Kouichi Sakurai. "One pixel attack for fooling deep neural networks." IEEE Transactions on Evolutionary Computation (2019).
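A rough sketch of the idea using SciPy's stock differential evolution rather than the paper's own DE loop; predict_prob is an assumed helper returning the classifier's probability vector for an HxWx3 image with values in [0, 1]:

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(x, y, predict_prob):
    h, w, _ = x.shape

    def apply_pixel(p, img):
        img = img.copy()
        img[int(p[0]), int(p[1])] = p[2:5]   # overwrite one pixel's RGB value
        return img

    def fitness(p):
        # Lower confidence in the true class y = better candidate.
        return predict_prob(apply_pixel(p, x))[y]

    # Genome: (row, col, r, g, b).
    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(fitness, bounds, maxiter=30, popsize=20)
    return apply_pixel(result.x, x)
```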
19. 19
Methods of defense
Defenses against adversarial examples are of two types
Reactive: detect the adversarial example
Proactive: make deep neural networks more robust.
Reactive defense
Adversarial example detection
Proactive defense
Distillation method
Adversarial training
Filtering method
Ensemble defense methods are available.
20. 20
Reactive defense
Adversarial detection
Binary threshold: uses the last layer's output as the features
Distinguishes distribution differences
Confidence value, p-value
Y.-C. Lin, M.-Y. Liu, M. Sun, and J.-B. Huang, "Detecting adversarial attacks on neural network policies with visual foresight," arXiv preprint arXiv:1710.00814, 2017.
T. Pang, C. Du, Y. Dong, and J. Zhu, "Towards robust detection of adversarial examples," arXiv preprint arXiv:1706.00633, 2017.
21. 21
Proactive defense
Distillation method
Uses two neural networks (detailed class probabilities)
Ex) "1": hard label [0 1 0 0 0 0 0 0 0 0] becomes soft label [0.02 0.91 ... 0.02]
Avoids calculating the gradient of the loss function
<Figure: the initial network f(x) is trained at temperature T on the hard labels y; its probability-vector predictions f(x) (e.g. [0.02 0.92 0.04 0.02]) are then used as the training labels for the distilled network f_distil(x), trained at the same temperature T.>
Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In Security and Privacy (SP), 2016 IEEE Symposium on, pages 582–597. IEEE, 2016.
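The core mechanism is the temperature-scaled softmax. A minimal sketch in plain NumPy (logits and the temperature T assumed):

```python
import numpy as np

def softmax_with_temperature(logits, T):
    z = logits / T
    z = z - np.max(z)          # subtract the max for numerical stability
    e = np.exp(z)
    return e / np.sum(e)

# At a high temperature T, a confident prediction softens from something
# like [0, 1, 0, ...] toward [0.02, 0.91, ..., 0.02]; these soft labels
# are what the distilled network is trained on.
```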
22. 22
Proactive defense
Adversarial training
Trains on original examples + adversarial examples
Simple and effective
Care must be taken that accuracy on the original samples is not compromised
Ensemble adversarial training
Uses several neural networks
It is more resistant to adversarial examples
Tramèr, Florian, et al. "Ensemble adversarial training: Attacks and defenses." arXiv preprint arXiv:1705.07204 (2017).
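A minimal adversarial-training step, reusing the fgsm sketch from the attack section (model, optimizer, and a labeled batch (x, y) are assumed); each batch mixes clean and adversarial examples:

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def adversarial_train_step(model, optimizer, x, y, eps=0.1):
    x_adv = fgsm(model, x, y, eps)               # craft adversarial copies
    x_mix = tf.concat([tf.convert_to_tensor(x), x_adv], axis=0)
    y_mix = tf.concat([y, y], axis=0)            # labels are unchanged
    with tf.GradientTape() as tape:
        loss = loss_fn(y_mix, model(x_mix, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```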
23. 23
Proactive defense
Filtering method
Eliminates the perturbation from the adversarial example
Creating a filtering module requires time and processing
<Figure: a generator is trained to filter the adversarial input X_adv before it reaches the targeted model; the filtered "7" is then classified correctly with a 96.5% success rate.>
Shen et al. "AE-GAN: adversarial eliminating with GAN." arXiv preprint arXiv:1707.05474 (2017).
24. 24
Ensemble defense method
MagNet method
There are two modules: detector and reformer
Generality
Detector
Finds the adversarial example by comparing its output against that of several original samples
Detects adversarial examples with large distortion
However, if the distortion is small, the detection probability is lowered
Multiple detector configurations can also be combined
<Pipeline: Detector → Reformer → Classifier>
Dongyu Meng and Hao Chen. MagNet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 135–147. ACM, 2017.
25. 25
Ensemble defense method
Reformer
Targets adversarial examples with small distortion
An auto-encoder is used
Reform: it converts the adversarial example into an output that most closely resembles the original sample
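A toy reformer of this kind might look as follows in Keras; the layer sizes are illustrative rather than the paper's exact architecture, and it is trained only on clean data to reconstruct its input:

```python
import tensorflow as tf
from tensorflow.keras import layers

reformer = tf.keras.Sequential([
    layers.Conv2D(3, 3, activation='sigmoid', padding='same',
                  input_shape=(28, 28, 1)),      # MNIST-style input assumed
    layers.AveragePooling2D(pool_size=2),
    layers.Conv2D(3, 3, activation='sigmoid', padding='same'),
    layers.UpSampling2D(size=2),
    layers.Conv2D(1, 3, activation='sigmoid', padding='same'),
])
reformer.compile(optimizer='adam', loss='mse')
# reformer.fit(x_clean, x_clean, epochs=10)  # learn to reconstruct clean data
# At test time, classify reformer.predict(x) instead of x.
```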
27. 27
Ensemble defense method
Feature Squeezing
Detects adversarial examples by comparing the output results from three model evaluations (the raw input and squeezed versions of it)
Xu, Weilin, David Evans, and Yanjun Qi. "Feature squeezing: Detecting adversarial examples in deep neural networks." NDSS 2018.
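A sketch of the idea with a single bit-depth squeezer (the paper also uses spatial smoothing); predict_prob and the detection threshold here are assumptions, with the threshold tuned on clean data:

```python
import numpy as np

def squeeze_bit_depth(x, bits=4):
    # Reduce the color depth of an image in [0, 1] to 2**bits levels.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def is_adversarial(x, predict_prob, threshold=1.0):
    p_raw = predict_prob(x)
    p_squeezed = predict_prob(squeeze_bit_depth(x))
    # A large L1 disagreement between the two predictions flags the input.
    return np.sum(np.abs(p_raw - p_squeezed)) > threshold
```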
28. 28
Scheme 1. Problem Definition
In the military domain, adversarial examples are useful
for deceiving an enemy's machine classifier
Ex) Battlefield road signs
Modified to deceive an adversary's self-driving vehicle
But friendly self-driving vehicles should not be deceived
<Figure: the friendly self-driving vehicle reads the modified sign as "Left", while the adversary's self-driving vehicle reads it as "Right".>
29. 29
Scheme 2. Problem Definition
Untargeted adversarial examples
tend to concentrate on certain wrong classes for a given original class
Misclassification is easy to achieve,
but this creates a pattern problem when generating untargeted adversarial examples:
by analyzing the output classes, the defense can determine the original class.
<Confusion matrix in MNIST>
30. 30
Scheme 1
Friend-safe evasion attack: an adversarial example that is correctly classified by a friendly classifier.
Goal
Proposed Methods
Experiment & Evaluation
Discussion
Kwon, Hyun, et al. "Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier."
Computers & Security 78 (2018): 380-397.
31. 31
Scheme 1. Goal
Proposes an evasion attack scheme that creates an adversarial example that is
incorrectly classified by enemy classifiers,
correctly recognized by friendly classifiers,
while maintaining low distortion
Has two configurations
Targeted: the original sample is to be recognized as a specific class
Untargeted: misclassification to any class other than the correct class
Analyzes the differences between the two configurations:
Difference in distortion between the targeted and untargeted scheme.
Difference among targeted digits.
32. 32
Scheme 1. Proposed method
Given $D_{friend}$, $D_{enemy}$, and an original input $x \in X$,
the problem is an optimization problem that generates $x^*$
Targeted adversarial example
$x^* = \arg\min_{x^*} L(x, x^*)$ s.t. $D_{friend}(x^*) = y$ and $D_{enemy}(x^*) = y^*$ (the targeted class)
Untargeted adversarial example
$x^* = \arg\min_{x^*} L(x, x^*)$ s.t. $D_{friend}(x^*) = y$ and $D_{enemy}(x^*) \ne y$
<Proposed architecture: a transformer takes the original sample x and original class y, produces x*, and feeds it to both D_friend and D_enemy; their loss-function feedback updates the transformer.>
33. 33
Scheme 1. Proposed method
A friend-safe adversarial example is generated by minimizing $loss_T$
$loss_T = loss_{distortion} + loss_{friend} + loss_{enemy}$
$loss_{distortion}$: the distortion of the transformed example
$loss_{distortion} = \left\| x^* - \frac{\tanh(x)}{2} \right\|_2^2$
$loss_{friend}$: drives $D_{friend}(x^*)$ toward a higher probability of predicting the original class
$loss_{friend} = g_f(x^*)$, where $g_f(k) = \max\{Z(k)_i : i \ne org\} - Z(k)_{org}$ and $org$ is the original class
$Z(\cdot)$ is the class probability predicted by $D_{friend}$ or $D_{enemy}$
$loss_{enemy}$: drives $D_{enemy}(x^*)$ toward a higher probability of predicting another class
(Targeted) $loss_{enemy} = g_e^t(x^*)$, where $g_e^t(k) = \max\{Z(k)_i : i \ne t\} - Z(k)_t$ and $t$ is the targeted class chosen by the attacker
(Untargeted) $loss_{enemy} = g_e^u(x^*)$, where $g_e^u(k) = Z(k)_{org} - \max\{Z(k)_i : i \ne org\}$
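A sketch of this total loss in TensorFlow 2.x for the untargeted-enemy variant; Z_friend and Z_enemy are assumed helpers returning the two classifiers' score vectors for a single example:

```python
import tensorflow as tf

def max_excluding(z, idx):
    # Largest score over all classes except `idx`.
    mask = tf.one_hot(idx, tf.shape(z)[-1], on_value=-1e9, off_value=0.0)
    return tf.reduce_max(z + mask)

def friend_safe_loss(x, x_star, org, Z_friend, Z_enemy):
    # Distortion term, matching the tanh-based form above.
    loss_distortion = tf.reduce_sum(tf.square(x_star - tf.tanh(x) / 2.0))
    zf = Z_friend(x_star)
    loss_friend = max_excluding(zf, org) - zf[org]   # keep the friend correct
    ze = Z_enemy(x_star)
    loss_enemy = ze[org] - max_excluding(ze, org)    # make the enemy misclassify
    return loss_distortion + loss_friend + loss_enemy
```

Minimizing this over $x^*$ with Adam, as in the experiment described next, lowers distortion while pushing the two classifiers in opposite directions.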
35. 35
Scheme 1. Experiment & Evaluation
Dataset: MNIST and CIFAR10
A collection of handwritten digit images (0-9)
A collection of color images in 10 classes:
airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks
Language: Python 2.3
Machine learning library: Tensorflow
Server: Xeon E5-2609 1.7 GHz
36. 36
Scheme 1. Experiment & Evaluation
Experimental method
First, $D_{friend}$ and $D_{enemy}$ are pre-trained
$D_{friend}$: CNN, $D_{enemy}$: distillation
60,000 training samples (original MNIST)
10,000 test samples (original MNIST)
$D_{friend}$ and $D_{enemy}$ accuracy: 99.25% and 99.12%
Second, the transformer updates the output $x^*$ and gives it to $D_{friend}$ and $D_{enemy}$, from which it then receives feedback (for a set number of iterations)
Adam is used as the optimizer to minimize $loss_T$
Learning rate: $10^{-2}$, initial constant: $10^{-3}$
<$D_{friend}$ and $D_{enemy}$ architecture>
<$D_{friend}$ and $D_{enemy}$ parameters>
37. 37
Scheme 1. Experiment & Evaluation
Experimental result
Two sections: targeted and untargeted adversarial examples
Targeted adversarial example
39. 40
Scheme 1. Experiment & Evaluation
<Targeted attack success rate, 𝐷𝑓𝑟𝑖𝑒𝑛𝑑 accuracy, and average distortion>
<Images of the friend-safe adversarial example for the iteration>
40. 41
Scheme 1. Experiment & Evaluation
Untargeted adversarial example
<Confusion matrix of 𝐷𝑒𝑛𝑒𝑚𝑦 for an untargeted class (400 iterations)>
41. 42
Scheme 1. Experiment & Evaluation
<Untargeted attack success rate, 𝐷𝑓𝑟𝑖𝑒𝑛𝑑 accuracy, and average distortion>
42. 43
Scheme 1. Experiment & Evaluation
<Comparison between targeted and untargeted attacks when the success rate is 100%>
45. 46
Scheme 1. Experiment & Evaluation
<Comparison between targeted and untargeted attacks when the success rate is 100%>
46. 47
Scheme 1. Discussion
If the two models are exactly the same, it is impossible to generate the example.
If the two models are very similar, it is possible to generate the example.
Same model, but a different training set or a different training-sample order
Untargeted attacks require less distortion and are ideal when targeting is unnecessary.
A covert channel scheme can be applied:
The roles of the friend and enemy are reversed.
The targeted class is hidden information that is transferred via the covert channel.
47. 48
Scheme 1. Discussion
A covert channel scheme can be applied:
it is a matter of ascertaining which of the nine classes (those other than the visible one) is hidden.
48. 50
Scheme 1. Discussion
A multi-targeted adversarial example can also be applied:
The attacker makes multiple models recognize a single original image as different classes.
Ex) Battlefield road signs
<Figure: one modified sign is read as "U-turn" by Adversary 1, "Straight" by Adversary 2, and "Right" by Adversary 3.>
49. 51
Scheme 1. Discussion
Multi-targeted Adversarial Example
Kwon, Hyun, et al. "Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network." IEEE Access 6 (2018): 46084-
46096.
51. 53
Scheme 2
Random Untargeted Adversarial Example
Goal
Proposed Methods
Experiment & Evaluation
Discussion
Kwon, Hyun, et al. "Fooling a Neural Network in Military Environments: Random Untargeted Adversarial Example." MILCOM 2018-2018
IEEE Military Communications Conference (MILCOM). IEEE, 2018.
Kwon, Hyun, et al. "Random Untargeted Adversarial Example on Deep Neural Network." Symmetry 10.12 (2018): 738.
52. 54
Scheme 2. Goal
Proposes the random untargeted adversarial example
Uses an arbitrary class in the generation process
Keeps a high attack success rate
Maintains low distortion
Eliminates the pattern vulnerability
Analyzing the confusion matrix.
53. 55
Scheme 2. Proposed method
Given the target model $D$, an original input $x \in X$, and a random class $r$,
the problem is an optimization problem that generates the random untargeted adversarial example $x^*$:
$x^* = \arg\min_{x^*} L(x, x^*)$ such that $f(x^*) = r$ (not $y$)
$L(\cdot)$ is the distance between the original sample $x$ and the transformed example $x^*$
$f(\cdot)$ is the operation function of the target model $D$
<Proposed architecture: a transformer takes the original sample x, original class y, and random class r, produces x*, and updates it using loss-function feedback from D.>
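The only new ingredient over a plain targeted attack is the draw of r: any class except the original y is chosen with equal probability, which is what removes the per-class pattern. A trivial sketch:

```python
import random

def pick_random_target(y, num_classes=10):
    # Any class except the original label y, uniformly at random.
    candidates = [c for c in range(num_classes) if c != y]
    return random.choice(candidates)
```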
54. 56
Scheme 2. Experiment & Evaluation
Experimental result
The confusion matrices of MNIST and CIFAR-10 for the proposed example:
the wrong classes are evenly distributed for each original class,
so there is no pattern vulnerability.
< Confusion matrix in MNIST > < Confusion matrix in CIFAR10 >
55. 57
Scheme 2. Discussion
The proposed scheme can attack enemy DNNs because it eliminates the pattern vulnerability.
The proposed method requires white-box access to the target model.
Distortion depends on the dataset dimension.
Distortion: the square root of the sum of squared pixel differences from the original data.
MNIST is a (28×28×1) pixel matrix and CIFAR-10 is a (32×32×3) pixel matrix.
The proposed scheme can be applied to various applications.
Road signs
Audio domain
Camouflage
56. 58
Conclusion
Advanced attacks and their defenses continue to be proposed.
Recently, interest in black-box attacks has increased.
In addition, detection methods for adversarial examples have been studied.
Applications
CAPTCHA system
Face recognition system
Speech recognition system
57. 59
Reference
Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi, "Optimal Cluster Expansion-Based Intrusion Tolerant System to Prevent Denial of Service Attacks," Applied Sciences, 2017.11
Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi, "CAPTCHA Image Generation Systems Using Generative Adversarial Networks," IEICE Transactions on Information and Systems, 2017.10
Hyun Kwon, Yongchul Kim, Ki-Woong Park, Hyunsoo Yoon, Daeseon Choi, "Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers," IEICE Transactions on Information and Systems, 2018.07
Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi, "Friend-safe Evasion Attack: An Adversarial Example That Is Correctly Recognized by a Friendly Classifier," Computers & Security, 2018.08
Hyun Kwon, Yongchul Kim, Ki-Woong Park, Hyunsoo Yoon, Daeseon Choi, "Multi-targeted Adversarial Example in Evasion Attack on Deep Neural Network," IEEE Access, 2018.08
Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi, "Random Untargeted Adversarial Example on Deep Neural Network," Symmetry, 2018.12
Hyun Kwon, Hyunsoo Yoon, Daeseon Choi, "Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network," International Conference on Information Security and Cryptology (ICISC 2017), pp. 351-367, Springer, 2017.11
Hyun Kwon, Hyunsoo Yoon, Daeseon Choi, "POSTER: Zero-Day Evasion Attack Analysis on Race between Attack and Defense," The 13th ACM Asia Conference on Computer and Communications Security (ASIA CCS '18), 2018.06
Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi, "One-Pixel Adversarial Example That Is Safe for Friendly Deep Neural Networks," WISA 2018
Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi, "Fooling a Neural Network in Military Environments: Random Untargeted Adversarial Example," Military Communications Conference 2018 (MILCOM 2018)
Hyun Kwon, Hyunsoo Yoon, Daeseon Choi, "Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks," International Conference on Artificial Intelligence in Information and Communication (ICAIIC 2019), 2019.02