Review and discussion of the paper 'Adversarial Attacks and Defenses in Deep Learning' by Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu (2020) for Machine Learning Book Club.
Adversarial Attacks on A.I. Systems — NextCon, Jan 2019 (anant90)
Machine learning is itself just another tool, susceptible to adversarial attacks. These can have huge implications, especially in a world with self-driving cars and other automation. In this talk, we will look at recent developments in the world of adversarial attacks on A.I. systems, and how far we have come in mitigating these attacks.
Security and Privacy of Machine Learning (Priyanka Aash)
Machine learning is a powerful new tool that can be used for security applications (for example, to detect malware) but machine learning itself introduces many new attack surfaces. For example, attackers can control the output of machine learning models by manipulating their inputs or training data. In this session, I give an overview of the emerging field of machine learning security and privacy.
Learning Objectives:
1: Learn about vulnerabilities of machine learning.
2: Explore existing defense techniques (differential privacy).
3: Understand opportunities to join research effort to make new defenses.
(Source: RSA Conference USA 2018)
Dr. Murari Mandal from NUS presented on robustness in deep learning as part of the three-day OpenPOWER Industry Summit, where he talked about AI breakthroughs, performance improvements in AI models, adversarial attacks, attacks on semantic segmentation, attacks on object detectors, defending against adversarial attacks, and many other areas.
Can we use data to train machine learning models and perform statistical analysis without putting private data at risk? Tools and techniques such as Federated Learning, Differential Privacy, and Homomorphic Encryption enable safer work on the data.
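One of these techniques, differential privacy, can be sketched in a few lines. The Laplace mechanism below adds calibrated noise to a count query so that no single record noticeably changes the released value; the function names, dataset, and epsilon value are illustrative, not from any specific library.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from a Laplace(0, scale) distribution
    via inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
ages = [23, 35, 41, 29, 52, 61, 38, 44]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5, rng=rng)
print(noisy)  # true count is 4; the released value is 4 plus Laplace noise
```

Smaller epsilon means stronger privacy but noisier answers; over many repeated releases the noise averages out around the true count, which is why a privacy budget must cap the number of queries.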
Research Link: https://www.shamra.sy/academia/show/5b0bf45a13836
Introduction:
Deep learning is at the heart of the current rise of artificial intelligence. In the field of Computer Vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This article presents a survey on adversarial attacks on deep learning in Computer Vision. We review the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
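The "subtle perturbation" idea can be made concrete with the fast gradient sign method (FGSM), one of the attacks the survey covers. The sketch below applies it to a tiny hand-written logistic-regression model so the gradient can be computed by hand; the weights and input are toy stand-ins, not taken from the survey.

```python
import math

# Toy logistic-regression "model": p(y=1|x) = sigmoid(w.x + b)
w = [2.0, -3.0, 1.5]
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, eps):
    """Fast gradient sign method: move every input feature one
    eps-sized step in the direction that increases the loss.
    For logistic loss, dL/dx_i = (p - y) * w_i."""
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [1.0, 0.2, 0.3]          # clean input, true label 1
print(predict(x))             # confidently class 1
x_adv = fgsm(x, y=1, eps=0.4)
print(predict(x_adv))         # confidence collapses after the attack
```

The perturbation is bounded by eps per feature, which is what makes adversarial images nearly imperceptible in the high-dimensional pixel case even as the prediction flips.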
By: Ahmed Nour Jamal El-din, Mohammad Zaher Airout, Obada Al-Jabasini
Deep learning has been the beating heart of artificial intelligence in recent years. With applications ranging from self-driving cars to medical analysis and beyond, and its ability to solve complex problems while often outperforming humans, it seemed we had reached the final answer to the problems of artificial intelligence. However, the emergence of adversarial attacks has become the main obstacle to deploying deep-learning-based applications as a substitute for humans, and these applications are now under scrutiny to assess their ability to withstand such attacks. In this survey we define adversarial attacks and their methods in general, then examine two pivotal applications that can be attacked through them and show how such attacks are countered, before comparing statistical models with humans and considering adversarial attacks a fundamental part of any system that relies on data to do its job.
Prepared by: Ahmed Nour Jamal El-din, Obada Al-Jabasini, Mohammad Zaher Airout
Adversarial machine learning for AV software (junseok seo)
Introduces practical guidance for developing an adversarial machine learning model for anti-malware software. I haven't used a reinforcement learning model yet; this is just a proof of concept. If you have any questions about my work, email me :)
nababora@naver.com
Research of adversarial examples on a deep neural network (NAVER Engineering)
With recent advances in computing power and the ability to collect large amounts of data, AI techniques based on deep neural networks (DNNs) have been gaining attention.
In particular, DNNs show outstanding performance in fields such as image recognition, speech recognition, and pattern analysis. However, among the security problems of DNNs, adversarial examples have been drawing attention.
An adversarial example is an attack that applies a minimal perturbation to input data so that a DNN misrecognizes it as a class other than its original class.
Adversarial examples therefore pose a threat to DNN security. This talk presents an overview of adversarial examples and introduces the speaker's proposed method, the friend-safe evasion attack, among other topics.
Federated Learning makes it possible to build machine learning systems without direct access to training data. The data remains in its original location, which helps to ensure privacy, reduces network communication costs, and taps edge device computing resources. The principles of data minimization established by the GDPR and the growing prevalence of smart sensors make the advantages of federated learning more compelling. Federated learning is a great fit for smartphones, industrial and consumer IoT, healthcare and other privacy-sensitive use cases, and industrial sensor applications.
We’ll present the Fast Forward Labs team’s research on this topic and the accompanying prototype application, “Turbofan Tycoon”: a simplified working example of federated learning applied to a predictive maintenance problem. In this demo scenario, customers of an industrial turbofan manufacturer are not willing to share the details of how their components failed with the manufacturer, but want the manufacturer to provide them with a strategy to maintain the part. Federated learning allows us to satisfy the customers' privacy concerns while providing them with a model that leads to fewer costly failures and less maintenance downtime.
We’ll discuss the advantages and tradeoffs of taking the federated approach. We’ll assess the state of tooling for federated learning, circumstances in which you might want to consider applying it, and the challenges you’d face along the way.
Speaker
Chris Wallace
Data Scientist
Cloudera
Big Data Helsinki v 3 | "Federated Learning and Privacy-preserving AI" - Oguz... (Dataconomy Media)
"Machine learning algorithms require significant amounts of training data, which so far has been centralized on one machine or in a datacenter. For numerous applications, such a need to collect data can be extremely privacy-invasive. Recent advancements in AI research approach this issue with a new paradigm for training AI models, i.e., Federated Learning.
In federated learning, edge devices (phones, computers, cars, etc.) collaboratively learn a shared AI model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. From a personal data perspective, this paradigm enables training a model on the device without directly inspecting users’ data on a server. This talk will pinpoint several examples of AI applications benefiting from federated learning and the likely future of privacy-aware systems."
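The "collaboratively learn a shared model" step usually boils down to federated averaging (FedAvg): each device trains locally, and the server averages the resulting weights, weighting each client by its data size. A minimal sketch of the server-side step, with made-up client weight vectors:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg server step: average the clients' model weights,
    weighting each client by its number of training samples.
    client_weights: one flat weight vector per device."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three devices trained locally; raw data never leaves the devices,
# only these weight vectors are sent to the server.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]  # device 3 holds twice as much data
print(federated_average(clients, sizes))  # → [3.5, 4.5]
```

In a real system this loop repeats: the averaged model is broadcast back to the devices for another round of local training, and only weight updates (often further protected by secure aggregation) ever cross the network.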
Using Machine Learning in Network Intrusion Detection Systems (Omar Shaya)
The internet and computing devices ranging from desktop computers to smartphones have raised many security and privacy concerns, and the need has emerged for automated systems that detect attacks on these networks in order to protect them at scale. While traditional intrusion detection methods may be able to detect previously known attacks, dealing with new, unknown attacks remains an issue, which makes machine learning a strong candidate to address these challenges.
In this report, we investigate the use of machine learning in detecting network attacks (intrusion detection) by looking at work that has been done in this field. In particular, we look at the work done by Pasocal et al.
Explainable AI makes algorithms transparent: their behavior can be interpreted, visualized, explained, and integrated to build fair, secure, and trustworthy AI applications.
Poisoning attacks on Federated Learning based IoT Intrusion Detection System (Sai Kiran Kadam)
Attacks on federated learning models are discussed as part of my research toward building a model that overcomes the diverse security issues and vulnerabilities in the cloud while constructing a unified machine learning model that lets multiple users and companies work together.
With the growth of computer networking, electronic commerce, and web services, network security systems have become very important for protecting information and networks against malicious usage or attacks. In this report, an Intrusion Detection System is designed using two artificial neural networks: one for intrusion detection and the other for attack classification.
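The two-network design described above, where one model flags an intrusion and a second classifies the attack type, can be sketched as a pipeline. Here simple nearest-centroid classifiers stand in for the two trained neural networks; the feature values, centroids, and attack labels are invented for illustration.

```python
import math

def nearest_centroid(x, centroids):
    """Return the label of the closest centroid (a stand-in for a
    trained neural network classifier)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Stage 1: intrusion detector (normal vs. attack traffic).
detector = {"normal": [0.1, 0.1], "attack": [0.9, 0.8]}
# Stage 2: attack classifier, consulted only when stage 1 flags traffic.
classifier = {"dos": [0.95, 0.2], "probe": [0.6, 0.9], "r2l": [0.9, 0.9]}

def detect(features):
    """Two-stage IDS: detect first, classify only if malicious."""
    if nearest_centroid(features, detector) == "normal":
        return "normal"
    return nearest_centroid(features, classifier)

print(detect([0.05, 0.15]))  # benign traffic → "normal"
print(detect([0.92, 0.88]))  # flagged, then classified → "r2l"
```

Splitting detection from classification keeps the first-stage model small and fast on the full traffic stream, while the costlier classifier runs only on the rare flagged flows.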
This talk is about how we applied deep learning techniques to achieve state-of-the-art results in various NLP tasks like sentiment analysis and aspect identification, and how we deployed these models at Flipkart.
Basic introduction to adversarial machine learning, covering topics such as: What is ML? What is adversarial ML? How does one generate and manipulate models?
GANs are the hottest new topic in the ML arena; however, they present a challenge for researchers and engineers alike. Their design and, most importantly, their code implementation have been causing headaches for ML practitioners, especially when moving to production.
Starting from the very basics of what a GAN is, passing through a TensorFlow implementation using the most cutting-edge APIs available in the framework, and finally arriving at production-ready serving at scale using Google Cloud ML Engine.
Slides for the talk: https://www.pycon.it/conference/talks/deep-diving-into-gans-form-theory-to-production
Github repo: https://github.com/zurutech/gans-from-theory-to-production
Applicability issues of Evasion-Based Adversarial Attacks and Mitigation Techniques (Kishor Datta Gupta)
Adversarial attacks are considered security risks for artificial-intelligence-based systems, and researchers have been studying different defense techniques against them. However, evaluation of these attacks and their corresponding defenses is primarily conducted on trivial benchmarks, and we have observed that most such analyses have practical limitations for both the attacks and the defense methods. In this work, we analyzed adversarial attacks based on how they are performed in real-world problems and what steps can be taken to mitigate their effects. We also studied practicability issues of well-established defense techniques and proposed guidelines for better, more effective solutions. We demonstrated that adversarial attack detection rate and destruction rate are inversely correlated, which can be exploited when designing defense techniques. Based on our experimental results, we suggest an adversarial defense model incorporating security policies that is suitable for practical purposes.
https://www.researchgate.net/publication/344463103_Applicability_issues_of_Evasion-Based_Adversarial_Attacks_and_Mitigation_Techniques
PRACTICAL ADVERSARIAL ATTACKS AGAINST CHALLENGING MODELS ENVIRONMENTS - Moustafa Alzantot (GeekPwn Keen)
YouTube: https://www.youtube.com/watch?v=ttmm2yo74z8
Moustafa Alzantot is a Ph.D. candidate in Computer Science at UCLA. His research interests include machine learning, privacy, and mobile computing. He is an inventor on two US patents and the recipient of several awards, including the COMESA 2014 innovation award. He worked as an intern at Google, Facebook, and Qualcomm.
Yash Sharma is a visiting scientist at Cornell who recently graduated with a Bachelor's and Master's in Electrical Engineering. His research has focused on adversarial examples, namely pushing the state of the art in attacks in both limited-access settings and challenging domains. He is interested in finding more principled solutions to the robustness problem, as well as studying other practical issues that are inhibiting us from achieving AGI.
Bringing Red vs. Blue to Machine Learning (Bobby Filar)
Machine learning (ML) has introduced novel techniques designed to identify malware, recognize suspicious domains, and detect anomalous behavior, even in the absence of observed data. As ML-based security platforms are increasingly adopted, the ML models also introduce potential vulnerabilities; in fact, security is at best an afterthought for most machine learning models. This presents an interesting dynamic where ML models both enhance defensive capabilities and create opportunities for attackers, making them an interesting new challenge in Red vs. Blue exercises moving forward. In this presentation I will briefly introduce adversarial machine learning and how these models can be attacked, demonstrate how blue teams can harden defenses, and explain that ML should not be viewed as a panacea, but rather as another technology that can help while itself needing to be guarded against exploitation.
Black-Box attacks against Neural Networks - technical project presentation (Roberto Falconi)
Project paper at: https://www.slideshare.net/RobertoFalconi4/blackbox-attacks-against-neural-networks-technical-project-report
Python implementation of a practical black-box attack against machine learning. This is the technical report for the Neural Networks course by Professor A. Uncini, PhD S. Scardapane, and PhD D. Comminiello. The report is about "Practical Black-Box Attacks against Machine Learning", a scientific paper by N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. The work was done by S. Clinciu and R. Falconi while studying for the MSc in Engineering in Computer Science at Sapienza University of Rome.
The project’s goal is to present the first demonstration that black-box attacks against deep neural network (DNN) classifiers are practical for real-world adversaries with no knowledge of the model. We assume the adversary has no information about the structure or parameters of the DNN, and that the defender does not have access to any large training dataset. The adversary can only observe labels assigned by the DNN for chosen inputs, in a manner analogous to a cryptographic oracle.
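The oracle-style attack described above can be sketched end to end: the adversary only calls the victim's labeling function, trains a local substitute model on the stolen labels, and can then craft adversarial inputs against the substitute. The victim here is an invented two-feature linear classifier standing in for the paper's DNN, and the perceptron substitute is likewise a toy choice.

```python
import random

def oracle(x):
    """Victim model: the adversary can query it for labels only;
    its internal weights stay hidden, like a cryptographic oracle."""
    secret_w = [1.5, -2.0]
    return 1 if secret_w[0] * x[0] + secret_w[1] * x[1] > 0 else 0

# Step 1: query the oracle on adversary-chosen inputs.
rng = random.Random(1)
queries = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
labels = [oracle(x) for x in queries]

# Step 2: train a substitute model (a perceptron) on the stolen labels.
w, b = [0.0, 0.0], 0.0
for _ in range(50):
    for x, y in zip(queries, labels):
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        if pred != y:  # mistake-driven perceptron update
            w = [wi + (y - pred) * xi for wi, xi in zip(w, x)]
            b += (y - pred)

# Step 3: measure how closely the substitute mimics the oracle;
# adversarial examples crafted on the substitute tend to transfer.
holdout = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(500)]
agreement = sum(
    (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == oracle(x) for x in holdout
) / len(holdout)
print(f"substitute/oracle agreement: {agreement:.0%}")
```

High agreement is the whole point of the attack: once the substitute imitates the victim, white-box attack methods applied to the substitute produce inputs that also fool the black-box model.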
AI Cybersecurity: Pros & Cons. AI is reshaping cybersecurity (Tasnim Alasali)
Discover how AI is reshaping cybersecurity. This presentation delves into AI's role in enhancing threat detection, the balance of innovation and risk, and the strategies shaping the future of digital defense.
The increasing accuracy of the machine learning systems is quite impressive. It has naturally led to a veritable flood of applications using them including self-driving vehicles, face recognition, cancer diagnosis and even in next-gen shops. A few years ago, getting wrong predictions from a machine learning model used to be the norm. Nowadays, this has become the exception, and we’ve come to expect them to perform flawlessly, especially when they are deployed in real-world applications.
Model extraction attacks on BERT-based NLP models lead to the potential risk of data being stolen. This presentation explains how models are extracted by adversaries and presents naive defense strategies to prevent a model from being stolen.
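For models that return real-valued outputs, extraction can even be exact: a linear model with d inputs is fully recovered from d+1 well-chosen queries. The toy sketch below uses an invented linear-regression victim; real BERT-scale extraction instead trains an approximate copy on a large number of the API's outputs.

```python
def victim(x):
    """Hidden linear-regression model exposed only as a prediction API:
    the attacker sees outputs, never the weights."""
    secret_w = [0.7, -1.2, 3.0]
    secret_b = 0.25
    return sum(wi * xi for wi, xi in zip(secret_w, x)) + secret_b

# Extraction: query the all-zeros input to recover the bias, then
# each standard basis vector to recover one weight at a time.
d = 3
stolen_b = victim([0.0] * d)
stolen_w = [
    victim([1.0 if i == j else 0.0 for j in range(d)]) - stolen_b
    for i in range(d)
]
print(stolen_w, stolen_b)  # recovers the hidden parameters (up to float rounding)
```

This is why prediction APIs that return exact scores leak far more than APIs that return only a label, and why defenses often round, perturb, or rate-limit the returned values.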
Recent studies on the robustness of Convolutional Neural Networks (CNNs) show that CNNs are highly vulnerable to adversarial attacks. Meanwhile, smaller CNN models with no significant accuracy loss are being introduced to mobile devices. However, such research reports only the accuracy on standard datasets. The wide deployment of smaller models on millions of mobile devices stresses the importance of their robustness. In this research, we study how robust such models are with respect to state-of-the-art compression techniques such as quantization.
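Quantization, the compression technique studied here, can be sketched in a few lines: weights are mapped onto 8-bit integers and back, and the question this research asks is how much that rounding step erodes robustness, not just accuracy. A minimal symmetric int8 round-trip (the weight values are made up for illustration):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats in [-max|w|, max|w|]
    onto integers in [-127, 127] with a single shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.82, -0.41, 0.05, -1.27, 0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)        # integer weights, suitable for compact mobile inference
print(max_err)  # per-weight rounding error is bounded by scale / 2
```

The rounding error is tiny per weight, but it shifts every decision boundary slightly, which is exactly the effect an adversarial perturbation can exploit; that interaction is what robustness studies of compressed models measure.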
We thoroughly enjoyed sharing some early strategies to perform security analysis on Neural Networks (Deep Learning/Machine Learning Models) at Shopify.
The field is still young, and many more advancements need to happen in order to build enterprise-grade scanners.
Our discussion was recorded, and your comments and opinions would help drive the field forward. To the best of our knowledge, this talk is the first of its kind on YouTube.
Video Link: https://lnkd.in/erP9tUE
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
2. On April 23, 2013, Syrian hackers compromised the Associated Press
Twitter feed and tweeted, “Breaking: Two Explosions in the White House
and Barack Obama is injured”.
In response to the tweet, the Dow Jones Industrial Average briefly plunged,
erasing roughly $136 billion in market value (the drop was reversed about 3 minutes later).
3. What are adversarial attacks and why should you care?
● Any attempt to fool a deep learning model with deceptive input
● Most heavily researched in image recognition, but also applicable to audio, text, and tabular data
● When building models, we mostly focus on classification effectiveness and minimizing error; relatively little work addresses model security and robustness
● Imperceptible amounts of non-random noise can fool neural networks!
● Some of these attacks are 100% effective in fooling normally trained neural networks!
7. What I’ll talk about
● Threat models
● Some background terminology
● Notable adversarial models
● Notable adversarial defenses
● Trends and remaining challenges
● Code
8. Level of threat
● White-box: full knowledge of the model architecture and parameters
● Gray-box: knowledge limited to the features and model type
● Black-box: no/minimal knowledge of the model; the attacker can only observe its output
All non-adversarially trained models are susceptible, even to black-box attacks
Adversarially trained models are still susceptible to white-box attacks
9. Background
Adversarial loss: J(θ, x, y), where θ = the model weights
An adversarial sample x’ satisfies D(x, x’) < η (a predefined distance constraint on the perturbation)
● Idea: find the minimum perturbation x’ within that constraint such that f(x’) ≠ y
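This constrained search can be instantiated very cheaply with the well-known fast gradient sign method (FGSM): take a single step of size η in the direction of the sign of the loss gradient, which increases J while keeping the L∞ perturbation within η. A minimal sketch against a toy logistic-regression model (the weights and sample below are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eta):
    """One fast-gradient-sign step against a logistic-regression model.

    The gradient of the cross-entropy loss J w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; stepping eta * sign(gradient)
    increases J while keeping ||x' - x||_inf <= eta.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eta * np.sign(grad_x)

# Toy model: weights chosen by hand, not trained.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])               # benign sample
y = 1.0                                # true label

x_adv = fgsm(x, y, w, b, eta=0.4)
print(sigmoid(w @ x + b) > 0.5)        # True: benign sample classified as 1
print(sigmoid(w @ x_adv + b) > 0.5)    # False: prediction flipped
print(np.max(np.abs(x_adv - x)))       # ≈ 0.4, the L∞ budget η
```

With η = 0.4 a single step is enough to flip this toy model's prediction; against a real network one would compute the input gradient by backpropagation instead.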
10. Adversarial samples should be indistinguishable from benign samples
Distance metrics:
● L₂ distance: the square root of the summed squared differences between the adversarial and benign images
● L∞ distance: the maximum element-wise difference between the adversarial and benign images
(for each pixel, take the absolute difference between X and Z, and return the
largest such value found over all pixels)
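Both metrics are one-liners to compute; a small numpy sketch (the arrays are arbitrary stand-ins for images):

```python
import numpy as np

def l2_distance(x, z):
    """Euclidean (L2) distance: root of the summed squared pixel differences."""
    return np.sqrt(np.sum((x - z) ** 2))

def linf_distance(x, z):
    """L-infinity distance: the largest absolute per-pixel difference."""
    return np.max(np.abs(x - z))

benign = np.zeros((2, 2))
adversarial = np.array([[0.0, 0.1],
                        [0.0, 0.3]])
print(l2_distance(benign, adversarial))    # sqrt(0.01 + 0.09) ≈ 0.3162
print(linf_distance(benign, adversarial))  # 0.3
```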
11. Notable adversarial models
Limited-memory BFGS (L-BFGS) attack
Uses grid search / line search to find the optimal hyperparameter
Carlini and Wagner (C&W) attack
A set of optimization-based attacks that generate L₀-, L₂-, and L∞-norm-measured adversarial samples, with some
restrictions (kappa) to make sure a valid image is produced
100% attack success against ‘normal’ neural networks trained on MNIST, CIFAR-10, and ImageNet
Has also compromised defensive models
12. Notable adversarial models
DeepFool
“Iterative linearization of the classifier to generate minimal perturbations that are sufficient to change classification labels”
Computes perturbations more reliably
Moosavi-Dezfooli et al., https://arxiv.org/pdf/1511.04599.pdf
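For an affine binary classifier f(x) = w·x + b, the minimal L₂ perturbation DeepFool searches for has a closed form: project x onto the decision boundary w·x + b = 0, then overshoot slightly to cross it. A sketch with hand-picked weights (illustrative, not a trained network):

```python
import numpy as np

def deepfool_linear(x, w, b, overshoot=0.02):
    """Minimal L2 perturbation that flips an affine binary classifier.

    r = -(f(x) / ||w||^2) * w moves x exactly onto the hyperplane
    w.x + b = 0; the (1 + overshoot) factor pushes it just past it.
    """
    fx = w @ x + b
    r = -(fx / np.dot(w, w)) * w
    return x + (1 + overshoot) * r

w = np.array([1.0, 2.0])
b = -1.0
x = np.array([2.0, 1.0])           # f(x) = 3 > 0

x_adv = deepfool_linear(x, w, b)
print(np.sign(w @ x + b))          # 1.0: original label
print(np.sign(w @ x_adv + b))      # -1.0: label flipped
print(np.linalg.norm(x_adv - x))   # small: ≈ (1 + overshoot) * |f(x)| / ||w||
```

For deep networks, DeepFool applies this same projection iteratively to the locally linearized classifier until the label changes.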
14. Notable adversarial models
Universal adversarial attack
● Is there a single universal perturbation that will work on most samples?
● L-BFGS-based
● Effective in attacking networks such as CaffeNet, GoogLeNet, VGG, and ResNet
● Fooling rate > 53%
16. Text & Audio Models
A 1% audio perturbation can change 50 words in a text transcription!
Attacks are robust to MP3 compression, but get lost when played over speakers
https://nicholas.carlini.com/code/audio_adversarial_examples/
Strategies for text attacks generally include deleting, inserting, and modifying characters/words
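Those three character-level strategies can be sketched in a few lines; the transform below is a hypothetical illustration, not a specific published attack:

```python
import random

def perturb_text(text, rng):
    """Apply one random character-level edit: delete, insert, or swap."""
    i = rng.randrange(len(text))
    op = rng.choice(["delete", "insert", "swap"])
    if op == "delete":
        return text[:i] + text[i + 1:]
    if op == "insert":
        return text[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + text[i:]
    # swap adjacent characters (a typo-style perturbation)
    j = min(i + 1, len(text) - 1)
    chars = list(text)
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

rng = random.Random(0)
print(perturb_text("adversarial", rng))
```

In a real attack, such edits are chosen greedily or by search to maximize the victim model's loss while keeping the text readable.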
17. Adversarial defenses fall into 5 categories
1. Training on adversarial samples
2. Randomization
3. Adding noise
4. Removing noise
5. Mathematically provable defenses
18. Defang: Randomize input or features
● Randomly padding and resizing input; image transformations with randomness
19. ● Add random noise layer before each convolutional layer in training and test sets (RSE)
● Random feature pruning at each layer
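The padding half of that random pad-and-resize idea can be sketched as follows (array sizes are illustrative): the padded output size is fixed, but where the input lands inside it changes on every call, so a gradient-based attacker faces a shifting target:

```python
import numpy as np

def random_pad(image, pad_total, rng):
    """Zero-pad a 2-D image to a fixed larger size at a random offset.

    The output size is deterministic (H + pad_total, W + pad_total),
    but the position of the image inside it varies per call.
    """
    top = rng.integers(0, pad_total + 1)
    left = rng.integers(0, pad_total + 1)
    h, w = image.shape
    out = np.zeros((h + pad_total, w + pad_total), dtype=image.dtype)
    out[top:top + h, left:left + w] = image
    return out

rng = np.random.default_rng(0)
img = np.ones((4, 4))
padded = random_pad(img, pad_total=2, rng=rng)
print(padded.shape)   # (6, 6)
print(padded.sum())   # 16.0: content preserved, only its position varies
```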
20. Detect: Denoise the input or features
● Conventional input rectification
○ ‘Squeeze’ image → if output is very different from input, then likely adversarial
● GAN-based
○ Use GAN to learn benign data distribution
○ Generate a benign projection for the adversarial sample
● Autoencoder-based
○ Detector & reformer
○ Use an autoencoder to compress the input and learn the manifold of benign samples
○ Detector compares each sample to learnt manifold
○ Reformer rectifies adversarial samples
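The detector idea can be sketched with PCA standing in for the autoencoder (PCA is effectively a linear autoencoder; the data below are synthetic): learn a low-dimensional model of benign samples, then flag inputs whose reconstruction error is unusually large:

```python
import numpy as np

def fit_pca(X, k):
    """Learn a k-dimensional linear 'manifold' of benign samples."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                    # mean and principal directions

def reconstruction_error(x, mean, components):
    """Distance between x and its projection onto the learnt manifold."""
    z = (x - mean) @ components.T          # encode
    x_hat = mean + z @ components          # decode
    return np.linalg.norm(x - x_hat)

rng = np.random.default_rng(0)
# Benign data lives (almost) on a 1-D line in 3-D space.
t = rng.normal(size=(200, 1))
benign = t @ np.array([[1.0, 2.0, 3.0]]) + 0.01 * rng.normal(size=(200, 3))
mean, comps = fit_pca(benign, k=1)

on_manifold = np.array([1.0, 2.0, 3.0])    # matches the benign structure
off_manifold = np.array([3.0, 0.0, -1.0])  # "adversarial": off the line
print(reconstruction_error(on_manifold, mean, comps))   # small
print(reconstruction_error(off_manifold, mean, comps))  # large -> flag it
```

A reformer would go one step further and replace the flagged input with its decoded projection x_hat, pulling it back towards the benign manifold.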
21. Detect: Denoise the input or features
● High-level representation guided denoiser (HGD)
○ Trains a denoising U-net with a feature-level loss function that minimizes the feature
differences between benign and adversarial samples
○ Won first place in the NIPS 2017 black-box defense competition
○ Even so, certain (white-box) attacks can reduce its effectiveness to 0%
22. Provable (certified) defenses
● Defenses with theoretical backing that guarantees a certain accuracy against attacks
● The range of defenses includes KNN- and Bayesian-based defenses
● Consistency-based defenses:
○ Perturbations also affect the area around them
○ > 90% detection rate
● Very computationally intensive
23. Trends in adversarial research
● Designing stronger attacks to probe for weaknesses
● Real-world attack capabilities
● Certified defenses - but currently not scalable
“A problem is that an attack can only target one category of defenses, but defenses are required to … be effective
against all possible attack methods”
● Analyzing model robustness - mostly done on KNN and linear classifiers
24. Unresolved challenges
● Causality
● Does a general robust decision boundary exist that could be learnt by (certain) neural
networks?
● Effectiveness vs. efficiency
○ Adversarial training is effective, but requires a lot of data and compute
○ Randomization and denoising strategies are very efficient, but not as effective as claimed
25. Discussion
In what other ways are models not robust?
Is model robustness/security applicable to what you do / to our students?
Model fairness has been a hot topic lately, but robustness/security seems to lag behind - what do you
think needs to change for adversarial training to be widely implemented?
What are your thoughts on the paper in general?
26. Try it yourself
Benchmark machine learning systems' vulnerability to adversarial examples:
https://github.com/cleverhans-lab/cleverhans
Blog: cleverhans.io