As more firms use machine learning (ML) and artificial intelligence (AI) initiatives, protecting them becomes more crucial. You can counteract threat actors’ strategies, which include a variety of techniques to trick or abuse AI and machine learning systems and models. Defense against adversarial machine learning is one of the newer facets of AI and ML security, and some of the relevant threats and defenses are not AI-specific. According to a report published by Microsoft this spring, 90% of firms are not prepared to defend themselves against adversarial machine learning: 25 of the 28 firms covered by the report lacked the security measures required to protect their ML systems.
In a poll conducted by Gartner this spring, the difficulty of integrating AI technologies into current infrastructure and security concerns shared the top spot among barriers to AI adoption.
Adversarial Machine Learning (ML)
Adversarial machine learning examines how machine learning algorithms can be challenged and countered. Contrary to what its name might imply, it is not a branch of machine learning itself; rather, it is a collection of strategies that adversaries employ to undermine machine learning systems. A survey of industrial applications points to a critical need for improved protection of machine learning systems. According to Alexey Rubtsov, a professor at Toronto Metropolitan University (formerly Ryerson) and senior research associate at the Global Risk Institute, “adversarial machine learning exploits flaws and specificities of ML models.” He recently published a paper on the application of adversarial machine learning in the financial services industry.
anumak.ai
Types of ML attacks
• Poisoning attack: The attacker manipulates the training data or its labels so that the model performs poorly once deployed. Poisoning is simply the hostile contamination of training data. Because ML systems can be retrained on data gathered during operation, an attacker can taint that data by introducing malicious samples that interfere with or bias retraining.
• Evasion attacks: Evasion attacks are the most common and most studied. At deployment time, the attacker tampers with input data to trick classifiers that have already been trained. Because they are carried out during deployment, they are the attacks most often seen in intrusion and malware scenarios: attackers frequently obscure the content of malware or spam emails to avoid detection. Evasion does not touch the training data; instead, samples are altered so that they escape detection. Spoofing attacks against biometric verification systems are an example of evasion.
• Model Extraction attack: A model thief or model extractor probes a black-box machine learning system in order to reconstruct the model or extract the data it was trained on. This is especially important when the training data or the model itself contains private and sensitive information.
For example, an adversary could use a model extraction attack to steal a stock market forecasting model and exploit it for financial gain.
If a business uses a commercial AI product, the adversary may be able to obtain a copy of the model by buying it or accessing it as a service. Attackers can, for instance, test their malware against antivirus engines on open platforms.
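The poisoning attack described above can be sketched with a toy example. The nearest-centroid “model” and all data below are hypothetical stand-ins for a real ML pipeline; the point is only how mislabeled samples injected at retraining time shift the model’s behavior.

```python
# Sketch of a label-flipping poisoning attack on a toy 1-D
# nearest-centroid classifier (illustrative, not a real ML pipeline).

def train_centroids(xs, ys):
    """Return the mean feature value for each class label."""
    sums, counts = {}, {}
    for x, y in zip(xs, ys):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Classify x as the label whose centroid is closest."""
    return min(centroids, key=lambda y: abs(centroids[y] - x))

# Clean training data: class 0 clusters near 1.0, class 1 near 5.0.
xs = [0.9, 1.1, 1.0, 4.9, 5.1, 5.0]
ys = [0, 0, 0, 1, 1, 1]
clean_model = train_centroids(xs, ys)

# The attacker injects mislabeled samples during retraining:
# extreme values labeled class 0, dragging that centroid away.
poisoned_xs = xs + [9.0, 9.5, 10.0]
poisoned_ys = ys + [0, 0, 0]
poisoned_model = train_centroids(poisoned_xs, poisoned_ys)

print(predict(clean_model, 1.0))     # 0: clean model is correct
print(predict(poisoned_model, 1.0))  # 1: poisoned model misclassifies
```

With only three injected points, the class-0 centroid moves from 1.0 to 5.25, so a clearly class-0 input is now assigned to class 1.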
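An evasion attack can likewise be illustrated without touching any training data. The keyword-based spam filter below is a deliberately naive, hypothetical detector; obfuscating the trigger words at deployment time is enough to slip past it.

```python
# Minimal sketch of an evasion attack: a naive keyword-based spam
# filter (hypothetical) is fooled by obfuscating trigger words.

SPAM_KEYWORDS = {"free", "winner", "prize"}

def is_spam(message):
    """Flag a message if any known spam keyword appears verbatim."""
    words = message.lower().split()
    return any(w in SPAM_KEYWORDS for w in words)

original = "You are a WINNER claim your FREE prize"
evasive = "You are a W1NNER claim your FR3E pr1ze"  # leetspeak obfuscation

print(is_spam(original))  # True: keywords detected
print(is_spam(evasive))   # False: same content evades the filter
```

Real malware and spam classifiers are far more sophisticated, but the principle is the same: the attacker alters the sample, not the model.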
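Model extraction can be sketched with a minimal example. The black-box linear model and its secret parameters below are hypothetical; the sketch assumes the attacker can freely query a prediction endpoint and shows how a few well-chosen queries recover the parameters exactly.

```python
# Sketch of a model-extraction attack against a hypothetical
# black-box service exposing only predictions of a linear model
# y = w1*x1 + w2*x2 + b.

SECRET_W = (2.0, -3.0)  # parameters hidden inside the service
SECRET_B = 0.5

def black_box_predict(x1, x2):
    """The only interface the attacker can see."""
    return SECRET_W[0] * x1 + SECRET_W[1] * x2 + SECRET_B

# Three well-chosen queries suffice to recover three unknowns.
b_hat = black_box_predict(0.0, 0.0)           # y(0,0) = b
w1_hat = black_box_predict(1.0, 0.0) - b_hat  # y(1,0) - b = w1
w2_hat = black_box_predict(0.0, 1.0) - b_hat  # y(0,1) - b = w2

def stolen(x1, x2):
    """The attacker's reconstructed copy of the model."""
    return w1_hat * x1 + w2_hat * x2 + b_hat

print(stolen(4.0, 7.0) == black_box_predict(4.0, 7.0))  # True
```

Extracting a deep model naturally takes far more queries and only yields an approximation, but the economics are similar: prediction access can leak the model itself.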
A few known adversarial attack methods
• Limited-memory BFGS (L-BFGS): The Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) method is a non-linear gradient-based numerical optimization technique used to minimize the perturbation added to an image. One of its benefits is that it is effective at producing adversarial examples. However, because it solves a computationally demanding box-constrained optimization problem, it requires a lot of processing power, which makes the approach slow and impractical at scale.
• Fast Gradient Sign Method (FGSM): This method perturbs every pixel of an image by a small amount in the direction of the sign of the loss gradient, causing misclassification while keeping the overall disturbance minimal. Compared with other techniques, it is computationally cheap. However, it adds perturbations to every feature of the input.
• Jacobian-based Saliency Map Attack (JSMA): Unlike FGSM, this approach uses feature selection to minimize the number of features modified while still producing misclassification. Flat perturbations are added to features iteratively, in descending order of saliency score. As a result, only a few features are affected, in contrast to FGSM, but the attack requires more computation than FGSM.
• DeepFool attack: This untargeted adversarial sample generation method keeps the Euclidean distance between the perturbed sample and the original as small as possible. It iteratively estimates the decision boundaries between classes and adds perturbations accordingly. It produces adversarial instances effectively, achieving high misclassification rates with small perturbations.
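The FGSM update described above can be sketched against a logistic regression classifier. The weights and inputs below are made-up stand-ins for a trained model, and the epsilon is deliberately large so the single-step label flip is visible.

```python
import numpy as np

# Sketch of FGSM against a hypothetical pre-trained logistic
# regression model: x_adv = x + eps * sign(d loss / d x).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed fixed parameters (stand-ins for a trained classifier).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    """Probability of class 1 under the logistic model."""
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """One FGSM step: perturb x in the sign of the loss gradient."""
    p = predict_proba(x)
    grad_x = (p - y_true) * w  # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

x = np.array([1.0, -1.0, 0.5])  # correctly classified as class 1
x_adv = fgsm(x, y_true=1.0, eps=1.5)

print(predict_proba(x) > 0.5)      # True: original is class 1
print(predict_proba(x_adv) > 0.5)  # False: adversarial input flips it
```

Note that every component of x is changed by eps, which matches the point above: FGSM is cheap but perturbs every feature.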
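For a linear (affine) binary classifier, the DeepFool idea reduces to a closed form: the minimal Euclidean perturbation reaching the decision boundary of f(x) = w·x + b is r = -(f(x)/||w||²)·w, followed by a slight overshoot to cross it. The weights and input below are hypothetical.

```python
import numpy as np

# DeepFool sketch for a hypothetical affine binary classifier
# f(x) = w @ x + b. The minimal L2 perturbation to the boundary
# has a closed form; a small overshoot pushes the sample past it.

w = np.array([3.0, 4.0])  # assumed trained weights
b = -1.0

def f(x):
    return w @ x + b

x = np.array([2.0, 1.0])         # f(x) = 9.0 -> class +1
r = -(f(x) / np.dot(w, w)) * w   # minimal perturbation to the boundary
x_adv = x + 1.02 * r             # 2% overshoot, as in DeepFool

print(np.sign(f(x)))       # 1.0
print(np.sign(f(x_adv)))   # -1.0: label flipped
print(np.linalg.norm(r))   # 1.8: Euclidean size of the perturbation
```

The multi-class, non-linear version iterates this step against a local linearization of the model, which is why DeepFool finds notably smaller perturbations than FGSM.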
Attacks on the AI system
According to Gartner, most attacks against common software can also be used against AI, and various traditional security measures can be used to safeguard AI systems. For instance, tools that shield data from access or compromise can also shield training data sets from alteration.
In addition, Gartner advises businesses to take extra precautions if they need to safeguard AI and machine learning systems. First, Gartner advises businesses to embrace reliable AI principles and conduct model validation tests to safeguard the integrity of AI models. Second, it advises deploying data poisoning detection technology to safeguard the integrity of AI training data.
Conclusion
By opening the door to data manipulation and exploitation, machine learning creates a new attack surface and raises security threats. Some machine learning models employ reinforcement learning and keep learning from new data as it arrives. Companies implementing machine learning technology must understand the dangers of adversarial samples, stolen models, and data manipulation. In addition, before adopting a third-party technology, enterprises must ask providers how they safeguard their systems against adversarial attacks.