Ivan is the founder of Wallarm, an AI-based security company based in Silicon Valley. Ivan is well known in the industry. He is a recipient of many bug bounty awards from companies such as Google, Facebook, and Honeywell. Ivan is also a frequent speaker at industry events and is known as the inventor of memcached injection and for his work on Server Side Request Forgery (SSRF).
4. AI is a buzzword
Machine learning
● Analytical algorithms
○ Built around predefined formulas
● Neural networks
○ Internal algorithm generated by training
(a minimal code contrast of the two follows below)
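A minimal code contrast of the two, on a hypothetical request-classification task (the regex rule and the toy training data are illustrative only, and logistic regression stands in for a neural network):

import re
from sklearn.linear_model import LogisticRegression

# Analytical algorithm: the detection formula is written by a human
# and can be read, audited, and tested directly.
def analytical_detector(request: str) -> bool:
    return re.search(r"union\s+select", request, re.IGNORECASE) is not None

# Trained model: the decision rule is whatever the fitted weights
# encode; there is no human-readable formula to inspect.
samples = ["id=1", "id=1' UNION SELECT password FROM users", "name=bob"]
features = [[len(s), s.count("'")] for s in samples]
labels = [0, 1, 0]
learned_detector = LogisticRegression().fit(features, labels)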
5. How does it work? How can we test it?
It's impossible to validate a trained neural network analytically
by some formula (unlike old-fashioned algorithms)
It's impossible to test all the cases
6. We have no time to test
2^(16x16) = 2^256 (the number of distinct 16x16 black-and-white images) =
115792089237316195423570985008687907853269984665640564039457584007913129639936
- a 78-digit number!
Checking them all at a speed of 100B images/s would take about
36717430630808027468154168254911183362909051454097083980041 years
13.8*10^9 years (the age of the universe) <- about 48 orders of
magnitude smaller than the figure above
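A quick back-of-the-envelope check of these figures, as a minimal Python sketch (the 16x16 binary-image space and the 100B images/s rate come from the slide; the year length is an assumption):

from math import log10

states = 2 ** (16 * 16)            # distinct 16x16 black-and-white images
print(len(str(states)))            # 78 digits

seconds = states / 100e9           # at 100 billion images per second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")        # ~3.67e+58 years

universe_age = 13.8e9              # years
print(round(log10(years / universe_age)), "orders of magnitude")  # 48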
10. AI attackers
1. Adversarial examples. Bypassing AI detection logic with
examples intentionally generated by another neural
network.
2. AI-exploits. Generating payloads and attack scenarios to
find and exploit vulnerabilities.
11. AI attackers: Adversarial example
An input intentionally crafted to cause the neural network to
make a mistake
https://blog.openai.com/adversarial-example-research/
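One widely used way to craft such inputs is the Fast Gradient Sign Method (FGSM) discussed in the OpenAI post above. Note that FGSM perturbs the input using the target model's own gradients; training a separate generator network, as the previous slide mentions, is another route. A minimal PyTorch sketch, where model, image, and label are hypothetical caller-supplied values:

import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    # Nudge each pixel in the direction that increases the loss,
    # bounded by epsilon so the change stays imperceptible.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()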
12. AI-exploits
HITB GSEC, Singapore 2017. NeuralFuzz talk by Ivan Novikov
It's possible to train a neural network to generate input
payloads that trigger vulnerabilities, depending on the application
context and conditions, the way a human tester can
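The talk itself covers the actual NeuralFuzz approach; below is only a toy Python sketch of the general feedback loop, with a hypothetical vulnerable_parse() standing in for the application under test and a weighted character table standing in for a trained neural generator:

import random

ALPHABET = "'\"<>;(){}%x0 "

def vulnerable_parse(s: str) -> None:
    # Hypothetical target: a naive parser that chokes on unbalanced quotes.
    if s.count("'") % 2 == 1:
        raise ValueError("syntax error")

def sample_payload(weights, length=12):
    # Stand-in for sampling from a trained neural payload generator.
    return "".join(random.choices(ALPHABET, weights=weights, k=length))

def fuzz(rounds=1000):
    weights = [1.0] * len(ALPHABET)
    crashes = []
    for _ in range(rounds):
        payload = sample_payload(weights)
        try:
            vulnerable_parse(payload)
        except ValueError:
            crashes.append(payload)
            # Crude feedback loop: upweight characters seen in crashing
            # inputs so later samples look more like known-bad payloads.
            for ch in payload:
                weights[ALPHABET.index(ch)] += 0.1
    return crashes

print(len(fuzz()), "crashing payloads found")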