[Speaker note: mention my background. Stanford OR, more than 1,000 published articles on infosec, two computer networking books, and various online and print publications I have run.] The typical AI process workflow starts with building a data lake to collect information, in this case about security events, incidents and perhaps breach notifications. Then you come up with an algorithm, build your model, try it out to see if it produces any insights, and return to your data, cleanse it and try again. This approach doesn't really work in the security arena, because it fails to encapsulate the actual human knowledge we have about identifying and mitigating security threats. We completely ignore the insights we have collected over years of hands-on experience, mainly because they aren't easily quantified. Today I want to review where we are with AI and security and show you some of the leading efforts at combining the two approaches.
Let me take you through the reality of this intersection, dealing with some of the major problems where AI can help right now, and touch on some innovators in this space who can give us all hope that we don't go down the Skynet rabbit hole.
Usually, when we talk about AI and security, the first thing that comes to mind is this: machines taking over the planet, a la Terminator. To that, all I can say is: Hasta la vista, baby! Let's try to dispel this mythology once and for all.
One current effort is at the MIT Media Lab, where a group called Scalable Cooperation has coined the term "machine behavior" for research into how machines interact with each other. https://medium.com/mit-media-lab/studying-the-behavior-of-ai-ca8f0475bf3b
Our first reality about malware is that it is getting better at hiding and "living off the land," using built-in Windows OS functions, which makes it harder to detect. Malware writers have learned that mimicking scripting and normal OS functions is a great way to keep things hidden.
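To make this concrete, here is a minimal sketch of the kind of pattern-matching heuristic defenders use to spot living-off-the-land abuse of built-in tools. The patterns and command lines below are hypothetical illustrations, not a production detection rule:

```python
import re

# Hypothetical heuristic: flag command lines that look like
# "living off the land" abuse of built-in Windows tools.
SUSPICIOUS = [
    re.compile(r"powershell.+-enc(odedcommand)?\s", re.I),  # base64-encoded payloads
    re.compile(r"certutil.+-urlcache", re.I),               # file-download abuse
    re.compile(r"rundll32.+javascript:", re.I),             # in-memory script execution
]

def flag(cmdline):
    """Return True if any suspicious pattern matches the command line."""
    return any(p.search(cmdline) for p in SUSPICIOUS)

print(flag("powershell.exe -NoP -Enc aQBlAHgA..."))   # True
print(flag("powershell.exe Get-ChildItem C:\\"))      # False
```

The catch, as the slide notes, is that legitimate admin scripting looks very similar, which is exactly why static rules like these produce false positives and why vendors reach for statistical models instead.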
Reality #3: malware is getting better at targeting victims. It's no longer "spray and pray" but "target and stay," which also helps keep things hidden, although researchers now often find single customized malware instances.
Finally, malware has become quite profitable and a major industry in its own right. Anyone can order up a custom cyber attack for a couple of hundred bucks, thanks to exploit kits that your average teenager can set up with a web console and few skills. Plus, nation states have gotten into the game, using cyber attacks to complement their espionage activities.
Now let's talk about the current state of the art. This is Dudu Mimran, the CTO of DT Labs and a frequent speaker on AI and security. He talks about a cyber arms race: as defenders get better at finding incursions, attackers get better at hiding their craft. Can AI help us out here? That remains to be seen. Part of the problem is picking the right time horizon to build our models: if we pick too short a time, we will miss the specific trigger event that let someone into our network. Too long a time horizon, and the event gets lost in the logs. Gartner's Anton Chuvakin has a simple AI/cyber test: use AI when traditional infosec methods are intractable, inefficient or simply impossible, and when you have high enough and relevant data quality.
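The time-horizon tradeoff can be illustrated with a toy example. All the timestamps and window sizes here are synthetic numbers made up for illustration, but they show how a short attack burst that stands out in a narrow window all but disappears when you average over a long one:

```python
# Synthetic event timestamps (seconds): steady background traffic every
# 10 seconds, plus a short burst at t=100-102 representing a trigger event.
events = list(range(0, 200, 10)) + [100, 100, 101, 101, 102]

def max_window_count(events, window):
    """Max number of events seen in any window of the given length."""
    return max(sum(1 for e in events if start <= e < start + window)
               for start in range(0, 200))

# A 5-second window isolates the burst clearly against a background
# of at most 1 event per window...
short = max_window_count(events, 5)     # 6 events vs. background of 1
# ...while a 100-second window dilutes it into the background noise.
long_ = max_window_count(events, 100)   # 15 events vs. background of ~10

print(short, long_)  # 6 15
```

In the short window, the burst is six times the background rate; in the long one, it is a 50% bump that a threshold tuned to log noise could easily miss.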
Attribution is one place where AI can be more of a hindrance than a help. Flashpoint found a Chinese linguistic link in the WannaCry ransomware, while much of the security research up to that point had pointed to North Korean ties. So using AI can give us false flags about who actually wrote the malware: deeper analysis showed that the time zone metadata was set to Korean time, and eventually a North Korean spy was charged with involvement in both the Sony and WannaCry attacks. Attackers deliberately plant these false clues to mislead researchers, and there is little commercial incentive to fund better attribution efforts.
The idea that a front-end developer has to be aware of the data and the implications of its structure in data-driven companies is itself a new one. And the practice of data scientists providing front-end developers with unit tests is not at all common today. https://sanau.co/ML-models-are-dying-quietly
If you are running a security operations center, this is what you look at daily to figure out how to keep the bad guys out. Unfortunately, when it comes to using AI to automate these processes, we are solving the wrong problem: we need a different approach, not just a different set of policy rules.
Both Amazon and Google offer a wide collection of AI and ML tools and cloud services that do forecasting, image recognition, text and data analytics, and conversational interfaces, and that can train your ML model, all with just your web browser and a bunch of API keys.
So let’s look at a few of the AI cyber innovators that you should pay attention to.
Several companies are making use of homomorphic encryption, which has been around academia for more than a decade, so that different data owners can see only the allowed data elements, passing everything among them in encrypted form. This slide shows how Duality SecurePlus works. There are several other companies in this space, including Enveil.com ZeroReveal, Capnion.com Ghost PII, and Preveil.com email and file security solutions. There are also a couple of open source research projects, such as OpenMined and HElib, that are building new tools to balance privacy and security using homomorphic techniques.
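To show what "computing on encrypted data" actually means, here is a toy version of the Paillier cryptosystem, one of the classic additively homomorphic schemes: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a third party can add values it cannot read. The primes below are absurdly small for readability; this is strictly an illustration, not usable cryptography (real systems use vetted libraries):

```python
import math
import random

# Toy Paillier setup with tiny, insecure demo primes -- illustration only.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)               # modular inverse of lambda mod n

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2 for a random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x-1)//n."""
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

a, b = encrypt(42), encrypt(100)
# Multiplying the ciphertexts adds the underlying plaintexts:
print(decrypt((a * b) % n2))  # 142
```

This additive property is what lets parties like the data owners on the slide jointly compute aggregates while passing around only ciphertext.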
The city of San Diego has put together a project called the CIE. It has basically become a common trust broker through which different social services agencies can share private information about a client without having to invade that person's private details.
https://ciesandiego.org/. And https://icecybersecurity.com/.
Google's Chronicle has a product called Backstory that ingests so much network traffic and log data that the company has built ML tools to figure out when a customer was first attacked, even if the incursion happened many years ago.
Coinbase makes use of a wide variety of AWS ML tools to detect fraudulent IDs, and has gotten very sophisticated about stopping other criminal uses of its system. https://www.oreilly.com/ideas/ai-at-scale-at-coinbase
This firm has been sponsored by DARPA. Its product includes four elements that learn your security environment to detect vulnerabilities, and it can be very useful in detecting insider threats.
If we are going to use automation for threat hunting, we have to do a better job of combining real-time queries with better visualizations of attacks. Just drawing pretty maps like this isn't really sufficient; we have to enable non-security pros to find and neutralize threats. There are a number of vendors doing this, including Darktrace (which is what I am showing here). Its cyber AI platform uses unsupervised machine learning to analyze network data at scale, making billions of probability-based calculations based on the evidence it sees.
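Darktrace's actual models are proprietary, but the general idea of unsupervised anomaly detection over network data can be sketched with scikit-learn. The features, numbers, and "exfiltrating host" below are all synthetic assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic per-host features: [bytes out per hour (KB), distinct dest ports].
# Normal traffic clusters tightly; one hypothetical host sends far more data.
normal = rng.normal(loc=[500, 5], scale=[50, 1], size=(200, 2))
exfil = np.array([[5000.0, 60.0]])       # hypothetical compromised host
X = np.vstack([normal, exfil])

# Unsupervised: no labels -- the model learns what "normal" looks like
# and scores how easily each point can be isolated from the rest.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.predict(X)                # -1 = anomaly, 1 = normal
print(scores[-1])                        # the exfiltrating host is flagged
```

The appeal for non-security staff is that nobody had to write a rule saying "5,000 KB is bad"; the model flags the host simply because it sits far from everything else it has seen.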
Endgame covers both cloud and on-premises protection along with off-network devices, so you have all your bases covered, and lets you review the previous four months of data with attack visualization tools such as Resolver. Security staff can get a holistic view of where an infection has spread on their network, determine its root cause, and resolve it without ever leaving the page. It's a pretty nifty feature and very usable by non-security staff.
The company ZeroEyes is already using AI to do real-time weapon detection before someone commits a crime, so we aren't that far afield from Minority Report. And so we have come full circle from the Terminator movies. I hope you have enjoyed this tour of innovative AI/security companies and that it has given you a few things to think about in this space. https://www.defenseone.com/technology/2019/04/ai-enabled-cameras-detect-crime-it-occurs-will-soon-invade-physical-world/156502/
AI and cyber security: new directions, old fears
Editor, Inside Security
Slides here: slideshare.net/davidstrom
• Fear of Skynet
• Current malware situation
• Three issues with AI security implementations
• Hope for the future -- innovative AI/security uses