We present a novel emulation system for creating high-fidelity digital twins of IT infrastructures. The digital twins replicate key functionality of the corresponding infrastructures and allow security scenarios to be played out in a safe environment. We show that this capability can be used to automate the process of finding effective security policies for a target infrastructure. In our approach, a digital twin of the target infrastructure is used to run security scenarios and collect data. The collected data is then used to instantiate simulations of Markov decision processes and to learn effective policies through reinforcement learning; the performance of the learned policies is validated in the digital twin. This closed-loop learning process executes iteratively and provides continuously evolving and improving security policies. We apply our approach to an intrusion response scenario. Our results show that the digital twin provides the evaluative feedback necessary to learn near-optimal intrusion response policies.
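The closed loop described above (collect data in the twin, fit a decision-process model, learn a policy, validate it back in the twin) can be sketched in miniature. The states, actions, rewards, and dynamics below are invented purely for illustration and stand in for the real digital twin:

```python
import random
from collections import defaultdict

random.seed(0)

# Toy stand-in for the digital twin: states 0 (healthy) and 1 (intrusion),
# actions 0 (continue) and 1 (defend). All dynamics are illustrative.
def twin_step(state, action):
    if action == 1:                               # defending restores the system
        return 0, (1.0 if state == 1 else -0.5)
    nxt = 1 if random.random() < 0.2 else state   # intrusions start randomly
    return nxt, (-1.0 if nxt == 1 else 0.0)

def collect(episodes=500, horizon=20):
    """Step 1: run random scenarios in the twin and record transitions."""
    data = []
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            a = random.randrange(2)
            s2, r = twin_step(s, a)
            data.append((s, a, r, s2))
            s = s2
    return data

def q_learning(data, alpha=0.1, gamma=0.95, passes=50):
    """Step 2: fit a Q-table from the collected data (batch Q-learning)."""
    Q = defaultdict(float)
    for _ in range(passes):
        for s, a, r, s2 in data:
            target = r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q

def validate(Q, episodes=200, horizon=20):
    """Step 3: deploy the greedy policy in the twin; report average return."""
    total = 0.0
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            a = 0 if Q[(s, 0)] >= Q[(s, 1)] else 1
            s, r = twin_step(s, a)
            total += r
    return total / episodes

Q = q_learning(collect())
print(validate(Q))
```

In the real system, the validation step feeds new measurements back into step 1, which is what makes the loop iterative.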
Learning Near-Optimal Intrusion Responses for IT Infrastructures via Decompos... (Kim Hammar)
We study automated intrusion response and formulate the interaction between an attacker and a defender on an IT infrastructure as a stochastic game where attack and defense strategies evolve through reinforcement learning and self-play. Direct application of reinforcement learning to any non-trivial instantiation of this game is impractical due to the exponential growth of the state and action spaces with the number of components in the infrastructure. We propose a decompositional approach to deal with this challenge and prove that, under assumptions generally met in practice, the game decomposes into a) additive subgames on the workflow-level that can be optimized independently; and b) subgames on the component-level that satisfy the optimal substructure property. We further show that the optimal defender strategies on the component-level exhibit threshold structures. To solve the decomposed game, we develop Decompositional Fictitious Self-Play (\dfsp), an efficient fictitious self-play algorithm that learns Nash equilibria through stochastic approximation. We show that \dfsp outperforms a state-of-the-art algorithm for our use case. To evaluate the learned strategies, we deploy them in a virtual IT infrastructure in which we run real network intrusions and real response actions. From our experimental investigation we conclude that our approach can produce effective defender strategies for a practical IT infrastructure.
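The self-play dynamic underlying the approach can be illustrated on a much simpler object: fictitious play on a 2x2 zero-sum matrix game, where each player repeatedly best-responds to the opponent's empirical action frequencies. This is not \dfsp itself, only a minimal sketch of the learning principle it builds on:

```python
import numpy as np

# Fictitious play on matching pennies, a 2x2 zero-sum game.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])        # payoff matrix for the row player

counts_row = np.ones(2)            # empirical action counts, initialized to 1
counts_col = np.ones(2)

for _ in range(20000):
    row_strategy = counts_row / counts_row.sum()
    col_strategy = counts_col / counts_col.sum()
    counts_row[np.argmax(A @ col_strategy)] += 1   # row player maximizes payoff
    counts_col[np.argmin(row_strategy @ A)] += 1   # column player minimizes it

freq = counts_row / counts_row.sum()
print(freq)                        # approaches the mixed equilibrium (0.5, 0.5)
```

In zero-sum games the empirical frequencies of fictitious play converge to a Nash equilibrium, which is the property that self-play algorithms in this line of work exploit at scale.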
Automated Intrusion Response - CDIS Spring Conference 2024 (Kim Hammar)
Presentation at CDIS Spring Conference 2024.
The ubiquity and evolving nature of cyber attacks are of growing concern to industry and society. In response, the automation of security processes and functions is the focus of many current research efforts. In this talk we will present a framework for automated network intrusion response, in which we model the interaction between an attacker and a defender as a partially observed Markov game. Within this framework, reinforcement learning enables the controlled evolution of attack and defense strategies towards a Nash equilibrium through the process of self-play. To realize and experiment with the self-play process on a practical IT infrastructure, we have developed a software platform for creating digital twins, which provide two key functions for our framework: (i) a safe and realistic test environment; and (ii) a tool for evaluation that enables closed-loop learning of security strategies.
Optimizing cybersecurity incident response decisions using deep reinforcemen... (IJECEIAES)
The main purpose of this paper is to explore and investigate the role of deep reinforcement learning (DRL) in optimizing the post-alert incident response process in security incident and event management (SIEM) systems. Although machine learning is used at multiple levels of SIEM systems, the last-mile decision process is often ignored. Few papers have reported efforts regarding the use of DRL to improve the post-alert decision and incident response processes, and the reported efforts applied only shallow (traditional) machine learning approaches to solve the problem. This paper explores the possibility of solving the problem using DRL approaches. The main attraction of DRL models is their ability to make accurate decisions based on live streams of data without the need for prior training, and they have proved very successful in other fields of application. Using standard datasets, a number of experiments were conducted with different DRL configurations. The results showed that DRL models can provide highly accurate decisions without the need for prior training.
Intrusion Tolerance for Networked Systems through Two-level Feedback Control (Kim Hammar)
We formulate intrusion tolerance for a system with service replicas as a two-level optimal control problem. On the local control level, node controllers perform intrusion recoveries; on the global control level, a system controller manages the replication factor.
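The two control levels can be sketched as follows. The quorum size, compromise probability, and recovery logic below are all hypothetical placeholders, not the controllers from the paper:

```python
import random

random.seed(1)

QUORUM = 3          # healthy replicas we want available (hypothetical target)
P_INTRUSION = 0.1   # per-step compromise probability (hypothetical)

class Node:
    def __init__(self):
        self.compromised = False

    def local_control(self):
        # Local level: the node controller recovers (e.g. reimages) the node
        # when it detects a compromise; returns 1 if a recovery was performed.
        if self.compromised:
            self.compromised = False
            return 1
        return 0

def global_control(nodes):
    # Global level: the system controller raises the replication factor
    # whenever healthy capacity drops below the quorum.
    healthy = sum(not n.compromised for n in nodes)
    while healthy < QUORUM:
        nodes.append(Node())
        healthy += 1

nodes = [Node() for _ in range(QUORUM)]
recoveries = 0
for _ in range(100):
    for n in nodes:
        if random.random() < P_INTRUSION:
            n.compromised = True                 # intrusions occur
    global_control(nodes)                        # global: adjust replication
    recoveries += sum(n.local_control() for n in nodes)  # local: recover
print(len(nodes), recoveries)
```

The point of the sketch is the separation of concerns: recovery decisions are made per node, while the replication factor is a system-wide quantity.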
Learning Intrusion Prevention Policies through Optimal Stopping - CNSM 2021 (Kim Hammar)
We study automated intrusion prevention using reinforcement learning. In a novel approach, we formulate the problem of intrusion prevention as an optimal stopping problem. This formulation gives us insight into the structure of the optimal policies, which turn out to be threshold-based. Since computing the optimal defender policy using dynamic programming is not feasible for practical cases, we approximate the optimal policy through reinforcement learning in a simulation environment. To define the dynamics of the simulation, we emulate the target infrastructure and collect measurements. Our evaluations show that the learned policies are close to optimal and that they can indeed be expressed using thresholds.
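The threshold structure can be reproduced in a toy single-stop model: a one-dimensional belief that an intrusion is ongoing drifts upward over time, stopping pays off under intrusion and is penalized otherwise, and backward induction yields a policy of the form "stop once the belief exceeds a threshold". All numbers are illustrative and not taken from the paper:

```python
# Toy optimal-stopping model on a belief b = P(intrusion ongoing).
def stop_reward(b):
    return 2 * b - 1                      # b*(+1) + (1-b)*(-1)

def next_belief(b):
    return b + 0.1 * (1 - b)              # intrusion starts w.p. 0.1 per step

def threshold(horizon=50):
    """Backward induction on a belief grid; returns the stopping threshold."""
    grid = [i / 1000 for i in range(1001)]
    V = [stop_reward(b) for b in grid]    # at the horizon we must stop
    for _ in range(horizon):
        V = [max(stop_reward(b),          # stop now, or pay 0.1*b and wait
                 -0.1 * b + V[min(round(next_belief(b) * 1000), 1000)])
             for b in grid]
    for b in grid:                        # smallest belief where stopping wins
        cont = -0.1 * b + V[min(round(next_belief(b) * 1000), 1000)]
        if stop_reward(b) >= cont:
            return b
    return 1.0

tau = threshold()
print(tau)
```

In this toy model the stopping set is an interval [tau, 1], so the optimal policy is fully described by the single number tau; the paper shows an analogous structural result for the much richer intrusion-prevention setting.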
Learning Intrusion Prevention Policies Through Optimal Stopping (Kim Hammar)
CDIS Research Workshop 2021, Balingsholm.
We study automated intrusion prevention using reinforcement learning. In a novel approach, we formulate the problem of intrusion prevention as an optimal multiple stopping problem. This formulation gives us insight into the structure of the optimal policies, which we show to be threshold-based. Since computing the optimal defender policy using dynamic programming is not feasible for practical cases, we develop a reinforcement learning approach to approximate the optimal policy in a target infrastructure. The approach uses an emulation of the infrastructure to evaluate policies and to instantiate a simulation model, which is then used to train policies through reinforcement learning. Our results show that the learned policies are close to optimal and that they can indeed be expressed using thresholds.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I asked myself, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I give an overview of infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo provides some insights into which approaches I have already gotten working for real.
CNSM 2022 - An Online Framework for Adapting Security Policies in Dynamic IT Environment - Hammar & Stadler
An Online Framework for Adapting Security Policies in Dynamic IT Environments
International Conference on Network and Service Management (CNSM 2022)
Thessaloniki, Greece, Oct 31 - Nov 4, 2022
Kim Hammar & Rolf Stadler
kimham@kth.se, stadler@kth.se
Division of Network and Systems Engineering
KTH Royal Institute of Technology
Goal: Automation and Learning
- Challenges:
  - Evolving & automated attacks
  - Complex infrastructures
- Our Goal:
  - Automate security tasks
  - Adapt to changing attack methods
[Figure: target IT infrastructure with an attacker, clients, and a defender whose IPS raises alerts; a gateway connects 31 networked components]
Approach: Self-Learning Security Systems
- Challenges:
  - Evolving & automated attacks
  - Complex infrastructures
- Our Goal:
  - Automate security tasks
  - Adapt to changing attack methods
- Our Approach: Self-Learning Systems:
  - real-time telemetry
  - stream processing
  - theories from control/game/decision theory
  - computational methods (e.g. dynamic programming & reinforcement learning)
  - automated network management (SDN, NFV, etc.)
Our Framework for Automated Network Security
[Figure: closed loop between the target system, an emulation system, and a simulation system — selective replication creates the emulation from the target system; strategy evaluation & model estimation feed model creation & system identification; reinforcement learning & generalization in the simulation system produce a strategy π that is mapped back and implemented in the target system, enabling automation & self-learning systems]
Our Previous Work
- Finding Effective Security Strategies through Reinforcement Learning and Self-Play [1]
- Learning Intrusion Prevention Policies through Optimal Stopping [2]
- A System for Interactive Examination of Learned Security Policies [3]
- Intrusion Prevention Through Optimal Stopping [4]
- Learning Security Strategies through Game Play and Optimal Stopping [5]

[1] Kim Hammar and Rolf Stadler. "Finding Effective Security Strategies through Reinforcement Learning and Self-Play". In: International Conference on Network and Service Management (CNSM 2020). Izmir, Turkey, 2020.
[2] Kim Hammar and Rolf Stadler. "Learning Intrusion Prevention Policies through Optimal Stopping". In: International Conference on Network and Service Management (CNSM 2021). http://dl.ifip.org/db/conf/cnsm/cnsm2021/1570732932.pdf. Izmir, Turkey, 2021.
[3] Kim Hammar and Rolf Stadler. "A System for Interactive Examination of Learned Security Policies". In: NOMS 2022 IEEE/IFIP Network Operations and Management Symposium. 2022, pp. 1-3. doi: 10.1109/NOMS54207.2022.9789707.
[4] Kim Hammar and Rolf Stadler. "Intrusion Prevention Through Optimal Stopping". In: IEEE Transactions on Network and Service Management 19.3 (2022), pp. 2333-2348. doi: 10.1109/TNSM.2022.3176781.
[5] Kim Hammar and Rolf Stadler. "Learning Security Strategies through Game Play and Optimal Stopping". In: Proceedings of the ML4Cyber workshop, ICML 2022, Baltimore, USA, July 17-23, 2022. PMLR, 2022.
This Paper: Learning in Dynamic IT Environments [6]
- Challenge: operational IT environments are dynamic
  - Components may fail, load patterns can shift, etc.
- Contribution: we present a framework for learning and updating security policies in dynamic IT environments
[Figure: closed learning loop — the target system's configuration I and change events drive a digital twin that runs attack scenarios; policy evaluation & data collection yield traces h1, h2, ... for system identification, which produces a model M; policy learning (agent-environment interaction) produces a policy π that is evaluated in the digital twin and deployed as an automated security policy on the target system]

[6] Kim Hammar and Rolf Stadler. "An Online Framework for Adapting Security Policies in Dynamic IT Environments". In: International Conference on Network and Service Management (CNSM 2022). Thessaloniki, Greece, 2022.
Learning in Dynamic IT Environments

Algorithm 1: High-level execution of the framework
Input: emulator: method to create digital twin
       ϕ: system identification algorithm
       φ: policy learning algorithm

Algorithm(emulator, ϕ, φ)
  do in parallel
    DigitalTwin(emulator)
    SystemIdProcess(ϕ)
    LearningProcess(φ)
  end

Procedure DigitalTwin(emulator)
  loop
    π ← ReceiveFromLearningProcess()
    h_t ← CollectTrace(π)
    SendToSystemIdProcess(h_t)
    UpdateDigitalTwin(emulator)
  end loop

Procedure SystemIdProcess(ϕ)
  loop
    h1, h2, ... ← ReceiveFromDigitalTwin()
    M ← ϕ(h1, h2, ...)        // estimate model
    SendToLearningProcess(M)
  end loop

Procedure LearningProcess(φ)
  loop
    M ← ReceiveFromSystemIdProcess()
    π ← φ(M)                  // learn policy π
    SendToDigitalTwin(π)
  end loop
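The three procedures of Algorithm 1 run concurrently and exchange traces, models, and policies. A minimal sketch of that wiring (our illustration, not the authors' implementation) uses threads and queues; the emulator, system identification algorithm ϕ, and policy learner φ are hypothetical stubs here:

```python
import threading
import queue

trace_q = queue.Queue()    # digital twin -> system identification
model_q = queue.Queue()    # system identification -> policy learning
policy_q = queue.Queue()   # policy learning -> digital twin

def digital_twin(emulator, steps):
    policy = None
    for _ in range(steps):
        if not policy_q.empty():
            policy = policy_q.get()      # π ← ReceiveFromLearningProcess()
        trace = emulator(policy)         # h_t ← CollectTrace(π)
        trace_q.put(trace)               # SendToSystemIdProcess(h_t)

def system_id_process(phi, steps):
    for _ in range(steps):
        trace = trace_q.get()            # receive trace from the twin
        model_q.put(phi(trace))          # M ← ϕ(h1, h2, ...)

def learning_process(varphi, steps):
    for _ in range(steps):
        model = model_q.get()            # M ← ReceiveFromSystemIdProcess()
        policy_q.put(varphi(model))      # π ← φ(M); send π to the twin

# hypothetical stub components, only for illustration
emulator = lambda policy: {"alerts": 3, "policy": policy}
phi = lambda trace: {"mean_alerts": trace["alerts"]}          # "model"
varphi = lambda model: ("threshold", model["mean_alerts"])    # "policy"

threads = [
    threading.Thread(target=digital_twin, args=(emulator, 5)),
    threading.Thread(target=system_id_process, args=(phi, 5)),
    threading.Thread(target=learning_process, args=(varphi, 5)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The queues decouple the loops, so each process can run at its own pace, matching the "do in parallel" structure of the algorithm.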
The Digital Twin
Creating a Digital Twin of the Target System
- Emulate hosts with Docker containers
- Emulate IPS and vulnerabilities with software
- Network isolation and traffic shaping through NetEm in the Linux kernel
- Enforce resource constraints using cgroups
- Emulate client arrivals with a Poisson process
- Internal connections are full-duplex & loss-less with bit capacities of 1000 Mbit/s
- External connections are full-duplex with bit capacities of 100 Mbit/s, 0.1% packet loss in normal operation, and random bursts of 1% packet loss
[Figure: emulated infrastructure with attacker, clients, defender with IPS alerts, gateway, and 31 networked components]
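The Poisson client-arrival model mentioned above can be sketched in a few lines: inter-arrival times of a Poisson process are exponentially distributed, so sampling exponential gaps generates the arrival timestamps. The rate and horizon below are assumed values for illustration, not parameters from the paper:

```python
import random

random.seed(42)
RATE = 20.0      # expected client arrivals per minute (assumed value)
HORIZON = 60.0   # length of the emulated window, in minutes

def poisson_arrival_times(rate, horizon):
    """Return the arrival timestamps of a Poisson process on [0, horizon)."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(rate)   # exponential inter-arrival gap
        if t >= horizon:
            return arrivals
        arrivals.append(t)

arrivals = poisson_arrival_times(RATE, HORIZON)
# by the law of large numbers, the count is close to RATE * HORIZON = 1200
```

In an emulation, each timestamp would trigger a client session against the emulated services.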
The System Identification Process
System Model
- We model the evolution of the system with a discrete-time dynamical system.
- We assume a Markovian system with stochastic dynamics and partial observability.
[Figure: feedback loop — a stochastic system (Markov) in state s_t is measured by a noisy sensor that produces observation o_t; an optimal filter computes the belief b_t, from which the controller selects action a_t]
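The optimal filter in the loop above maintains the belief b_t from the observations. A minimal sketch (our illustration, with assumed probabilities) is a Bayesian belief update for a two-state Markov system, where state 0 is normal operation and state 1 is an ongoing intrusion:

```python
# assumed, illustrative probabilities (not from the paper)
P_INTRUDE = 0.1                  # P(s_{t+1}=1 | s_t=0); intrusions persist
OBS = {0: [0.8, 0.2],            # P(o | s=0): o=0 "few alerts", o=1 "many"
       1: [0.3, 0.7]}            # P(o | s=1)

def belief_update(b, o):
    """Return P(s_{t+1}=1 | o_{1:t+1}) from the prior belief b = P(s_t=1)."""
    # prediction step: apply the Markov transition
    b_pred = b + (1 - b) * P_INTRUDE
    # correction step: Bayes' rule with the observation likelihoods
    num = OBS[1][o] * b_pred
    den = num + OBS[0][o] * (1 - b_pred)
    return num / den

b = 0.0                          # start certain there is no intrusion
for o in [0, 0, 1, 1, 1]:        # a run of observations (1 = many alerts)
    b = belief_update(b, o)      # belief rises as alert observations arrive
```

A run of high-alert observations drives the belief toward 1, which is the evaluative signal the controller acts on.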
System Identification
[Figure: fitted model vs. empirical conditional distributions f̂_O(o_t|0) and f̂_O(o_t|1) of the number of IPS alerts weighted by priority, o_t ∈ [0, 9000], for states s_t = 0 and s_t = 1]
- The distribution f_O of defender observations (system metrics) is unknown.
- We fit a Gaussian mixture distribution f̂_O as an estimate of f_O in the target system.
- For each state s, we obtain the conditional distribution f̂_O|s through expectation-maximization.
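The expectation-maximization step can be sketched in plain Python. This is a minimal two-component 1-D Gaussian mixture fit on synthetic "alert count" samples (the data and parameters are hypothetical, only illustrating the E-step/M-step loop):

```python
import math
import random

def _var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fit_gmm_em(samples, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture with EM.
    Returns (weights, means, variances)."""
    # crude initialization: split the sorted samples in half
    xs = sorted(samples)
    half = len(xs) // 2
    means = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    variances = [max(1e-6, _var(xs))] * 2
    weights = [0.5, 0.5]

    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    for _ in range(n_iter):
        # E-step: responsibilities r[k] = P(component k | sample x)
        resp = []
        for x in samples:
            p = [weights[k] * pdf(x, means[k], variances[k]) for k in (0, 1)]
            z = sum(p) or 1e-300
            resp.append([pk / z for pk in p])
        # M-step: re-estimate parameters from the responsibilities
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            weights[k] = nk / len(samples)
            means[k] = sum(r[k] * x for r, x in zip(resp, samples)) / nk
            variances[k] = max(1e-6, sum(
                r[k] * (x - means[k]) ** 2 for r, x in zip(resp, samples)) / nk)
    return weights, means, variances

# synthetic alert counts from two regimes (no intrusion / intrusion)
random.seed(0)
samples = [random.gauss(200, 50) for _ in range(500)] + \
          [random.gauss(4000, 400) for _ in range(500)]
w, mu, var = fit_gmm_em(samples)   # recovers the two regimes' statistics
```

In practice one would use a library implementation (e.g. scikit-learn's GaussianMixture); the sketch only makes the EM iteration behind f̂_O|s concrete.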
The Policy Learning Process
Learning Effective Defender Policies
- Optimization problem:
  - Each stopping time = one defensive action
  - Maximize the reward of the stopping times τ_L, τ_{L−1}, ..., τ_1:

  π*_l ∈ arg max_{π_l} E_{π_l}[ Σ_{t=1}^{τ_L−1} γ^{t−1} R^C_{s_t,s_{t+1},L} + γ^{τ_L−1} R^S_{s_{τ_L},s_{τ_L+1},L} + ... + Σ_{t=τ_2+1}^{τ_1−1} γ^{t−1} R^C_{s_t,s_{t+1},1} + γ^{τ_1−1} R^S_{s_{τ_1},s_{τ_1+1},1} ]

- Optimization methods: reinforcement learning, dynamic programming, computational game theory, etc.
[Figure: state machine of the stopping formulation — from state 0, an intrusion starts (Q_t = 1) and moves the system to state 1 while stops remain (t ≥ 1, l_t > 0); the final stop (l_t = 0) either prevents the intrusion or ends the episode in the terminal state ∅]
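For intuition on why dynamic programming applies here, the single-stop case can be solved by value iteration over a discretized belief space. This is our simplified illustration with assumed rewards, not the paper's model; it shows the characteristic outcome of such stopping problems, namely a threshold policy on the belief:

```python
# assumed, illustrative parameters
GAMMA = 0.95        # discount factor
P_INTRUDE = 0.05    # per-step probability that an intrusion starts
R_CONT = [0.5, -1.0]   # continue reward in state 0 / state 1
R_STOP = [-0.5, 2.0]   # stop reward in state 0 (false alarm) / state 1

N = 1000  # belief-grid resolution
beliefs = [i / N for i in range(N + 1)]
V = [0.0] * (N + 1)

for _ in range(500):  # value iteration until approximately converged
    new_V = []
    for i, b in enumerate(beliefs):
        # stopping is terminal: expected stop reward under belief b
        stop = (1 - b) * R_STOP[0] + b * R_STOP[1]
        # continuing: expected reward + discounted value at the next belief
        # (in this sketch the belief only drifts upward as intrusions start)
        b_next = b + (1 - b) * P_INTRUDE
        cont = (1 - b) * R_CONT[0] + b * R_CONT[1] + GAMMA * V[round(b_next * N)]
        new_V.append(max(stop, cont))
    V = new_V

# the optimal policy is a threshold on the belief: stop once b >= b*
threshold = next(b for i, b in enumerate(beliefs)
                 if (1 - b) * R_STOP[0] + b * R_STOP[1] >= V[i] - 1e-9)
```

The stopping value is linear in b while the continuation value is convex, so the stop region is an upper interval of beliefs, which is the threshold structure the slide's state machine encodes.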
Putting It All Together: Learning in Dynamic Environments
1. Changes in the target system are monitored.
2. When changes are detected, the emulation is updated.
3. Attack and defense scenarios are run in the emulation to collect data.
4. The system model and the defender policy are updated periodically with the new data.
[Figure: closed learning loop — the target system feeds configuration I and change events to the digital twin, which runs attack scenarios; policy evaluation & data collection yield traces h1, h2, ... for system identification, producing a model M; policy learning yields a policy π that is evaluated in the digital twin and deployed as an automated security policy]
Use Case: Intrusion Prevention
- A Defender owns an infrastructure
  - Consists of connected components
  - Components run network services
  - Defends the infrastructure by monitoring and active defense
  - Has partial observability
- An Attacker seeks to intrude on the infrastructure
  - Has a partial view of the infrastructure
  - Wants to compromise specific components
  - Attacks by reconnaissance, exploitation and pivoting
[Figure: target infrastructure with attacker, clients, defender with IPS alerts, gateway, and 31 networked components]
Results: Learning in a Dynamic IT Environment
[Figure: three panels over 50 hours of execution time — the number of clients (200-600); the estimated observation statistics E[Ẑ_{t,O|1}] and E[Ẑ_{t,O|0}] for our framework and for the baseline [10]; and the average reward of our framework vs. the baseline [10] and an upper bound]
Results from running our framework for 50 hours in the digital twin/emulation.
Conclusions
- We present a framework for learning and updating security policies in dynamic IT environments.
- We apply the method to an intrusion prevention use case.
- We show numerical results in a realistic emulation environment.
- We design a solution framework guided by the theory of optimal stopping.
[Figure: the framework — selective replication of the target system into an emulation; model creation & system identification; simulation & learning produce a strategy π that is mapped back and implemented in the target system]