This presentation describes an Anomaly-based Intrusion Detection System for securing Linux Containers. The presentation was given on Monday, September 21, 2015, as part of the ESORICS 2015 Workshop on Security and Trust Management (STM).
System calls are the primary mechanism of user-to-kernel interaction. Today the Linux system call interface has achieved a primacy and ubiquity that make it an ideal layer at which to understand single-system and distributed-system pathologies. Sysdig advances the art of system call observability by drawing on the systems that came before it. Informed by his work with /proc, process tools and DTrace, Adam will walk through a history of system calls and system call observability, from simple systems like truss and strace, to modern ones like DTrace and SystemTap, to ancient ones from the early days of Unix.
This document provides an overview of fuzzing techniques and the Sulley fuzzing framework. It begins with definitions of fuzzing and different fuzzing techniques like static testing, randomized fuzzing, and mutation-based fuzzing. The rest of the document demonstrates how to set up and use the Sulley framework to fuzz protocols like HTTP and file formats. It includes explanations of the Sulley API and how to generate test cases, monitor for crashes, and analyze results. Examples are provided of fuzzing HTTP servers and file formats.
This document provides an introduction to osquery, an open source tool for querying information about operating systems. It outlines lessons that will cover osquery basics, deploying osquery, and running basic commands. Examples of queries are provided to find running processes, network connections, kernel modules, and more. Use cases are described for how osquery can be used to detect new listening processes, outbound network activity, deleted process binaries, and loaded kernel modules.
This document provides an introduction and guide to performing a review of a Linux host system. It outlines the steps and areas to examine, including the operating system, kernel, time management, packages, logging, network configuration, filesystem, users, services, and more. Tips are provided throughout for taking thorough notes during the review and identifying potential issues on the system. The goal is to understand the system's security posture and configuration by analyzing each component in detail.
The document provides an overview of a presentation on kernel auditing research, including:
- Three parts to the presentation covering kernel auditing research, exploitable bugs found, and kernel exploitation.
- Audits were conducted on several open source kernels, finding over 100 vulnerabilities across them.
- A sample of exploitable bugs is then presented from the audited kernels to provide evidence that kernels are not bug-free and vulnerabilities can be relatively simple to find and exploit.
A brief talk on systems performance for the July 2013 meetup "A Midsummer Night's System", video: http://www.youtube.com/watch?v=P3SGzykDE4Q. This summarizes how systems performance has changed from the 1990s to today. This was the reason for writing a new book on systems performance: to provide an up-to-date reference covering new tools, technologies, and methodologies.
This document discusses compatibility issues that can arise when upgrading to a newer kernel version in an embedded Linux system with a product lifecycle of over 10 years. It provides examples of different types of tests that should be performed to ensure compatibility, including API level tests using LTP, performance tests of CPU, network and I/O throughput and latency, and quality tests of file systems for data reliability over the long term. The examples show how compatibility issues can be identified at the API level and how performance and quality can vary depending on the kernel and file system version and configuration.
This document discusses Linux monitoring tools. It defines monitoring as observing an application to understand what it is doing, debug issues, and enhance performance. Linux has many built-in monitoring tools ranging from basic GUI tools to powerful command line tools. Tools discussed include GNOME System Monitor, KSysGuard, XFCE Task Manager, and the console-based tool top, which shows CPU usage. Top has many advanced options like killing processes, customizing fields, and viewing by threads. Scripting and automating these tools is recommended for effective monitoring.
The document outlines the setup and plan for the second iteration of a test environment called "The Hunt for Blue Leader". It details the virtual machines and IP addresses that will be used, including reusing some existing VMs and creating new ones. It also performs a risk assessment that identifies potential threats like unpatched systems, no security policies, and confidential data being at risk from hackers or industrial espionage.
Monit is an open source tool that monitors systems and applications and automatically restarts services if they fail or exceed configurable resource limits. It can monitor files, directories, processes, hosts, and custom scripts/programs. Monit is configured via a global configuration file and additional files for specific checks. It can monitor system resources, file integrity, network interfaces, remote hosts, and check for service dependencies. Monit also includes a web interface for monitoring and management.
Perf is a Linux profiling tool that uses performance monitoring hardware to count various events like CPU cycles, instructions, and cache misses. It can count events for a single thread, an entire process, specific CPUs, or system-wide. The perf stat subcommand counts events during process execution, while perf record collects profiling data in a file for later analysis with perf report.
Monit is a utility that monitors processes, files, directories, and devices on a Unix system. It conducts automatic maintenance and repair. Monit can start processes that are not running, restart processes that are not responding, and stop processes that are using too many resources. It monitors services and items for changes and errors, and can send alerts about issues. Monit is configured via a control file and can monitor both local and remote systems. It provides a web interface for accessing status information.
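The kinds of checks described above can be sketched in a hypothetical monitrc fragment (the service name, paths, and limits here are illustrative examples, not taken from the document):

```
# Poll every 30 seconds and expose the built-in web interface.
set daemon 30
set httpd port 2812 and
    allow admin:monit

# Supervise a process: restart it if it dies or stops responding,
# alert or stop it if it exceeds resource limits.
check process nginx with pidfile /var/run/nginx.pid
    start program = "/usr/sbin/service nginx start"
    stop program  = "/usr/sbin/service nginx stop"
    if failed port 80 protocol http then restart
    if cpu > 80% for 5 cycles then alert
    if totalmem > 500 MB for 5 cycles then stop
```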
This document summarizes the Linux audit system and proposes improvements. It begins with an overview of auditd and how audit messages are generated and processed in the kernel. Issues with auditd's performance, output format, and filtering are discussed. An alternative approach is proposed that uses libmnl for netlink handling, groups related audit messages into JSON objects, applies Lua-based filtering, and supports multiple output types like ZeroMQ and syslog. Benchmark results show this rewrite reduces CPU usage compared to auditd. The document advocates for continued abstraction and integration of additional data sources while avoiding feature creep.
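The grouping step proposed above can be sketched in Python. The record format, field names, and helper below are simplified stand-ins for real audit netlink messages, not the talk's actual code; the point is only that lines sharing an event serial collapse into one JSON object:

```python
import json
from collections import defaultdict

# Hypothetical raw records: the kernel emits several lines per audit
# event, each tagged msg=audit(timestamp:serial); lines with the same
# serial belong to one logical event.
raw = [
    ('1300', 'audit(1609459200.123:42)', 'syscall=2 success=yes exit=3'),
    ('1307', 'audit(1609459200.123:42)', 'cwd="/root"'),
    ('1302', 'audit(1609459200.123:42)', 'name="/etc/shadow"'),
    ('1300', 'audit(1609459201.456:43)', 'syscall=59 success=yes exit=0'),
]

def group_events(records):
    """Group per-line audit records by their event serial and emit
    one JSON object per logical event."""
    events = defaultdict(list)
    for rec_type, msg_id, payload in records:
        serial = msg_id.rsplit(':', 1)[1].rstrip(')')
        events[serial].append({'type': rec_type, 'data': payload})
    return [json.dumps({'serial': s, 'records': recs})
            for s, recs in sorted(events.items())]

for line in group_events(raw):
    print(line)
```

A real implementation would read records from a netlink socket rather than a list, and hand each finished JSON object to the filter and output stages.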
2009-08-24: The Linux Audit Subsystem Deep Dive, by Shawn Wells
Presented at SHARE Denver 2009. Why is Linux auditing needed? What can it do for me? How does it work? What events get audited? How do I make sense of all the data?
Fundamentals of Complete Crash and Hang Memory Dump Analysis (Revision 2), by Dmitry Vostokov
This document provides an overview and agenda for a training on complete crash and hang memory dump analysis. The training will cover basics of dump generation and memory spaces, common analysis challenges, and commands. It will then discuss pattern-driven analysis, providing examples of common patterns like blocked threads, wait chains, resource consumption, corruption signs, and special processes. The agenda indicates there will be an exercise and discussions of additional techniques and tools.
DevOps Fest 2020. Philipp Krenn. Scale Your Auditing Events
The document discusses scaling auditing events using tools like Auditbeat and Elastic SIEM. It provides an overview of Auditd for collecting Linux audit logs and explains how Auditbeat can correlate and parse those logs. It also demonstrates how to run Auditbeat and its Auditd module on Kubernetes, and how to send the data to Elastic SIEM, which uses the Elastic Common Schema for indexing. The document concludes with links to example code and similar open source solutions.
This document provides a summary of little known native debugging tricks in Visual Studio. It discusses using the expression evaluator for evaluating expressions in different scopes. It also covers using Edit and Continue, setting breakpoints on specific errors, breaking on all methods of a class, naming native threads, and searching through memory. The document provides code examples and links to blog posts with more details on these techniques.
MeetBSDCA 2014 Performance Analysis for BSD, by Brendan Gregg. A tour of five relevant topics: observability tools, methodologies, benchmarking, profiling, and tracing. Tools summarized include pmcstat and DTrace.
The Linux auditing framework allows system administrators to monitor activities on a Linux system and analyze audit logs. It is included in major Linux distributions and supports various compliance standards. Administrators can add audit rules to monitor specific files, directories, or system calls. The audit logs record events and can be searched using commands like ausearch to investigate activity like users reading the password file. Audit rules are stored in /etc/audit/audit.rules and logs are stored in /var/log/audit/.
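As a hedged illustration of such rules (the watched paths, key names, and uid are examples in the spirit of the document, not rules it prescribes), a fragment of /etc/audit/audit.rules might look like:

```
# Watch the password and shadow files for reads, writes, and
# attribute changes, tagging events with a searchable key.
-w /etc/passwd -p rwa -k passwd-watch
-w /etc/shadow -p rwa -k shadow-watch

# Log every execve system call made by login uid 1000.
-a always,exit -F arch=b64 -S execve -F auid=1000 -k user-exec

# Matching events in /var/log/audit/ can then be found with, e.g.:
#   ausearch -k shadow-watch --start today
```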
Shellshock is a 25-year-old vulnerability in the widely used Bash shell. It allows remote execution of commands via specially crafted environment variables. The bug has a severity score of 10 out of 10 due to its low complexity to exploit, lack of authentication needed, and ability to give full control of vulnerable systems. It affects many systems running Linux, embedded devices, and Internet of Things devices. While patches were quickly released, many older and forgotten systems will remain unpatched and vulnerable indefinitely.
Shellshock is a vulnerability in Bash that allows attackers to execute arbitrary commands on vulnerable systems. It was discovered in September 2014 and affected many Linux and Unix-based systems. The bug allows environment variables passed to Bash to execute code, potentially allowing remote code execution. This could enable large-scale DDoS attacks and access to sensitive systems.
Video: https://www.youtube.com/watch?v=uibLwoVKjec . Talk by Brendan Gregg for Sysdig CCWFS 2016. Abstract:
"You have a system with an advanced programmatic tracer: do you know what to do with it? Brendan has used numerous tracers in production environments, and has published hundreds of tracing-based tools. In this talk he will share tips and know-how for creating CLI tracing tools and GUI visualizations, to solve real problems effectively. Programmatic tracing is an amazing superpower, and this talk will show you how to wield it!"
Performance Wins with eBPF: Getting Started (2021), by Brendan Gregg
This document provides an overview of using eBPF (extended Berkeley Packet Filter) to quickly get performance wins as a sysadmin. It recommends installing BCC and bpftrace tools to easily find issues like periodic processes, misconfigurations, unexpected TCP sessions, or slow file system I/O. A case study examines using biosnoop to identify which processes were causing disk latency issues. The document suggests thinking like a sysadmin first by running tools, then like a programmer if a problem requires new tools. It also outlines recommended frontends depending on use cases and provides references to learn more about BPF.
Shellshock is a security bug in Bash (Bourne Again SHell), a command-line interpreter commonly known as the shell. Linux expert Stéphane Chazelas revealed this bug on 24th September 2014, and it is more severe than the Heartbleed bug.
Network Anomaly Detection Using Autonomous System Flow Aggregates, by Thienne Johnson
This document proposes a method for network anomaly detection using autonomous system (AS) flow aggregates. The method works by aggregating IP flows into AS flows to reduce data size while maintaining essential information. Metrics like packet count, traffic volume, IP flow count, and AS flow count are collected and their statistical distributions over time are used as a baseline. During online monitoring, the Jeffrey distance between current and historical distributions is computed to detect anomalies. Multiple metrics can be combined into a composite metric to better characterize abnormal behaviors. The method was tested on a dataset simulating DDoS attacks and shown to successfully detect anomalies with reduced overhead compared to analyzing raw IP flows.
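The core comparison step can be sketched in Python, treating the Jeffrey distance as the symmetrized Kullback-Leibler divergence between a historical and a current metric distribution. The distributions below are toy values, not data from the paper:

```python
import math

def jeffrey_distance(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler divergence between two discrete
    distributions p and q (sequences of probabilities summing to 1).
    eps guards against log(0) for empty bins."""
    return sum((pi - qi) * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

baseline = [0.70, 0.20, 0.10]   # historical distribution of a metric
normal   = [0.68, 0.22, 0.10]   # close to baseline -> small distance
attack   = [0.10, 0.10, 0.80]   # skewed during a flood -> large distance

print(jeffrey_distance(baseline, normal))
print(jeffrey_distance(baseline, attack))
```

Thresholding this distance (or a composite over several metrics, as the paper suggests) is what turns the comparison into an anomaly detector.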
This document discusses intrusion detection techniques. It describes misuse detection, which detects known attacks based on predefined rules, and anomaly detection, which detects deviations from normal behavior. Common misuse detection methods include rule-based, state transition analysis, and expert systems. Anomaly detection methods include statistical methods, machine learning, and data mining. The document also proposes ideas to improve intrusion detection, such as using association rule mining to detect patterns in audit data and discovering new patterns by analyzing existing rulesets.
Intrusion Detection Techniques for Mobile Wireless Networks
This document proposes techniques for intrusion detection in mobile wireless networks. It discusses vulnerabilities in these networks and existing IDS approaches. It then presents a distributed and cooperative architecture where each node has an IDS agent to monitor for local anomalies. An information-theoretic approach is used for anomaly detection modeling traffic patterns, routing activities, and topological changes. Experiments show that on-demand routing protocols like DSR and AODV work better than table-driven protocols for detection due to path and pattern redundancy. The proposed techniques aim to provide effective intrusion detection in mobile ad hoc networks.
Intrusion detection in wireless sensor networks, by Vinayak Raja
• An intrusion detection system is a software application that monitors network or system activities for malicious activity or policy violations and produces reports to a management station.
• OBJECTIVE: An intrusion detection system (IDS) is software designed to detect unwanted attempts at accessing, manipulating, and/or disabling computers, mainly through a network such as the Internet.
• PROBLEM SOLVED: Several types of malicious behavior can compromise the security and trust of a computer system, including network attacks against vulnerable services, data-driven attacks on applications, host-based attacks such as privilege escalation, unauthorized logins and access to sensitive files, and viruses. An IDS addresses these problems.
This document discusses intrusion detection systems (IDS), beginning with historical examples of cyber attacks. It describes the role of firewalls in network security and how IDS serve as a complementary technique to firewalls by monitoring network traffic and detecting intrusions. The document outlines different types of IDS, including host-based, network-based, and hybrid systems. It also covers common intrusion detection techniques and the limitations of IDS in providing comprehensive security.
An Introduction into Anomaly Detection Using CUSUM, by Dominik Dahlem
A gentle introduction into anomaly detection using the cumulative sum (CUSUM) algorithm. Extensive visuals are used to exemplify the inner workings of the algorithm. CUSUM relies on stationarity assumptions of the underlying process.
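A minimal one-sided CUSUM detector can be sketched in Python. The drift and threshold parameters below are illustrative tuning values, not ones from the talk:

```python
def cusum(samples, target_mean, drift=0.5, threshold=4.0):
    """One-sided upper CUSUM: accumulate deviations above
    target_mean + drift and signal when the cumulative sum crosses
    threshold. Returns the index of the first alarm, or None if the
    series stays in control."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return None

# Stationary noise around 10, then a sustained upward shift to ~13.
data = [10.1, 9.8, 10.2, 9.9, 10.0, 13.1, 12.9, 13.2, 13.0, 12.8]
print(cusum(data, target_mean=10.0))  # alarms shortly after the shift at index 5
```

The drift term sets how large a deviation counts as evidence of change, and the threshold trades detection delay against false alarms; as the slides note, the method presumes the in-control process is stationary.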
This document defines spyware and discusses methods used for passive spyware tracking, specifically web beacons and cookies. It demonstrates how a web beacon can be used to deposit a cookie and track browsing activity without consent. The document concludes that browser settings can prevent unauthorized cookie deposits and users should be cautious about what software they download and install.
Anomaly detection in deep learning (Updated), by Adam Gibson
This document discusses anomaly detection in deep learning. It begins by defining what an anomaly is, such as abnormal patterns in data for fraud detection. It then discusses techniques for anomaly detection using unsupervised autoencoders and supervised recurrent neural networks. Finally, it provides an example reference architecture for an anomaly detection pipeline that ingests data from external sources using NiFi, sends it to Kafka, makes predictions using deep learning models, indexes predictions in Elasticsearch using Logstash, and renders the data in Kibana.
SPECT involves injecting a radiopharmaceutical that emits gamma rays. Detectors rotate around the body to acquire data from multiple angles and produce 3D images. It allows visualization of organ function. A gamma camera detects gamma rays and includes a collimator, scintillation detector, photomultiplier tubes, and computer. SPECT is used for heart, brain, and tumor imaging. It has lower resolution than PET but is commonly used to detect coronary artery disease.
Spyware is software that is installed onto a computer without the user's knowledge or consent in order to gather information about the user. It monitors user activity and transmits this data to third parties who may use it for advertising or sell it. Having spyware running in the background can slow down a computer and cause crashes due to it using system resources. Common ways of getting spyware include downloading file sharing programs or clicking on deceptive pop-up windows.
Anomaly detection in deep learning can be used for fraud detection by finding abnormal patterns in data like bad credit card transactions or fake locations. Deep learning is well-suited for anomaly detection because it can learn complex patterns from large amounts of data, represent its own features that are robust to noise, and learn cross-domain patterns. Techniques for anomaly detection include unsupervised methods using autoencoder reconstruction error and supervised methods using RNNs to learn from labeled time series data and predict anomalies. Production systems for anomaly detection can use streaming data from sources like Kafka with neural networks consuming the streaming updates.
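The unsupervised reconstruction-error idea can be illustrated with a toy sketch, using a linear projection fitted on normal data as a stand-in for a trained autoencoder (a real pipeline would use a neural network; all data here is synthetic):

```python
import numpy as np

# Synthetic "normal" data concentrated along one direction.
rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 2)) @ np.array([[2.0, 0.5],
                                                     [0.5, 0.3]])

# "Train": fit the principal direction on normal data only. This
# projection plays the role of the autoencoder's encode/decode.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean)
pc = vt[0]

def reconstruction_error(x):
    """Project x onto the learned direction and measure what is lost."""
    centered = x - mean
    recon = np.outer(centered @ pc, pc)
    return np.linalg.norm(centered - recon, axis=1)

# Threshold on the error distribution of normal data; points that
# reconstruct poorly are flagged as anomalies.
threshold = np.percentile(reconstruction_error(normal), 99)
anomaly = np.array([[10.0, -10.0]])   # far off the normal subspace
print(reconstruction_error(anomaly)[0] > threshold)
```

The same scoring logic applies to a real autoencoder: train on normal data, then flag inputs whose reconstruction error exceeds a percentile of the training-error distribution.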
Tomography involves measuring gamma ray attenuation along lines of sight at different angles in order to reconstruct the internal structure of an object. Filtered back projection enhances edges in the projection data through filtering and backprojects the filtered data to form an image. Iterative reconstruction accounts for physical factors like attenuation, scatter, and noise by comparing projections of a patient model to actual data and updating the model. Quality control ensures proper system operation through tests of the center of rotation, uniformity, and ability to detect small objects.
Thrift is an interface definition language and binary communication protocol used at Facebook as a remote procedure call framework. It combines a software stack with a code generation engine to build services that work efficiently across multiple languages like C#, C++, Java, PHP and Python. Thrift allows developers to define data structures and service interfaces in an IDL file, then generates code for clients and servers. It provides features like common data transport, versioning, and supports efficient protocols to handle large service call volumes across Facebook applications.
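The workflow described can be sketched with a minimal IDL file (the struct and service names below are invented for illustration); running the Thrift compiler over it generates client and server stubs for each target language:

```thrift
// example.thrift: a hypothetical data structure and service interface.
struct User {
  1: required i64 id,
  2: optional string name,
}

service UserService {
  User getUser(1: i64 id),
  void ping(),
}
```

Field numbers (1:, 2:) are what make versioning work: new optional fields can be added without breaking existing clients or servers.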
The students in P3 went on their first trip to the forest. They took a cowbell that was given to them by a friend and enjoyed making noise with it in the forest. They found pinecones, acorns, leaves and sticks during their exploration in the forest. After playing in the forest, they had a snack and then went to see horses. When they returned to school, they looked at everything they had collected from the forest. The cowbell was a big hit with all the students.
A quick slideshow to reinforce some of the basics of giving good customer service in a call center. I made a few modifications to it, so I hope this one is better liked. :)
A guided fuzzing approach for security testing of network protocol software (binish_hyunseok)
Although this was a homework presentation reviewing the title paper for a Ph.D. course, it covers the concepts of fuzzing, taint analysis, and symbolic execution for beginners.
Would you Rather Have Telemetry into 2 Attacks or 20? An Insight Into Highly ... (MITRE ATT&CK)
From ATT&CKcon 3.0
By Jonny Johnson, Red Canary and Olaf Hartong, FalconForce
As defenders, we often find ourselves wanting "more" data. But why? Will this new data provide a lot of value or is it for a very niche circumstance? How many attacks does it apply to? Are we leveraging previous data sources to their full capability? Within this talk, Olaf and Jonny will walk through different data sources they leverage more than most when analyzing data within environments, why they do, and what these data sources do and can provide in terms of value to a defender.
Alexei Vladishev - Zabbix - Monitoring Solution for Everyone (Zabbix)
Zabbix is an open source monitoring solution that can monitor all levels of infrastructure across various platforms. It uses triggers to define problems and collects data through active and passive agents to analyze metrics and detect issues. When problems occur, Zabbix can automatically react through escalation procedures that include notifications, tickets, and restarts. It is highly scalable and offers features like anomaly detection, forecasting, and event correlation for complex environments.
The document provides an overview of Consul administration at scale at Criteo. Key points:
- Criteo uses Consul for service discovery across 35k servers with 3200 services and 260k instances
- A dedicated team of 5 people manages Consul infrastructure and tools 24/7
- Automation is key to make Consul predictable at scale through standardized service registration, ACLs, and automation tools
- Metrics, logs, and monitoring are critical to detect issues with Consul and the services it manages
Sourcefire Vulnerability Research Team Labs (losalamos)
Today's client-side attack threats represent a boon for attackers, offering many ways to obfuscate, evade, and hide their attack methods. Adobe PDF, Flash, Microsoft Office documents, and JavaScript require a very deep understanding of the file format, how it is interpreted in the browser, and the byte-code paths that some of these formats can generate. Effectively handling some of these types of attacks requires processing these files multiple times to deal with compression, obfuscation, program execution, etc. This requires a new type of system for this kind of inspection. The NRT system allows for this deep file-format understanding and inspection.
The document describes XenTT, a tool for deterministic replay in the Xen virtualization platform. XenTT records the execution history of a guest VM, including nondeterministic events like interrupts and I/O. It then replays the execution deterministically to allow for repeatable systems analysis. This is done by making the CPU and execution environment deterministic during replay and precisely recording the timing and location of nondeterministic events during the original run.
Getting Deep on Orchestration: APIs, Actors, and Abstractions in a Distribute... (Docker, Inc.)
Orchestration platforms let us work with higher level ideas like services and jobs; but there is more to a platform than scheduling and service discovery. A platform is a collection of actors and APIs that work together and provide those higher level abstractions on a distributed system. In this session we'll go deep on the architecture of open source orchestration platforms, consider scaling pains, reveal extension points, and reflect on an orchestration platform at Amazon. We'll finish with a demo of a homemade abstraction deployed on a live, multi-cloud Swarm cluster.
This document provides an overview and agenda for a presentation on OSSEC. It includes sections on log management, OSSEC features, architecture, log analysis with OSSEC, integrity checking, and management commands. The agenda covers topics like log sources, standards, decoding, analysis, and advanced rule building. It also discusses configuring OSSEC for integrity monitoring and includes example OSSEC rules and commands.
[若渴計畫] Challenges and Solutions of Window Remote Shellcode (Aj MaChInE)
This document discusses challenges and solutions related to window remote shellcode. It outlines challenges posed by antivirus software, EMET, firewalls, and IDS/IPS systems. It then describes various techniques for bypassing these protections, such as encryption, obfuscation, non-standard programming languages, and the use of tools like Meterpreter and Veil Framework payloads. Specific bypass techniques covered include DLL injection, process hollowing, reflective loading, and the use of techniques like one-way shells and HTTP stagers.
This document discusses various methods for executing operating system commands from within SAS code, including the X command, %sysexec, Call system, Systask command, and Filename pipe. It provides examples of using each method and discusses advantages and disadvantages. Alternatives like shell scripts are also addressed for situations where XCMD is not enabled.
This document summarizes a PowerShell presentation given at Bsides Greenville 2019. It provides wireless network credentials, links to PowerShell cheat sheets and demos, and lists the speaker's background and experience with PowerShell. The presentation agenda covers topics like moving around the file system, hashing, data storage, custom event logs, WinRM logging, port scanning, and persistence through profiles.
Chronix is a domain specific time series database designed for anomaly detection in operational data. It is optimized for the needs of anomaly detection by supporting domain specific data types, analysis algorithms, data models, and query languages. It aims to address limitations of general purpose time series databases by exploiting characteristics of operational data through features like optional pre-computation of extras, timestamp compression, domain specific records and compression techniques, and multi-dimensional storage. An evaluation using data from five industry projects found that Chronix has significantly smaller memory and storage footprints and faster data retrieval and analysis times compared to other time series databases.
Tanel Poder - Troubleshooting Complex Oracle Performance Issues - Part 1 (Tanel Poder)
The document describes troubleshooting a complex performance issue in an Oracle database. Key details:
- The problem was sporadic extreme slowness of the Oracle database and server lasting 1-20 minutes.
- Initial AWR reports and OS metrics showed a spike at 18:10 with CPU usage at 66.89%, confirming a problem occurred then.
- Further investigation using additional metrics was needed to fully understand the root cause, as initial diagnostics did not provide enough context about this brief problem period.
Performance analysis and troubleshooting using DTrace (Graeme Jenkinson)
The document provides an overview of performance analysis tools like tracing and profiling. It discusses different tracing approaches like print statements, logging frameworks, and debuggers. It introduces DTrace as a dynamic instrumentation tool that allows tracing production systems with zero probe effect. A case study demonstrates using DTrace to analyze NFS latency issues. The document also discusses tracing tools for Linux like ftrace, perf, SystemTap, and eBPF.
Servers and Processes: Behavior and Analysis (dreamwidth)
This document provides an overview of servers, processes, and system administration. It discusses servers as machines made up of components like RAM, CPU, and I/O. It then covers these components and their capacities, as well as processes and how they interact with servers through system calls. Hands-on examples are provided to demonstrate monitoring servers and investigating processes using tools like top, lsof, strace, and vmstat.
The document provides an introduction to operating systems and real-time operating systems (RTOS). It defines an operating system as software that manages computer resources and provides common services for programs. An RTOS is designed for systems where response time is critical. The document discusses the components, features and types of both operating systems and RTOS, including examples like VxWorks and QNX.
The document provides guidance on learning about automotive embedded systems through a 10 part series. It recommends first studying parts on real-time operating system basics, OSEK/VDX, AUTOSAR basics, and automotive protocols. Then users should validate their understanding and solve practice questions. The document directs readers to online materials and emphasizes the importance of depth of learning to become professional in the field of embedded systems.
O'Reilly Velocity New York 2016 presentation on modern Linux tracing tools and technology. Highlights the available tracing data sources on Linux (ftrace, perf_events, BPF) and demonstrates some tools that can be used to obtain traces, including DebugFS, the perf front-end, and most importantly, the BCC/BPF tool collection.
MiniOS: an instructional platform for teaching operating systems labs (Rafael Roman Otero)
1. The document describes a proposed instructional operating system called MiniOS that is designed to be small and simple enough for students to implement in a single semester.
2. It aims to be the smallest complete instructional OS by building only the essential components and targeting a microcontroller platform to reduce complexity compared to other instructional OSes.
3. The document outlines MiniOS's design, which includes a guide to its construction covering technical details of the compiler, hardware, OS implementations, and programming patterns used.
Similar to Intrusion Detection System for Applications using Linux Containers (20)
6th International Conference on Machine Learning & Applications (CMLA 2024) (ClaraZara1)
The 6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of Machine Learning.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte... (University of Maribor)
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
International Conference on NLP, Artificial Intelligence, Machine Learning an... (gerogepatton)
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS (IJNSA Journal)
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threats and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system.
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
14. Test Configuration
Test Parameters
• Epoch-size range: 1000, 1500, …, 4000 (total system calls per epoch)
• Detection-threshold range: 10, 20, …, 100 (mismatches per epoch)
System Input
• A trace of 3,804,000 total system calls was used
• Only system calls were used for training (no arguments)
• 875,000 system calls used for training
• 40 distinct system calls found
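The epoch parameters above amount to a simple segmentation of the trace into fixed-size chunks of system calls. A minimal illustrative sketch (the function name and the toy trace are assumptions, not taken from the slides):

```python
def epochs(trace, epoch_size):
    """Split a system-call trace into consecutive fixed-size epochs."""
    for start in range(0, len(trace), epoch_size):
        yield trace[start:start + epoch_size]

# Toy trace of 10 calls split into epochs of 4 (the real test used
# epoch sizes of 1000-4000 over 3,804,000 calls)
trace = ["read", "write", "open", "close"] * 2 + ["mmap", "brk"]
chunks = list(epochs(trace, 4))
# chunks[0] == ["read", "write", "open", "close"]; the last epoch may be short
```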
15. Individual Attack Types Tested
Reconnaissance (Brute-force) attack
• Retrieve all info about the DBMS, e.g. users, roles, schemas, passwords, etc.
• Generated ~ 42,000 mismatches
DoS Attack
• Using wild cards to slow down database
• Generated 37 mismatches
OS takeover attempt
• Attempt to run ‘cat /etc/passwd’ shell command (failed)
• Generated 279 mismatches
File-system access
• Copy /etc/passwd to local machine
• Generated 182 mismatches
19. Conclusion
High detection rate is easily achievable at low detection threshold
• 100% at detection threshold of 10 mismatches per epoch
High detection speed
• Minimum of 10 system calls (for 100% detection rate)
• Maximum of 1000 system calls (for epoch size of 1000)
Non-zero FPR measured
• Nature of the running application (not repetitive)
• State of the database changes from idle to active, and the same workload may not generate the exact same BoSCs
• Better performance expected for an application that is repetitive by nature (e.g. Hadoop YARN)
• Memory-based learning technique
• Looks for the exact same BoSCs
• Modifying the technique to tolerate minor changes should yield better performance
Strong anomaly signal from anomalous data
• Malicious dataset: average 695 mismatches/epoch
• Normal dataset: average 33 mismatches/epoch
Relatively small overhead
• 5MB for storing normal-behavior database
Editor's Notes
A container typically encapsulates a single application plus only its libraries and binaries
Containers running on the same host share the same kernel as the host
Namespaces and control groups are used to isolate containers and manage resources
Containers communicate with the host kernel (and the wider world) through system calls
Sample Syscall trace
The Linux built-in tool strace is used to trace system calls between the container and the host kernel. The system call trace is written to a behavior log file. In addition, strace is used to generate a list of system calls that frequently appear during the normal execution of the current application.
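Assuming the behavior log holds standard strace output (one call per line, e.g. `openat(AT_FDCWD, "/etc/passwd", O_RDONLY) = 3`), the parsing step can be sketched as below; the helper name, regex, and sample lines are illustrative, not from the slides, and real logs also contain resumed/unfinished lines that would need extra handling:

```python
import re
from collections import Counter

# Extract the syscall name from one line of strace output.
SYSCALL_RE = re.compile(r"^(\w+)\(")

def syscall_name(line):
    m = SYSCALL_RE.match(line)
    return m.group(1) if m else None

log = [
    'openat(AT_FDCWD, "/etc/passwd", O_RDONLY) = 3',
    'read(3, "root:x:0:0", 4096) = 10',
    'close(3) = 0',
    'read(4, "", 4096) = 0',
]
# Frequency of each syscall in the sample log: read appears twice
freq = Counter(syscall_name(line) for line in log)
```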
The system call parser reads the behavior file, as it is updated by strace, epoch by epoch.
During each epoch, the system calls are read one at a time, and each new system call is added to the sliding window.
In addition, each system call is passed to the syscall index map to look up its index.
The current frequency of the new system call is calculated, and the retrieved index is used to update the corresponding entry in the new BoSC.
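A minimal sketch of the sliding-window and BoSC construction just described, assuming a window of the 6 most recent calls and a tiny index map (both are illustrative choices; the presentation's actual window size and 40-syscall index map differ):

```python
from collections import deque

# Hypothetical syscall index map; unseen calls share one "other" slot
index_map = {"read": 0, "write": 1, "open": 2, "close": 3}

def bosc(window, index_map):
    """Build a bag of system calls: a frequency vector over the index map,
    with a final slot for syscalls not in the map."""
    vec = [0] * (len(index_map) + 1)
    for call in window:
        vec[index_map.get(call, len(index_map))] += 1
    return tuple(vec)  # tuples are hashable, so bags can key a database

window = deque(maxlen=6)                 # sliding window of the 6 newest calls
for call in ["read", "read", "open", "write", "close", "read", "mmap"]:
    window.append(call)                  # the oldest call drops out automatically

b = bosc(window, index_map)
# window holds read, open, write, close, read, mmap -> b == (2, 1, 1, 1, 1)
```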
The created BoSC is then passed to the classifier. In training mode, the classifier simply adds the new BoSC to the normal-behavior database if it does not already exist; if it does, the frequency of the bag is incremented. The database is considered stable once all expected normal behavior has been applied to the container.
Once the database is stable, the classifier switches to classification mode. In that mode, if a new BoSC is not present in the database, a mismatch is declared. If the number of mismatches within one epoch exceeds a certain threshold, an anomaly signal is raised. To improve the FPR for future epochs, we also apply a continuous-training mode, in which the difference from the last epoch's database is added to the normal-behavior database whenever the number of mismatches is below the threshold.
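The two classifier modes can be sketched as follows; the class and method names are hypothetical, and the toy BoSC tuples stand in for real frequency vectors:

```python
class BoSCClassifier:
    def __init__(self, threshold):
        self.db = {}              # normal-behavior database: BoSC -> frequency
        self.threshold = threshold

    def train(self, bosc):
        # Training mode: insert the bag or increment its frequency
        self.db[bosc] = self.db.get(bosc, 0) + 1

    def classify_epoch(self, boscs):
        # Classification mode: count bags unseen during training and
        # raise an anomaly signal if the count exceeds the threshold
        mismatches = sum(1 for b in boscs if b not in self.db)
        return mismatches, mismatches > self.threshold

clf = BoSCClassifier(threshold=10)
for b in [(1, 0), (0, 1), (1, 1)]:       # toy normal behavior
    clf.train(b)
mismatches, anomaly = clf.classify_epoch([(1, 0), (2, 2)] * 6)
# 6 of the 12 bags are unseen -> 6 mismatches, no anomaly at threshold 10
```

The continuous-training refinement described above would, after a clean epoch (mismatches below the threshold), call `train` on that epoch's new bags so they stop counting as mismatches later.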