Debugging is an essential part of Linux kernel development. In user-space we have the support of the kernel and many debugging tools; tracking down a kernel bug, by contrast, can be very difficult if you don't know the proper methodologies. This talk will cover techniques to understand how the kernel works and to hunt down and fix kernel bugs, in order to become a better kernel developer.
In this talk we discuss the use of the eBPF language to perform hardware-accelerated network packet manipulation and filtering. P4 programs can be compiled into eBPF scripts for offload in the Linux kernel using the Traffic Classifier (TC) subsystem. We demonstrate how, using eBPF as an intermediate language, it has been possible to extend the TC to either Just-In-Time (JIT) compile eBPF code to x86 assembly for software offload, or to IXP byte code for execution in a trusted hardware environment within the Netronome Agilio intelligent server adapter. We finish by encouraging the audience to experiment with their own eBPF applications within the TC hardware-accelerated system. The TC kernel patches are available on the Linux kernel networking mailing list as a Request for Comments (RFC) contribution.
Dinan Gunawardena, Director, Software Engineering, Netronome
Dinan Gunawardena is a Software Director running the driver team at Netronome. Previously, Dinan founded a software startup and was a Senior Research Engineer within the Operating Systems and Networking Group at Microsoft Research for 12 years, shipping technology in several versions of Microsoft Windows and the Bing search engine. Dinan has received over 20 patents and is a Chartered Software Engineer. Dinan has a Master's in Computer Science from the University of Cambridge and an M.B.A. from WBS.
Jakub Kicinski, Software Engineering, Netronome
Jakub Kicinski is a Software Engineer specializing in Linux kernel drivers for Netronome SmartNICs. Jakub previously worked as an intern for Intel Corporation. He is also a researcher with expertise in the Linux kernel and experience in application development on complex multi-CPU and FPGA platforms. He is interested in high-performance software exploiting hardware capabilities and is passionate about networking. Jakub has a Master's in Computer Science from Gdansk University of Technology.
A Kernel of Truth: Intrusion Detection and Attestation with eBPF - oholiab
"Attestation is hard" is something you might hear from security researchers tracking nation states and APTs, but it's actually pretty true for most network-connected systems!
Modern deployment methodologies mean that disparate teams create workloads for shared worker-hosts (ranging from Jenkins to Kubernetes and all the other orchestrators and CI tools in-between), meaning that at any given moment your hosts could be running any one of a number of services, connecting to who-knows-what on the internet.
So when your network-based intrusion detection system (IDS) opaquely declares that one of these machines has made an "anomalous" network connection, how do you even determine if it's business as usual? Sure you can log on to the host to try and figure it out, but (in case you hadn't noticed) computers are pretty fast these days, and once the connection is closed it might as well not have happened... Assuming it wasn't actually a reverse shell...
At Yelp we turned to the Linux kernel to tell us whodunit! Utilizing the Linux kernel's eBPF subsystem - an in-kernel VM with syscall hooking capabilities - we're able to aggregate metadata about the calling process tree for any internet-bound TCP connection by filtering IPs and ports in-kernel and enriching with process-tree information in userland. The result is "pidtree-bcc": a supplementary IDS. Now whenever there's an alert for a suspicious connection, we just search for it in our SIEM (spoiler alert: it's nearly always an engineer doing something "innovative")! And the cherry on top? It's stupid fast with negligible overhead, creating a much higher signal-to-noise ratio than the kernel's firehose-like audit subsystem.
This talk will look at how you can tune the signal-to-noise ratio of your IDS by making it reflect your business logic and common usage patterns, get more work done by reducing MTTR for false positives, use eBPF and the kernel to do all the hard work for you, accidentally load test your new IDS by not filtering all RFC-1918 addresses, and abuse Docker to get to production ASAP!
As well as looking at some of the technologies that the kernel puts at your disposal, this talk will also trace pidtree-bcc's road from hackathon project to production system, and show how an early focus on demonstrating business value earned us organizational buy-in to build and deploy a brand-new project from scratch.
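The userland half of the approach described above - walking the calling process tree - can be sketched in plain Python by reading /proc on Linux. This is only a sketch: the in-kernel IP/port filtering that pidtree-bcc performs with eBPF is not reproduced here, and the dictionary shape is invented for illustration.

```python
import os

def process_ancestry(pid):
    """Collect (pid, comm) for a process and its ancestors by walking
    the PPid chain in /proc/<pid>/status.

    This mirrors only the userland enrichment step the talk describes;
    pidtree-bcc does the TCP-connection filtering in-kernel with eBPF.
    """
    tree = []
    while pid > 0:
        try:
            with open(f"/proc/{pid}/status") as f:
                # Every line in /proc/<pid>/status looks like "Key:\tvalue"
                fields = dict(
                    line.split(":\t", 1) for line in f if ":\t" in line
                )
        except FileNotFoundError:
            break  # process exited while we were walking the chain
        tree.append({"pid": pid, "comm": fields["Name"].strip()})
        pid = int(fields["PPid"])  # 0 terminates the walk (no parent)
    return tree

if __name__ == "__main__":
    for entry in process_ancestry(os.getpid()):
        print(entry["pid"], entry["comm"])
```

Running this prints the current process followed by its ancestors up to PID 1 (or the namespace root in a container), which is exactly the kind of metadata you would attach to a suspicious-connection alert.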
Kernel Recipes 2014 - NDIV: a low overhead network traffic diverter - Anne Nicolas
NDIV is a young, very simple, yet efficient network traffic diverter. Its purpose is to help build network applications that intercept packets at line rate with a very low processing overhead. A first example application is a stateless HTTP server reaching line rate on all packet sizes.
Willy Tarreau, HaproxyTech
Cilium - Fast IPv6 Container Networking with BPF and XDP - Thomas Graf
We present a new open source project which provides IPv6 networking for Linux containers by generating programs for each individual container on the fly and then running them as JITed BPF code in the kernel. By generating and compiling the code, the program is reduced to the minimally required feature set and then heavily optimised by the compiler, as parameters become plain variables. The upcoming addition of the eXpress Data Path (XDP) to the kernel will make this approach even more efficient, as the programs will get invoked directly from the network driver.
Kernel Recipes 2013 - Nftables, what motivations and what solutions - Anne Nicolas
Iptables and Netfilter were introduced in 2001 along with Linux 2.4 as the full firewall layer. The functionality and the code changed quite a lot during the following decade, but nothing like what has been done with nftables.
The motivation for this change is to overcome the limitations of iptables, which was beginning to show its age both functionally and in its code design: rule updates are very expensive (the cost grows with the number of rules, which becomes a problem when managing non-static rule sets), and code duplication makes maintenance problematic for developers and users alike.
Nftables is a replacement for iptables that has been developed since 2008 by Patrick McHardy, the head of the Netfilter project. After a dormant period, development resumed in 2012 and a team of developers was formed and is working on the project.
Nftables solves the update-performance problem by using a messaging protocol between the kernel and user space, built on the Netlink infrastructure, which underpins the latest major Netfilter developments.
The most notable changes:
* incremental and atomic rule updates, guaranteeing both the performance and the consistency of the rule set
* rules expressed through a pseudo-machine, avoiding the complex work of writing kernel modules and additional extensions
Nftables overcomes the limitations of iptables and brings features that should solve many problems in an elegant and efficient way. The work done so far is already significant; only the high-level library has not yet been developed. Given the remaining work, the first official release is planned for late 2013.
DockerCon 2017 - Cilium - Network and Application Security with BPF and XDP - Thomas Graf
This talk will start with a deep dive and hands-on examples of BPF, possibly the most promising low-level technology for addressing challenges in application and network security, tracing, and visibility. We will discuss how BPF evolved from a simple bytecode language for filtering raw sockets for tcpdump into a JITable virtual machine capable of universally extending and instrumenting both the Linux kernel and user-space applications. The introduction is followed by a concrete example of how the Cilium open source project applies BPF to solve networking, security, and load balancing for highly distributed applications. We will discuss and demonstrate how Cilium, with the help of BPF, can be combined with distributed-system orchestration such as Docker to simplify security, operations, and troubleshooting of distributed applications.
Kernel Recipes 2013 - Deciphering Oopsies - Anne Nicolas
The Linux kernel is a very complex beast living in millions of households and data centers around the world. Normally, you’re not supposed to notice its presence, but when it gets cranky because something doesn’t suit it, it spits out crazy messages colloquially called oopses and panics.
In this talk, we’re going to try to understand how to read those messages in order to be able to address its complaints so that it can get back to work for us.
Andrea Righi - Spying on the Linux kernel for fun and profit - linuxlab_conf
Do you ever wonder what the kernel is doing while your code is running? This talk will explore some methodologies and techniques (eBPF, ftrace, etc.) to look under the hood of the Linux kernel and understand what it’s actually doing behind the scenes.
This talk explores methodologies for taking a “live” look at kernel-internal operations, from the network stack to I/O paths, CPU usage, and memory allocations, using in-kernel technologies like eBPF and ftrace. Understanding such kernel internals can be really helpful for tracking down performance bottlenecks and debugging system failures, and it can also be a very effective way to approach kernel development.
BPF, the Berkeley Packet Filter mechanism, was first introduced in Linux in 1997, in version 2.1.75. It has seen a number of extensions over the years. Recently, in versions 3.15-3.19, it received a major overhaul which drastically expanded its applicability. This talk will cover how the instruction set looks today and why: its architecture, capabilities, interface, and just-in-time compilers. We will also talk about how it is being used in different areas of the kernel, like tracing and networking, and about future plans.
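To give a feel for the register-machine flavor of the overhauled instruction set, here is a toy interpreter in Python. The mnemonics echo eBPF's ALU operations, but the tuple encoding, opcode names, and reduced register file are invented for illustration; real eBPF uses fixed-size 64-bit instruction words, eleven registers (r0-r10), and an in-kernel verifier.

```python
def run(prog):
    """Execute a toy eBPF-like program: a small register file where r0
    holds the return value, and arithmetic wraps at 64 bits as in the
    real ISA."""
    MASK = (1 << 64) - 1
    reg = [0] * 10
    for op, dst, src in prog:
        if op == "mov_imm":          # rdst = imm
            reg[dst] = src & MASK
        elif op == "add_reg":        # rdst += rsrc (mod 2**64)
            reg[dst] = (reg[dst] + reg[src]) & MASK
        elif op == "mul_imm":        # rdst *= imm (mod 2**64)
            reg[dst] = (reg[dst] * src) & MASK
        elif op == "exit":           # return r0
            break
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return reg[0]

# r0 = (2 + 3) * 7
prog = [
    ("mov_imm", 0, 2),
    ("mov_imm", 1, 3),
    ("add_reg", 0, 1),
    ("mul_imm", 0, 7),
    ("exit", 0, 0),
]
print(run(prog))  # 35
```

The 64-bit wrap-around is the detail worth noticing: it is what lets a JIT map each toy instruction directly onto one native machine instruction.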
“p4alu” is a P4 program that parses UDP packets carrying a payload in the "p4alu header format" and applies a calculation.
This program is tested using the BMv2 simple_switch P4 target.
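A rough idea of what such a program does can be sketched in Python. The field layout below (a 1-byte opcode, two 32-bit operands, and a 32-bit result slot, all big-endian) is a guess for illustration only; the authoritative header format is the one defined in the p4alu P4 source.

```python
import struct

# Hypothetical p4alu-style payload: opcode, operand a, operand b, result.
HDR = struct.Struct("!BIII")

def p4alu_process(payload):
    """Parse the payload, apply the requested calculation, and return the
    payload with the result field filled in (32-bit wrap-around)."""
    opcode, a, b, _ = HDR.unpack(payload[:HDR.size])
    if opcode == 1:                      # add
        result = (a + b) & 0xFFFFFFFF
    elif opcode == 2:                    # subtract
        result = (a - b) & 0xFFFFFFFF
    else:
        raise ValueError(f"unsupported opcode {opcode}")
    return HDR.pack(opcode, a, b, result) + payload[HDR.size:]

pkt = HDR.pack(1, 40, 2, 0)              # request: 40 + 2
out = p4alu_process(pkt)
print(HDR.unpack(out)[3])                # 42
```

In the real pipeline the same parse/compute/deparse steps are expressed in P4 and executed by the simple_switch data plane rather than in user-space Python.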
This is a tutorial for implementing an application-level traffic analyzer using the SF-TAP flow abstractor.
http://sf-tap.github.io/
https://github.com/SF-TAP/
https://github.com/SF-TAP/flow-abstractor
https://www.usenix.org/conference/lisa15/conference-program/presentation/takano
http://ytakano.github.io/
Network Automation (Bay Area Juniper Networks Meetup) - Alejandro Salinas
Network Automation Presentation at the Bay Area Juniper Networks Meetup. Here I present three stories about network automation at Groupon, increasing in complexity as we go, and also touch on some of the process/management challenges.
Beyond TCP: The evolution of Internet transport protocols - Olivier Bonaventure
The transport layer is one of the key layers of the Internet protocol stack. It enriches the network layer service to make it suitable for applications. Almost 40 years after its initial design, TCP remains the most widely used transport protocol. In the early 2000s, SCTP was proposed as an alternative to TCP. Despite a clean and extensible design and many useful features, it did not reach wide deployment. This failure is mainly caused by middleboxes. We'll describe their operation and explain why Multipath TCP, which is a backward-compatible evolution of TCP, has better chances of being deployed. We'll explain the main principles behind Multipath TCP and the lessons that can be drawn from its design. We'll then analyse why Internet giants like Google and Microsoft now consider application-layer solutions like QUIC to replace standard protocols like TCP.
Nadav Markus goes over the path from a simple crash PoC provided by Google Project Zero (for CVE-2015-7547) to a fully weaponized exploit.
He explores how an attacker can utilize the behavior of the Linux kernel in order to bypass ASLR, allowing an attacker to remotely execute code on vulnerable targets.
Continuous integration, delivery, and deployment (CICD) is widely
used in DevOps communities, as it allows for teams of all sizes to
deploy rapidly-changing hardware and software resources quickly
and confidently.
Banog meetup August 30th, network device property as code - Damien Garros
Managing Network Device Properties as Code:
Device configuration templates have simplified a lot of things for the network industry, but most people are still managing their device properties (a.k.a. variables) manually, which is tedious and error-prone. This talk will present a new approach to generating and managing network device properties easily, using infrastructure-as-code principles.
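The "properties as code" idea above can be sketched in a few lines of Python: properties live in version-controlled data, and configuration is generated from a template. The property names, addresses, and template below are invented for illustration; a real deployment would more likely use per-device YAML files and a templating engine such as Jinja2.

```python
# Device properties kept as data, one record per device, under version
# control instead of being edited by hand on each box.
DEVICES = {
    "edge-router-1": {"hostname": "edge-router-1", "asn": 65001,
                      "loopback": "10.0.0.1"},
    "edge-router-2": {"hostname": "edge-router-2", "asn": 65001,
                      "loopback": "10.0.0.2"},
}

# A single template renders every device's configuration, so a fix made
# once applies everywhere.  Doubled braces are literal braces.
TEMPLATE = """\
system {{
    host-name {hostname};
}}
interfaces {{
    lo0 {{ address {loopback}/32; }}
}}
routing-options {{ autonomous-system {asn}; }}
"""

def render(device):
    """Generate a device configuration from its stored properties."""
    return TEMPLATE.format(**DEVICES[device])

print(render("edge-router-1"))
```

Because every rendered configuration is a pure function of the data, reviews and rollbacks become ordinary version-control operations on the properties, not manual edits on live devices.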
This presentation introduces the Data Plane Development Kit (DPDK): an overview and the basics. It is part of a Network Programming Series.
First, the presentation focuses on the network performance challenges of modern systems by comparing modern CPUs with modern 10 Gbps Ethernet links. It then touches on the memory hierarchy and kernel bottlenecks.
The following part explains the main DPDK techniques, like polling, bursts, hugepages and multicore processing.
The DPDK overview explains how a DPDK application is initialized and run, and touches on lockless queues (rte_ring), memory pools (rte_mempool), memory buffers (rte_mbuf), hashes (rte_hash) with cuckoo hashing, the longest-prefix-match library (rte_lpm), poll-mode drivers (PMDs) and the kernel NIC interface (KNI).
At the end, there are a few DPDK performance tips.
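The single-producer/single-consumer case of a lockless queue like rte_ring can be sketched in Python. This is a toy: the real rte_ring is a C structure with cache-aligned head/tail indices and compare-and-swap paths for multi-producer mode, none of which is reproduced here; only the core idea survives - producer and consumer each advance their own index, so the SPSC case needs no lock.

```python
class SpscRing:
    """Toy single-producer/single-consumer ring in the spirit of
    rte_ring: a fixed power-of-two buffer with free-running head and
    tail counters masked into slot indices."""

    def __init__(self, size):
        assert size > 0 and (size & (size - 1)) == 0, "size must be a power of two"
        self.buf = [None] * size
        self.mask = size - 1
        self.head = 0  # next slot the producer writes (only producer moves it)
        self.tail = 0  # next slot the consumer reads (only consumer moves it)

    def enqueue(self, item):
        if self.head - self.tail == len(self.buf):
            return False               # ring full
        self.buf[self.head & self.mask] = item
        self.head += 1
        return True

    def dequeue(self):
        if self.head == self.tail:
            return None                # ring empty
        item = self.buf[self.tail & self.mask]
        self.tail += 1
        return item

ring = SpscRing(4)
for n in range(5):
    ring.enqueue(n)                    # the fifth enqueue fails: ring is full
print([ring.dequeue() for _ in range(4)])  # [0, 1, 2, 3]
```

The free-running counters are the trick DPDK relies on: comparing head and tail distinguishes full from empty without a separate count field, and masking keeps indexing branch-free.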
"This deck is from the opening session of the "Introduction to Programming Pascal (P100) with CUDA 8" workshop at CSCS in Lugano, Switzerland. The three-day course is intended to offer an introduction to Pascal computing using CUDA 8."
Watch the video: http://wp.me/p3RLHQ-gsQ
Learn more: http://www.cscs.ch/events/event_detail/index.html?tx_seminars_pi1%5BshowUid%5D=155
I was asked to talk in front of computer science students at Bar-Ilan University about "what happens" when you don't care about writing "secure" or "safe" code. A perfect example of that, in my opinion, is the world of embedded computing, AKA the IoT. I talked about the history of consumer embedded devices and showed a live demo of a 0-day I found in one of the most popular routers in the country.
BPF & Cilium - Turning Linux into a Microservices-aware Operating System - Thomas Graf
Container runtimes cause Linux to return to its original purpose: serving applications that interact directly with the kernel. At the same time, the Linux kernel is traditionally difficult to change, and its development process is surrounded by myths. A new, efficient in-kernel programming language called eBPF is changing this, allowing everyone to extend existing kernel components or glue them together in new forms without having to change the kernel itself.
OpenShift Origin Community Day (Boston) Writing Cartridges V2 by Jhon Honce - Diane Mueller
Presenters: Jhon Honce
Cartridges allow developers to provide services running on top of the Red Hat OpenShift Platform-as-a-Service (PaaS). OpenShift already provides cartridges for numerous web application frameworks and databases. Writing your own cartridges allows you to customize or enhance an existing service, or provide new services. In this session, the presenter will discuss best practices for cartridge development and the latest changes in the OpenShift cartridge support.
* Latest changes made in the platform to ease cartridge development
* OpenShift Cartridges vs. plugins
* Outline for development of a new cartridge
* Customization of existing cartridges
* Quickstarts: leveraging a cartridge or cartridges to provide a complete application
OpenShift Origin Community Day (Boston) Extending OpenShift Origin: Build You... - OpenShift Origin
Extending OpenShift Origin: Build Your Own Cartridge
Presenters: Jhon Honce
How to Use GSM/3G/4G in Embedded Linux Systems - Toradex
The number of embedded devices connected to the internet is growing every day. Nowadays they are mostly installed using a wireless connection, and they need mobile network coverage to reach the internet. Read our blog, which describes the various configurations needed to connect a device such as a Colibri iMX6S with the Colibri Evaluation Board running Linux to the internet over a PPP (Point-to-Point Protocol) link. Read more: https://www.toradex.com/blog/how-to-use-gsm-3g-4g-in-embedded-linux-systems
DevSecCon London 2019: A Kernel of Truth: Intrusion Detection and Attestation... - DevSecCon
Matt Carroll
Infrastructure Security Engineer at Yelp
"Attestation is hard" is something you might hear from security researchers tracking nation states and APTs, but it's actually pretty true for most network-connected systems!
Modern deployment methodologies mean that disparate teams create workloads for shared worker-hosts (ranging from Jenkins to Kubernetes and all the other orchestrators and CI tools in-between), meaning that at any given moment your hosts could be running any one of a number of services, connecting to who-knows-what on the internet.
So when your network-based intrusion detection system (IDS) opaquely declares that one of these machines has made an "anomalous" network connection, how do you even determine if it's business as usual? Sure you can log on to the host to try and figure it out, but (in case you hadn't noticed) computers are pretty fast these days, and once the connection is closed it might as well not have happened... Assuming it wasn't actually a reverse shell...
At Yelp we turned to the Linux kernel to tell us whodunit! Utilizing the Linux kernel's eBPF subsystem - an in-kernel VM with syscall hooking capabilities - we're able to aggregate metadata about the calling process tree for any internet-bound TCP connection by filtering IPs and ports in-kernel and enriching with process tree information in userland. The result is "pidtree-bcc": a supplementary IDS. Now whenever there's an alert for a suspicious connection, we just search for it in our SIEM (spoiler alert: it's nearly always an engineer doing something "innovative")! And the cherry on top? It's stupid fast with negligible overhead, creating a much higher signal-to-noise ratio than the kernels firehose-like audit subsystems.
This talk will look at how you can tune the signal-to-noise ratio of your IDS by making it reflect your business logic and common usage patterns, get more work done by reducing MTTR for false positives, use eBPF and the kernel to do all the hard work for you, accidentally load test your new IDS by not filtering all RFC-1918 addresses, and abuse Docker to get to production ASAP!
As well as looking at some of the technologies that the kernel puts at your disposal, this talk will trace pidtree-bcc's road from hackathon project to production system, and show how an early focus on demonstrating business value earned us the organizational buy-in to build and deploy a brand-new project from scratch.
This work presents a P4 compiler backend targeting XDP, the eXpress Data Path. P4 is a domain-specific language describing how packets are processed by the data plane of programmable network elements. XDP is designed for users who want programmability as well as performance.
https://github.com/williamtu/p4c-xdp/
Similar to Senior Design: Raspberry Pi Cluster Computing (20)
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical and economic.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim on the adoption of Digital Transformation in the Water Industry.
Immunizing Image Classifiers Against Localized Adversary Attacks (gerogepatton)
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks (CNNs), to adversarial attacks and presents a proactive training technique designed to counter them. We introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations. When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10 and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing accuracy improvements over previous techniques. The results indicate that the combination of the volumetric input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating adversary training.
Cosmetic shop management system project report.pdf (Kamal Acharya)
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of the general workflow and administration process of the shop. The main processes of the system focus on customers' requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help the employees to quickly identify the list of cosmetic products that have reached the minimum quantity and also keep track of the expiry date of each cosmetic product. It should help the employees to find the rack number in which a product is placed. It is also a faster and more efficient way of working.
Hybrid optimization of pumped hydro system and solar - Engr. Abdul-Azeez.pdf (fxintegritypublishin)
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Student information management system project report ii.pdf (Kamal Acharya)
Our project is about student management. It mainly covers the various actions related to student details, making it easy to add, edit and delete student records. It also provides a less time-consuming process for viewing, adding, editing and deleting the marks of the students.
Final project report on grocery store management system..pdf (Kamal Acharya)
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. Customers increasingly wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. The objective of this project is to develop a basic website where a consumer is provided with a shopping cart, and to learn about the technologies used to develop such a website.
This document will discuss each of the underlying technologies used to create and implement an e-commerce website.
8. Final Design
• 3D printed using SolidWorks
• Plexiglass
• Wired/Wireless router
• Heat Sinks and PC fans
• Power hub
9. OPERATING SYSTEM: RASPBIAN JESSIE (w/ NOOBS)
◦ Easy to use
◦ Lightweight OS
◦ Open source
◦ Bash Terminal interface
◦ Linux/Unix kernel
10. Bash Terminal - used to:
◦ Edit and create files to manipulate the OS and ports, i.e. setting up the host names and mounting drives
◦ Install software packages (e.g. OpenMPI, nfs-kernel-server)
◦ See IP addresses, node settings and network connections
Style of syntax used to operate in the Terminal:
◦ $ sudo apt-get install <package> - used to install packages
◦ $ sudo nano <file> - used to edit files
11. OpenMPI:
◦ Message Passing Interface used to implement parallel computing
◦ Takes the data, breaks it into smaller chunks, and distributes it to the nodes to run simultaneously
◦ This method increases processing speed and efficiency
◦ Can compile and execute programs in C, C++, & Fortran
◦ The GCC compiler is used to compile the program to be processed in a parallel fashion
12. First, all packages were updated and installed:
◦ gfortran
◦ nfs-common & nfs-kernel-server
◦ build-essential, manpages-dev
◦ openmpi-bin/-doc, libopenmpi-dev
◦ etc.
Go into the configuration using sudo raspi-config
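The package installation above can be sketched as a single shell sequence. The package names are taken from the slide's list; exact names may differ between Raspbian releases:

```shell
# Refresh the package lists before installing anything
sudo apt-get update

# Compilers and build tooling
sudo apt-get install -y gfortran build-essential manpages-dev

# NFS support, needed later to share the /mirror directory across nodes
sudo apt-get install -y nfs-common nfs-kernel-server

# OpenMPI runtime, documentation, and development headers
sudo apt-get install -y openmpi-bin openmpi-doc libopenmpi-dev

# Then enter the Raspberry Pi configuration menu
sudo raspi-config
```

The same sequence is run on the master and on every slave node, so the cluster has identical toolchains everywhere.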
13. Settings for the master were the same as the slave nodes:
◦ Set the host name (rpi0 on the master)
◦ Enable ssh
◦ Overclock to the "pi2" setting
◦ Set the memory split to 16
14. Install all the same packages as on the master node
sudo raspi-config to set all the same system preferences as the master node
Photo courtesy of www.raspberrypi.org
15.
#include <stdio.h> // Standard input/output library
#include <mpi.h>

int main(int argc, char** argv)
{
    // MPI variables
    int num_processes;
    int curr_rank;
    char proc_name[MPI_MAX_PROCESSOR_NAME];
    int proc_name_len;

    // Initialize MPI
    MPI_Init(&argc, &argv);

    // Get the number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &num_processes);

    // Get the rank of the current process
    MPI_Comm_rank(MPI_COMM_WORLD, &curr_rank);

    // Get the processor name for the current process
    MPI_Get_processor_name(proc_name, &proc_name_len);

    // Check that we're running this process
    printf("Calling process %d out of %d on %s\n", curr_rank, num_processes, proc_name);

    // Wait for all processes to finish and shut down MPI
    MPI_Finalize();
    return 0;
}
• Creates user-specified dummy processes of equal size
• Allocates the processes dynamically to each node
• Displays the process number upon completion
16.
#include <stdio.h>
#include <math.h>
#include <mpi.h>

#define TOTAL_ITERATIONS 10000

int main(int argc, char *argv[])
{
    // MPI variables
    int num_processes;
    int curr_rank;

    // Keep track of the current for-loop iterations
    int total_iter;
    int step_iter;

    // Variables used to calculate pi
    double pi;                 // the final value
    double curr_pi, h, sum, x; // step variables

    // Start up MPI
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_processes);
    MPI_Comm_rank(MPI_COMM_WORLD, &curr_rank);

    // Iterate up to TOTAL_ITERATIONS to calculate pi within a certain error margin
    for (total_iter = 2; total_iter < TOTAL_ITERATIONS; total_iter++)
    {
17.
        // Initialize the sum
        sum = 0.0;

        // Determine the step size
        h = 1.0 / (double) total_iter;

        // The current process performs operations on the steps matching its rank,
        // offset by multiples of the total number of processes
        for (step_iter = curr_rank + 1; step_iter <= total_iter; step_iter += num_processes)
        {
            // Determine the current step (the midpoint of the interval)
            x = h * ((double) step_iter - 0.5);
            // Add the current step's value
            sum += (4.0 / (1.0 + x * x));
        }

        // Resolve the sum into this process's calculated value of pi
        curr_pi = h * sum;

        // Reduce all processes' pi values to one value on rank 0
        MPI_Reduce(&curr_pi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    }

    // Print out the final value and error (the reduced result lives on rank 0)
    if (curr_rank == 0) {
        printf("Calculated Pi = %.16f\n", pi);
        printf("Absolute Error = %.16f\n", fabs(pi - M_PI));
    }

    // Wrap up MPI
    MPI_Finalize();
    return 0;
}
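As a check on the arithmetic above: the inner loop is a midpoint-rule approximation of the integral that equals pi. In the variables of the listing (n = total_iter, x_k the midpoint of the k-th subinterval), each pass computes:

```latex
\pi = \int_0^1 \frac{4}{1+x^2}\,dx
\;\approx\; h \sum_{k=1}^{n} \frac{4}{1+x_k^2},
\qquad h = \frac{1}{n}, \quad x_k = h\left(k - \tfrac{1}{2}\right)
```

Each rank accumulates the terms for its strided subset of k values, and MPI_Reduce sums the partial results into the final estimate on rank 0.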
18. Set all node IP addresses as static in:
◦ sudo nano /etc/network/interfaces (edit on all nodes)
◦ This step differs between wired and wireless
◦ For wired, enter a static eth0 address
◦ For wireless, enter a static address using wlan0
Set all hostnames to the now-static IPs:
◦ sudo nano /etc/hosts (edit on all nodes)
◦ Add in the hostnames and addresses, for example:
◦ rpi0 192.168.0._
◦ rpi1 192.168.0._
Now we can ssh from one Pi to another without having to type IP addresses
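For illustration only, a static wired stanza in /etc/network/interfaces and the matching /etc/hosts entries might look like the sketch below. The 192.168.0.x addresses here are hypothetical placeholders, not the project's actual assignments:

```shell
# /etc/network/interfaces -- static address for the wired interface (placeholder IPs)
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1

# /etc/hosts -- hostnames for the now-static addresses (appended on every node)
# 192.168.0.10    rpi0
# 192.168.0.11    rpi1
```

With the /etc/hosts entries in place, "ssh rpi1" resolves without typing an IP address.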
19. Setting up the wireless connection was essentially the same as setting up the wired connection
We assigned the IP addresses on the wireless router
Then we went into /etc/hosts and added the new IPs with hostnames
Added at the bottom of /etc/network/interfaces:
iface TP-LINK_7236 inet static
◦ address 192._._._
◦ netmask 255.255.255.0
◦ gateway 192._._._
20. Next, a common user was created on all nodes to allow the nodes to communicate without the need for repeated password entry:
◦ sudo useradd -m -u 2345 mpiu
Next, the shared directory was set up on the master node:
◦ sudo mkdir /mirror // makes the directory
◦ sudo chown mpiu:mpiu /mirror/ // changes ownership
◦ sudo service rpcbind start
◦ sudo update-rc.d rpcbind enable
21. sudo nano /etc/exports
◦ Line added at the bottom of the file:
◦ /mirror 192.168.0.0/24(rw,sync)
◦ This line allows all IP addresses from 192.168.0.0 - 192.168.0.255 to use this export
◦ This is a possible point of concern when it comes to wireless communication
Next, the NFS server was restarted and we ssh'd from rpi0 -> rpi1
The same thing was done on rpi1
Then "$ sudo mount rpi0:/mirror" actually mounts the node
These steps were repeated for all slave nodes
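Put end to end, the NFS export and mount steps above can be sketched as follows. The /mirror path and the subnet come from the slides; the restart command and the client-side mount point are assumptions:

```shell
# --- on the master node (rpi0) ---
# Export /mirror to the whole 192.168.0.0/24 subnet, read-write, with synchronous writes
echo '/mirror 192.168.0.0/24(rw,sync)' | sudo tee -a /etc/exports
sudo service nfs-kernel-server restart

# --- on each slave node (rpi1, rpi2, ...) ---
sudo mkdir -p /mirror            # local mount point (assumed to mirror the master's path)
sudo mount rpi0:/mirror /mirror  # mount the master's export over NFS
```

Once mounted, a program compiled into /mirror on the master is immediately visible at the same path on every slave, which is what lets mpiexec find the executable on all nodes.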
22. SSH keys generated using:
◦ ssh-keygen -t rsa
◦ A passphrase is recommended
◦ A randomart image of the key is then displayed
Next, the key is copied to the slave nodes using:
◦ ssh-copy-id mpiu@rpi1
"keychain logic" added to the .bashrc file
Photo courtesy visualgdb.com
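The slide does not show the "keychain logic" itself; a common pattern, assuming the keychain package is installed and the key is named id_rsa, is a single line in ~/.bashrc:

```shell
# ~/.bashrc -- start (or reuse) an ssh-agent and load the key once per boot,
# so mpiexec can ssh between nodes without re-entering the passphrase each time
eval "$(keychain --eval --quiet id_rsa)"
```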
23. Log in as mpiu on the master node using:
su - mpiu
Switch to the /mirror/code/ directory, which holds the MPI test programs, using "cd"
mpicc calc_pi.c -o calc_pi // this line compiles the program
time mpiexec -n 4 -H rpi0-3 calc_pi // this line executes the program on the master node and distributes it to the nodes via the mounts
The output is the solution and the time it took to execute
24. Here you can see the .c files and the executables in the directory
You can see the execution of the program with mpiexec
25. Initially we had assumed the code wasn't working correctly, but this proved to be an incorrect diagnosis
The times we were seeing were not making much sense
We ran the MPI tests on wired and wireless and found the processing times to be inconsistent
26. This led us to determine we had an issue with the mounts on the nodes
The main issue was that the nodes wouldn't read the mirrored programs off the master
We are still in the process of improving the design and graphically interpreting the data
27. Wired vs wireless performance
◦ Test the processing performance of the cluster when:
  Hard-wired to the router
  Using dongles for each node to communicate wirelessly
  Using Wireshark to observe packet latency between nodes
Computational benchmark tests
◦ Using benchmark software to observe total processing power across all Pi's
◦ Using a complicated program as test material to solve with the cluster
Graphical performance info
Implementation of practical applications
Active cooling for the Pi's
◦ Adding fans to the final case design
29.
Part                 Price per Item   Quantity   Total (4)   Link
Micro SD's           3.28             6          19.68       http://www.newegg.com/Product/Product.a
Micro USB's          4.69             4          18.76       http://www.amazon.com/AmazonBasics-Mic
Ethernet cables      0.82             4          3.28        http://www.newegg.com/Product/Product.a
Wifi Dongles         7.99             4          31.96       http://www.amazon.com/Kootek-Raspberry
Router (4-8 ports)   33.99            1          33.99       http://www.newegg.com/Product/Product.a
Raspberry Pi's       41.60            4          166.40      http://www.amazon.com/Raspberry-Pi-Mod
Heat sinks           2.41             4          9.64        http://www.amazon.com/Cooling-Aluminium
Dual Router          19.99            1          19.99       http://www.frys.com/product/8445718?site=
Fans                 3.95             2          7.90        http://www.tannerelectronics.com
Makers Space         35.00            1          35.00       https://dallasmakerspace.org
Power USB            29.99            1          29.99       http://www.bestbuy.com/site/insignia-7-po
Total of All Parts                               376.59
30. Diagnosing the mounting issue
Wireless and wired communication working
Final equipment list acquired
Measuring and sketching the layout of the case structures for the laser cutter
31. Compare wired vs wireless performance
◦ Detailed documenting and graphing of test results
Continue debugging and improving the system
◦ Finish debugging the mounting issue
Finish the first prototype of the final case design
◦ Measuring and cutting the structure of the case
32. Wired and wireless connection is complete
Debugging NFS and mounting issues
◦ Continuously running performance tests
Final case design blueprint is complete