This document on namespaces, cgroups, and systemd discusses:
1. Namespaces and cgroups, which provide isolation and resource-management capabilities in Linux.
2. systemd, a system and service manager that aims to boot faster and improve dependency management between services.
3. Key components of systemd, including unit files, systemctl, and tools to manage services, devices, mounts, and other resources.
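To make the unit-file concept above concrete, a minimal service unit might look like the following sketch (the service name, paths, and binary are hypothetical):

```ini
# /etc/systemd/system/example.service  (hypothetical unit and daemon)
[Unit]
Description=Example background service
After=network.target

[Service]
ExecStart=/usr/local/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Such a unit could then be managed with systemctl, e.g. `systemctl start example.service` and `systemctl status example.service`.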
Jenkins is an open source automation server written in Java. Jenkins helps automate the non-human parts of the software development process, with continuous integration, and facilitates the technical aspects of continuous delivery. It is a server-based system that runs in servlet containers such as Apache Tomcat.
qemu + gdb: The efficient way to understand/debug Linux kernel code/data stru... - Adrian Huang
Note: When you view the slide deck in a web browser, the screenshots may be blurred. You can download and view them offline (the screenshots are clear).
The Linux Kernel Implementation of Pipes and FIFOs - Divye Kapoor
A walkthrough of the code structure used in the Linux kernel to implement pipes and FIFOs.
This was presented to a senior-level class at the Indian Institute of Technology, Roorkee.
Redfish is an IPMI replacement standardized by the DMTF. It provides a RESTful API for server out-of-band management and a lightweight data-model specification that is scalable, discoverable, and extensible (cf. http://www.dmtf.org/standards/redfish). This presentation will start by detailing its role and the features it provides, with examples. It will demonstrate the benefits it brings to system administrators by providing a standardized open interface for multiple servers, as well as storage systems.
We will then cover various tools, such as those from the DMTF and the python-redfish library (cf. https://github.com/openstack/python-redfish), which offer Redfish abstractions.
Featured Speakers: Benoit Moussaud - Technical Director - XebiaLabs and Richard Mathis - Sales Director - XebiaLabs.
Date: Thursday, February 26th 2015
Time: 1:00 pm CET
With XL Release, discover how to:
1. Easily model your current releases (from commit to production deployment)
2. Integrate your existing tools (Jira, Jenkins, Maven, Puppet, ServiceNow, Selenium/Fitnesse…)
3. Measure your degree of automation and identify bottlenecks in your delivery process
4. Improve by automating manual tasks
5. Communicate and collaborate around a shared repository
6. Accelerate toward Continuous Delivery practices while keeping control all the way to production.
Target audience: Release Managers, Delivery Managers, Program Managers, Production Managers/Directors, Architects… anyone involved in the delivery process.
DockerCon 2017 - Cilium - Network and Application Security with BPF and XDP - Thomas Graf
This talk will start with a deep dive and hands-on examples of BPF, possibly the most promising low-level technology for addressing challenges in application and network security, tracing, and visibility. We will discuss how BPF evolved from a simple bytecode language for filtering raw sockets for tcpdump into a JITable virtual machine capable of universally extending and instrumenting both the Linux kernel and user-space applications. The introduction is followed by a concrete example of how the Cilium open source project applies BPF to solve networking, security, and load balancing for highly distributed applications. We will discuss and demonstrate how Cilium, with the help of BPF, can be combined with distributed-system orchestration such as Docker to simplify security, operations, and troubleshooting of distributed applications.
Kubernetes currently has two load-balancing modes: userspace and iptables. Both have limitations in scalability and performance. We introduced IPVS as a third kube-proxy mode, which scales the Kubernetes load balancer to support 50,000 services. Beyond that, the control plane needs to be optimized in order to deploy 50,000 services. We will introduce alternative solutions and our prototypes, with detailed performance data.
This presentation gives an overview of the Linux kernel block I/O subsystem and the importance of I/O schedulers in the block layer. It also describes the different types of I/O schedulers, including the Deadline, Anticipatory, Completely Fair Queuing (CFQ), and Noop I/O schedulers.
BPF & Cilium - Turning Linux into a Microservices-aware Operating System - Thomas Graf
Container runtimes cause Linux to return to its original purpose: serving applications that interact directly with the kernel. At the same time, the Linux kernel is traditionally difficult to change, and its development process is full of myths. A new efficient in-kernel programming language called eBPF is changing this, allowing everyone to extend existing kernel components or glue them together in new forms without having to change the kernel itself.
Linux offers an extensive selection of programmable and configurable networking components, from traditional bridges and encryption to container-optimized layer 2/3 devices, link aggregation, tunneling, and several classification and filtering languages, all the way up to full SDN components. This talk will provide an overview of many Linux networking components, covering the Linux bridge, IPVLAN, MACVLAN, MACVTAP, Bonding/Team, OVS, classification & queueing, tunnel types, hidden routing tricks, IPSec, VTI, VRF, and many others.
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, Xilinx - The Linux Foundation
This talk will introduce Dom0-less: a new way of using Xen to build mixed-criticality solutions. Dom0-less is a Xen feature that adds a novel approach to static partitioning based on virtualization. It allows multiple domains to start at boot time directly from the Xen hypervisor, decreasing boot times dramatically. Xen userspace tools, such as xl and libvirt, become optional.
Dom0-less extends the existing device-tree-based Xen boot protocol to cover information required by additional domains. Binaries, such as kernels and ramdisks, are loaded by the bootloader (U-Boot) and advertised to Xen via new device-tree bindings.
The audience will learn how to use Dom0-less to partition the system. U-Boot and device-tree configuration details will be explained to enable the audience to get the most out of this feature. The talk will include a status update and details on future plans.
eBPF is one of today's key technologies. Several eBPF-based tools exist in the networking and observability fields, but few in the storage space. This presentation tells my research story and tries to define some of the possibilities of the technology.
Container Network Interface: Network Plugins for Kubernetes and beyond - KubeAcademy
With the rise of modern containers come new problems to solve, especially in networking. Numerous container SDN solutions have recently entered the market, each best suited to a particular environment. Combined with the multiple container runtimes and orchestrators available today, there is a need for a common layer to allow interoperability between them and the network solutions.
As different environments demand different networking solutions, multiple vendors and viewpoints look to a specification to help guide interoperability. Container Network Interface (CNI) is a specification started by CoreOS with input from the wider open source community, aimed at making network plugins interoperable between container execution engines. It aims to be as common and vendor-neutral as possible, supporting a wide variety of networking options, from MACVLAN to modern SDNs such as Weave and flannel.
CNI is growing in popularity. It got its start as a network plugin layer for rkt, a container runtime from CoreOS. Today rkt ships with multiple CNI plugins, allowing users to take advantage of virtual switching, MACVLAN and IPVLAN, as well as multiple IP management strategies, including DHCP. CNI is gaining even wider adoption now that Kubernetes has added support for it. Kubernetes accelerates development cycles while simplifying operations, and with support for CNI it is taking the next step toward common ground for networking. For continued success toward interoperability, Kubernetes users can come to this session to learn the CNI basics.
This talk will cover the CNI interface, including an example of how to build a simple plugin. It will also show Kubernetes users how CNI can be used to solve their networking challenges and how they can get involved.
KubeCon schedule link: http://sched.co/4VAo
Tracing Summit 2014, Düsseldorf. What can Linux learn from DTrace: what went well, and what didn't go well, on its path to success? This talk will discuss not just the DTrace software, but lessons from the marketing and adoption of a system tracer, and an inside look at how DTrace was really deployed and used in production environments. It will also cover ongoing problems with DTrace, and how Linux may surpass them and continue to advance the field of system tracing. A world expert and core contributor to DTrace, Brendan now works at Netflix on Linux performance with the various Linux tracers (ftrace, perf_events, eBPF, SystemTap, ktap, sysdig, LTTng, and the DTrace Linux ports), and will summarize his experiences and suggestions for improvements. He has also been contributing to various tracers: recently promoting ftrace and perf_events adoption through articles and front-end scripts, and testing eBPF.
Accelerating Envoy and Istio with Cilium and the Linux Kernel - Thomas Graf
This talk will provide an introduction to injection options of Envoy and then deep dive into ongoing Linux kernel work that enables injecting Envoy while introducing as little latency as possible.
The service mesh and the sidecar proxy model are on a steep trajectory to redefine many networking and security use cases. This talk explains and demos a new socket-redirect Linux kernel technology that allows running Envoy with performance similar to the sidecar being linked to the application via a UNIX domain socket. The talk will also give an outlook on how Envoy can use the recently merged kernel TLS functionality to gain transparent access to the clear-text payload of end-to-end encrypted applications, without having to decrypt and re-encrypt any data, further reducing overhead and latency.
Launch the First Process in Linux System - Jian-Hong Pan
The session: https://coscup.org/2022/en/session/AGCMDJ
After the Linux kernel boots, it tries to launch the first user-space process, "init". Then the system begins the featured journey of the Linux distribution.
This talk takes Busybox as the example and shows how the Linux kernel finds the "init" that points to Busybox, what Busybox does, and how it gets a console, building something like a simple Linux system.
Before the Linux kernel launches the "init" process, the drivers/modules for the file system and storage must be loaded so that "init" can be found. Besides, to mount the root file system correctly, the kernel boot command line must include the root device and file-system format parameters.
On the other hand, Busybox, the program "init" points to, is lightweight but has rich functions, just like a Swiss Army knife. So it is usually used in simple environments, such as embedded Linux systems.
This talk includes a demo on a virtual machine first, then on a Raspberry Pi.
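To make the boot-parameter point above concrete, a kernel command line for such a setup might look like this sketch (device names are hypothetical and board-specific):

```
# Hypothetical kernel command line for an SD-card root file system:
root=/dev/mmcblk0p2 rootfstype=ext4 rootwait init=/sbin/init
# where /sbin/init on the root file system is a symlink to /bin/busybox
```

`root=` and `rootfstype=` tell the kernel which device and format to mount as `/`, and `init=` names the first user-space program to execute.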
Drafts:
* https://hackmd.io/@starnight/Busbox_as_the_init
* https://hackmd.io/@starnight/Build_Alpines_Root_Filesystem_Bootstrap
Related idea: https://hackmd.io/@starnight/Systems_init_and_Containers_COMMAND_Dockerfiles_CMD
In this talk Jiří Pírko discusses the design and evolution of the VLAN implementation in Linux, the challenges and pitfalls as well as hardware acceleration and alternative implementations.
Jiří Pírko is a major contributor to kernel networking and the creator of libteam for link aggregation.
Honest Performance Testing with "NDBench" (Vinay Chella, Netflix) | Cassandra... - DataStax
Apache Cassandra makes it possible to execute millions of operations per second in a scalable fashion. Harnessing the power of C* leaves many developers pondering the following:
- Is my data model appropriate, or will it end up as wide partition(s) causing heap pressure and other issues?
- How do I tune my connection pool configuration? What are the optimal settings for my environment?
- What is my C* cluster's capacity in terms of IOPS for given 95th- and 99th-percentile latencies?
- How do I perf-test my data access layer?
In this talk, Vinay Chella, Cloud Data Architect @ Netflix, will share the open source tools, techniques, and platform (NDBench) that Netflix uses to perf-test its C* fleet with simulations of millions of operations per second.
About the Speaker
Vinay Chella, Cloud Data Architect, Netflix Inc.
Vinay Chella is a Cloud Data Architect at Netflix with a deep understanding of Cassandra and other RDBMSs. As an engineer and architect, he works extensively on data modeling, performance tuning, and guiding best practices for various persistence stores, helping teams at Netflix build next-generation data access layers.
Augeas, swiss knife resources for your puppet tree - Julien Pivotto
This talk gives you an introduction to Augeas, with some use cases and a demo of its Puppet integration. It also gives an introduction to managing files with Puppet.
In the Python community we are taught from the outset of learning the language that the Zen of Python serves as a guide for how we should construct our codebases and projects. Rather than go into the zen-like meanings of each statement, this talk will explore how individual koans are implemented via detailed displays of sophisticated code examples.
A talk presented at the Automotive Grade Linux All-Members meeting on September 8, 2015. The focus is on why AGL should adopt systemd, highlighting two of the more difficult integration issues that may arise while doing so. The embedded SVG image, courtesy of Marko Hoyer of ADIT, is at http://she-devel.com/2015-07-23_amm_demo.svg
This document explains how to use a Docker container in an Ubuntu 14.04 VM to try out cgroups without touching the host/guest operating system. It covers the 'cpu' subsystem and demonstrates the effect of process isolation with the 'htop' utility.
Docker containers are a very effective way of trying things out: launch a container from a standard or custom Docker image pulled from Docker Hub or your own image repository.
Introduction to the Hadoop Ecosystem (FrOSCon Edition) - Uwe Printz
Talk held at the FrOSCon 2013 on 24.08.2013 in Sankt Augustin, Germany
Agenda:
- What is Big Data & Hadoop?
- Core Hadoop
- The Hadoop Ecosystem
- Use Cases
- What's next? Hadoop 2.0!
Operating-system-level virtualization in Linux (so-called containers) relies on resource isolation and on managing resource usage. Linux namespaces are the tool that allows resources to be isolated from one another at the naming level. For example, process names are their identifiers (PIDs), which can be organized so that processes can never learn of each other's existence. This and other interesting topics are covered in the presentation.
Linux 4.x Tracing Tools: Using BPF Superpowers - Brendan Gregg
Talk for USENIX LISA 2016 by Brendan Gregg.
"Linux 4.x Tracing Tools: Using BPF Superpowers
The Linux 4.x series heralds a new era of Linux performance analysis, with the long-awaited integration of a programmable tracer: Enhanced BPF (eBPF). Formerly the Berkeley Packet Filter, BPF has been enhanced in Linux to provide system tracing capabilities, and integrates with dynamic tracing (kprobes and uprobes) and static tracing (tracepoints and USDT). This has allowed dozens of new observability tools to be developed so far: for example, measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more. These lead to performance wins large and small, especially when instrumenting areas that previously had zero visibility. Tracing superpowers have finally arrived.
In this talk I'll show you how to use BPF in the Linux 4.x series, and I'll summarize the different tools and front ends available, with a focus on iovisor bcc. bcc is an open source project to provide a Python front end for BPF, and comes with dozens of new observability tools (many of which I developed). These tools include new BPF versions of old classics, and many new tools, including: execsnoop, opensnoop, funccount, trace, biosnoop, bitesize, ext4slower, ext4dist, tcpconnect, tcpretrans, runqlat, offcputime, offwaketime, and many more. I'll also summarize use cases and some long-standing issues that can now be solved, and how we are using these capabilities at Netflix."
Linux Performance Analysis: New Tools and Old Secrets - Brendan Gregg
Talk for USENIX/LISA2014 by Brendan Gregg, Netflix. At Netflix performance is crucial, and we use many high to low level tools to analyze our stack in different ways. In this talk, I will introduce new system observability tools we are using at Netflix, which I've ported from my DTraceToolkit, and are intended for our Linux 3.2 cloud instances. These show that Linux can do more than you may think, by using creative hacks and workarounds with existing kernel features (ftrace, perf_events). While these are solving issues on current versions of Linux, I'll also briefly summarize the future in this space: eBPF, ktap, SystemTap, sysdig, etc.
Title: Ansible, best practices.
Ansible has taken a prominent place in the config-management world. By now many people involved in DevOps have taken a look at it or done a first project with it. Now it is time to step back and look at quality and craftsmanship. Bas Meijer, Ansible ambassador, will talk about Ansible best practices and show tips, tricks, and examples based on several projects.
About the speaker
Bas is a systems engineer and software developer who has wasted decades on late-night hacking. He is currently helping two enterprises with continuous delivery and DevOps.
Linux Container Brief for IEEE WG P2302 - Boden Russell
A brief intro to Linux containers, presented to IEEE working group P2302 (InterCloud standards and portability). This deck covers:
- Definitions and motivations for containers
- Container technology stack
- Containers vs Hypervisor VMs
- Cgroups
- Namespaces
- pivot_root vs chroot
- Linux Container image basics
- Linux Container security topics
- Overview of Linux Container tooling functionality
- Thoughts on container portability and runtime configuration
- Container tooling in the industry
- Container gaps
- Sample use cases for traditional VMs
Overall, the bulk of this deck is covered in other material I have posted here. However, there are a few new slides in this deck, most notably some thoughts on container portability and runtime configuration.
Securing Applications and Pipelines on a Container Platform - All Things Open
Presented at: Open Source 101 at Home
Presented by: Veer Muchandi, Red Hat Inc
Abstract: While everyone wants to do Containers and Kubernetes, they don’t know what they are getting into from a security perspective. This session intends to take you from “I don’t know what I don’t know” to “I know what I don’t know”. This helps you make informed choices on application security.
Kubernetes as a Container Platform is becoming a de facto standard for every enterprise. In my interactions with enterprises adopting the container platform, I come across common questions:
- How does application security work on this platform? What all do I need to secure?
- How do I implement security in pipelines?
- What about vulnerabilities discovered at a later point in time?
- What do newer technologies like Istio Service Mesh bring to the table?
In this session, I will be addressing these commonly asked questions that every enterprise trying to adopt an Enterprise Kubernetes Platform needs to know so that they can make informed decisions.
An introduction to Linux Containers, Namespaces & Cgroups.
Virtual machines and Linux operating principles; constraining the application execution environment; isolating the application working environment.
Introduction to OS-level Virtualization & Containers – Vaibhav Sharma
This presentation contains information about OS-level virtualization and container internals. It reuses other material from SlideShare, which is referenced in the notes of the PPT.
Today, virtualization is one of the most popular approaches for deploying web servers. This technology reduces costs for small businesses. Virtualization is an important aspect of providing cloud services, and it is very attractive even for large businesses.
In this talk we cover features such as Control Groups and Containers, which have been implemented in newer versions of the Linux kernel. Although these features do not provide full virtualization, they deliver many of its benefits with very little overhead at the kernel level. Solutions such as LXC and Docker, built on these features, have achieved good results that are both commercially significant and have security implications and applications.
History and basics of containers, LXC, Docker and Kubernetes. This presentation was given to engineering college students at VIT DevFest 2018. Beginner to intermediate level.
Linux Containers and Docker SHARE.ORG Seattle 2015 – Filipe Miranda
This slide deck shows us an introduction to Linux Containers (LXC) and Docker for Linux on IBM z Systems.
One example of a commercial use of Linux Containers (and Docker) is Red Hat OpenShift, which is also covered at the end.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf – 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I have been wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
UiPath Test Automation using UiPath Test Suite series, part 3 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
First steps on CentOS 7
1. Namespaces, Cgroups and systemd
First steps on
CentOS 7
Marc Cortinas – Production Services – Webops – March 2015
2. Why?
• Motivations:
• 1. Trying to understand why Lennart Poettering was a “little bit” arrogant
• 2. Learn the main changes coming to Linux over the next few years
• 3. Get to know CentOS 7 in depth, now that it is the default distribution at Odigeo
• FOSDEM conference – whats_new_in_systemd,_2015_edition
3. Agenda – Why colors?
----------------------------------------- so far…
Memory Spaces and IPC - dbus
The kernel - udev
Namespaces
Virtualizations
------------------------------------------ more close…
Init systems on Unix
Control groups
• Overview
• Subsystems or resource controllers
• Demo
• Commands
Dbus
AutoFS
------------------------------------------ void main ()
SystemD
• Motivations
• Definition and features
• Overview
• Unit Files, Core components and libraries
• Commands
• Other Components:
1. Udev
2. JournalD
3. NetworkD
4. ConsoleD
5. LoginD
6. TimedateD
7. Systemd-Nspawn
4. Memory Spaces and Inter-Process Communication
User Space – Memory space to run user processes
• Only kernel processes can access a user space
• System prevents one process from interfering with another process
Kernel Space – Memory Space where kernel processes run
• A system call is the only way a user process gains access
• Arguments of a system call are copied from user space to kernel space
• A user process becomes a kernel process while it executes a system call
Inter-process communication, before D-Bus
• Half-duplex UNIX pipes: the sysadmin's best friend
• Named pipes (FIFOs), and UNIX domain sockets (AF_UNIX)
• SysV IPC:
– Message queues
– Semaphores
– Shared memory
Linux Kernel Archs - Amir Hossein
http://www.tldp.org/LDP/lpg/node7.html
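The half-duplex pipe pattern described above can be sketched in a few lines of Python (a minimal illustration using os.pipe() and os.fork(); Linux/Unix only):

```python
import os

# A half-duplex pipe: one read end, one write end.
read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: close the unused read end, write a message, exit.
    os.close(read_fd)
    os.write(write_fd, b"hello from the child\n")
    os.close(write_fd)
    os._exit(0)
else:
    # Parent: close the unused write end, read the child's message.
    os.close(write_fd)
    message = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)
    print(message.decode(), end="")
```

Closing the unused ends matters: the parent's read() only sees EOF once every write descriptor for the pipe has been closed.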
5. The Kernel
Linux Kernel Archs - Amir Hossein
Kernel: modules or subsystems that provide operating-system functions
Microkernel (µkernel): includes only the code necessary for the system to provide the major functionality:
– IPC
– Memory management
– Process management
– I/O management
Flexible, modular, easy to implement
Monolithic kernel: https://en.wikipedia.org/wiki/Monolithic_kernel
- the entire operating system works in kernel space and is alone in supervisor mode
- defines a high-level virtual interface over the computer hardware
- device drivers can be added to the kernel as modules (or not? udev...)
- better performance
Hybrid kernel, nanokernel, picokernel, etc.
6. Namespaces
Namespaces – lightweight process virtualization
• Isolation: Enable a process (or group) to have different views of the system than
other processes
• Much like Zones in Solaris
• No hypervisor layer
• Only one system call added (setns())
• Started in kernel 2.6.23 and finished in 3.8
• 6 namespaces:
– Mount namespaces (CLONE_NEWNS, Linux 2.4.19) isolate the set of filesystem mount points seen by a group of processes
– UTS namespaces (CLONE_NEWUTS, Linux 2.6.19) isolate two system identifiers: nodename and domainname
– IPC namespaces (CLONE_NEWIPC, Linux 2.6.19) isolate certain interprocess communication (IPC) resources, namely System V IPC objects and (since Linux 2.6.30) POSIX message queues
– PID namespaces (CLONE_NEWPID, Linux 2.6.24) isolate the process ID number space
– Network namespaces (CLONE_NEWNET, started in Linux 2.6.24, completed by 2.6.29) provide isolation of the system resources associated with networking
– User namespaces (CLONE_NEWUSER, started in Linux 2.6.23, completed in Linux 3.8) isolate the user and group ID number spaces
http://lwn.net/Articles/531114/
http://www.haifux.org/lectures/299/netLec7.pdf
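The kernel exposes each process's namespace membership under procfs, which gives a quick way to see the namespaces listed above in action (a minimal, Linux-only sketch):

```python
import os

# /proc/<pid>/ns contains one symlink per namespace the process belongs to.
# Two processes share a namespace iff the links point to the same inode,
# e.g. "pid -> pid:[4026531836]".
ns_dir = "/proc/self/ns"
namespaces = {}
for entry in sorted(os.listdir(ns_dir)):
    namespaces[entry] = os.readlink(os.path.join(ns_dir, entry))
    print(f"{entry:8s} -> {namespaces[entry]}")
```

On kernels newer than 3.8 you will see more than the six entries discussed here (e.g. cgroup and time namespaces were added later).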
8. Init systems on Unix
LINKS:
http://en.wikipedia.org/wiki/Init
OS Init System Family
MacOSX LaunchD (from 10.5.1) BSD
NetBSD SysVinit BSD
OpenBSD SysVinit BSD
FreeBSD SysVinit BSD
Debian Upstart/SystemD/SysVinit Linux
Ubuntu Upstart Linux
RHEL6/CentOS6 SysVinit + LSB Linux
RHEL7/CentOS7 SystemD Linux
Solaris SMF Solaris
9. Cgroups
• The project was born at Google in 2006
• It was originally called "process containers"
• Merged into the kernel in release 2.6.24
1) an upstream kernel feature that allows system resources to be partitioned/divided up amongst different processes, or groups of processes
2) user-space tools which handle the kernel control-groups mechanism
Cgroup – a set of tasks with a set of parameters for one or more subsystems
Subsystem – a "resource controller" that schedules a resource or applies per-cgroup limits
Hierarchy – a set of cgroups arranged in a tree
LINKS:
https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt
https://www.youtube.com/watch?v=81j1WF5xEZc
http://fedoraproject.org/wiki/Features/ControlGroups
http://docs.fedoraproject.org/en-US/Fedora/16/html-single/Resource_Management_Guide/index.html
10. Cgroups – Subsystems OR Resource Controllers
• blkio — sets limits on input/output access to and from block devices
• cpu — uses the CPU scheduler to give cgroup tasks access to the CPU; it is mounted together with the cpuacct controller on the same mount
• cpuacct — creates automatic reports on CPU resources used by tasks in a cgroup; it is mounted together with the cpu controller on the same mount
• cpuset — assigns individual CPUs (on a multicore system) and memory nodes to tasks in a cgroup
• devices — allows or denies access to devices for tasks in a cgroup
• freezer — suspends or resumes tasks in a cgroup
• memory — sets limits on memory use by tasks in a cgroup, and generates automatic reports on memory resources used by those tasks
• net_cls — tags network packets with a class identifier (classid) that allows the Linux traffic controller (the tc command) to identify packets originating from a particular cgroup task
• perf_event — enables monitoring of cgroups with the perf tool
• hugetlb — allows the use of large virtual memory pages, and enforces resource limits on these pages
# yum install kernel-doc, then read /usr/share/doc/kernel-doc-<kernel_version>/Documentation/cgroups/
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/cgroups
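The controllers known to the running kernel, and the cgroup(s) the current process belongs to, can be inspected straight from procfs (a Linux-only sketch; on cgroup-v2 systems /proc/self/cgroup collapses to a single unified line):

```python
# /proc/cgroups lists every controller compiled into the running kernel,
# with its hierarchy ID, number of cgroups and enabled flag.
with open("/proc/cgroups") as f:
    controllers = [line.split()[0] for line in f
                   if line.strip() and not line.startswith("#")]
print("controllers:", ", ".join(controllers))

# /proc/self/cgroup shows which cgroup(s) this process is placed in.
with open("/proc/self/cgroup") as f:
    print(f.read().strip())
```
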
12. Cgroups – commands in the libcgroup tools
Description → command:
• install the packages that manage the kernel API → yum install libcgroup libcgroup-tools
• create a persistent file snapshotting the current hierarchy at runtime → cgsnapshot -f /etc/cgconfig.conf
• list all available hierarchies along with their current mount points → lssubsys -am
• mount the net_prio controller to a virtual file system → mount -t cgroup -o net_prio none /sys/fs/cgroup/net_prio
• unmount a controller from the virtual file system → umount /sys/fs/cgroup/controller_name
• create transient cgroups in hierarchies, alternative 1 → cgcreate -t uid:gid -a uid:gid -g controllers:path
• create transient cgroups in hierarchies, alternative 2 → mkdir /sys/fs/cgroup/net_prio/lab1/group1
• remove cgroups → cgdelete [-r] net_prio:/test-subgroup
• set controller parameters → cgset -r parameter=value path_to_cgroup
• copy the parameters of one cgroup into another → cgset --copy-from path_to_source_cgroup path_to_target_cgroup
• set controller parameters permanently → vi /etc/cgconfig.conf ; systemctl stop cgconfig ; systemctl start cgconfig
• move a process into a cgroup → cgclassify -g controllers:path_to_cgroup pidlist
• launch processes in a manually created cgroup → cgexec -g controllers:path_to_cgroup command arguments
• find the controllers available in your kernel → cat /proc/cgroups
• find the mount points of a particular subsystem → lssubsys -m controllers
• list the cgroups → lscgroup
• restrict the output to a specific hierarchy → lscgroup cpuset:adminusers
• display the parameters of specific cgroups → cgget -r parameter list_of_cgroups
13. Now dbus, next kdbus!
Goal: improvements for inter-process communication
Before D-Bus: pipes, named pipes, message queues, semaphores, shared memory.
D-Bus: method-call transactions, signals, properties, OO, broadcasting, introspection, policy, activation, synchronization, type-safe marshalling, security, monitoring, exposed APIs, ... a high-level concept!
D-Bus limitation: 10 copies + 4 complete validations + 4 context switches in a full-duplex transaction; suitable for control but not for payload.
kdbus improvements: 2 or fewer copies + 2 validations + 2 context switches, and more.
D-Bus architecture:
• libdbus – a library that allows two applications to connect to each other and exchange messages
• dbus-daemon – a message-bus daemon executable, built on libdbus
• wrapper libraries based on particular application frameworks
Linux Conf – Lennart Poettering – D-Bus
DBus Freedesktop Project
Linux documentation project (tldp) - IPC
14. AutoFS
• What’s autoFS?
automount is a program for automatically mounting directories on an as-needed basis.
• Why autofs in systemd?
To speed up the boot process by improving parallelization of startup: requests are queued in the kernel until the target process has been properly loaded.
RHEL 7 - Documentation
GIT repository in kernel code for autofs
Ubuntu help for autoFS
Man page for autofs
15. Motivations for SystemD
• Decrease the time used to init the system, solving dependencies better than SysV (cf. launchd)
• Bash was the language used to manage daemons: slow, and its behaviour can change based on environment variables (migrate to C)
• The system needs to mount devices before daemons start (autofs)
• Keep track of processes after the parent dies (cgroups)
• Start services in order and resolve dependencies (Requires|Wants)
• Start only the services required, on demand (by default)
PMO systemD: Lennart Poettering
LINKS:
http://0pointer.de/blog/projects/systemd.html
http://0pointer.de/blog/projects/systemd-update.html
http://0pointer.de/blog/projects/systemd-update-2.html
http://0pointer.de/blog/projects/systemd-update-3.html
16. What’s systemD?
1. Boot system designed to start up the system more efficiently:
– parallelization of the startup process, using sockets (AF_UNIX/AF_INET) and D-Bus
– a suite of programs to manage daemons, avoiding Bash scripts with environment-variable dependencies
2. System administration daemon designed exclusively on top of the Linux kernel API
3. First process started in userspace
4. Framework to manage service and daemon dependencies
5. Daemon processes run in the background, with the suffix -d added
6. Uses cgroups and fanotify to manage resources
7. Uses autofs to queue any "fopen" call request until the mount is ready
8. Keeps track of processes thanks to cgroups
18. Unit Files, Core components and libraries
1. Unit file: a configuration file intended to replace the traditional startup bash scripts.
Service: a process or a group of processes based on one cfg file
Scope: a group of externally created processes, registered with systemd
Slice: a group of hierarchically organized units. Slices do not contain processes; they organize a hierarchy in which scopes and services are placed.
(Default slices: -.slice, system.slice, user.slice, machine.slice)
2. Components
• systemd is a system and service manager for Linux operating systems.
• systemctl may be used to introspect and control the state of the systemd system and service manager.
• systemd-analyze may be used to determine system boot-up performance statistics and retrieve other state and tracing information from the system and service manager.
Unit types: service, socket, device, mount, automount, swap, target, path, timer, snapshot, slice, scope
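As a concrete illustration of the unit-file format (a hypothetical service: the name myapp and its paths are invented for the example):

```ini
# /etc/systemd/system/myapp.service (hypothetical example unit)
[Unit]
Description=Example application daemon
After=network.target

[Service]
ExecStart=/usr/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl daemon-reload` followed by `systemctl enable myapp.service && systemctl start myapp.service` would register and launch it; the [Install] section is what `systemctl enable` acts on.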
19. systemd commands
SysV command or description → systemd command:
• init 3 → systemctl isolate multi-user.target
• service httpd [command] → systemctl [command] httpd
• ls /etc/rc.d/init.d/ → systemctl list-units --all
• chkconfig httpd [on|off] (creates/removes a unit file link in the /usr/lib/systemd/system/ directory; persistent cgroups) → systemctl [enable|disable] httpd
• run the top utility in a service unit in a new slice called test (transient) → systemd-run --unit=toptest --slice=test top -b
• stop the unit non-gracefully with a signal → systemctl kill name.service --kill-who=PID,... --signal=signal
• chkconfig frobozz --add → systemctl daemon-reload
• runlevel → systemctl list-units --type=target
• limit the CPU and memory usage of httpd.service → systemctl set-property httpd.service CPUShares=600 MemoryLimit=500M
• limit the CPU and memory usage of httpd.service temporarily → systemctl set-property --runtime httpd.service CPUShares=600
• recursively show control group contents → systemd-cgls
• show the control group for one resource → systemd-cgls memory
• add cgroup info to ps → psc='ps xawf -eo pid,user,cgroup,args'
• list the dependencies of a target → systemctl show -p "Wants" multi-user.target
• analyze system boot-up performance → systemd-analyze
• show top control groups by their resource usage → systemd-cgtop
• run programs in transient scope or service units → systemd-run
• control the systemd machine manager (LXC or VM) → machinectl
• show the cgroup hierarchy attached to a process → cat /proc/PID/cgroup
20. Other components of systemd
• Udevd: a device manager for the Linux kernel, which handles the /dev directory and all user-space actions when adding/removing devices
• Journald: systemd-journald is a daemon responsible for event logging
• Consoled: systemd-consoled provides a user console daemon, intended to replace the Linux kernel's virtual terminal
• Logind: systemd-logind is a daemon that manages user logins and seats in various ways
• Networkd: networkd allows systemd to perform various networking configurations, with features such as DHCP server or VXLAN support
• Timedated: systemd-timedated is a daemon that can be used to control time-related settings, such as the system time, system time zone, or the selection between UTC and local time for the system clock
• Systemd-nspawn: spawns a namespace container for debugging, testing and building
21. Udev – Device Manager
A device manager for the Linux kernel; the project was born in November 2003 as the successor of devfsd.
udev was introduced in Linux 2.5. In April 2012, udev's codebase was merged into the systemd source tree.
In October 2012, Linus Torvalds criticized Kay Sievers' approach to udev maintenance and bugs related to firmware loading: "Not because firmware loading cannot be done in user space. But simply because udev maintenance since Greg gave it up has gone downhill."
Goal: manage the device-node mapping in the /dev directory, which used to be a static set of files
Udev architecture:
• libudev, a library that allows access to device information; it was incorporated into the systemd software bundle
• the user-space daemon udevd that manages the virtual /dev
• the administrative command-line utility udevadm for diagnostics
Udev features:
• Runs in userspace
• Dynamically creates/removes device files
• Provides consistent naming
• Provides a user-space API
• Kernel 2.6 added the sysfs filesystem in /sys, with all information about devices/filesystems
• /etc/udev/rules.d/*.rules files define rules and post-actions for when the kernel detects a device and its info is populated in sysfs
Device Manager Tutorial - udevadm
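As an illustration of the rules syntax (a hypothetical rule; the file and symlink names are invented for the example):

```
# /etc/udev/rules.d/99-usbdisk.rules (hypothetical example)
# When a USB block device appears, create a stable symlink
# /dev/my_usb_disk in addition to the kernel-assigned name.
SUBSYSTEM=="block", ENV{ID_BUS}=="usb", SYMLINK+="my_usb_disk"
```

The `==` operators are match conditions against the sysfs-populated device properties; `SYMLINK+=` is the resulting action. Rules can be tested without replugging hardware via `udevadm test /sys/class/block/<device>`.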
23. NetworkD – not in CentOS 7 – added on CoreOS
Added to systemd in v209, 20th February 2014; DHCP server and VxLAN support were added in July 2014 in systemd v215.
• Main goal: allows systemd to perform various networking configurations
• Cfg path: /etc/systemd/network
• Enable:
– systemctl enable systemd-networkd.service
– systemctl start systemd-networkd.service
• Cfg file types:
• .link files: networkd performs basic settings on network devices (name of the network interface, MTU, Wake-on-LAN, modified MAC address); a configuration file for systemd-udevd
• .network files: configuration files for systemd-networkd, with the same syntax as .link files plus [Match] and [Network] sections
• .netdev files: if you have to create virtual network devices (bridges, bonded interfaces and VLANs), look no further than networkd
Tip: learn how Linux assigns predictable network interface names
LINKs:
Linux Magazine Example Configurations
CoreOs Documentation Example configurations
Networkd Project Freedesktop
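A minimal .network file might look like this (the interface name and addresses are hypothetical, chosen for the example):

```ini
# /etc/systemd/network/20-wired.network (hypothetical example)
[Match]
Name=enp1s0

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
DNS=192.168.1.1
```

The [Match] section selects which interfaces the file applies to; [Network] supplies the static configuration (replace it with `DHCP=yes` for dynamic addressing).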
24. ConsoleD 1/2
The current status is...
• Linux console (Linus Torvalds, 1991): the system console internal to the kernel; it is a device for I/O of all kernel messages and allows login in single-user mode. There are 2 implementations:
1. Text mode – compatible with PC systems with CGA, EGA, MDA, VGA = LEGACY (2D display array)
2. Framebuffer (fbdev) – a graphics hardware-independent abstraction layer, used by default in modern Linux distributions
• Virtual console: multiplexes the Linux console into several (7) consoles using the VT system, running in kernel space
• Terminal emulator: runs in user space and lets you load graphical environments: GNOME, KDE, etc.
systemd-consoled development wants...
• Released inside systemd v217, October 2014; git commit here
• Main goal: systemd-consoled provides a user console daemon, intended to replace the Linux kernel's virtual terminal, running in userspace
• Uses kmscon, a project born in November 2011. kmscon = KMS (kernel mode setting, the kernel API that performs mode setting) + DRM (the kernel's Direct Rendering Manager for accessing graphics devices)
LINKs:
Wikipedia - Linux console
Wiki Freedesktop.org – kmscon
20 years of CONFIG_VT, according to linux-kernel VT
26. LoginD
• Logind was merged into systemd in v30, released on 1 August 2011
• What logind was built for:
• Keeping track of users and sessions, their processes and their idle state
• Providing PolicyKit-based access for users to operations such as system shutdown or sleep
• Implementing a shutdown/sleep inhibition logic for applications
• Handling of power/sleep hardware keys
• Multi-seat management
• Session switch management
• Device access management for users
• Automatic spawning of text logins (gettys) on virtual console activation, and user runtime directory management
• User sessions are registered in logind via the pam_systemd(8) PAM module (pam_systemd.so), which:
– creates/destroys /run/user/$USER
– sets $XDG_SESSION_ID (one ID per session)
– adds/deletes a systemd scope, copying the skeleton from user.slice
LINKS:
Wiki freedesktop.org – multiseat
Wiki freedesktop.org – logind
manpage freedesktop.org - pam_systemd
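The session bookkeeping described above can be inspected with loginctl, logind's command-line client. A minimal sketch (session IDs and output will differ per system; the session ID "1" is illustrative):

```shell
# List all sessions logind is currently tracking
loginctl list-sessions

# Show selected properties of one session (ID taken from the list above)
loginctl show-session 1 -p Name -p State -p IdleHint

# Inspect the per-user runtime directory created by pam_systemd
ls /run/user/$(id -u)
```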
27. TimedateD
• Timedated was merged into systemd in v30, released on 1 August 2011
• Goal: a daemon that can be used to control time-related settings,
such as the system time, the system time zone, or the choice between
UTC and local time for the system clock. It is accessible through D-Bus.
It controls:
• The system time
• The system timezone
• A boolean controlling whether the system RTC is in local or UTC time
• Whether the systemd-timesyncd.service(8) (NTP) service is
enabled/started or disabled/stopped. See systemd-timedated.service(8)
for more information.
LINKs:
Wiki freedesktop.org – timedated
28. systemd-nspawn and machinectl
• Systemd-nspawn is chroot on steroids
• Goal - Spawn a minimal namespace container for debugging, testing and building
# yum --releasever=20 --nogpg --installroot=/srv/mycontainer --disablerepo='*' --
enablerepo=fedora install systemd passwd yum fedora-release vim-minimal
…
# systemd-nspawn -bD /srv/mycontainer/
[root@fedora20 ~]# machinectl
MACHINE CONTAINER SERVICE
mycontainer container nspawn
LINKS
Lennart Poettering, Linux Conf 2013
Wiki - fedoraproject.org – SystemdLightweightContainers
Wiki - freedesktop.org - VirtualizedTesting
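Once the container from the example above is running, machinectl can manage it directly. A sketch, assuming a container registered as "mycontainer" (as in the listing on the slide):

```shell
# Show runtime details of the registered container
machinectl status mycontainer

# Open an interactive login prompt inside the container
machinectl login mycontainer

# Cleanly power the container off
machinectl poweroff mycontainer
```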
29. What’s new in SystemD? 2015…
Main changes announced in FOSDEM
• new tool systemd-hwdb for querying the hardware metadata database, decoupled from the old libudev library
• machinectl gained support for two new "copy-from" and "copy-to" commands for copying files from/to a running container
• machinectl gained support for a new "bind" command to bind mount host directories into local containers
• Routes configured with networkd may now be assigned a scope in .network files
• networkd may now configure IPv6 link-local addressing in addition to IPv4 link-local addressing
• The IPv6 "token" for use in SLAAC may now be configured for each .network interface in networkd
• When the user presses Ctrl-Alt-Del more than 7x within 2s an immediate reboot is triggered
• networkd gained support for creating "ipvlan", "gretap", "ip6gre", "ip6gretap" and "ip6tnl" network devices. Moreover, it
gained support for collecting LLDP network announcements
• systemd-nspawn's --image= option is now capable of dissecting and booting MBR and GPT disk images. This allows running
cloud images from major distributions directly with systemd-nspawn, without modification
• networkd .network files gained support for configuring per-link IPv4/IPv6 packet forwarding as well as IPv4 masquerading
• The default TERM variable for units connected to a terminal is now vt220 rather than vt102
• systemd now provides a way to store file descriptors per-service in PID 1. This is useful for daemons to ensure that the fds
they require are not lost across a daemon restart
• The directory /var/lib/containers/ has been deprecated and replaced by /var/lib/machines/
CONCLUSIONS: They are working mainly on improving systemd-nspawn (with btrfs) and networkd.
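The per-link forwarding and masquerading options mentioned in the list could look like this in a .network file. A sketch only; the matched interface pattern is illustrative (ve-* matches the host side of nspawn veth links):

```ini
# /etc/systemd/network/container-veth.network
[Match]
Name=ve-*

[Network]
# Enable per-link IPv4/IPv6 packet forwarding
IPForward=yes
# NAT container traffic behind the host address
IPMasquerade=yes
```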
Timeline last code releases
Systemd v218 – 11 dec 2014 - http://cgit.freedesktop.org/systemd/systemd/tag/?id=v218
FOSDEM 2015 – 1 Feb 2015 - https://fosdem.org/2015/schedule/event/whats_new_in_systemd,_2015_edition/
maybe, someday, the FOSDEM video will appear at http://video.fosdem.org/2015/devroom-distributions/
Systemd v219 – 16 Feb 2015 - http://cgit.freedesktop.org/systemd/systemd/tag/?id=v219
Linux 4.0 – 22 Feb 2015 - http://lkml.iu.edu/hypermail/linux/kernel/1502.2/04059.html
30. • Thanks... Questions?
• Tips:
1. Wiki freedesktop.org TipsAndTricks
2. Trick to know the systemd version:
fedora20 ~]# /usr/bin/timedatectl --version
systemd 208