Containerd Internals: Building a Core Container Runtime discusses the architecture and internals of Containerd. It provides a brief history of Containerd and explains its goals of providing a clean API, full OCI support, and decoupled components. It describes Containerd's components like runtimes, storage, and snapshots. It then explains the processes of pulling an image, starting a container, and getting Prometheus metrics.
In the Cloud Native community, eBPF is gaining popularity, and it can often be the best solution for challenges that call for deep observability of the system. eBPF is currently being embraced by major players.
Mydbops co-founder Kabilesh P.R (MySQL and Mongo consultant) illustrates debugging Linux issues with eBPF: a brief look at BPF and eBPF, BPF internals, and the tools in action for faster resolution.
Linux Performance Analysis: New Tools and Old Secrets - Brendan Gregg
Talk for USENIX/LISA2014 by Brendan Gregg, Netflix. At Netflix performance is crucial, and we use many high to low level tools to analyze our stack in different ways. In this talk, I will introduce new system observability tools we are using at Netflix, which I've ported from my DTraceToolkit, and are intended for our Linux 3.2 cloud instances. These show that Linux can do more than you may think, by using creative hacks and workarounds with existing kernel features (ftrace, perf_events). While these are solving issues on current versions of Linux, I'll also briefly summarize the future in this space: eBPF, ktap, SystemTap, sysdig, etc.
Designing a complete CI/CD pipeline using Argo Events, Workflow and CD products - Julian Mazzitelli
https://www.youtube.com/watch?v=YmIAatr3Who
Presented at Cloud and AI DevFest GDG Montreal on September 27, 2019.
Are you looking to get more flexibility out of your CI/CD platform? Interested in how GitOps fits into the mix? Learn how Argo CD, Workflows, and Events can be combined to craft custom CI/CD flows, all while staying Kubernetes native, enabling you to leverage existing observability tooling.
[KubeCon EU 2022] Running containerd and k3s on macOS - Akihiro Suda
https://sched.co/ytpi
It has been very hard to use Mac for developing containerized apps. A typical way is to use Docker for Mac, but it is not FLOSS. Another option is to install Docker and/or Kubernetes into VirtualBox, often via minikube, but it doesn't propagate localhost ports, and VirtualBox also doesn't support the ARM architecture. This session will show how to run containerd and k3s on macOS, using Lima and Rancher Desktop. Lima wraps QEMU in a simple CLI, with neat features for container users, such as filesystem sharing and automatic localhost port forwarding, as well as DNS and proxy propagation for enterprise networks. Rancher Desktop wraps Lima with k3s integration and GUI.
- Archeology: before and without Kubernetes
- Deployment: kube-up, DCOS, GKE
- Core Architecture: the apiserver, the kubelet and the scheduler
- Compute Model: the pod, the service and the controller
An Operator is an application that encodes the domain knowledge of the application and extends the Kubernetes API through custom resources. They enable users to create, configure, and manage their applications. Operators have been around for a while now, and that has allowed for patterns and best practices to be developed.
In this talk, Lili will explain what operators are in the context of Kubernetes and present the different tools out there to create and maintain operators over time. She will end by demoing the building of an operator from scratch, and also using the helper tools available out there.
Container Network Interface: Network Plugins for Kubernetes and beyond - KubeAcademy
With the rise of modern containers comes new problems to solve – especially in networking. Numerous container SDN solutions have recently entered the market, each best suited for a particular environment. Combined with multiple container runtimes and orchestrators available today, there exists a need for a common layer to allow interoperability between them and the network solutions.
As different environments demand different networking solutions, multiple vendors and viewpoints look to a specification to help guide interoperability. Container Network Interface (CNI) is a specification started by CoreOS, with input from the wider open source community, aimed at making network plugins interoperable between container execution engines. It aims to be as common and vendor-neutral as possible to support a wide variety of networking options, from MACVLAN to modern SDNs such as Weave and flannel.
CNI is growing in popularity. It got its start as a network plugin layer for rkt, a container runtime from CoreOS. Today rkt ships with multiple CNI plugins allowing users to take advantage of virtual switching, MACVLAN and IPVLAN as well as multiple IP management strategies, including DHCP. CNI is getting even wider adoption with Kubernetes adding support for it. Kubernetes accelerates development cycles while simplifying operations, and with support for CNI is taking the next step toward a common ground for networking. For continued success toward interoperability, Kubernetes users can come to this session to learn the CNI basics.
This talk will cover the CNI interface, including an example of how to build a simple plugin. It will also show Kubernetes users how CNI can be used to solve their networking challenges and how they can get involved.
KubeCon schedule link: http://sched.co/4VAo
Traditional virtualization technologies have been used by cloud infrastructure providers for many years to provide isolated environments for hosting applications. These technologies use full-blown operating system images to create virtual machines (VMs); in this architecture, each VM needs its own guest operating system to run application processes. More recently, with the introduction of the Docker project, the Linux Container (LXC) virtualization technology became popular and attracted attention. Unlike VMs, containers do not need a dedicated guest operating system to provide OS-level isolation; rather, they can provide the same level of isolation on top of a single operating system instance.
An enterprise application may need to run on a server cluster to handle high request volumes. Running an entire server cluster in Docker containers on a single Docker host introduces the risk of a single point of failure. Google started a project called Kubernetes to solve this problem. Kubernetes provides a cluster of Docker hosts for managing Docker containers in a clustered environment. It provides an API on top of the Docker API for managing Docker containers on multiple Docker hosts, with many more features.
USENIX LISA2021 talk by Brendan Gregg (https://www.youtube.com/watch?v=_5Z2AU7QTH4). This talk is a deep dive that describes how BPF (eBPF) works internally on Linux, and dissects some modern performance observability tools. Details covered include the kernel BPF implementation: the verifier, JIT compilation, and the BPF execution environment; the BPF instruction set; different event sources; and how BPF is used by user space, using bpftrace programs as an example. This includes showing how bpftrace is compiled to LLVM IR and then BPF bytecode, and how per-event data and aggregated map data are fetched from the kernel.
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "
Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
LinuxCon 2015 Linux Kernel Networking Walkthrough - Thomas Graf
This presentation features a walk through the Linux kernel networking stack for users and developers. It covers insights into both existing essential networking features and recent developments, and shows how to use them properly. Our starting point is the network card driver as it feeds a packet into the stack. We will follow the packet as it traverses various subsystems such as packet filtering, routing, protocol stacks, and the socket layer. We will pause here and there to look into concepts such as network namespaces, segmentation offloading, TCP small queues, and low-latency polling, and will discuss how to configure them.
SOSCON 2019.10.17
What are the methods for packet processing on Linux, and how fast is each of them? In this presentation, we will learn how to handle packets on Linux (user space, socket filter, netfilter, tc), and compare performance with an analysis of where each packet processing method hooks into the network stack (hook point). We will also discuss packet processing using XDP, an in-kernel fast path recently added to the Linux kernel. eXpress Data Path (XDP) is a high-performance, programmable network data path within the Linux kernel. XDP sits at the lowest software-accessible level of the network stack, the point at which the driver receives the packet. By using the eBPF infrastructure at this hook point, the network stack can be extended without modifying the kernel.
Daniel T. Lee (Hoyeon Lee)
@danieltimlee
Daniel T. Lee currently works as a Software Engineer at Kosslab and contributes to the Linux kernel BPF project. He is interested in cloud, Linux networking, and tracing technologies, and likes to analyze the kernel's internals using BPF.
The BPF, or Berkeley Packet Filter, mechanism was first introduced in Linux in 1997, in version 2.1.75. It has seen a number of extensions over the years. Recently, in versions 3.15 - 3.19, it received a major overhaul which drastically expanded its applicability. This talk will cover how the instruction set looks today and why: its architecture, capabilities, interface, and just-in-time compilers. We will also talk about how it is being used in different areas of the kernel, such as tracing and networking, and about future plans.
Containerd internals: building a core container runtime - Docker, Inc.
In this talk, we’ll give a brief overview of the OpenWhisk serverless (function-as-a-service) framework, which initially used the full Docker container engine as the execution vehicle for invoking user functions via containers. After several performance and stability challenges, the project decided to assess the various layers of the Docker engine (containerd and runC) as potential options for the function invoker. Out of that work came an open source project, bucketbench, which can be used to benchmark container lifecycle operations (e.g., start, stop, kill, remove, pause, unpause) and compare the multithreaded operation throughput and stability of each candidate engine.
This talk will provide details on the bucketbench project, explain how it has been used to generate performance data for these container runtimes, and share lessons learned along the way that greatly impact container runtime performance, including bottlenecks in the Linux kernel.
In this talk you’ll learn how you can use bucketbench for your own performance tuning or assessment of container runtimes and how you can collaborate on improvements to the bucketbench project.
containerd is an industry-standard core container runtime with an emphasis on simplicity, robustness and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc..
containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users.
OSDC 2018 | Three years running containers with Kubernetes in Production by T... - NETWAYS
The talk gives a state-of-the-art update on experiences with deploying applications on Kubernetes at scale. Whether in clouds or on premises, Kubernetes has taken over the leading role as a container operating system. The central paradigm of stateless containers connected to storage and services is the core of Kubernetes. However, it can be extended to distributed databases, Machine Learning, and Windows VMs in Kubernetes. All of these applications were considered edge cases a few years ago, but they are going more and more mainstream today.
Method of NUMA-Aware Resource Management for Kubernetes 5G NFV Cluster - byonggon chun
Introduces the container runtime environment set up with Kubernetes and various CRI runtimes (Docker, containerd, CRI-O), and the method of NUMA-aware resource management (CPU Manager, Topology Manager, etc.) for CNFs (Containerized Network Functions) within Kubernetes, along with related issues.
Docker London Meetup: Docker Engine Evolution - Phil Estes
A meetup talk on the evolution of the Docker engine from 2014-2019, including the refactoring and spin out of OCI runc and CNCF containerd codebases. This talk was given at the Docker London meetup group on Thursday, 31st January, 2019.
DockerCon 2022 - From legacy to Kubernetes, securely & quickly - Eric Smalling
You’ve been developing software for years and now your team is ready to take the plunge into orchestrated containers and Kubernetes. You’ve learned about containers, images, and Dockerfiles, but standing up a Kubernetes cluster and actually running your app in it seems like a daunting task.
In this session, we’ll go over the basics to get your app up and running in Kubernetes right on your own workstation using Docker Desktop. On the way, we’ll cover some of the security aspects you need to keep in mind and show you how to implement them in your Kubernetes manifests.
We’ll go over:
1.) Kubernetes basics, including pods, deployments, and services
2.) Moving a legacy app into a container and running it in Kubernetes
3.) Some security best practices to watch out for — and what can happen if you don’t
4.) Implementing those best practices to defend against and limit the blast radius of an attack
This talk picks up the topic of "Containers from Scratch" and shows what container runtimes actually are and how they can make our day-to-day work easier. It also presents the differences between several container runtimes.
Nebulaworks invited Bitnami software engineer Adnan Abdulhussein to present "The App Developer's Kubernetes Toolbox."
Details:
If you're developing applications on top of Kubernetes, you may be feeling overwhelmed with the vast number of development tools in the ecosystem at your disposal. Kubernetes is growing at a rapid pace, and it's becoming impossible to keep up with the latest and greatest development environments, debuggers, and build test and deployment tools.
Learn:
• The current state of development in Kubernetes
• Comparison of shared and local Kubernetes development environments
• Overview of different development tools in the ecosystem
• Which tools make sense in common scenarios
• How Bitnami uses Kubernetes as a development environment
DockerCon 2019 took place in San Francisco, from April 29th to May 2nd.
Open Source @ Dockercon Summit took place Thursday, May 2nd.
Dockercon 2019 was a success with 5000+ participants. We are planning a recap Meetup to highlight overall announcements, new features & news from the event:
- new CLI plugins announcement (docker app, docker buildx, docker pipeline etc);
- features of Docker Enterprise 3.0 ( assemble, template etc)
- takeaways; useful links, demos, tips and tricks and of course all videos from all the sessions
- cool stuff from the Open summit, like the powerful buildkit
- Demo: Multi-arch Docker Builds
Under this Meetup, we'll discuss news / new feature announcements during Dockercon and their implications for the ecosystem and end user. In addition to the DockerCon recap, we'll have the usual opportunities for networking and Q&A. We will look to answer any questions you have about Dockercon at this meetup.
We invite all of our members to come -- whether you're a beginner or an experienced user of containers. Don't forget to RSVP for this event so we can make sure we have plenty of space for everyone. Save the date for the Docker Timisoara Meetup on May 23rd @ CoWork The Garden!
containerd: the universal container runtime - Docker, Inc.
containerd is an industry-standard core container runtime with an emphasis on simplicity, robustness and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc..
containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users.
containerd includes a daemon exposing gRPC API over a local UNIX socket. The API is a low-level one designed for higher layers to wrap and extend. It also includes a barebone CLI (ctr) designed specifically for development and debugging purpose. It uses runC to run containers according to the OCI specification. The code can be found on GitHub, and here are the contribution guidelines.
containerd is based on the Docker Engine’s core container runtime to benefit from its maturity and existing contributors.
Kubernetes for java developers - Tutorial at Oracle Code One 2018 - Anthony Dahanne
You’re a Java developer? Already familiar with Docker? Want to know more about Kubernetes and its ecosystem for developers? During this session, you’ll get familiar with core Kubernetes concepts (pods, deployments, services, volumes, and so on) before seeing the most-popular and most-productive Kubernetes tools in action, with a special focus on Java development. By the end of the session, you’ll have a better understanding of how you can leverage Kubernetes to speed up your Java deployments on-premises or to any cloud.
Whose Job Is It Anyway? Kubernetes, CRI, & Container Runtimes - Phil Estes
A talk given at the Cloud Native London meetup, February 6, 2018, on the role of container runtimes in Kubernetes, the introduction of the Container Runtime Interface (CRI), and the history of containerd and its use as a CRI-implementing container runtime for Kubernetes.
Webinar: Enabling Microservices with Containers, Orchestration, and MongoDB - MongoDB
Want to try out MongoDB on your laptop? Execute a single command and you have a lightweight, self-contained sandbox; another command removes all trace when you're done. Need an identical copy of your application stack in multiple environments? Build your own container image and then your entire development, test, operations, and support teams can launch an identical clone environment.
Containers are revolutionizing the entire software lifecycle: from the earliest technical experiments and proofs of concept through development, test, deployment, and support. Orchestration tools manage how multiple containers are created, upgraded and made highly available. Orchestration also controls how containers are connected to build sophisticated applications from multiple, microservice containers.
This webinar introduces the concepts behind containers and orchestration, then explains the available technologies and how to use them with MongoDB. Finally, you will see a demonstration of exactly how to create a MongoDB replica set on Docker and Kubernetes within the Google Cloud.
Kubernetes - how to orchestrate containers - inovex GmbH
http://www.meetup.com/Docker-Karlsruhe/events/220797663/
More meetups from inovex:
http://www.meetup.com/inovex-karlsruhe
http://www.meetup.com/inovex-munich
http://www.meetup.com/inovex-cologne
Docker Meetup - Melbourne 2015 - Kubernetes Deep Dive - Ken Thompson
Presentation given at the October 2015 Docker Meetup in Melbourne. A deep dive into Kubernetes networking and storage and how this is being utilised in OpenShift 3.
A Survey of Container Security in 2016: A Security Update on Container Platforms - Salman Baset
This talk is an update on container security in 2016. It describes the security measures that containers provide, shows how containers provide out-of-the-box security measures that are prone to configuration errors when running applications directly on the host, and finally lists the ongoing container security work in the community.
Similar to Containerd Internals: Building a Core Container Runtime (20)
Enabling Security via Container Runtimes - Phil Estes
A talk given at the Google-hosted Container Security Summit on Wednesday, February 12th, 2020 in Seattle, Washington. This talk covered the impact of work done at the lower-level runtimes layer and up through layers like cri-o, containerd, and Docker to bring specific security features to overall platforms like Kubernetes.
Extended and embedding: containerd update & project use cases - Phil Estes
A talk given at FOSDEM 2020 in the containers devroom on the current status of the CNCF containerd project as well as a dive into the ways users are extending and embedding containerd in other platforms and projects.
Cloud Native TLV Meetup: Securing Containerized Applications Primer - Phil Estes
A talk given on Tuesday, January 28th, 2020 at the Tel Aviv, Israel Cloud Native meetup covering the core concepts of how to secure containerized applications in a Kubernetes context.
Securing Containerized Applications: A Primer - Phil Estes
A talk given at Devoxx Morocco on Wednesday, November 13, 2019. In this talk a very insecure sample (demo) application is used to explain the various security principles application developers can apply when using containers and Kubernetes--from image sourcing, content, scanning to resource controls, attack surface mitigation, and reducing privilege for containers.
Securing Containerized Applications: A Primer - Phil Estes
A talk given at Open Source Summit Europe in Lyon, France on Tuesday, October 29th, 2019. In this talk we try and focus on the key areas that an application developer can influence with regards to image and runtime security, focused on using Kubernetes as the orchestrator for a containerized application.
Let's Try Every CRI Runtime Available for Kubernetes - Phil Estes
A talk given at KubeCon/CloudNativeCon EU in Barcelona, Spain on May 23, 2019. In this talk Phil presented the explosion of OCI-compliant CRI-enabled runtimes that can be used underneath Kubernetes, and demonstrated several of them live.
CraftConf 2019: CRI Runtimes Deep Dive: Who Is Running My Pod? - Phil Estes
A talk given at Craft Conf in Budapest, Hungary on May 10th, 2019. In this talk, Phil walked through the history of the need for a Container Runtime Interface (CRI) in Kubernetes, followed by an overview of all available CRI implementations, focusing on containerd, the CNCF core container runtime used in many clouds and projects. Phil demonstrated the "layers" of interaction from Kubernetes API, to CRI API to a container runtime's native API using an IBM Cloud Kubernetes cluster using containerd 1.2.6.
JAX Con 2019: Containers. Microservices. Cloud. Open Source. Fantasy or Reali... - Phil Estes
A keynote given at JAX Con 2019 on May 7th in Mainz, Germany. In this keynote address, Phil presented four "buzzwords": containers, cloud, microservices, and open source and compared those technology areas against three main needs--speed, security, and efficiency--which seem to be common among enterprises today. Phil gives real world examples from IBM Cloud customers as well as detailing IBM's own transformation to a cloud native, container first approach to our own service delivery.
Giving Back to Upstream | DockerCon 2019 - Phil Estes
Giving Back to Upstream: An open source beginner's primer is a talk presented at DockerCon 2019 in San Francisco on April 30, 2019. In this talk, Phil Estes presented his story of getting involved in the container open source ecosystem, and provides a set of "open source 101" tips and guidance for those wanting to participate in open source contribution.
What's Running My Containers? A review of runtimes and standards - Phil Estes
A talk given at Open Source Leadership Summit (OSLS) on Thursday, March 14th in Half Moon Bay, CA. In this talk the current status of the Open Container Initiative (OCI) standards as well as the Kubernetes Container Runtime Interface (CRI) were presented, with a view towards how these components have provided a level playing field with significant choice when it comes to container runtimes for use in Kubernetes, as well as interoperability per the OCI standards.
CRI Runtimes Deep-Dive: Who's Running My Pod!? - Phil Estes
A talk given at QCon NYC on Wednesday, June 27, 2018 in the Container track, focused on helping developers understand the inner workings of pluggable container runtimes in the Kubernetes world. The second half of this talk is not available in slide form, but should be available via QCon video. The non-slide talk content included hands-on-keyboard demonstrations of various tools which can be used to investigate and introspect kubelet and pod -> container runtime boundaries and details, all shown in IBM Cloud using the containerd runtime underneath a Kubernetes 1.11 cluster.
Docker Athens: Docker Engine Evolution & Containerd Use Cases - Phil Estes
These slides are from a talk presented at the Docker Athens meetup on Thursday, May 31, 2018. They start by covering the evolution of the Docker engine of 2014/2015 into the separate components of OCI runc, (now) CNCF containerd, and the Docker client and daemon projects. Finally, various use cases for the CNCF containerd "core container runtime" project are detailed, from the Docker engine itself to serverless frameworks like OpenWhisk, to the container runtime interface (CRI) within Kubernetes.
It's 2018. Are My Containers Secure Yet!? - Phil Estes
A talk given at DevOps Pro Vilnius on March 15, 2018 about container security. In this talk we discussed the core topics around the container ecosystem (host, runtime, image) applicable to both Docker and Kubernetes, as well as discussing usable security/secure by default, and defense in depth principles. Also discussed were security futures like Project Grafeas, libentitlement, LinuxKit concepts, and trusted/untrusted container runtimes in Kubernetes.
Docker Engine Evolution: From Monolith to Discrete Components - Phil Estes
A talk given on Tuesday and Wednesday the 27th and 28th of February 2018 at the Docker Mountain View and Docker SF meetup groups. In this talk, Docker Captain Phil Estes provides a history of the Docker engine from its early days as a single statically linked binary providing all the Docker engine functions to today's Moby and Docker CE projects comprising multiple projects and layers, including the Open Container Initiative (OCI) specifications and runC implementation, and the Cloud Native Computing Foundation (CNCF) containerd project. This talk also describes how these lower layer components spun out from Docker are being used to enhance other projects and offerings in the container ecosystem.
An Open Source Story: Open Containers & Open Communities - Phil Estes
A talk given at All Thing Open's Open Source 101 event at NC State University, Raleigh, North Carolina on Saturday, 17th February, 2018.
This talk covered some interesting history lessons of the Docker open source project and inter-vendor tensions. If you were not at this talk do not read intent into these slides as this was truly an attempt at a "blame-free" post-mortem of the important topics of open source, governance, and foundations as it related to the extremely popular Docker open source project.
Presentation given on Sunday, February 4th, 2018 in the containers devroom at FOSDEM 2018. This presentation covers the containerd project background, history, architecture, and current status as a CNCF project used by Docker, Kubernetes, and other projects requiring a stable, performant core container runtime.
A talk given on December 6, 2017 at KubeCon/CloudNativeCon in Austin, Texas. In this talk, Phil talked briefly about containerd history and design, but the bulk of the talk was a live coding demo of creating a simple client for containerd to learn about the clean and simple API design for the client library and gRPC services. The GitHub project https://github.com/estesp/examplectr has the code and sample LinuxKit assembly used for the code and example client demo.
Bucketbench: Benchmarking Container Runtime Performance - Phil Estes
A talk presented at the Moby Summit, Los Angeles (a co-located event with the Open Source Summit North America) on Thursday, September 14, 2017. In this talk, an open source tool, bucketbench, was presented as a way to benchmark container runtimes to compare performance impacts of changes in the runtime or changes to the configuration of Docker, runC, or containerd, the three runtimes currently supported in the bucketbench project.
Container Runtimes: Comparing and Contrasting Today's Engines - Phil Estes
A webinar presented for the {code} Community on August 30, 2017. In this talk, we looked at the sphere of modern container runtimes that start with Docker's emergence in 2013/2014 to today's additions of rkt, OCI's runc, containerd, cri-o, and Cloud Foundry's garden-runc project, many of them consolidating around the OCI standard for container runtime and image specifications.
2. A Brief History
APRIL 2016: containerd "0.2" announced with Docker 1.11, as the management/supervisor for the OCI runc executor
DECEMBER 2016: announced expansion of the containerd OSS project; containerd 1.0, a core container runtime project for the industry
MARCH 2017: containerd project contributed to CNCF
3. Why Containerd 1.0?
(Diagram: containerd layered above runc)
▪ Continue projects spun out from monolithic Docker engine
▪ Expected use beyond Docker engine (Kubernetes CRI)
▪ Donation to foundation for broad industry collaboration
▫ Similar to runc/libcontainer and the OCI
4. Technical Goals/Intentions
▪ Clean gRPC-based API + client library
▪ Full OCI support (runtime and image spec)
▪ Stability and performance with tight, well-defined core of container function
▪ Decoupled systems (image, filesystem, runtime) for pluggability, reuse
5. Requirements
- A la carte: use only what is required
- Runtime agility: fits into different platforms
- Pass-through container configuration (direct OCI)
- Decoupled
- Use known-good technology
- OCI container runtime and images
- gRPC for API
- Prometheus for Metrics
8. Containerd: Rich Go API
Getting Started
https://github.com/containerd/containerd/blob/master/docs/getting-started.md
GoDoc
https://godoc.org/github.com/containerd/containerd
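For a sense of what that API looks like in practice, the following is a minimal sketch (not from the original slides) of connecting to a local containerd daemon with the Go client and pulling an image; the default socket path and the "example" namespace are assumptions for illustration.

package main

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // connect to the daemon over its local gRPC socket (default path assumed)
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // every containerd resource lives inside a namespace; "example" is arbitrary
    ctx := namespaces.WithNamespace(context.Background(), "example")

    // pull an image and unpack its layers into the default snapshotter
    image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("pulled %s", image.Name())
}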
10. # HELP container_blkio_io_service_bytes_recursive_bytes The blkio io service bytes recursive
# TYPE container_blkio_io_service_bytes_recursive_bytes gauge
container_blkio_io_service_bytes_recursive_bytes{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Async"} 1.07159552e+08
container_blkio_io_service_bytes_recursive_bytes{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Read"} 0
container_blkio_io_service_bytes_recursive_bytes{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Sync"} 81920
container_blkio_io_service_bytes_recursive_bytes{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Total"} 1.07241472e+08
container_blkio_io_service_bytes_recursive_bytes{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Write"} 1.07241472e+08
# HELP container_blkio_io_serviced_recursive_total The blkio io servied recursive
# TYPE container_blkio_io_serviced_recursive_total gauge
container_blkio_io_serviced_recursive_total{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Async"} 892
container_blkio_io_serviced_recursive_total{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Read"} 0
container_blkio_io_serviced_recursive_total{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Sync"} 888
container_blkio_io_serviced_recursive_total{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Total"} 1780
container_blkio_io_serviced_recursive_total{container_id="foo4",device="/dev/nvme0n1",major="259",minor="0",namespace="default",op="Write"} 1780
# HELP container_cpu_kernel_nanoseconds The total kernel cpu time
# TYPE container_cpu_kernel_nanoseconds gauge
container_cpu_kernel_nanoseconds{container_id="foo4",namespace="default"} 2.6e+08
# HELP container_cpu_throttle_periods_total The total cpu throttle periods
# TYPE container_cpu_throttle_periods_total gauge
container_cpu_throttle_periods_total{container_id="foo4",namespace="default"} 0
# HELP container_cpu_throttled_periods_total The total cpu throttled periods
# TYPE container_cpu_throttled_periods_total gauge
container_cpu_throttled_periods_total{container_id="foo4",namespace="default"} 0
# HELP container_cpu_throttled_time_nanoseconds The total cpu throttled time
# TYPE container_cpu_throttled_time_nanoseconds gauge
container_cpu_throttled_time_nanoseconds{container_id="foo4",namespace="default"} 0
# HELP container_cpu_total_nanoseconds The total cpu time
# TYPE container_cpu_total_nanoseconds gauge
container_cpu_total_nanoseconds{container_id="foo4",namespace="default"} 1.003301578e+09
# HELP container_cpu_user_nanoseconds The total user cpu time
# TYPE container_cpu_user_nanoseconds gauge
container_cpu_user_nanoseconds{container_id="foo4",namespace="default"} 7e+08
# HELP container_hugetlb_failcnt_total The hugetlb failcnt
# TYPE container_hugetlb_failcnt_total gauge
container_hugetlb_failcnt_total{container_id="foo4",namespace="default",page="1GB"} 0
container_hugetlb_failcnt_total{container_id="foo4",namespace="default",page="2MB"} 0
# HELP container_hugetlb_max_bytes The hugetlb maximum usage
# TYPE container_hugetlb_max_bytes gauge
container_hugetlb_max_bytes{container_id="foo4",namespace="default",page="1GB"} 0
container_hugetlb_max_bytes{container_id="foo4",namespace="default",page="2MB"} 0
# HELP container_hugetlb_usage_bytes The hugetlb usage
# TYPE container_hugetlb_usage_bytes gauge
container_hugetlb_usage_bytes{container_id="foo4",namespace="default",page="1GB"} 0
container_hugetlb_usage_bytes{container_id="foo4",namespace="default",page="2MB"} 0
# HELP container_memory_active_anon_bytes The active_anon amount
# TYPE container_memory_active_anon_bytes gauge
container_memory_active_anon_bytes{container_id="foo4",namespace="default"} 2.658304e+06
# HELP container_memory_active_file_bytes The active_file amount
# TYPE container_memory_active_file_bytes gauge
container_memory_active_file_bytes{container_id="foo4",namespace="default"} 7.319552e+06
# HELP container_memory_cache_bytes The cache amount used
# TYPE container_memory_cache_bytes gauge
container_memory_cache_bytes{container_id="foo4",namespace="default"} 5.0597888e+07
# HELP container_memory_dirty_bytes The dirty amount
Metrics
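These samples come from containerd's built-in Prometheus endpoint. As a hedged sketch (the listen address below is an example choice, not a required value), enabling it is a small config.toml change, after which the endpoint can be scraped over HTTP at /v1/metrics:

# /etc/containerd/config.toml (fragment) - enable the Prometheus metrics endpoint
[metrics]
  address = "127.0.0.1:1338"    # example listen address; any free host:port works

A Prometheus scrape job (or a quick curl against http://127.0.0.1:1338/v1/metrics) then returns output like the sample above.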
23. Pulling an Image
1. Resolve manifest or index (manifest list)
2. Download all the resources referenced by the manifest
3. Unpack layers into snapshots
4. Register the mappings between manifests and constituent resources
24. Pulling an Image
Data flow (diagram): Pull drives a Fetch of content from the Remote into the Content store and an Unpack of layers into Snapshots (yielding Mounts), registering Images and emitting Events along the way.
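Continuing the earlier Go sketch (an illustrative fragment, not slide content; client, ctx, and the image reference are assumed from that sketch), the split between image metadata, the content store, and snapshots is visible from the client side:

// look up the image record that the pull registered
img, err := client.GetImage(ctx, "docker.io/library/redis:alpine")
if err != nil {
    return err
}
// the image record points at its manifest descriptor in the content store
fmt.Printf("manifest digest: %s\n", img.Target().Digest)

// check whether the layers were unpacked into the named snapshotter
unpacked, err := img.IsUnpacked(ctx, "overlayfs")
if err != nil {
    return err
}
fmt.Printf("unpacked into overlayfs: %v\n", unpacked)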
25. Starting a Container
1. Initialize a root filesystem (RootFS) from snapshot
2. Setup OCI configuration (config.json)
3. Use metadata from container and snapshotter to specify config and mounts
4. Start process via the task service
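As a rough sketch of how these steps map onto the Go client (hedged: this uses today's package layout, where the spec helpers live in the oci package and I/O setup in the cio package; client, ctx, and image are assumed from the earlier sketch):

// steps 1 and 3: create a read-write snapshot from the image and record it,
// together with the snapshotter metadata, in the container object;
// step 2: generate an OCI runtime spec (config.json) seeded from the image config
container, err := client.NewContainer(ctx, "redis-server",
    containerd.WithNewSnapshot("redis-server-snapshot", image),
    containerd.WithNewSpec(oci.WithImageConfig(image)),
)
if err != nil {
    return err
}
// step 4: the task service creates and starts the actual process
task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
if err != nil {
    return err
}
if err := task.Start(ctx); err != nil {
    return err
}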
27. Example: Run a Container
Via ctr client:
$ export CONTAINERD_NAMESPACE=example
$ ctr run -t docker.io/library/redis:alpine redis-server
$ ctr c
...
// create our container object and config
container, err := client.NewContainer(ctx,
    "redis-server",
    containerd.WithImage(image),
    containerd.WithNewSpec(containerd.WithImageConfig(image)),
)
defer container.Delete()

// create a task from the container
task, err := container.NewTask(ctx, containerd.Stdio)
defer task.Delete(ctx)

// make sure we wait before calling start
exitStatusC, err := task.Wait(ctx)

// call start on the task to execute the redis server
if err := task.Start(ctx); err != nil {
    return err
}
28. Example: Kill a Task
Via ctr client:
$ export CONTAINERD_NAMESPACE=example
$ ctr t kill redis-server
$ ctr t ls
...
// make sure we wait before calling start
exitStatusC, err := task.Wait(ctx)

time.Sleep(3 * time.Second)

if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
    return err
}

// retrieve the process exit status from the channel
status := <-exitStatusC
code, _, err := status.Result()
if err != nil {
    return err
}
// print out the exit code from the process
fmt.Printf("redis-server exited with status: %d\n", code)
29. Example: Customize OCI Configuration
// WithHtop configures a container to monitor the host via `htop`
func WithHtop(s *specs.Spec) error {
    // make sure we are in the host pid namespace
    if err := containerd.WithHostNamespace(specs.PIDNamespace)(s); err != nil {
        return err
    }
    // make sure we set htop as our arg
    s.Process.Args = []string{"htop"}
    // make sure we have a tty set for htop
    if err := containerd.WithTTY(s); err != nil {
        return err
    }
    return nil
}
With{func} functions cleanly separate modifiers
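Such modifiers compose when the spec is generated; a brief hedged sketch (the container ID "htop" is illustrative, and WithHtop is the helper defined above):

// custom SpecOpts are passed alongside the built-in ones
container, err := client.NewContainer(ctx, "htop",
    containerd.WithNewSpec(containerd.WithImageConfig(image), WithHtop),
)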
33. Going further with containerd
▪ Contributing: https://github.com/containerd/containerd
▫ Bug fixes, adding tests, improving docs, validation
▪ Using: See the getting started documentation in the docs folder of the repo
▪ Porting/testing: Other architectures & OSs, stress testing (see bucketbench, containerd-stress):
▫ git clone <repo>, make binaries, sudo make install
▪ K8s CRI: incubation project to use containerd as CRI
▫ In alpha today; e2e tests, validation, contributing
34. Moby Summit at OSS NA
Thursday, September 14, 2017
“An open framework to assemble specialized container systems without reinventing the wheel.”
Tickets:
https://www.eventbrite.com/e/moby-summit-los-angeles-tickets-35930560273