
Containerd Project Update: FOSDEM 2018


Presentation given on Sunday, February 4th, 2018 in the containers devroom at FOSDEM 2018. This presentation covers the containerd project background, history, architecture, and current status as a CNCF project used by Docker, Kubernetes, and other projects requiring a stable, performant core container runtime.



  1. The containerd Project: An Overview and Project Update. Phil Estes (@estesp), IBM Cloud. February 3, 2018, FOSDEM Containers Room
  2. A Brief History. APRIL 2016: containerd “0.2” announced with Docker 1.11, as a management/supervisor for the OCI runc executor. DECEMBER 2016: announced expansion of the containerd OSS project into containerd 1.0, a core container runtime project for the industry. MARCH 2017: containerd project contributed to the CNCF. DECEMBER 2017: containerd project announces the v1.0.0 release.
  3. (image-only slide)
  4. Why Containerd 1.0? ▪ Continue projects spun out from the monolithic Docker engine ▪ Expected use beyond the Docker engine (Kubernetes CRI) ▪ Donation to a foundation for broad industry collaboration ▫ Similar to runc/libcontainer and the OCI
  5. Technical Goals/Intentions ▪ Clean gRPC-based API + client library ▪ Full OCI support (runtime and image specs) ▪ Stability and performance with a tight, well-defined core of container function ▪ Decoupled systems (image, filesystem, runtime) for pluggability and reuse
  6. Architecture (diagram): GRPC and Metrics interfaces in front of services for Containers, Content, Diff, Snapshot, Tasks, Events, Images, and Metadata, layered over Runtimes, Storage, and the OS.
  7. Containerd: Rich Go API. See Getting Started and GoDoc.
  8. Snapshotters: How do you build a container root filesystem?
  9. Docker Storage Architecture (diagram): the Daemon uses a Graph Driver (“layers”, “mounts”), a Layer Store (“content addressable layers”), an Image Store (“image configs”), Containers (“container configs”), and a Reference Store (“names to images”).
  10. containerd Storage Architecture (diagram): a Snapshotter (“layer snapshots”), a Content Store (“content addressed blobs”), and a Metadata Store (“references”); a container is its Config plus a Rootfs (mounts).
  11. Snapshots, evolved from graph drivers: ● No mounting, just returns mounts! ● Explicit active (rw) and committed (ro) states ● Commands represent the lifecycle ● Reference key chosen by the caller (allows using content addresses) ● No tars and no diffs. Compared to graph drivers: ● Simple layer relationships ● Small and focused interface ● Non-opinionated string keys ● External mount lifecycle

     type Snapshotter interface {
         Stat(key string) (Info, error)
         Mounts(key string) ([]containerd.Mount, error)
         Prepare(key, parent string) ([]containerd.Mount, error)
         View(key, parent string) ([]containerd.Mount, error)
         Commit(name, key string) error
         Remove(key string) error
         Walk(fn func(Info) error) error
     }

     type Info struct {
         Name   string // name or key of snapshot
         Parent string
         Kind   Kind
     }

     type Kind int

     const (
         KindView Kind = iota
         KindActive
         KindCommitted
     )
  12. Example: Investigating the Root Filesystem
     $ ctr snapshot ls
     …
     $ ctr snapshot tree
     …
     $ ctr snapshot mounts <target> <id>
  13. Running a container
  14. Tasks and Runtime

     service Tasks {
         // Create a task.
         rpc Create(CreateTaskRequest) returns (CreateTaskResponse);
         // Start a process.
         rpc Start(StartRequest) returns (StartResponse);
         // Delete a task and its on-disk state.
         rpc Delete(DeleteTaskRequest) returns (DeleteResponse);
         rpc DeleteProcess(DeleteProcessRequest) returns (DeleteResponse);
         rpc Get(GetRequest) returns (GetResponse);
         rpc List(ListTasksRequest) returns (ListTasksResponse);
         // Kill a task or process.
         rpc Kill(KillRequest) returns (google.protobuf.Empty);
         rpc Exec(ExecProcessRequest) returns (google.protobuf.Empty);
         rpc ResizePty(ResizePtyRequest) returns (google.protobuf.Empty);
         rpc CloseIO(CloseIORequest) returns (google.protobuf.Empty);
         rpc Pause(PauseTaskRequest) returns (google.protobuf.Empty);
         rpc Resume(ResumeTaskRequest) returns (google.protobuf.Empty);
         rpc ListPids(ListPidsRequest) returns (ListPidsResponse);
         rpc Checkpoint(CheckpointTaskRequest) returns (CheckpointTaskResponse);
         rpc Update(UpdateTaskRequest) returns (google.protobuf.Empty);
         rpc Metrics(MetricsRequest) returns (MetricsResponse);
         rpc Wait(WaitRequest) returns (WaitResponse);
     }

     (diagram: a client calls containerd's Tasks and Containers services; container metadata holds the runtime, config, and mounts. The linux runtime supervises one containerd-shim per container, while the wcow and lcow runtimes use hcsshim-managed VMs.)
  15. Example: Pull an Image

     Via ctr client:
     $ export CONTAINERD_NAMESPACE=example
     $ ctr pull docker.io/library/redis:alpine
     $ ctr image ls
     ...

     import (
         "context"

         "github.com/containerd/containerd"
         "github.com/containerd/containerd/namespaces"
     )

     // connect to our containerd daemon
     client, err := containerd.New("/run/containerd/containerd.sock")
     defer client.Close()

     // set our namespace to “example”:
     ctx := namespaces.WithNamespace(context.Background(), "example")

     // pull the alpine-based redis image from DockerHub:
     image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
  16. Example: Run a Container

     Via ctr client:
     $ export CONTAINERD_NAMESPACE=example
     $ ctr run -t docker.io/library/redis:alpine redis-server
     $ ctr c
     ...

     // create our container object and config
     container, err := client.NewContainer(ctx, "redis-server",
         containerd.WithImage(image),
         containerd.WithNewSpec(containerd.WithImageConfig(image)),
     )
     defer container.Delete()

     // create a task from the container
     task, err := container.NewTask(ctx, containerd.Stdio)
     defer task.Delete(ctx)

     // make sure we wait before calling start
     exitStatusC, err := task.Wait(ctx)

     // call start on the task to execute the redis server
     if err := task.Start(ctx); err != nil {
         return err
     }
  17. Example: Kill a Task

     Via ctr client:
     $ export CONTAINERD_NAMESPACE=example
     $ ctr t kill redis-server
     $ ctr t ls
     ...

     // wait on the task before killing it
     exitStatusC, err := task.Wait(ctx)

     time.Sleep(3 * time.Second)

     if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
         return err
     }

     // retrieve the process exit status from the channel
     status := <-exitStatusC
     code, _, err := status.Result()
     if err != nil {
         return err
     }

     // print out the exit code from the process
     fmt.Printf("redis-server exited with status: %d\n", code)
  18. Example: Customize OCI Configuration

     // WithHtop configures a container to monitor the host via `htop`
     func WithHtop(s *specs.Spec) error {
         // make sure we are in the host pid namespace
         if err := containerd.WithHostNamespace(specs.PIDNamespace)(s); err != nil {
             return err
         }
         // make sure we set htop as our arg
         s.Process.Args = []string{"htop"}
         // make sure we have a tty set for htop
         if err := containerd.WithTTY(s); err != nil {
             return err
         }
         return nil
     }

     With{func} functions cleanly separate modifiers
  19. Release Process. Latest release: v1.0.2 (-rc.0 pre-released 3 days ago). Key points: ▪ Using SemVer ▪ Major releases have a support horizon with backported fixes ▫ Already proven out post-1.0.0 with two point releases ▪ The next release (v1.1) plans to include features for Windows container runtime support and other enhancements ▪ Stability and compatibility guarantees are provided and documented
  20. Supported Components

     Component      Status    Stabilized Version  Links
     GRPC API       Beta      1.0                 api/
     Metrics API    Beta      1.0                 -
     Go client API  Unstable  1.1 (tentative)     godoc
     ctr tool       Unstable  Out of scope        -
  21. Project Contributors ▪ containerd is a member project within the Cloud Native Computing Foundation (CNCF) ▪ The Moby project governance, adopted in Q4 2017, is not a BDFL model ▪ The newly formed Moby Technical Steering Committee (TSC) oversees Moby projects ▪ Broad base of contributors, and growing. TOP TEN CONTRIBUTORS: 1. Docker 2. Google 3. NTT 4. Tesla* 5. IBM 6. ZTE 7. Microsoft 8. Red Hat** 9. Huawei 10. Amazon Web Services (* former Docker maintainer left for Tesla; ** Red Hat contributions mostly prior to the 1.0 codebase)
  22. Users. CURRENT: Docker (moby), Kubernetes (cri-containerd), SwarmKit (experimental), LinuxKit, BuildKit. PLANNING/DEVELOPING: Cloud Foundry Garden-runC, Apache OpenWhisk, Puppet R&D, {your project here}
  23. Kubernetes runtimes: Docker vs. cri-containerd (diagram): with Docker, the kubelet talks through dockershim (CRI) to the Docker engine, which drives containerd, containerd-shims, and runc; with cri-containerd, the kubelet talks to the cri plugin inside containerd, which drives containerd-shims and runc directly. ttrpc: a very lightweight gRPC protocol format. (**NOTE: the cri-containerd project merged into the containerd GitHub project in January 2018 and will become a plugin within the containerd binary.)
  24. Using containerd in K8s 1.9+ ▪ Requires runc and containerd to be installed (distro packaging lags these projects) ▪ The cri-containerd project has been doing binary releases with dependencies included (future TBD) ▪ No requirement for a Docker engine installation on worker nodes at all ▪ LinuxKit is also using (and providing) this configuration in its default Kubernetes project: ▫ “make all KUBE_RUNTIME=cri-containerd”
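For a sense of what "no Docker engine on worker nodes" means in practice, a kubelet of that era could be pointed at a CRI socket with flags like the following. This is a sketch, assuming the standalone cri-containerd socket path used by its binary releases; the exact path and flags depend on the deployment.

```shell
# Hypothetical kubelet invocation using a remote CRI runtime instead of
# dockershim; the socket path is an assumption for the cri-containerd era.
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/cri-containerd.sock
```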
  25. Containerd Benefits ● Designed and implemented with broad usage as a core container runtime in mind: Docker, LinuxKit, Kubernetes, and embedded core runtime use cases (OpenWhisk, Cloud Foundry) ● Heavy 24/7 stress testing validating stability and performance guarantees ● Attention to detail in the Go/gRPC APIs for strong usability and ease of embedding ● Focus on compatibility guarantees; bug-fix backports for a high level of support on major version levels
  26. Going further with containerd ▪ Contributing: bug fixes, adding tests, improving docs, validation ▪ Using: see the getting-started documentation in the docs folder of the repo ▪ Porting/testing: other architectures and OSes, stress testing (see bucketbench, containerd-stress): ▫ git clone <repo>, make binaries, sudo make install ▪ K8s CRI: implementation of the K8s CRI using containerd ▫ In beta today; e2e tests, validation, contributing
  27. Thank You! Questions? ▪ Phil Estes ▫ Twitter: @estesp