
Bucketbench: Benchmarking Container Runtime Performance



A talk presented at the Moby Summit, Los Angeles (a co-located event with the Open Source Summit North America) on Thursday, September 14, 2017. The talk presents bucketbench, an open source tool for benchmarking container runtimes, used to compare the performance impact of changes to a runtime itself or to the configuration of Docker, containerd, or runc, the three runtimes currently supported by the bucketbench project.


  1. bucketbench: Benchmarking Container Runtime Performance
     Phil Estes, Senior Technical Staff, IBM Cloud; Office of the CTO Architecture Technical Team, Containers
     @estesp
  2. Containers make good sense as the function invocation vehicle. But... standard container lifecycle operations are not sufficient for our performance guarantees! We cannot "docker build", "docker run", "docker rm" on each function invocation.
  3. We Have Options!
     [Diagram: Docker Engine architecture, Docker 1.11 and above (April 2016 to current): Docker CE → containerd → runc]
  4. docker: Complete container engine with lifecycle management, orchestration, remote API (daemon model), plugin support, SDN networking, image building, and image registry/local cache management.
     containerd: High-performance, standards-based, lightweight container runtime with a gRPC API (daemon model); 1.0 contains complete lifecycle and image management (Q4 2017).
     runc: Open Container Initiative (OCI) compliant implementation of the runtime specification. Lightweight container executor; no network, image registry, or image creation capability.
  5. HOW CAN WE BENCHMARK VARIOUS CONTAINER RUNTIME OPTIONS?
     https://github.com/estesp/bucketbench
     A Go-based framework for benchmarking container lifecycle operations (using concurrency and load) against docker, containerd (0.2.x and 1.0), and runc. The YAML file provided via the --benchmark flag will determine which lifecycle container commands to run against which container runtimes, specifying iterations and number of concurrent threads. Results will be displayed afterwards.

     Usage:
       bucketbench run [flags]

     Flags:
       -b, --benchmark string   YAML file with benchmark definition
       -h, --help               help for run
       -s, --skip-limit         Skip 'limit' benchmark run
       -t, --trace              Enable per-container tracing during benchmark runs

     Global Flags:
       --log-level string   set the logging level (info,warn,err,debug) (default "warn")

     examples/basic.yaml:
       name: BasicBench
       image: alpine:latest
       rootfs: /home/estesp/containers/alpine
       detached: true
       drivers:
         - type: Docker
           threads: 5
           iterations: 15
         - type: Runc
           threads: 5
           iterations: 50
       commands:
         - run
         - stop
         - remove
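     Purely as an illustration (not on the slide): with a benchmark definition like examples/basic.yaml above, a run is launched via the run subcommand and the --benchmark flag shown in the usage text. Running as root is usually needed since the drivers talk to the local runtime daemons/binaries, and the exact binary path will vary by setup:

       $ sudo bucketbench run --benchmark examples/basic.yaml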
  6. Goals:
     - Assess runtime stability under significant load/concurrency
     - Benchmark operational throughput of a container runtime

     The table shows the rate of operation sequences per second; * indicates errors.

     Bench        Iter/Thd  1 thrd   2 thrds  3 thrds  4 thrds  5 thrds  6 thrds  7 thrds  8 thrds  9 thrds  10 thrds 11 thrds 12 thrds 13 thrds
     Limit        1000      651.3    829.4    834.5    809.6    827.6    848.6    774.8    843.2    800.3    839.2    804.2    806.7    813.0
     DockerBasic  15        1.99     2.44     3.02*    3.24*    3.59*    3.90*    4.07*
     DockerPause  15        10.22    13.53    15.67    17.69    19.18    19.11    18.56
     DockerFull   15        1.66     2.18*    2.69*    3.05*    3.21*    3.36*    3.63*
     ConBasic     50        2.90     4.95     6.54     7.49     8.10     8.33     8.65     9.02     9.25     9.17     9.43     9.22     9.25
     RuncBasic    50        2.90     5.26     7.37     8.61     9.61     11.07    11.68    12.44    13.56    13.65    14.11    14.29    13.97

     Caveats:
     - Container configuration can greatly affect runtime performance
     - Direct comparison of runtimes is not that valuable
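     For reading the table, here is a minimal sketch (my own illustration, not bucketbench's actual code) of how a "rate of operation sequences per second" figure can be derived, assuming every thread executes the configured command sequence for the given number of iterations:

       package main

       import (
           "fmt"
           "time"
       )

       // sequenceRate returns operation sequences per second: every thread runs
       // `iterations` full command sequences (e.g. run/stop/remove), so the total
       // completed sequences divided by wall-clock time gives the table's metric.
       // This is an illustrative reconstruction, not the project's implementation.
       func sequenceRate(threads, iterations int, elapsed time.Duration) float64 {
           total := float64(threads * iterations)
           return total / elapsed.Seconds()
       }

       func main() {
           // Example: 5 threads x 15 iterations finishing in 20s ≈ 3.75 sequences/sec,
           // in the same ballpark as the DockerBasic numbers in the table above.
           fmt.Printf("%.2f sequences/sec\n", sequenceRate(5, 15, 20*time.Second))
       }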
  7. Architecture
     Two key interfaces:
     - Driver: drives the container runtime
     - Bench: defines the container operations and provides results/statistics

       type Driver interface {
           Type() Type
           Info() (string, error)
           Create(name, image string, detached bool, trace bool) (Container, error)
           Clean() error
           Run(ctr Container) (string, int, error)
           Stop(ctr Container) (string, int, error)
           Remove(ctr Container) (string, int, error)
           Pause(ctr Container) (string, int, error)
           Unpause(ctr Container) (string, int, error)
       }

       type Bench interface {
           Init(driverType driver.Type, binaryPath, imageInfo string, trace bool) error
           Validate() error
           Run(threads, iterations int) error
           Stats() []RunStatistics
           Elapsed() time.Duration
           State() State
           Type() Type
           Info() string
       }

     Driver implementations support docker, containerd (1.0 via the gRPC Go client API; 0.2.x via the `ctr` binary), and runc today. The framework can easily be extended to support any runtime which can implement the Driver interface.
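     To make the extension point concrete, here is a rough, self-contained sketch (not actual bucketbench code) of a Driver-style implementation that shells out to an OCI runtime binary such as runc. The Container struct, the --root state directory, and the bundle path are simplified stand-ins for bucketbench's real types and configuration, and exit-code handling is omitted:

       package main

       import (
           "fmt"
           "os/exec"
           "strings"
       )

       // Container is a simplified stand-in for bucketbench's container handle.
       type Container struct {
           Name     string
           Detached bool
       }

       // RuncDriver sketches a driver backed by the runc CLI; a real driver would
       // also implement Type/Clean/Pause/Unpause, tracing, and error/exit-code
       // propagation per the Driver interface shown above.
       type RuncDriver struct {
           binary string // path to the runc binary
           bundle string // OCI bundle directory containing config.json and rootfs
       }

       func (d *RuncDriver) Info() (string, error) {
           out, err := exec.Command(d.binary, "--version").CombinedOutput()
           return strings.TrimSpace(string(out)), err
       }

       func (d *RuncDriver) Create(name, image string, detached, trace bool) (Container, error) {
           // runc consumes a pre-built bundle, so the image argument is unused here.
           return Container{Name: name, Detached: detached}, nil
       }

       func (d *RuncDriver) Run(ctr Container) (string, int, error) {
           args := []string{"--root", "/run/bbench-runc", "run", "--bundle", d.bundle}
           if ctr.Detached {
               args = append(args, "--detach")
           }
           args = append(args, ctr.Name)
           out, err := exec.Command(d.binary, args...).CombinedOutput()
           return string(out), 0, err // 0 is a placeholder for the real exit code
       }

       func (d *RuncDriver) Stop(ctr Container) (string, int, error) {
           out, err := exec.Command(d.binary, "--root", "/run/bbench-runc", "kill", ctr.Name, "KILL").CombinedOutput()
           return string(out), 0, err
       }

       func (d *RuncDriver) Remove(ctr Container) (string, int, error) {
           out, err := exec.Command(d.binary, "--root", "/run/bbench-runc", "delete", ctr.Name).CombinedOutput()
           return string(out), 0, err
       }

       func main() {
           d := &RuncDriver{binary: "runc", bundle: "/home/estesp/containers/alpine-bundle"}
           info, _ := d.Info()
           fmt.Println("driver info:", info)
       }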
  8. Go tools: pprof, trace, block profiling... Also useful: strace, flame graphs...
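     Not from the slides, but as a reminder of how the Go pprof tooling mentioned here is typically wired in: exposing net/http/pprof in the daemon or benchmark harness lets you pull CPU and blocking profiles while a run is in flight. The port and block-profile sampling rate below are arbitrary choices:

       package main

       import (
           "log"
           "net/http"
           _ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
           "runtime"
       )

       func main() {
           // Sample blocking events so /debug/pprof/block has data (rate 1 = every event).
           runtime.SetBlockProfileRate(1)

           // Serve profiles on localhost, e.g.:
           //   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
           //   go tool pprof http://localhost:6060/debug/pprof/block
           go func() {
               log.Println(http.ListenAndServe("localhost:6060", nil))
           }()

           select {} // stand-in for the real workload being profiled
       }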
  9. DISCOVERIES
     - API overhead, libnetwork setup/teardown, & metadata sync/update (locking) all add to the differential from runc "bare" container start performance
     - Filesystem setup also measurable for large # of layers, depending on storage backend
     - Network namespace creation/deletion has significant impact under load
       ● 300ms (and higher) delay in network spin lock under multi-threaded contention
       ● Known issue: http://stackoverflow.com/questions/28818452/how-to-identify-performance-bottleneck-in-linux-system-call-unshareclone-newnet
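     As an illustration of how this network-namespace cost can be spot-checked outside any container runtime (my own sketch, not from the talk), the loop below times repeated unshare(CLONE_NEWNET) calls. It is Linux-only, needs root or CAP_SYS_ADMIN, and measures single-threaded creation cost rather than the multi-threaded contention described above:

       //go:build linux

       package main

       import (
           "fmt"
           "runtime"
           "syscall"
           "time"
       )

       func main() {
           // unshare(2) affects the calling OS thread, so pin this goroutine to one thread.
           runtime.LockOSThread()
           defer runtime.UnlockOSThread()

           const n = 100
           start := time.Now()
           for i := 0; i < n; i++ {
               // Each call leaves the current network namespace and creates a fresh one.
               if err := syscall.Unshare(syscall.CLONE_NEWNET); err != nil {
                   fmt.Println("unshare failed (requires root/CAP_SYS_ADMIN):", err)
                   return
               }
           }
           elapsed := time.Since(start)
           fmt.Printf("%d netns creations in %v (avg %v each)\n", n, elapsed, elapsed/n)
       }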
  10. Bucketbench: TODOs
     1. Structured Output Format
        ○ JSON and/or CSV output (see the sketch after this list)
     2. Other Driver Implementations
        ○ rkt? cri-o?
        ○ Drive via CRI versus clients?
     3. Integrate with Trace/Debug Tooling
        ○ Randomized trace output (% of operations)
        ○ "Real" performance metrics/tooling?
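     The structured-output TODO is easy to picture; here is a hypothetical sketch of emitting per-driver results as JSON. The struct and field names are invented for this illustration and are not a committed bucketbench format; the numbers are copied from the 5-thread column of the results table above:

       package main

       import (
           "encoding/json"
           "fmt"
       )

       // BenchResult is a hypothetical record shape for structured output.
       type BenchResult struct {
           Bench      string  `json:"bench"`
           Driver     string  `json:"driver"`
           Threads    int     `json:"threads"`
           Iterations int     `json:"iterations"`
           RatePerSec float64 `json:"rate_per_sec"` // operation sequences per second
           HadErrors  bool    `json:"had_errors"`   // corresponds to the "*" marker in the table
       }

       func main() {
           results := []BenchResult{
               {Bench: "BasicBench", Driver: "Docker", Threads: 5, Iterations: 15, RatePerSec: 3.59, HadErrors: true},
               {Bench: "BasicBench", Driver: "Runc", Threads: 5, Iterations: 50, RatePerSec: 9.61},
           }
           out, _ := json.MarshalIndent(results, "", "  ")
           fmt.Println(string(out))
       }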
  11. Thank You!
     1. Check out, critique, contribute to: http://github.com/estesp/bucketbench
     2. Connect with me to ask questions, or provide your own perspective and findings, at @estesp on Twitter or estesp@gmail.com
