
Quantifying Container Runtime Performance: OSCON 2017 Open Container Day


A talk given at Open Container Day at O'Reilly's OSCON convention in Austin, Texas on May 9th, 2017. This talk describes an open source project, bucketbench, which can be used to compare the performance, stability, and throughput of various container engines. Bucketbench currently supports docker, containerd, and runc, but can be extended to support any container runtime. This work was done in response to performance investigations by the Apache OpenWhisk team into the use of containers as the execution vehicle for functions in their "Functions-as-a-Service" runtime. Find out more about bucketbench here: https://github.com/estesp/bucketbench


  1. Quantifying Container Runtime Performance: A Serverless Platform Case Study. Phil Estes, Senior Technical Staff, IBM Cloud; CTO Architecture Tech Team, Containers. @estesp
  2. @estesp ● Virtualization ● IaaS ● PaaS ● Containers ● CaaS ● Serverless (FaaS)
  3. SERVER-less?
  4. Hint: There are still servers. (It just so happens that you don't have to run them.) Step 1: Write your function in a language supported by the FaaS runtime (Swift, Node.js, etc.). Step 2: Register your function with your FaaS framework. Step 3: Use triggers, actions, etc., as supported by your FaaS provider to handle function input/output chaining.
  5. Serverless Servers Matter. (Because you expect your functions to run perfectly.) Expectation 1: (Near?) infinite scaling of your functions. Expectation 2: Perfect uptime; my functions always run when triggered, with no perceptible delay. Expectation 3: I only pay for the execution runtime of my functions.
  6. FaaS pricing is based on "GB-s" (gigabyte-seconds): function execution runtime (rounded, to roughly 100ms increments) x memory allocated for the function (in GB) = GB-s.
  7. So, about those servers...
  8. But... standard container lifecycle operations are not sufficient for our performance guarantees! We cannot "docker build", "docker run", "docker rm" on each function invocation. Still, containers make good sense as the function invocation vehicle.
  9. We Have Options! Docker Engine architecture (Docker 1.11 and above; April 2016 to current): /usr/bin/docker (client) talks to /usr/bin/dockerd (daemon, with libnetwork, Volume API, and AuthZ plugins), which drives containerd, which launches each container via a shim process wrapping runc.
  10. docker: complete container engine with lifecycle management, orchestration, remote API (daemon model), plugin support, SDN networking, image building, and image registry/local cache management. containerd: high-performance, standards-based lightweight container runtime with a gRPC API (daemon model), expanding to cover complete lifecycle and image management in 2017. runc: Open Container Initiative (OCI) compliant implementation of the runtime specification; a lightweight container executor with no networking, image registry, or image creation capability.
  11. How can we compare the runtime performance of these options? https://github.com/estesp/bucketbench is a Go-based framework for benchmarking container lifecycle operations (under load) against docker, containerd, and runc.
      Usage: bucketbench run [flags]
      Flags:
        -b, --bundle string         Path of test runc image bundle (default ".")
        -c, --containerd int        Number of threads to execute against containerd
            --ctr-binary string     Name/path of containerd client (ctr) binary (default "ctr")
        -d, --docker int            Number of threads to execute against Docker
            --docker-binary string  Name/path of Docker binary (default "docker")
        -i, --image string          Name of test Docker image (default "busybox")
        -r, --runc int              Number of threads to execute against runc
            --runc-binary string    Name/path of runc binary (default "runc")
      Global Flags:
            --log-level string      set the logging level (info,warn,err,debug) (default "warn")
  12. Goals:
      - Assess runtime stability under significant load/parallelism
      - Compare the operational throughput of each container runtime
      The table shows the rate of operation sequences per second; * indicates errors.

      Benchmark    Iter/Thd  1 thrd  2 thrds  3 thrds  4 thrds  5 thrds  6 thrds  7 thrds  8 thrds  9 thrds  10 thrds  11 thrds  12 thrds  13 thrds
      Limit        1000       651.3    829.4    834.5    809.6    827.6    848.6    774.8    843.2    800.3     839.2     804.2     806.7     813.0
      DockerBasic  15          1.99     2.44     3.02*    3.24*    3.59*    3.90*    4.07*
      DockerPause  15         10.22    13.53    15.67    17.69    19.18    19.11    18.56
      DockerFull   15          1.66     2.18*    2.69*    3.05*    3.21*    3.36*    3.63*
      ConBasic     50          2.90     4.95     6.54     7.49     8.10     8.33     8.65     9.02     9.25      9.17      9.43      9.22      9.25
      RuncBasic    50          2.90     5.26     7.37     8.61     9.61    11.07    11.68    12.44    13.56     13.65     14.11     14.29     13.97

      Caveats:
      - Flexibility of lower-layer configurations has a significant impact
      - Stability and performance of runtimes are release-dependent
  13. Architecture. Two key interfaces:
      ● Driver: drives the container runtime
      ● Bench: defines the container operations and provides results/statistics

      type Driver interface {
          Type() Type
          Info() (string, error)
          Create(name, image string, detached bool, trace bool) (Container, error)
          Clean() error
          Run(ctr Container) (string, int, error)
          Stop(ctr Container) (string, int, error)
          Remove(ctr Container) (string, int, error)
          Pause(ctr Container) (string, int, error)
          Unpause(ctr Container) (string, int, error)
      }

      type Bench interface {
          Init(driverType driver.Type, binaryPath, imageInfo string, trace bool) error
          Validate() error
          Run(threads, iterations int) error
          Stats() []RunStatistics
          Elapsed() time.Duration
          State() State
          Type() Type
      }

      Driver implementations support Docker, containerd, and runc today, and can easily be extended to support any runtime that can implement the interface shown above.
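To illustrate how a new runtime could be plugged in, here is a minimal self-contained sketch of a Driver implementation. The Type and Container definitions are local stand-ins (the real ones live in bucketbench's driver package), and this "null" driver is a hypothetical example, not part of the upstream project; a real driver would shell out to, or call the API of, its runtime:

```go
package main

import "fmt"

// Local stand-ins for bucketbench's types so the sketch compiles on its own.
type Type int

const Null Type = 0

type Container interface {
	Name() string
}

type Driver interface {
	Type() Type
	Info() (string, error)
	Create(name, image string, detached bool, trace bool) (Container, error)
	Clean() error
	Run(ctr Container) (string, int, error)
	Stop(ctr Container) (string, int, error)
	Remove(ctr Container) (string, int, error)
	Pause(ctr Container) (string, int, error)
	Unpause(ctr Container) (string, int, error)
}

// NullDriver is a do-nothing driver: every lifecycle operation succeeds
// instantly, which makes it useful for measuring the overhead of the
// benchmark harness itself.
type nullContainer struct{ name string }

func (c *nullContainer) Name() string { return c.name }

type NullDriver struct{}

func (d *NullDriver) Type() Type            { return Null }
func (d *NullDriver) Info() (string, error) { return "null driver (no-op)", nil }
func (d *NullDriver) Clean() error          { return nil }
func (d *NullDriver) Create(name, image string, detached bool, trace bool) (Container, error) {
	return &nullContainer{name: name}, nil
}
func (d *NullDriver) Run(ctr Container) (string, int, error)     { return "", 0, nil }
func (d *NullDriver) Stop(ctr Container) (string, int, error)    { return "", 0, nil }
func (d *NullDriver) Remove(ctr Container) (string, int, error)  { return "", 0, nil }
func (d *NullDriver) Pause(ctr Container) (string, int, error)   { return "", 0, nil }
func (d *NullDriver) Unpause(ctr Container) (string, int, error) { return "", 0, nil }

func main() {
	var d Driver = &NullDriver{}
	ctr, _ := d.Create("bb-test-1", "busybox", true, false)
	fmt.Println(ctr.Name())
}
```

The harness only ever calls through the Driver interface, so the benchmark loop does not need to know which runtime it is exercising.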
  14. Go tools: pprof, trace, and the block profiler. Also useful: strace and flame graphs.
  15. @estesp Discoveries: Network namespace creation/deletion has a significant impact under load ▪ 300ms (and higher) delay in a network spin lock under multi-threaded contention ▪ Known issue: http://stackoverflow.com/questions/28818452/how-to-identify-performance-bottleneck-in-linux-system-call-unshareclone-newnet ▪ API overhead, libnetwork setup/teardown, and metadata sync/update (locking) all add to the differential from runc "bare" container start performance. Filesystem setup is also measurable for a large number of layers, depending on the storage backend.
  16. @estesp Bucketbench: What's Left To Do ● Easier way to specify/define benchmark runs ○ Requiring Go code to create a new benchmark type is too high a bar ○ Should provide a way to define runs via JSON/YAML as input to `bucketbench` ● Structured output option vs. human-readable format ○ Selectable JSON output for display or parsing/post-processing with other tools ○ Provide itemized metrics per operation (not currently exposed) in structured output ● Update containerd driver implementation ○ Use the gRPC API instead of the `ctr` external binary client; use image/storage capabilities ● Other driver implementations?
  17. So What? ▪ Want to learn more about OpenWhisk? - Here at OSCON: Daniel Krook, IBM, Wed, 11:50am / Meeting Room 14 - https://openwhisk.org - https://github.com/openwhisk/openwhisk ▪ Get involved in improvements to bucketbench: - https://github.com/estesp/bucketbench - See the list of TODO items ▪ Use bucketbench to improve the stability/performance of container runtimes: - Propose better integration with tracing/performance tooling - Find and fix performance bottlenecks in any layer/runtime
  18. @estesp Thank You! 1. Check out, critique, and contribute to: http://github.com/estesp/bucketbench 2. Connect with me to ask questions, or provide your own perspective and findings, at @estesp on Twitter or estesp@gmail.com 3. Have fun with containers, whether you use Docker, containerd, runc, lxc/lxd, rkt, Kubernetes, Swarm, Mesos, Rancher, Nomad, OpenShift, ...
