Passive Benchmarking with
docker LXC, KVM & OpenStack
Hosted @ SoftLayer
Boden Russell (brussell@us.ibm.com)
IBM Global Technology Services
Advanced Cloud Solutions & Innovation
V2.0
FAQ
 How is this version (v2.0) different from the initial benchmarks?
– See the revision history within this document.
 Are there any artifacts associated with the test?
– Yes; see my github repo: https://github.com/bodenr/cloudy-docker-kvm-bench
 Do these results imply an LXC based technology replaces the need for traditional
hypervisors?
– In my opinion, traditional VMs will become the “edge case” moving forward for use cases
currently based on Linux-flavored VMs. However, I believe there will still be cases
for traditional VMs, some of which are detailed in the LXC Realization presentation.
 Are these results scientific?
– No. Disclaimers have been attached to any documentation related to these tests to
indicate such. These tests are meant to be a set of “litmus” tests to gain an initial
understanding of how LXC compares to traditional hypervisors specifically in the Cloud
space.
 Do you welcome comments / feedback on the test?
– Yes; the goal of these tests is to educate the community on LXC based technologies vs.
traditional hypervisors. As such, they are fully disclosed and open to feedback of any kind.
5/11/2014 2Document v2.0
FAQ Continued
 Should I act on these results?
– I believe the results provide enough information to warrant interest. I expect any
organization, group or individual considering action as a result will perform their own
validation to confirm the technology choice is beneficial for their use case prior to
adoption.
 Is further / deeper testing and investigation warranted?
– Absolutely. These tests should be conducted in a more active manner to understand the
root causes for any differences. Additional tests and variations are also needed, including:
various KVM disk cache modes, skinny VM images (i.e. JeOS), impacts of database settings,
docker storage drivers, etc.
 Is this a direct measurement of the hypervisor (KVM) or LXC engine (docker)?
– No; many factors play into the results. For example, the compute node runs the nova virt
driver, which obviously differs in implementation between nova libvirt-kvm and
nova-docker. Thus its implementation *may* have an impact on the compute node
metrics and performance.
Revision History
Revision Overview of changes
V1.0 - Initial document release
V2.0 - All tests were re-run using a single docker image throughout the tests (see my Dockerfile).
- Thanks to an astute reader’s feedback, the 15 VM serial “packing” test reflects VM boot overhead rather than steady-state; this version clarifies such claims.
- A new Cloudy test was added to better understand steady-state CPU.
- Rather than presenting direct claims of density, raw data and graphs are presented to let the reader draw their own conclusions.
- Additional “in the guest” tests were performed, including blogbench.
Why Linux Containers (LXC)
 Fast
– Runtime performance near bare metal speeds
– Management operations (run, stop, start, etc.) in seconds / milliseconds
 Agile
– VM-like agility – it’s still “virtualization”
– Seamlessly “migrate” between virtual and bare metal environments
 Flexible
– Containerize a “system”
– Containerize “application(s)”
 Lightweight
– Just enough Operating System (JeOS)
– Minimal per container penalty
 Inexpensive
– Open source – free – lower TCO
– Supported with out-of-the-box modern Linux kernel
 Ecosystem
– Growing in popularity
– Vibrant community & numerous 3rd party apps
Hypervisors vs. Linux Containers
[Diagram: Type 1 Hypervisor (Hardware → Hypervisor → Virtual Machines, each with its own Operating System, bins / libs, and apps); Type 2 Hypervisor (Hardware → Operating System → Hypervisor → Virtual Machines); Linux Containers (Hardware → Operating System → Containers, each with bins / libs and apps)]
Containers share the OS kernel of the host and thus are lightweight; however, every container must use that same kernel. Containers are isolated, but share the OS and, where appropriate, libs / bins.
Hypervisor VM vs. LXC vs. Docker LXC
[Diagram: side-by-side comparison of hypervisor VM vs. LXC vs. docker LXC stacks]
Docker in OpenStack
 Havana
– Nova virt driver which integrates with docker REST API on backend
– Glance translator to integrate docker images with Glance
 Icehouse
– Heat plugin for docker
 Both options are still under development
[Diagram: nova-docker virt driver; docker Heat plugin (DockerInc::Docker::Container)]
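As a sketch, a Heat template using the docker plugin might declare a container resource along these lines (the endpoint, image name, and property names are illustrative assumptions – the plugin was still under development at the time):

```yaml
heat_template_version: 2013-05-23

resources:
  mysql_container:
    # Hypothetical resource using the docker Heat plugin
    type: "DockerInc::Docker::Container"
    properties:
      image: guillermo/mysql
      docker_endpoint: "http://127.0.0.1:4243"
```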
About This Benchmark
 Use case perspective
– As an OpenStack Cloud user, I want an Ubuntu based VM with MySQL… why would I choose
docker LXC vs. a traditional hypervisor?
 OpenStack “Cloudy” perspective
– LXC vs. traditional VM from a Cloudy (OpenStack) perspective
– VM operational times (boot, start, stop, snapshot)
– Compute node resource usage (per VM penalty); density factor
 Guest runtime perspective
– CPU, memory, file I/O, MySQL OLTP, etc.
 Why KVM?
– Exceptional performance
DISCLAIMERS
The tests herein are semi-active litmus tests – no in-depth tuning,
analysis, etc. More active testing is warranted. These results do not
necessarily reflect your workload or exact performance, nor are they
guaranteed to be statistically sound.
Benchmark Environment Topology @ SoftLayer
[Diagram: two identical benchmark topologies hosted @ SoftLayer. Each pairs a controller (glance api / reg, nova api / cond / etc, keystone, cinder api / sch / vol, rally) with a compute node running dstat – one compute node runs docker LXC, the other KVM]
Benchmark Specs
Spec: Controller Node (4CPU x 8G RAM) / Compute Node (16CPU x 96G RAM)
- Environment: Bare Metal @ SoftLayer / Bare Metal @ SoftLayer
- Motherboard: SuperMicro X8SIE-F Intel Xeon QuadCore SingleProc SATA [1Proc] / SuperMicro X8DTU-F_R2 Intel Xeon HexCore DualProc [2Proc]
- CPU: Intel Xeon-Lynnfield 3470-Quadcore [2.93GHz] / (Intel Xeon-Westmere 5620-Quadcore [2.4GHz]) x 2
- Memory: (Kingston 4GB DDR3 2Rx8 [4GB]) x 2 / (Kingston 16GB DDR3 2Rx4 [16GB]) x 6
- HDD (local): Western Digital WD Caviar RE3 WD5002ABYS [500GB]; SATAII / Western Digital WD Caviar RE4 WD5003ABYX [500GB]; SATAII
- NIC: eth0/eth1 @ 100 Mbps / eth0/eth1 @ 100 Mbps
- Operating System: Ubuntu 12.04 LTS 64bit / Ubuntu 12.04 LTS 64bit
- Kernel: 3.5.0-48-generic / 3.8.0-38-generic
- IO Scheduler: deadline / deadline
- Hypervisors tested: NA / KVM 1.0 + virtio + KSM (memory deduplication); docker 0.10.0 + go1.2.1 + commit dc9c28f + AUFS
- OpenStack: Trunk master via devstack / Trunk master via devstack; libvirt KVM nova driver and nova-docker virt driver
- OpenStack benchmark client: OpenStack project rally / NA
- Metrics collection: NA / dstat
- Guest benchmark drivers: NA / sysbench 0.4.12; mbw 1.1.1-2; iibench (py); netperf 2.5.0-1; blogbench 1.1; cpu_bench.py
- VM image: NA / Scenario 1 (KVM): official ubuntu 12.04 image + mysql, snapshotted and exported to qcow2 – 1080 MB; Scenario 2 (docker): guillermo/mysql – 381.5 MB
Hosted @ SoftLayer
Test Descriptions: Cloudy Benchmarks
OpenStack Cloudy Benchmarks
- Serial VM boot (15 VMs) – OpenStack Rally: boot VM from image; wait for ACTIVE state; repeat the above a total of 15 times; delete VMs.
- Compute node steady-state VM packing – cpu_bench.py: boot 15 VMs in async fashion; sleep for 5 minutes (wait for steady-state); delete all 15 VMs in async fashion.
- VM reboot (5 VMs rebooted 5 times each) – OpenStack Rally: boot VM from image; wait for ACTIVE state; soft reboot VM 5 times; delete VM; repeat the above a total of 5 times.
- VM snapshot (1 VM, 1 snapshot) – OpenStack Rally: boot VM from image; wait for ACTIVE state; snapshot VM to glance image; delete VM.
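The Rally boot-and-delete scenario above can be expressed as a task file along these lines (the flavor and image names are illustrative, and the exact task schema depends on the Rally version in use):

```json
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "flavor": {"name": "m1.small"},
        "image": {"name": "ubuntu-mysql"}
      },
      "runner": {"type": "serial", "times": 15}
    }
  ]
}
```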
Test Descriptions: Guest Benchmarks
Guest Runtime Benchmarks
- CPU performance – sysbench from within the guest: clear memory cache; run sysbench cpu test; repeat a total of 5 times; average results over the 5 runs.
- OLTP (MySQL) performance – sysbench from within the guest: clear memory cache; run sysbench OLTP test; repeat a total of 5 times; average results over the 5 runs.
- MySQL indexed insertion – iibench: clear memory cache; run iibench for a total of 1M inserts, printing stats at 100K intervals; collect data over 5 runs & average.
- File I/O performance – sysbench from within the guest (synchronous IO): clear memory cache; run sysbench fileio test; repeat a total of 5 times; average results over the 5 runs.
- Memory performance – mbw from within the guest: clear memory cache; run mbw with an array size of 1000 MiB, each test 10 times; collect the average over 10 runs per test.
- Network performance – netperf: run netperf server on controller; from guest, run netperf client in IPv4 mode; repeat the test 5 times; average results.
- Application type performance – blogbench: clear memory cache; run blogbench for 5 minutes; repeat 5 times; average read / write scores.
STEADY STATE VM PACKING
OpenStack Cloudy Benchmark
Cloudy Performance: Steady State Packing
 Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot 15 VMs asynchronously in succession
– Wait for 5 minutes (to achieve steady-state on the
compute node)
– Delete all 15 VMs asynchronously in succession
 Benchmark driver
– cpu_bench.py
 High level goals
– Understand compute node characteristics under
steady-state conditions with 15 packed / active VMs
[Chart: benchmark visualization – active VMs (0–15) over time]
Cloudy Performance: Steady State Packing
[Chart: Docker – compute node CPU (usr / sys), full test duration. Averages: usr 0.54%, sys 0.17%]
[Chart: KVM – compute node CPU (usr / sys), full test duration. Averages: usr 7.64%, sys 1.4%]
Cloudy Performance: Steady State Packing
[Chart: Docker – compute node steady-state CPU (segment: 31s – 243s). Averages: usr 0.2%, sys 0.03%]
[Chart: KVM – compute node steady-state CPU (segment: 95s – 307s). Averages: usr 1.91%, sys 0.36%]
Cloudy Performance: Steady State Packing
[Chart: Docker / KVM – compute node steady-state CPU segment overlay (docker: 31s – 243s; KVM: 95s – 307s). Docker averages: usr 0.2%, sys 0.03%. KVM averages: usr 1.91%, sys 0.36%]
Cloudy Performance: Steady State Packing
[Chart: Docker – compute node used memory, full test duration. Memory delta: 734 MB; per VM: 49 MB]
[Chart: KVM – compute node used memory, full test duration. Memory delta: 4387 MB; per VM: 292 MB]
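The per-VM figures above are simply the used-memory delta divided by the 15 instances booted; a quick sanity check:

```python
# Per-VM memory penalty = (steady-state used memory - baseline) / number of VMs.
# Deltas are taken from the dstat charts above.
vms = 15
docker_delta_mb = 734
kvm_delta_mb = 4387

docker_per_vm = docker_delta_mb / vms   # ~49 MB per docker container
kvm_per_vm = kvm_delta_mb / vms         # ~292 MB per KVM VM

print(round(docker_per_vm), round(kvm_per_vm))  # 49 292
```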
Cloudy Performance: Steady State Packing
[Chart: Docker / KVM – compute node used memory overlay, full test duration]
Cloudy Performance: Steady State Packing
[Chart: Docker – compute node 1 minute load average, full test duration. Average: 0.15]
[Chart: KVM – compute node 1 minute load average, full test duration. Average: 35.9]
SERIALLY BOOT 15 VMS
OpenStack Cloudy Benchmark
Cloudy Performance: Serial VM Boot
 Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot VM
– Wait for VM to become ACTIVE
– Repeat the above steps for a total of 15 VMs
– Delete all VMs
 Benchmark driver
– OpenStack Rally
 High level goals
– Understand compute node characteristics under
sustained VM boots
[Chart: benchmark visualization – active VMs (0–15) over time as VMs are serially booted]
Cloudy Performance: Serial VM Boot
[Chart: average server boot time – docker: 3.53 s; KVM: 5.78 s]
Cloudy Performance: Serial VM Boot
[Chart: Docker – compute node CPU. Averages: usr 1.39%, sys 0.57%]
[Chart: KVM – compute node CPU. Averages: usr 13.45%, sys 2.23%]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM – compute node CPU, unnormalized overlay]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM – serial VM boot usr CPU (segment: 8s – 58s) with linear trendlines. docker: y = 0.0095x + 1.008; KVM: y = 0.3582x + 1.0633]
Cloudy Performance: Serial VM Boot
[Chart: Docker – compute node memory used. Memory delta: 677 MB; per VM: 45 MB]
[Chart: KVM – compute node memory used. Memory delta: 2737 MB; per VM: 182 MB]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM – compute node memory used, unnormalized overlay]
Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM – serial VM boot memory usage (segment: 1s – 67s) with linear trendlines. docker: y = 1E+07x + 1E+09; KVM: y = 3E+07x + 1E+09]
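The trendlines in these segment charts are ordinary least-squares linear fits of the per-second dstat samples; with numpy, such a fit is a one-liner (the data below is synthetic, standing in for the real samples, and constructed to match the KVM fit):

```python
import numpy as np

# Synthetic stand-in for per-second used-memory samples: linear growth of
# 3e7 bytes/s on top of a 1e9-byte baseline, as in the KVM trendline above.
t = np.arange(1, 68)        # seconds 1..67
mem = 3e7 * t + 1e9         # bytes

slope, intercept = np.polyfit(t, mem, 1)      # degree-1 (linear) fit
print(f"y = {slope:.0e}x + {intercept:.0e}")  # y = 3e+07x + 1e+09
```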
Cloudy Performance: Serial VM Boot
[Chart: Docker – compute node 1 minute load average. Average: 0.25]
[Chart: KVM – compute node 1 minute load average. Average: 11.18]
SERIAL VM SOFT REBOOT
OpenStack Cloudy Benchmark
Cloudy Performance: Serial VM Reboot
 Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM & wait for it to become ACTIVE
– Soft reboot the VM and wait for it to become ACTIVE
• Repeat reboot a total of 5 times
– Delete VM
– Repeat the above for a total of 5 VMs
 Benchmark driver
– OpenStack Rally
 High level goals
– Understand compute node characteristics under sustained VM reboots
[Chart: benchmark visualization – active VMs (0–5) over time]
Cloudy Performance: Serial VM Reboot
[Chart: average server reboot time – docker: 2.58 s; KVM: 124.43 s]
Cloudy Performance: Serial VM Reboot
[Chart: average server delete time – docker: 3.57 s; KVM: 3.48 s]
Cloudy Performance: Serial VM Reboot
[Chart: Docker – compute node CPU. Averages: usr 0.69%, sys 0.26%]
[Chart: KVM – compute node CPU. Averages: usr 0.84%, sys 0.18%]
Cloudy Performance: Serial VM Reboot
[Chart: Docker – compute node used memory. Memory delta: 48 MB]
[Chart: KVM – compute node used memory. Memory delta: 486 MB]
Cloudy Performance: Serial VM Reboot
[Chart: Docker – compute node 1 minute load average. Average: 0.4]
[Chart: KVM – compute node 1 minute load average. Average: 0.33]
SNAPSHOT VM TO IMAGE
OpenStack Cloudy Benchmark
Cloudy Performance: Snapshot VM To Image
 Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM
– Wait for it to become ACTIVE
– Snapshot the VM
– Wait for image to become ACTIVE
– Delete VM
 Benchmark driver
– OpenStack Rally
 High level goals
– Understand cloudy ops times from a user perspective
Cloudy Performance: Snapshot VM To Image
[Chart: average snapshot server time – docker: 36.89 s; KVM: 48.02 s]
Cloudy Performance: Snapshot VM To Image
[Chart: Docker – compute node CPU. Averages: usr 0.42%, sys 0.15%]
[Chart: KVM – compute node CPU. Averages: usr 1.46%, sys 1.0%]
Cloudy Performance: Snapshot VM To Image
[Chart: KVM – compute node used memory. Memory delta: 114 MB]
[Chart: Docker – compute node used memory. Memory delta: 57 MB]
Cloudy Performance: Snapshot VM To Image
[Chart: Docker – compute node 1 minute load average. Average: 0.06]
[Chart: KVM – compute node 1 minute load average. Average: 0.47]
GUEST PERFORMANCE
BENCHMARKS
Guest VM Benchmark
Configuring Docker Container for 2CPU x 4G RAM
 Configuring docker LXC for 2CPU x 4G RAM
– Pin container to 2 CPUs / Mems
• Create cpuset cgroup
• Pin group to cpuset.mems to 0,1
• Pin group to cpuset.cpus to 0,1
• Add container root proc to tasks
– Limit container memory to 4G
• Create memory cgroup
• Set memory.limit_in_bytes to 4G
• Add container root proc to tasks
– Limit blkio
• Create blkio cgroup
• Add container root process of LXC to tasks
• Default blkio.weight of 500
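The cgroup steps above can be sketched directly against the cgroup sysfs (paths, group names, and the way the container PID is obtained are illustrative assumptions; docker itself creates per-container cgroups which these settings mirror):

```shell
#!/bin/sh
# Illustrative sketch: pin a docker container to 2 CPUs / mems 0,1,
# cap it at 4G RAM, and give it a default blkio weight of 500.
# CID holds the container's root process PID (retrieval method assumed).
CID=$(docker inspect --format '{{ .State.Pid }}' my_container)

# cpuset cgroup: pin to cpus and mems 0,1
mkdir -p /sys/fs/cgroup/cpuset/bench
echo 0-1 > /sys/fs/cgroup/cpuset/bench/cpuset.cpus
echo 0-1 > /sys/fs/cgroup/cpuset/bench/cpuset.mems
echo "$CID" > /sys/fs/cgroup/cpuset/bench/tasks

# memory cgroup: limit to 4G
mkdir -p /sys/fs/cgroup/memory/bench
echo 4G > /sys/fs/cgroup/memory/bench/memory.limit_in_bytes
echo "$CID" > /sys/fs/cgroup/memory/bench/tasks

# blkio cgroup: default weight of 500
mkdir -p /sys/fs/cgroup/blkio/bench
echo 500 > /sys/fs/cgroup/blkio/bench/blkio.weight
echo "$CID" > /sys/fs/cgroup/blkio/bench/tasks
```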
Guest Performance: CPU
 Linux sysbench 0.4.12 cpu test
 Calculate prime numbers up to 20000
 2 threads
 Instance size
– 4G RAM
– 2 CPU cores
– 20G disk
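With sysbench 0.4.x, the test above corresponds to an invocation like the following (the cache-clearing step via drop_caches is an assumption about how the runs were driven):

```shell
# Clear the page cache between runs (requires root)
sync && echo 3 > /proc/sys/vm/drop_caches

# sysbench 0.4.x cpu test: 2 threads, primes up to 20000
sysbench --test=cpu --num-threads=2 --cpu-max-prime=20000 run
```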
Guest Performance: CPU
[Chart: calculate primes up to 20000 – Bare Metal: 15.26 s; docker: 15.22 s; KVM: 15.13 s]
Guest Performance: Memory
 Linux mbw 1.1.1-2
 Instance size
– 2 CPU
– 4G memory
 Execution options
– 10 runs; average
– 1000 MiB
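The mbw invocation implied by these options is roughly:

```shell
# 10 runs per test over a 1000 MiB array; mbw reports MEMCPY, DUMB
# and MCBLOCK bandwidth for each run plus the per-test average
mbw -n 10 1000
```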
Guest Performance: Memory
[Chart: memory benchmark performance (MiB/s) –
MEMCPY: Bare Metal 3823.3; docker 3813.38; KVM 3428.95
DUMB: Bare Metal 4393.3; docker 4395.92; KVM 3461.59
MCBLOCK: Bare Metal 12881.61; docker 12905.68; KVM 7223.23]
Guest Performance: Network
 Netperf 2.5.0-1
– Netserver running on controller
– Netperf on guest
– Run netperf 5 times & average results
 Instance size
– 2 CPU
– 4G memory
 Execution options
– IPv4 / TCP
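A plausible netperf setup for the above (the controller address is a placeholder; netperf's default test is the IPv4 TCP stream test):

```shell
# On the controller: start the netperf server (default port 12865)
netserver

# On the guest: run the client against the controller 5 times;
# CONTROLLER_IP is a placeholder for the controller's address
for i in 1 2 3 4 5; do
  netperf -H "$CONTROLLER_IP"
done
```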
Guest Performance: Network
[Chart: network throughput (10^6 bits/second) – docker: 940.26; KVM: 940.56]
Guest Performance: File I/O Random Read
 Linux sysbench 0.4.12 fileio test
– Synchronous IO
– Random read
– Total file size of 150G
– 16K block size
– Test duration of 100s
 Thread variations: 1, 8, 16, 32, 64
 Instance size
– 4G RAM
– 2 CPU cores
– 200G disk
 KVM specs
– Disk cache mode set to none
– Virtio
– Deadline scheduler (host & guest)
 Docker specs
– AUFS storage driver
– Deadline scheduler
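With sysbench 0.4.x, the fileio test above looks roughly like this (one thread count shown; synchronous IO is sysbench's default file-io-mode):

```shell
# Prepare the 150G test file set once
sysbench --test=fileio --file-total-size=150G prepare

# Random read, 16K blocks, 100s duration; repeat for 1..64 threads
sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrd \
         --file-block-size=16K --max-time=100 --max-requests=0 \
         --num-threads=16 run

# Remove the test files afterwards
sysbench --test=fileio --file-total-size=150G cleanup
```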
Guest Performance: File I/O Random Read
[Chart: sysbench synchronous file I/O random read – total transferred (Kb/sec) vs. threads (1 – 64), docker vs. KVM]
Guest Performance: File I/O Random Read / Write
 Linux sysbench 0.4.12 fileio test
– Synchronous IO
– Random read / write
– Total file size of 150G
– 16K block size
– Read/Write ratio for combined random IO test: 1.50
– Test duration of 100s
 Thread variations: 1, 8, 16, 32, 64
 Instance size
– 4G RAM
– 2 CPU cores
– 200G disk
 KVM specs
– Disk cache mode set to none
– Virtio
– Deadline scheduler (host & guest)
 Docker specs
– AUFS storage driver
– Deadline scheduler
Guest Performance: File I/O Random Read / Write
[Chart: sysbench synchronous file I/O random read/write @ R/W ratio of 1.50 – total transferred (Kb/sec) vs. threads (1 – 64), docker vs. KVM]
Guest Performance: MySQL OLTP
 Linux sysbench 0.4.12 oltp test
– Table size of 2,000,000
– MySQL 5.5 (installed on Ubuntu 12.04 LTS with apt-get)
– 60 second iterations
– Default MySQL cnf settings
 Variations
– Number of threads
– Transactional random read & transactional random read / write
 Instance size
– 4G RAM
– 2 CPU cores
– 20G disk
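The sysbench 0.4.x oltp invocations implied above are along these lines (MySQL credentials are placeholders):

```shell
# Create and populate the 2,000,000-row test table once
sysbench --test=oltp --mysql-user=root --mysql-password=secret \
         --oltp-table-size=2000000 prepare

# 60s transactional read-only run; vary --num-threads from 1 to 64,
# and drop --oltp-read-only=on for the read/write variation
sysbench --test=oltp --mysql-user=root --mysql-password=secret \
         --oltp-table-size=2000000 --oltp-read-only=on \
         --max-time=60 --max-requests=0 --num-threads=8 run
```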
Guest Performance: MySQL OLTP
[Chart: MySQL OLTP random transactional reads (60s) – total transactions vs. threads (1 – 64), docker vs. KVM]
Guest Performance: MySQL OLTP
[Chart: MySQL OLTP random transactional R/W (60s) – total transactions vs. threads (1 – 64), docker vs. KVM]
Guest Performance: MySQL Indexed Insertion
 Indexed insertion benchmark (iibench python script)
– A total of 1,000,000 insertions
– Print stats at 100K intervals
 Instance size
– 4G RAM
– 2 CPU cores
– 20G disk
Guest Performance: MySQL Indexed Insertion
[Chart: MySQL indexed insertion @ 100K intervals – seconds per 100K insertion batch vs. table size (100,000 – 1,000,000 rows), docker vs. kvm]
Guest Performance: BlogBench
 Blogbench 1.1
– Test duration of 5m
– Average results over 5 iterations of test
 Instance size
– 4G RAM
– 2 CPU cores
– 200G disk
 KVM specs
– Disk cache mode set to none
– Virtio
– Deadline scheduler (host & guest)
 Docker specs
– AUFS storage driver
– Deadline scheduler
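A blogbench run of this shape might be driven as follows (the scratch directory is a placeholder, and the iteration count is an assumption tuned to approximate the 5-minute duration used here):

```shell
# blogbench simulates concurrent blog-style readers and writers
# against a scratch directory, then reports read and write scores
mkdir -p /tmp/blogtest
blogbench -d /tmp/blogtest -i 30
```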
Guest Performance: BlogBench
[Chart: blogbench read scores – docker: 398772.6; KVM: 384769]
[Chart: blogbench write scores – docker: 1526.6; KVM: 1285]
OTHER CONSIDERATIONS
Cloud Management Impacts on LXC
[Chart: Docker boot container, CLI vs. nova virt driver – docker cli: 0.17 s; nova-docker: 3.53 s]
Cloud management often caps true ops performance of LXC
Ubuntu MySQL Image Size
[Chart: Docker / KVM Ubuntu MySQL image size – docker: 381.5 MB; kvm: 1080 MB]
Out of the box JeOS images for docker are lightweight
Other Observations
 Micro “synthetic” benchmarks do not reflect macro “application” performance
– Always benchmark your “real” workload
 Nova-docker virt driver still under development
– Great start, but additional features needed for parity (python anyone?)
– Additions to the nova-docker driver could change Cloudy performance
 Docker LXC is still under development
– Docker has not yet released v1.0 for production readiness
 KVM images can be made skinnier, but this requires additional effort
 Increased density / oversubscription imposes additional complexity
– Techniques to handle resource consumption surges which exceed capacity
REFERENCE
References & Related Links
 http://www.slideshare.net/BodenRussell/realizing-linux-containerslxc
 http://www.slideshare.net/BodenRussell/kvm-and-docker-lxc-benchmarking-with-
openstack
 https://github.com/bodenr/cloudy-docker-kvm-bench
 https://www.docker.io/
 http://sysbench.sourceforge.net/
 http://dag.wiee.rs/home-made/dstat/
 http://www.openstack.org/
 https://wiki.openstack.org/wiki/Rally
 https://wiki.openstack.org/wiki/Docker
 http://devstack.org/
 http://www.linux-kvm.org/page/Main_Page
 https://github.com/stackforge/nova-docker
 https://github.com/dotcloud/docker-registry
 http://www.netperf.org/netperf/
 http://www.tokutek.com/products/iibench/
 http://www.brendangregg.com/activebenchmarking.html
Cloudy Benchmark: Serially Boot 15 VMs
 KVM
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action | count | max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile |
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.boot_server | 15 | 7.37148094177 | 5.78166244825 | 4.77369403839 | 6.67956886292 | 7.07061390877 |
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| max (sec) | avg (sec) | min (sec) | 90 pecentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| 7.58968496323 | 6.00853565534 | 4.99443006516 | 6.91288709641 | 7.28662061691 | 1.0 | 15 |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
 Docker
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action | count | max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile |
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.boot_server | 15 | 5.18499684334 | 3.52911310196 | 2.93864893913 | 4.74490590096 | 4.95752367973 |
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| max (sec) | avg (sec) | min (sec) | 90 pecentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| 5.43275094032 | 3.77053097089 | 3.12985610962 | 4.95886874199 | 5.18047580719 | 1.0 | 15 |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
Cloudy Performance: Serial VM Reboot
 KVM
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action | count | max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.reboot_server | 10 | 124.900292158 | 124.433238959 | 123.947879076 | 124.881286669 | 124.890789413 |
| nova.boot_server | 2 | 7.05096197128 | 6.82815694809 | 6.6053519249 | 7.00640096664 | 7.02868146896 |
| nova.delete_server | 2 | 4.46658396721 | 3.47976005077 | 2.49293613434 | 4.26921918392 | 4.36790157557 |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| 633.087348938 | 632.493344903 | 631.899340868 | 632.968548131 | 633.027948534 | 0.4 | 5 |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
 Docker
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action | count | max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.reboot_server | 25 | 4.48567795753 | 2.57787958145 | 2.35410904884 | 3.0847319603 | 3.48342533112 |
| nova.boot_server | 5 | 4.16244912148 | 3.5675860405 | 3.05103397369 | 4.03664107323 | 4.09954509735 |
| nova.delete_server | 5 | 3.54331803322 | 3.52483625412 | 3.50456190109 | 3.53761086464 | 3.54046444893 |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
+---------------+---------------+---------------+--------------+---------------+---------------+-------------+
| max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+--------------+---------------+---------------+-------------+
| 21.5702910423 | 19.9976443768 | 18.7037060261 | 20.997631073 | 21.2839610577 | 1.0 | 5 |
+---------------+---------------+---------------+--------------+---------------+---------------+-------------+
5/11/2014 · Document v2.0 · Slide 71
Cloudy Performance: Snapshot VM To Image
 KVM
+--------------------+-------+----------------+----------------+----------------+----------------+----------------+
| action | count | max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile |
+--------------------+-------+----------------+----------------+----------------+----------------+----------------+
| nova.delete_image | 1 | 0.726859092712 | 0.726859092712 | 0.726859092712 | 0.726859092712 | 0.726859092712 |
| nova.create_image | 1 | 48.0231380463 | 48.0231380463 | 48.0231380463 | 48.0231380463 | 48.0231380463 |
| nova.boot_server | 2 | 32.7824101448 | 19.4164011478 | 6.05039215088 | 30.1092083454 | 31.4458092451 |
| nova.delete_server | 2 | 12.3564949036 | 8.40917897224 | 4.46186304092 | 11.5670317173 | 11.9617633104 |
+--------------------+-------+----------------+----------------+----------------+----------------+----------------+
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| 104.401446104 | 104.401446104 | 104.401446104 | 104.401446104 | 104.401446104 | 1.0 | 1 |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
 Docker (delete_image failed due to a defect, so no overall totals were recorded)
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action | count | max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.create_image | 1 | 36.8875639439 | 36.8875639439 | 36.8875639439 | 36.8875639439 | 36.8875639439 |
| nova.boot_server | 2 | 3.96964478493 | 3.84809792042 | 3.72655105591 | 3.94533541203 | 3.95749009848 |
| nova.delete_server | 2 | 4.48610281944 | 4.46519696712 | 4.44429111481 | 4.48192164898 | 4.48401223421 |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
+-----------+-----------+-----------+--------------+---------------+---------------+-------------+
| max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile | success/total | total times |
+-----------+-----------+-----------+--------------+---------------+---------------+-------------+
| n/a | n/a | n/a | n/a | n/a | 0 | 1 |
+-----------+-----------+-----------+--------------+---------------+---------------+-------------+
5/11/2014 · Document v2.0 · Slide 72
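The summary rows in the tables above (max / avg / min / 90th / 95th percentile over the collected per-action times) can be reproduced from raw timing samples. The sketch below is an illustrative helper, not the harness's actual code; it uses linear interpolation between ranks, which may differ slightly from the exact percentile method the benchmark tooling applies.

```python
# Sketch: derive a summary row (max/avg/min/p90/p95) from raw per-action
# timings, similar to the tables above. The percentile interpolation here
# is an assumption; the benchmark harness may use a different method.
def summarize(times):
    ts = sorted(times)

    def pct(p):
        # linear interpolation between the two nearest ranks
        k = (len(ts) - 1) * p
        lo = int(k)
        hi = min(lo + 1, len(ts) - 1)
        return ts[lo] + (ts[hi] - ts[lo]) * (k - lo)

    return {
        "max": ts[-1],
        "avg": sum(ts) / len(ts),
        "min": ts[0],
        "p90": pct(0.90),
        "p95": pct(0.95),
    }

# Example with five hypothetical boot times (seconds)
stats = summarize([2.9, 3.1, 3.5, 4.7, 5.2])
```

Feeding in the per-run `nova.boot_server` durations from the raw results would yield numbers in the same shape as the summary tables shown above.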
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersNicole Novielli
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESmohitsingh558521
 

Recently uploaded (20)

The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
(How to Program) Paul Deitel, Harvey Deitel-Java How to Program, Early Object...
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
 

KVM and docker LXC Benchmarking with OpenStack

  • 3. FAQ Continued
 Should I act on these results?
– I believe the results provide enough information to generate interest. I expect any organization, group or individual considering action as a result will perform their own validation to assert the technology choice is beneficial for their consumption prior to adoption.
 Is further / deeper testing and investigation warranted?
– Absolutely. These tests should be conducted in a more active manner to understand the root causes of any differences. Additional tests and variations are also needed, including: various KVM disk cache modes, skinny VM images (i.e. JeOS), impacts of database settings, docker storage drivers, etc.
 Is this a direct measurement of the hypervisor (KVM) or LXC engine (docker)?
– No; many factors play into the results. For example, the compute node runs the nova virt driver, which obviously differs in implementation between nova libvirt-kvm and nova-docker. Thus its implementation *may* have an impact on the compute node metrics and performance.
  • 4. Revision History
 V1.0 – Initial document release
 V2.0
– All tests were re-run using a single docker image throughout the tests (see my Dockerfile).
– As the result of an astute reader, the 15 VM serial “packing” test reflects VM boot overhead rather than steady-state; this version clarifies such claims.
– A new Cloudy test was added to better understand steady-state CPU.
– Rather than presenting direct claims of density, raw data and graphs are presented to let the reader draw their own conclusions.
– Additional “in the guest” tests were performed, including blogbench.
  • 5. Why Linux Containers (LXC)
 Fast – Runtime performance near bare metal speeds – Management operations (run, stop, start, etc.) in seconds / milliseconds
 Agile – VM-like agility – it’s still “virtualization” – Seamlessly “migrate” between virtual and bare metal environments
 Flexible – Containerize a “system” – Containerize “application(s)”
 Lightweight – Just enough Operating System (JeOS) – Minimal per container penalty
 Inexpensive – Open source – free – lower TCO – Supported with out-of-the-box modern Linux kernel
 Ecosystem – Growing in popularity – Vibrant community & numerous 3rd party apps
  • 6. Hypervisors vs. Linux Containers
[Diagram: Type 1 hypervisor (VMs on hypervisor on hardware), Type 2 hypervisor (VMs on hypervisor on host OS), and Linux Containers (containers directly on the host OS), side by side]
Containers share the OS kernel of the host and thus are lightweight. However, each container must use the same OS kernel. Containers are isolated, but share the OS and, where appropriate, libs / bins.
  • 7. Hypervisor VM vs. LXC vs. Docker LXC
  • 8. Docker in OpenStack
 Havana
– Nova virt driver (nova-docker) which integrates with the docker REST API on the backend
– Glance translator to integrate docker images with Glance
 Icehouse
– Heat plugin for docker (DockerInc::Docker::Container)
 Both options are still under development
  • 9. About This Benchmark
 Use case perspective
– As an OpenStack Cloud user I want an Ubuntu based VM with MySQL… why would I choose docker LXC vs a traditional hypervisor?
 OpenStack “Cloudy” perspective
– LXC vs. traditional VM from a Cloudy (OpenStack) perspective
– VM operational times (boot, start, stop, snapshot)
– Compute node resource usage (per VM penalty); density factor
 Guest runtime perspective
– CPU, memory, file I/O, MySQL OLTP, etc.
 Why KVM?
– Exceptional performance
DISCLAIMERS: The tests herein are semi-active litmus tests – no in-depth tuning, analysis, etc. More active testing is warranted. These results do not necessarily reflect your workload or exact performance, nor are they guaranteed to be statistically sound.
  • 10. Benchmark Environment Topology @ SoftLayer
[Diagram: two identical OpenStack deployments hosted at SoftLayer – each with a controller (glance api/reg, nova api/cond/etc, keystone, cinder api/sch/vol, rally) and a compute node running dstat, one compute node on docker LXC and the other on KVM]
  • 11. Benchmark Specs (hosted @ SoftLayer)
 Controller node (4 CPU x 8G RAM, bare metal): SuperMicro X8SIE-F Intel Xeon QuadCore SingleProc SATA [1Proc] motherboard; Intel Xeon-Lynnfield 3470 quad-core CPU [2.93GHz]; 2 x Kingston 4GB DDR3 2Rx8; Western Digital WD Caviar RE3 WD5002ABYS [500GB] SATAII local HDD; eth0/eth1 @ 100 Mbps; Ubuntu 12.04 LTS 64-bit; kernel 3.5.0-48-generic; deadline IO scheduler; OpenStack trunk master via devstack; benchmark client: OpenStack project rally
 Compute node (16 CPU x 96G RAM, bare metal): SuperMicro X8DTU-F_R2 Intel Xeon HexCore DualProc [2Proc] motherboard; 2 x Intel Xeon-Westmere 5620 quad-core CPU [2.4GHz]; 6 x Kingston 16GB DDR3 2Rx4; Western Digital WD Caviar RE4 WD5003ABYX [500GB] SATAII local HDD; eth0/eth1 @ 100 Mbps; Ubuntu 12.04 LTS 64-bit; kernel 3.8.0-38-generic; deadline IO scheduler
 Hypervisors tested: KVM 1.0 + virtio + KSM (memory deduplication); docker 0.10.0 + go1.2.1 + commit dc9c28f + AUFS
 OpenStack (compute): trunk master via devstack; libvirt KVM nova driver / nova-docker virt driver
 Metrics collection: dstat
 Guest benchmark drivers: sysbench 0.4.12, mbw 1.1.1-2, iibench (py), netperf 2.5.0-1, blogbench 1.1, cpu_bench.py
 VM images: scenario 1 (KVM): official ubuntu 12.04 image + mysql, snapshotted and exported to qcow2 – 1080 MB; scenario 2 (docker): guillermo/mysql – 381.5 MB
  • 12. Test Descriptions: Cloudy Benchmarks
 Serial VM boot (15 VMs) – driver: OpenStack Rally – Boot VM from image; wait for ACTIVE state; repeat the above a total of 15 times; delete VMs
 Compute node steady-state VM packing – driver: cpu_bench.py – Boot 15 VMs in async fashion; sleep for 5 minutes (wait for steady-state); delete all 15 VMs in async fashion
 VM reboot (5 VMs rebooted 5 times each) – driver: OpenStack Rally – Boot VM from image; wait for ACTIVE state; soft reboot VM 5 times; delete VM; repeat the above a total of 5 times
 VM snapshot (1 VM, 1 snapshot) – driver: OpenStack Rally – Boot VM from image; wait for ACTIVE state; snapshot VM to glance image; delete VM
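The Rally-driven scenarios above can be expressed as Rally task files. A minimal sketch of the serial boot test, assuming the stock NovaServers.boot_and_delete_server scenario and hypothetical image/flavor names ("ubuntu-mysql", "m1.medium" – substitute your own):

```json
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "image": {"name": "ubuntu-mysql"},
        "flavor": {"name": "m1.medium"}
      },
      "runner": {"type": "serial", "times": 15}
    }
  ]
}
```

Started with `rally task start <file>`, Rally records per-operation timings (boot, delete) of the kind charted in the following slides.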
  • 13. Test Descriptions: Guest Benchmarks
 CPU performance – sysbench from within the guest – Clear memory cache; run sysbench cpu test; repeat a total of 5 times; average results over the 5 runs
 OLTP (MySQL) performance – sysbench from within the guest – Clear memory cache; run sysbench OLTP test; repeat a total of 5 times; average results over the 5 runs
 MySQL indexed insertion – iibench – Clear memory cache; run iibench for a total of 1M inserts printing stats at 100K intervals; collect data over 5 runs & average
 File I/O performance – sysbench from within the guest (synchronous IO) – Clear memory cache; run sysbench fileio test; repeat a total of 5 times; average results over the 5 runs
 Memory performance – mbw from within the guest – Clear memory cache; run mbw with an array size of 1000 MiB and each test 10 times; collect the average over 10 runs per test
 Network performance – netperf – Run netperf server on the controller; from the guest run the netperf client in IPv4 mode; repeat the test 5x; average results
 Application type performance – blogbench – Clear memory cache; run blogbench for 5 minutes; repeat 5 times; average read / write scores
  • 14. STEADY STATE VM PACKING (OpenStack Cloudy Benchmark)
  • 15. Cloudy Performance: Steady State Packing
 Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot 15 VMs asynchronously in succession
– Wait for 5 minutes (to achieve steady-state on the compute node)
– Delete all 15 VMs asynchronously in succession
 Benchmark driver – cpu_bench.py
 High level goals – Understand compute node characteristics under steady-state conditions with 15 packed / active VMs
[Chart: benchmark visualization – active VMs over time]
  • 16. Cloudy Performance: Steady State Packing
[Chart: Docker compute node CPU, full test duration – usr/sys averages 0.54 / 0.17]
[Chart: KVM compute node CPU, full test duration – usr/sys averages 7.64 / 1.4]
  • 17. Cloudy Performance: Steady State Packing
[Chart: Docker compute node steady-state CPU, segment 31s–243s – usr/sys averages 0.2 / 0.03]
[Chart: KVM compute node steady-state CPU, segment 95s–307s – usr/sys averages 1.91 / 0.36]
  • 18. Cloudy Performance: Steady State Packing
[Chart: Docker / KVM compute node steady-state CPU, segment overlay (docker 31s–243s vs. KVM 95s–307s) – Docker usr/sys averages 0.2 / 0.03; KVM usr/sys averages 1.91 / 0.36]
  • 19. Cloudy Performance: Steady State Packing
[Chart: Docker compute node used memory, full test duration – memory delta 734 MB; per VM 49 MB]
[Chart: KVM compute node used memory, full test duration – memory delta 4387 MB; per VM 292 MB]
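The per-VM figures on this slide are simply the measured compute-node used-memory delta divided across the 15 packed instances; a quick arithmetic check:

```python
# Compute-node used-memory deltas measured over the 15-VM packing test (MB)
docker_delta_mb = 734
kvm_delta_mb = 4387
vms = 15

docker_per_vm = round(docker_delta_mb / vms)  # per-container memory penalty
kvm_per_vm = round(kvm_delta_mb / vms)        # per-VM memory penalty

print(docker_per_vm, kvm_per_vm)  # prints: 49 292
```

This matches the slide: roughly 49 MB of host memory per docker container vs. roughly 292 MB per KVM guest.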
  • 20. Cloudy Performance: Steady State Packing
[Chart: Docker / KVM compute node used memory, overlay]
  • 21. Cloudy Performance: Steady State Packing
[Chart: Docker compute node 1-minute load average, full test duration – average 0.15]
[Chart: KVM compute node 1-minute load average, full test duration – average 35.9]
  • 22. SERIALLY BOOT 15 VMS (OpenStack Cloudy Benchmark)
  • 23. Cloudy Performance: Serial VM Boot
 Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot VM
– Wait for VM to become ACTIVE
– Repeat the above steps for a total of 15 VMs
– Delete all VMs
 Benchmark driver – OpenStack Rally
 High level goals – Understand compute node characteristics under sustained VM boots
[Chart: benchmark visualization – active VMs over time]
  • 24. Cloudy Performance: Serial VM Boot
[Chart: average server boot time – docker 3.529113102 s; KVM 5.781662448 s]
  • 25. Cloudy Performance: Serial VM Boot
[Chart: Docker compute node CPU – usr/sys averages 1.39 / 0.57]
[Chart: KVM compute node CPU usage – usr/sys averages 13.45 / 2.23]
  • 26. Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM compute node CPU, unnormalized overlay]
  • 27. Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM serial VM boot usr CPU, segment 8s–58s – linear fits: docker y = 0.0095x + 1.008; KVM y = 0.3582x + 1.0633]
  • 28. Cloudy Performance: Serial VM Boot
[Chart: Docker compute node memory used – memory delta 677 MB; per VM 45 MB]
[Chart: KVM compute node memory used – memory delta 2737 MB; per VM 182 MB]
  • 29. Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM compute node memory used, unnormalized overlay]
  • 30. Cloudy Performance: Serial VM Boot
[Chart: Docker / KVM serial VM boot memory usage, segment 1s–67s – linear fits: docker y = 1E+07x + 1E+09; KVM y = 3E+07x + 1E+09]
  • 31. Cloudy Performance: Serial VM Boot
[Chart: Docker compute node 1-minute load average – average 0.25]
[Chart: KVM compute node 1-minute load average – average 11.18]
  • 32. SERIAL VM SOFT REBOOT (OpenStack Cloudy Benchmark)
  • 33. Cloudy Performance: Serial VM Reboot
 Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM & wait for it to become ACTIVE
– Soft reboot the VM and wait for it to become ACTIVE; repeat reboot a total of 5 times
– Delete VM
– Repeat the above for a total of 5 VMs
 Benchmark driver – OpenStack Rally
 High level goals – Understand compute node characteristics under sustained VM reboots
[Chart: benchmark visualization – active VMs over time]
  • 34. Cloudy Performance: Serial VM Reboot
[Chart: average server reboot time – docker 2.577879581 s; KVM 124.433239 s]
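The gap on this slide is the starkest in the deck: a KVM soft reboot cycles a full guest OS shutdown and boot, while restarting a container is essentially a process restart. Putting the measured averages side by side:

```python
# Average soft-reboot times from the chart above (seconds)
docker_reboot_s = 2.577879581
kvm_reboot_s = 124.433239

# How many times faster the container soft reboot is
ratio = kvm_reboot_s / docker_reboot_s
print(round(ratio, 1))  # prints: 48.3
```

In these runs a docker soft reboot completes roughly 48x faster than a KVM soft reboot.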
  • 35. Cloudy Performance: Serial VM Reboot
[Chart: average server delete time – docker 3.567586041 s; KVM 3.479760051 s]
  • 36. Cloudy Performance: Serial VM Reboot
[Charts: Docker compute node CPU and KVM compute node CPU – usr/sys averages 0.69 / 0.26 and 0.84 / 0.18]
  • 37. Cloudy Performance: Serial VM Reboot
[Chart: Docker compute node used memory – memory delta 48 MB]
[Chart: KVM compute node used memory – memory delta 486 MB]
  • 38. Cloudy Performance: Serial VM Reboot
[Chart: Docker compute node 1-minute load average – average 0.4]
[Chart: KVM compute node 1-minute load average – average 0.33]
  • 39. SNAPSHOT VM TO IMAGE (OpenStack Cloudy Benchmark)
  • 40. Cloudy Performance: Snapshot VM To Image
 Benchmark scenario overview
– Pre-cache VM image on compute node prior to test
– Boot a VM
– Wait for it to become ACTIVE
– Snapshot the VM
– Wait for the image to become ACTIVE
– Delete VM
 Benchmark driver – OpenStack Rally
 High level goals – Understand cloudy ops times from a user perspective
  • 41. Cloudy Performance: Snapshot VM To Image
[Chart: average snapshot server time – docker 36.88756394 s; KVM 48.02313805 s]
  • 42. Cloudy Performance: Snapshot VM To Image
[Chart: Docker compute node CPU – usr/sys averages 0.42 / 0.15]
[Chart: KVM compute node CPU – usr/sys averages 1.46 / 1.0]
  • 43. Cloudy Performance: Snapshot VM To Image
[Chart: KVM compute node used memory – memory delta 114 MB]
[Chart: Docker compute node memory used – memory delta 57 MB]
  • 44. Cloudy Performance: Snapshot VM To Image
[Chart: Docker compute node 1-minute load average – average 0.06]
[Chart: KVM compute node 1-minute load average – average 0.47]
  • 45. GUEST PERFORMANCE BENCHMARKS (Guest VM Benchmark)
  • 46. Configuring Docker Container for 2CPU x 4G RAM
 Pin container to 2 CPUs / memory nodes
– Create cpuset cgroup
– Pin group cpuset.mems to 0,1
– Pin group cpuset.cpus to 0,1
– Add container root proc to tasks
 Limit container memory to 4G
– Create memory cgroup
– Set memory.limit_in_bytes to 4G
– Add container root proc to tasks
 Limit blkio
– Create blkio cgroup
– Add container root process of LXC to tasks
– Default blkio.weight of 500
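The steps above can be sketched as a cgroup v1 setup script. This is a minimal sketch, not the exact script used in the benchmark; it assumes a v1 hierarchy mounted at /sys/fs/cgroup, a hypothetical group name "lxcbench", the container's root PID in $PID, and root privileges:

```
# Pin the container to CPUs 0,1 and memory nodes 0,1 via a cpuset cgroup
mkdir -p /sys/fs/cgroup/cpuset/lxcbench
echo 0,1 > /sys/fs/cgroup/cpuset/lxcbench/cpuset.cpus
echo 0,1 > /sys/fs/cgroup/cpuset/lxcbench/cpuset.mems
echo "$PID" > /sys/fs/cgroup/cpuset/lxcbench/tasks

# Cap container memory at 4G via a memory cgroup
mkdir -p /sys/fs/cgroup/memory/lxcbench
echo 4G > /sys/fs/cgroup/memory/lxcbench/memory.limit_in_bytes
echo "$PID" > /sys/fs/cgroup/memory/lxcbench/tasks

# blkio cgroup with the default weight of 500
mkdir -p /sys/fs/cgroup/blkio/lxcbench
echo 500 > /sys/fs/cgroup/blkio/lxcbench/blkio.weight
echo "$PID" > /sys/fs/cgroup/blkio/lxcbench/tasks
```

This mirrors the KVM guest shape (2 vCPU, 4G RAM) so the in-guest benchmarks compare like-for-like resource limits.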
  • 47. Guest Performance: CPU  Linux sysbench 0.4.12 cpu test  Calculate prime numbers up to 20000  2 threads  Instance size – 4G RAM – 2 CPU cores – 20G disk 5/11/2014 47Document v2.0
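The CPU test described above corresponds roughly to the following sysbench 0.4.x invocation (thread count and prime limit taken from the slide; the flag names follow the sysbench 0.4 CLI):

```shell
# sysbench 0.4.x CPU test: compute primes up to 20000 on 2 threads
sysbench --test=cpu --cpu-max-prime=20000 --num-threads=2 run
```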
  • 48. Guest Performance: CPU 5/11/2014 48 [chart] Calculate Primes Up To 20000 (seconds, lower is better): Bare Metal 15.26, docker 15.22, KVM 15.13 Document v2.0
  • 49. Guest Performance: Memory  Linux mbw 1.1.1-2  Instance size – 2 CPU – 4G memory  Execution options – 10 runs; average – 1000 MiB 5/11/2014 49Document v2.0
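The memory test above maps to a single mbw invocation; this is an assumed reconstruction from the parameters on the slide (10 runs averaged, 1000 MiB arrays — mbw reports MEMCPY, DUMB, and MCBLOCK bandwidth by default):

```shell
# mbw: 10 runs over a 1000 MiB array, averages printed per test
mbw -n 10 1000
```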
  • 50. Guest Performance: Memory 5/11/2014 50 [chart] Memory Benchmark Performance (MiB/s): MEMCPY — Bare Metal 3823.3, docker 3813.38, KVM 3428.95; DUMB — Bare Metal 4393.3, docker 4395.92, KVM 3461.59; MCBLOCK — Bare Metal 12881.61, docker 12905.68, KVM 7223.23 Document v2.0
  • 51. Guest Performance: Network  Netperf 2.5.0-1 – Netserver running on controller – Netperf on guest – Run netperf 5 times & average results  Instance size – 2 CPU – 4G memory  Execution options – IPv4 / TCP 5/11/2014 51Document v2.0
  • 52. Guest Performance: Network 5/11/2014 52 [chart] Network Throughput (10^6 bits/sec): docker 940.26, KVM 940.56 Document v2.0
  • 53. Guest Performance: File I/O Random Read  Linux sysbench 0.4.12 fileio test – Synchronous IO – Random read – Total file size of 150G – 16K block size – Test duration of 100s  Thread variations: 1, 2, 4, 8, 16, 32, 64  Instance size – 4G RAM – 2 CPU cores – 200G disk  KVM specs – Disk cache mode set to none – Virtio – Deadline scheduler (host & guest)  Docker specs – AUFS storage driver – Deadline scheduler 5/11/2014 53 Document v2.0
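The random-read file I/O test above corresponds to roughly the following sysbench 0.4.x commands (flag names follow the sysbench 0.4 CLI; the thread count shown is one of the variations tested):

```shell
# Prepare 150G of test files, run a 100s synchronous random-read test
# with 16K blocks, then clean up
sysbench --test=fileio --file-total-size=150G prepare
sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrd \
         --file-block-size=16K --file-io-mode=sync \
         --max-time=100 --max-requests=0 --num-threads=16 run
sysbench --test=fileio --file-total-size=150G cleanup
```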
  • 54. Guest Performance: File I/O Random Read 5/11/2014 54 [chart] Sysbench Synchronous File I/O Random Read — total transferred (KB/s, 0–2500) vs. threads (1–64), docker vs. KVM Document v2.0
  • 55. Guest Performance: File I/O Random Read / Write  Linux sysbench 0.4.12 fileio test – Synchronous IO – Random read / write – Total file size of 150G – 16K block size – Read/Write ratio for combined random IO test: 1.50 – Test duration of 100s  Thread variations: 1, 2, 4, 8, 16, 32, 64  Instance size – 4G RAM – 2 CPU cores – 200G disk  KVM specs – Disk cache mode set to none – Virtio – Deadline scheduler (host & guest)  Docker specs – AUFS storage driver – Deadline scheduler 5/11/2014 55 Document v2.0
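The combined random read/write variant differs from the random-read test only in the test mode and the read-to-write ratio; a sketch using the sysbench 0.4.x flags:

```shell
# 100s synchronous combined random read/write test over 150G of files,
# 16K blocks, 1.50 read-to-write ratio
sysbench --test=fileio --file-total-size=150G --file-test-mode=rndrw \
         --file-block-size=16K --file-io-mode=sync --file-rw-ratio=1.5 \
         --max-time=100 --max-requests=0 --num-threads=16 run
```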
  • 56. Guest Performance: File I/O Random Read / Write 5/11/2014 56 [chart] Sysbench Synchronous File I/O Random Read/Write @ R/W Ratio of 1.50 — total transferred (KB/s, 0–1600) vs. threads (1–64), docker vs. KVM Document v2.0
  • 57. Guest Performance: MySQL OLTP  Linux sysbench 0.4.12 oltp test – Table size of 2,000,000 – MySQL 5.5 (installed on Ubuntu 12.04 LTS with apt-get) – 60 second iterations – Default MySQL cnf settings  Variations – Number of threads – Transactional random read & transactional random read / write  Instance size – 4G RAM – 2 CPU cores – 20G disk 5/11/2014 57Document v2.0
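The OLTP test above can be reconstructed roughly as follows (sysbench 0.4.x flags; the MySQL credentials and thread count are placeholders, and the read-only flag toggles between the transactional-read and read/write variations):

```shell
# Create the 2,000,000-row test table, then run a 60s read-only OLTP pass
sysbench --test=oltp --oltp-table-size=2000000 \
         --mysql-user=root --mysql-password="$MYSQL_PW" prepare
sysbench --test=oltp --oltp-table-size=2000000 --oltp-read-only=on \
         --max-time=60 --max-requests=0 --num-threads=8 \
         --mysql-user=root --mysql-password="$MYSQL_PW" run
# For the read/write variation, drop --oltp-read-only=on
```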
  • 58. Guest Performance: MySQL OLTP 5/11/2014 58 [chart] MySQL OLTP Random Transactional Reads (60s) — total transactions (0–50000) vs. threads (1–64), docker vs. KVM Document v2.0
  • 59. Guest Performance: MySQL OLTP 5/11/2014 59 [chart] MySQL OLTP Random Transactional R/W (60s) — total transactions (0–14000) vs. threads (1–64), docker vs. KVM Document v2.0
  • 60. Guest Performance: MySQL Indexed Insertion  Indexed insertion benchmark (iibench python script) – A total of 1,000,000 insertions – Print stats at 100K intervals  Instance size – 4G RAM – 2 CPU cores – 20G disk 5/11/2014 60Document v2.0
  • 61. Guest Performance: MySQL Indexed Insertion 5/11/2014 61 [chart] MySQL Indexed Insertion @ 100K Intervals — seconds per 100K insertion batch (0–140) vs. table size in rows (100K–1M), docker vs. kvm Document v2.0
  • 62. Guest Performance: BlogBench  Blogbench 1.1 – Test duration of 5m – Average results over 5 iterations of test  Instance size – 4G RAM – 2 CPU cores – 200G disk  KVM specs – Disk cache mode set to none – Virtio – Deadline scheduler (host & guest)  Docker specs – AUFS storage driver – Deadline scheduler 5/11/2014 62Document v2.0
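The Blogbench run above can be sketched as below; `-d` names the working directory for the test (the path is illustrative), and the slide's methodology averages the final read and write scores over 5 iterations of the full test:

```shell
# Blogbench exercises concurrent file creation, rewrites and reads
# under the given directory, then prints final read and write scores
mkdir -p /tmp/blogbench
blogbench -d /tmp/blogbench
```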
  • 63. Guest Performance: BlogBench 5/11/2014 63 [chart] Blogbench Read Scores: docker 398772.6, KVM 384769; Blogbench Write Scores: docker 1526.6, KVM 1285 Document v2.0
  • 65. Cloud Management Impacts on LXC 5/11/2014 65 [chart] Docker: Boot Container — CLI vs Nova Virt (seconds): docker cli 0.17, nova-docker 3.53 Cloud management often caps true ops performance of LXC Document v2.0
  • 66. Ubuntu MySQL Image Size 5/11/2014 66 [chart] Docker / KVM: Ubuntu MySQL image size (MB): docker 381.5, kvm 1080 Out of the box JeOS images for docker are lightweight Document v2.0
  • 67. Other Observations  Micro “synthetic” benchmarks do not reflect macro “application” performance – Always benchmark your “real” workload  Nova-docker virt driver still under development – Great start, but additional features needed for parity (python anyone?) – Additions to the nova-docker driver could change Cloudy performance  Docker LXC is still under development – Docker has not yet released v1.0 for production readiness  KVM images can be made skinnier, but requires additional effort  Increased density / oversubscription imposes additional complexity – Techniques to handle resource consumption surges which exceed capacity 5/11/2014 Document v2.0 67
  • 69. References & Related Links  http://www.slideshare.net/BodenRussell/realizing-linux-containerslxc  http://www.slideshare.net/BodenRussell/kvm-and-docker-lxc-benchmarking-with- openstack  https://github.com/bodenr/cloudy-docker-kvm-bench  https://www.docker.io/  http://sysbench.sourceforge.net/  http://dag.wiee.rs/home-made/dstat/  http://www.openstack.org/  https://wiki.openstack.org/wiki/Rally  https://wiki.openstack.org/wiki/Docker  http://devstack.org/  http://www.linux-kvm.org/page/Main_Page  https://github.com/stackforge/nova-docker  https://github.com/dotcloud/docker-registry  http://www.netperf.org/netperf/  http://www.tokutek.com/products/iibench/  http://www.brendangregg.com/activebenchmarking.html 5/11/2014 69Document v2.0
  • 70. Cloudy Benchmark: Serially Boot 15 VMs
 KVM
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action           | count | max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile |
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.boot_server | 15    | 7.37148094177 | 5.78166244825 | 4.77369403839 | 6.67956886292 | 7.07061390877 |
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| 7.58968496323 | 6.00853565534 | 4.99443006516 | 6.91288709641 | 7.28662061691 | 1.0           | 15          |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
 Docker
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action           | count | max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile |
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.boot_server | 15    | 5.18499684334 | 3.52911310196 | 2.93864893913 | 4.74490590096 | 4.95752367973 |
+------------------+-------+---------------+---------------+---------------+---------------+---------------+
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| 5.43275094032 | 3.77053097089 | 3.12985610962 | 4.95886874199 | 5.18047580719 | 1.0           | 15          |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
5/11/2014 70 Document v2.0
  • 71. Cloudy Performance: Serial VM Reboot
 KVM
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action             | count | max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.reboot_server | 10    | 124.900292158 | 124.433238959 | 123.947879076 | 124.881286669 | 124.890789413 |
| nova.boot_server   | 2     | 7.05096197128 | 6.82815694809 | 6.6053519249  | 7.00640096664 | 7.02868146896 |
| nova.delete_server | 2     | 4.46658396721 | 3.47976005077 | 2.49293613434 | 4.26921918392 | 4.36790157557 |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| 633.087348938 | 632.493344903 | 631.899340868 | 632.968548131 | 633.027948534 | 0.4           | 5           |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
 Docker
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action             | count | max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.reboot_server | 25    | 4.48567795753 | 2.57787958145 | 2.35410904884 | 3.0847319603  | 3.48342533112 |
| nova.boot_server   | 5     | 4.16244912148 | 3.5675860405  | 3.05103397369 | 4.03664107323 | 4.09954509735 |
| nova.delete_server | 5     | 3.54331803322 | 3.52483625412 | 3.50456190109 | 3.53761086464 | 3.54046444893 |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| 21.5702910423 | 19.9976443768 | 18.7037060261 | 20.997631073  | 21.2839610577 | 1.0           | 5           |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
5/11/2014 71 Document v2.0
  • 72. Cloudy Performance: Snapshot VM To Image
 KVM
+--------------------+-------+----------------+----------------+----------------+----------------+----------------+
| action             | count | max (sec)      | avg (sec)      | min (sec)      | 90 percentile  | 95 percentile  |
+--------------------+-------+----------------+----------------+----------------+----------------+----------------+
| nova.delete_image  | 1     | 0.726859092712 | 0.726859092712 | 0.726859092712 | 0.726859092712 | 0.726859092712 |
| nova.create_image  | 1     | 48.0231380463  | 48.0231380463  | 48.0231380463  | 48.0231380463  | 48.0231380463  |
| nova.boot_server   | 2     | 32.7824101448  | 19.4164011478  | 6.05039215088  | 30.1092083454  | 31.4458092451  |
| nova.delete_server | 2     | 12.3564949036  | 8.40917897224  | 4.46186304092  | 11.5670317173  | 11.9617633104  |
+--------------------+-------+----------------+----------------+----------------+----------------+----------------+
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile | success/total | total times |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
| 104.401446104 | 104.401446104 | 104.401446104 | 104.401446104 | 104.401446104 | 1.0           | 1           |
+---------------+---------------+---------------+---------------+---------------+---------------+-------------+
 Docker (defect deleting image)
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| action             | count | max (sec)     | avg (sec)     | min (sec)     | 90 percentile | 95 percentile |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
| nova.create_image  | 1     | 36.8875639439 | 36.8875639439 | 36.8875639439 | 36.8875639439 | 36.8875639439 |
| nova.boot_server   | 2     | 3.96964478493 | 3.84809792042 | 3.72655105591 | 3.94533541203 | 3.95749009848 |
| nova.delete_server | 2     | 4.48610281944 | 4.46519696712 | 4.44429111481 | 4.48192164898 | 4.48401223421 |
+--------------------+-------+---------------+---------------+---------------+---------------+---------------+
+-----------+-----------+-----------+---------------+---------------+---------------+-------------+
| max (sec) | avg (sec) | min (sec) | 90 percentile | 95 percentile | success/total | total times |
+-----------+-----------+-----------+---------------+---------------+---------------+-------------+
| n/a       | n/a       | n/a       | n/a           | n/a           | 0             | 1           |
+-----------+-----------+-----------+---------------+---------------+---------------+-------------+
5/11/2014 72 Document v2.0