1. An Analysis and Empirical Study of Docker Networking
Yusuf HARUNA
University of Nice Sophia Antipolis/UCA, France
3 July 2019
Master II IFI - UBINET Internship Oral Defence
Supervisors: Guillaume Urvoy-Keller & Dino Lopez-Pacheco, i3s Laboratory
Y. Haruna (Ubinet Master) An Analysis and Emp. Study of Docker Net. 3 July 2019 1 / 20
2. Outline
1 Introduction
2 Three popular cloud Applications
3 Benchmarks
4 Container Networking modes
5 Experimental Results
6 RSS/RPS
7 Conclusion
3. Introduction
Traditional Virtualization Vs Lightweight Virtualization
Source: https://www.docker.com/
4. Introduction
Motivations
High use of container-based virtualization in the cloud and by search engines, e.g.
Google launches about 7,000 containers every second.
Objectives
Understand the performance of different Docker networking solutions,
Build a realistic testbed: select some applications + benchmarks,
Run the benchmarks on our testbed and monitor system-level performance,
Check if we can reduce the overhead of the overlay networks using OS/hardware
support.
Challenges
Tune the testbed and do some tests with RSS/RPS
Hardware dependency
5. Three popular cloud Applications + iperf3
An in-memory key-value store: Memcached, stresses mostly memory +
a bit of network
A web server: Nginx, stresses mostly the network
A relational database server: PostgreSQL, stresses the network
depending on the query + more I/O
6. Benchmarks
iperf3: to test maximum achievable throughput on IP networks.
∗ TCP throughput
∗ UDP throughput
memtier_benchmark: to measure the performance of memcached.
∗ Spawns 4 threads
∗ Each thread creates 50 TCP connections
∗ Reports the average number of responses/second, the average latency
to respond to a memcached command + SET/GET latency
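As a sketch, the two benchmarks above could be invoked as follows (the server address, durations, and port are assumptions for illustration, not taken from the slides):

```shell
# iperf3: server on one host, client on the other (SERVER_IP is an assumed address)
SERVER_IP=10.0.0.2
iperf3 -s -D                          # on the server host: run as a daemon
iperf3 -c "$SERVER_IP" -t 30          # TCP throughput test, 30 s
iperf3 -c "$SERVER_IP" -u -b 0 -t 30  # UDP test, unlimited target bitrate

# memtier_benchmark against memcached: 4 threads x 50 connections = 200 TCP connections
memtier_benchmark -s "$SERVER_IP" -p 11211 \
  -P memcache_binary -t 4 -c 50 --test-time=60
```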
7. Benchmarks
wrk2: to measure the performance of nginx server.
∗ Spawns two threads
∗ Creates a total of 100 TCP connections to make a request to the server
∗ The throughput in requests/second can be set in the tool
∗ Outputs latency
pgbench: to measure the performance of PostgreSQL server.
∗ Creates a database of one million banking accounts
∗ Executes transactions with a total of 100 connections with 4 threads
∗ Outputs latency
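The wrk2 and pgbench runs described above could look like this (the server address, database name, and run durations are assumptions; pgbench scale factor 10 yields one million rows in pgbench_accounts):

```shell
# wrk2 (binary name: wrk): 2 threads, 100 connections, fixed request rate via -R
wrk -t2 -c100 -d60s -R3000 --latency http://10.0.0.2/index.html

# pgbench: initialize ~1 million banking accounts (scale factor 10), then run
# 100 client connections driven by 4 worker threads for 60 s
pgbench -i -s 10 -h 10.0.0.2 -U postgres bench
pgbench -c 100 -j 4 -T 60 -h 10.0.0.2 -U postgres bench
```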
8. Container Networking modes
Figure 1: Experimental setup
Two ways to deploy containers: on VMs for security (e.g. cloud providers) or on a physical machine (e.g. Google)
9. Container Networking modes on multiple hosts
Host mode: In this mode, containers share the network namespace of the
host OS.
Source: https://www.onug.net/blog/
container-networking-easy-button-for-app-teams-heartburn-for-networking-and-security-teams/
A namespace is a way of logically separating processes along different dimensions:
Network, IPC, User, PID, Mount or UTS namespaces.
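A minimal sketch of host mode with the Docker CLI (the image choices are illustrative):

```shell
# Host mode: the container shares the host's network namespace,
# so nginx binds directly to the host's port 80 (no NAT, no veth pair)
docker run -d --network host nginx

# The container sees the host's own interfaces:
docker run --rm --network host alpine ip addr
```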
10. Container Networking modes
NAT (Network Address Translation):
∗ Allows containers to communicate with the outside using the public IP
address of their host machine + a port number mapped to the container
∗ The mapping between a container's private address/port and the host
port is kept in a NAT table
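As a sketch, the default bridge/NAT mode and the resulting NAT table entry can be observed as follows (container name and port numbers are assumptions):

```shell
# Default bridge network: publish container port 80 on host port 8080
docker run -d -p 8080:80 --name web nginx

# Inspect the port mapping Docker created
docker port web

# The corresponding DNAT rule lives in the host's NAT table (requires root)
sudo iptables -t nat -L DOCKER -n
```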
11. Container Networking modes
Docker default overlay network: It uses Virtual Extensible LAN (VXLAN) to
connect containers on multiple hosts.
Source: https://www.youtube.com/watch?v=Jqm_4TMmQz8
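A minimal sketch of the built-in overlay driver, which requires swarm mode (the network name is an assumption):

```shell
# On the first host: initialize a swarm (required for the built-in overlay driver)
docker swarm init
# On the other hosts: join using the token printed by `docker swarm init`

# Create an attachable overlay network
docker network create -d overlay --attachable demo-overlay

# Containers started on any swarm node can now reach each other over VXLAN
docker run -d --network demo-overlay --name srv nginx
```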
12. Container Networking modes
Weave: another implementation of an overlay network; it runs a
weave router container on each Docker host, and the network is formed
by connecting these weave routers.
Source: https://sreeninet.wordpress.com/2015/01/18/docker-networking-weave/
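A sketch of setting up a Weave network with the classic `weave` CLI (the peer address is an assumption):

```shell
# On host A: launch the weave router container
weave launch
# On host B: launch and peer with host A
weave launch 10.0.0.1

# Point the Docker CLI at the weave proxy, then run containers as usual;
# they are attached to the weave overlay automatically
eval "$(weave env)"
docker run -d --name srv nginx
```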
13. Experimental Results
iperf3 throughput
(a) TCP (b) UDP
Figure 2: iperf3 TCP and UDP throughput
14. Experimental Results
Memcached throughput and latency
(a) Throughput (b) Latency
Figure 3: memcached throughput and latency
15. Experimental Results
Latency of Nginx server
Figure 4: Nginx 1MB html file latency on 3K reqs/sec
17. Experimental Results
(a) iperf3 (b) PostgreSQL
Figure 6: CPU utilization of iperf3 and PostgreSQL servers
18. RSS/RPS - A Linux kernel support
RSS: Receive Side Scaling
∗ A technique in the Linux networking stack to increase parallelism and
improve receive performance on multi-processor systems
∗ Contemporary NICs support multiple queues for receiving packets
∗ Upon reception, the NIC can steer different packets to different queues
to distribute processing among CPUs
∗ RPS (Receive Packet Steering) is essentially a software implementation
of RSS
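As a sketch, the hardware queues (RSS) can be inspected and RPS enabled per receive queue via sysfs (the interface name `eth0` and the CPU mask are assumptions; writing the mask requires root):

```shell
# Inspect the NIC's receive queues exposed by the kernel
ls /sys/class/net/eth0/queues/
ethtool -l eth0   # number of hardware RX/TX channels (RSS queues)

# Enable RPS on receive queue 0: the hex bitmask selects the CPUs that
# may process packets from this queue (f = CPUs 0-3)
echo f | sudo tee /sys/class/net/eth0/queues/rx-0/rps_cpus
```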
19. Conclusion
Host mode has the best performance among all the 4 modes followed by
NAT with few performance drop, while the two overlay networks (VXLAN
and weave) have more performance drop because of the double
encapsulation.
The overlay networks consume more system resources.
We would like to reduce the overhead of the overlay networks using
OS/hardware support. We started collecting some results, but some
questions remain, e.g. with memcached the throughput improved but the
latency increased slightly; we would like to investigate why.
The shell scripts of our testbed are open-sourced at https://github.com/
Yusuf-Haruna/Analysis-of-Docker-Networking-Shell-scripts.
20. Thanks for your attention
Yusuf Haruna
yusuf.haruna@etu.univ-cotedazur.fr
Questions?