
KubeCon EU 2016: Using Traffic Control to Test Apps in Kubernetes


Testing applications is important, as shown by the rise of continuous integration and automated testing. In this talk, I will focus on one area of testing that is difficult to automate: poor network connectivity. Developers usually work in reliable network conditions, so they might not notice issues that only show up when connectivity degrades. I will give examples of software that would benefit from test scenarios with varying connectivity, explain how traffic control on Linux can simulate various network conditions, and finally run a demo showing how an application running in Kubernetes behaves when network parameters change.

Sched Link: http://sched.co/6Bb3


  1. 1. Using Traffic Control to Test Apps in Kubernetes - Alban Crequy - KubeCon EU 2016, London - https://goo.gl/Zh2CMQ
  2. 2. Alban Crequy ∘ Worked on rkt the last 14 months ∘ Currently tech lead on rkt ∘ In 2014, worked on traffic control for multimedia applications in cars (tcmmd) https://github.com/alban
  3. 3. Plan ∘ What is traffic control and how does it work on Linux ∘ Using TC in containers for tests ∘ Demo ∘ In Kubernetes ∘ Demo with pings ∘ Demo with guestbook ∘ Integration in a testing framework ∘ Demo with guestbook
  4. 4. What is traffic control? How does it work on Linux?
  5. 5. Traffic control, why? [diagram: a web server and several clients exchanging traffic over THE INTERNET] ∘ fair distribution of bandwidth ∘ reserve bandwidth for specific applications ∘ avoid bufferbloat
  6. 6. Queuing disciplines (qdisc) ∘ Network scheduling algorithm: which packet to emit next? when? ∘ Configurable at run-time: /sbin/tc, Netlink ∘ Default on new network interfaces: sysctl net.core.default_qdisc [diagram: eth0 → qdisc → THE INTERNET]
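As a quick reference for the tools named on this slide, the qdisc attached to an interface and the system-wide default can be inspected from a shell (eth0 is just an example interface name):

    # show the qdisc currently attached to eth0
    tc qdisc show dev eth0
    # show which qdisc newly created interfaces get by default
    sysctl net.core.default_qdisc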
  7. 7. Linux's default qdisc: pfifo_fast ∘ First In, First Out ∘ But with 3 bands, based on the IP header's ToS field (type of service) [diagram: eth0 → FIFO 0 / FIFO 1 / FIFO 2 → THE INTERNET]
  8. 8. Stochastic Fairness Queueing (sfq) [diagram: eth0 → FIFO 0, FIFO 1, ... FIFO n, dequeued in round robin → THE INTERNET]
  9. 9. Fair Queuing Controlled Delay (fq_codel) ∘ drops packets to avoid bufferbloat ∘ similar to Random Early Detection (red), but based on delays rather than the size of the buffer ∘ set as the default by systemd since 2014 [diagram: eth0 → fq_codel dropping a packet → THE INTERNET]
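As a rough illustration of the two qdiscs above, either can be swapped in at run-time with /sbin/tc (eth0 and the sfq perturb value are arbitrary examples):

    # replace the root qdisc with Stochastic Fairness Queueing
    tc qdisc replace dev eth0 root sfq perturb 10
    # or with fq_codel, and also make it the default for new interfaces
    tc qdisc replace dev eth0 root fq_codel
    sysctl -w net.core.default_qdisc=fq_codel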
  10. 10. Traffic control for testing?
  11. 11. Network emulator (netem): bandwidth, latency, packet loss, corruption, ... [diagram: eth0 → netem → THE INTERNET]
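A minimal netem example, assuming eth0 and arbitrary impairment values, looks like this:

    # emulate a poor network: 100ms latency with 10ms jitter,
    # 1% packet loss, 0.1% corruption, and a 1mbit rate limit
    tc qdisc add dev eth0 root netem delay 100ms 10ms loss 1% corrupt 0.1% rate 1mbit
    # inspect and then remove the emulation
    tc qdisc show dev eth0
    tc qdisc del dev eth0 root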
  12. 12. Testing with containers [diagram: a testing framework configures "netem" qdiscs (bandwidth, latency, packet drop...) on eth0 inside container 1 and container 2]
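One way a testing framework can do this from the host is to enter the container's network namespace; a sketch, where $CONTAINER_PID (the PID of a process in the container) and the netem values are placeholders:

    # apply netem on eth0 inside the container's network namespace
    nsenter --target "$CONTAINER_PID" --net \
        tc qdisc add dev eth0 root netem delay 200ms loss 5%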
  13. 13. ingress / egress [diagram: a server exchanging traffic with THE INTERNET; outgoing traffic is egress, incoming traffic is ingress]
  14. 14. ingress / egress [diagram: same setup, with ingress traffic arriving on eth0 redirected to an ifb0 device]
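Since most shaping qdiscs only apply on egress, incoming traffic is usually redirected to an ifb device and shaped there on its way out. A sketch, assuming eth0 and ifb0 and an arbitrary delay:

    # load the ifb module and bring the device up
    modprobe ifb numifbs=1
    ip link set dev ifb0 up
    # redirect all ingress traffic from eth0 to ifb0
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
        action mirred egress redirect dev ifb0
    # shape the redirected traffic with an egress qdisc on ifb0
    tc qdisc add dev ifb0 root netem delay 200ms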
  15. 15. Testing a media server [diagram: an RTP server runs in a rkt pod (eth0) on the host; an egress qdisc is attached to the host-side veth1; a media player is the client]
  16. 16. Demo Try it yourself: https://github.com/kinvolk/demo
  17. 17. How it worked [diagram: as on slide 15: RTP server in a rkt pod (eth0), egress qdisc on the host-side veth1, media player as the client]
  18. 18. In Kubernetes
  19. 19. Testing with traffic control in Kubernetes [diagram: a testing framework drives pods running on Kubernetes minion 1 and minion 2] ∘ configure network simulator ∘ play scenarios
  20. 20. Testing with traffic control in Kubernetes [diagram: a tcd daemon runs on each Kubernetes minion next to the pods] gRPC or D-Bus methods: ∘ Install() ∘ ConfigureEgress() https://github.com/kinvolk/tcd
  21. 21. Testing Weave Scope [diagram: on each Kubernetes minion, tcd and a Scope Probe run alongside the pods, reporting to a central Scope App]
  22. 22. Demo Try it yourself: https://github.com/kinvolk/demo
  23. 23. Demo Try it yourself: https://github.com/kinvolk/demo
  24. 24. Testing framework for web apps Selenium
  25. 25. Demo
  26. 26. Testing more complex scenarios (my wishlist)
  27. 27. How to define classes of traffic [diagram: traffic on the eth0 interface is split into classes, e.g. http/80 to ip=10.0.4.*, http/80 to ip=10.0.5.*, and other traffic (dns/53, ...), each going through its own netem qdisc (e.g. latency=100ms, drop=2%)]
  28. 28. u32: filter on content [diagram: on the eth0 interface, a root qdisc (type = HTB) with a root class (type = HTB), leaf classes (type = HTB) and leaf qdiscs (type = netem, e.g. latency=10ms, drop=2%); filters (type = u32) match on packet content such as dport=80, dport=53, ip=10.*]
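A sketch of the tree on this slide as tc commands; the interface, rates, ports and addresses are illustrative:

    # root qdisc and root class (type = HTB); unmatched traffic goes to class 1:30
    tc qdisc add dev eth0 root handle 1: htb default 30
    tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
    # leaf classes (type = HTB)
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 100mbit
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 100mbit
    tc class add dev eth0 parent 1:1 classid 1:30 htb rate 100mbit
    # leaf qdiscs (type = netem)
    tc qdisc add dev eth0 parent 1:10 handle 10: netem delay 10ms
    tc qdisc add dev eth0 parent 1:20 handle 20: netem loss 2%
    # u32 filters classify packets based on content
    tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 80 0xffff flowid 1:10
    tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 10.0.4.0/24 flowid 1:20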
  29. 29. Using filters in Kubernetes [diagram: the testing framework configures tc filters based on IPs (type = u32) on each Kubernetes minion, e.g. drop 100% or latency 100ms between specific pods]
  30. 30. Testing the Raft Consensus Algorithm [diagram: an etcd cluster whose members are connected with different latencies: 1ms, 80ms, 5000ms] etcd parameters: ∘ heartbeat interval: 100ms ∘ election timeout: 1000ms
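With the same HTB + u32 + netem building blocks, delaying traffic to a single etcd peer well beyond the 1000ms election timeout forces a leader election; the peer IP and rates below are made up for illustration:

    tc qdisc add dev eth0 root handle 1: htb default 1
    tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit
    tc class add dev eth0 parent 1: classid 1:10 htb rate 1gbit
    # 5000ms latency towards one specific peer, far above the election timeout
    tc qdisc add dev eth0 parent 1:10 handle 10: netem delay 5000ms
    tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 10.0.5.3/32 flowid 1:10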
  31. 31. Filtering by app [diagram: on a Kubernetes minion, each pod contains several apps] ∘ 1 network namespace per pod ∘ rktnetes: apps started as systemd units ∘ How to filter by app? systemd.resource-control(5): NetClass=auto ∘ added in v227 (2015-10-07) ∘ removed in v229 :(
  32. 32. cgroup “net_cls”: filter by app ∘ Classifying based on cgroups with “net_cls” ∘ Previously exposed by systemd ∘ Then, tc filter “cgroup” ∘ But not available in cgroup unified hierarchy, to ensure delegation ∘ netfilter/iptables being replaced by nftables ∘ New xt_cgroup just added to match on cgroup full path, then could mark it and use net_cls
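A sketch of the net_cls approach on the classic (non-unified) cgroup hierarchy; the cgroup name, the $APP_PID variable and the class IDs are placeholders, and the HTB class 1:10 is assumed to exist as in the earlier example:

    # create a net_cls cgroup for the app
    mkdir -p /sys/fs/cgroup/net_cls/myapp
    # tag its packets with classid 1:10 (format 0xAAAABBBB = major:minor in hex)
    echo 0x00010010 > /sys/fs/cgroup/net_cls/myapp/net_cls.classid
    # move a process of the app into the cgroup
    echo $APP_PID > /sys/fs/cgroup/net_cls/myapp/cgroup.procs
    # the "cgroup" tc filter then steers tagged packets into HTB class 1:10
    tc filter add dev eth0 parent 1: protocol ip prio 10 handle 1: cgroup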
  33. 33. Filtering with cBPF/eBPF [diagram: on eth0, a BPF classifier selects between netem qdiscs; in userspace, C code such as "if (skb->protocol…) return TC_H_MAKE(TC_H_ROOT, mark);" is compiled with clang... -march=bpf into BPF bytecode (BPF_LD..., BPF_JMP..., BPF_RET...), uploaded into the kernel via bpf() or Netlink, and JIT-compiled to x86_64 code]
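A sketch of that workflow, assuming the classifier lives in a file called classifier.c (a hypothetical name) and the HTB root qdisc 1: from the earlier example is in place:

    # compile the C classifier to eBPF bytecode
    clang -O2 -target bpf -c classifier.c -o classifier.o
    # attach it as a tc filter; its return value picks the class,
    # and therefore the netem leaf qdisc, for each packet
    tc filter add dev eth0 parent 1: bpf obj classifier.o flowid 1:10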
  34. 34. eBPF maps [diagram: the BPF classifier on eth0 (JIT-compiled x86_64 code in the kernel) writes to an eBPF map that userspace can read] ∘ Build statistics ∘ Make them available to the testing framework
  35. 35. The End Try the demos yourself: https://github.com/kinvolk/demo Source: https://github.com/kinvolk/tcd The slides: https://goo.gl/Zh2CMQ
  36. 36. 2 things before questions
  37. 37. We’re Hiring in Berlin: https://kinvolk.io/careers/
  38. 38. coreos.com/fest - @coreosfest May 9 & 10, 2016 - Berlin, Germany
  39. 39. Questions?
