Moved to https://speakerdeck.com/ebiken/zebra-srv6-cli-on-linux-dataplane-enog-number-49
Introduction to SRv6, the Linux SRv6 implementation, and how to add an SRv6 CLI to the Zebra 2.0 open source network operation stack.
Presented at ENOG (Echigo NOG) #49.
SOSCON 2019.10.17
What are the methods for packet processing on Linux, and how fast is each of them? In this presentation, we will learn how to handle packets on Linux (user space, socket filter, netfilter, tc) and compare their performance, with an analysis of where each kind of packet processing happens in the network stack (its hook point). We will also discuss packet processing with XDP, an in-kernel fast path recently added to the Linux kernel. eXpress Data Path (XDP) is a high-performance, programmable network data path within the Linux kernel. XDP sits at the lowest software-accessible point of the network stack, the point at which the driver receives the packet. By using the eBPF infrastructure at this hook point, the network stack can be extended without modifying the kernel.
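As a quick illustration of where this hook lives, an XDP/eBPF object can be attached to and detached from a device with iproute2; the xdp_prog.c / xdp_prog.o file names below are hypothetical placeholders, not artifacts from the talk:
$ clang -O2 -target bpf -c xdp_prog.c -o xdp_prog.o
$ sudo ip link set dev eth0 xdp obj xdp_prog.o sec xdp
$ sudo ip link set dev eth0 xdp off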
Daniel T. Lee (Hoyeon Lee)
@danieltimlee
Daniel T. Lee currently works as a Software Engineer at Kosslab and contributes to the Linux kernel BPF project. He is interested in cloud, Linux networking, and tracing technologies, and likes to analyze the kernel's internals using BPF technology.
Ariel Waizel discusses the Data Plane Development Kit (DPDK), an API for developing fast packet processing code in user space.
* Who needs this library? Why bypass the kernel?
* How does it work?
* How good is it? What are the benchmarks?
* Pros and cons
Ariel has worked on kernel development at the IDF, Ben Gurion University, and several companies. He is interested in networking, security, machine learning, and basically everything except UI development. He is currently a Solution Architect at ConteXtream (an HPE company), which specializes in SDN solutions for the telecom industry.
10. DPDK configuration
$ sudo lshw -class network -businfo
Bus info Device Class Description
===================================================
pci@0000:00:03.0 eth0 network 82540EM Gigabit Ethernet Controller
pci@0000:00:08.0 eth1 network 82540EM Gigabit Ethernet Controller
pci@0000:00:09.0 eth2 network 82540EM Gigabit Ethernet Controller
Check the PCI IDs of the DPDK-capable NICs
$ sudo ifconfig eth1 down
$ sudo ifconfig eth2 down
$ sudo ip addr flush dev eth1
$ sudo ip addr flush dev eth2
Clear the kernel-side configuration of the interfaces
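Depending on the environment, the NICs may also have to be bound to a DPDK-compatible driver before VPP takes them over; a minimal sketch using DPDK's dpdk-devbind.py helper (the uio_pci_generic module choice is an assumption for illustration):
$ sudo modprobe uio_pci_generic
$ sudo dpdk-devbind.py --bind=uio_pci_generic 0000:00:08.0 0000:00:09.0
$ dpdk-devbind.py --status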
11. DPDK configuration (continued)
dpdk {
socket-mem 1024
dev 0000:00:08.0
dev 0000:00:09.0
}
Add the dpdk section to /etc/vpp/startup.conf and start VPP
socket-mem is the packet-buffer memory and is allocated from hugepages (reserving hugepages is sketched after the output below)
If socket-mem is not specified explicitly, 512M is reserved per CPU socket (NUMA node)
On the VPP side, the devices appear as GigabitEthernetX/Y/Z
vpp# sh int
Name Idx State MTU (L3/IP4/IP6/MPLS) Counter Count
GigabitEthernet0/8/0 1 down 9000/0/0/0
GigabitEthernet0/9/0 2 down 9000/0/0/0
local0 0 down 0/0/0/0
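Since socket-mem is carved out of hugepages, they must be reserved on the host before VPP starts; a minimal sketch (the page count is an illustrative assumption):
$ sudo sysctl -w vm.nr_hugepages=1024
$ grep Huge /proc/meminfo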
12. Vhost-user connection configuration
vpp# create vhost-user socket /var/run/vpp/sock1.sock server
Create a vhost-user interface on the VPP side
vpp# show int
Name Idx State Counter Count
VirtualEthernet0/0/0 3 down
...
qemu-system-x86_64
-enable-kvm -m 8192 -smp cores=4,threads=0,sockets=1 -cpu host
-drive file="ubuntu-16.04-server-cloudimg-amd64-disk1.img",if=virtio,aio=threads
-drive file="seed.img",if=virtio,aio=threads
-nographic -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on
-numa node,memdev=mem
-mem-prealloc
-chardev socket,id=char1,path=/var/run/vpp/sock1.sock
-netdev type=vhost-user,id=net1,chardev=char1,vhostforce
-device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off
Start a VM that connects to it as the corresponding vhost-user client
On the VPP side it appears as VirtualEthernet0/0/X
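Once the VM is running, the vhost-user interface still has to be enabled (and, if used at L3, addressed) on the VPP side; a minimal sketch, where the interface index and the address are assumptions:
vpp# set interface state VirtualEthernet0/0/0 up
vpp# set interface ip address VirtualEthernet0/0/0 192.168.10.1/24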
13. veth configuration
$ sudo ip link add vpp1 type veth peer name veth1
Create a veth pair named vpp1 and veth1 on the host
vpp# create host-interface name vpp1
host-vpp1
Attach VPP to the vpp1 end
Connect veth1 to a container or similar
vpp# show int
Name Idx State Counter Count
host-vpp1 4 down
…
On the VPP side it appears as host-vpp1
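To actually hand veth1 to a container, it can be moved into the container's network namespace and configured there; a minimal sketch assuming a namespace named ns1 and the 10.0.0.1/24 address used in the L2 example below:
$ sudo ip netns add ns1
$ sudo ip link set veth1 netns ns1
$ sudo ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth1
$ sudo ip netns exec ns1 ip link set veth1 up
$ sudo ip link set vpp1 up
vpp# set interface state host-vpp1 up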
14. tap configuration
vpp# tap connect taphost
tap-0
Create a tap interface named taphost
vpp# show int
Name Idx State Counter Count
tap-0 5 down
...
On the VPP side it appears as tap-X
On the host side it appears as taphost
$ ip link
...
7: taphost: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
UNKNOWN mode DEFAULT group default qlen 1000
link/ether 4e:00:94:42:8d:35 brd ff:ff:ff:ff:ff:ff
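After the tap is created, both ends still need to be brought up and, when used as an L3 link, addressed; a minimal sketch (the 192.168.20.0/24 addresses are assumptions for illustration):
$ sudo ip link set taphost up
$ sudo ip addr add 192.168.20.2/24 dev taphost
vpp# set interface state tap-0 up
vpp# set interface ip address tap-0 192.168.20.1/24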
15. L2 configuration example
vpp# create host-interface name vpp1
host-vpp1
vpp# set interface state host-vpp1 up
vpp# set interface l2 bridge host-vpp1 1
vpp# create host-interface name vpp2
host-vpp2
vpp# set interface state host-vpp2 up
vpp# set interface l2 bridge host-vpp2 1
Make VPP recognize the veths and attach them to Bridge Domain 1 as L2 ports
[Diagram: containers C1 (namespace ns1, 10.0.0.1) and C2 (namespace ns2, 10.0.0.2), each with eth0 connected through a veth pair to host-vpp1 / host-vpp2, which are attached as L2 ports to Bridge Domain 1; VPP runs in the default network namespace]
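Assuming C1 was prepared as in the veth sketch above, the second container can be set up the same way; once both host-vpp ports are up in Bridge Domain 1, the namespaces can reach each other (the vpp2/veth2 names and the ping check are illustrative assumptions):
$ sudo ip link add vpp2 type veth peer name veth2
$ sudo ip netns add ns2
$ sudo ip link set veth2 netns ns2
$ sudo ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth2
$ sudo ip netns exec ns2 ip link set veth2 up
$ sudo ip link set vpp2 up
$ sudo ip netns exec ns1 ping -c 1 10.0.0.2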
16. L3 configuration example
vpp# create loopback interface
loop0
vpp# set interface l2 bridge loop0 1 bvi
vpp# set interface state loop0 up
vpp# set interface ip address loop0 10.0.0.254/24
vpp# create host-interface name vpp3
host-vpp3
vpp# set interface state host-vpp3 up
vpp# set interface ip address host-vpp3 10.0.1.254/24
Configure a BVI interface on Bridge Domain 1
Make VPP recognize the veth and configure it as an L3 port
[Diagram: VPP router with Bridge Domain 1; C1 (ns1, 10.0.0.1) and C2 (ns2, 10.0.0.2) attach through veth pairs to host-vpp1 / host-vpp2 as L2 ports, with loop0 (BVI) at 10.0.0.254; C3 (ns3, 10.0.1.1) attaches through a veth pair to host-vpp3 as an L3 port at 10.0.1.254; each container's interface is eth0 and VPP runs in the default network namespace]
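To complete the routed part of the topology, C3 needs an address in 10.0.1.0/24 and each container needs a default route pointing at the VPP router; a minimal sketch following the diagram (the veth3 interface name is an assumption):
$ sudo ip netns exec ns3 ip addr add 10.0.1.1/24 dev veth3
$ sudo ip netns exec ns3 ip route add default via 10.0.1.254
$ sudo ip netns exec ns1 ip route add default via 10.0.0.254
$ sudo ip netns exec ns2 ip route add default via 10.0.0.254
$ sudo ip netns exec ns1 ping -c 1 10.0.1.1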