Configuring a compute node
for NFV
Network Innovation &
Virtualisation Global CTO Unit
_Antonio López Gracia
Antonio.lopezgracia@telefonica.com
9 Jun 2015
Configuring a compute node for NFV
 HW & SW environment
 BIOS setup
 Installation of OS and required SW packages
 IOMMU IOTLB cache support
 Enabling IOMMU
 Enabling hugepages
 CPU isolation
 Deactivating KSM
 Enabling SR-IOV
 Pre-provision of Linux bridges
 Additional configuration to allow access from openvim
 Compute node configuration in special cases
 Available automation scripts in OpenMANO github
HW & SW environment
 Hardware:
 Servers with Intel Xeon E5-based processors (Ivy Bridge or Haswell
architecture) and 2 sockets
- Recommended at least 4 cores per socket
- Recommended at least 64 GB RAM per host
- Lab: HP DL380 Gen9 and Dell R720/R730 servers …
 Data plane: 10Gbps NICs supported by DPDK, equally distributed between
NUMAs
- Lab: HP 560 SFP+, Intel Niantic and Fortville families NICs
 Control plane: 1Gbps NICs
 Software:
 64-bit OS with KVM, qemu and libvirt (e.g. RHEL 7, Ubuntu Server 14.04,
CentOS 7), with kernel support for huge-page IOTLB caching in the IOMMU
- Lab: RHEL 7.1
BIOS setup
 Enter the BIOS and ensure that all virtualization options are active:
- Enable all Intel VT-x (processor virtualization) and VT-d (PCI passthrough) options if present. Sometimes they
are grouped together simply as “Virtualization Options”
- Enable SR-IOV if present as an option
- Verify processors are configured for maximum performance (no power savings)
- Enable hyperthreading (recommended)
 If virtualization options are active, the following command should give a non-empty output:
$ egrep "(vmx|svm)" /proc/cpuinfo
 If hyperthreading is active, the following command should give a non-empty output:
$ egrep ht /proc/cpuinfo
Installation of OS and required SW packages
 Install RHEL7.1 with the following options:
%packages
@base
@core
@development
@network-file-system-client
@virtualization-hypervisor
@virtualization-platform
@virtualization-tools
 Install the following packages:
$ sudo yum install -y screen virt-manager ethtool gcc gcc-c++ xorg-x11-xauth xorg-x11-xinit xorg-x11-deprecated-libs libXtst guestfish hwloc libhugetlbfs-utils libguestfs-tools policycoreutils-python
IOMMU IOTLB cache support
 Use a kernel that supports huge-page IOTLB caching in the IOMMU.
 Vanilla kernels include this support from 3.14 onwards. If you are using an
older kernel, you should update it.
 Some distribution kernels have backported this feature.
 Find out whether the kernel of the distribution you are using has this support:
 The RHEL 7.1 kernel (3.10.0-229.el7.x86_64) meets the requirement
 RHEL 7.0 requires a specific upgrade to meet the requirement. You can
upgrade the kernel as follows:
$ wget http://people.redhat.com/~mtosatti/qemu-kvm-take5/kernel-3.10.0-123.el7gig2.x86_64.rpm
$ sudo rpm -Uvh kernel-3.10.0-123.el7gig2.x86_64.rpm --oldpackage
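A vanilla-kernel version check can be scripted. A minimal sketch (the `kver_ge` helper name is ours, not part of the OpenMANO scripts; remember that distribution kernels such as RHEL 7.1's 3.10 may carry the backport despite an older version number, so a version check alone is not conclusive):

```shell
# kver_ge RUNNING REQUIRED -- succeeds if kernel RUNNING >= REQUIRED.
# Strips the distro suffix (e.g. "-229.el7.x86_64") before comparing.
kver_ge() {
    printf '%s\n%s\n' "$2" "${1%%-*}" | sort -V | head -n1 | grep -qx -- "$2"
}

if kver_ge "$(uname -r)" 3.14; then
    echo "vanilla kernel is new enough for IOTLB cache support"
else
    echo "kernel older than 3.14; check whether your distro backported the feature"
fi
```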
Enabling IOMMU
 Enable IOMMU by adding the following to the grub command line:
intel_iommu=on
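After rebooting, you can confirm the flag actually took effect. An illustrative check (the `has_cmdline_flag` helper is ours; `dmesg | grep -i -e DMAR -e IOMMU` is another way to see whether the IOMMU initialised):

```shell
# has_cmdline_flag FLAG [CMDLINE] -- succeeds if FLAG appears as a whole
# word on the kernel command line (defaults to the running kernel's).
has_cmdline_flag() {
    local cmdline=${2:-$(cat /proc/cmdline)}
    printf '%s\n' $cmdline | grep -qx -- "$1"
}

has_cmdline_flag intel_iommu=on \
    && echo "IOMMU enabled on the kernel command line" \
    || echo "intel_iommu=on missing from /proc/cmdline"
```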
Enabling hugepages (I)
 Enable 1G hugepages by adding the following to the grub command line:
default_hugepagesz=1G hugepagesz=1G
 The number of huge pages can also be set at grub:
hugepages=24 (reserves 24GB)
 Or with a oneshot service that runs on boot (for early memory allocation):
$ sudo vi /usr/lib/systemd/system/hugetlb-gigantic-pages.service
[Unit]
Description=HugeTLB Gigantic Pages Reservation
DefaultDependencies=no
Before=dev-hugepages.mount
ConditionPathExists=/sys/devices/system/node
ConditionKernelCommandLine=hugepagesz=1G
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/lib/systemd/hugetlb-reserve-pages
[Install]
WantedBy=sysinit.target
Enabling hugepages (II)
Then set the number of huge pages:
$ sudo vi /usr/lib/systemd/hugetlb-reserve-pages
#!/bin/bash
nodes_path=/sys/devices/system/node/
if [ ! -d $nodes_path ]; then
    echo "ERROR: $nodes_path does not exist"
    exit 1
fi
reserve_pages()
{
    echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
}
# This example reserves 12 1G huge pages on each NUMA node
reserve_pages 12 node0
reserve_pages 12 node1
And enable the service:
$ sudo chmod +x /usr/lib/systemd/hugetlb-reserve-pages
$ sudo systemctl enable hugetlb-gigantic-pages
 Recommended best practice: reserve 4GB per NUMA node to run the OS and use all
other system memory for 1GB huge pages
 Mount huge pages in /etc/fstab (note that a plain sudo echo … >> would fail, since the redirection runs in the calling shell):
$ echo "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" | sudo tee -a /etc/fstab
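The "4GB per NUMA node for the OS" best practice translates into simple arithmetic. A hypothetical sizing helper (names and the even-split assumption are ours):

```shell
# hugepages_per_node TOTAL_GB NODES RESERVE_GB
# Number of 1G hugepages to reserve on each NUMA node, leaving
# RESERVE_GB per node for the host OS (assumes memory is evenly
# distributed across the NUMA nodes).
hugepages_per_node() {
    echo $(( $1 / $2 - $3 ))
}

hugepages_per_node 64 2 4    # 64GB host, 2 nodes -> 28 pages per node
```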
CPU isolation
 Isolate CPUs so that the host OS is restricted to run only on some cores,
leaving the others exclusively for VNFs
 Recommended best practice: run the OS on the first core of each NUMA node,
by adding the isolcpus field to the grub command line.
isolcpus=1-9,11-19,21-29,31-39
The exact CPU numbers depend on the CPU numbers presented by the host OS. In the previous example, CPUs
0, 10, 20 and 30 are excluded because CPU 0 and its sibling 20 correspond to the first core of NUMA node 0,
and CPU 10 and its sibling 30 correspond to the first core of NUMA node 1.
 Running this awk script suggests the value to use in your compute node:
$ gawk 'BEGIN{pre=-2;} ($1=="processor"){pro=$3;} ($1=="core" && $4!=0){ if (pre+1==pro){endrange="-" pro}
else{cpus=cpus endrange sep pro; sep=","; endrange="";}; pre=pro;} END{printf("isolcpus=%s\n",cpus
endrange);}' /proc/cpuinfo
isolcpus=2-35,38-71
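The same logic restructured for readability — a sketch (helper name is ours) that reads `/proc/cpuinfo`-formatted text on stdin and isolates every CPU whose `core id` is not 0:

```shell
# suggest_isolcpus: print an isolcpus= value listing every CPU except
# those on core id 0 of each socket, collapsing runs into ranges.
suggest_isolcpus() {
    awk '$1 == "processor" { cpu = $3 }
         $1 == "core" && $2 == "id" && $4 != 0 { print cpu }' |
    sort -n |
    awk 'NR == 1        { start = prev = $1; next }
         $1 == prev + 1 { prev = $1; next }
                        { printf "%s%s", sep, (start == prev ? start : start "-" prev)
                          sep = ","; start = prev = $1 }
         END            { printf "%s%s\n", sep, (start == prev ? start : start "-" prev) }' |
    sed 's/^/isolcpus=/'
}

suggest_isolcpus < /proc/cpuinfo
```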
Dedicated resource allocation
[Diagram: a two-socket server; each CPU has its own cores, local memory and I/O devices, with the sockets linked by QPI. The host OS and hypervisor run on a few reserved cores, while VMs 1-5 get dedicated cores, memory and I/O devices on their NUMA node.]
• CPUs: not oversubscribed, isolated from host OS
• Memory: huge pages
• I/O devices: passthrough, SR-IOV
Activating grub changes for iommu, huge pages and isolcpus
 In RHEL 7/CentOS 7:
$ sudo vi /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root rhgb quiet intel_iommu=on
default_hugepagesz=1G hugepagesz=1G isolcpus=2-35,38-71"
GRUB_DISABLE_RECOVERY="true"
Update grub - BIOS:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Update grub - EFI:
$ sudo grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
- Don’t forget to reboot the system.
- After boot, check that all options were applied:
$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/rhel_nfv105-root ro
rd.lvm.lv=rhel_nfv105/swap crashkernel=auto rd.lvm.lv=rhel_nfv105/root rhgb quiet intel_iommu=on
default_hugepagesz=1G hugepagesz=1G isolcpus=2-35,38-71
Deactivating KSM (Kernel Same-page Merging)
 KSM enables the kernel to examine two or more already running programs
and compare their memory. If any memory regions or pages are identical, KSM
reduces multiple identical memory pages to a single page. This page is then
marked copy-on-write. If the contents of the page are modified by a guest virtual
machine, a new page is created for that guest virtual machine.
 KSM has a performance overhead which may be too large for certain
environments or host physical machine systems.
 KSM can be deactivated by stopping the ksmtuned and ksm services.
Stopping the services deactivates KSM, but this does not persist across restarts.
# service ksmtuned stop
Stopping ksmtuned: [ OK ]
# service ksm stop
Stopping ksm: [ OK ]
 Persistently deactivate KSM with the chkconfig command. To turn off the
services, run the following commands:
# chkconfig ksm off
# chkconfig ksmtuned off
Enabling SR-IOV
 SR-IOV enabling depends on the NIC used
 For Intel Niantic and Fortville NICs, the number of VFs enabled is set by
writing to sriov_numvfs:
echo X > /sys/bus/pci/devices/<PF pci address>/sriov_numvfs
 Recommended best practice: set the number of VFs per PF by using udev rules:
$ cat /etc/udev/rules.d/pci_config.rules
ACTION=="add", KERNEL=="0000:05:00.0", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c 'echo 8 >
/sys/bus/pci/devices/0000:05:00.0/sriov_numvfs'"
ACTION=="add", KERNEL=="0000:05:00.1", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c 'echo 8 >
/sys/bus/pci/devices/0000:05:00.1/sriov_numvfs'"
ACTION=="add", KERNEL=="0000:0b:00.0", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c 'echo 8 >
/sys/bus/pci/devices/0000:0b:00.0/sriov_numvfs'"
…
 Blacklist the ixgbevf module by adding the following to the grub command line.
This must be done after adding this host to openvim, not before. The driver is
blacklisted because it prevents the VLAN tags of broadcast packets from being
properly removed when received on an SR-IOV port:
modprobe.blacklist=ixgbevf (on grub boot line)
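Writing those nearly identical udev rules by hand is error-prone; a hypothetical generator (the `sriov_rule` helper is ours) that emits one rule per physical function:

```shell
# sriov_rule PF_PCI_ADDR NUM_VFS -- print a udev rule that enables
# NUM_VFS virtual functions on the given physical function at boot.
sriov_rule() {
    printf 'ACTION=="add", KERNEL=="%s", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c '\''echo %s > /sys/bus/pci/devices/%s/sriov_numvfs'\''"\n' \
        "$1" "$2" "$1"
}

# e.g. append the output to /etc/udev/rules.d/pci_config.rules
sriov_rule 0000:05:00.0 8
sriov_rule 0000:05:00.1 8
```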
Pre-provision of Linux bridges (I)
 OpenMANO relies on Linux bridges to interconnect VMs when there are no
high performance requirements for I/O. This is the case of control plane VNF
interfaces that are expected to carry a small amount of traffic.
 A set of Linux bridges must be created on every host. Every Linux bridge must
be attached to a physical host interface with a specific VLAN. In addition, an
external management switch must be used to interconnect those physical host
interfaces. Bear in mind that the host interfaces used for data plane VM
interfaces will be different from the host interfaces used for control plane VM
interfaces.
 Currently OpenMANO configuration uses 20 bridges named virbrMan1 to
virbrMan20, using vlan tags 2001 to 2020 respectively to interconnect VNF
elements
 Another bridge called virbrInf, with vlan tag 1001, is used to interconnect the
physical infrastructure (hosts, switches and management VMs like openMANO
itself, if run virtualized)
Pre-provision of Linux bridges (II)
 To create a bridge in RHEL 7.1 two files must be defined in
/etc/sysconfig/network-scripts:
$ cat /etc/sysconfig/network-scripts/ifcfg-virbrMan1
DEVICE=virbrMan1
TYPE=Bridge
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no
USERCTL=no
$ cat /etc/sysconfig/network-scripts/ifcfg-em2.2001
DEVICE=em2.2001
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no
VLAN=yes
BOOTPROTO=none
BRIDGE=virbrMan1
 The host interface (em2 in the example), the name of the bridge (virbrMan1)
and the VLAN tag (2001) can be different. In case you use a different name for
the bridge, you should take it into account in 'openvimd.cfg'
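Creating all 20 bridge definitions by hand is tedious; a sketch of a generator following the deck's naming convention (virbrManN on VLAN 2000+N) — the helper name and default output directory are ours:

```shell
# gen_bridge_cfg N IFACE [DIR] -- write the ifcfg file pair for bridge
# virbrMan<N> attached to IFACE.<2000+N> into DIR (default: current
# dir; use /etc/sysconfig/network-scripts on the real host).
gen_bridge_cfg() {
    local n=$1 iface=$2 dir=${3:-.}
    local br="virbrMan$n" vlan=$((2000 + n))
    printf 'DEVICE=%s\nTYPE=Bridge\nONBOOT=yes\nDELAY=0\nNM_CONTROLLED=no\nUSERCTL=no\n' \
        "$br" > "$dir/ifcfg-$br"
    printf 'DEVICE=%s.%s\nONBOOT=yes\nNM_CONTROLLED=no\nUSERCTL=no\nVLAN=yes\nBOOTPROTO=none\nBRIDGE=%s\n' \
        "$iface" "$vlan" "$br" > "$dir/ifcfg-$iface.$vlan"
}

# generate all 20 management bridges on interface em2
for i in $(seq 1 20); do gen_bridge_cfg "$i" em2; done
```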
Additional configuration to allow access from openvim (I)
 Uncomment the following lines of /etc/libvirt/libvirtd.conf to allow external
connection to libvirtd:
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
unix_sock_dir = "/var/run/libvirt"
auth_unix_rw = "none"
 Create and configure a user for openvim access. A new user must be created
to access the compute node from openvim. The user must belong to group
libvirt, and other users must be able to access its home:
# create a new user
$ sudo useradd -m -G libvirt <user>
# or modify an existing user
$ sudo usermod -a -G libvirt <user>
# Allow other users to access /home/<user>
$ sudo chmod +rx /home/<user>
Additional configuration to allow access from openvim (II)
 Copy the ssh key of openvim into compute node. From the machine where
openvim is running (not from the compute node), run:
openvim $ ssh-keygen # generates ssh keys, if not done before
openvim $ ssh-copy-id <user>@<compute host>
 After that, ensure that you can access directly without password prompt from
openvim to compute host:
openvim $ ssh <user>@<compute host>
 Create a local folder for image storage and grant access from openvim:
Images will be stored in a remote shared location accessible by all compute nodes,
for example an NFS file system. The VNF descriptors will contain a path to images
stored in this folder. Openvim assumes that images are stored here and copies them
to a local file system path at virtual machine creation. The remote shared
configuration is outside the scope of the compute node configuration, as it is required
only by the VNF descriptors.
Additional configuration to allow access from openvim (III)
 A local folder must be created (in the default configuration, we assume
/opt/VNF/images) where the deployed VMs will be copied, and access must
be granted to the libvirt group on an SELinux system. In the automation script we
assume that "/home" contains more disk space than "/", so a link to a local
home folder is created:
$ mkdir -p /home/<user>/VNF_images
$ rm -f /opt/VNF/images
$ mkdir -p /opt/VNF/
$ ln -s /home/<user>/VNF_images /opt/VNF/images
$ chown -R <user> /opt/VNF
# SElinux management
$ semanage fcontext -a -t virt_image_t "/home/<user>/VNF_images(/.*)?"
$ cat /etc/selinux/targeted/contexts/files/file_contexts.local |grep virt_image
$ restorecon -R -v /home/<user>/VNF_images
Compute node configuration in special cases (I)
 Datacenter with different types of compute nodes:
In a datacenter with different types of compute nodes, it might happen that compute
nodes use different interface naming schemes. In that case, you can take the most
used interface naming scheme as the default one, and make an additional
configuration in the compute nodes that do not follow the default naming scheme.
In order to do that, you should create a hostinfo.yaml file inside the local image
folder (typically /opt/VNF/images). It contains entries with:
openvim-expected-name: local-iface-name
For example, if openvim contains a network using macvtap to the physical interface
em1 (macvtap:em1) but in this compute node the interface is called eno1, create a
local-image-folder/hostinfo.yaml file with this content:
em1: eno1
Compute node configuration in special cases (II)
 Compute nodes in a development workstation
If a normal workstation is used to develop VNFs (as in this training), some of the
compute node requirements should not be configured, since VNF performance is not a
target.
In order to get a working development environment:
• Do not configure huge pages, as it would subtract memory from the development
environment
• Do not configure isolcpus, as it would subtract CPUs from the development
environment
• Do not configure SR-IOV interfaces, as 10Gbps data plane interfaces won’t
normally be available
Available automation scripts in OpenMANO GitHub
 Automate all operations from previous slides with Telefonica NFV Reference
Lab recommended best practices
 https://github.com/nfvlabs/openmano/blob/master/scripts/configure-compute-node-RHEL7.1.sh
 Personalize RHEL7.1 on compute nodes
 Prepared to work with the following network card drivers:
- tg3 driver for management interfaces
- ixgbe and i40e driver for data plane interfaces
 https://github.com/nfvlabs/openmano/blob/master/scripts/configure-compute-node-develop.sh
 For development workstations, without isolcpus, huge pages or data plane
interfaces
Presentación Alma technology en Networking Day movilforum
videos
 
Presentación acuerdo de colaboración Fieldeas y EasyOnPad en Networking Day m...
Presentación acuerdo de colaboración Fieldeas y EasyOnPad en Networking Day m...Presentación acuerdo de colaboración Fieldeas y EasyOnPad en Networking Day m...
Presentación acuerdo de colaboración Fieldeas y EasyOnPad en Networking Day m...
videos
 
Presentación Icar Vision en Networking Day movilforum
Presentación Icar Vision en Networking Day movilforumPresentación Icar Vision en Networking Day movilforum
Presentación Icar Vision en Networking Day movilforum
videos
 
Presentación Billage en Networking Day movilforum
Presentación Billage en Networking Day movilforumPresentación Billage en Networking Day movilforum
Presentación Billage en Networking Day movilforum
videos
 
Presentación Face On en Networking Day movilforum
Presentación Face On en Networking Day movilforumPresentación Face On en Networking Day movilforum
Presentación Face On en Networking Day movilforum
videos
 
Hp nfv movilforum as innovation engine for cs ps
Hp nfv movilforum as innovation engine for cs psHp nfv movilforum as innovation engine for cs ps
Hp nfv movilforum as innovation engine for cs ps
videos
 

More from videos (14)

Logros y retos evento movilforum 02/2016
Logros y retos evento movilforum 02/2016Logros y retos evento movilforum 02/2016
Logros y retos evento movilforum 02/2016
 
Presentación Atlantida en Networking Day moviforum
Presentación Atlantida en Networking Day moviforum Presentación Atlantida en Networking Day moviforum
Presentación Atlantida en Networking Day moviforum
 
Presentación Quetal en Networking Day moviforum
Presentación Quetal  en Networking Day moviforum Presentación Quetal  en Networking Day moviforum
Presentación Quetal en Networking Day moviforum
 
Presentación GMTECH en Networking Day moviforum
Presentación GMTECH en Networking Day moviforum Presentación GMTECH en Networking Day moviforum
Presentación GMTECH en Networking Day moviforum
 
Presentación movilok en Networking Day moviforum
Presentación movilok en Networking Day moviforum Presentación movilok en Networking Day moviforum
Presentación movilok en Networking Day moviforum
 
Presentación 3G mobile en Networking Day moviforum
Presentación 3G mobile en Networking Day moviforumPresentación 3G mobile en Networking Day moviforum
Presentación 3G mobile en Networking Day moviforum
 
Presentación microestrategy en Networking Day moviforum
Presentación microestrategy en Networking Day moviforumPresentación microestrategy en Networking Day moviforum
Presentación microestrategy en Networking Day moviforum
 
Presentación Telnet en Networking Day moviforum
Presentación Telnet en Networking Day moviforumPresentación Telnet en Networking Day moviforum
Presentación Telnet en Networking Day moviforum
 
Presentación Alma technology en Networking Day movilforum
Presentación Alma technology en Networking Day movilforumPresentación Alma technology en Networking Day movilforum
Presentación Alma technology en Networking Day movilforum
 
Presentación acuerdo de colaboración Fieldeas y EasyOnPad en Networking Day m...
Presentación acuerdo de colaboración Fieldeas y EasyOnPad en Networking Day m...Presentación acuerdo de colaboración Fieldeas y EasyOnPad en Networking Day m...
Presentación acuerdo de colaboración Fieldeas y EasyOnPad en Networking Day m...
 
Presentación Icar Vision en Networking Day movilforum
Presentación Icar Vision en Networking Day movilforumPresentación Icar Vision en Networking Day movilforum
Presentación Icar Vision en Networking Day movilforum
 
Presentación Billage en Networking Day movilforum
Presentación Billage en Networking Day movilforumPresentación Billage en Networking Day movilforum
Presentación Billage en Networking Day movilforum
 
Presentación Face On en Networking Day movilforum
Presentación Face On en Networking Day movilforumPresentación Face On en Networking Day movilforum
Presentación Face On en Networking Day movilforum
 
Hp nfv movilforum as innovation engine for cs ps
Hp nfv movilforum as innovation engine for cs psHp nfv movilforum as innovation engine for cs ps
Hp nfv movilforum as innovation engine for cs ps
 

Recently uploaded

Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Product School
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
Elena Simperl
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Product School
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Product School
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
KatiaHIMEUR1
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Tobias Schneck
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Product School
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
RTTS
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
UiPathCommunity
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Jeffrey Haguewood
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
Elena Simperl
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
DianaGray10
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
Sri Ambati
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance
 

Recently uploaded (20)

Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 

Configuring a compute node for NFV

  • 1. Configuring a compute node for NFV Network Innovation & Virtualisation Global CTO Unit _Antonio López Gracia Antonio.lopezgracia@telefonica.com 9 Jun 2015
  • 2. Configuring a compute node for NFV
 HW & SW environment
 BIOS setup
 Installation of OS and required SW packages
 IOMMU IOTLB cache support
 Enabling IOMMU
 Enabling hugepages
 CPU isolation
 Deactivating KSM
 Enabling SR-IOV
 Pre-provision of Linux bridges
 Additional configuration to allow access from openvim
 Compute node configuration in special cases
 Available automation scripts in OpenMANO github
  • 3. HW & SW environment
 Hardware:
 Servers with Xeon E5-based Intel processors with Ivy Bridge or Haswell architecture and 2 sockets
- Recommended at least 4 cores per socket
- Recommended at least 64 GB RAM per host
- Lab: HP DL380 Gen9 and Dell R720/R730 servers …
 Data plane: 10Gbps NICs supported by DPDK, equally distributed between NUMA nodes
- Lab: HP 560 SFP+, Intel Niantic and Fortville family NICs
 Control plane: 1Gbps NICs
 Software:
 64-bit OS with KVM, qemu and libvirt (e.g. RHEL7, Ubuntu Server 14.04, CentOS 7) with kernel support of huge page IOTLB cache in IOMMU
- Lab: RHEL 7.1
  • 4. BIOS setup
 Enter the BIOS and ensure that all virtualization options are active:
- Enable all Intel VT-x (processor virtualization) and VT-d (PCI passthrough) options if present. Sometimes they are grouped together just as "Virtualization Options"
- Enable SR-IOV if present as an option
- Verify processors are configured for maximum performance (no power savings)
- Enable hyperthreading (recommended)
 If virtualization options are active, the following command should give a non-empty output:
$ egrep "(vmx|svm)" /proc/cpuinfo
 If hyperthreading is active, the following command should give a non-empty output:
$ egrep ht /proc/cpuinfo
  • 5. Installation of OS and required SW packages
 Install RHEL7.1 with the following options:
%packages
@base
@core
@development
@network-file-system-client
@virtualization-hypervisor
@virtualization-platform
@virtualization-tools
 Install the following packages:
$ sudo yum install -y screen virt-manager ethtool gcc gcc-c++ xorg-x11-xauth xorg-x11-xinit xorg-x11-deprecated-libs libXtst guestfish hwloc libhugetlbfs-utils libguestfs-tools policycoreutils-python
  • 6. IOMMU IOTLB cache support
 Use a kernel with support of huge page IOTLB cache in IOMMU.
 From vanilla kernel 3.14 this support is included. In case you are using an older kernel, you should update your kernel.
 Some distribution kernels have backported this support.
 Find out if the kernel of the distribution you are using has this support:
 RHEL 7.1 kernel (3.10.0-229.el7.x86_64) meets the requirement
 RHEL 7.0 requires a specific upgrade to support the requirement. You can upgrade the kernel as follows:
$ wget http://people.redhat.com/~mtosatti/qemu-kvm-take5/kernel-3.10.0-123.el7gig2.x86_64.rpm
$ sudo rpm -Uvh kernel-3.10.0-123.el7gig2.x86_64.rpm --oldpackage
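The vanilla-kernel version requirement can be sanity-checked with a small helper. This is a hypothetical sketch (the function name `kernel_at_least` is ours, and it assumes an `x.y.z` release string); bear in mind that a distribution kernel older than 3.14, such as RHEL 7.1's 3.10.0-229, may still carry the backport, so this check is only a hint, not a definitive test:

```shell
# Return success if a kernel release string is at least major.minor.
# Assumes an x.y.z-style release string (as printed by "uname -r").
kernel_at_least() {
  want_major=$1; want_minor=$2; release=$3
  major=${release%%.*}          # text before the first dot
  rest=${release#*.}
  minor=${rest%%.*}             # text between the first and second dots
  [ "$major" -gt "$want_major" ] ||
  { [ "$major" -eq "$want_major" ] && [ "$minor" -ge "$want_minor" ]; }
}

# Example (on a live host):
# kernel_at_least 3 14 "$(uname -r)" && echo "vanilla IOTLB support expected"
```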
  • 7. Enabling IOMMU
 Enable IOMMU by adding the following to the grub command line:
intel_iommu=on
  • 8. Enabling hugepages (I)
 Enable 1G hugepages by adding the following to the grub command line:
default_hugepagesz=1G hugepagesz=1G
 The number of huge pages can also be set at grub: hugepages=24 (reserves 24GB)
 Or with a oneshot service that runs on boot (for early memory allocation):
$ sudo vi /usr/lib/systemd/system/hugetlb-gigantic-pages.service
[Unit]
Description=HugeTLB Gigantic Pages Reservation
DefaultDependencies=no
Before=dev-hugepages.mount
ConditionPathExists=/sys/devices/system/node
ConditionKernelCommandLine=hugepagesz=1G
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/lib/systemd/hugetlb-reserve-pages
[Install]
WantedBy=sysinit.target
  • 9. Enabling hugepages (II)
Then set the number of huge pages:
$ sudo vi /usr/lib/systemd/hugetlb-reserve-pages
#!/bin/bash
nodes_path=/sys/devices/system/node/
if [ ! -d $nodes_path ]; then
  echo "ERROR: $nodes_path does not exist"
  exit 1
fi
reserve_pages() {
  echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
}
# This example reserves 12 1GB huge pages on each NUMA node
reserve_pages 12 node0
reserve_pages 12 node1
And enable the service:
$ sudo chmod +x /usr/lib/systemd/hugetlb-reserve-pages
$ sudo systemctl enable hugetlb-gigantic-pages
 Recommended best practice: reserve 4GB per NUMA node to run the OS and use all other system memory for 1GB huge pages
 Mount huge pages in /etc/fstab (note that "sudo echo … >> /etc/fstab" would fail because the redirection runs without root privileges; use tee instead):
$ echo "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" | sudo tee -a /etc/fstab
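The "4GB per NUMA node for the OS, the rest for 1GB pages" best practice can be turned into a tiny sizing helper. This is a hypothetical sketch (the function name `pages_for_node` is ours); plug its output into the `reserve_pages` calls above:

```shell
# Given the RAM of one NUMA node in GB, leave reserve_gb (default 4) for
# the host OS and use the remainder for 1GB huge pages.
pages_for_node() {
  mem_gb=$1
  reserve_gb=${2:-4}
  pages=$((mem_gb - reserve_gb))
  if [ "$pages" -lt 0 ]; then pages=0; fi
  echo "$pages"
}

# Example: a 32GB NUMA node leaves 28 pages of 1GB for VNFs
# pages_for_node 32   -> 28
```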
  • 10. CPU isolation
 Isolate CPUs so that the host OS is restricted to run only on some cores, leaving the others to run VNFs exclusively
 Recommended best practice: run the OS on the first core of each NUMA node, by adding the isolcpus option to the grub command line:
isolcpus=1-9,11-19,21-29,31-39
The exact CPU numbers depend on the CPU numbering presented by the host OS. In the previous example, CPUs 0, 10, 20 and 30 are excluded because CPU 0 and its sibling 20 correspond to the first core of NUMA node 0, and CPU 10 and its sibling 30 correspond to the first core of NUMA node 1.
 Running this awk script suggests the value to use in your compute node:
$ gawk 'BEGIN{pre=-2;} ($1=="processor"){pro=$3;} ($1=="core" && $4!=0){ if (pre+1==pro){endrange="-" pro} else{cpus=cpus endrange sep pro; sep=","; endrange="";}; pre=pro;} END{printf("isolcpus=%s\n",cpus endrange);}' /proc/cpuinfo
isolcpus=2-35,38-71
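Before committing an isolcpus value to grub, it is easy to mistype a range, so counting how many CPUs it covers is a useful cross-check (e.g. on a 72-CPU host you expect 68 isolated when one core per NUMA node is left to the OS). This helper is a hypothetical sketch (the function name `count_isolated` is ours):

```shell
# Count how many logical CPUs an isolcpus= range list covers.
count_isolated() {
  list=$1; total=0
  oldIFS=$IFS
  IFS=','                       # split the list on commas
  for part in $list; do
    case $part in
      *-*) a=${part%-*}; b=${part#*-}; total=$((total + b - a + 1)) ;;
      *)   total=$((total + 1)) ;;
    esac
  done
  IFS=$oldIFS
  echo "$total"
}

# Example: the slide's value isolates 68 logical CPUs
# count_isolated "2-35,38-71"   -> 68
```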
  • 11. Dedicated resource allocation (diagram: two CPU sockets connected by QPI, each with its own memory and I/O devices; host OS + hypervisor confined to a few cores, the remaining cores dedicated to VMs)
• CPUs: not oversubscribed, isolated from host OS
• Memory: huge pages
• I/O devices: passthrough, SR-IOV
  • 12. Activating grub changes for iommu, huge pages and isolcpus
 In RHEL 7/CentOS:
$ sudo vi /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root rhgb quiet intel_iommu=on default_hugepagesz=1G hugepagesz=1G isolcpus=2-35,38-71"
GRUB_DISABLE_RECOVERY="true"
Update grub - BIOS:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Update grub - EFI:
$ sudo grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
- Don't forget to reboot the system.
- After boot, check that all options were applied:
$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/rhel_nfv105-root ro rd.lvm.lv=rhel_nfv105/swap crashkernel=auto rd.lvm.lv=rhel_nfv105/root rhgb quiet intel_iommu=on default_hugepagesz=1G hugepagesz=1G isolcpus=2-35,38-71
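The post-reboot check can be scripted instead of eyeballing /proc/cmdline. This is a hypothetical sketch (the function name `check_cmdline` is ours); it takes the cmdline file as a parameter so it can be exercised offline, while on a real host you would pass /proc/cmdline:

```shell
# Verify that every expected option appears as a whole word in the kernel
# command line file; report the first one that is missing.
check_cmdline() {
  file=$1; shift
  for opt in "$@"; do
    if ! grep -qw -- "$opt" "$file"; then
      echo "missing: $opt"
      return 1
    fi
  done
  echo "all options present"
}

# Example on a live host:
# check_cmdline /proc/cmdline intel_iommu=on default_hugepagesz=1G hugepagesz=1G
```

Note that `grep -w` matters here: without it, `hugepagesz=1G` would also match inside `default_hugepagesz=1G`, masking a missing option.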
  • 13. Deactivating KSM (Kernel Same-page Merging)
 KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple identical memory pages to a single page. This page is then marked copy-on-write. If the contents of the page are modified by a guest virtual machine, a new page is created for that guest virtual machine.
 KSM has a performance overhead which may be too large for certain environments or host physical machine systems.
 KSM can be deactivated by stopping the ksmtuned and ksm services. Stopping the services deactivates KSM but does not persist after restarting:
# service ksmtuned stop
Stopping ksmtuned: [ OK ]
# service ksm stop
Stopping ksm: [ OK ]
 Persistently deactivate KSM with the chkconfig command. To turn off the services, run the following commands:
# chkconfig ksm off
# chkconfig ksmtuned off
  • 14. Enabling SR-IOV
 SR-IOV enabling depends on the NIC used
 For Intel Niantic and Fortville NICs, the number of VFs is set by writing it to the PF's sysfs entry:
echo X > /sys/bus/pci/devices/<PF pci address>/sriov_numvfs
 Recommended best practice: set the number of VFs per PF by using udev rules:
$ cat /etc/udev/rules.d/pci_config.rules
ACTION=="add", KERNEL=="0000:05:00.0", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c 'echo 8 > /sys/bus/pci/devices/0000:05:00.0/sriov_numvfs'"
ACTION=="add", KERNEL=="0000:05:00.1", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c 'echo 8 > /sys/bus/pci/devices/0000:05:00.1/sriov_numvfs'"
ACTION=="add", KERNEL=="0000:0b:00.0", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c 'echo 8 > /sys/bus/pci/devices/0000:0b:00.0/sriov_numvfs'"
…
 Blacklist the ixgbevf module by adding the following to the grub command line. This must be done after adding this host to openvim, not before. The reason for blacklisting this driver is that it causes the VLAN tags of broadcast packets not to be removed properly when received on an SR-IOV port:
modprobe.blacklist=ixgbevf (on grub boot line)
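Writing one udev rule per PF by hand is repetitive and easy to get wrong, so a small generator helps. This is a hypothetical sketch (the function name `gen_sriov_rules` is ours); it only prints rules in the format shown above, and on a real host you would redirect its output into /etc/udev/rules.d/pci_config.rules:

```shell
# Print one udev rule per PF PCI address, each setting the given VF count.
gen_sriov_rules() {
  vfs=$1; shift
  for pf in "$@"; do
    printf 'ACTION=="add", KERNEL=="%s", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c '\''echo %s > /sys/bus/pci/devices/%s/sriov_numvfs'\''"\n' \
      "$pf" "$vfs" "$pf"
  done
}

# Example:
# gen_sriov_rules 8 0000:05:00.0 0000:05:00.1 0000:0b:00.0 > pci_config.rules
```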
  • 15. Pre-provision of Linux bridges (I)
 OpenMANO relies on Linux bridges to interconnect VMs when there are no high performance requirements for I/O. This is the case of control plane VNF interfaces, which are expected to carry a small amount of traffic.
 A set of Linux bridges must be created on every host. Every Linux bridge must be attached to a physical host interface with a specific VLAN. In addition, an external management switch must be used to interconnect those physical host interfaces. Bear in mind that the host interfaces used for data plane VM interfaces will be different from the host interfaces used for control plane VM interfaces.
 Currently the OpenMANO configuration uses 20 bridges named virbrMan1 to virbrMan20, using VLAN tags 2001 to 2020 respectively, to interconnect VNF elements
 Another bridge called virbrInf with VLAN tag 1001 is used to interconnect physical infrastructure (hosts, switches and management VMs like openMANO itself, in case of running virtualized)
  • 16. Pre-provision of Linux bridges (II)
 To create a bridge in RHEL 7.1, two files must be defined in /etc/sysconfig/network-scripts:
$ cat /etc/sysconfig/network-scripts/ifcfg-virbrMan1
DEVICE=virbrMan1
TYPE=Bridge
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no
USERCTL=no
$ cat /etc/sysconfig/network-scripts/ifcfg-em2.2001
DEVICE=em2.2001
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no
VLAN=yes
BOOTPROTO=none
BRIDGE=virbrMan1
 The host interface (em2 in the example), the name of the bridge (virbrMan1) and the VLAN tag (2001) can be different. In case you use a different name for the bridge, you should take it into account in 'openvimd.cfg'
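Creating 40 ifcfg files by hand (one bridge plus one VLAN sub-interface per virbrMan bridge) is tedious, so the pair of files above can be generated in a loop. This is a hypothetical sketch (the function name `gen_bridge_cfgs` is ours); it writes into a target directory so it can be tried safely, and on a real RHEL host you would point it at /etc/sysconfig/network-scripts:

```shell
# Generate ifcfg files for virbrMan1..virbrManN bridges attached to
# VLANs 2001..200N of the given host interface.
gen_bridge_cfgs() {
  dir=$1 iface=$2 count=$3
  i=1
  while [ "$i" -le "$count" ]; do
    vlan=$((2000 + i))
    cat > "$dir/ifcfg-virbrMan$i" <<EOF
DEVICE=virbrMan$i
TYPE=Bridge
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no
USERCTL=no
EOF
    cat > "$dir/ifcfg-$iface.$vlan" <<EOF
DEVICE=$iface.$vlan
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no
VLAN=yes
BOOTPROTO=none
BRIDGE=virbrMan$i
EOF
    i=$((i + 1))
  done
}

# Example:
# gen_bridge_cfgs /etc/sysconfig/network-scripts em2 20
```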
  • 17. Additional configuration to allow access from openvim (I)
 Uncomment the following lines of /etc/libvirt/libvirtd.conf to allow external connection to libvirtd:
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
unix_sock_dir = "/var/run/libvirt"
auth_unix_rw = "none"
 Create and configure a user for openvim access. A new user must be created to access the compute node from openvim. The user must belong to group libvirt, and other users must be able to access its home:
# create a new user
$ sudo useradd -m -G libvirt <user>
# or modify an existing user
$ sudo usermod -a -G libvirt <user>
# Allow other users to access /home/<user>
$ sudo chmod +rx /home/<user>
  • 18. Additional configuration to allow access from openvim (II)
 Copy the ssh key of openvim into the compute node. From the machine where openvim is running (not from the compute node), run:
openvim $ ssh-keygen  # needed to generate ssh keys if not done before
openvim $ ssh-copy-id <user>@<compute host>
 After that, ensure that you can access the compute host from openvim directly, without a password prompt:
openvim $ ssh <user>@<compute host>
 Create a local folder for image storage and grant access from openvim: Images will be stored in a remote shared location accessible by all compute nodes. This can be an NFS file system, for example. The VNF descriptions will contain a path to images stored in this folder. Openvim assumes that images are stored here and copied to a local file system path at virtual machine creation. The remote shared configuration is outside the scope of the compute node configuration, as it is required only by the VNF descriptors.
  • 19. Additional configuration to allow access from openvim (III)
 A local folder must be created (in the default configuration, we assume /opt/VNF/images) where the deployed VMs will be copied, and access must be granted to the libvirt group on an SELinux system. In the automation script we assume that "/home" contains more disk space than "/", so a link to a local home folder is created:
$ mkdir -p /home/<user>/VNF_images
$ rm -f /opt/VNF/images
$ mkdir -p /opt/VNF/
$ ln -s /home/<user>/VNF_images /opt/VNF/images
$ chown -R <user> /opt/VNF
# SELinux management
$ semanage fcontext -a -t virt_image_t "/home/<user>/VNF_images(/.*)?"
$ cat /etc/selinux/targeted/contexts/files/file_contexts.local | grep virt_image
$ restorecon -R -v /home/<user>/VNF_images
  • 20. Compute node configuration in special cases (I)
 Datacenter with different types of compute nodes: In a datacenter with different types of compute nodes, it might happen that compute nodes use different interface naming schemes. In that case, you can take the most used interface naming scheme as the default one, and make an additional configuration in the compute nodes that do not follow the default naming scheme. In order to do that, you should create a hostinfo.yaml file inside the image local folder (typically /opt/VNF/images). It contains entries with:
openvim-expected-name: local-iface-name
For example, if openvim contains a network using macvtap to the physical interface em1 (macvtap:em1) but in this compute node the interface is called eno1, create a local-image-folder/hostinfo.yaml file with this content:
em1: eno1
  • 21. Compute node configuration in special cases (II)
 Compute nodes in a development workstation: If a normal workstation is used to develop VNFs (as in this training), some of the compute node requirements should not be configured, as VNF performance is not a target. In order to get a working development environment:
• Do not configure huge pages, as it would subtract memory from the development environment
• Do not configure isolcpus, as it would subtract CPUs from the development environment
• Do not configure SR-IOV interfaces, as normally 10Gbps data plane interfaces won't be available
  • 22. Available automation scripts in OpenMANO github
 Automate all operations from previous slides with Telefonica NFV Reference Lab recommended best practices
 https://github.com/nfvlabs/openmano/blob/master/scripts/configure-compute-node-RHEL7.1.sh
 Personalizes RHEL7.1 on compute nodes
 Prepared to work with the following network card drivers:
- tg3 driver for management interfaces
- ixgbe and i40e drivers for data plane interfaces
 https://github.com/nfvlabs/openmano/blob/master/scripts/configure-compute-node-develop.sh
 For development workstations, without isolcpus, huge pages or data plane interfaces