1. Configuring a compute node for NFV
Network Innovation & Virtualisation Global CTO Unit
Antonio López Gracia
Antonio.lopezgracia@telefonica.com
9 Jun 2015
2. Configuring a compute node for NFV
HW & SW environment
BIOS setup
Installation of OS and required SW packages
IOMMU IOTLB cache support
Enabling IOMMU
Enabling hugepages
CPU isolation
Deactivating KSM
Enabling SR-IOV
Pre-provision of Linux bridges
Additional configuration to allow access from openvim
Compute node configuration in special cases
Available automation scripts in OpenMANO github
3. HW & SW environment
Hardware:
Servers with Xeon E5-based Intel processors with Ivy Bridge or Haswell architecture and 2 sockets
- Recommended: at least 4 cores per socket
- Recommended: at least 64 GB RAM per host
- Lab: HP DL380 Gen9 and Dell R720/R730 servers …
Data plane: 10Gbps NICs supported by DPDK, equally distributed between NUMA nodes
- Lab: HP 560 SFP+, Intel Niantic and Fortville family NICs
Control plane: 1Gbps NICs
Software:
64-bit OS with KVM, qemu and libvirt (e.g. RHEL7, Ubuntu Server 14.04, CentOS 7), with kernel support for the huge page IOTLB cache in the IOMMU
- Lab: RHEL 7.1
4. BIOS setup
Enter the BIOS and ensure that all virtualization options are active:
- Enable all Intel VT-x (processor virtualization) and VT-d (PCI passthrough) options if present. Sometimes they are grouped together simply as “Virtualization Options”
- Enable SR-IOV if present as an option
- Verify processors are configured for maximum performance (no power savings)
- Enable hyperthreading (recommended)
If virtualization options are active, the following command should give a non-empty output:
$ egrep "(vmx|svm)" /proc/cpuinfo
If hyperthreading is active, the following command should give a non-empty output:
$ egrep ht /proc/cpuinfo
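The two checks above can be wrapped in a small helper. This is an illustrative sketch, not part of the deck; the function name and the sample cpuinfo line are my own, and the helper takes the cpuinfo text as a parameter so it can be tried on any input:

```shell
# Hypothetical helper: test whether a CPU flag appears on a cpuinfo
# "flags" line. $1 = flag alternatives (e.g. "vmx|svm"), $2 = cpuinfo text.
has_cpu_flag() {
    printf '%s\n' "$2" | grep -Eq "^flags.*[[:space:]]($1)([[:space:]]|$)"
}

# Sample flags line (assumed, for illustration); on a real host use
# the contents of /proc/cpuinfo instead.
sample='flags : fpu vme msr vmx ht syscall'
has_cpu_flag 'vmx|svm' "$sample" && echo "virtualization flags present"
has_cpu_flag 'ht' "$sample" && echo "hyperthreading flag present"
```

The space-delimited boundaries avoid matching a flag name embedded inside a longer flag.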
5. Installation of OS and required SW packages
Install RHEL7.1 with the following options:
%packages
@base
@core
@development
@network-file-system-client
@virtualization-hypervisor
@virtualization-platform
@virtualization-tools
Install the following packages:
$ sudo yum install -y screen virt-manager ethtool gcc gcc-c++ xorg-x11-xauth xorg-x11-xinit xorg-x11-deprecated-libs libXtst guestfish hwloc libhugetlbfs-utils libguestfs-tools policycoreutils-python
6. IOMMU IOTLB cache support
Use a kernel with support for the huge page IOTLB cache in the IOMMU.
From vanilla kernel 3.14 onwards this support is included. In case you are using an older kernel, you should update your kernel.
Some distribution kernels have backported this requirement. Find out whether the kernel of the distribution you are using has this support:
RHEL 7.1 kernel (3.10.0-229.el7.x86_64) meets the requirement
RHEL 7.0 requires a specific upgrade to support the requirement. You can upgrade the kernel as follows:
$ wget http://people.redhat.com/~mtosatti/qemu-kvm-take5/kernel-3.10.0-123.el7gig2.x86_64.rpm
$ sudo rpm -Uvh kernel-3.10.0-123.el7gig2.x86_64.rpm --oldpackage
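A quick version comparison against the 3.14 vanilla baseline can be sketched as below. This is my own illustration, not from the deck, and note its limits: a lower distro version number does not rule out a backport (RHEL 7.1's 3.10.0-229 kernel meets the requirement despite being "older" than 3.14):

```shell
# Illustrative sketch: true if version $1 >= version $2 in version order,
# using sort -V (GNU coreutils / busybox).
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

kernel="$(uname -r | cut -d- -f1)"   # e.g. "3.10.0"
if version_ge "$kernel" "3.14"; then
    echo "vanilla kernel is recent enough"
else
    echo "older kernel: verify the IOTLB backport for your distro"
fi
```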
8. Enabling hugepages (I)
Enable 1G hugepages by adding the following to the grub command line:
default_hugepagesz=1G hugepagesz=1G
The number of huge pages can also be set at grub:
hugepages=24 (reserves 24 GB)
Or with a oneshot service that runs on boot (for early memory allocation):
$ sudo vi /usr/lib/systemd/system/hugetlb-gigantic-pages.service
[Unit]
Description=HugeTLB Gigantic Pages Reservation
DefaultDependencies=no
Before=dev-hugepages.mount
ConditionPathExists=/sys/devices/system/node
ConditionKernelCommandLine=hugepagesz=1G
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/lib/systemd/hugetlb-reserve-pages
[Install]
WantedBy=sysinit.target
9. Enabling hugepages (II)
Then set the number of huge pages:
$ sudo vi /usr/lib/systemd/hugetlb-reserve-pages
#!/bin/bash
nodes_path=/sys/devices/system/node/
if [ ! -d "$nodes_path" ]; then
echo "ERROR: $nodes_path does not exist"
exit 1
fi
reserve_pages()
{
echo $1 > "$nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages"
}
# This example reserves 12 huge pages on each NUMA node
reserve_pages 12 node0
reserve_pages 12 node1
And enable the service:
$ sudo chmod +x /usr/lib/systemd/hugetlb-reserve-pages
$ sudo systemctl enable hugetlb-gigantic-pages
Recommended best practice: reserve 4 GB per NUMA node to run the OS and use all other system memory for 1 GB huge pages.
Mount huge pages in /etc/fstab (note that "sudo echo ... >> /etc/fstab" would fail, since the redirection runs as the unprivileged user):
$ echo "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" | sudo tee -a /etc/fstab
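The best practice above reduces to simple arithmetic per NUMA node. A minimal sketch, assuming the function name, the fixed 4 GB OS margin, and that node memory is given in whole GB:

```shell
# Hypothetical helper: number of 1 GB huge pages to reserve on a NUMA
# node, keeping 4 GB for the host OS. $1 = node memory in GB.
pages_per_node() {
    echo $(( $1 - 4 ))
}

# e.g. two nodes with 16 GB each -> 12 pages per node, matching the
# "reserve_pages 12" example in the deck
pages_per_node 16
```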
10. CPU isolation
Isolate CPUs so that the host OS is restricted to run only on some cores, leaving the others to run VNFs exclusively.
Recommended best practice: run the OS on the first core of each NUMA node, by adding the isolcpus field to the grub command line:
isolcpus=1-9,11-19,21-29,31-39
The exact CPU numbers depend on the CPU numbering presented by the host OS. In the previous example, CPUs 0, 10, 20 and 30 are excluded because CPU 0 and its sibling 20 correspond to the first core of NUMA node 0, and CPU 10 and its sibling 30 correspond to the first core of NUMA node 1.
Running this awk script suggests the value to use in your compute node:
$ gawk 'BEGIN{pre=-2;} ($1=="processor"){pro=$3;} ($1=="core" && $4!=0){ if (pre+1==pro){endrange="-" pro} else{cpus=cpus endrange sep pro; sep=","; endrange="";}; pre=pro;} END{printf("isolcpus=%s\n",cpus endrange);}' /proc/cpuinfo
isolcpus=2-35,38-71
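A useful cross-check on the awk output is to count how many CPUs the range list actually isolates and compare it against the expected core count. This helper is my own illustration, not part of the deck:

```shell
# Illustrative sanity check: count the CPUs covered by an isolcpus
# range list such as "2-35,38-71".
count_isolcpus() {
    total=0
    oldIFS=$IFS; IFS=','
    for part in $1; do
        case $part in
            *-*) lo=${part%-*}; hi=${part#*-}
                 total=$(( total + hi - lo + 1 )) ;;   # inclusive range
            *)   total=$(( total + 1 )) ;;             # single CPU
        esac
    done
    IFS=$oldIFS
    echo "$total"
}

count_isolcpus "2-35,38-71"   # 34 + 34 CPUs isolated
```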
11. Dedicated resource allocation
[Diagram: two CPU sockets connected by QPI, each with its own memory and I/O devices; cores, memory and devices are partitioned between the host OS/hypervisor and VMs 1-5, with some cores left unused]
• CPUs: not oversubscribed, isolated from host OS
• Memory: huge pages
• I/O devices: passthrough, SR-IOV
12. Activating grub changes for iommu, huge pages and isolcpus
In RHEL 7/CentOS 7:
$ sudo vi /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root rhgb quiet intel_iommu=on default_hugepagesz=1G hugepagesz=1G isolcpus=2-35,38-71"
GRUB_DISABLE_RECOVERY="true"
Update grub - BIOS:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Update grub - EFI:
$ sudo grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Don’t forget to reboot the system. After boot, check that all options were applied:
$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 root=/dev/mapper/rhel_nfv105-root ro rd.lvm.lv=rhel_nfv105/swap crashkernel=auto rd.lvm.lv=rhel_nfv105/root rhgb quiet intel_iommu=on default_hugepagesz=1G hugepagesz=1G isolcpus=2-35,38-71
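The post-reboot check can be scripted. A minimal sketch (my own, not from the deck), shown here against a sample string rather than the live /proc/cmdline:

```shell
# Hypothetical check: verify that the expected boot options all appear
# in a kernel command line string. $1 = contents of /proc/cmdline.
check_cmdline() {
    for opt in intel_iommu=on default_hugepagesz=1G hugepagesz=1G isolcpus=; do
        case " $1 " in
            *" $opt"*) ;;                       # option present
            *) echo "missing: $opt"; return 1 ;;
        esac
    done
    echo "all options applied"
}

# Sample taken from the /proc/cmdline output shown above
sample='BOOT_IMAGE=/vmlinuz-3.10.0-229.el7.x86_64 ro rhgb quiet intel_iommu=on default_hugepagesz=1G hugepagesz=1G isolcpus=2-35,38-71'
check_cmdline "$sample"
```

On a real host, run it as `check_cmdline "$(cat /proc/cmdline)"`.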
13. Deactivating KSM (Kernel Same-page Merging)
KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple identical memory pages to a single page, which is then marked copy-on-write. If the contents of the page are modified by a guest virtual machine, a new page is created for that guest virtual machine.
KSM has a performance overhead which may be too large for certain environments or host physical machine systems.
KSM can be deactivated by stopping the ksmtuned and ksm services. Stopping the services deactivates KSM but does not persist after a restart:
# service ksmtuned stop
Stopping ksmtuned: [ OK ]
# service ksm stop
Stopping ksm: [ OK ]
Persistently deactivate KSM with the chkconfig command. To turn off the services, run the following commands:
# chkconfig ksm off
# chkconfig ksmtuned off
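On kernels with KSM, its state is exposed in /sys/kernel/mm/ksm/run (0 when stopped). A hypothetical verification helper, with the file path taken as a parameter so it can be exercised against any file rather than only the live sysfs entry:

```shell
# Hypothetical check (not in the deck): true if the given ksm "run" file
# reads 0, i.e. KSM is deactivated.
# $1 = path to the run file (normally /sys/kernel/mm/ksm/run)
ksm_is_off() {
    [ "$(cat "$1" 2>/dev/null)" = "0" ]
}

# Demonstration on a sample file standing in for the sysfs entry
echo 0 > /tmp/ksm_run_sample
if ksm_is_off /tmp/ksm_run_sample; then echo "KSM deactivated"; fi
```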
14. Enabling SR-IOV
SR-IOV enabling depends on the NIC used.
For Intel Niantic and Fortville NICs, the number of VFs enabled is defined by writing to:
echo X > /sys/bus/pci/devices/<PF pci address>/sriov_numvfs
Recommended best practice: set the number of VFs per PF by using udev rules:
$ cat /etc/udev/rules.d/pci_config.rules
ACTION=="add", KERNEL=="0000:05:00.0", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c 'echo 8 > /sys/bus/pci/devices/0000:05:00.0/sriov_numvfs'"
ACTION=="add", KERNEL=="0000:05:00.1", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c 'echo 8 > /sys/bus/pci/devices/0000:05:00.1/sriov_numvfs'"
ACTION=="add", KERNEL=="0000:0b:00.0", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c 'echo 8 > /sys/bus/pci/devices/0000:0b:00.0/sriov_numvfs'"
…
Blacklist the ixgbevf module by adding the following to the grub command line. This must be done after adding this host to openvim, but not before. The reason for blacklisting this driver is that it causes the VLAN tags of broadcast packets not to be properly removed when received on an SR-IOV port:
modprobe.blacklist=ixgbevf (on grub boot line)
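Since the udev rules above are identical except for the PF PCI address, they can be generated. A hypothetical convenience function (name and interface are my own):

```shell
# Hypothetical generator: emit one udev rule of the form shown above.
# $1 = PF PCI address (e.g. 0000:05:00.0), $2 = number of VFs
sriov_udev_rule() {
    printf 'ACTION=="add", KERNEL=="%s", SUBSYSTEM=="pci", RUN+="/usr/bin/bash -c '\''echo %s > /sys/bus/pci/devices/%s/sriov_numvfs'\''"\n' "$1" "$2" "$1"
}

# One line per PF; redirect the combined output to
# /etc/udev/rules.d/pci_config.rules on a real host
sriov_udev_rule 0000:05:00.0 8
sriov_udev_rule 0000:05:00.1 8
```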
15. Pre-provision of Linux bridges (I)
OpenMANO relies on Linux bridges to interconnect VMs when there are no high performance requirements for I/O. This is the case for control plane VNF interfaces that are expected to carry a small amount of traffic.
A set of Linux bridges must be created on every host. Every Linux bridge must be attached to a physical host interface with a specific VLAN. In addition, an external management switch must be used to interconnect those physical host interfaces. Bear in mind that the host interfaces used for data plane VM interfaces will be different from the host interfaces used for control plane VM interfaces.
Currently the OpenMANO configuration uses 20 bridges named virbrMan1 to virbrMan20, using VLAN tags 2001 to 2020 respectively, to interconnect VNF elements.
Another bridge called virbrInf with VLAN tag 1001 is used to interconnect physical infrastructure (hosts, switches and management VMs like openMANO itself, in case it runs virtualized).
16. Pre-provision of Linux bridges (II)
To create a bridge in RHEL 7.1, two files must be defined in /etc/sysconfig/network-scripts:
$ cat /etc/sysconfig/network-scripts/ifcfg-virbrMan1
DEVICE=virbrMan1
TYPE=Bridge
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no
USERCTL=no
$ cat /etc/sysconfig/network-scripts/ifcfg-em2.2001
DEVICE=em2.2001
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no
VLAN=yes
BOOTPROTO=none
BRIDGE=virbrMan1
The host interface (em2 in the example), the name of the bridge (virbrMan1) and the VLAN tag (2001) can be different. In case you use a different name for the bridge, you should take it into account in 'openvimd.cfg'.
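With 20 virbrMan bridges following the virbrManN / VLAN 2000+N convention, the file pairs can be generated rather than written by hand. A hypothetical sketch (not part of the deck) that writes to stdout for review instead of directly into /etc/sysconfig/network-scripts:

```shell
# Hypothetical generator: emit the ifcfg pair for bridge virbrMan$1
# attached to host interface $2 with VLAN tag 2000+$1.
bridge_ifcfg() {
    n=$1; iface=$2; tag=$(( 2000 + n ))
    # Bridge device file (ifcfg-virbrManN)
    printf '# ifcfg-virbrMan%s\nDEVICE=virbrMan%s\nTYPE=Bridge\nONBOOT=yes\nDELAY=0\nNM_CONTROLLED=no\nUSERCTL=no\n' "$n" "$n"
    # VLAN sub-interface file (ifcfg-IFACE.TAG) enslaved to the bridge
    printf '# ifcfg-%s.%s\nDEVICE=%s.%s\nONBOOT=yes\nNM_CONTROLLED=no\nUSERCTL=no\nVLAN=yes\nBOOTPROTO=none\nBRIDGE=virbrMan%s\n' "$iface" "$tag" "$iface" "$tag" "$n"
}

bridge_ifcfg 1 em2
```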
17. Additional configuration to allow access from openvim (I)
Uncomment the following lines of /etc/libvirt/libvirtd.conf to allow external connections to libvirtd:
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
unix_sock_dir = "/var/run/libvirt"
auth_unix_rw = "none"
Create and configure a user for openvim access. A new user must be created to access the compute node from openvim. The user must belong to the group libvirt, and other users must be able to access its home directory:
# create a new user
$ sudo useradd -m -G libvirt <user>
# or modify an existing user
$ sudo usermod -a -G libvirt <user>
# allow other users to access /home/<user>
$ sudo chmod +rx /home/<user>
18. Additional configuration to allow access from openvim (II)
Copy the ssh key of openvim into the compute node. From the machine where openvim is running (not from the compute node), run:
openvim $ ssh-keygen   # needed to generate ssh keys if not done before
openvim $ ssh-copy-id <user>@<compute host>
After that, ensure that you can access the compute host from openvim directly, without a password prompt:
openvim $ ssh <user>@<compute host>
Create a local folder for image storage and grant access from openvim:
Images will be stored in a remote shared location accessible by all compute nodes. This can be an NFS file system, for example. The VNF descriptions will contain a path to images stored in this folder. Openvim assumes that images are stored here and copied to a local file system path at virtual machine creation. The remote shared configuration is outside the scope of the compute node configuration, as it is required only by the VNF descriptors.
19. Additional configuration to allow access from openvim (III)
A local folder must be created (in the default configuration we assume /opt/VNF/images) where the deployed VMs will be copied, and access must be granted to the libvirt group on an SELinux system. In the automation script we assume that "/home" contains more disk space than "/", so a link to a local home folder is created:
$ mkdir -p /home/<user>/VNF_images
$ rm -f /opt/VNF/images
$ mkdir -p /opt/VNF/
$ ln -s /home/<user>/VNF_images /opt/VNF/images
$ chown -R <user> /opt/VNF
# SELinux management
$ semanage fcontext -a -t virt_image_t "/home/<user>/VNF_images(/.*)?"
$ cat /etc/selinux/targeted/contexts/files/file_contexts.local | grep virt_image
$ restorecon -R -v /home/<user>/VNF_images
20. Compute node configuration in special cases (I)
Datacenter with different types of compute nodes:
In a datacenter with different types of compute nodes, it might happen that compute nodes use different interface naming schemes. In that case, you can take the most used interface naming scheme as the default one, and make an additional configuration in the compute nodes that do not follow the default naming scheme.
To do that, create a hostinfo.yaml file inside the local image folder (typically /opt/VNF/images). It contains entries of the form:
openvim-expected-name: local-iface-name
For example, if openvim contains a network using macvtap to the physical interface em1 (macvtap:em1) but on this compute node the interface is called eno1, create a local-image-folder/hostinfo.yaml file with this content:
em1: eno1
21. Compute node configuration in special cases (II)
Compute nodes in a development workstation:
If a normal workstation is used to develop VNFs (as in this training), some of the compute node requirements should not be configured, as VNF performance is not a target.
To get a working development environment:
• Do not configure huge pages, as it would subtract memory from the development environment
• Do not configure isolcpus, as it would subtract CPUs from the development environment
• Do not configure SR-IOV interfaces, as 10Gb data plane interfaces are normally not available
22. Available automation scripts in the OpenMANO github
Automates all operations from the previous slides with the Telefonica NFV Reference Lab recommended best practices:
https://github.com/nfvlabs/openmano/blob/master/scripts/configure-compute-node-RHEL7.1.sh
Personalizes RHEL7.1 on compute nodes. Prepared to work with the following network card drivers:
- tg3 driver for management interfaces
- ixgbe and i40e drivers for data plane interfaces
https://github.com/nfvlabs/openmano/blob/master/scripts/configure-compute-node-develop.sh
For development workstations, without isolcpus, huge pages, or data plane interfaces.