SUPERFLUIDITY project goals: instantiate network functions and services on-the-fly; run them anywhere in the network (core, aggregation, edge); migrate them transparently to different locations; make them portable across heterogeneous infrastructure environments (computing and networking), while taking advantage of specific hardware features, such as high performance accelerators, when available.
Conclusions: Unikernel virtualization can provide VM instantiation and boot times in the order of ms; ongoing: consolidation of results, and a generic, automatic optimization process for the hypervisor toolstack and for guests. Work is still needed at the level of Virtual Infrastructure Managers, e.g. OpenStack (~1 s), Nomad (~300 ms). VIMs are currently designed for generality; the challenge is to specialize them in a flexible way while keeping compatibility with the mainstream versions.
Deploying Unikernels in the NFV Infrastructure - Stefano Salsano
Unikernel technology allows to build tiny VMs with memory footprint in the order of hundreds of KBs and boot time in the order of milliseconds.
We consider the use of Unikernels as Virtual Network Functions for NFV, in particular in highly dynamic and distributed scenarios in which Unikernels need to be instantiated within a few tens of milliseconds across a highly distributed infrastructure.
We have patched existing VIMs (Virtual Infrastructure Managers) like OpenStack and OpenVIM, and a lightweight orchestrator like Nomad, in order to orchestrate ClickOS Unikernels, and we measured the achieved performance.
Finally, we present a complete testbed for the orchestration of ClickOS Unikernels, based on enhancements to OpenVIM and Xen. The proposed enhancements are Open Source.
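As a concrete sketch of the mechanism involved, a ClickOS unikernel is started through Xen's standard `xl` toolstack with a tiny domain configuration. The fragment below is illustrative only: the VM name, image path, and bridge name are placeholders, not values from the testbed described above.

```
# Illustrative xl domain configuration for a ClickOS micro-VNF
name     = "clickos-vnf-1"
kernel   = "/path/to/clickos_x86_64"   # ClickOS unikernel image (placeholder path)
memory   = 8                           # in MB; a unikernel needs only a few MB
vcpus    = 1
vif      = ['bridge=xenbr0']           # one paravirtualized NIC attached to a bridge
on_crash = "destroy"
```

Such a domain would then be instantiated with `xl create <config-file>`; the tiny image and memory footprint are what make millisecond-scale boot feasible.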
Performance Evaluation and Tuning of Virtual Infrastructure Managers for (Micro) Virtual Network Functions
Virtualized Network Functions (VNFs) are emerging as the keystone of 5G network architectures: flexibility, agility, fast instantiation times, consolidation, Commercial Off The Shelf (COTS) hardware support and significant cost savings are fundamental for meeting the requirements of the new generation of mobile networks. In this paper we deal with the management of the virtual computing resources for the execution of Micro VNFs. This functionality is performed by the Virtual Infrastructure Manager (VIM) in the NFV MANagement and Orchestration (MANO) reference architecture. We discuss the VIM instantiation process and propose a generic reference model, starting from the analysis of two Open Source VIMs, namely OpenStack Nova and Nomad. We implemented tuned versions of the VIMs with the specific goal of reducing the duration of the instantiation process. We carried out a performance comparison of the two VIMs, considering both the plain and the tuned versions. The tuned VIMs and the performance evaluation tools that we have employed are provided openly and can be downloaded from our repository.
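The kind of measurement described above amounts to timestamping each phase of the instantiation process and summing the per-phase durations. The sketch below illustrates the idea only; the phase names and sleep-based stand-ins are hypothetical, not the actual breakdown used in the paper.

```python
import time

def timed_phases(phases):
    """Run each (name, fn) phase in order and record its wall-clock duration."""
    durations = {}
    for name, fn in phases:
        start = time.perf_counter()
        fn()
        durations[name] = time.perf_counter() - start
    return durations

# Hypothetical phases of a VIM instantiation request (stand-ins for
# e.g. API handling, scheduling, and the actual VM spawn).
durations = timed_phases([
    ("api_request", lambda: time.sleep(0.01)),
    ("scheduling",  lambda: time.sleep(0.02)),
    ("spawn",       lambda: time.sleep(0.05)),
])
total = sum(durations.values())
```

A per-phase breakdown like this is what reveals where a general-purpose VIM spends most of its time, and hence which components to tune.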
Extending ETSI VNF descriptors and OpenVIM to support Unikernels - Stefano Salsano
After a short introduction to the goals and approach of the Superfluidity EU research project, we discuss Unikernels and their orchestration aspects. Unikernel technology allows building tiny VMs with memory footprints in the order of hundreds of KBs and boot times in the order of milliseconds. We focus on ClickOS Unikernels.
We have adapted 3 VIMs (OpenStack, Nomad, OpenVIM) to support ClickOS Unikernels and report a performance evaluation of the VM instantiation time.
We have implemented a scenario that combines Unikernels and regular VMs in the same Network Service or VNF by extending OpenVIM. We describe how we have extended the ETSI NFV models and OpenVIM. In particular, we provide the details of the OpenVIM descriptor extensions to support Unikernels.
Superfluid Orchestration of heterogeneous Reusable Functional Blocks for 5G n... - Stefano Salsano
The demo is composed of three scenes presenting tools and results from the Superfluidity project.
1) RDCL 3D is an extensible web framework which can be used to: edit, validate, visualize service and component descriptors expressed with different modelling languages (RDCLs); deploy the component / services over execution platforms.
2) Software defined wireless network (RAN as a Service). An end-to-end wireless network is described as a chain of RFBs (Reusable Functional Blocks) with RDCL 3D. This chain is dynamically instantiated in a cloud environment using containers. The demonstration shows a full software solution orchestrating different RFBs (RAN and CORE) over Central/EDGE/Front-End clouds. The fronthaul network is also made reprogrammable through SDN, which is also deployed as RFBs.
3) Orchestration of micro-VNFs (Unikernels). We have added support for Unikernels (ClickOS) in the Xen hypervisor and in the OpenVIM Virtual Infrastructure Manager. Regular VMs (Xen HVM) and Unikernels can run together in the same infrastructure. In the demo we dynamically instantiate an end-to-end service on the infrastructure by chaining regular VMs and Unikernel-based VNFs.
http://cloudstack.org/about-cloudstack/cloudstack-events/viewevent/29-build-an-open-source-cloud-day-boston.html
XCP combines the Xen hypervisor with enhanced security, storage, and network virtualization technologies to offer a rich set of virtual infrastructure cloud services. These XCP cloud services can be leveraged by cloud providers to enable isolation and multi-tenancy capabilities in their environments. XCP also meets user requirements for security, availability, performance, and isolation in private and public cloud deployments.
The document discusses the evolution of XenServer architecture to address scalability limitations. The current architecture works well now but will hit bottlenecks on larger servers. The new "Windsor" architecture uses domain 0 disaggregation to move virtualization functions out of domain 0 and into separate domains for improved performance, scalability, and isolation. Key benefits include better VM density, use of hardware resources, stability, availability, and extensibility. It provides a flexible platform that can scale-out across servers.
A Reimplementation of NetBSD Based on a Microkernel by Andrew S. Tanenbaum - EuroBSDcon
Minix 3 is a reimplementation of NetBSD based on a microkernel architecture. It aims to build a highly reliable operating system through isolation of components, running drivers and servers as user-mode processes, and making the system self-healing. The presentation outlines the architecture and goals of Minix 3, and encourages participation from the audience to help further develop and expand the system.
Oscon 2012: From Datacenter to the Cloud - Featuring Xen and XCP - The Linux Foundation
Here are some common existing deployment methods for virtual machines:
- Manual installation from ISO - Booting a virtual machine from an installation ISO and manually installing an operating system through the graphical user interface. Good for one-off deployments but not scalable.
- Scripted installation - Using scripts to automate the installation process. Better than manual but still requires customizing for each new virtual machine.
- Templates - Creating a "golden image" template virtual machine with a pre-installed and configured operating system. New virtual machines can be quickly deployed by cloning the template. Allows consistent deployments but still requires customizing each template.
- Configuration management - Using configuration management tools like Puppet, Chef, or Ansible to declare and enforce the desired configuration of virtual machines after deployment. Scales well and keeps machines consistent over time.
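As a minimal illustration of the configuration-management approach in the list above, a playbook can declare the desired state of a freshly cloned VM. The Ansible sketch below is generic; the host group and package choice are examples, not tied to any particular deployment described here.

```yaml
# Illustrative Ansible playbook: converge new VMs to a declared state
- hosts: new_vms
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Ensure nginx is enabled and running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Unlike scripted installation, the same play can be re-run safely: tasks that are already satisfied are skipped, which is what makes the declarative approach scale.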
UPDATED OCTOBER 2015: Unikernels are small, fast, easily deployable, and very secure application stacks. Lacking a traditional operating system layer, they provide a new way of looking at the cloud which goes beyond the methodologies used by Docker and other container technologies.
This is an update of the deck as delivered by Russell Pavlicek. It includes some ground-breaking work done in the Rump Kernel project to bring web servers, databases, and scripting languages into the world of Unikernels.
Deck result of the Ohio Linuxfest 2015 in Columbus, OH.
This document provides an overview of Xen virtualization and the Xen community. It discusses the goals of Xen, including paravirtualization and hardware virtualization techniques. It also summarizes recent work done to improve FreeBSD support in Xen, including PVHVM support in FreeBSD 10.x and ongoing work to support PVH domains in FreeBSD HEAD. Finally, it introduces the Xen toolstack.
The document discusses OnApp's distributed block storage platform built on Xen. It aims to provide affordable enterprise-level storage for cloud providers using commodity hardware. The platform utilizes integrated storage drives within hypervisors managed by storage VMs. Content is replicated across drives and servers for high performance and resilience without a single point of failure. The distributed design allows for scaling of IOPS and capacity without the high costs of traditional SANs.
This talk will discuss the challenges of client virtualization and introduce, at a technical level, XenClient XT, a security-oriented client virtualization product by Citrix. By describing the XenClient XT architecture and features, it will show how Xen's unique design and its support for modern x86 platform hardware can increase security and isolation among VMs.
Disaggregation of the services provided by the platform will be a key topic of this talk. It will also be shown how third-party software components can provide services to VMs in a secure and controlled way.
Xen Cloud Platform (XCP) provides a complete virtualization stack based on the open source Xen hypervisor. XCP includes the Xen hypervisor, Xen management tools (XAPI), virtual networking (Open vSwitch), storage integration, and templates for installing Windows and Linux guests. XAPI is the central management component that provides an API and user interfaces for provisioning and managing virtual machines and infrastructure resources. XCP aims to deliver an enterprise-ready cloud platform with high performance, security, and scalability.
This document provides an introduction to the NetBSD kernel. It discusses NetBSD's history and focus on portability across architectures. Key features of the NetBSD kernel discussed include its process scheduling, SMP support, threading model using scheduler activations, and event notification using kqueues. Debugging support via DDB and KGDB is also summarized. The document provides a brief overview of NetBSD's build system and configuration, and notes some limitations in device support. It concludes by highlighting NetBSD's clean code, documentation, and commercial support options.
Xen in the Cloud provides a brief history of Xen in cloud computing and an overview of current Xen projects. Xen originated as an academic project in the late 1990s and was an early influence on cloud platforms like Amazon EC2. The Xen Hypervisor was designed for cloud computing. Today, the Xen Community Project oversees various open source Xen-based projects including Xen itself, Xen Cloud Platform (XCP), and the Xen API (XAPI). XCP provides a complete virtualization stack and XAPI enables cloud management. Work is ongoing to integrate Xen further with Linux and bring Xen security and reliability features to cloud platforms.
XPDDS18: Linux-based Device Model Stubdomains in Qubes OS - Marek Marczykowsk... - The Linux Foundation
One of the killer features of Xen is the ability to contain qemu in a minimal stubdomain. But even though qemu-upstream has been supported by Xen for a long time, stubdomains are compatible only with the ancient qemu-traditional. There were multiple approaches to this problem discussed over time (rumprun, Linux, ...), including some PoC patches. In this presentation I'll explain why we've chosen the Linux solution in Qubes OS and what challenges we faced to make it really work.
Xen is a mature, enterprise-grade hypervisor with many advanced security features which are unique to Xen. For this reason it's the hypervisor of choice for the NSA, the DoD, and the new QubesOS Secure Desktop project. However, while much of the security of Xen is inherent in its design, many of the advanced security features, such as stub domains, driver domains, XSM, and so on, are not enabled by default. This session will describe all of the advanced security features of Xen, and the best way to configure them for the cloud environment.
As the current stubdomain based on MiniOS is difficult to maintain, we have worked on a stubdomain based on Linux. This makes it possible to use upstream QEMU in the stubdomain with little change.
So first I will present how a Linux-based stubdomain is built and launched, and the difficulties around it. Then, to see if this is a viable option, I will show disk and network benchmarks comparing it with a traditional QEMU-in-dom0 configuration.
To finish, I will present the current limitations of this type of stubdomains.
Design and implementation of a reliable and cost-effective cloud computing in... - Francesco Taurino
This document summarizes the INFN Napoli experience in designing and implementing a reliable and cost-effective cloud computing infrastructure. Key aspects included using existing hardware, virtualization and clustering technologies to consolidate services and reduce costs. A network with redundant switches and storage servers using GlusterFS provided high availability. Custom tools were developed to simplify administration tasks like provisioning, migration, and load balancing of virtual machines. The solution provided an efficient and reliable private cloud with over one year of uninterrupted uptime.
RBD, the RADOS Block Device in Ceph, gives you virtually unlimited scalability (without downtime), high performance, intelligent balancing and self-healing capabilities that traditional SANs can't provide. Ceph achieves this higher throughput through a unique system of placing objects across multiple nodes, and adaptive load balancing that replicates frequently accessed objects over more nodes. This talk will give a brief overview of the Ceph architecture, current integration with Apache CloudStack, and recent advancements with Xen and blktap2.
Xen cloud platform v1.1 (given at Build a Cloud Day in Antwerp) - The Linux Foundation
Xen Cloud Platform (XCP) provides a complete virtualization stack for server virtualization and cloud computing. It is based on the open source Xen hypervisor and extends it with features for cloud management and orchestration through the open source XenAPI toolstack. XCP delivers Xen, XenAPI, and all related components as a pre-packaged virtual appliance that can be easily deployed. This summary focuses on the history and architecture of Xen in cloud computing and how XCP builds upon Xen to deliver an enterprise-ready virtualization platform.
The document provides information about an IT professional who manages Insan Solutions and provides various IT services including software development, virtualization using KVM, and IT support. It then discusses KVM virtualization in more detail, explaining that KVM allows using the Linux kernel as a hypervisor for virtual machines, providing benefits like leveraging the Linux scheduler and memory management, free cost, and stable I/O performance. The document concludes with a demonstration of KVM virtualization.
As time goes on more OSes are getting Dom0 support, so there's a growing need to provide a platform independent set of tools from which to operate Xen. This talk will expose the different mechanisms used on NetBSD that diverge from the Linux approach, and how Xen is improving its userspace tools to provide a more platform independent support.
The talk also touches upon various features that BSD provides or plans to provide with Xen, thus presenting a coherent roadmap view of where we've come from, and what lies ahead.
What's in this talk:
Xen and BSD
Status updates from the world of BSD
Ecosystem/userbase
Superfluid networking for 5G: vision and state of the art - Stefano Salsano
In physics, superfluidity is a state in which matter behaves like a fluid with zero viscosity. The vision of superfluid networking corresponds to the ability to decompose services into network functions to be deployed on-the-fly, run them anywhere in the network (core, aggregation, edge) and shift them transparently to different locations and heterogeneous execution environments. Superfluid networking tackles crucial shortcomings in today’s networks like long provisioning times, with wasteful over-provisioning used to meet variable demand and reliance on rigid and cost-ineffective hardware devices. The 5G System architecture can be deployed using techniques like Network Function Virtualization (NFV) that potentially enable the realization of superfluid networking. In this talk, we discuss the state of the art of NFV models and infrastructures for 5G and illustrate the path toward superfluid networking, considering the results of the Superfluidity research project (funded by EU in the H2020 framework).
Superfluid Deployment of Virtual Functions: Exploiting Mobile Edge Computing ... - Stefano Salsano
Network Function Virtualization (NFV) technologies are fundamental enablers for meeting the objectives of 5G networks. In this work, we first introduce the architecture for dynamic deployment and composition of virtual functions proposed by the Superfluidity project. Then we consider a case study based on a typical 5G scenario. In particular, we detail the design and implementation of a video streaming service exploiting Mobile Edge Computing (MEC) functionalities. The analysis of the case study provides an assessment of what can be achieved with current technologies and gives a first confirmation of the validity of the proposed approach. Finally, we identify future directions of work towards the realization of a superfluid softwarized network.
Oscon 2012 : From Datacenter to the Cloud - Featuring Xen and XCPThe Linux Foundation
Here are some common existing deployment methods for virtual machines:
- Manual installation from ISO - Booting a virtual machine from an installation ISO and manually installing an operating system through the graphical user interface. Good for one-off deployments but not scalable.
- Scripted installation - Using scripts to automate the installation process. Better than manual but still requires customizing for each new virtual machine.
- Templates - Creating a "golden image" template virtual machine with a pre-installed and configured operating system. New virtual machines can be quickly deployed by cloning the template. Allows consistent deployments but still requires customizing each template.
- Configuration management - Using configuration management tools like Puppet, Chef, Ansible to declar
UPDATED OCTOBER 2015: Unikernels are small, fast, easily deployable, and very secure application stacks. Lacking a traditional operating system layer, they provide a new way of looking at the cloud which goes beyond the methodologies used by Docker and other container technologies.
This is an update of the deck as delivered by Russell Pavlicek. This includes some ground-breaking work done in the Rump Kernel project to bring web servers, database, and scripting language into the world of Unikernels.
Deck result of the Ohio Linuxfest 2015 in Columbus, OH.
This document provides an overview of Xen virtualization and the Xen community. It discusses the goals of Xen, including paravirtualization and hardware virtualization techniques. It also summarizes recent work done to improve FreeBSD support in Xen, including PVHVM support in FreeBSD 10.x and ongoing work to support PVH domains in FreeBSD HEAD. Finally, it introduces the Xen toolstack.
The document discusses OnApp's distributed block storage platform built on Xen. It aims to provide affordable enterprise-level storage for cloud providers using commodity hardware. The platform utilizes integrated storage drives within hypervisors managed by storage VMs. Content is replicated across drives and servers for high performance and resilience without a single point of failure. The distributed design allows for scaling of IOPS and capacity without the high costs of traditional SANs.
This talk will discuss the challenges of client virtualization and introduce at a technical level XenClient XT, a security-oriented client virtualization product by Citrix. By describing XenClient XT architecture and features, it will be shown how the unique Xen's design and its support for modern x86 platform hardware can increase security and isolation among VMs.
Disaggregation of services provided by the platform will be a key of this talk. It will also be shown how third party software components can provide services to VMs in a secure and controlled way.
Xen Cloud Platform (XCP) provides a complete virtualization stack based on the open source Xen hypervisor. XCP includes the Xen hypervisor, Xen management tools (XAPI), virtual networking (Open vSwitch), storage integration, and templates for installing Windows and Linux guests. XAPI is the central management component that provides an API and user interfaces for provisioning and managing virtual machines and infrastructure resources. XCP aims to deliver an enterprise-ready cloud platform with high performance, security, and scalability.
This document provides an introduction to the NetBSD kernel. It discusses NetBSD's history and focus on portability across architectures. Key features of the NetBSD kernel discussed include its process scheduling, SMP support, threading model using scheduler activations, and event notification using kqueues. Debugging support via DDB and KGDB is also summarized. The document provides a brief overview of NetBSD's build system and configuration, and notes some limitations in device support. It concludes by highlighting NetBSD's clean code, documentation, and commercial support options.
Xen in the Cloud provides a brief history of Xen in cloud computing and an overview of current Xen projects. Xen originated as an academic project in the late 1990s and was an early influence on cloud platforms like Amazon EC2. The Xen Hypervisor was designed for cloud computing. Today, the Xen Community Project oversees various open source Xen-based projects including Xen itself, Xen Cloud Platform (XCP), and the Xen API (XAPI). XCP provides a complete virtualization stack and XAPI enables cloud management. Work is ongoing to integrate Xen further with Linux and bring Xen security and reliability features to cloud platforms.
XPDDS18: Linux-based Device Model Stubdomains in Qubes OS - Marek Marczykowsk...The Linux Foundation
One of the killer features of Xen is the ability to contain qemu in a minimal stubdomain. But even though qemu-upstream has been supported by Xen for a long time, stubdomains are compatible only with the ancient qemu-traditional. There were multiple approaches to this problem discussed over time (rumprun, Linux, ...), including some PoC patches. In this presentation I'll explain why we've chosen the Linux solution in Qubes OS and what challenges we faced to make it really work.
Xen is a mature enterprise-grade virtual machine with many advanced security features which are unique to Xen. For this reason it's the hypervisor of choice for the NSA, the DoD, and the new QubesOS Secure Desktop project. However, while much of the security of Xen is inherent in its design, many of the advanced security features, such as stub domains, driver domains, XSM, and so on are not enabled by default. This session will describe all of the advanced security features of Xen, and the best way to configure them for the Cloud environment.
As the current stubdomain based on minios is difficult to maintain, we have worked on a stubdomain based on Linux. This helps to use QEMU upsteam in the stubdom with little change.
So first I will present how a Linux based stubdomain is built and lauched, and the difficulties around it. Then, to see if this is a viable option, I will show disk and network benchmarks to compare it with a traditional QEMU in dom0 configuration.
To finish, I will present the current limitations of this type of stubdomains.
Design and implementation of a reliable and cost-effective cloud computing in...Francesco Taurino
This document summarizes the INFN Napoli experience in designing and implementing a reliable and cost-effective cloud computing infrastructure. Key aspects included using existing hardware, virtualization and clustering technologies to consolidate services and reduce costs. A network with redundant switches and storage servers using GlusterFS provided high availability. Custom tools were developed to simplify administration tasks like provisioning, migration, and load balancing of virtual machines. The solution provided an efficient and reliable private cloud with over one year of uninterrupted uptime.
RBD, the RADOS Block Device in Ceph, gives you virtually unlimited scalability (without downtime), high performance, intelligent balancing and self-healing capabilities that traditional SANs can't provide. Ceph achieves this higher throughput through a unique system of placing objects across multiple nodes, and adaptive load balancing that replicates frequently accessed objects over more nodes. This talk will give a brief overview of the Ceph architecture, current integration with Apache CloudStack, and recent advancements with Xen and blktap2.
Xen cloud platform v1.1 (given at Build a Cloud Day in Antwerp)The Linux Foundation
Xen Cloud Platform (XCP) provides a complete virtualization stack for server virtualization and cloud computing. It is based on the open source Xen hypervisor and extends it with features for cloud management and orchestration through the open source XenAPI toolstack. XCP delivers Xen, XenAPI, and all related components as a pre-packaged virtual appliance that can be easily deployed. This summary focuses on the history and architecture of Xen in cloud computing and how XCP builds upon Xen to deliver an enterprise-ready virtualization platform.
The document provides information about an IT professional who manages Insan Solutions and provides various IT services including software development, virtualization using KVM, and IT support. It then discusses KVM virtualization in more detail, explaining that KVM allows using the Linux kernel as a hypervisor for virtual machines, providing benefits like leveraging the Linux scheduler and memory management, free cost, and stable I/O performance. The document concludes with a demonstration of KVM virtualization.
As time goes on more OSes are getting Dom0 support, so there's a growing need to provide a platform independent set of tools from which to operate Xen. This talk will expose the different mechanisms used on NetBSD that diverge from the Linux approach, and how Xen is improving its userspace tools to provide a more platform independent support.
The talk also touches upon various features that BSD provides or plans to provide with Xen, thus presenting a coherent roadmap view of where we've come from, and what lies ahead.
What's in this talk:
Xen and BSD
Status updates from the world of BSD
Ecosystem/userbase
Superfluid networking for 5G: vision and state of the artStefano Salsano
In physics, superfluidity is a state in which matter behaves like a fluid with zero viscosity. The vision of superfluid networking corresponds to the ability to decompose services into network functions to be deployed on-the-fly, run them anywhere in the network (core, aggregation, edge) and shift them transparently to different locations and heterogeneous execution environments. Superfluid networking tackles crucial shortcomings in today’s networks like long provisioning times, with wasteful over-provisioning used to meet variable demand and reliance on rigid and cost-ineffective hardware devices. The 5G System architecture can be deployed using techniques like Network Function Virtualization (NFV) that potentially enable the realization of superfluid networking. In this talk, we discuss the state of the art of NFV models and infrastructures for 5G and illustrate the path toward superfluid networking, considering the results of the Superfluidity research project (funded by EU in the H2020 framework).
Superfluid Deployment of Virtual Functions: Exploiting Mobile Edge Computing ...Stefano Salsano
The Network Function Virtualization (NFV) technologies are fundamental enablers to meet the objectives of 5G networks. In this work, we first introduce the architecture for dynamic deployment and composition of virtual functions proposed by the Superfluidity project. Then we consider a case study based on a typical 5G scenario. In particular, we detail the design and implementation of a Video Streaming service exploiting Mobile Edge Computing (MEC) functionalities. The analysis of the case study provides an assessment of what can be achieved with current technologies and gives a first confirmation of the validity of the proposed approach. Finally, we identify future directions of work towards the realization of a superfluid softwarized network.
Mpls conference 2016-data center virtualisation-11-marchAricent
Aricent’s presentation on “Micro VNFs and micro-service environments” addresses next-generation Virtualized Network Functions (VNFs), a debate that is heating up: in the discussion on microservices, carriers have asked the community to step up research on microservice deployments.
Aricent believes that existing VNFs, which come directly from physical appliance software, are not properly designed and are less suited for cloud operations. These first-generation VNFs are replications of physical appliances, have monolithic architectures and need more computational power. They are weighed down by physical-appliance platform features (HA, ISSU, non-stop routing/switching) and carry redundant code that may be unnecessary in the cloud, since the cloud platform provides these features through its inherent capabilities.
Unikraft: Fast, Specialized Unikernels the Easy WayScyllaDB
P99 CONF
Unikernels are famous for providing excellent performance in terms of boot times, throughput and memory consumption, to name a few metrics. However, they are infamous for making it hard and extremely time consuming to extract such performance, and for needing significant engineering effort in order to port applications to them. We introduce Unikraft, a novel micro-library OS that (1) fully modularizes OS primitives so that it is easy to customize the unikernel and include only relevant components and (2) exposes a set of composable, performance-oriented APIs in order to make it easy for developers to obtain high performance.
Our evaluation using off-the-shelf applications such as nginx, SQLite, and Redis shows that running them on Unikraft results in a 1.7x-2.7x performance improvement compared to Linux guests. In addition, Unikraft images for these apps are around 1MB, require less than 10MB of RAM to run, and boot in around 1ms on top of the VMM time (total boot time 3ms-40ms). Unikraft is a Linux Foundation open source project and can be found at www.unikraft.org.
Presentation given at the 2017 LinuxCon China
Unikernel is a novel software technology that links an application with an OS in the form of a library and packages them into a specialized image that can be deployed directly on a hypervisor. Compared to traditional VMs or the more recent containers, Unikernels are smaller, more secure and more efficient, making them ideal for cloud environments. There are already many open source projects, like OSv, Rumprun and so on. But why have these existing unikernels not yet gained broad popularity? We think Unikernels face three major challenges: 1. compatibility with existing applications; 2. lack of production support (e.g. monitoring, debugging, logging); 3. lack of a compelling use case. In this presentation, we review our investigation and exploration of whether and how we can convert Linux into a Unikernel to eliminate these significant shortcomings, plus some explorations of coordinating and cooperating with the hypervisor.
RDCL 3D, a Model Agnostic Web Framework for the Design and Composition of NFV...Stefano Salsano
RDCL 3D is a “model agnostic” web framework for the design and composition of NFV services and components. The framework allows editing and validating the descriptors of services and components both textually and graphically and supports the interaction with external orchestrators or with deployment and execution environments. RDCL 3D is open source and designed with a modular approach, allowing developers to “plug in” the support for new models. We describe several advances with respect to the NFV state of the art, which have been implemented with RDCL 3D. We have integrated in the platform the latest ETSI NFV ISG model specifications for which no parsers/validators were available. We have also included in the platform the support for OASIS TOSCA models, reusing existing parsers. Then we have considered the modelling of components in a modular software router (Click), which goes beyond the traditional scope of NFV. We have further developed this approach by combining traditional NFV components (Virtual Network Functions) and Click elements in a single model. Finally, we have considered the support of this solution using the Unikernels virtualization technology.
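The “plug in” approach described above can be sketched as a parser registry, where each descriptor model registers its own validator. This is purely an illustrative sketch: the class and function names below are hypothetical, not the actual RDCL 3D API.

```python
# Illustrative sketch (not RDCL 3D code): a registry where support for new
# descriptor models can be "plugged in" as parser/validator callables.
class ParserRegistry:
    def __init__(self):
        self._parsers = {}

    def register(self, model_name, parser):
        """Associate a model (e.g. 'etsi-nfv', 'tosca', 'click') with a
        callable that validates a descriptor and returns it parsed."""
        self._parsers[model_name] = parser

    def validate(self, model_name, descriptor):
        if model_name not in self._parsers:
            raise ValueError(f"no parser plugged in for model '{model_name}'")
        return self._parsers[model_name](descriptor)


def validate_tosca(descriptor):
    # Toy check for a TOSCA-like model: a topology_template must be present.
    if "topology_template" not in descriptor:
        raise ValueError("missing topology_template")
    return descriptor


registry = ParserRegistry()
registry.register("tosca", validate_tosca)
```

A descriptor for an unsupported model then fails fast with a clear error, while new models only require registering one more callable.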
Marcelo Perazolo, Lead Software Architect, IBM Corporation - Monitoring a Pow...Nagios
Marcelo Perazolo, Lead Software Architect, IBM Corporation - In this session, Marcelo will describe how Nagios can be
integrated and extended for the monitoring of a typical
power-based converged infrastructure, and how it interfaces with existing element managers to provide a single point of integration for passive and active monitoring purposes.
LF_DPDK17_OpenNetVM: A high-performance NFV platforms to meet future communic...LF_DPDK
This document discusses software-based networking and network function virtualization (NFV). It introduces NetVM, an NFV platform developed by the author that provides high performance packet delivery across virtual machines using DPDK for zero-copy networking. NetVM enables complex network services to be distributed across multiple VMs while maintaining high throughput. The author also discusses OpenNetVM, an open source version of NetVM, and contributions like Flurries that enable unique network functions to run per flow for improved scalability. NFVnice, a userspace framework for scheduling NFV chains, is also introduced to improve throughput, fairness and CPU utilization.
The Modern Telco Network: Defining The Telco CloudMarco Rodrigues
This document discusses the modern telco network and the telco cloud. It begins by explaining why telcos need to move to a cloud model due to factors like IP transport commoditization and the customer experience. It then defines what a telco cloud is, highlighting its key properties like physical distribution, low latency, and seamless integration of data centers and networks. Requirements for the telco cloud are outlined, including the need to support various use cases and unique requirements of telco VNFs. Finally, a mobile use case is presented to demonstrate how a telco cloud could support functions like the EPC and provide orchestration across distributed infrastructure.
Explain the elements of the NFV infrastructure and their interrelationships.
Understand key design issues related to virtualized network functions.
Explain the purpose of and operation of NFV management and orchestration.
Present an overview of important NFV use cases.
Virtualisation For Network Testing & Staff TrainingAPNIC
This document discusses the benefits of network virtualization for technical training and testing. Some key points:
- Virtualization abstracts functionality from hardware, allowing more efficient use of resources, lower costs, and flexibility.
- It allows consolidating many servers onto few physical machines for efficiency or distributing applications across many virtual servers for scalability.
- For training, virtualization reduces logistics costs like shipping hardware, lowers footprint needs, and makes environments easy to reconfigure.
- NSRC has used virtualization successfully for its technical capacity building workshops in Africa and Asia Pacific, replacing physical hardware with a few virtualization hosts.
Webinar: OpenEBS - Still Free and now FASTEST Kubernetes storageMayaData Inc
Webinar Session - https://youtu.be/_5MfGMf8PG4
In this webinar, we share how the Container Attached Storage pattern makes performance tuning more tractable, by giving each workload its own storage system, thereby decreasing the variables needed to understand and tune performance.
We then introduce MayaStor, a breakthrough in the use of containers and Kubernetes as a data plane. MayaStor is the first containerized data engine available that delivers near the theoretical maximum performance of underlying systems. MayaStor performance scales with the underlying hardware and has been shown, for example, to deliver in excess of 10 million IOPS in a particular environment.
Unikernels are constructed by combining application code with only the operating system components necessary for that code to run. The result is a highly specialized, single-purpose application which can be deployed directly to the cloud or onto IoT-like devices. Unikernels reduce software complexity by only including code that is required, resulting in portable applications with much smaller footprints and fast boot times.
By combining the familiar tooling and portability of Docker with the efficiency and specialization of next-generation unikernel technology, organizations have a flexible platform to build, ship and run distributed applications without being restricted to a particular infrastructure. Because workloads that reach the data center today are on a spectrum from physical machine to container to hypervisor, only the Docker platform can further widen the scope and provide more flexibility for orchestrating hybrid applications.
Watch the video from Docker Online Meetup #31: https://blog.docker.com/2016/01/docker-online-meetup-unikernels/
Hands on guide to the nuts and bolts of administering an MQ Appliance and key differences from working with a software MQ installation. (Live presentation was accompanied by demonstration of the MQ Console WebUI capabilities - some screenshots included give a flavor).
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSXOVHcloud
In this workshop VMware will provide a quick reminder of the main contributions of the NSX network virtualization platform: consistent network and security management, increased application resiliency, rapid migration of workloads to and from the cloud.
VMware and OVH will then move on to practical cases with implementation of micro-segmentation, dynamic routing, automatic deployment of an application, load balancing in the OVH Hosted Private Cloud. This workshop is aimed at a technical audience.
ISC Cloud 2013 - Cloud Architectures for HPC – Industry Case StudiesOpenNebula Project
This presentation discusses private cloud architectures for high-performance computing (HPC). It begins by describing the use case of using a private cloud for HPC workloads. It then covers the main challenges of deploying private HPC clouds, including flexible application management, resource management at scale, and ensuring application performance. Several case studies of existing private HPC clouds are presented, including those at FermiCloud, CESGA Cloud, SARA Cloud, SZTAKI Cloud, and KTH Cloud. Finally, trends in private cloud adoption by industry are discussed, such as experimenting with ARM architectures and providing hybrid cloud deployments.
Discover and learn how to build a microservices platform, get a view of the best of breed architecture, solving common challenges, dig into Netflix stack, Yelp PaaSTA, AirBnB SmartStack, Apache Mesos, SoundCloud, Spinnaker experiences.
French audience : the JUG live recording is available here, https://www.youtube.com/watch?v=5LnL1HYmLwY&feature=youtu.be
Konrad Wilk is a Software Development Manager at Oracle. His group’s mission is to make Linux and Xen Project virtualization better and faster. As part of this work, Konrad has been the maintainer of the Xen Project subsystem in Linux, Xen Project maintainer and now also Release Manager for the 4.5 release of the Xen Project Hypervisor. Konrad has been active in the Linux and Xen Project communities for more than 6 years and was instrumental in adding Xen Project support to the Linux Kernel.
Similar to Superfluid NFV: VMs and Virtual Infrastructure Managers speed-up for instantaneous service instantiation (20)
Dataplane programming with eBPF: architecture and toolsStefano Salsano
eBPF is definitely a complex technology. Developing complex systems based on eBPF is challenging due to the intrinsic limitations of the model and the known shortcomings of the tool chain.
The learning curve of this technology is very steep and needs continuous coaching from experts. This tutorial will investigate:
What eBPF is and why it has gained a prominent position among the solutions to improve the packet processing performance in Linux/x86 nodes. We will shortly present some important use case scenarios for eBPF, like Kubernetes’ Cilium.
The architecture of eBPF and its programming toolchain (e.g. bcc).
What are the frameworks for eBPF programming, such as Polycube and InKeV.
How to make eBPF programming easier, more flexible and modular with HIKe/eCLAT
How to implement a custom application logic in eBPF with eCLAT using a python-like script
How to extend the framework and develop new modules
SRv6 experience and future perspectives
1) SRv6 and SRv6 Network Programming model
2) ROSE : Research on Open source SRv6 Ecosystem
3) SRv6 for SD-WAN & our EveryWAN solution
4) User Controlled SD-WAN Services (UCSS) project
5) Conclusions & next steps
Segment Routing over IPv6 (SRv6) is an architecture based on the source routing paradigm that seeks the right balance between distributed (network-wide) intelligence and centralized (controller-based) programmability. Using SRv6, network devices have complete control over the forwarding paths and the network functions to be applied to packets, by combining simple network instructions. Moreover, applications can become SRv6 aware and gain control over the network-wide forwarding and processing of packets. SRv6 technology has been implemented in hardware by different vendors (e.g. CISCO, Huawei, Barefoot), in software (e.g. Linux kernel networking) and in software with I/O acceleration (e.g. FD.io Vector Packet Processing using DPDK). Several large scale deployments of SRv6 have been rolled out in 2019 (including Softbank, Iliad, ChinaTelecom, China Unicom), see https://tools.ietf.org/html/draft-matsushima-spring-srv6-deployment-status. This tutorial will provide a quick introduction to SRv6 architecture and protocols and will illustrate the design and implementation of SRv6 services with hands-on examples. The hands-on part will be based on the open-source SRv6 ecosystem developed in the ROSE project: https://netgroup.github.io/rose/
This presentation discusses Segment Routing over IPv6 (SRv6) and the Network Programming Model. It provides an overview of what SRv6 is, how it works, and how the Network Programming Model can be used for applications like VPNs, SD-WANs, and service function chaining. The presentation also covers SRv6 standardization efforts, open source implementations, and areas of ongoing research.
Testbeds IntErconnections with L2 overlays - SRv6 for SFCStefano Salsano
1) The TIE-SR demo shows a Service Function Chaining (SFC) scenario across different testbeds using SRv6 (Segment Routing over IPv6). It automatically designs and deploys an arbitrary Layer 2 overlay network topology over multiple SoftFIRE testbeds.
2) It creates an SRv6 domain on the overlay network and defines two SRv6 policies - one for traffic engineering and one for SFC. The SFC policy routes traffic through a snort intrusion detection system virtual network function.
3) An SDN controller can periodically change the SRv6 policies to route traffic through different paths and virtual network functions for testing purposes.
Energy-efficient Path Allocation Heuristic for Service Function ChainingStefano Salsano
1) The document proposes an energy-efficient heuristic algorithm for service function chaining path allocation in SDN networks. The goal is to minimize energy consumption by switching off unused servers while meeting quality of service constraints.
2) It formulates the problem as a mixed integer linear program to find optimal resource allocations and then develops a low-complexity heuristic to solve larger problem instances in reasonable time.
3) Results show the heuristic finds near-optimal solutions with much less computation time compared to the optimal approach as problem size increases in terms of network size and number of flows.
Extending OpenVIM R3 to support Unikernels (and Xen)Stefano Salsano
After a short introduction to the goals and approach of the Superfluidity EU research project, we present the proposed extensions to OpenVIM to support ClickOS Unikernels and Xen.
We have implemented a scenario that combines Unikernels and regular VMs in the same Network Service or VNF by extending OpenVIM. We describe how we have extended the ETSI NFV models and OpenVIM. In particular, we provide the details of the OpenVIM descriptor extensions to support Unikernels.
As background information, we discuss Unikernels and their orchestration aspects. Unikernel technology makes it possible to build tiny VMs with a memory footprint in the order of hundreds of KBs and boot times in the order of milliseconds. We focus on ClickOS Unikernels. We have adapted 3 VIMs (OpenStack, Nomad, OpenVIM) to support ClickOS Unikernels and report a performance evaluation of the VM instantiation time.
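The actual OpenVIM descriptor extensions are defined in the paper and repository; purely as an illustration of the idea, a VNFC descriptor might be extended with unikernel-specific fields along these lines (the field names below are hypothetical, not the ones defined in the paper):

```python
# Hypothetical sketch of unikernel-oriented descriptor fields for a ClickOS
# VNFC; the real OpenVIM descriptor extensions are defined in the paper/repo.
clickos_vnfc = {
    "name": "clickos-firewall",
    "hypervisor": "xen-paravirt",       # hypothetical: select Xen PV instead of KVM
    "vm-type": "unikernel",             # hypothetical: unikernel vs. regular VM
    "image": "/var/images/clickos_fw",  # ClickOS unikernel image
    "ram-mb": 16,                       # unikernel memory footprint is tiny
    "vcpus": 1,
}


def is_unikernel(descriptor):
    """Let a VIM scheduler branch between unikernel and regular-VM handling."""
    return descriptor.get("vm-type") == "unikernel"
```

A VIM extended this way can keep the regular VM path untouched and only take the fast unikernel path when the descriptor asks for it.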
D-STREAMON - NFV-capable distributed framework for network monitoringStefano Salsano
Several reasons make NFV an attractive paradigm for IT security: lower costs, agile operations and better isolation, as well as fast security updates, improved incident response and a better level of automation. At the same time, network threats tend to be increasingly complex and distributed, implying huge traffic volumes to be monitored and increasingly strict mitigation delay requirements. Considering the current trend of networking and the requirements to counteract the evolution of cyber-threats, network monitoring is also expected to move towards NFV-based solutions. In this paper, we present Distributed StreaMon (D-StreaMon), an orchestration framework for distributed monitoring on NFV network architectures. D-StreaMon has been designed to face the above challenges. It relies on the StreaMon platform, a solution for network monitoring originally designed for traditional middleboxes. The changes that allow StreaMon to be deployed on NFV network architectures are described. The paper reports a performance evaluation of the realized NFV-based solution and discusses potential benefits in monitoring tenants' VMs for Service Providers.
The SCISSOR approach to establishing situational awareness in Industrial Cont...Stefano Salsano
The SCISSOR project aims to establish situational awareness in industrial control systems through a highly scalable security monitoring framework. The framework integrates a wide range of heterogeneous sensors, uses a distributed data aggregation approach, and advanced detection and correlation models. It exploits cloud computing concepts. The architecture includes sensors, local correlation and aggregation layers, and a decision and analysis layer. The framework was tested on a real industrial control system in Favignana, Italy using various sensors.
This document discusses cloud and mobile/edge cloud computing. It mentions cloud computing, virtualization technologies, datacenters, and public cloud providers as enablers of cloud computing. It also references platforms for cloud computing and was presented by Prof. Stefano Salsano from the University of Rome Tor Vergata's Electronic Engineering Department.
Generalized Virtual Networking, an enabler for Service Centric Networking and...Stefano Salsano
In this presentation we introduce the Generalized Virtual Networking (GVN) concept. GVN provides a framework to influence the routing of packets based on service level information that is carried in the packets. It is based on a protocol header inserted between the Network and Transport layers, therefore it can be seen as a layer 3.5 solution. Technically, GVN is proposed as a new transport layer protocol in the TCP/IP protocol suite. An IP router that is not GVN capable will simply process the IP destination address as usual. Similar concepts have been proposed in other works, and referred to as Service Oriented Networking, Service Centric Networking, Application Delivery Networking, but they are now generalized in the proposed GVN framework. In this respect, the GVN header is a generic container that can be adapted to serve the needs of arbitrary service level routing solutions. The GVN header can be managed by GVN capable end-hosts and applications or can be pushed/popped at the edge of a GVN capable network (like a VLAN tag). In this position paper, we show that Generalized Virtual Networking is a powerful enabler for SCN (Service Centric Networking) and NFV (Network Function Virtualization) and how it couples with the SDN (Software Defined Networking) paradigm.
OSHI - Open Source Hybrid IP/SDN networking @EWSDN14Stefano Salsano
The introduction of SDN in IP backbones requires the coexistence of regular IP forwarding and SDN based forwarding. The former is typically applied to best effort Internet traffic, the latter can be used for different types of advanced services (VPNs, Virtual Leased Lines, Traffic Engineering…). In this paper we first introduce the architecture and the services of a “hybrid” IP/SDN networking scenario. Then we describe the design and implementation of an Open Source Hybrid IP/SDN (OSHI) node. It combines Quagga for OSPF routing and Open vSwitch for OpenFlow based switching on Linux. The availability of tools for experimental validation and performance evaluation of SDN solutions is fundamental for the evolution of SDN. We provide a set of open source tools that facilitate the design of hybrid IP/SDN experimental networks, their deployment on Mininet or on distributed SDN research testbeds, and their testing. Finally, using the provided tools, we evaluate key performance aspects of the proposed solutions. The OSHI development and test environment is available in a VirtualBox VM image that can be downloaded.
Superfluid NFV: VMs and Virtual Infrastructure Managers speed-up for instantaneous service instantiation
1. Superfluid NFV: VMs and Virtual Infrastructure Managers speed-up for
instantaneous service instantiation
Stefano Salsano (CNIT/Univ. of Rome Tor Vergata), Felipe Huici (NEC)
October 10th 2016 – EWSDN @ SDN & OpenFlow World Congress
Joint work with Filipe Manco, Florian Schmidt, Kenichi Yasukata (NEC) - Pier Luigi Ventre,
Claudio Pisa, Giuseppe Siracusano, Paolo Lungaroni, Nicola Blefari-Melazzi (CNIT)
A super-fluid, cloud-native, converged edge system
2. Outline
• The SUPERFLUIDITY project – goals and approach
• Part I – Speed up of:
– Virtualization Platform (including the hypervisor)
– The guests (i.e., virtual machines)
• Part II – Speed up of:
– Virtual Infrastructure Managers
2
3. Outline
• The SUPERFLUIDITY project – goals and approach
• Part I – Speed up of:
– Virtualization Platform (including the hypervisor)
– The guests (i.e., virtual machines)
• Part II – Speed up of:
– Virtual Infrastructure Managers
3
4. SUPERFLUIDITY goals
• Instantiate network functions and services on-the-fly
• Run them anywhere in the network (core, aggregation, edge)
• Migrate them transparently to different locations
• Make them portable across heterogeneous infrastructure environments
(computing and networking), while taking advantage of specific hardware
features, such as high performance accelerators, when available
4
5. SUPERFLUIDITY approach
• Decomposition of network components and services into elementary and reusable
primitives (“Reusable Functional Blocks – RFBs”)
• Native, converged cloud-based architecture
• Virtualization of radio and network processing tasks
• Platform-independent abstractions, permitting reuse of network functions across
heterogeneous hardware platforms
• High performance software optimizations along with leveraging of hardware
accelerators
5
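The decomposition into Reusable Functional Blocks can be pictured as small packet-processing primitives that are recomposed into services. The sketch below is a toy illustration under that assumption (function names and fields are invented here, not project code):

```python
# Illustrative sketch (not SUPERFLUIDITY code): a service decomposed into
# elementary, reusable primitives (RFBs), modelled as packet-processing
# callables that can be recomposed into different services.
def classify(pkt):
    # Toy RFB: tag RTSP traffic (port 554) as video.
    pkt["class"] = "video" if pkt.get("port") == 554 else "best-effort"
    return pkt


def mark(pkt):
    # Toy RFB: set a DSCP value based on the classification.
    pkt["dscp"] = 46 if pkt["class"] == "video" else 0
    return pkt


def compose(*rfbs):
    """Chain RFBs into a service: the output of one feeds the next."""
    def service(pkt):
        for rfb in rfbs:
            pkt = rfb(pkt)
        return pkt
    return service


qos_service = compose(classify, mark)
```

The point of the decomposition is exactly this reusability: the same `classify` primitive could be recomposed with a different marking or monitoring block to obtain another service.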
6. SUPERFLUIDITY architecture
6
Based on the concept of
Reusable Functional Blocks (RFBs),
applied to different heterogeneous
RFB Execution Environments (REE)
Different RDCLs (RFB Description and
Composition Languages) can be used in
different environments.
7. • Classical NFV environments (i.e. by ETSI NFV standards)
– VNFs are composed/orchestrated to realize Network Services
– VNFs can be decomposed in VNFC (VNF Components)
[Figure: «Big» monolithic VNFs contrasted with VNFs decomposed into VNF Components (VNFCs), each running in its own VM]
Heterogeneous composition/execution environments
7
8. Heterogeneous composition/execution environments
• Towards more «fine-grained» decomposition…
• Modular software routers (e.g. Click)
– Click elements are combined in configurations (Directed Acyclic Graphs)
8
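Since a Click configuration is a directed acyclic graph of elements, an orchestrator can sanity-check a configuration before deployment with an ordinary topological sort. A minimal sketch (the element names and graph are an invented example, not a real Click configuration):

```python
# Sketch: check that a Click-style element graph is a valid DAG by
# topologically sorting it; a cycle would raise graphlib.CycleError.
from graphlib import TopologicalSorter

# Map each element to the set of elements whose output feeds it.
predecessors = {
    "FromDevice": set(),
    "Classifier": {"FromDevice"},
    "Counter":    {"Classifier"},
    "Discard":    {"Classifier"},
    "ToDevice":   {"Counter"},
}

# static_order() yields elements so that every predecessor comes first;
# this order is also a valid packet-processing pipeline order.
order = list(TopologicalSorter(predecessors).static_order())
```

If the configuration contained a loop, `static_order()` would raise `CycleError` instead of producing an ordering, which is exactly the validity check wanted here.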
9. Heterogeneous composition/execution environments
• Towards more «fine-grained» decomposition…
• XFSM-based (eXtended Finite State Machine) decomposition of traffic forwarding /
flow processing tasks, and HW support for wire speed execution
9
10. Network Functions reuse/composition
[Figure: two extremes of network function deployment]
The ‘traditional’ VNF’s view: specific VNFs in VMs on a general-purpose computing platform (CPUs), with NFV-like VNF management — full flexibility (VNF = ‘anything’ coded in ‘any’ language), but performance limitations (slow path execution).
Traditional SDN southbound (OpenFlow): pre-implemented match/action tables (flow table entries programmed via flow-mod) on an OpenFlow (HW) switch, with SDN-like configuration deployment — line-rate performance (TCAM/HW), but extremely limited flexibility (hardly an NF).
10
11. Network Functions reuse/composition
[Figure: the same two extremes — the ‘traditional’ VNF’s view on a general-purpose computing platform (full flexibility, slow-path performance limits) and the traditional SDN southbound / OpenFlow switch (line-rate TCAM/HW performance, extremely limited flexibility) — with two converging trends: lean towards ‘more domain specific’ network computing HW, and lean towards ‘more expressive’ programming constructs / APIs]
11
12. APIs definition
[Figure: RFBs (#a, #b, #c, …, #n) composed by node-level and network-level RDCL scripts within RFB Execution Environments (REEs); each REE exposes a User–Manager (UM) API towards the REE User and a Manager–Resource (MR) API towards the REE Resource Entity]
RDCLs (RFB Description and Composition Languages) are used on the logical API between the “user” of an RFB Execution Environment and the “manager” (provider) of such environment.
Different RDCLs can be used in different environments.
12
13. Rationale for the unified RFB concept
• It is not a top-down approach: we cannot impose a single model and apply it in all
environments
• Convergence across different heterogeneous environments (where possible)
– Unify/combine the languages and tools
• Helps to identify how the different environments can share resources and can be
combined in a common infrastructure
13
14. Convergence approach
A unified cloud platform for radio and network functions. CRAN, MEC and cloud technologies are
integrated with an architectural paradigm that can unify heterogeneous equipment and
processing into one dynamically optimised, superfluid network
14
15. Towards sub 10 ms service instantiation
• The SUPERFLUIDITY project – goals and approach
• Part I – Speed up of:
– Virtualization Platform (including the hypervisor)
– The guests (i.e., virtual machines)
• Part II – Speed up of:
– Virtual Infrastructure Managers
15
16. Why a superfluid NFV (sub 10 ms service instantiation)
• Quick provisioning of services: JIT proxies, firewalls, on-the-fly monitoring
• Quick migration of services: base station splitting
• Optimized use of resources thanks to dynamic sharing
• Hosting a large number of services on the same server: e.g., vCPE
• High-performance networking: NFV, virtualized CDNs, etc.
• Quick checkpointing
• General investment and operating cost reductions
16
19. VM instantiation and boot time
19
Orchestrator request → VIM operations → Virtualization Platform → Guest OS (VM) boot time
• VIM operations: 1-2 s
• Virtualization Platform: 5-10 s
• Guest OS (VM) boot time: ~1 s
20. Towards sub 10 ms service instantiation
• The SUPERFLUIDITY project – goals and approach
• Part I – Speed up of:
– Virtualization Platform (including the hypervisor)
– The guests (i.e., virtual machines)
• Part II – Speed up of:
– Virtual Infrastructure Managers
20
22. But I need to pick my poison ☹
• Containers: lightweight, but iffy isolation
• Hypervisors: strong isolation, but heavyweight
We need a superfluid virtualization
22
24. Towards a Superfluid Platform
• Fast boot/destroy/migration times
• Reducing guest memory footprints
• Optimizing packet I/O (40-80 Gb/s)
• New hypervisor schedulers
24
26. A Quick Xen Primer
• The Xen hypervisor runs directly on the hardware (CPU, memory, MMU, NICs, …)
• Dom0 (Linux/NetBSD) hosts the xl / libxl toolstack (on top of libxc and libxs), the Xenstore, the NIC and block drivers, a SW switch and the back-end virtual drivers (e.g., netback), connected to guests via xenbus
• Each DomU guest runs its own OS (e.g., Linux) and apps, with front-end drivers (e.g., netfront) talking to the Dom0 back-ends via xenbus
26
27. A Unikernel Primer
• Specialized VM: single application + minimalistic OS
• Single address space, co-operative scheduler, so low overheads
• Contrast with a general-purpose OS (e.g., Linux, FreeBSD): many apps (app1 … appN) and many drivers, split across kernel space and user space
• A unikernel pairs one app and its virtual drivers with a minimalistic OS (e.g., MiniOS, OSv) in a single address space
27
28. Memory Footprint
• Xen allocates a minimum of 4MB for all guests, irrespective
of how much memory is needed or asked for
– Modified the toolstack to allow memory allocations to be
specified in KBs
• Guests require a lot of memory to run
– Use unikernels instead
28
29. Memory Footprint - Result
• Hello world guest: 296 KB
• Ponger guest: 692 KB
– 350 KB come from lwip and newlib
• This is with minor optimizations to MiniOS
(e.g., reducing the threads' stack size)
29
30. VM Boot Times
1. xl create myvm.cfg
2. libxl (e.g., parse config)
3. libxc (e.g., hypercalls to create guest, reserve memory, load image into memory)
4. Write entries to Xenstore for guest to use
5. Boot guest
6. Guest retrieves information from Xenstore (e.g., event channels, back-end domains)
Note: VM destroy and migration times depend on similar toolstack/Xenstore operations!
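As an illustration of step 1, a minimal Xen guest configuration for a diskless unikernel might look like the following sketch; the file name, guest name and paths are hypothetical:

```
# myvm.cfg - minimal PV guest configuration (illustrative sketch)
name   = "clickos-fw"                # hypothetical guest name
kernel = "/root/clickos_x86_64"      # unikernel image, loaded directly into guest memory
memory = 8                           # in MB; stock Xen allocates at least 4MB per guest
vcpus  = 1
vif    = ['bridge=xenbr0']           # one virtual NIC attached to a software bridge
# no 'disk = [...]' entry: the guest is diskless
```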
30
31. Main Culprits
• Toolstack
– Inefficient/outdated code
– Too generic for our purposes (e.g., support for HVM guests, QEMU).
• Xenstore
– Used to communicate information between guests (e.g., event channel
numbers, back-end domain information)
– Relies on transactions, watches
– Single point of failure, bottleneck
• And of course the guest
– Use unikernels
31
32. Towards a Solution
• Toolstack – Chaos
– Complete re-write of toolstack, no need for libxl/libxc
– Includes framework for easily plugging in different elements of a toolstack
(e.g., with or without Xenstore)
• Xenstore
– Do we really need one?
– Design and implementation of “Xenstore-less” guests and the corresponding
toolstack
32
40. Virtualization Platforms & Guests - Ongoing & Future Work
• Short term
– Lots of clean-up, more results
– Libxc replacement
– High performance (40-80 Gb/s) service chaining
• Longer term
– New hypervisor schedulers for massive consolidation, high packet I/O
– Unicore: tools for automatically building high performance unikernels
and OSes → OS-level decomposition
40
41. Towards sub 10 ms service instantiation
• The SUPERFLUIDITY project – goals and approach
• Part I – Speed up of:
– Virtualization Platform (including the hypervisor)
– The guests (i.e., virtual machines)
• Part II – Speed up of:
– Virtual Infrastructure Managers
41
42. VM instantiation and boot time
42
Orchestrator request → VIM operations → Virtualization Platform → Guest OS (VM) boot time
• VIM operations: 1-2 s
• Virtualization Platform: ~1 ms
• Guest OS (VM) boot time: ~1 ms
• Unikernels can provide low latency instantiation times for "Micro-VNFs"
• What about VIMs (Virtual Infrastructure Managers)?
43. Performance analysis and Tuning of VIMs for Micro VNFs
• General model of the VNF instantiation process
• Modifications to VIMs to instantiate Micro-VNFs based on
ClickOS Unikernel
• Methodology to evaluate the performance
• Performance Evaluation
43
44. Virtual Infrastructure Managers (VIMs)
We considered the performance of two VIMs:
• OpenStack Nova
– OpenStack is composed of subprojects
– Nova: orchestration and management of computing resources ---> VIM
– 1 Nova node (scheduling) + several compute nodes (which interact with the hypervisor)
– Not tied to a specific virtualization technology
• Nomad by HashiCorp
– Minimalistic cluster manager and job scheduler
– Nomad server (scheduling) + Nomad clients (interact with the hypervisor)
– Not tied to a specific virtualization technology
44
49. VIM modifications to instantiate (ClickOS) Micro VNFs
49
• A regular VM can boot its OS from an image or a disk snapshot read from an associated block device (disk): the host hypervisor instructs the VM to run the boot loader, which reads the kernel image from the block device.
• ClickOS-based Micro VNFs are shipped as a tiny kernel without a block device. These VMs need to boot from a so-called "diskless image": the host hypervisor reads the kernel image from a file or a repository and injects it directly into the VM memory.
• The interface between the Virtual Infrastructure Manager and the Virtualization Platform (hypervisor) needs to be modified to support the boot of diskless images.
50. VIM modifications to instantiate (ClickOS) Micro VNFs
• OpenStack
– Xen supported out of the box, using the Libvirt toolstack
– We considered the boot of diskless images targeting only one component
(Nova Compute) and a specific toolstack, Libvirt.
– Libvirt talks with Xen using libxl, the default Xen toolstack API.
– We modified the XML description of the guest domain provided by the driver, changing it on the fly before the creation of the domain.
• Nomad
– Xen not supported out of the box
– We developed a new Nomad driver for Xen, called XenDriver.
– The new driver communicates with the xl Xen toolstack and is also able to instantiate a ClickOS VM.
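To give a flavor of the OpenStack-side change, here is a hedged sketch (not the project's actual patch) of rewriting a libvirt domain XML on the fly so that Xen boots a diskless kernel directly; the element names follow the libvirt domain XML schema, while the helper name and paths are made up:

```python
# Sketch: turn a disk-booting libvirt domain definition into a diskless,
# direct-kernel-boot one, as needed for ClickOS-style Micro VNFs.
import xml.etree.ElementTree as ET

def to_diskless(domain_xml: str, kernel_path: str) -> str:
    """Rewrite a libvirt domain XML for direct (diskless) kernel boot."""
    root = ET.fromstring(domain_xml)
    os_elem = root.find("os")
    # Point <kernel> at the unikernel image: the toolstack loads it
    # directly into guest memory, with no boot loader or block device
    kernel = ET.SubElement(os_elem, "kernel")
    kernel.text = kernel_path
    # Drop <disk> devices: a diskless guest has no block device to boot from
    devices = root.find("devices")
    if devices is not None:
        for disk in devices.findall("disk"):
            devices.remove(disk)
    return ET.tostring(root, encoding="unicode")
```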
50
51. VIM performance evaluation approach
• We evaluate the VM scheduling and instantiation phase, combining message trace
analysis and timestamps in the code
• Message traces (coarse information, beginning and end of the different phases)
– VIM Message Analyzer capable of analyzing Nova and Nomad message exchanges
• Detailed breakdown with timestamps in the code (Nomad Client, Nova Compute)
• Workload generators:
– OpenStack: Rally benchmarking tool
– Nomad: we developed the "Nomad Pusher", a utility written in Go which programmatically submits jobs to the Nomad Server
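To give a flavor of what such a pusher does, here is a minimal Python sketch (the actual Nomad Pusher is a Go utility); jobs are registered through Nomad's public HTTP API, and the job payload fields below are illustrative:

```python
# Sketch of a "pusher"-style workload generator for Nomad (illustrative).
import json
import time
import urllib.request

def make_job(job_id: str, driver: str = "xen") -> dict:
    """Build a minimal job-registration payload (fields are illustrative)."""
    return {"Job": {
        "ID": job_id,
        "Name": job_id,
        "Datacenters": ["dc1"],
        "TaskGroups": [{"Name": "vnf",
                        "Tasks": [{"Name": "clickos", "Driver": driver}]}],
    }}

def push_job(server: str, job_id: str) -> float:
    """PUT one job to the Nomad server and return the submission timestamp."""
    req = urllib.request.Request(
        f"{server}/v1/jobs",
        data=json.dumps(make_job(job_id)).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    t0 = time.time()
    urllib.request.urlopen(req)  # response (evaluation ID) ignored in this sketch
    return t0
```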
51
52. Results – ClickOS instantiation times
52
(Charts: ClickOS instantiation times, in seconds, for OpenStack Nova and for Nomad)
53. There is no comparison implied…
• NB: the purpose of this work is NOT to compare OpenStack vs. Nomad. The goal is to understand how both behave and find ways to reduce instantiation times.
• A direct comparison makes little sense: OpenStack is a much more complete framework in terms of offered functionality and types of supported hypervisors. The comparison would also be unfair because, for Nomad, we developed a driver targeted only at the Xen/ClickOS case.
53
54. VIM Tuning
• OpenStack
– Diskless VM -> we can skip most of the actions performed during image creation
– Unikernels are special-purpose VMs:
• Is SSH really needed?
• Is a full IP stack really needed?
– We were able to reduce the spawning time by about 70%
– Looking at the overall instantiation time, the relative reduction is about 45%
• Nomad
– Not much room for optimization:
• we implemented only the necessary functionality
– We introduced further improvements assuming a local store for the Micro VNFs, reducing the driver operations by about 30 ms
54
57. VIM performances - Ongoing & Future Work
• Consider the impact of system load on performance
– Measure the average instantiation times considering batches of incoming requests with given
rate (requests/s) and arrival patterns.
– Analyze the impact of the number of already allocated VMs and of the number of target nodes
to be deployed.
• Keep improving the performance of the considered VIMs
– e.g. trying to replace the lazy notification mechanism of Nomad with a reactive approach
• Extend the analysis to another VIM
– OpenVIM from the OSM project
57
58. Unikernel virtualization in the SUPERFLUIDITY vision
• We have considered the optimization of Unikernel virtualization and the needed
enhancements to Virtual Infrastructure Managers to support Unikernels.
• In the SUPERFLUIDITY vision, Unikernels are interesting as they support the decomposition of network services into "smaller" components that can be deployed on the fly.
• The NFV Infrastructure should be extended in order to support Unikernel
virtualization in addition to traditional VMs. This way it will be possible to design
services that exploit the most efficient solutions depending on several factors.
58
59. Conclusions
• Unikernel virtualization can provide VM instantiation and boot time in
the order of ms
– ongoing: consolidation of results, generic and automatic optimization process for
hypervisor toolstack and for guests
• Work is still needed at the level of Virtual Infrastructure Managers
– e.g. OpenStack (~ 1 s), Nomad (~ 300 ms)
• VIMs are currently designed for generality, the challenge is to specialize
them in a flexible way, keeping the compatibility with the mainstream
versions
59
60. References - SUPERFLUIDITY
• SUPERFLUIDITY project Home Page http://superfluidity.eu/
• G. Bianchi, et al. “Superfluidity: a flexible functional architecture for 5G
networks”, Transactions on Emerging Telecommunications Technologies
27, no. 9, Sep 2016
60
61. References – Speed up of Virtualization Platforms / Guests
• J. Martins, M. Ahmed, C. Raiciu, V. Olteanu, M. Honda, R. Bifulco, F. Huici,
“ClickOS and the art of network function virtualization”, NSDI 2014, 11th
USENIX Conference on Networked Systems Design and Implementation,
2014.
• F. Manco, J. Martins, K. Yasukata, J. Mendes, S. Kuenzer, F. Huici,
“The Case for the Superfluid Cloud”, 7th USENIX Workshop on Hot Topics
in Cloud Computing (HotCloud 15), 2015
61
62. References – Speed up of VIMs
• P. L. Ventre, C. Pisa, S. Salsano, G. Siracusano, F. Schmidt, P. Lungaroni, N.
Blefari-Melazzi,
“Performance Evaluation and Tuning of Virtual Infrastructure Managers
for (Micro) Virtual Network Functions”, IEEE NFV-SDN 2016 Conference,
Palo Alto, USA, 7-11 Nov. 2016
62
63. Thank you. Questions?
Contacts
SUPERFLUIDITY project, Speed up of VIMs
Stefano Salsano, Associate Professor
University of Rome Tor Vergata / CNIT
stefano.salsano@uniroma2.it
Speed up of Virtualization Platforms / Guests
Felipe Huici, Chief Researcher
Networked Systems and Data Analytics Group
NEC Laboratories Europe
felipe.huici@neclab.eu
63
64. The SUPERFLUIDITY project has received funding from the European Union’s Horizon
2020 research and innovation programme under grant agreement No.671566
(Research and Innovation Action).
The information given is the author’s view and does not necessarily represent the view
of the European Commission (EC). No liability is accepted for any use that may be
made of the information contained.
64