Superfluid NFV: VMs and Virtual Infrastructure Managers speed-up for instantaneous service instantiation

SUPERFLUIDITY project goals: instantiate network functions and services on-the-fly; run them anywhere in the network (core, aggregation, edge); migrate them transparently to different locations; make them portable across heterogeneous infrastructure environments (computing and networking), while taking advantage of specific hardware features, such as high performance accelerators, when available.
Conclusions: Unikernel virtualization can provide VM instantiation and boot times on the order of milliseconds; ongoing work: consolidation of results and a generic, automatic optimization process for the hypervisor toolstack and for guests. Work is still needed at the level of Virtual Infrastructure Managers, e.g. OpenStack (~1 s) and Nomad (~300 ms). VIMs are currently designed for generality; the challenge is to specialize them in a flexible way while keeping compatibility with the mainstream versions.


  1. 1. Superfluid NFV: VMs and Virtual Infrastructure Managers speed-up for instantaneous service instantiation Stefano Salsano (CNIT/Univ. of Rome Tor Vergata), Felipe Huici (NEC) October 10th 2016 – EWSDN @ SDN & OpenFlow World Congress Joint work with Filipe Manco, Florian Schmidt, Kenichi Yasukata (NEC) - Pier Luigi Ventre, Claudio Pisa, Giuseppe Siracusano, Paolo Lungaroni, Nicola Blefari-Melazzi (CNIT) A super-fluid, cloud-native, converged edge system
  2. 2. Outline • The SUPERFLUIDITY project – goals and approach • Part I – Speed up of: – Virtualization Platform (including the hypervisor) – The guests (i.e., virtual machines) • Part II – Speed up of: – Virtual Infrastructure Managers 2
  3. 3. Outline • The SUPERFLUIDITY project – goals and approach • Part I – Speed up of: – Virtualization Platform (including the hypervisor) – The guests (i.e., virtual machines) • Part II – Speed up of: – Virtual Infrastructure Managers 3
  4. 4. SUPERFLUIDITY goals • Instantiate network functions and services on-the-fly • Run them anywhere in the network (core, aggregation, edge) • Migrate them transparently to different locations • Make them portable across heterogeneous infrastructure environments (computing and networking), while taking advantage of specific hardware features, such as high performance accelerators, when available 4
  5. 5. SUPERFLUIDITY approach • Decomposition of network components and services into elementary and reusable primitives (“Reusable Functional Blocks – RFBs”) • Native, converged cloud-based architecture • Virtualization of radio and network processing tasks • Platform-independent abstractions, permitting reuse of network functions across heterogeneous hardware platforms • High performance software optimizations along with leveraging of hardware accelerators 5
  6. 6. SUPERFLUIDITY architecture 6 Based on the concept of Reusable Functional Blocks (RFBs), applied to different heterogeneous RFB Execution Environments (REEs). Different RDCLs (RFB Description and Composition Languages) can be used in different environments.
  7. 7. Heterogeneous composition/execution environments • Classical NFV environments (i.e., as defined by the ETSI NFV standards) – VNFs are composed/orchestrated to realize Network Services – VNFs can be decomposed into VNF Components (VNFCs) [figure: «Big» VNFs and their VNFCs mapped onto VMs] 7
  8. 8. Heterogeneous composition/execution environments • Towards more «fine-grained» decomposition… • Modular software routers (e.g. Click) – Click elements are combined in configurations (Directed Acyclic Graphs) 8
  9. 9. Heterogeneous composition/execution environments • Towards more «fine-grained» decomposition… • XFSM-based (eXtended Finite State Machine) decomposition of traffic forwarding / flow processing tasks, and HW support for wire-speed execution 9
  10. 10. Network Functions reuse/composition • The ‘traditional’ VNF’s view: specific VNFs in VMs on a general-purpose computing platform (CPUs), with NFV-like VNF management – Full flexibility (VNF = ‘anything’ coded in ‘any’ language), but performance limitations (slow-path execution) • The traditional SDN southbound (OpenFlow): a pre-implemented match/action flow table on an OpenFlow (HW) switch, with entries pushed via flow-mod and SDN-like configuration deployment – Domain-specific platform (OpenFlow router): extremely limited flexibility (hardly an NF), but line-rate performance (TCAM/HW) 10
  11. 11. Network Functions reuse/composition • Same contrast as the previous slide: general-purpose computing platforms with specific VNFs in VMs (full flexibility, slow-path performance) vs. the OpenFlow-style domain-specific platform (extremely limited flexibility, line-rate performance) • Convergence directions: the VNF world leans towards ‘more domain specific’ network computing HW, while the SDN world leans towards ‘more expressive’ programming constructs / APIs 11
  12. 12. APIs definition [figure: RFBs #a…#n inside a node-level REE (RFB Execution Environment) driven by a (node-level) RDCL script; REEs composed network-wide by a network-level RDCL script; the REE User and REE Manager interact over the UM API, the REE Manager and REE Resource Entity over the MR API] RDCLs (RFB Description and Composition Languages) are used on the logical API between the “user” of an RFB Execution Environment and the “manager” (provider) of such environment. Different RDCLs can be used in different environments. 12
  13. 13. Rationale for the unified RFB concept • It is not a top-down approach: we cannot impose a single model and apply it in all environments • Convergence across different heterogeneous environments (where possible) – Unify/combine the languages and tools • Helps to identify how the different environments can share resources and can be combined in a common infrastructure 13
  14. 14. Convergence approach A unified cloud platform for radio and network functions: C-RAN, MEC and cloud technologies are integrated with an architectural paradigm that can unify heterogeneous equipment and processing into one dynamically optimised, superfluid network 14
  15. 15. Towards sub 10 ms service instantiation • The SUPERFLUIDITY project – goals and approach • Part I – Speed up of: – Virtualization Platform (including the hypervisor) – The guests (i.e., virtual machines) • Part II – Speed up of: – Virtual Infrastructure Managers 15
  16. 16. Why a superfluid NFV (sub 10 ms service instantiation) • Quick provisioning of services: JIT proxies, firewalls, on-the-fly monitoring • Quick migration of services: base station splitting • Optimized use of resources thanks to dynamic sharing • Hosting large number of services on the same server: e.g., vCPE • High-performance networking: NFV, virtualized CDNs, etc. • Quick-checkpointing • General investment and operating cost reductions 16
  17. 17. ETSI MANagement and Orchestration (MANO) Model 17
  18. 18. VM instantiation and boot time 18 Orchestrator request
  19. 19. VM instantiation and boot time 19 [figure: breakdown of the time from the orchestrator request across VIM operations, virtualization platform operations and guest OS (VM) boot, with indicative values of 1-2 s, 5-10 s and ~1 s]
  20. 20. Towards sub 10 ms service instantiation • The SUPERFLUIDITY project – goals and approach • Part I – Speed up of: – Virtualization Platform (including the hypervisor) – The guests (i.e., virtual machines) • Part II – Speed up of: – Virtual Infrastructure Managers 20
  21. 21. We need a superfluid virtualization • Containers: lightweight • Hypervisors: strong isolation 21
  22. 22. We need a superfluid virtualization • Containers: lightweight, but iffy isolation • Hypervisors: strong isolation, but heavy weight • But I need to pick my poison ☹ 22
  23. 23. We need a superfluid virtualization • Containers: lightweight, but iffy isolation • Hypervisors: strong isolation, but heavy weight • Can we break the “myth” of VMs being heavy weight? 23
  24. 24. Towards a Superfluid Platform • Fast boot/destroy/migration times • Reducing guest memory footprints • Optimizing packet I/O (40-80 Gb/s) • New hypervisor schedulers 24
  25. 25. Towards a Superfluid Platform • Fast boot/destroy/migration times • Reducing guest memory footprints • Optimizing packet I/O (40-80 Gb/s) • New hypervisor schedulers 25
  26. 26. A Quick Xen Primer [figure: the Xen hypervisor runs on the hardware (CPU, memory, MMU, NICs, …); Dom0 (Linux/NetBSD) hosts the xl / libxl / libxc / libxs toolstack, the Xenstore, NIC and block drivers, a SW switch and the netback/xenbus backend drivers; a DomU guest runs its OS (e.g. Linux), its apps and the netfront/xenbus frontend drivers] 26
  27. 27. A Unikernel Primer • Specialized VM: single application + minimalistic OS • Single address space, co-operative scheduler, so low overheads [figure: a general-purpose operating system (e.g., Linux, FreeBSD) with separate kernel space and user space, many drivers and apps, contrasted with a minimalistic operating system (e.g., MiniOS, OSv) running virtual drivers and a single app in a single address space] 27
  28. 28. Memory Footprint • Xen allocates a minimum of 4MB for all guests, irrespective of how much memory is needed or asked for – Modified the toolstack to allow memory allocations to be specified in KBs • Guests require a lot of memory to run – Use unikernels instead 28
  29. 29. Memory Footprint - Result • Hello world guest – 296KB • Ponger guest – 692KB, of which 350KB come from lwip and newlibc • This is with minor optimizations to MiniOS (e.g., reducing the threads’ stack size) 29
  30. 30. VM Boot Times 1. xl create myvm.cfg 2. libxl (e.g., parse config) 3. libxc (e.g., hypercalls to create guest, reserve memory, load image into memory) 4. Write entries to Xenstore for guest to use 5. Boot guest 6. Guest retrieves information from Xenstore (e.g., event channels, back-end domains) Note: VM destroy and migration times depend on similar toolstack/Xenstore operations! 30
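To make step 1 above concrete, here is a minimal timing sketch, assuming a Xen host with the `xl` tool and a unikernel image at a placeholder path (none of the values come from the talk); it measures how long `xl create` takes for a small MiniOS/ClickOS-style guest.

```python
#!/usr/bin/env python3
"""Rough timing of `xl create` for a minimal unikernel guest.
The kernel path, guest name and bridge below are placeholders."""
import subprocess
import tempfile
import time

XL_CONFIG = """
name   = "mini-guest"
kernel = "/root/images/clickos_x86_64"   # hypothetical unikernel image
memory = 8                               # MB, as in the MiniOS experiments
vcpus  = 1
vif    = ['bridge=xenbr0']
"""

def timed_create() -> float:
    """Write the config to a temp file and time a single `xl create`."""
    with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as cfg:
        cfg.write(XL_CONFIG)
        cfg_path = cfg.name
    start = time.monotonic()
    # `xl create` returns once the domain is built and unpaused; it does not
    # wait for the guest to finish booting internally.
    subprocess.run(["xl", "create", cfg_path], check=True)
    return (time.monotonic() - start) * 1000.0

if __name__ == "__main__":
    print(f"xl create took {timed_create():.2f} ms")
```

This command goes through the libxl/libxc calls and Xenstore writes listed in steps 2-4, which is exactly the path the chaos toolstack described later shortens.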
  31. 31. Main Culprits • Toolstack – Inefficient/outdated code – Too generic for our purposes (e.g., support for HVM guests, QEMU). • Xenstore – Used to communicate information between guests (e.g., event channel numbers, back-end domain information) – Relies on transactions, watches – Single point of failure, bottleneck • And of course the guest – Use unikernels 31
  32. 32. Towards a Solution • Toolstack – Chaos – Complete re-write of toolstack, no need for libxl/libxc – Includes framework for easily plugging in different elements of a toolstack (e.g., with or without Xenstore) • Xenstore – Do we really need one? – Design and implementation of “Xenstore-less” guests and the corresponding toolstack 32
  33. 33. Configuration for plugging in/out different elements 33
  34. 34. Can we get rid of Xenstore? [figure: with Xenstore, the toolstack and the netback driver in dom0 and the netfront driver in the guest rendezvous through the Xenstore over xenbus; without Xenstore, the toolstack hands the backend-id:x and event channel id:y directly to both the backend and the frontend] 34
  35. 35. Optimizing the Toolstack [figure: the standard xl → libxl → libxc toolstack and the new chaos → libh2 → libxcl toolstack, both reaching the hypervisor via xcall / xevtchannel; the standard stack relies on the Xenstore, the new one can run with no Xenstore] 35
  36. 36. Early Results • Guest: MiniOS, 1 VCPU, 8MB RAM, 1 VIF • Standard: 87.77 msecs 36
  37. 37. Early Results • Guest: MiniOS, 1 VCPU, 8MB RAM, 1 VIF • Without libxl: 6.67 msecs • Standard: 87.77 msecs 37
  38. 38. Early Results • Without xen store: 1.43 ms • Guest: MiniOS, 1 VCPU, 8MB RAM, 1 VIF • Without libxl: 6.67 msecs • Standard: 87.77 msecs 38
  39. 39. Quick Breakdown (times in ms)
• xc_dom_allocate 0.02
• xc_evtchn_alloc* 0.00
• xc_dom_kernel_* 0.02
• xc_dom_boot_xen_init 0.00
• xc_dom_parse_image 0.06
• xc_dom_mem_init 0.00
• xc_dom_boot_mem_init 0.13
• xc_dom_build_image 0.24
• xc_dom_boot_image 0.32
• xc_dom_gnttab_init 0.01
• xc_dom_p2m 0.00
• xc_cpuid_apply_policy 0.06
• xc_dom_release 0.13
• xc_domain_init 1.08
• dev_create 0.06
• xs_domain_create 0.00
• other 0.29
• chaos_create 1.43
39
  40. 40. Virtualization Platforms & Guests - Ongoing & Future Work • Short term – Lots of clean-up, more results – Libxc replacement – High performance (40-80 Gb/s) service chaining • Longer term – New hypervisor schedulers for massive consolidation, high packet I/O – Unicore: tools for automatically building high performance unikernels and OSes → OS-level decomposition 40
  41. 41. Towards sub 10 ms service instantiation • The SUPERFLUIDITY project – goals and approach • Part I – Speed up of: – Virtualization Platform (including the hypervisor) – The guests (i.e., virtual machines) • Part II – Speed up of: – Virtual Infrastructure Managers 41
  42. 42. VM instantiation and boot time 42 [figure: with unikernel guests, VIM operations still take 1-2 s, while the virtualization platform operations and the guest OS (VM) boot take ~1 ms each] • Unikernels can provide low-latency instantiation times for “Micro-VNFs” • What about VIMs (Virtual Infrastructure Managers)?
  43. 43. Performance analysis and Tuning of VIMs for Micro VNFs • General model of the VNF instantiation process • Modifications to VIMs to instantiate Micro-VNFs based on the ClickOS Unikernel • Methodology to evaluate the performance • Performance Evaluation 43
  44. 44. Virtual Infrastructure Managers (VIMs) We considered the performance of two VIMs: • OpenStack Nova – OpenStack is composed of subprojects – Nova: orchestration and management of computing resources ---> VIM – 1 Nova node (scheduling) + several compute nodes (which interact with the hypervisor) – Not tied to a specific virtualization technology • Nomad by HashiCorp – Minimalistic cluster manager and job scheduler – Nomad server (scheduling) + Nomad clients (interact with the hypervisor) – Not tied to a specific virtualization technology 44
  45. 45. Reference Model of the VNF instantiation process 45
  46. 46. Mapping of the reference model to the considered VIMs 46
  47. 47. VIM instantiation model for Openstack Nova 47
  48. 48. VIM instantiation model for nomad 48
  49. 49. VIM modifications to instantiate (ClickOS) Micro VNFs 49 A regular VM can boot its OS from an image or a disk snapshot that can be read from an associated block device (disk): the host hypervisor instructs the VM to run the boot loader, which reads the kernel image from the block device. ClickOS-based Micro-VNFs are instead shipped as a tiny kernel without a block device. These VMs need to boot from a so-called diskless image: the host hypervisor reads the kernel image from a file or a repository and directly injects it into the VM memory. The interface between the Virtual Infrastructure Manager and the Virtualization Platform (hypervisor) needs to be modified to support the boot of “diskless images”.
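As an illustration of the diskless-image boot just described, the following sketch (assuming the libvirt-python bindings, a Xen host, and a hypothetical ClickOS image path; it is not taken from the project code) defines a guest domain whose XML points directly at a kernel file and contains no disk, so the hypervisor loads the unikernel image straight into guest memory.

```python
"""Start a 'diskless' unikernel guest through libvirt on Xen.
Requires the libvirt-python bindings; names and paths are placeholders."""
import libvirt

DISKLESS_DOMAIN_XML = """
<domain type='xen'>
  <name>clickos-micro-vnf</name>
  <memory unit='KiB'>8192</memory>
  <vcpu>1</vcpu>
  <os>
    <type>linux</type>
    <kernel>/var/lib/images/clickos_x86_64</kernel>  <!-- unikernel image -->
  </os>
  <devices>
    <interface type='bridge'><source bridge='xenbr0'/></interface>
    <!-- note: no <disk> element at all -->
  </devices>
</domain>
"""

conn = libvirt.open("xen:///system")          # connect to the local Xen host
dom = conn.createXML(DISKLESS_DOMAIN_XML, 0)  # define and start the domain
print("started:", dom.name(), "id:", dom.ID())
conn.close()
```

This is roughly the kind of domain description that the modified Nova Compute / Libvirt path discussed on the next slide has to produce.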
  50. 50. VIM modifications to instantiate (ClickOS) Micro VNFs • OpenStack – Xen is supported out of the box, using the Libvirt toolstack – We considered the boot of diskless images targeting only one component (Nova Compute) and a specific toolstack, Libvirt – Libvirt talks with Xen using libxl, the default Xen toolstack API – We modified the XML description of the guest domain provided by the driver, changing the XML description on the fly before the creation of the domain • Nomad – Xen is not supported out of the box – We developed a new Nomad driver for Xen, called XenDriver – The new driver communicates with the xl Xen toolstack and is also able to instantiate a ClickOS VM 50
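The OpenStack change above rewrites the guest domain XML on the fly before the domain is created. The following is a rough sketch of such a rewrite, not the project's actual Nova patch; it uses the standard ElementTree module, and the paths are hypothetical.

```python
"""Illustrative on-the-fly rewrite of a libvirt/Xen domain XML so that the
guest boots a unikernel kernel image directly (diskless) instead of a disk."""
import xml.etree.ElementTree as ET

def make_diskless(domain_xml: str, kernel_path: str, cmdline: str = "") -> str:
    root = ET.fromstring(domain_xml)
    os_elem = root.find("os")
    if os_elem is None:
        os_elem = ET.SubElement(root, "os")
    # Point the domain at the unikernel image: direct kernel boot.
    ET.SubElement(os_elem, "kernel").text = kernel_path
    if cmdline:
        ET.SubElement(os_elem, "cmdline").text = cmdline
    # Drop <boot> and <disk> elements: the Micro-VNF has no block device.
    for boot in os_elem.findall("boot"):
        os_elem.remove(boot)
    devices = root.find("devices")
    if devices is not None:
        for disk in devices.findall("disk"):
            devices.remove(disk)
    return ET.tostring(root, encoding="unicode")

# Example: rewrite a disk-based definition before handing it to libvirt.
original = """<domain type='xen'>
  <name>clickos-vnf</name><memory unit='KiB'>8192</memory><vcpu>1</vcpu>
  <os><type>linux</type><boot dev='hd'/></os>
  <devices><disk type='file' device='disk'/></devices>
</domain>"""
print(make_diskless(original, "/var/lib/images/clickos_x86_64"))
```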
  51. 51. VIM performance evaluation approach • We evaluate the VM scheduling and instantiation phases, combining message trace analysis and timestamps in the code • Message traces (coarse information: beginning and end of the different phases) – a VIM Message Analyzer capable of analyzing Nova and Nomad message exchanges • Detailed breakdown with timestamps in the code (Nomad Client, Nova Compute) • Workload generators: – OpenStack: the Rally benchmarking tool – Nomad: we developed the “Nomad Pusher”, a utility written in the Go language which programmatically submits jobs to the Nomad Server 51
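The project's Nomad Pusher is written in Go; purely as an illustration of what programmatic job submission looks like, here is a minimal Python sketch against Nomad's standard HTTP job-registration endpoint (/v1/jobs). The "xen" driver name and its Config keys refer to the custom XenDriver and are assumptions, not the real driver's schema.

```python
"""Submit one job to a Nomad server over its HTTP API (illustrative only)."""
import json
import urllib.request

NOMAD_URL = "http://localhost:4646/v1/jobs"  # default Nomad API endpoint

job = {
    "Job": {
        "ID": "clickos-vnf", "Name": "clickos-vnf",
        "Datacenters": ["dc1"],
        "TaskGroups": [{
            "Name": "vnf", "Count": 1,
            "Tasks": [{
                "Name": "clickos",
                "Driver": "xen",  # custom XenDriver developed in the project
                "Config": {"kernel": "/var/lib/images/clickos_x86_64",
                           "memory": 8, "vcpus": 1},  # hypothetical keys
            }],
        }],
    }
}

req = urllib.request.Request(NOMAD_URL, data=json.dumps(job).encode(),
                             headers={"Content-Type": "application/json"},
                             method="POST")
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

A pusher utility of this kind simply loops over such submissions at a given rate to generate the workload for the measurements.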
  52. 52. Results – ClickOS instantiation times 52 OpenStack Nova Nomad seconds seconds
  53. 53. There is no comparison implied… • NB: the purpose of the work is NOT to compare OpenStack vs. Nomad. The goal is to understand how both behave and find ways to reduce instantiation times. • A direct comparison makes little sense: OpenStack is a much more complete framework in terms of offered functionality and of the different types of supported hypervisors. Moreover, the comparison would also be unfair because, for the Nomad case, we developed a driver targeted only at the Xen/ClickOS case. 53
  54. 54. VIM Tuning • OpenStack – Diskless VM -> we can skip most of the actions performed during image creation – Unikernels are special-purpose VMs: is SSH really needed? A full IP stack? – We were able to reduce the spawning time by about 70%; looking at the overall instantiation time, the relative reduction is about 45% • Nomad – Not much room for optimization: we implemented only the necessary functionality – We introduced further improvements assuming a local store for the Micro-VNFs, reducing the driver operations by about 30 ms 54
  55. 55. seconds seconds Results – OpenStack details and tuning 55 OpenStack Nova overall OpenStack Nova spawn phase
  56. 56. Results – Nomad details and tuning 56 Nomad overall Nomad spawn phase seconds seconds
  57. 57. VIM performances - Ongoing & Future Work • Consider the impact of system load on the performance – Measure the average instantiation times considering batches of incoming requests with given rate (requests/s) and arrival patterns. – Analyze the impact of the number of already allocated VMs and of the number of target nodes to be deployed. • Keep improving the performance of the considered VIMs – e.g. trying to replace the lazy notification mechanism of Nomad with a reactive approach • Extend the analysis to another VIM – OpenVIM from the OSM project 57
  58. 58. Unikernel virtualization in the SUPERFLUIDITY vision • We have considered the optimization of Unikernel virtualization and the needed enhancements to Virtual Infrastructure Managers to support Unikernels. • In the SUPERFLUIDITY vision, Unikernels are interesting as they support the decomposition of network services into “smaller” components that can be deployed on the fly. • The NFV Infrastructure should be extended in order to support Unikernel virtualization in addition to traditional VMs. This way it will be possible to design services that exploit the most efficient virtualization solution depending on several factors. 58
  59. 59. Conclusions • Unikernel virtualization can provide VM instantiation and boot times on the order of ms – ongoing: consolidation of results, and a generic, automatic optimization process for the hypervisor toolstack and for guests • Work is still needed at the level of Virtual Infrastructure Managers – e.g. OpenStack (~1 s), Nomad (~300 ms) • VIMs are currently designed for generality; the challenge is to specialize them in a flexible way while keeping compatibility with the mainstream versions 59
  60. 60. References - SUPERFLUIDITY • SUPERFLUIDITY project Home Page http://superfluidity.eu/ • G. Bianchi, et al. “Superfluidity: a flexible functional architecture for 5G networks”, Transactions on Emerging Telecommunications Technologies 27, no. 9, Sep 2016 60
  61. 61. References – Speed up of Virtualization Platforms / Guests • J. Martins, M. Ahmed, C. Raiciu, V. Olteanu, M. Honda, R. Bifulco, F. Huici, “ClickOS and the art of network function virtualization”, NSDI 2014, 11th USENIX Conference on Networked Systems Design and Implementation, 2014. • F. Manco, J. Martins, K. Yasukata, J. Mendes, S. Kuenzer, F. Huici, “The Case for the Superfluid Cloud”, 7th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 15), 2015 61
  62. 62. References – Speed up of VIMs • P. L. Ventre, C. Pisa, S. Salsano, G. Siracusano, F. Schmidt, P. Lungaroni, N. Blefari-Melazzi, “Performance Evaluation and Tuning of Virtual Infrastructure Managers for (Micro) Virtual Network Functions”, IEEE NFV-SDN 2016 Conference, Palo Alto, USA, 7-11 Nov. 2016 62
  63. 63. Thank you. Questions? Contacts SUPERFLUIDITY project, Speed up of VIMs Stefano Salsano, Associate Professor University of Rome Tor Vergata / CNIT stefano.salsano@uniroma2.it Speed up of Virtualization Platforms / Guests Felipe Huici, Chief Researcher Networked Systems and Data Analytics Group NEC Laboratories Europe felipe.huici@neclab.eu 63
  64. 64. The SUPERFLUIDITY project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No.671566 (Research and Innovation Action). The information given is the author’s view and does not necessarily represent the view of the European Commission (EC). No liability is accepted for any use that may be made of the information contained. 64
