
Extending OpenVIM R3 to support Unikernels (and Xen)

After a short introduction to the goals and approach of the Superfluidity EU research project, we present the proposed extensions to OpenVIM to support ClickOS Unikernels and Xen.

By extending OpenVIM, we have implemented a scenario that combines Unikernels and regular VMs in the same Network Service or VNF. We describe how we have extended the ETSI NFV models and OpenVIM, and in particular provide the details of the OpenVIM descriptor extensions that support Unikernels.

As background information, we discuss Unikernels and their orchestration aspects. Unikernel technology makes it possible to build tiny VMs with a memory footprint on the order of hundreds of KBs and boot times on the order of milliseconds. We focus on ClickOS Unikernels. We have adapted three VIMs (OpenStack, Nomad, OpenVIM) to support ClickOS Unikernels and report a performance evaluation of VM instantiation times.


  1. 1. Extending OpenVIM R3 to support Unikernels (and Xen)
Paolo Lungaroni (1), Claudio Pisa (2), Stefano Salsano (2,3), Giuseppe Siracusano (3), Francesco Lombardo (2)
(1) Consortium GARR, Italy; (2) CNIT, Italy; (3) Univ. of Rome Tor Vergata, Italy
Presented by Stefano Salsano, Project coordinator of the Superfluidity project, Univ. of Rome Tor Vergata / CNIT, Italy
ETSI OSM Mid-Release#4 meeting, February 8th 2018, Roma, Italy
"A super-fluid, cloud-native, converged edge system"
  2. 2. Outline • Superfluidity project goals and approach • Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers) • Unikernels orchestration over OpenStack, OpenVIM and Nomad – Performance evaluation • Extending ETSI NFV Release 2 models (IFA011, IFA014) and OpenVIM to support Unikernels orchestration – Live demo • Details of OpenVIM extensions for Unikernels support (proposal for a patch…) 2
  3. 3. Superfluidity project
Superfluidity goals:
• Instantiate network functions and services on-the-fly
• Run them anywhere in the network (core, aggregation, edge), across heterogeneous infrastructure environments (computing and networking), taking advantage of specific hardware features, such as high performance accelerators, when available
Superfluidity approach:
• Decomposition of network components and services into elementary and reusable primitives ("Reusable Functional Blocks", RFBs)
• Platform-independent abstractions, permitting reuse of network functions across heterogeneous hardware platforms
  4. 4. The Superfluidity vision
[Diagram: granularity vs. time scale. Current NFV technology: big VMs, instantiated on a time scale of days or hours. Superfluid NFV technology: small components and micro operations, instantiated on a time scale of seconds down to milliseconds.]
• From VNFs (Virtual Network Functions) to RFBs (Reusable Functional Blocks)
• Heterogeneous RFB execution environments: hypervisors, modular routers, packet processors…
  5. 5. Outline • Superfluidity project goals and approach • Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers) • Unikernels orchestration over OpenStack, OpenVIM and Nomad – Performance evaluation • Extending ETSI NFV Release 2 models (IFA011, IFA014) and OpenVIM to support Unikernels orchestration – Live demo • Details of OpenVIM extensions for Unikernels support (proposal for a patch…) 5
  6. 6. Extending the ETSI NFV models to support Unikernels
• In the NFV models, a Virtual Network Function (VNF) is decomposed into Virtual Deployment Units (VDUs)
• We extended the VDU information elements in the model to support Unikernel VDUs (based on the ClickOS Unikernel)
• "Regular" VDUs based on traditional VMs and Unikernel VDUs can coexist in the same VNF Descriptor (a minimal sketch follows below)
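To make the coexistence concrete, here is a minimal sketch of such a mixed VNF, written as a plain Python dictionary. The structure is simplified and is not the normative IFA011 schema; the hypervisor and osImageType fields anticipate the descriptor tags detailed later in this deck.

```python
# Simplified illustration (not the normative IFA011 schema): a VNF whose VDUs
# mix a "regular" Xen HVM virtual machine and a ClickOS Unikernel.
vnfd = {
    "vnfd-id": "chain-example",
    "vdu": [
        {"id": "alpine-vm", "hypervisor": "xenhvm"},   # "regular" VM VDU
        {"id": "firewall", "hypervisor": "xen-unik", "osImageType": "clickos"},
    ],
}
```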
  7. 7. Working prototype (see the live demo!)
Our orchestrator prototype (RDCL 3D) uses the enhanced VDU descriptors (ETSI Release 2 NSDs and VNFDs) and interacts with the VIM (OpenVIM). OpenVIM has been enhanced to support Xen and Unikernels, and we configured Xen to support both regular VMs (HVM) and Click Unikernels.
  8. 8. Working prototype (see the live demo!)
[Screenshot: one regular VM (Xen HVM) and 3 Unikernel VMs (ClickOS) running side by side]
  9. 9. Unikernels Chaining Proof of Concept
One "regular" VM and 3 Unikernel VMs are chained through Open vSwitch under OpenVIM: a "regular" Linux Alpine VM, a VLAN Encapsulator/Decapsulator (ClickOS), a Firewall (ClickOS) and an ICMP responder (ClickOS), each instantiated from its own descriptor. The descriptors use the extended ETSI NFV Release 2 models (composed in RDCL 3D) and the extended OpenVIM YAML descriptors.
  10. 10. Some details of the working prototype
The RDCL 3D GUI passes the ETSI Release 2 descriptors (NSDs, VNFDs) to the VIM (OpenVIM), which controls Xen through libvirt. ClickOS images are prepared "on the fly" by the RDCL 3D agent, starting from the Click configuration files.
  11. 11. Unikernel Chaining Proof of Concept
• Regular VM: pings the ICMP responder over a VLAN
• VLAN Encapsulator/Decapsulator: decapsulates the VLAN header (and re-encapsulates on the return path)
• Firewall: lets through only ARP and IP packets with ToS == 0xcc
• ICMP Responder: responds to ARP and ICMP echo requests
  12. 12. ClickOS configurations
Scenario: on the compute node (eth3, IP 10.10.0.2), a VLAN Decap/Encap (VLAN ID 100), a Firewall (ALLOW: ToS=0xCC) and a Ping Responder (IP 10.10.0.3).

ICMP/Ping Responder Click configuration:

```
define($IP 10.10.0.3);
define($MAC 00:15:17:15:5d:75);

source :: FromDevice(0);
sink :: ToDevice(1);

// classifies packets
c :: Classifier(
    12/0806 20/0001,  // ARP Requests goes to output 0
    12/0806 20/0002,  // ARP Replies to output 1
    12/0800,          // ICMP Requests to output 2
    -);               // without a match to output 3

arpq :: ARPQuerier($IP, $MAC);
arpr :: ARPResponder($IP $MAC);

source -> Print -> c;
c[0] -> ARPPrint -> arpr -> sink;
c[1] -> [1]arpq;
Idle -> [0]arpq;
arpq -> ARPPrint -> sink;
c[2] -> CheckIPHeader(14) -> ICMPPingResponder() -> EtherMirror() -> sink;
c[3] -> Discard;
```

VLAN Encapsulator/Decapsulator Click configuration:

```
source0 :: FromDevice(0);
sink0 :: ToDevice(1);
source1 :: FromDevice(1);
sink1 :: ToDevice(0);

VLANDecapsulator :: VLANDecap()
VLANEncapsulator :: VLANEncap(100)

//source0 -> VLANDecapsulator -> EnsureEther() -> sink0;
source0 -> VLANDecapsulator -> sink0;
source1 -> VLANEncapsulator -> sink1;
```

Firewall Click configuration:

```
source0 :: FromDevice(0);
sink0 :: ToDevice(1);
source1 :: FromDevice(1);
sink1 :: ToDevice(0);

c :: Classifier(
    12/0806,        // ARP goes to output 0
    12/0800 15/cc,  // IP to output 1, only if ToS == 0xcc
    -);             // without a match to output 2

source0 -> c;
c[0] -> sink0;
// c[1] -> CheckIPHeader -> ipf -> sink0;
c[1] -> sink0;
c[2] -> Print -> Discard;
source1 -> Null -> sink1;
```
  13. 13. ClickOS chain scenario
On the compute node (eth3, IP 10.10.0.2): Alpine Linux (eth0.100: 10.10.0.4) → VLAN Encap/Decap (VLAN ID: 100) → Firewall (ALLOW: ToS=0xCC) → Ping Responder (IP: 10.10.0.3)
  14. 14. Status checks after VM startup
After the VM startup completes, we can check the status via the libvirt and Xen command line tools on the target compute node.

On the libvirt CLI:

```
$ virsh -c xen:/// list
 Id    Name                                                   State
----------------------------------------------------------------------
 105   vm-clickos-ping2_56c0edb0-5b4c-11e7-ad8f-0cc47a7794be  running
```

On the Xen console:

```
$ sudo xl list
Name                                                   ID    Mem VCPUs State   Time(s)
Domain-0                                                0  10238     8 r-----  96646.2
vm-clickos-ping2_56c0edb0-5b4c-11e7-ad8f-0cc47a7794be 105      8     1 r-----    227.6
```
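The same status check can be scripted. A minimal sketch using the libvirt Python bindings, assuming they are installed on the compute node and that the same Xen connection URI is reachable:

```python
import libvirt  # libvirt-python bindings, assumed installed on the compute node

conn = libvirt.open('xen:///')            # same connection URI used by virsh above
for dom in conn.listAllDomains():
    state, _reason = dom.state()          # state() returns (state, reason)
    running = (state == libvirt.VIR_DOMAIN_RUNNING)
    print(f"{dom.ID():>4} {dom.name()} {'running' if running else state}")
conn.close()
```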
  15. 15. Live Demo
  16. 16. Outline • Superfluidity project goals and approach • Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers) • Unikernel orchestration over OpenStack, OpenVIM and Nomad – Performance evaluation • Extending ETSI NFV Release 2 models (IFA011, IFA014) and OpenVIM to support Unikernels orchestration – Live demo • Details of OpenVIM extensions for Unikernels support (proposal for a patch…) 16
  17. 17. OpenVIM extensions
1. Extension to OpenVIM to support Xen and Unikernel VMs
2. Extension to OpenVIM for a different networking model (multiple OvS bridges)
  18. 18. OpenVIM extension 1 (Xen/Unikernels)
Extension to OpenVIM to support Unikernel VMs:
• Xen hypervisor support
– Unikernel support (in particular, ClickOS Unikernels)
– Full HVM machine support
– Coexistence of Unikernels and HVM VMs on the same compute node
To specify that the Xen hypervisor and Unikernels are used, the configuration is extended by adding new tags in the object descriptor files.
• No changes to the openvim.cfg configuration file. This extension works in "development" mode and in "normal" mode.
  19. 19. OpenVIM extension 1 (Xen/Unikernels)
• The patch extends the behavior of OpenVIM, enabling the support for Xen:
– orchestrate a Unikernel machine such as ClickOS
– orchestrate standard Virtual Machines with the Xen hypervisor
• Backward compatibility with the original OpenVIM modes ("normal", "test", "host only", "OF only", "development") is preserved
• NB: we ran our experiments in "development" mode, because our hardware did not meet all the requirements of the "normal" OpenVIM mode
  20. 20. Extension 1: New descriptor tags
Server (VM) descriptor, new tags:
• hypervisor [kvm|xen-unik|xenhvm] defines which hypervisor is used. "kvm" reflects the original mode, while "xen-unik" and "xenhvm" start Xen with support for Unikernels and full VMs respectively.
• osImageType [clickos] defines the type of Unikernel image to start. It is mandatory if hypervisor = xen-unik. Currently only ClickOS Unikernels are supported, but this tag allows future support of different types of Unikernels.
Host (Compute Node) descriptor, new flag:
• hypervisors (comma separated list of kvm, xen-unik, xenhvm) defines the hypervisors supported by the compute node.
NB: on a compute node, kvm and xen* are mutually exclusive, while xenhvm and xen-unik can coexist (see the validation sketch below).
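A minimal sketch of how the new tags constrain each other, illustrative only; OpenVIM's actual validation code may differ:

```python
# Illustrative constraint check for the new descriptor tags (extension 1).
# This is a sketch, not OpenVIM's actual validation code.
SERVER_HYPERVISORS = {"kvm", "xen-unik", "xenhvm"}
OS_IMAGE_TYPES = {"clickos"}   # only ClickOS is supported for now

def check_server(server: dict) -> None:
    hyp = server.get("hypervisor", "kvm")   # "kvm" preserves the original behavior
    if hyp not in SERVER_HYPERVISORS:
        raise ValueError(f"unknown hypervisor: {hyp!r}")
    if hyp == "xen-unik" and server.get("osImageType") not in OS_IMAGE_TYPES:
        raise ValueError("hypervisor 'xen-unik' requires osImageType: clickos")

def check_host(host: dict) -> None:
    hyps = set(host.get("hypervisors", "kvm").split(","))
    # On a compute node, kvm and the xen* hypervisors are mutually exclusive.
    if "kvm" in hyps and hyps & {"xen-unik", "xenhvm"}:
        raise ValueError("kvm cannot coexist with xen-unik/xenhvm on one host")
```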
  21. 21. Extension 1: Scheduling enhancements
• The Compute Node is now selected based on the available resources AND the type of hypervisor.
• If a specific Compute Node is requested for a Server (using the "hostId" tag), a consistency check between the requested hypervisor type and the hypervisor types supported by the Compute Node is performed. An error is returned if the hypervisor type is not supported (see the sketch below).
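A sketch of the hypervisor-aware placement logic described above (assumed shape, not the actual OpenVIM scheduler code):

```python
# Sketch of hypervisor-aware scheduling (extension 1); the real OpenVIM code differs.
def eligible_hosts(hosts, requested_hyp, host_id=None):
    def supports(host):
        return requested_hyp in host["hypervisors"].split(",")

    if host_id is not None:                      # explicit placement via "hostId"
        host = next(h for h in hosts if h["id"] == host_id)
        if not supports(host):
            raise RuntimeError(f"host {host_id} does not support {requested_hyp}")
        return [host]
    # otherwise: filter by hypervisor type, then pick by available resources
    return [h for h in hosts if supports(h)]
```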
  22. 22. Extension 2: OpenVIM networking enhancements
NB: this extension is independent of the previous one; we used it to support the VNF chaining in the proposed example.
• Networking enhancements
– Additional networking model: a separate OVS datapath (within the same OVS instance) is associated with each OpenVIM network
• It allows transparent L2 networking instead of VLAN-based networking
• It could be extended to work across multiple compute nodes (with VXLAN tunneling)
  23. 23. Extension 2: An additional networking model
[Diagram: on each compute node, every OpenVIM network gets its own Open vSwitch bridge (Bridge 1…M on one node, Bridge α…ω on another) with the VNFs attached to the bridge of their network; corresponding bridges on different compute nodes are connected through a VXLAN tunnel, and one bridge attaches to the external network.]
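To make the model concrete, here is a small sketch of how a per-network bridge (and an optional VXLAN tunnel towards a second compute node) can be created with ovs-vsctl, driven from Python. The "ovim-&lt;network&gt;" bridge naming follows the libvirt XML shown later; everything else is an assumption, not OpenVIM's actual code.

```python
import subprocess

def create_network_bridge(net_name: str, remote_ip: str = None) -> None:
    """Create one dedicated OVS bridge for an OpenVIM network (extension 2 sketch)."""
    br = f"ovim-{net_name}"                     # naming as in the libvirt XML below
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", br], check=True)
    if remote_ip:                               # optional tunnel to another node
        port = f"vxlan-{net_name}"
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", br, port,
                        "--", "set", "interface", port, "type=vxlan",
                        f"options:remote_ip={remote_ip}"], check=True)

create_network_bridge("firewall_ping")          # matches the PoC network name
```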
  24. 24. OpenVIM instantiation sequence
OpenVIM exposes an OpenStack-like REST API on its northbound side. A CLI tool called openvim sends commands over the REST API to the OpenVIM daemon: it converts the YAML descriptor to JSON and sends it via REST (POST create_server). The OpenVIM daemon then generates the libvirt XML descriptor for the compute node.

```
./openvim vm-create clickos-ping.yaml
```
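The YAML-to-JSON-over-REST flow can be reproduced in a few lines of Python. The endpoint path and port below are assumptions, not confirmed by the slides (the CLI hides the actual URL behind openvim vm-create; check your deployment):

```python
import requests   # pip install requests pyyaml
import yaml

with open("clickos-ping.yaml") as f:            # same descriptor used by the CLI
    server = yaml.safe_load(f)

# Hypothetical northbound URL; adjust host, port and path to your deployment.
url = "http://localhost:9080/openvim/servers"
resp = requests.post(url, json=server, headers={"Accept": "application/json"})
print(resp.status_code, resp.text)
```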
  25. 25. OpenVIM Flavor and Image descriptors for a Unikernel
Flavor:

```yaml
flavor:
  name: CloudVM_1C_8M
  description: clickos cloud image with 8M, 1core
  ram: 8
  vcpus: 1
```

```
$ openvim flavor-create flavor_1C_8M.yaml
5a258552-0a51-11e7-a086-0cc47a7794be CloudVM_1C_8M
```

Image:

```yaml
image:
  name: clickos-ping
  description: click-os ping image
  path: /var/lib/libvirt/images/clickos_ping
  metadata:
    use_incremental: "no"
```

```
$ openvim image-create vmimage-clickos-ping.yaml
c418a8ec-10c1-11e7-ad8f-0cc47a7794be clickos-ping
```
  26. 26. An example of Unikernel «Server» descriptor (extension 1)
The hypervisor and osImageType tags are the new ones:

```yaml
server:
  name: vm-clickos-ping2
  description: ClickOS ping vm with simple requisites.
  imageRef: 'c418a8ec-10c1-11e7-ad8f-0cc47a7794be'
  flavorRef: '5a258552-0a51-11e7-a086-0cc47a7794be'
  # hostId: '195d4fb2-54fe-11e7-ad8f-0cc47a7794be'
  start: "yes"
  hypervisor: "xen-unik"
  osImageType: "clickos"
  networks:
  - name: vif0
    uuid: f136bd32-3fd8-11e7-ad8f-0cc47a7794be
    mac_address: "00:15:17:15:5d:74"
```

```
$ openvim vm-create clickos-ping.yaml
56c0edb0-5b4c-11e7-ad8f-0cc47a7794be vm-clickos-ping2 Created
```
  27. 27. «Host» descriptor (extension 1)
The hypervisors field is the new tag:

```json
{
  "host": {
    "name": "nec-test-408-eth3",
    "user": "compute408",
    "password": "*****",
    "ip_name": "10.0.11.2"
  },
  "host-data": {
    "name": "nec-test-408-eth3",
    "ranking": 300,
    "description": "compute host for openvim testing",
    "ip_name": "10.0.11.2",
    "features": "lps,dioc,hwsv,ht,64b,tlbps",
    "hypervisors": "xen-unik,xenhvm",
    "user": "compute408",
    "password": "*****",
    ...
  }
}
```
  28. 28. OpenVIM Network descriptor (Extension 2)
The ovsbr: provider prefix is the new value introduced by extension 2:

```yaml
network:
  name: firewall_ping
  type: bridge_data
  provider: ovsbr:firewall_ping
  enable_dhcp: false
  shared: false
```

```
$ openvim net-create net-firewall_ping.yaml
f136bd32-3fd8-11e7-ad8f-0cc47a7794be firewall_ping ACTIVE
```
  29. 29. Libvirt XML descriptor for ClickOS Unikernel generated by OpenVIM

```xml
<domain type='xen'>
  <name>vm-clickos-ping2_56c0edb0-5b4c-11e7-ad8f-0cc47a7794be</name>
  <uuid>56c0edb0-5b4c-11e7-ad8f-0cc47a7794be</uuid>
  <memory unit='KiB'>8192</memory>
  <currentMemory unit='KiB'>8192</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='xenpv'>xen</type>
    <kernel>/var/lib/libvirt/images/clickos_ping</kernel>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-model'></cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <console type='pty'>
      <target type='xen' port='0'/>
    </console>
    <interface type='bridge'>
      <source bridge='ovim-firewall_ping'/>
      <script path='vif-openvswitch'/>
      <mac address='00:15:17:15:5d:74'/>
    </interface>
  </devices>
</domain>
```
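For completeness, a sketch of how such an XML can be launched through the libvirt Python bindings. OpenVIM's actual compute-node code path may differ, and the file name below is hypothetical:

```python
import libvirt

with open("vm-clickos-ping2.xml") as f:   # hypothetical file holding the XML above
    xml = f.read()

conn = libvirt.open("xen:///")
dom = conn.createXML(xml, 0)              # define and start a transient domain
print(f"domain {dom.name()} started (id {dom.ID()})")
conn.close()
```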
  30. 30. Installation of the extended OpenVIM (R2, R3)
Download the extended version of OpenVIM from our repository:

```
$ git clone https://github.com/superfluidity/openvim4unikernels.git
```

Install OpenVIM via the bash script:

```
openvim/scripts$ ./install-openvim.sh --noclone
```

Our extensions are in the "unikernel" branch:

```
openvim/scripts$ git checkout unikernel
openvim/scripts$ ./unikernels_patch_vim_db.sh -u vim -p vimpw install
```

After updating the database, you can start OpenVIM as usual.
  31. 31. Repository structure
• The Unikernel folder contains some tools and examples that are useful to start working with our patch.
• Descriptors contains some preconfigured ClickOS images and the descriptors to use as examples to start working with Unikernels.
• Docs contains the documentation.
• Scripts contains the bash scripts that update the OpenVIM database to support the new fields for unikernel operations, plus a script for a quick example to start working with ClickOS.
  32. 32. Conclusions – Feedback
• We have designed and implemented a solution for the combined orchestration of regular VMs and Unikernels
• The OpenVIM implementation has been extended; we can propose two patches:
– 1. Extension to support Xen and Unikernels
– 2. Extension for the multiple-OvS-bridges networking model
  33. 33. Thank you. Questions? Contacts Stefano Salsano University of Rome Tor Vergata / CNIT stefano.salsano@uniroma2.it These tools are available on github (Apache 2.0 license) https://github.com/superfluidity/RDCL3D https://github.com/superfluidity/openvim4unikernels https://github.com/netgroup/vim-tuning-and-eval-tools http://superfluidity.eu/ The work presented here only covers a subset of the work performed in the project 33
  34. 34. References
• SUPERFLUIDITY project home page: http://superfluidity.eu/
• G. Bianchi, et al., "Superfluidity: a flexible functional architecture for 5G networks", Transactions on Emerging Telecommunications Technologies 27, no. 9, Sep 2016
• P. L. Ventre, C. Pisa, S. Salsano, G. Siracusano, F. Schmidt, P. Lungaroni, N. Blefari-Melazzi, "Performance Evaluation and Tuning of Virtual Infrastructure Managers for (Micro) Virtual Network Functions", IEEE NFV-SDN Conference, Palo Alto, USA, 7-9 November 2016, http://netgroup.uniroma2.it/Stefano_Salsano/papers/salsano-ieee-nfv-sdn-2016-vim-performance-for-unikernels.pdf
• S. Salsano, F. Lombardo, C. Pisa, P. Greto, N. Blefari-Melazzi, "RDCL 3D, a Model Agnostic Web Framework for the Design and Composition of NFV Services", submitted paper, https://arxiv.org/abs/1702.08242
  35. 35. References – Speed up of virtualization platforms / guests
• Light VM project: http://cnp.neclab.eu/projects/lightvm/
• F. Manco, C. Lupu, F. Schmidt, J. Mendes, S. Kuenzer, S. Sati, K. Yasukata, C. Raiciu, F. Huici, "My VM is Lighter (and Safer) than your Container", SOSP 2017
• J. Martins, M. Ahmed, C. Raiciu, V. Olteanu, M. Honda, R. Bifulco, F. Huici, "ClickOS and the art of network function virtualization", NSDI 2014, 11th USENIX Conference on Networked Systems Design and Implementation, 2014
• F. Manco, J. Martins, K. Yasukata, J. Mendes, S. Kuenzer, F. Huici, "The Case for the Superfluid Cloud", 7th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 15), 2015
  36. 36. References – Unikraft project
• http://cnp.neclab.eu/projects/unikraft/
• https://www.xenproject.org/developers/teams/unikraft.html
"The fundamental drawback of unikernels is that they require that applications be manually ported to the underlying minimalistic OS (e.g. having to port nginx, snort, mysql or memcached to MiniOS or OSv); this requires both expert work and often considerable amount of time. In essence, we need to pick between either high performance with unikernels, or no porting effort but decreased performance and decreased efficiency with standard OS/VM images. The goal of this proposal is to change this status quo by providing a highly configurable unikernel code base; we call this base Unikraft."
  37. 37. Background information
  38. 38. Outline • Superfluidity project goals and approach • Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers) • Unikernel orchestration over OpenStack, OpenVIM and Nomad – Performance evaluation • Extending ETSI NFV Release 2 models (NFV-IFA 011&014) and OpenVIM to support Unikernels orchestration – Live demo • Details of OpenVIM extensions for Unikernels support 38
  39. 39. Unikernels: a tool for superfluid virtualization
• Containers (e.g. Docker): lightweight (not enough?), poor isolation
• Hypervisors, i.e. traditional VMs (e.g. Xen, KVM, VMware…): strong isolation, heavyweight
• Unikernels, i.e. specialized VMs (e.g. MiniOS, ClickOS…): strong isolation, very lightweight, very good security properties
They break the "myth" of VMs being heavyweight…
  40. 40. What is a Unikernel?
• Specialized VM: a single application plus a minimalistic OS (e.g. MiniOS, OSv)
• Single address space and a co-operative scheduler, hence low overheads
• Unikernel virtualization platforms extend existing hypervisors (e.g. Xen)
[Diagram: a general purpose OS (e.g. Linux, FreeBSD) runs apps 1…N in user space over kernel-space drivers 1…N; a Unikernel runs one app and its virtual drivers in a single address space.]
  41. 41. ClickOS Unikernel
• The ClickOS Unikernel combines:
– the Click modular router: a software architecture to build flexible and configurable routers
– MiniOS: a minimalistic Unikernel OS available with the Xen sources
• ClickOS VMs
– are small: ~6MB
– boot quickly: ~ a few ms
– add little delay: ~45µs
– support ~10Gb/s throughput for almost all packet sizes
  42. 42. Unikernels (ClickOS) memory footprint and boot time
VM configuration: MiniOS, 1 VCPU, 8MB RAM, 1 VIF
• Memory footprint: "Hello world" guest VM: 296 KB; Ponger (ping responder) guest VM: ~700 KB
• Boot time: 87.77 ms (state of the art results); 4 ms (recent results from Superfluidity, obtained by redesigning the Xen toolstack)
  44. 44. VM instantiation and boot time: typical performance (no Unikernels)
After the orchestrator request, roughly: VIM operations take 1-2 s, the virtualization platform operations take 5-10 s, and the guest OS (VM) boot takes ~1 s.
  45. 45. VM instantiation and boot time with Unikernels
VIM operations still take 1-2 s, but Xen hypervisor enhancements bring the virtualization platform operations down to ~1 ms, and Unikernels bring the guest boot time down to ~1 ms. Unikernels and the hypervisor can provide low instantiation times for "Micro-VNFs". Can we improve VIM performance?
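As a rough illustration of how instantiation time can be sampled from the outside (this is not the measurement methodology of the cited paper, which instruments the VIMs directly): timestamp the create request and poll until the domain shows up in xl list.

```python
import subprocess
import time

# Rough external measurement sketch; descriptor and VM names from the PoC above.
t0 = time.monotonic()
subprocess.run(["./openvim", "vm-create", "clickos-ping.yaml"], check=True)
while "vm-clickos-ping2" not in subprocess.run(
        ["sudo", "xl", "list"], capture_output=True, text=True).stdout:
    time.sleep(0.001)
print(f"instantiation took {time.monotonic() - t0:.3f} s")
```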
  47. 47. Outline • Superfluidity project goals and approach • Unikernels and their orchestration using VIMs (Virtual Infrastructure Managers) • Unikernels orchestration over OpenStack, OpenVIM and Nomad – Performance evaluation • Extending ETSI NFV Release 2 models (IFA011, IFA014) and OpenVIM to support Unikernels orchestration – Live demo • Details of OpenVIM extensions for Unikernels support (proposal for a patch…) 48
  48. 48. Performance analysis and tuning of Virtual Infrastructure Managers (VIMs) for Unikernel VNFs
We considered 3 VIMs (OpenStack, Nomad, OpenVIM):
– a general model of the VNF instantiation process, and a mapping of the operations of the 3 VIMs onto the general model
– (quick & dirty) modifications to the VIMs to instantiate Micro-VNFs based on the ClickOS Unikernel
– a performance evaluation
  49. 49. Virtual Infrastructure Managers (VIMs)
We considered three VIMs:
• OpenStack Nova
– OpenStack is composed of subprojects
– Nova handles the orchestration and management of computing resources (this is the VIM)
– 1 Nova node (scheduling) + several compute nodes (which interact with the hypervisor)
– Not tied to a specific virtualization technology
• Nomad by HashiCorp
– Minimalistic cluster manager and job scheduler
– Nomad server (scheduling) + Nomad clients (which interact with the hypervisor)
– Not tied to a specific virtualization technology
• OpenVIM
– NFV-specific VIM, originally developed by the OpenMANO open source project, now maintained in the context of ETSI OSM
  50. 50. Results – ClickOS instantiation times (OpenStack, Nomad, OpenVIM)
[Charts: distributions of ClickOS instantiation times, in seconds, for OpenStack Nova, Nomad and OpenVIM]
  51. 51. The SUPERFLUIDITY project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No.671566 (Research and Innovation Action). The information given is the author’s view and does not necessarily represent the view of the European Commission (EC). No liability is accepted for any use that may be made of the information contained. 53
