This is my latest OpenStack Networking presentation. I presented it at OSDC 2014. It includes a lot of backup slides with CLI outputs that show how ML2 with the OVS agent creates GRE-based overlay networks and logical routers.
This presentation, for a talk at LinuxTag 2014, has a couple of new slides compared to earlier presentations that explain some different networking models, such as flat, VLAN-based, 'SDN fabric based', etc.
OpenStack Networking 101 Update 2014 – OpenStack Meetup (yfauser)
This is the latest update to my OpenStack Networking / Neutron 101 slides, with some more information and caveats on the new DVR and gateway HA features.
This presentation was shown at the OpenStack Online Meetup session on August 28, 2014. It is an update to the 2013 sessions, and adds content on the services plugin and modular plugins, as well as an outlook on some Juno features such as DVR, HA, and IPv6 support.
This was a tutorial that Mark McClain and I led at ONUG, Spring 2015. It was well received and serves as a walkthrough of OpenStack Neutron and its features and usage.
Quantum (OpenStack Meetup, Feb 9th, 2012) – Dan Wendlandt
This is a talk I gave on Quantum at the Bay Area OpenStack Meetup on Feb 9th, 2012.
I added a few slides to try to address some of the questions people had during the talk.
While every new release of OpenStack offers improvements in functionality and the user experience, one thing’s for certain: troubleshooting is hard if you don’t know where to start.
Join us as we cover some common and not-so-common issues with Nova and Neutron that lead to some of our favorite error messages, including “No valid host was found”. Participants will learn basic troubleshooting procedures, including tips, tricks, and processes of elimination, to get their cloud back on track.
Overview of OpenStack nova-networking evolution towards Neutron. Architecture overview of OVS plugin, ML2, and MidoNet Overlay product. Overview and example of Heat templates, along with automation of physical switches using Cumulus
DevOops - Lessons Learned from an OpenStack Network Architect – James Denton
Join us as we discuss various OpenStack Neutron network configuration options and issues experienced with VLAN, VXLAN, L2 population, multicast, Neutron routers, Open vSwitch, and more.
2014 OpenStack Summit - Neutron OVS to LinuxBridge Migration – James Denton
Presentation titled 'Migrating production workloads from OVS to LinuxBridge'. Presented at the Fall 2014 OpenStack Summit in Paris, this slide deck introduced the possibility of migrating live workloads from Open vSwitch to LinuxBridge with minimal downtime.
Openstack Networking Internals - Advanced Part
The pictures of the VNI were taken with the "Show my network state" tool
https://sites.google.com/site/showmynetworkstate/
These slides were created to make Open vSwitch easier to understand, so I tried to keep them practical: if you follow this scenario, you will gain some working knowledge of OVS.
In this document I mainly use just two commands, "ip" and "ovs-vsctl", to show you what they can do.
Control Your Network ASICs, What Benefits switchdev Can Bring Us – HungWei Chiu
In these slides, I introduce switchdev and the problem it is designed to solve. To this day, most hardware switch application-specific integrated circuits (ASICs) can only be controlled through the vendor's proprietary binary SDK, which is inconvenient for system administrators and developers. switchdev was designed to break this chip-vendor lock-in: with its help, we can develop a generic solution for hardware switch chips and cut the dependency on the vendor's binary-blob SDK.
In other words, the Linux kernel can now communicate directly with the vendor's proprietary ASIC, and software programmers and system administrators can easily control that ASIC to provide more flexible, powerful, and programmable network functions.
Pushing Packets - How do the ML2 Mechanism Drivers Stack Up – James Denton
Architecting a private cloud to meet the use cases of its users can be a daunting task. How do you determine which of the many L2/L3 Neutron plugins and drivers to implement? Does network performance outweigh reliability? Are overlay networks just as performant as VLAN networks? The answers to these questions will drive the appropriate technology choice.
In this presentation, we will look at many of the common drivers built around the ML2 framework, including LinuxBridge, OVS, OVS+DPDK, SR-IOV, and more, and will provide performance data to help drive decisions around selecting a technology that's right for the situation. We will discuss our experience with some of these technologies, and the pros and cons of one technology over another in a production environment.
This presentation covers the basics of Open vSwitch and its components. Open vSwitch is an open-source, OpenFlow-capable virtual switch created by the Nicira team.
It also talks about Open vSwitch and its role in OpenStack Networking.
Openstack Networking Internals - first part – lilliput12
Openstack Networking Internals - first part
Description of the Virtual Network Infrastructure inside an OpenStack cluster
The pictures of the VNI were taken with the "Show my network state" tool
https://sites.google.com/site/showmynetworkstate/
OVHcloud Hosted Private Cloud Platform Network use cases with VMware NSX – OVHcloud
In this workshop VMware will provide a quick reminder of the main contributions of the NSX network virtualization platform: consistent network and security management, increased application resiliency, rapid migration of workloads to and from the cloud.
VMware and OVH will then move on to practical cases with implementation of micro-segmentation, dynamic routing, automatic deployment of an application, load balancing in the OVH Hosted Private Cloud. This workshop is aimed at a technical audience.
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a... – VMworld
VMworld 2013
Richard Cockett, VMware
Umesh Goyal, VMware Software India Pvt ltd
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
OpenStack and OpenContrail for FreeBSD platform by Michał Dubiel – eurobsdcon
Abstract
OpenStack and the OpenContrail network virtualization solution form a complete suite able to successfully handle orchestration of the resources and services of contemporary cloud installations. These projects, however, have until now been available only for Linux-hosted platforms. This talk is about work underway to bring them into the FreeBSD world.
It explains in greater detail the architecture of an OpenStack system and shows how support for the FreeBSD bhyve hypervisor was brought up using the libvirt library. Details of the OpenContrail network virtualization solution are also provided, with special emphasis on lower-level system entities like the vRouter kernel module, which required most of the work while developing the FreeBSD version.
Speaker bio
Michal Dubiel, M.Sc. Eng., born 17th of September 1983 in Kraków, Poland. He graduated in 2009 from the faculty of Electrical Engineering, Automatics, Computer Science and Electronics of AGH University of Science and Technology in Kraków. Throughout his career he worked for ACK Cyfronet AGH on hardware-accelerated data mining systems and later for Motorola Electronics on DSP software for LTE base stations. Currently he is working for Semihalf on various software projects ranging from low level kernel development to Software Defined Networking systems. He is mainly interested in the computer science, especially the operating systems, programming languages, networks, and digital signal processing.
Scaling OpenStack Networking Beyond 4000 Nodes with Dragonflow - Eshed Gal-Or... – Cloud Native Day Tel Aviv
As OpenStack matures, more users move from "dipping a toe" to deploying at large scale, with thousands of nodes.
OpenStack networking has long been a limiting factor in scaling beyond a few hundred nodes, forcing users to turn to cell splitting, or to completely offload networking to the underlay systems and forfeit the overlay network altogether.
Dragonflow is a fully distributed, open-source SDN implementation of Neutron that handles large-scale deployments without splitting into cells.
In testing we've conducted, we were able to scale to 4000+ controllers (each controller is typically deployed on a compute node) while maintaining the same performance we had on a small 30-node environment.
Nuage Arista Hardware VTEP: demoing the integration of an Arista switch into Nuage VSP and an automatic way of building VXLAN tunnels from virtual to bare-metal infrastructure.
Flexible NFV WAN interconnections with Neutron BGP VPN – Thomas Morin
[talk given during the OpenStack Summit, May 2018 in Vancouver, BC]
Telcos use OpenStack to deploy virtualized network functions, and have specific requirements for interconnecting these OpenStack deployments with their backbones and mobile backhaul networks. In particular, these interconnections need to involve dynamic routing and interconnection with operators' internal VPNs.
This talk will explain the role that the networking-bgpvpn Neutron Stadium project plays to address this need, from the basics of the BGPVPN Interconnection API, to more advanced uses made possible by evolutions of this API delivered in Queens.
The more interesting use cases will be the occasion for a step-by-step demo.
We'll give a status of where the project stands today in terms of feature coverage, look at the set of SDN controllers providing an implementation for this API beyond the implementation in reference drivers, and last, look at the future of the project.
In this talk Jiří Pírko discusses the design and evolution of the VLAN implementation in Linux, the challenges and pitfalls as well as hardware acceleration and alternative implementations.
Jiří Pírko is a major contributor to kernel networking and the creator of libteam for link aggregation.
4. OpenStack Networking before Neutron - Refresher
§ Nova has its own networking service – nova-network. It was used before Neutron
§ Nova-network is still present today, and can be used instead of Neutron
§ Nova-network does
§ base L2 network provisioning through Linux Bridge (brctl)
§ IP address management for tenants (in SQL DB)
§ configure DHCP and DNS entries in dnsmasq
§ configure fw-policies and NAT in iptables (nova-compute)
§ Calls to network services are done through the nova API
§ Nova-network only knows 3 basic network models:
§ Flat & Flat DHCP – direct bridging of instances to an external ethernet interface, with and without DHCP
§ VLAN based – every tenant gets a VLAN, DHCP enabled
[Diagram: Nova architecture – nova-api (OS, EC2, Admin), nova-compute, nova-scheduler, nova-console (vnc/vmrc), nova-consoleauth, nova-cert, nova-metadata, nova-volume and nova-network around the queue and the Nova DB; nova-compute drives the hypervisor (KVM, Xen, etc.) via libvirt, XenAPI, etc.; nova-volume uses a volume provider (iSCSI, LVM, etc.); nova-network uses network providers (Linux Bridge or OVS with brcompat, dnsmasq, iptables)]
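The "IP address management for tenants (in SQL DB)" bullet can be sketched in a few lines of Python. This is a deliberately toy model, not nova's actual schema or allocation logic (the table name and columns are invented):

```python
import sqlite3
import ipaddress

# A toy version of nova-network's SQL-backed IPAM: pre-populate a table with
# every usable address of the fixed network, then hand them out one at a time.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fixed_ips (address TEXT PRIMARY KEY, instance TEXT)")
for ip in ipaddress.ip_network("10.0.0.0/29").hosts():
    db.execute("INSERT INTO fixed_ips VALUES (?, NULL)", (str(ip),))

def allocate(instance_id):
    # Grab the first free address and mark it as used, like nova-network's
    # fixed-IP allocation (greatly simplified: no reservations, no leases).
    row = db.execute(
        "SELECT address FROM fixed_ips WHERE instance IS NULL LIMIT 1"
    ).fetchone()
    if row is None:
        raise RuntimeError("fixed network exhausted")
    db.execute("UPDATE fixed_ips SET instance = ? WHERE address = ?",
               (instance_id, row[0]))
    return row[0]

print(allocate("vm-1"))  # 10.0.0.1
print(allocate("vm-2"))  # 10.0.0.2
```

The point is only that the source of truth is a database table, not any network device state.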
5. Nova-Networking deployment modes - Flat
§ In flat mode all VMs are patched into the same bridge (normally the Linux Bridge)
§ All VM traffic is directly bridged onto the physical transport network (or a single VLAN), aka the 'fixed network'
§ DHCP and default gateway are provided externally; this is not done using OpenStack components
§ All VMs in a project are bridged to the same network; there is no multi-tenancy beside security groups (iptables between VM interfaces and the bridge)
[Diagram: three compute nodes, each with nova-compute, a hypervisor, an IP stack and "Bridge 100" carrying the VMs; all bridges attach to a shared transport network (or VLAN) served by an external DHCP server, alongside the management network (or VLAN) and the WAN/Internet]
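Since security groups are the only isolation in flat mode, it helps to see what "iptables between VM interfaces and bridge" amounts to. A hedged sketch that renders a security group into rule strings (the chain naming and rule layout here are illustrative, not nova's exact chains):

```python
# Render a security group into per-VM iptables rule strings, roughly what
# nova-network installs between each VM interface and the bridge.
# Chain name "nova-inst-<vif>" is a made-up convention for this sketch.
def render_rules(vif, allowed):
    rules = [f"-A nova-inst-{vif} -m state --state ESTABLISHED,RELATED -j ACCEPT"]
    for proto, port, cidr in allowed:
        # One ACCEPT rule per (protocol, port, source CIDR) tuple in the group
        rules.append(
            f"-A nova-inst-{vif} -p {proto} --dport {port} -s {cidr} -j ACCEPT"
        )
    rules.append(f"-A nova-inst-{vif} -j DROP")  # default deny
    return rules

for r in render_rules("vnet0", [("tcp", 22, "0.0.0.0/0"), ("tcp", 80, "10.0.0.0/24")]):
    print(r)
```

Every VM shares the L2 segment; only these per-interface filters separate tenants.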
6. Nova-Networking deployment modes – Flat / DHCP
§ As in flat mode, all VMs are patched into the same bridge and all VM traffic is directly bridged onto the physical transport network (or a single VLAN) – aka the 'fixed network'
§ DHCP and default gateway are provided by OpenStack Networking – through 'dnsmasq' (DHCP) and the iptables/routing stack + NAT / floating IPs
§ All VMs in a project are bridged to the same network; there is no multi-tenancy beside security groups (iptables between VM interfaces and the bridge)
[Diagram: a combined compute + networking node running nova-network, dnsmasq and nova-compute, providing NAT & floating IPs via iptables/routing towards the external network (or VLAN) and the WAN/Internet; two further compute nodes, each with nova-compute, a hypervisor, an IP stack and "Bridge 100" carrying the VMs, attach to the internal network (or VLAN)]
* With 'multi-host', each compute node will also be a networking node
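The "NAT / floating IPs" piece on the networking node boils down to a 1:1 DNAT/SNAT pair per floating IP. A minimal sketch (the rule strings are illustrative, not nova's exact chain layout):

```python
# Floating IPs in nova-network are just 1:1 NAT rules on the networking node:
# inbound traffic to the public address is DNATed to the fixed address, and
# outbound traffic from the fixed address is SNATed back to the public one.
def floating_ip_rules(floating, fixed):
    return [
        f"-t nat -A PREROUTING -d {floating} -j DNAT --to-destination {fixed}",
        f"-t nat -A POSTROUTING -s {fixed} -j SNAT --to-source {floating}",
    ]

# Hypothetical addresses for illustration:
for rule in floating_ip_rules("203.0.113.10", "10.0.0.5"):
    print(rule)
```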
7. Nova-Networking deployment modes – VLAN
§ Unlike the flat modes, each project has its own network, which maps to a VLAN and bridge that need to be pre-configured on the physical network
§ VM traffic is bridged through one bridge and VLAN per project onto the physical network
§ DHCP and default gateway are provided by OpenStack Networking – through 'dnsmasq' (DHCP) and the iptables/routing stack + NAT / floating IPs
[Diagram: a combined compute + networking node running nova-network, one dnsmasq per project and nova-compute, providing NAT & floating IPs via iptables/routing towards the external network (or VLAN) and the WAN/Internet; two further compute nodes each carry per-project bridges (Br 30, Br 40) on VLAN sub-interfaces (VLAN30, VLAN40) feeding a VLAN trunk of internal VLANs]
* With 'multi-host', each compute node will also be a networking node
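The per-project VLAN+bridge mapping can be pictured as a tiny allocator. This is a toy sketch (the class name and VLAN range are invented; real nova-network takes its VLAN range from configuration):

```python
# In VLAN mode every project gets its own VLAN + bridge. A toy allocator that
# hands out VLAN IDs from a pre-configured range.
class VlanAllocator:
    def __init__(self, first=30, last=4094):
        self.next_vlan = first
        self.last = last
        self.by_project = {}

    def vlan_for(self, project):
        # Idempotent: a project keeps its VLAN once assigned.
        if project not in self.by_project:
            if self.next_vlan > self.last:
                raise RuntimeError("VLAN range exhausted")
            self.by_project[project] = self.next_vlan
            self.next_vlan += 1
        return self.by_project[project]

alloc = VlanAllocator()
print(alloc.vlan_for("project-a"))  # 30 -> bridge "Br 30" on sub-interface VLAN30
print(alloc.vlan_for("project-b"))  # 31
print(alloc.vlan_for("project-a"))  # 30 again
```

The 12-bit VLAN ID space (at most ~4094 usable IDs) is also why this model caps the number of tenant networks.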
10. Neutron – Open Source OVS Plugin Architecture
§ The following components play a role in the open source OVS plugin architecture:
§ Neutron-OVS-Agent: receives tunnel & flow setup information from the OVS plugin and programs OVS to build tunnels and to steer traffic into those tunnels
§ Neutron-DHCP-Agent: sets up dnsmasq in a namespace per configured network/subnet, and enters the MAC/IP combination in the dnsmasq DHCP lease file
§ Neutron-L3-Agent: sets up iptables/routing/NAT tables (routers) as directed by the OVS plugin
§ In most cases GRE overlay tunnels are used, but flat and VLAN modes are also possible
[Diagram: a Neutron network node runs the L3 agent (NAT & floating IPs via iptables/routing towards the external network (or VLAN) and the WAN/Internet), the DHCP agent (dnsmasq instances), and the OVS agent with ovsdb/ovs-vswitchd, br-int, br-tun and br-ex; compute nodes run nova-compute, the OVS agent, ovsdb/ovs-vswitchd, br-int and br-tun over the hypervisor; the Neutron server hosts the OVS plugin; L2-in-L3 (GRE) tunnels run between the br-tun bridges across the layer 3 transport network]
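One practical consequence of GRE overlays is a reduced guest MTU: each tunnelled frame carries an extra outer IP header plus a GRE header (with a key field when a tunnel/segmentation ID is carried, as OVS tunnels do). A quick calculation:

```python
# GRE overlays shrink the usable guest MTU: each tunnelled packet gains an
# outer IPv4 header (20 bytes) and a GRE header (4 bytes base, plus 4 bytes
# for the key field when a tunnel/segmentation ID is carried).
def gre_guest_mtu(physical_mtu, with_key=True):
    overhead = 20 + 4 + (4 if with_key else 0)
    return physical_mtu - overhead

print(gre_guest_mtu(1500))  # 1472 -> why deployments lower the guest MTU
print(gre_guest_mtu(9000))  # 8972 -> or run jumbo frames on the transport net
```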
11. Open Source OVS Plugin / VMware NSX Plugin differences
§ With the VMware NSX plugin (aka NVP plugin), the following services are replaced by VMware NSX components:
§ OVS-Plugin: the OVS plugin is exchanged for the NVP plugin
§ Neutron-OVS-Agent: instead of the OVS agent, a centralized NVP controller cluster is used
§ Neutron-L3-Agent: instead of the L3 agent, a scale-out cluster of NVP layer 3 gateways is used
§ IPTables/Ebtables: security is provided by native Open vSwitch methods, controlled by the NVP controller cluster
§ GRE tunneling is exchanged for the better-performing STT technology
[Diagram: the same topology as the open source OVS plugin architecture, with the Neutron server now hosting the NVP plugin; the network node (L3 agent with NAT & floating IPs, DHCP agent with dnsmasq, OVS agent, ovsdb/ovs-vswitchd, br-int, br-tun, br-ex) and the compute nodes (nova-compute, hypervisor, ovsdb/ovs-vswitchd, br-int, br-tun) are connected by L2-in-L3 tunnels across the layer 3 transport network]
12. OpenVSwitch with VMware NSX
[Diagram: the NSX controller cluster reaches each host over the management network (eth0), using OpenFlow (TCP 6633) towards ovs-vswitchd and OVSDB (TCP 6632) towards ovsdb-server; in user space, ovsdb-server holds the config/state DB and ovs-vswitchd holds the br-int flow table; in the kernel, the Linux IP stack + routing table (192.168.10.1) connects through br-0, which carries the flows & tunnel ports, out to the transport network via eth1; WEB and APP VMs attach to br-int]
13. Open Source OVS Plugin / VMware NSX Plugin differences
§ A centralized scale-out controller cluster controls all Open vSwitches in all compute and network nodes. It configures the tunnel interfaces and programs the flow tables of OVS
§ The NSX L3 gateway service (scale-out) takes over the L3 routing and NAT functions
§ The NSX service node relieves the compute nodes from the task of replicating broadcast, unknown unicast and multicast traffic sourced by VMs
§ Security groups are implemented natively in OVS, instead of through iptables/ebtables
[Diagram: the Neutron server hosts the NVP plugin; the NSX controller cluster manages, over the management network, the network node (DHCP agent with dnsmasq, ovsdb/ovs-vswitchd, br-int, br-0), two compute nodes (nova-compute, hypervisor, ovsdb/ovs-vswitchd, br-int, br-0), the NSX L3 gateway (+ NAT towards the WAN/Internet) and the NSX service node; L2-in-L3 (STT) tunnels run across the layer 3 transport network]
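The service node's value is easy to quantify: without it, a compute node must head-end-replicate each BUM (broadcast, unknown unicast, multicast) frame to every other tunnel endpoint. A toy comparison (the function name and model are mine, not NSX terminology):

```python
# Head-end replication cost for one BUM frame sourced by a VM:
# without offload, the source compute node sends one copy per remote tunnel
# endpoint; with a service node, it sends a single copy and the service node
# fans it out.
def replication_load(n_nodes, offloaded):
    # copies the source compute node itself must transmit
    return 1 if offloaded else n_nodes - 1

print(replication_load(100, offloaded=False))  # 99 copies from the source node
print(replication_load(100, offloaded=True))   # 1 copy, to the service node
```

This is why replication offload matters more as the node count grows.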
16. Management & Operations – Software Upgrades
§ Automated deployment of new versions
§ Built-in compatibility verification
§ Rollback
§ Online upgrade (i.e. dataplane & control plane services stay up)
17. Nova Metadata Service in Folsom
§ Nova-metadata is used to enable the use of cloud-init enabled images (https://help.ubuntu.com/community/CloudInit)
§ After getting an IP address, the instance contacts the well-known IP 169.254.169.254 via HTTP and requests the metadata it needs
• Some of the things cloud-init configures are:
• setting a default locale, hostname, etc.
• setting up ephemeral mount points
• generating ssh private keys, and adding ssh keys to the user's .ssh/authorized_keys so they can log in
§ With Neutron in Folsom, the quantum-dhcp-agent will do the following:
§ provide option 121 "classless static routes" – adds a static route to 169.254.169.254 pointing to the dhcp-agent host itself
§ iptables on the dhcp-agent host NATs the request either to the local metadata server on the dhcp-agent host, or to a remote metadata service
§ !! Caveat: in Folsom there is no support for overlapping IPs, and no support for namespaces if nova-metadata is used. In Grizzly this will change (see next slide)
[Diagram: an instance in the tenant network sends an HTTP request to 169.254.169.254 with next-hop = the quantum-dhcp-agent IP; the dhcp-agent host NATs it to the local nova-metadata service or forwards it to a remote one]
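Option 121 routes are encoded per RFC 3442: one byte of prefix length, only the significant octets of the destination, then the next-hop address. Encoding the metadata route by hand (the dhcp-agent address 10.0.0.2 is a made-up example):

```python
import ipaddress

# DHCP option 121 (RFC 3442 classless static routes) is how the Folsom-era
# dhcp-agent points 169.254.169.254 at itself. One route descriptor is:
# <prefix length byte> <significant destination octets> <next-hop octets>
def option_121(dest_cidr, next_hop):
    net = ipaddress.ip_network(dest_cidr)
    significant = (net.prefixlen + 7) // 8  # only significant octets are sent
    return (bytes([net.prefixlen])
            + net.network_address.packed[:significant]
            + ipaddress.ip_address(next_hop).packed)

payload = option_121("169.254.169.254/32", "10.0.0.2")
print(payload.hex())  # 20a9fea9fe0a000002
```

A /32 destination costs the full 4 octets; shorter prefixes are truncated, which is the whole point of the "classless" encoding.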
18. Nova Metadata Service in Grizzly
§ To address the limitations of nova-metadata in Folsom, the Grizzly release introduces two new services on the network node: quantum-ns-metadata-proxy and quantum-metadata-proxy (http://tinyurl.com/a3n4ypl for details)
§ In Grizzly, DHCP option 121 is not used anymore. The L3 gateway routes the request to 169.254.169.254 to the ns-metadata-proxy
§ The ns-metadata-proxy parses the request and forwards it internally to the metadata-proxy with two new headers: 'X-Forwarded-For' and 'X-Quantum-Router-ID'. These headers provide context to properly identify the instance that made the original request. Only the metadata-proxy can reach hosts on the management network
§ The metadata-proxy uses the two headers to retrieve the device-id of the port that sent the request by interrogating the quantum server
§ The metadata-proxy uses the device-id received from quantum-server to construct the 'X-Instance-ID' header, and sends the request to nova-metadata including this information
§ Nova-metadata then uses the 'X-Instance-ID' header to identify the tenant, and to properly service the request
[Diagram: on the network node, the quantum-ns-metadata-proxy runs inside the tenant router network namespace and talks to the quantum-metadata-proxy via a UNIX domain socket; the metadata-proxy reaches quantum-server and nova-metadata on a node in the management network]
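The Grizzly metadata path is essentially header enrichment at each hop, which can be traced with a toy model (the lookup table and function bodies are invented for illustration; the real proxies speak HTTP, with the two of them connected over a UNIX domain socket):

```python
# quantum-server's (mocked) view: (router, fixed IP) -> port device-id
ports = {("router-1", "10.0.0.5"): "instance-abc"}

def ns_metadata_proxy(src_ip, router_id):
    # Runs inside the tenant router namespace: tags the request with headers
    # identifying who asked and through which router.
    return {"X-Forwarded-For": src_ip, "X-Quantum-Router-ID": router_id}

def metadata_proxy(headers):
    # Runs on the network node with management-network access: resolves the
    # port's device-id via quantum-server and rewrites the request so
    # nova-metadata can identify the instance.
    device_id = ports[(headers["X-Quantum-Router-ID"], headers["X-Forwarded-For"])]
    return {"X-Instance-ID": device_id,
            "X-Forwarded-For": headers["X-Forwarded-For"]}

hdrs = metadata_proxy(ns_metadata_proxy("10.0.0.5", "router-1"))
print(hdrs["X-Instance-ID"])  # instance-abc
```

The namespace boundary is what makes overlapping tenant IPs safe: the source IP alone is ambiguous, so the router ID is needed to disambiguate it.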