Three-Node Architecture with Neutron
• Management network. Used for internal
communication between OpenStack services.
• Internal network. Used for VM data
communication within the cloud
• External network. Used to provide VMs
with Internet access.
• Controller Node: Controller node contains
all OpenStack API services.
• Network Node: Network node contains
DHCP server and virtual routing.
• Compute Node: Compute node contains the
compute service and the Neutron plugin agent.
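The node-to-network and node-to-service mapping above can be summarized as a small data structure. This is purely illustrative; the exact set of services per node varies by deployment:

```python
# Illustrative summary of the three-node layout described above.
# Service names follow OpenStack conventions, but the exact layout
# differs between deployments.
NODES = {
    "controller": {
        "networks": ["management"],
        "services": ["nova-api", "nova-scheduler", "neutron-server"],
    },
    "network": {
        "networks": ["management", "internal", "external"],
        "services": ["neutron-dhcp-agent", "neutron-l3-agent"],
    },
    "compute": {
        "networks": ["management", "internal"],
        "services": ["nova-compute", "neutron-plugin-agent"],
    },
}

def nodes_running(service):
    """Return the nodes on which a given service runs."""
    return [name for name, info in NODES.items()
            if service in info["services"]]

print(nodes_running("nova-compute"))  # -> ['compute']
```

Note that only the network node touches the external network, matching its role as the gateway for VM Internet access.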
The core components of Nova include the following:
• The nova-api service accepts and responds to end-
user compute API calls. It also initiates
most of the orchestration activities (such as
running an instance) as well as enforcing some policies.
• The nova-compute process is primarily a
worker daemon that creates and
terminates virtual machine instances via
hypervisor APIs (XenAPI for
XenServer/XCP, libvirt for KVM or QEMU,
VMwareAPI for vSphere, etc.).
• The nova-scheduler process is conceptually
the simplest piece of code in OpenStack
Nova: it takes a virtual machine instance
request from the queue and determines
where it should run (specifically, which
compute node it should run on).
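The take-from-queue-and-place idea can be sketched in a few lines. This is a toy stand-in, assuming a single free-RAM criterion; the real nova-scheduler uses a configurable pipeline of filters and weighers:

```python
import queue

# Hypothetical compute-node inventory: free RAM in MB.
hosts = {"compute1": 4096, "compute2": 8192, "compute3": 2048}

requests = queue.Queue()
requests.put({"name": "vm-1", "ram_mb": 2048})

def schedule(req):
    """Pick the host with the most free RAM that fits the request
    (a toy stand-in for nova-scheduler's filter/weigh pipeline)."""
    candidates = {h: free for h, free in hosts.items()
                  if free >= req["ram_mb"]}
    if not candidates:
        raise RuntimeError("No valid host found")
    chosen = max(candidates, key=candidates.get)
    hosts[chosen] -= req["ram_mb"]   # claim the resources on that host
    return chosen

req = requests.get()
print(schedule(req))  # -> compute2, the host with the most free RAM
```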
• plugin agent (quantum-*-agent): Runs
on each hypervisor to perform local
vswitch configuration. The agent to run
depends on which plugin you are using,
as some plugins do not require an agent.
• dhcp agent (quantum-dhcp-agent):
Provides DHCP services to
tenant networks. This agent is the same
across all plugins.
• l3 agent (quantum-l3-agent): Provides
L3/NAT forwarding to provide external
network access for VMs on tenant
networks. This agent is the same across all plugins.
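The DHCP agent's core job, handing out addresses from a tenant subnet, can be sketched with the standard `ipaddress` module. This is a simplification (the real agent drives dnsmasq), and the CIDR and MACs are made up for illustration:

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/24")
# Reserve the gateway (.1), mirroring a typical Neutron allocation
# pool that starts after the gateway address.
reserved = {subnet.network_address + 1}
leases = {}  # MAC address -> IP address

def lease(mac):
    """Give each MAC a stable address from the pool (toy DHCP)."""
    if mac in leases:
        return leases[mac]
    for ip in subnet.hosts():          # .1 through .254
        if ip not in reserved and ip not in leases.values():
            leases[mac] = ip
            return ip
    raise RuntimeError("allocation pool exhausted")

print(lease("fa:16:3e:00:00:01"))  # -> 10.0.0.2, the first free address
```

Repeated calls with the same MAC return the same address, which is the stable-lease behavior tenant VMs rely on.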
Use Case: Per-tenant Routers with Private Networks
A more advanced router scenario in which
each tenant gets at least one router, and
potentially has access to the OpenStack
Networking API to create additional routers.
The tenant can create their own networks,
potentially uplinking those networks to a
router. This model enables tenant-defined
multi-tier applications, with each tier being
a separate network behind the router. Because
there are multiple routers, tenant subnets
can overlap without conflicting,
since all access to external networks
happens via SNAT or floating IPs. Each
router uplink and floating IP is allocated
from the external network subnet.
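The overlapping-subnet point can be illustrated with the `ipaddress` module: two tenants reuse the same private CIDR behind their own routers, and only the external-network addresses must be unique. All CIDRs and addresses here are hypothetical:

```python
import ipaddress

# Two tenants independently pick the same private CIDR.
tenant_a = ipaddress.ip_network("192.168.1.0/24")
tenant_b = ipaddress.ip_network("192.168.1.0/24")
# This overlap is harmless: each subnet sits behind its own router.
assert tenant_a.overlaps(tenant_b)

# External subnet (hypothetical). Router uplinks and floating IPs
# are allocated from here and MUST be unique cloud-wide.
external = ipaddress.ip_network("203.0.113.0/24")
router_a_uplink = ipaddress.ip_address("203.0.113.10")
floating_ip_b = ipaddress.ip_address("203.0.113.20")
assert router_a_uplink in external and floating_ip_b in external
assert router_a_uplink != floating_ip_b

print("overlapping tenant subnets, unique external addresses")
```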
AMQP is the messaging technology chosen by the
OpenStack cloud. The AMQP broker, either
RabbitMQ or Qpid, sits between any two Nova
components and allows them to communicate in a
loosely coupled fashion. More precisely, Nova
components (the compute fabric of OpenStack) use
Remote Procedure Calls (RPC hereinafter) to
communicate to one another; however such a
paradigm is built atop the publish/subscribe
paradigm so that the following benefits can be achieved:
• Decoupling between client and servant (e.g.,
the client does not need to know where the
servant is located).
• Full asynchronism between client and servant
(e.g., the client does not need the servant to
be running at the time of the remote call).
• Random balancing of remote calls (e.g., if
multiple servants are up and running, one-way
calls are transparently dispatched to the first
available one).
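The decoupling and transparent dispatch described above can be sketched with stdlib queues and threads standing in for the AMQP broker; no RabbitMQ or Qpid is needed for the illustration, and the message format is made up:

```python
import queue
import threading

broker = queue.Queue()  # stands in for the AMQP broker

def servant(name, results):
    """A worker that consumes RPC-style messages off the shared queue."""
    while True:
        msg = broker.get()
        if msg is None:          # shutdown sentinel
            break
        results.append((name, msg["method"]))

results = []
workers = [threading.Thread(target=servant, args=(f"servant-{i}", results))
           for i in range(2)]
for w in workers:
    w.start()

# The client only knows the queue, not which servant will answer,
# and the servants need not be co-located with the client:
for i in range(4):
    broker.put({"method": f"run_instance_{i}"})
for _ in workers:                # one sentinel per worker
    broker.put(None)
for w in workers:
    w.join()

print(len(results))  # -> 4: every call handled by some servant
```

Which servant handles each call is not fixed; whichever worker pulls the message first processes it, which is the load-balancing behavior the bullets describe.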
ML2 “Drivers”
ML2 exposes two different types of drivers:
“Type” and “Mechanism”.
ML2 Type Drivers:
• Maintain type-specific state
• Provide tenant network allocation
• Validate provider networks
• Types: local, flat, VLAN, GRE, and VXLAN
ML2 Mechanism Drivers:
• Responsible for taking information supplied
by type drivers and ensuring it is
properly applied given the specific
networking mechanisms that have been enabled
• Examples: Arista, Cisco Nexus, Hyper-V, L2
Population, LinuxBridge, Open vSwitch, Tail-f
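The type-driver/mechanism-driver split can be sketched as two small classes. The class names and methods here are illustrative stand-ins, not Neutron's actual driver API:

```python
# Illustrative split: a type driver allocates segment state ("what"),
# a mechanism driver applies it on a backend ("how"). Names are
# hypothetical, not Neutron's real classes.

class VlanTypeDriver:
    """Maintains VLAN-specific state and hands out tenant segments."""
    def __init__(self, vlan_range=range(100, 200)):
        self.available = list(vlan_range)

    def allocate_tenant_segment(self):
        return {"network_type": "vlan",
                "segmentation_id": self.available.pop(0)}

class LinuxBridgeMechanismDriver:
    """Applies a segment on a specific backend (here it just records
    the segment instead of configuring bridges)."""
    def __init__(self):
        self.applied = []

    def create_network(self, segment):
        self.applied.append(segment)

type_driver = VlanTypeDriver()
mech_driver = LinuxBridgeMechanismDriver()

segment = type_driver.allocate_tenant_segment()  # type driver: "what"
mech_driver.create_network(segment)              # mechanism driver: "how"
print(segment["segmentation_id"])  # -> 100, first VLAN in the range
```

Because the segment dict is the only contract between the two halves, the same type driver can feed any enabled mechanism driver (LinuxBridge, Open vSwitch, a vendor backend), which is the point of the split.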