OpenStack Super Bootcamp.pdf

  1. OpenStack Super Bootcamp (Mirantis, 2012)
  2. Agenda
      ● OpenStack Essex architecture recap
      ● Folsom architecture overview
      ● Quantum vs Essex's networking model
  3. Initial State. The tenant is created, provisioning quota is available, and the user has access to Horizon/CLI. [Essex architecture diagram: Horizon/CLI, Keystone, Glance (endpoint, glance-api, glance-registry), nova cloud controller (nova-api, scheduler, nova-db, nova-network, nova-volume, queue), nova compute node (nova-compute, hypervisor), Swift (proxy-server, object store), shared storage]
  4. Step 1: Request VM Provisioning via UI/CLI. The user specifies the VM parameters (name, flavor, keys, etc.) and hits the "Create" button. [Essex architecture diagram]
  5. Step 2: Validate Auth Data. Horizon sends an HTTP request to Keystone; the auth info is specified in the HTTP headers. [Essex architecture diagram]
  6. Step 2: Validate Auth Data. Keystone sends a temporary token back to Horizon via HTTP. [Essex architecture diagram]
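As a rough illustration of this token exchange, here is a minimal Python sketch against the Keystone v2.0 API of that era using the requests library; the endpoint URL, tenant name, and credentials are placeholders, not values from the deck.

```python
import requests

KEYSTONE_URL = "http://keystone.example.com:5000/v2.0"  # placeholder endpoint

# Ask Keystone for a token using username/password credentials (v2.0 API).
resp = requests.post(
    KEYSTONE_URL + "/tokens",
    json={
        "auth": {
            "tenantName": "demo",                 # placeholder tenant
            "passwordCredentials": {
                "username": "demo",               # placeholder user
                "password": "secret",             # placeholder password
            },
        }
    },
)
resp.raise_for_status()
access = resp.json()["access"]

token = access["token"]["id"]                 # token used to sign later API calls
service_catalog = access["serviceCatalog"]    # endpoints for nova, glance, etc.
print("token:", token[:8], "...")
```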
  7. Step 3: Send API request to nova-api. Horizon sends a POST request to nova-api, signed with the given token. [Essex architecture diagram]
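A hedged sketch of what that POST to the Nova (Compute) v2 API looks like; the compute endpoint, tenant ID, image and flavor references, and keypair name are illustrative placeholders.

```python
import requests

NOVA_URL = "http://nova.example.com:8774/v2/TENANT_ID"  # placeholder endpoint
TOKEN = "..."  # token obtained from Keystone above

# POST /servers asks nova-api to provision a new instance.
resp = requests.post(
    NOVA_URL + "/servers",
    headers={"X-Auth-Token": TOKEN},
    json={
        "server": {
            "name": "demo-vm",
            "imageRef": "IMAGE_UUID",   # placeholder image ID
            "flavorRef": "1",           # placeholder flavor (e.g. m1.tiny)
            "key_name": "mykey",        # placeholder keypair
        }
    },
)
resp.raise_for_status()
print(resp.json()["server"]["id"])  # ID of the instance now being built
```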
  8. Step 4: Validate API Token. nova-api sends an HTTP request to Keystone to validate the API token. [Essex architecture diagram]
  9. Step 4: Validate API Token. Keystone validates the API token and sends an HTTP response with token acceptance/rejection info. [Essex architecture diagram]
  10. Step 5: Process API request. nova-api parses the request and validates it by fetching data from nova-db. If the request is valid, it saves an initial DB entry about the VM to the database. [Essex architecture diagram]
  11. Step 6: Publish provisioning request to queue. nova-api makes an rpc.call to the scheduler: it publishes a short message with the VM info to the scheduler queue. [Essex architecture diagram]
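For illustration only, a rough sketch of publishing such a message to a RabbitMQ topic with the kombu library; the broker URL, exchange/queue names, and message body are assumptions and this is not Nova's actual RPC code.

```python
from kombu import Connection, Exchange, Queue

# Placeholder broker and topic names; Nova's real RPC layer wraps this plumbing.
with Connection("amqp://guest:guest@localhost:5672//") as conn:
    exchange = Exchange("nova", type="topic")
    producer = conn.Producer(serializer="json")
    producer.publish(
        {"method": "run_instance", "args": {"instance_id": "INSTANCE_UUID"}},
        exchange=exchange,
        routing_key="scheduler",  # the scheduler consumes messages off this topic
        declare=[Queue("scheduler", exchange, routing_key="scheduler")],
    )
```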
  12. Step 6: Pick up provisioning request. The scheduler picks up the message from the MQ. [Essex architecture diagram]
  13. Step 7: Schedule provisioning. The scheduler fetches information about the whole cluster from the database and, based on this info, selects the most suitable compute host. [Essex architecture diagram]
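A toy sketch of the filter-and-weigh idea behind host selection; the host records and the RAM-based weighting are simplifications, not Nova's actual scheduler.

```python
# Toy host records; in Nova these come from the database / host state reports.
hosts = [
    {"name": "compute-1", "free_ram_mb": 2048, "free_disk_gb": 40},
    {"name": "compute-2", "free_ram_mb": 8192, "free_disk_gb": 10},
]

def schedule(hosts, ram_mb, disk_gb):
    # Filter: drop hosts that cannot fit the requested flavor.
    candidates = [h for h in hosts
                  if h["free_ram_mb"] >= ram_mb and h["free_disk_gb"] >= disk_gb]
    if not candidates:
        raise RuntimeError("No valid host found")
    # Weigh: prefer the host with the most free RAM.
    return max(candidates, key=lambda h: h["free_ram_mb"])

print(schedule(hosts, ram_mb=1024, disk_gb=5)["name"])  # -> compute-2
```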
  14. Step 8: Start VM provisioning on the compute node. The scheduler publishes a message to the compute queue (based on the host ID), which triggers VM provisioning. [Essex architecture diagram]
  15. Step 9: Start VM rendering via hypervisor. nova-compute fetches information about the VM from the DB, builds a command for the hypervisor, and delegates VM rendering to the hypervisor. [Essex architecture diagram]
  16. Step 10: Request VM Image from Glance. The hypervisor requests the VM image from Glance by image ID. [Essex architecture diagram]
  17. Step 11: Get Image URI from Glance. If an image with the given image ID can be found, Glance returns its URI. [Essex architecture diagram]
  18. Step 12: Download image from Swift. The hypervisor downloads the image from Glance's back-end using the URI given by Glance and, after downloading, renders it. [Essex architecture diagram]
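As a rough illustration of fetching image bits: in the real flow the compute node may pull directly from the back-end store using the URI, but for simplicity this hedged sketch streams the data through the Glance v1 API with Python requests; the endpoint, token, and image ID are placeholders.

```python
import requests

GLANCE_URL = "http://glance.example.com:9292/v1"  # placeholder endpoint
TOKEN = "..."                                     # Keystone token
IMAGE_ID = "IMAGE_UUID"                           # placeholder image ID

# GET /v1/images/<id> streams the raw image data from Glance's back-end store.
resp = requests.get(
    GLANCE_URL + "/images/" + IMAGE_ID,
    headers={"X-Auth-Token": TOKEN},
    stream=True,
)
resp.raise_for_status()
with open("image.qcow2", "wb") as f:
    for chunk in resp.iter_content(chunk_size=1024 * 1024):
        f.write(chunk)
```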
  19. Step 13: Configure network. nova-compute makes an rpc.call to nova-network requesting networking info. [Essex architecture diagram]
  20. Step 14: Allocate and associate network. nova-network updates the networking tables and the VM entry in the database. [Essex architecture diagram]
  21. Step 15: Request volume attachment. [Essex architecture diagram]
  22. Initial State. The tenant is created, provisioning quota is available, and the user has access to Horizon/CLI. [Essex architecture diagram]
  23. Still part of nova. [Essex architecture diagram]
  24. Limitations?
      ● Nova is "overloaded" to do 3 things:
        ○ Compute
        ○ Networking
        ○ Block Storage
      ● The APIs for networking and block storage are still part of nova
  25. Folsom architecture overview. [Diagram: UI (Horizon or CLI), Keystone (keystone server, keystone-db), nova controller (nova-api, scheduler, nova-db, queue), nova compute node (nova-compute, hypervisor, VM, quantum plugin), Quantum (quantum server, quantum-db), network node, Cinder (endpoint, scheduler, cinder-vol, cinder-db, queue), block storage node, Glance (glance-api, glance-registry, glance-db), Swift (proxy-server, object store)]
  26. Folsom architecture overview (continued). [Same diagram]
  27. Essex networking model overview
      ● single-host vs multi-host networking
      ● the role of the network manager
      ● different network manager types
      ● floating IPs
  28. Revisit of KVM networking with a linux bridge. [Diagram: a router at 10.0.0.1 (default gateway for the VMs); VMs 10.0.0.2-10.0.0.5 plugged into linux bridges on two nova-compute hosts, each host's eth0 connected to a switch]
  29. Single-host networking
      ● KVM host == nova-compute
      ● router == nova-network
      [Diagram: the nova-network host has a public IP on eth1 and does routing/NAT; its eth0 (10.0.0.1) is the default gateway for the VMs; VMs 10.0.0.2-10.0.0.5 sit on linux bridges on two nova-compute hosts connected via a switch]
  30. What happens if the nova-network host dies:
      VM:~root$ ping google.com
      No route to host
      [Diagram: the central routing/NAT host is gone; VMs 10.0.0.2-10.0.0.5 on linux bridges on the two nova-compute hosts are left without external connectivity]
  31. Multi-host networking. Move routing from the central server to each compute node independently to prevent a SPOF. [Diagram: each host has a public IP on eth1 and does routing/NAT; VMs 10.0.0.2-10.0.0.5 on linux bridges; both hosts run nova-compute & nova-network and are connected via a switch]
  32. Multi-host networking. Compute servers maintain Internet access independently of each other; each of them runs the nova-network & nova-compute components. [Diagram: two nova-compute & nova-network hosts with public IPs on eth1 (routing/NAT); VMs 10.0.0.2-10.0.0.5 on linux bridges with gateways 10.0.0.1 and 10.0.0.6; eth0 of each host connected to a switch]
  33. Multi-host networking - features
      ● Independent traffic: instances on both compute nodes access external networks independently. For each of them, the local linux bridge is used as the default gateway
      ● Routing: kernel routing tables are checked to decide if the packet should be NAT-ed to eth1 or sent via eth0
      ● IP address management: nova-network maintains IP assignments to linux bridges and instances in the nova database
  36. Network manager
      ● Determines the network layout of the cloud infrastructure
      ● Capabilities of network managers:
        ○ Plugging instances into linux bridges
        ○ Creating linux bridges
        ○ IP allocation to instances
        ○ Injecting network configuration into instances
        ○ Providing DHCP services for instances
        ○ Configuring VLANs
        ○ Traffic filtering
        ○ Providing external connectivity to instances
  37. FlatManager
      ● Features:
        ○ Operates on ONE large IP pool; chunks of it are shared between tenants
        ○ Allocates IPs to instances (in the nova database) as they are created
        ○ Plugs instances into a predefined bridge
        ○ Injects the network config into /etc/network/interfaces
      [Diagram: a VM with /etc/network/interfaces containing "address 10.0.0.2 gateway 10.0.0.1"; linux bridges (10.0.0.1 as gateway) on the nova-compute & nova-network hosts]
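A minimal sketch of what such config injection amounts to: rendering a Debian-style /etc/network/interfaces file from the allocated address. The file path and the omitted image-mount step are simplified assumptions, not FlatManager's actual code.

```python
INTERFACES_TEMPLATE = """\
auto eth0
iface eth0 inet static
    address {address}
    netmask {netmask}
    gateway {gateway}
"""

def render_interfaces(address, netmask, gateway):
    # Build the Debian-style network config that FlatManager injects into the image.
    return INTERFACES_TEMPLATE.format(address=address, netmask=netmask, gateway=gateway)

# In the real flow the guest image is mounted first; here we just write locally.
config = render_interfaces("10.0.0.2", "255.255.255.0", "10.0.0.1")
with open("interfaces.injected", "w") as f:   # placeholder path
    f.write(config)
print(config)
```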
  38. FlatDHCPManager
      ● Features:
        ○ Operates on ONE large IP pool; chunks of it are shared between tenants
        ○ Allocates IPs to instances (in the nova database) as they are created
        ○ Creates a bridge and plugs instances into it
        ○ Runs a DHCP server (dnsmasq) for instances to boot from
      [Diagram: a VM obtains a DHCP static lease (ip 10.0.0.2, gw 10.0.0.1) from dnsmasq running on the linux bridge (10.0.0.1 as gateway) on the nova-compute & nova-network hosts]
  39. DHCP server (dnsmasq) operation
      ● is managed by the nova-network component
      ● in multi-host networking, runs on every compute node and provides addresses only to instances on that node (based on DHCP reservations)
      [Diagram: two hosts, each running dnsmasq with static leases for its local VMs: 10.0.0.4 & 10.0.0.6 (gw 10.0.0.3) on one host, 10.0.0.2 & 10.0.0.5 (gw 10.0.0.1) on the other]
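For illustration, a sketch of producing dnsmasq-style static reservations (the --dhcp-hostsfile format) from instance records; the MAC/IP/hostname values are made up and this is not nova-network's actual implementation.

```python
# Toy instance records; nova-network keeps the real ones in its database.
instances = [
    {"mac": "fa:16:3e:00:00:02", "ip": "10.0.0.2", "name": "vm-a"},
    {"mac": "fa:16:3e:00:00:05", "ip": "10.0.0.5", "name": "vm-b"},
]

# dnsmasq static-lease lines have the form: <mac>,<hostname>,<ip>
lines = ["{mac},{name},{ip}".format(**inst) for inst in instances]

with open("dnsmasq.hosts", "w") as f:   # file handed to dnsmasq via --dhcp-hostsfile
    f.write("\n".join(lines) + "\n")
print("\n".join(lines))
```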
  40. VlanManager
      ● Features:
        ○ Can manage many IP subnets
        ○ Uses a dedicated network bridge for each network
        ○ Can map networks to tenants
        ○ Runs a DHCP server (dnsmasq) for instances to boot from
        ○ Separates network traffic with 802.1Q VLANs
      [Diagram: per-network bridges br100 and br200 with dnsmasq_net1 and dnsmasq_net2, attached to VLAN interfaces eth0.100 and eth0.200 on the nova-compute & nova-network hosts]
  41. VlanManager - switch requirements. The switch must support 802.1Q tagged VLANs to connect instances on different compute nodes. [Diagram: br100/br200 on eth0.100/eth0.200 on both nova-compute & nova-network hosts; tagged traffic goes through an 802.1Q-capable switch]
  42. Network managers comparison
      ● FlatManager - possible use cases: deprecated, should not be used for any deployments. Limitations: only Debian derivatives supported; a single IP network suffers from scalability limitations.
      ● FlatDHCPManager - possible use cases: internal, relatively small corporate clouds which do not require tenant isolation. Limitations: instances share the same linux bridge regardless of which tenant they belong to; limited scalability because of one huge IP subnet.
      ● VlanManager - possible use cases: public clouds which require L2 traffic isolation between tenants and the use of dedicated subnets. Limitations: requires an 802.1Q-capable switch; scalable only to 4096 VLANs.
  43. Inter-tenant traffic. The compute node's routing table is consulted to route traffic between tenants' networks (based on the IPs of the linux bridges). [Diagram: routing entries "10.100.0.0 via br100" and "10.200.0.0 via br200"; br100 at 10.100.0.1 and br200 at 10.200.0.1 on eth0.100/eth0.200; public IP on eth1; nova-compute & nova-network host]
  44. Accessing the internet
      ● The eth1 address is set as the compute node's default gateway
      ● The compute node's routing table is consulted to route traffic from the instance to the internet over the public interface (eth1)
      ● Source NAT to the compute node's public address is performed
      [Diagram: routing/NAT rules "0.0.0.0 via eth1" and "src 10.100.0.0 -j SNAT --to public_ip"; br100 at 10.100.0.1 and br200 at 10.200.0.1; nova-compute & nova-network host]
  45. Floating & fixed IPs
      ● Fixed IPs:
        ○ given to each instance on boot (by dnsmasq)
        ○ taken from private IP ranges (10.0.0.0, 192.168.0.0, etc.)
        ○ used only for communication between instances and to external networks
        ○ inaccessible from external networks
      ● Floating IPs:
        ○ allocated & associated to instances by cloud users
        ○ bunches of publicly routable IPs registered in OpenStack by the cloud admin
        ○ accessible from external networks
        ○ multiple floating IP pools, leading to different ISPs
  46. Floating IPs. The user associates a floating IP with a VM:
        ○ the floating IP is added as a secondary IP address on the compute node's eth1 (public interface)
        ○ a DNAT rule is set to redirect floating IP -> fixed IP (10.0.0.2)
      [Diagram: vm_float_ip 92.93.94.95 on eth1; DNAT rule "-d 92.93.94.95/32 -j DNAT --to-destination 10.0.0.2"; VMs 10.0.0.2 and 10.0.0.3 on the linux bridge of the nova-compute & nova-network host]
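A hedged sketch of the two host-side commands this association roughly boils down to, wrapped in Python subprocess calls; the interface name and addresses are taken from the slide, and real nova-network places its rules differently (and also adds SNAT for outgoing traffic).

```python
import subprocess

FLOATING_IP = "92.93.94.95"
FIXED_IP = "10.0.0.2"
PUBLIC_IF = "eth1"

# 1) Add the floating IP as a secondary address on the public interface.
subprocess.check_call(
    ["ip", "addr", "add", FLOATING_IP + "/32", "dev", PUBLIC_IF])

# 2) DNAT incoming traffic for the floating IP to the instance's fixed IP.
subprocess.check_call(
    ["iptables", "-t", "nat", "-A", "PREROUTING",
     "-d", FLOATING_IP + "/32", "-j", "DNAT", "--to-destination", FIXED_IP])
```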
  47. Limitations
      ● Networking management is available only to the admin
      ● The implementation is coupled with the networking abstractions
  48. QUANTUM - a new networking platform
      ● Provides a flexible API for service providers or their tenants to manage OpenStack network topologies
      ● Presents a logical API and a corresponding plug-in architecture that separates the description of network connectivity from its implementation
      ● Offers an API that is extensible and evolves independently of the compute API
      ● Provides a platform for integrating advanced networking solutions
  53. Quantum Overview
      ● quantum abstracts
      ● quantum architecture
      ● quantum Open vSwitch plugin
      ● quantum L3 agents
  54. QUANTUM - abstracts - tenant network layout. [Diagram: provider/external nets 8.8.0.0/16 and 172.24.0.0/16; NAT routers router1 (10.0.0.1, 192.168.0.1) and router2 (10.23.0.1) acting as gateways; local nets with VMs 10.0.0.2, 192.168.0.2, 10.23.0.2, 10.23.0.3]
  55. QUANTUM - abstracts
      ● virtual L2 networks (ports & switches)
      ● IP pools & DHCP
      ● virtual routers & NAT
      ● "local" & "external" networks
  56. Quantum network abstracts vs hardware. [Diagram: three compute nodes (one in another DC) with VMs, connected to the DC net, the DC DMZ, a remote tunnel, and the internet]
  57. QUANTUM - abstracts
      ● virtual L2 networks
      ● IP pools & DHCP
      ● virtual ports & routers
      ● "local" & "external" networks
      Virtual networks are delivered on top of datacenter hardware.
  58. Quantum - architecture (source: http://openvswitch.org)
  59. Quantum uses plugins to deal with hardware diversity and different layers of the OSI model.
  60. Quantum plugin architecture (source: http://openvswitch.org)
  61. Quantum plugin architecture (source: http://openvswitch.org)
  62. Quantum plugin architecture (source: http://openvswitch.org)
  63. The quantum plugin determines the network connectivity layout.
  64. Folsom - available quantum plugins
      ● Linux Bridge
      ● OpenVSwitch
      ● Nicira NVP
      ● Cisco (UCS Blade + Nexus)
      ● Ryu OpenFlow controller
      ● NEC ProgrammableFlow Controller
  65. Example plugin: Open vSwitch
  66. Example - Open vSwitch plugin
      ● leverages the Open vSwitch software switch
      ● modes of operation:
        ○ FLAT: networks share one L2 domain
        ○ VLAN: networks are separated by 802.1Q VLANs
        ○ TUNNEL: traffic is carried over GRE with different per-net tunnel IDs
  67. Open vSwitch plugin - bridges. Integration bridge & NIC bridge. [Diagram: a single integration bridge "br-int" on the compute node with VMs on local VLANs LV_1/LV_2; a patch port leads to a bridge attached to a physical interface (br-eth0); the ovs daemon and q-agt run on the node]
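As a hedged illustration of this bridge layout, roughly how the integration bridge, the NIC bridge, and a tagged VM port could be created with the ovs-vsctl CLI (wrapped in Python); the bridge/port names and the local VLAN tag are placeholders, and in practice the Quantum OVS agent automates this.

```python
import subprocess

def ovs(*args):
    # Thin wrapper around the ovs-vsctl CLI.
    subprocess.check_call(["ovs-vsctl"] + list(args))

# Integration bridge that all local VM ports plug into.
ovs("--may-exist", "add-br", "br-int")

# Bridge attached to the physical NIC (uplink towards the switch).
ovs("--may-exist", "add-br", "br-eth0")

# Plug a VM's tap device into br-int on a "local" VLAN (placeholder names).
ovs("--may-exist", "add-port", "br-int", "tap-vm1", "tag=1")
```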
  68. Open vSwitch plugin - ovs daemon. The openvswitch daemon accepts calls from the Quantum agent and reconfigures the network; the Quantum agent accepts calls from the central quantum server via the plugin. [Diagram: compute node with VMs on LV_1/LV_2, br-int, ovs daemon, br-eth0 and q-agt; quantum server with q-plugin]
  69. Open vSwitch plugin vs compute farm. The Quantum server manages an Open vSwitch server farm through Quantum agents on the compute nodes. [Diagram: quantum server with q-plugin]
  70. Open vSwitch plugin - local VLANs. Traffic is separated by "local" VLANs (LV_1, LV_2): one bridge, many VLANs. [Diagram: compute node with VMs on LV_1/LV_2 on br-int; ovs daemon; br-eth0; q-agt]
  71. OpenStack connectivity - Open vSwitch plugin
      ● leverages the Open vSwitch software switch
      ● modes of operation:
        ○ FLAT: networks share one L2 domain
        ○ VLAN: networks are separated by 802.1Q VLANs
        ○ TUNNEL: traffic is carried over GRE with different per-net tunnel IDs
  72. Open vSwitch plugin - FLAT mode. A single L2 broadcast domain. [Diagram]
  73. Local vs global traffic IDs - FLAT mode. FLAT: LV_1 >> UNTAGGED. The local VLAN tag is stripped before sending down the wire. [Diagram: VM on LV_1 -> br-int -> br-eth0 -> eth0, openvswitch]
  74. Open vSwitch plugin - VLAN mode. 802.1Q VLANs. [Diagram]
  75. Local vs global traffic IDs - VLAN mode. VLAN: LV_1 >> NET1_GLOBAL_VID. The local VLAN tag is changed to a "global" VLAN tag. [Diagram: VM on LV_1 -> br-int -> br-eth0 -> eth0, openvswitch]
  76. Open vSwitch plugin - Tunnel mode. GRE tunnel IDs. [Diagram]
  77. Local vs global traffic IDs - Tunnel mode. GRE: LV_1 >> NET1_TUNNEL_ID. The local VLAN tag is changed to a GRE tunnel ID. [Diagram: VM on LV_1 -> br-int -> br-tun -> eth0, openvswitch]
  78. VLAN vs GRE scalability. VLAN ID: 12-bit field, 2^12 = 4096. GRE tunnel ID: 32-bit field, 2^32 = 4,294,967,296.
  79. Tenant connection needs - provider networks. There is a need for many physical connections per compute node. [Diagram: compute nodes (one in another DC) with VMs connected to the DC net, the DC DMZ, a remote tunnel, and the internet]
  80. Integration with cloud and datacenter networks. A dedicated per-NIC bridge. [Diagram]
  81. Integration with cloud and datacenter networks. VLAN ranges are mapped to per-NIC bridges (e.g. VLAN range 100-400, VLAN range 401-800, tunnel ID range 50-600). [Diagram]
  82. Agents: routing/NAT & IPAM
  83. Tenant connection needs - L3. Ensuring "wired" instance connectivity is not enough. [Diagram: VMs]
  84. Tenant connection needs - L3. We need IP addresses. [Diagram: subnets 10.0.0.0/24, 10.1.0.0/24, 10.2.0.0/24 with VMs]
  85. Tenant connection needs - L3. We need routers. [Diagram: subnets 10.0.0.0/24, 10.1.0.0/24, 10.2.0.0/24 with VMs]
  86. Tenant connection needs - L3. We need external access/NAT. [Diagram: subnets 10.0.0.0/24, 10.1.0.0/24, 10.2.0.0/24 with VMs]
  87. Quantum vs L3 services. The dhcp-agent & the quantum db handle IP address management; the l3-agent handles routing & NAT. [Diagram: subnets 10.0.0.0/24, 10.1.0.0/24, 10.2.0.0/24 with VMs]
  88. IPAM
      ● create a Quantum network
      ● create a Quantum subnet
      ● pair the subnet with the network
      ● boot instances and throw them into the network
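A rough sketch of that workflow against the Quantum v2.0 REST API using Python requests; the endpoints, token, tenant ID, image/flavor references, and CIDR are placeholders.

```python
import requests

QUANTUM_URL = "http://quantum.example.com:9696/v2.0"  # placeholder endpoint
HEADERS = {"X-Auth-Token": "..."}                     # Keystone token

# 1) Create a network.
net = requests.post(QUANTUM_URL + "/networks", headers=HEADERS,
                    json={"network": {"name": "demo-net"}}).json()["network"]

# 2) Create a subnet and pair it with the network (IP pool + DHCP range).
subnet = requests.post(QUANTUM_URL + "/subnets", headers=HEADERS,
                       json={"subnet": {"network_id": net["id"],
                                        "ip_version": 4,
                                        "cidr": "10.0.0.0/24"}}).json()["subnet"]

# 3) Boot an instance attached to the new network via nova-api (placeholder refs).
requests.post("http://nova.example.com:8774/v2/TENANT_ID/servers", headers=HEADERS,
              json={"server": {"name": "demo-vm",
                               "imageRef": "IMAGE_UUID",
                               "flavorRef": "1",
                               "networks": [{"uuid": net["id"]}]}})
```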
  89. DHCP. dhcp-agent: aims to manage different DHCP back-ends to provide DHCP services to OpenStack instances.
  90. Routing. l3-agent:
      ● creates virtual routers and plugs them into different subnets
      ● provides connectivity between Quantum networks
      ● creates default gateways for instances
  91. NAT. l3-agent: creates NAT-ed connections to "external" networks.
  92. Compute cluster with plugins & agents. [Diagram: quantum server; a dhcp host running dnsmasq and the dhcp-agent; a gateway host running routing/NAT and the l3-agent]
  93. Quantum - plugin & agent summary. [Diagram: plugins above QUANTUM - OVS (flat, vlan, gre), Cisco (Nexus, UCS), Nicira NVP, Ryu (OpenFlow/OVS), NEC (ProgrammableFlow), others; agents below QUANTUM - DHCP agent (dnsmasq), L3 agent (iptables, NAT, router), firewall agent, load-balancer agent (HAproxy, F5, ...)]
  94. Quantum with OVS - model summary
      ● each tenant manages its IP addresses independently
      ● networks are separated on L2 with VLANs or GRE tunnel IDs
      ● disjoint concepts of "network" and "IP pool"
      ● tenant networks are connected with one another by "virtual routers"
      ● internal vs external networks
  95. Quantum vs nova-network
      ● multi-host: nova-network Yes, Quantum No
      ● VLAN networking: nova-network Yes, Quantum Yes
      ● Flat(DHCP) networking: nova-network Yes, Quantum Yes
      ● Tunneling (GRE): nova-network No, Quantum Yes
      ● many bridges: nova-network No, Quantum Yes
      ● SDN: nova-network No, Quantum Yes
      ● IPAM: nova-network Yes, Quantum Yes
      ● dashboard support: nova-network No, Quantum Limited (no floating IPs)
      ● security groups: nova-network Yes, Quantum Limited (only with non-overlapping IP pools)
  96. Questions? kishanov@mirantis.com
