- The keynote at the OpenStack 2012 Fall Summit highlighted Rackspace's decreasing share of OpenStack commits over time, and Rackspace's private cloud, which runs OpenStack and sees high usage.
- The Quantum project in OpenStack provides network connectivity as a service and allows different virtualization technologies to be plugged in as backends. It has evolved to add L3 and L4-L7 network services.
- Quantum uses a plugin architecture so that different virtual network backends, such as Open vSwitch and Linux bridge, can be used. Extensions allow additional network properties and new services, such as routing and load balancing, to be added.
How Networking Works with Data Science (HungWei Chiu)
Introduces the basic networking models, including the OSI model and the TCP/IP model, along with basic networking functions such as routing, classification, and security.
Also introduces the basic concepts of Open vSwitch. The deck covers how the Linux kernel and networking stack work together to forward and process packets, and compares that Linux networking stack functionality with Open vSwitch and OpenFlow.
At the end, it discusses the challenges of integrating Open vSwitch with Kubernetes: which networking functions need to be resolved, and what benefits Open vSwitch brings.
Docker Networking with New Ipvlan and Macvlan Drivers (Brent Salisbury)
Docker Networking presentation at ONS2016.
Docker Macvlan and Ipvlan Networking Drivers Experimental Readme:
github.com/docker/docker/blob/master/experimental/vlan-networks.md
Kernel requirements: Ipvlan mode needs v4.2+; Macvlan mode needs v3.19.
If using VirtualBox to test with, use NAT-mode interfaces unless you have multiple MAC addresses working in your setup. Use the 172.x.x.x subnet and gateway used by the VBox NAT network. VMware Fusion works out of the box.
Here is a screenshot of a VirtualBox NAT interface:
https://www.dropbox.com/s/w1rf61n18y7q4f1/Screenshot%202016-03-20%2001.55.13.png?dl=0
Accelerating Neutron with Intel DPDK, from a #vBrownBag session at OpenStack Summit Atlanta 2014.
1. Many OpenStack deployments use the Open vSwitch plugin for Neutron.
2. But its performance and scalability are not sufficient for production.
3. Intel DPDK vSwitch is a DPDK-optimized version of Open vSwitch developed by Intel and publicly available at 01.org. It doesn't have enough functionality for Neutron, so we implemented the needed parts, including GRE and ARP stacks and a Neutron plugin.
4. We achieved a 5x networking performance improvement in OpenStack!
These slides were created to make Open vSwitch easier to understand, so I tried to make them practical: if you just follow this scenario, you will pick up some working knowledge of OVS.
In this document, I mainly use two commands, "ip" and "ovs-vsctl", to show you what they can do.
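As a rough illustration of the kind of command-line walk-through the deck relies on, the helper below only builds the `ip` and `ovs-vsctl` command lines rather than executing them, so it can be read without an OVS installation; the bridge and port names are made up for the example:

```python
def ovs_demo_commands(bridge: str, port: str) -> list[str]:
    # Build (but do not run) the shell commands a minimal OVS
    # walk-through typically uses: create a bridge, add a veth
    # pair, attach one end to the bridge, and bring the links up.
    return [
        f"ovs-vsctl add-br {bridge}",
        f"ip link add {port} type veth peer name {port}-peer",
        f"ovs-vsctl add-port {bridge} {port}",
        f"ip link set {port} up",
        f"ip link set {port}-peer up",
    ]
```

Printing `ovs_demo_commands("br0", "veth0")` yields the command sequence one would paste into a root shell on a host with Open vSwitch installed.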
This talk shows how to implement application-based routing on a common Linux distribution. We use nDPI to perform DPI and classify each packet first, use the Linux kernel's built-in packet mark to pass that information from user space to kernel space, and then the policy-routing system uses the mark to route the packet to a different destination or interface.
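The classify-mark-route pipeline described above can be mimicked in a few lines of Python. This is purely illustrative: the application names, mark values, and table names are invented, and a real deployment uses nDPI, iptables marks, and `ip rule` instead.

```python
# Step 1: "DPI" classifies a packet into an application category.
# Step 2: the category becomes a mark (on Linux, an fwmark).
# Step 3: policy routing picks a route table based on the mark.
APP_MARKS = {"http": 0x1, "dns": 0x2}            # invented mark values
ROUTE_TABLES = {0x1: "via-isp-a", 0x2: "via-isp-b"}  # invented table names

def classify(payload: bytes) -> str:
    # Toy stand-in for nDPI: inspect only the first payload bytes.
    if payload.startswith(b"GET ") or payload.startswith(b"POST "):
        return "http"
    return "dns"

def route_packet(payload: bytes) -> str:
    mark = APP_MARKS[classify(payload)]
    return ROUTE_TABLES.get(mark, "default")
```

The design point the talk makes is that classification (user space) and routing (kernel space) stay decoupled: the mark is the only thing passed between them.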
Introduction to Docker networking options. We give an in-depth description of the different options with single-host examples. See our other presentations for multi-host, IPv6, and CoreOS Flannel descriptions.
In this deck, we discuss the concepts behind iptables/ebtables and then show how they work in a simple Docker environment.
To track packet flow during container-to-container communication, we use the LOG target in iptables/ebtables to record the information.
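To give a flavour of what that tracking looks like, the snippet below parses the key=value fields of an iptables LOG line. The sample line is fabricated, but it follows the standard LOG output format of space-separated KEY=VALUE pairs:

```python
import re

def parse_iptables_log(line: str) -> dict:
    # iptables LOG entries are space-separated KEY=VALUE pairs;
    # keys with no value (like "OUT=") map to an empty string.
    return dict(re.findall(r"(\w+)=(\S*)", line))

# Fabricated but format-faithful example of a LOG line body.
sample = "IN=docker0 OUT= SRC=172.17.0.2 DST=172.17.0.3 PROTO=TCP SPT=41234 DPT=80"
```

Feeding each logged line through such a parser makes it easy to reconstruct which container talked to which, on which port, and through which bridge interface.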
Orchestration Tool Roundup: Kubernetes vs. Docker vs. Heat vs. Terraform vs... (Nati Shalom)
Video recording: https://www.youtube.com/watch?v=tGlIgUeoGz8
It’s no news that containers represent a portable unit of deployment, and OpenStack has proven an ideal environment for running container workloads. Where it usually becomes more complex is that an application is often built out of multiple containers. What’s more, setting up a cluster of container images can be fairly cumbersome, because you need to make one container aware of another and expose the intimate details they need to communicate, which is not trivial, especially if they’re not on the same host.
These scenarios have instigated the demand for some kind of orchestrator, and the list of container orchestrators is growing fairly fast. This session will compare the different orchestration projects out there - from Heat to Kubernetes to TOSCA - and help you choose the right tool for the job.
Session link from the summit: https://openstacksummitmay2015vancouver.sched.org/event/abd484e0dedcb9774edda1548ad47518#.VV5eh5NViko
This is a follow-up to our Docker networking tutorial. This slide deck describes the options for deploying Docker containers in a multi-host cluster environment. We introduce the LorisPack toolkit for connecting and isolating pods of containers deployed across multiple hosts.
Control Your Network ASICs: What Benefits switchdev Can Bring Us (HungWei Chiu)
In this deck, I introduce switchdev and the problem it aims to solve. To this day, most hardware switch ASICs (application-specific integrated circuits) can only be controlled through the vendor's proprietary binary SDK, which is inconvenient for system administrators and developers. switchdev was designed to break this chip-vendor lock-in: with its help, we can develop a generic solution for hardware switch chips and cut the dependency on the vendor's binary-blob SDK.
In other words, the Linux kernel can now communicate directly with the vendor's ASIC, and software programmers and system administrators can easily control that ASIC to provide more flexible, powerful, and programmable network functions.
This is my latest OpenStack Networking presentation, presented at OSDC 2014. It includes many backup slides with CLI outputs that show how ML2 with the OVS agent creates GRE-based overlay networks and logical routers.
2014 OpenStack Summit - Neutron OVS to LinuxBridge Migration (James Denton)
Presentation titled 'Migrating production workloads from OVS to LinuxBridge'. Presented at the Fall 2014 OpenStack summit in Paris, this slide deck introduced the possibility of migrating live workloads from Open vSwitch to LinuxBridge with minimal downtime.
Cilium - API-aware Networking and Security for Containers based on BPF (Thomas Graf)
Cilium is open source software for providing and transparently securing network connectivity and load balancing between application workloads such as application containers or processes. Cilium operates at Layer 3/4 to provide traditional networking and security services, as well as at Layer 7 to protect and secure the use of modern application protocols such as HTTP, gRPC, and Kafka. Cilium is integrated into common orchestration frameworks such as Kubernetes and Mesos.
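A toy sketch of the "API-aware" layering: an L3/L4 check decides whether traffic may flow at all, and an L7 rule then constrains what the application protocol may do. This is not Cilium's actual policy language; the workload labels, port, and HTTP rule below are invented for illustration:

```python
def allow(src_label: str, dst_port: int, method: str, path: str) -> bool:
    # L3/L4 layer: only the "frontend" workload may reach port 80 at all
    # (a made-up identity-based rule in the spirit of label selectors).
    if not (src_label == "frontend" and dst_port == 80):
        return False
    # L7 layer: of the HTTP requests that pass L3/L4, permit only
    # GET requests under /public (a made-up application-level rule).
    return method == "GET" and path.startswith("/public")
```

The point of the layering is that a plain L4 firewall could only say "frontend may talk to port 80"; the L7 rule additionally blocks, say, a POST to an admin endpoint over that same connection.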
Docker Networking Basics & Coupling with Software Defined Networks (Adrien Blind)
This presentation reviews Docker networking, presents the basic paradigms of Software Defined Networking, and then proposes a combined implementation that benefits from coupling these two technologies. The proposed implementation model could be a good starting point for creating multi-tenant PaaS platforms.
As a bonus, OpenStack Neutron internal design is presented.
You can also have a look at our previous presentation on enterprise patterns for Docker:
http://fr.slideshare.net/ArnaudMAZIN/docker-meetup-paris-enterprise-docker
SmartCom - For a Better Indonesian Digital Creative Industry (Tanto Suratno)
How to provide a simple yet acceptable approach to fostering a digital-creativity platform among community startups in order to deliver vertical industry solutions.
An introductory slide deck explaining SDN and NFV technologies: what the difference between them is and when each one is used. It also covers some Cisco products in each area (SDN, NFV, and automation), with some real use cases deployed in today's service-provider networks.
Hope you like it.
Quantum - Virtual Networks for OpenStack (salv_orlando)
An overview of Quantum, the soon-to-be default OpenStack network service.
These slides introduce Quantum and its design goals, and discuss the API. They also try to address how Quantum relates to Software Defined Networking (SDN).
Slides presented to the OpenStack developer summit during the "Quantum Overview" session (note: these are not the slides presented during the conference; these are more technical and less polished).
Networking is NOT Free: Lessons in Network Design (Randy Bias)
An in-depth critique of the existing OpenStack networking approach, with a focus on how the Nova network controller is more of a hindrance than a help. Discusses the gaps in Quantum's functionality that must be closed, and alternative solutions. How can we make networking in OpenStack robust, high performance, and fault tolerant? What do typical large-scale networks look like, and what lessons can we learn from them? Is there an approach to networking that works the same with a handful of servers as it does with hundreds of racks?
Software Defined Networking is seeing a lot of momentum these days. With server virtualization solving the virtual machines problem, and large-scale object storage solving the distributed storage challenge, SDN is seen as key in virtual networking.
In this talk we don't try to define SDN but rather dive straight into what is, in our opinion, the core enabler of SDN: the virtual switch OVS.
OVS can help manage VLANs for guest network isolation, and it can re-route any traffic at L2-L4 by keeping forwarding tables controlled by a remote (OpenFlow) controller. We show these few OVS capabilities and highlight how they are used in CloudStack and Xen.
Xen Summit presentation on CloudStack and Software Defined Networks. Open vSwitch is the default bridge in Xen and is supported in XenServer and Xen Cloud Platform.
Understanding and Deploying Network Virtualization (SDN Hub)
Analogous to server virtualization, network virtualization decouples and isolates virtual networks (i.e., tenant networks) from the underlying network hardware. One of the key value propositions of Software-Defined Networking (SDN) is to enable the provisioning and operation of virtual networks. This tutorial motivates the need for network virtualization, describes the high-level requirements, provides an overview of all architectural approaches, and gives you a clear picture of the vendor landscape.
Previously presented at ONUG Fall 2013 and Spring 2014.
OpenStack Networking 101 Update, 2014 OS Meetup (syfauser)
This is the latest update to my OpenStack Networking / Neutron 101 slides, with some more information and caveats on the new DVR and gateway HA features.
Overview of the evolution of OpenStack nova-networking towards Neutron. Architecture overview of the OVS plugin, ML2, and the MidoNet overlay product. Overview and example of Heat templates, along with automation of physical switches using Cumulus.
2. Agenda
• Keynote
• Quantum
• Quantum Plugins
• OpenStack Quantum Use Cases
• SDN and what we do for it
3. Keynote speech -- Troy Toman, Rackspace
– Rackspace contribution percentages have been steadily declining, from 54% of commits in Essex to 30% in Folsom
– Continuous delivery by running trunk in production
– Deploying every few weeks in less than an hour
– Private cloud (Alamo) which runs on OpenStack: 120 million API hits, 99.97% availability
4. OpenStack Folsom
• The big feeling at the Folsom Summit
– OpenStack is in production
• Two of the most noteworthy new features in the OpenStack Folsom release are Quantum and Cinder
• Quantum
– The interest around network virtualization and the Quantum project was overwhelming and very gratifying
– Not just about L2 virtual networks, but also about network services (load balancing, firewall…) and SDN
5. Quantum Design Session
• Learn about the design session process
• The main subjects in this design session
– IPv6, DHCP, VPN access
– Modeling the insertion of services
– LBaaS, firewall
– Metering
– Quantum L3 and advanced API improvements
6. What is Quantum
• Provides "network connectivity as a service" between interface devices (e.g., vNICs) managed by other OpenStack services (L2)
– Quote: Provides a "building block" for sophisticated cloud network topologies. @Dan Wendlandt
• The functionality of Quantum
– Tenant-facing APIs
– Rich network topologies
– Allows different virtualization technologies to be plugged in
7. Quantum Evolution
• Essex (L2 support)
– Network segments
– Ports
• Folsom (L2 + L3 support, to replace Nova Network)
– IP subnets
– DHCP
– Routing
• Grizzly (more L3, L4-L7)
– Firewalling, load balancers, and more
8. Quantum Architecture
[Diagram: tenant tools (GUI, CLI, API code) call generic OpenStack APIs, which map to operator-selected backends: Compute API → KVM, Network API → OVS plugin, Storage API → Ceph. Captions: an eco-system of tools that leverage the Quantum API; a generic tenant API to create and configure "virtual networks"; a "plugin" architecture with different back-end "engines".]
9. Quantum Architecture
[Diagram: API clients (tenant create-net scripts, Horizon GUI, orchestration code, Nova compute) call the Quantum API and its extensions; the Quantum service dispatches create-net/create-port requests to Plugin X, which programs backend X's virtual switches and physical network.]
10. Basic API Abstractions
• "Virtual networks" and "virtual subnets" are fundamentally multi-tenant, just like virtual servers (e.g., overlapping IPs can be used on different networks).
[Diagram: in Nova, virtual servers VM1 (10.0.0.2) and VM2 (10.0.0.3) attach through virtual interfaces (VIFs) to virtual ports on Quantum's Net1, an L2 virtual network with virtual subnet 10.0.0.0/24.]
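The overlapping-IP point on this slide can be demonstrated with Python's ipaddress module: two tenants can reuse 10.0.0.0/24 because an address is only meaningful within its own network's scope. The tenant and network names below are illustrative, not Quantum identifiers:

```python
import ipaddress

# Each virtual network carries its own subnet; address uniqueness
# is only required *within* a network, not across tenants.
networks = {
    ("tenant-a", "net1"): ipaddress.ip_network("10.0.0.0/24"),
    ("tenant-b", "net1"): ipaddress.ip_network("10.0.0.0/24"),  # same CIDR, no conflict
}

def port_key(tenant: str, net: str, addr: str) -> tuple:
    # A "port" is identified by (tenant, network, address), so
    # 10.0.0.2 in tenant-a and 10.0.0.2 in tenant-b are distinct.
    ip = ipaddress.ip_address(addr)
    assert ip in networks[(tenant, net)], "address outside the network's subnet"
    return (tenant, net, str(ip))
```

Because the lookup key includes the network, the same IP string names two different virtual ports, which is exactly what multi-tenant L2 virtual networks require.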
11. Dynamic Network Creation + Association
• Tenant can use the API to create many networks.
• Can even plug in "instances" that provide more advanced network functionality (e.g., routing + NAT)
[Diagram: Tenant-A VMs (10.0.0.2, 10.0.0.3, 9.0.0.3, 9.0.0.2) attach to Tenant-A Net1 (10.0.0.0/24) and Net2 (9.0.0.0/24), which connect to an External Net (88.0.0.0/18).]
12. Quantum API Extensions
• Enables innovation in virtual networking.
• Add properties on top of existing network/port abstractions:
– QoS/SLA guarantees/limits
– Security filter policies
– Port statistics/NetFlow
• New services
– L3 forwarding, ACLs + NAT ("elastic" or "floating" IPs)
– VPN connectivity between cloud and customer site, or another cloud datacenter
13. Available Quantum Plugins
– Open vSwitch
• L2 isolation with VLAN or GRE tunneling
– Cisco UCS/Nexus
• L2 isolation with VLAN and UCS products
– Linux Bridge
• Pure Linux solution with Linux bridge, L2 isolation with VLAN
– NTT-Data Ryu
• L2 isolation with OpenFlow
– Nicira NVP
• Proprietary solution (also with OpenFlow)
– NEC OpenFlow
• L2 isolation with OpenFlow
– Big Switch
• L2 isolation with OpenFlow
– MidoNet
• Proprietary solution with OVS for L2 to L4
– Juniper
(Many of them are related to OpenFlow/SDN.)
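The plugin model this list illustrates can be sketched in a few lines of Python. This is a hypothetical illustration: the class and method names are invented and are not the real Quantum plugin interface.

```python
from abc import ABC, abstractmethod

# Sketch of "network backend as a plugin": the API layer talks to
# one abstract interface, and operators load a concrete backend
# (Open vSwitch, Linux bridge, a vendor controller, ...).
class NetworkPlugin(ABC):
    @abstractmethod
    def create_network(self, tenant_id: str, name: str) -> dict: ...

class OVSPlugin(NetworkPlugin):
    def create_network(self, tenant_id, name):
        # A real plugin would program Open vSwitch here.
        return {"backend": "ovs", "tenant": tenant_id, "name": name}

class LinuxBridgePlugin(NetworkPlugin):
    def create_network(self, tenant_id, name):
        return {"backend": "linuxbridge", "tenant": tenant_id, "name": name}

def api_create_network(plugin: NetworkPlugin, tenant_id: str, name: str) -> dict:
    # The tenant-facing API never needs to know which backend is loaded.
    return plugin.create_network(tenant_id, name)
```

Swapping `OVSPlugin` for `LinuxBridgePlugin` changes the backend without touching the API layer, which is the property that lets vendors plug their SDN solutions into Quantum.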
14. Quantum Project Update
• Folsom release:
– v2 API, with L2 + IP address management (IPAM)
– Tenant API with Keystone + Horizon integration
– Updated CLI
– Extensions:
• L3 "routers" and floating IPs
• "Provider networks" mapped to specific VLANs
• Tenant quotas
• Notifications
15. Use Cases in Quantum/Nova Network
• Rackspace
– Quantum NVP plugin
• Intel
– Nova Network now; moving to Quantum with Grizzly
• DreamHost
– Nicira NVP plugin
– Switch OEMed by Delta Networks
• Cisco WebEx
– Quantum UCS plugin
• eBay
– Nicira NVP plugin
• Sina
– Nova Network now; moving to Quantum with Grizzly
16. What is SDN
• SDN separates the control plane from the data plane in network switches and routers.
• The best-known technology in the SDN world is OpenFlow
– an open protocol designed to expose the internals of a router or switch and provide functionality to modify them (OpenFlow != SDN)
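The control/data-plane split this slide describes can be caricatured in a few lines: a "controller" installs match-action entries, and the "switch" only looks them up. This is a conceptual sketch, not the OpenFlow wire protocol; the match fields and action strings are invented.

```python
# Data plane state: a flow table of (match -> action) entries.
flow_table: dict[tuple, str] = {}

def controller_install(in_port: int, dst_mac: str, action: str) -> None:
    # Control plane: a remote controller decides forwarding policy
    # and pushes it down to the switch as flow entries.
    flow_table[(in_port, dst_mac)] = action

def switch_forward(in_port: int, dst_mac: str) -> str:
    # Data plane: the switch only matches packets against installed
    # entries; unmatched traffic is punted to the controller.
    return flow_table.get((in_port, dst_mac), "send-to-controller")
```

The separation is the whole point: forwarding policy lives in `controller_install`, while `switch_forward` stays a dumb, fast table lookup.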
17. What do we do in Quantum/SDN
• We use OpenStack + Quantum with the OVS plugin in an overlay model:
– Provides L2 isolation + virtual networks + L3 routing
• We focus on OpenFlow in a hop-by-hop model:
– Leveraging the open source projects Trema, ZeroMQ…
– Providing flow management and traffic engineering
– Providing a northbound API
18. Our SDN Framework Concept
[Diagram: SDN applications/orchestration call a northbound API over HTTP/REST; ZeroMQ (zmq) links it to Trema apps for monitoring, re-routing, and ECMP/traffic engineering; below them, flow manager, topology discovery, and routing/switch modules (the parts we are implementing) sit on the Trema framework, which speaks the OpenFlow protocol to the switches.]
19. My point of view and conclusion
• Networking can blend into the computing world through software abstractions (APIs)
• Quantum opens a door for networking vendors to plug in their SDN solutions
• Expect to see the Grizzly version of OpenStack/Quantum
20. Reference Sources
• OpenStack Folsom Summit
– http://www.openstack.org/summit/san-diego-2012/
• Quantum Project Update
– http://www.slideshare.net/danwent/quantum-grizzly-summit
• SDN is business, OpenFlow is technology
– http://www.networkcomputing.com/next-gen-network-tech-center/sdn-is-business-openflow-is-technology/240142193?pgno=1
• Mirantis: OpenStack Super Bootcamp material
– http://www.slideshare.net/openstack/openstack-super-bootcamppdf
• Quantum Plugin Comparison
– http://www.sebastien-han.fr/blog/2012/09/28/quantum-plugin-comparison/