SDN and NFV Reference Guide
1. SDN and OpenFlow, NFV and Virtual Network Reference
SDN: A Comprehensive Approach
Market Report: NV Solutions
SDN Series Part1-8 (TheNewStack)
P4: Programming Protocol-Independent Packet
Processors
CloudBay Networks Inc.
CEO
Timothy Lam
2. Trends
Know Why · Know What · Know Who · Know Where · Know How
Knowledge approach: Technologies, Vendors, Markets, Applications
3. Paradigm Shift
Distributed vs Centralized
Circuit-Switched (centralized): Center switchboard; connection-oriented; single point of failure; overhead in initiation
Internet (distributed): Autonomous devices; connectionless (packet-switched); multiple data streams; overhead in disassembly & reassembly
Virtual Network (mostly centralized): More centralized with cloud; packet-switched (can emulate circuit); distributed data-plane, but centralized control-plane
4. Paradigm Shift
Hardware vs Software
1990-2000 (software): Hardware in L1 only (bridging); hardware extended to L2 (switching); the rest in the Linux kernel
2000-2010 (hardware): Hardware extended to L3 (routing); hardware extended to L4 (QoS, ACL); control protocols in the device
2010-2020? (back to software): L1, L2 (forwarding only), and ACL in hardware; L2 (control), L3, and QoS in software; programmable control in the server
5. Key to SDN
Centralized Control
Traditional: Route updates (L3) among autonomous switches; slow convergence in case of topology change; static route optimization
SDN: Flow updates (L2-L4) from the central controller to switches over a secured channel; fast convergence since handled centrally; dynamic route optimization to live traffic loadings
6. Key to SDN
Programmability
Traditional (control/data-plane API: SNMP, CLI): Complicated header manipulation (long latency); limited changes are protocol-dependent; static & manual device configuration
SDN (control/data-plane API: OpenFlow): Directly programmable rules in the data-plane (line rate); any change possible by application; dynamic configuration from the controller
7. Key to SDN
Open Implementation
Traditional (proprietary control- and data-plane): Open routing packages (ex. Quagga) recompiled for each platform; merchant silicon must be compatibility-tested; result: vendor lock-in
SDN (open control- and data-plane): Open routers can be freely ported; merchant silicon can be optimized toward a standard; result: open to innovation
8. SDN Characteristics
Benefits
Plane Separation
• Data-plane in switches (or routers) forwards packets
• Control-plane in a server programs forwarding tables
Simpler Devices
• Simpler is better (ex. CISC to RISC, Unix to Linux)
Network Abstraction
• Distributed-state abstraction
• Forward-engine abstraction (cross vendor-specifics)
• Object abstraction
Openness
• Open projects to drive researcher and vendor
communities
• Open standards to ensure multi-vendor interoperability
9. SDN Characteristics
Drawbacks
Too Disruptive!
• Requires device/topology replacement and new expertise
Single Point of Failure
• Can be mitigated with HA and hardened links
• Controller clustering & hierarchy (root/leaf controllers)
Lack of DPI
• Unable to inspect L5-7 payload (ex. URLs, hostnames)
• Shunt traffic to IDS/IPS for inspection
Lack of Statefulness
• Packets are processed independently, ignoring prior state-changing
packets
• Unable to track dynamic port allocation (ex. FTP)
• Unable to follow session exchanges (ex. HTTP)
10. SDN Components
SDN Devices
Device Examples (software only)
Commercial
Switch Light (BSN): Tied to an ASIC (Broadcom) and OS (Linux Virtual Switch)
onePK (Cisco): Path determination, per-flow policy (QoS), auto configuration
Open-Source
Indigo (BSN): Integrated with the ASIC to run at line rate!
Software Implementation
Flow entries naturally map to data structures (ex. array, hash table)
Complicated logic required to process wildcard matching
Packet modification is easy
Statistics collectable in full
Hardware Implementation
Flow entries mapped, with some effort, to native CAM/TCAM tables
TCAM is natively designed to process wildcards and partial matching
Packet modification may be unavailable
Problematic in flow-count statistics
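The software-vs-hardware split above can be illustrated with a minimal sketch of TCAM-style wildcard matching done in software; the entry format and action names here are illustrative, not taken from any controller codebase.

```python
# Minimal sketch of TCAM-style wildcard matching in software.
# Each flow entry is a (value, mask, action) triple over a packed
# header represented as an int; a packet matches when the bits
# selected by the mask agree. Entries are checked in priority order.

def matches(header: int, value: int, mask: int) -> bool:
    """True if every bit selected by mask agrees between header and value."""
    return (header & mask) == (value & mask)

def lookup(header: int, entries):
    """Return the action of the first (highest-priority) matching entry."""
    for value, mask, action in entries:
        if matches(header, value, mask):
            return action
    return "send-to-controller"  # table-miss behavior, as in OpenFlow

# Example: 8-bit "header" where the high nibble is a port number.
entries = [
    (0x80, 0xF0, "drop"),      # low nibble wildcarded: any header 0x8?
    (0x00, 0x00, "forward"),   # match-all fallback entry
]
```

A real TCAM evaluates all entries in parallel in one clock cycle; the loop above is exactly the "complicated logic" a software implementation needs instead.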
11. SDN Components
SDN Controller
Controller Examples
Commercial
BNC (BSN): Compatible with FloodLight at the interface level
XNC (Cisco): Slices to partition the admin domain and TIF to interpolate
endpoints
Open-Source
FloodLight (BSN): Defines OpenFlow classes/interfaces and the Restlet
framework
Core Function Modules
• Device & topology discovery, flow management, statistics tracking
Interfaces (other than SNMP & CLI)
• REST & Java API northbound; OF-Config & OVSDB southbound
Current Challenges
• Application pipelining, northbound API standard, flow prioritization
13. SDN Components
SDN Controller: Trema
Quick Facts
1. Trema is more of a software development platform than a production
controller.
2. For an integrated development environment, Trema provides an emulator
and TremaShark for debugging.
3. Trema employs a multi-process model, in which modules are loosely
coupled via a messenger (6 APIs: send/receive notification/request/reply
messages).
4. The switch manager is responsible for creating the instance (switch
daemon) of a switch (switch.”OFS IPaddr:port” or switch.dpid).
Command Syntax (Ruby & C)
./trema run ./objects/examples/dumper/dumper -c
./src/examples/dumper/dumper.conf
15. SDN Components
SDN Controller: NOX/POX
Quick Facts
1. Originally developed by Nicira.
2. NOX applications typically determine how each flow is routed or not
routed in the enterprise network.
3. DSO deployer scans the directory structure for any components being
implemented as DSO (Dynamically Shared Objects).
4. Events drive all the execution in NOX. NOX events can be classified as core
events (datapath, flow, port…) and application events (host, link…).
5. NOX applications provide a component factory for the container, where the
container holds all the component contexts (including the component instance
itself).
Command Syntax (C++)
./nox_core [OPTIONS] [APP[=ARG[,ARG]...]] [APP[=ARG[,ARG]...]]...
./nox_core -v -i ptcp:6633 switch
17. SDN Components
SDN Controller: Ryu
Quick Facts
1. Strongly endorsed by NTT Labs.
2. Ryu has a large collection of libraries, ranging from southbound protocols
(OF-Config, NETCONF, OVSDB…) to various packet-processing operations
(packet builder/parser APIs for VLAN, MPLS, GRE…).
3. Includes an OpenStack Neutron plug-in that supports both GRE-based
overlay and VLAN configurations, with WSGI to let one easily
introduce newer REST APIs into an application.
4. Ryu applications are single-threaded entities, sending asynchronous
events to each other (with event handlers processing in a blocking fashion).
Command Syntax (Python)
ryu-manager [--flagfile <path to configuration file>] [generic/application
specific options...]
19. SDN Components
SDN Controller: Floodlight
Quick Facts
1. Floodlight was developed by BSN based on Beacon (Stanford).
2. 'Floodlight' is an umbrella term covering multiple projects such as
Floodlight Controller, Indigo, LoxiGen, and OFTest.
3. Two components to OpenStack: RestProxy (connectivity between
controller and Neutron) and VirtualNetworkFilter (MAC-based network
isolation).
4. Floodlight includes a RestAPI server using Restlets library. With the
Restlets, any module developed can expose additional REST APIs through
an IRestAPI service (implementing RestletRoutable in a class).
Command Syntax (Java)
curl http://10.0.0.1:8080/wm/core/controller/switches/json (to list the OFSs
connected to the controller)
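The curl command above can equally be issued from Python. A minimal sketch using only the standard library, assuming a Floodlight controller reachable at the slide's example address; the helper names are illustrative:

```python
# Sketch of querying Floodlight's REST API from Python, equivalent to
# the curl command above. /wm/core/controller/switches/json is the
# endpoint that lists the OpenFlow switches connected to the controller.
import json
from urllib.request import urlopen

def switches_url(controller: str, port: int = 8080) -> str:
    """Build the URL of the switch-listing endpoint."""
    return f"http://{controller}:{port}/wm/core/controller/switches/json"

def list_switches(controller: str):
    """Fetch and decode the list of switches known to the controller."""
    with urlopen(switches_url(controller)) as resp:
        return json.load(resp)

# list_switches("10.0.0.1") would return one JSON object per connected OFS.
```

Any module that registers routes through the IRestAPI service can be queried the same way, just with a different path.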
21. SDN Components
SDN Controller: ODL
Quick Facts
1. ODL is managed by the Linux Foundation, and is multi-vendor & multi-
project.
2. ODL is characterized by OSGi framework, vendor components, and SAL.
3. OSGi framework is mainly used by applications that will run in the same
address space as the controller, and it ensures modularity during
development and run-time (ISSU).
4. Vendor components are proprietary extensions including VTN Manager,
PCEP, GBP, SDNi, SFC, etc…
5. SAL is responsible for assembling the request by binding producer and
consumer into a contract, brokered and serviced by SAL:
5a. AD-SAL converts the language spoken by the protocol plugins into
application-specific APIs.
5b. MD-SAL dynamically generates APIs (RPC, RESTful, DOM…) from YANG models.
23. SDN Components
SDN Controller: ONOS
Quick Facts
1. ONOS was developed by ON.Lab, and is aiming for wide area network
(WAN) and service provider networks.
2. ONOS design principles are: “Intent-based networking”, “Distributed
controller architecture”, and “SDN and Service Providers”.
3. Intent can be described in terms of network resource, constraints, criteria
and instructions.
4. Various distributed techniques such as partitioning, sharding, aggregation,
replication, etc., define how controllers interact and share info.
5. Multiple service providers may be associated with a single subsystem.
6. ONOS cluster embraces several HA techniques: Anti-entropy protocol
(gossip-based), eventual consistency model, vector clocks, distributed
queues, and in-memory data grid.
25. SDN Components
SDN Applications: Reactive vs Proactive
Reactive application: Listens (listener API) for device messages, i.e. packets forwarded up by the switches; processes each packet; and responds (response API) with an action or flow change that the controller pushes into the switch flow tables.
Proactive application: Configures flows ahead of traffic through the controller's flow pusher, with an event listener over networks and devices, typically driven externally via REST API.
26. SDN Components
SDN Applications
Application Examples (open source)
Routing Protocols (proactive in nature)
RouteFlow: Map distributed routing tables into OpenFlow topology
Quagga: Provide IP routing protocols (ex. IS-IS, OSPF)
The BIRD: Provide IP routing protocols (ex. IS-IS, OSPF)
Security (reactive in nature)
FortNOX: Provide security mediation service through reconciling policies
FRESCO: Scripting language to prototype security detection and mitigation
Proactive
Mostly written above the network abstraction, so they use high-level APIs (ex. REST API)
Example: spanning tree, multipath forwarding
Deal with more aggregate flows (ex. TCP port-specific), so fewer flow entries
Reactive
Mostly written in the controller's native language, so they use low-level APIs (ex. Java, Python)
Example: per-user firewall, security access
Deal with more granular flows (ex. NAC), so more flow-entry hungry
27. SDN Components
Useful SDN Tools
Benchmarker & Simulator Examples (open source)
Cbench: Emulate a variable number of switches that send packets to the
controller and observe the controller's responses
OFLOPS: Emulate a standalone controller that sends/receives messages with
switches and observes the switches' responses
MiniNet: Simulate a large network of switches and hosts (not SDN-specific)
Orchestrator Examples (open source)
FlowVisor: Enable multiple controllers to share physical switches (slicing)
Maestro: Provide an interface for NAC to access and modify the network
OESS: Provide user-controlled VLAN provisioning with OpenFlow switches
NetL2API: Provide a generic API to control L2 switches via vendors' CLIs, not
OpenFlow (for non-OpenFlow network virtualization)
28. OpenFlow Protocol
Introduction
Definition
• Defines the communication between the data-plane and control-plane
• Defines part of data-plane behavior (none of the controller's)
Origin of Development
• A Stanford project that attempted to build generic programming of
various switch implementations based on common ASICs
Components
• (Similar to SDN) Switch, controller, protocol, and secure channel
29. OpenFlow Protocol
OpenFlow 1.0 Basics
Flow Table & Entries
• Each entry has header fields, counters, and actions
Match Fields: 12-Tuple
• L2: Switch input port, VLAN ID, VLAN priority, MAC
addresses (src/dst), EtherType
• L3: IP addresses (src/dst), IP protocol, IP ToS bits
• L4: TCP/UDP ports (src/dst)
Virtual Ports
• CONTROLLER/TABLE, LOCAL/NORMAL, ALL/FLOOD,
IN_PORT, <specified port>
Message Types
• Symmetric, controller-switch, asynchronous
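The 12-tuple above can be modeled directly. A minimal sketch of an OpenFlow 1.0 flow entry with wildcard fields, counters, and actions; field names mirror the list above, while the class itself is illustrative, not a switch implementation:

```python
# Sketch of an OpenFlow 1.0 flow entry: 12 match fields, counters, and
# an action list. A None field acts as a wildcard (matches anything).
from dataclasses import dataclass, field
from typing import Optional

MATCH_FIELDS = ("in_port", "vlan_id", "vlan_priority", "eth_src", "eth_dst",
                "eth_type", "ip_src", "ip_dst", "ip_proto", "ip_tos",
                "tp_src", "tp_dst")

@dataclass
class FlowEntry:
    in_port: Optional[int] = None
    vlan_id: Optional[int] = None
    vlan_priority: Optional[int] = None
    eth_src: Optional[str] = None
    eth_dst: Optional[str] = None
    eth_type: Optional[int] = None
    ip_src: Optional[str] = None
    ip_dst: Optional[str] = None
    ip_proto: Optional[int] = None
    ip_tos: Optional[int] = None
    tp_src: Optional[int] = None
    tp_dst: Optional[int] = None
    actions: list = field(default_factory=list)
    packet_count: int = 0  # per-entry counter, as OF 1.0 requires

    def matches(self, pkt: dict) -> bool:
        """A packet matches when every non-wildcard field agrees."""
        return all(getattr(self, f) is None or pkt.get(f) == getattr(self, f)
                   for f in MATCH_FIELDS)

# Entry matching IPv4 traffic to 10.0.0.5, all other fields wildcarded.
entry = FlowEntry(eth_type=0x0800, ip_dst="10.0.0.5", actions=["output:2"])
pkt = {"eth_type": 0x0800, "ip_dst": "10.0.0.5", "tp_dst": 80}
if entry.matches(pkt):
    entry.packet_count += 1
```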
30. OpenFlow Protocol
OpenFlow 1.1 Additions
Multiple Flow Tables & Action Set
• Together construct an instruction-based process pipeline
which is very programmable
Group Table, Entries & Action Buckets
• Perform individual pre-processing before packets are
forwarded to each specified port (in a multicast)
• Simplify rerouting to a new next-hop port (from multiple
flows)
MPLS & VLAN Tag
• PUSH/POP actions to support MPLS/VLAN encapsulation
Controller Connection Failure
• Fail secure mode (as usual) & fail standalone mode
(native)
31. OpenFlow Protocol
OpenFlow 1.2 Additions
Extended Match Descriptor
• Set of TLV pairs to match virtually any header field
• No more complicated parsing and hardcoding
• EXPERIMENTER match class for additional payload fields
Extended Context Info
• For messages from switch to controller (PACKET_IN)
• Include input virtual/physical port, metadata from packet-
matching pipeline
Multiple Controllers
• Equal mode where all can program the switches
• Master/slave mode where slaves can only read statistics
32. OpenFlow Protocol
OpenFlow 1.3 Additions
Per-Flow Meters & Meter Bands
• Discrete levels of bands (threshold) to match current
usage
• Matched band enforces QoS control actions (DROP/DSCP)
Per-Connection Filtering
• Controllers can filter asynchronous messages from
switches with the SET_ASYNC message
Auxiliary Connections
• Data packets travel between switches and controller over
auxiliary connections; control messages use the primary connection
Cookies
• Flow-entry cookies in controller caches to boost
performance
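The meter-band idea can be sketched in a few lines; thresholds, rates, and action names below are illustrative, not the spec's wire format:

```python
# Sketch of OpenFlow 1.3 per-flow metering: each meter carries bands
# with rate thresholds; the highest band whose threshold the measured
# rate exceeds determines the QoS action (e.g. DROP or DSCP remark).
# Rates are in kb/s.

def apply_meter(measured_rate, bands):
    """Return the action of the highest-rate band exceeded, else 'pass'."""
    chosen = "pass"
    for threshold, action in sorted(bands):   # ascending thresholds
        if measured_rate > threshold:
            chosen = action                   # keep the highest one exceeded
    return chosen

# Soft limit remarks DSCP; hard limit drops.
bands = [(1000, "dscp-remark"), (5000, "drop")]
```

This is the discrete-level behavior the slide describes: traffic under every band passes untouched, traffic over a band gets that band's enforcement.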
33. P4 (OpenFlow 2.0?)
P4
Used to configure packet processing (@ data-plane)
Programmable parser can define new headers
Actions are composed from protocol-independent primitives
Match+action stages in parallel or series
Classic OpenFlow 1.x
Used to populate forwarding tables (@ control-plane)
Pre-defined set of header fields
Pre-defined small set of actions
Match+action stages in series
34. P4 (OpenFlow 2.0?)
Objectives
• Reconfigurability: Controller able to redefine packet parsing and processing
• Protocol Independence: Controller able to specify the header fields to
extract and the tables to process these headers
• Target Independence: Turn a target-independent description into a
target-dependent program (for ASIC, NPU, FPGA, etc.)
2-Step Compile
• 1st step (high-level): Express in an imperative language to represent the
control flow
• 2nd step (below): Translate the P4 representation into TDGs (Table
Dependency Graphs) for dependency analysis, then map the TDG to a
specific switch target
36. P4 (OpenFlow 2.0?)
A header definition describes the sequence and structure of a
series of fields. It includes specification of field widths and
constraints on field values.
Components- Headers
37. P4 (OpenFlow 2.0?)
A parser definition specifies how to
identify headers and valid header
sequences within packets.
P4 assumes the underlying switch can
implement a state machine that
traverses packet headers from start
to finish, extracting field values as it
goes.
Components- Parsers
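The state machine P4 assumes can be sketched in ordinary Python for illustration (a real parser is declared in the P4 language and compiled to the target); the frame layout is standard Ethernet/IPv4:

```python
# Sketch of the parser state machine P4 assumes: traverse headers from
# the start of the packet, extract field values, and pick the next state
# from a field value, as a P4 parser's select() transition would.
import struct

def parse(packet: bytes) -> dict:
    headers = {}
    # State "ethernet": 6B dst MAC, 6B src MAC, 2B etherType.
    dst, src, eth_type = struct.unpack_from("!6s6sH", packet, 0)
    headers["ethernet"] = {"dst": dst, "src": src, "etherType": eth_type}
    # Transition on etherType: 0x0800 selects the IPv4 state.
    if eth_type == 0x0800:
        # State "ipv4": version/IHL is byte 0, protocol is byte 9 of the
        # IP header, which starts at offset 14.
        ver_ihl, proto = packet[14], packet[14 + 9]
        headers["ipv4"] = {"version": ver_ihl >> 4, "protocol": proto}
    return headers

# A minimal Ethernet+IPv4 frame: 14B Ethernet, then a 20B IPv4 header
# with version 4, IHL 5, and protocol 6 (TCP).
frame = (b"\xaa" * 6 + b"\xbb" * 6 + b"\x08\x00"
         + bytes([0x45]) + b"\x00" * 8 + bytes([6]) + b"\x00" * 10)
hdrs = parse(frame)
```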
38. P4 (OpenFlow 2.0?)
Match+action tables are the mechanism for performing packet processing. A P4 program defines the fields on which a table may match and the actions it may execute. The programmer describes how the defined header fields are to be matched in the match+action stages (e.g., should they be exact matches, ranges, or wildcards?) and what actions should be performed.
Components- Tables
39. P4 (OpenFlow 2.0?)
P4 supports construction of complex actions from simpler
protocol-independent primitives. These complex actions are
available within match+action tables.
P4 assumes parallel execution of primitives within an action
function.
Components- Actions
40. P4 (OpenFlow 2.0?)
The control program determines the order of match+action
tables that are applied to a packet. A simple imperative
program describes the flow of control between
match+action tables.
Control flow is specified as
a program via a collection
of functions, conditionals,
and table references.
Components- Control Flow
41. SDN Alternatives
Open SDN
Physically Centralized Controller
The control-plane is physically decoupled from the data-plane
The controller (on a server) communicates with the data-plane (on switches) using OpenFlow
Flow tables are synchronized in between
The SB API provides abstraction to the applications above (through the NB API)
A global view of the current topology and live traffic loads is in place
Examples: Beacon, FloodLight/BNC, Indigo/SwitchLight, OVS/NVP
42. SDN Alternatives
SDN via APIs
Partially Centralized Controller
The control-plane remains on each switch
The controller only automates switch configuration via improved APIs
Configurations through SNMP/CLI are still static and error-prone
SDN-appropriate APIs must be dynamic and take immediate effect upon changes (ex. RESTful API)
Applications still have to synchronize the distributed control-planes
Examples: ODL/XNC, SDN from Arista, Brocade, etc.
43. SDN Alternatives
SDN via Network Overlays
Distributed or Logically Centralized Controller
Virtualized network overlay on the existing physical network (unchanged)
The controller only ensures mappings from VMs to tunnel endpoints (VTEPs)
Distributed approach: place a "control agent" on each vSwitch (on the hypervisors)
Alternative logically-centralized approach: "controller instances" on the vSwitches
L3 tunnels (MAC-in-IP) in use: VXLAN (VMware), NVGRE (Microsoft), STT (Nicira)
Overlay solutions differ in how virtual MAC addresses are learned across tunnels
Fully DPI-capable and state-aware (since any feature can be implemented)
Examples: NSX, Contrail, DOVE, MidoNet, etc.
44. SDN Alternatives
SDN via Open Device
No Controller!
Dependent on how "open" chip vendors are willing to be (Broadcom, Intel)
Dependent on the popularity of open Linux (ONL, Cumulus) as the switch OS
Similar approach to the WiFi router's OpenWRT
Somewhat applicable to data-center switches, but not enterprise switches
Examples: BMS (QCI), DPDK (Intel), OF-DPA (BRCM), CL (Cumulus)
Open layers: chip-level (ASIC interface, SDK), board-level (BSP), OS-level (ONL/ONIE), protocol stacks (OVS…), apps (REST/RPC APIs)
45. SDN Alternatives
Comparing Side-by-side

                         Open SDN   SDN via APIs   SDN via Overlays
Benefits
Plane Separation         high       low            medium
Simpler Devices          high       low            medium
Network Abstraction      high       medium         high
Openness                 high       low            medium-high
Drawbacks
Too Disruptive!          low        high           n/a
Single Point of Failure  medium     medium         medium
Lack of DPI              low        low            medium
Lack of Statefulness     low        low            medium
46. SDN in Data Center
Current Technologies
Tunneling (L3)
VXLAN
• UDP header (source port hash for LB); VXLAN ID = 24 bits
NVGRE
• GRE header (no src/dst ports); Virtual Subnet ID = 24 bits
STT
• TCP header (ports yet to be ratified); Context ID = 64 bits
Multi-Pathing
MSTP
• Each VLAN with its own spanning tree (share unused ports)
SPB
• Use IS-IS to determine optimal paths
• Apply Q-in-Q at the edge and MAC-in-MAC in the core (for QoS)
Fat-Tree
• Aggregate bandwidth consistent across all tiers (non-blocking)
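The VXLAN 24-bit ID mentioned above lives in an 8-byte header carried over UDP. A minimal sketch of packing and unpacking it (the 0x08 flag value is the I bit from the VXLAN RFC; function names are illustrative):

```python
# Sketch of the 8-byte VXLAN header: one flags byte (I bit set when a
# valid VNI is present), 3 reserved bytes, a 24-bit VNI, 1 reserved
# byte. The 24-bit VNI is what lifts the segment count from 4096 VLANs
# to roughly 16 million virtual networks.
import struct

def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 2**24, "VNI is 24 bits"
    # First word: flags byte 0x08 followed by 3 reserved zero bytes.
    # Second word: VNI in the top 24 bits, reserved byte in the bottom 8.
    return struct.pack("!I", 0x08 << 24) + struct.pack("!I", vni << 8)

def vni_of(header: bytes) -> int:
    """Recover the 24-bit VNI from bytes 4-6 of the header."""
    return struct.unpack("!I", header[4:8])[0] >> 8
```

NVGRE carries its 24-bit Virtual Subnet ID in the GRE key field instead, and STT widens the context to 64 bits; the packing idea is the same.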
47. SDN in Data Center
Data Center Demands
Overcome Current Limitations
• L2 networks stretched across WANs by MAC-in-IP tunneling, plus
server virtualization, lead to MAC address explosion
• VLAN limit is natively 4096 (12 bits)
• Cross-sectional bandwidth, not single-rooted hierarchy
(STP)
Add, Move, Delete Resources
• Allocate resources before network services come online
Failure Recovery
• Network restored to known state (deterministic paths)
Multitenancy
Traffic Engineering
• Consider current traffic load (or congestion)
• Increasing East-West traffic due to virtualized workloads
48. SDN in Data Center
Open SDN
Overcome Current Limits
• The controller creates tunnels, then routes traffic into the appropriate
tunnels
• Needs hardware switches with built-in tunneling support
Failure Recovery
• Restore routes based on traffic loads, time of day, and scheduled or
observed loads over time
Traffic Engineering
• Directly control hardware network traffic down to the flow level
49. SDN in Data Center
SDN via APIs
Add, Move, and Delete Resources
• Needs a controller aware of server-virtualization changes
• But fundamental capabilities are still unchanged
Failure Recovery
• Needs the controller to automate device updates and centralize
route and path management (not typically the case)
Traffic Engineering
• Combine traffic-monitoring tools with PBR and SNMP/CLI APIs
to provide traffic engineering (ex. RSVP, MPLS-TE)
50. SDN in Data Center
SDN via Overlays
Overcome Current Limits
• VTEPs further upstream, or more VMs per hypervisor, maximize the
MAC-address saving
Add, Move, and Delete Resources
• Tasks performed in virtual tunnels are less complicated than if
applied and replicated on all physical devices
Multitenancy
• VLANs are relevant only within a single tenant, so 4096 VLANs suffice
51. SDN in Data Center
Comparing Side-by-side
                          Open SDN   SDN via APIs   SDN via Overlays
Demands
Overcome Current Limits   yes        no             yes
Add, Move, Delete         yes        yes            yes
Failure Recovery          yes        no             no
Multitenancy              yes        no             yes
Traffic Engineering       yes        some           no
52. SDN in Other Settings
WANs
• Yield deterministic best LSPs in an MPLS network, rather
than traditional RSVP with unpredictable or competing
LSPs
SP and Carrier Networks
• Push/pop MPLS/VLAN tags or PBB encapsulation to
route traffic within and between carrier networks
Campus Networks
• Traffic redirection of unauthenticated flows to captive
portal
• Traffic suppression based on hostnames or IP addresses
Mobile Networks
• Controller redirect traffic from multi-vendor hotspot to
registered mobile network for usage charge and QoS
policy
Optical Networks
• Controller redirects elephant flows to circuit-switched optical paths
53. SDN in Border Cases
Mobile Roaming
Traffic Offload: Auto-roam to the RAN with the lighter load (ex. 3G to WiFi)
Media-independent Handover: MBB or BBM handover from BS
to AP
Infra-controlled Roaming: Explicit control to appoint AP to client
Big Data Flows
Hadoop Offload: Rapid flow-table sync across multiple switches to
direct Hadoop traffic to optical devices
Smart Wireless Backhaul
For Providers: Segregate different traffic types from providers to
different flows into shared backhaul (based on SLAs)
For Consumers: OVS on smartphone to choose best RAN and AP
Energy Saving
For AP: Adjust down the transmission power level when traffic is
light
54. SDN Ecosystem
Academic
Stanford
UC Berkeley
Indiana (InCNTRE)
ONRC
ON.LAB
Industry Research
HP
NTT
Microsoft Labs
NEC
Software Vendor
VMware
Microsoft
Big Switch Network
Cumulus
ODM
Quanta
etc…
Merchant Silicon
Broadcom
Mellanox
55. SDN Ecosystem
Industry Alliances
Open Networking Foundation (ONF)
Members are mostly LDCs:
Google, Yahoo!, Facebook,
NTT, Verizon, Deutsche
Telekom
(and Goldman Sachs)
Focus on OpenFlow to
communicate between
controller and SB devices
Major proponent of
OpenSDN!
OpenDayLight (ODL)
Members are mostly NEMs:
Cisco, Brocade, Juniper, IBM,
NEC, Fujitsu, Huawei,
Ericsson
(and VMware, Citrix, Red
Hat)
Focus on NOS (a universal
controller) to support all
NB apps and SB protocols!
May divert into SDN via
APIs!
56. Major SDN Acquisitions
Cisco
Acquired Cariden (11 years old) for $141M
Cariden specializes in mapping flows to MPLS LSPs and VLANs
Acquired Meraki (6 years old) for $1.2B
Meraki specializes in cloud-based control of Wi-Fi APs and wired
switches
Acquired Insieme (1 year old) for $863M (spin-in)
Insieme makes new routers and switches interoperable with other vendors
but working better with Cisco-proprietary configuration (ACI)
Juniper
Acquired Contrail (<1 year old) for $176M (spin-in)
Contrail specializes in network virtualization and applications to address
East-West traffic patterns in the data center
Fully supports OpenStack. No support for OpenFlow (XMPP only)
VMware
Acquired Nicira (5 years old) for $1.26B
Nicira specializes in network virtualization, preferably with OpenFlow
After the acquisition, the OpenFlow component was replaced with a proprietary one
57. SDN Startups
OpenFlow Followers
Big Switch Network
Started with free Indigo software switch to popularize Floodlight
controller
Paired with commercial versions named Switch Light and BNC
Then changed to sell bundles on white-box switches with bootloader to
download Switch Light and self-configure (Big Pivot)
Provide purpose-driven solutions: Big Tap (network monitoring) and
Big Fabric (network virtualization)
Rather than an overlay, Big Fabric replaces all physical switches with
white-box switches
Pica8
NOS to integrate OVS with white-box switches to build OF-based
network
Control is logically centralized with OVS switching agent on each switch
Cumulus Network
Turn switch into “server” with great number of NICs (Cumulus Linux)
58. SDN Startups
Network Virtualization
ConteXtream
Uses grid computing for a distributed network-virtualization solution with
global knowledge of the network (similar to IS-IS, OSPF)
Control of session routing is at the rack level (control agent at the TOR server)
PLUMgrid
Provide SDN via overlays solution well integrated with ESX and KVM
Proprietary protocol to direct traffic to VXLAN tunnels rather than
OpenFlow
Proprietary virtual switch implementation rather than OVS or Indigo
Pertino
Provision to corporate users on-demand WAN or LAN connectivity for
private network through the Internet (SDN via Cloud)
Usage-based charge model and monthly subscription (free for up to 3
users)
59. SDN Startups
NFV
Embrane
Virtualize load balancer, firewall, VPN, and WAN optimization through
distributed virtual appliance (DVA)
Annual subscription model (fix-rate) and usage-based model to charge
hourly for certain bandwidth (on-demand)
Pluribus
Feature virtual load balancer and firewall distributed across Pluribus
“server switches” with some assistance from Pluribus SDN controller
Pluribus controller is interoperable with OpenFlow and NSX controllers
Midokura
Create virtual switch, load balancer, and firewall on virtualized network
overlay (many hypervisors supported)
Feature unified management software to administer thousands of
virtual networks from a single physical network
60. SDN Startups
Optical and Mobile Switching
Plexxi
Apply centralized controller concept for L2 and L3 into L1 for optical
switch
Feature dual-mode switch (Ethernet & optical) and Plexxi controller
Switch interconnection traffic is in optical paths
Normal flows (short-duration) are in Ethernet paths
Elephant flows (persistent) are shunted to Calient optical switch
Tollac
SDM to connect diverse network services to a shared virtual network
WaaS to dynamically provision network services in a multitenant WiFi
environment (with fine-grained control of tunnels)
61. Future for SDN
Gartner Hype Curve
2012: Massive VC investments into SDN startups and rapid acquisitions by
incumbents
2013: Solution-oriented bundles required, not just open-source software alone
2014: OF-specific ASICs built, best practices to be formulated, business will
consolidate
SDN is the future. The best way to predict the future is to make it!
(Curve plots visibility against maturity through the Technology Trigger, Peak
of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment,
and Plateau of Productivity; 2012-2014 are marked along the early stages)
62. Reality Check !!
Know Why · Know What · Know Who · Know Where · Know How · Know Why Not
Success of SDN depends on the maturity of OpenFlow, catalyzed by commercial
POCs and an OpenFlow ASIC.
63. Challenges to OpenFlow
Control Plane
Master-Slave Failover Issue
• Need to define master re-election mechanism
Balance between Centralized and Distributed
Control
• Consider regionally centralized with globally distributed?
Performance Issue (with Flow Entries)
• Out-of-sequence problem when network is large
Incompatibility Issue
• Need private extensions for some features resulting in
incompatibility (PLUMgrid fixed this)
Security Issue
• Controller-switch channel (and controller-controller)
need further security beyond TLS
64. Challenges to OpenFlow
Data Plane
Over-Specs
• TCAM Hungry: Need to be able to mask any match field(s) at will
• Multi-Flow Table (with "cascading"): Action output from one table becomes
the input for the next table lookup
• Protocol-Neutral: Like ACLs, but needing far more actions and any field
modifiable
Under-Specs
• Stateless & Timeless: Unlike traditional devices with state machines and
timestamps (demanded in telco use cases)
• No Conditional Branch (if/else if…then): Very expensive to implement with
a large number of flow tables
• Switch/Router Only (currently): Needs extending to other devices (ex. FW,
SLB, etc.)
65. ONF Resolutions
CAB (Nick McKeown; Intel, Broadcom, Mellanox, Huawei, etc.)
FAWG
Aggressive track:
• Type table (user defines a new header offset and length)
• SRAM to replace TCAM (use metadata to cascade hash lookups)
• Packet modifications (any match field can be modified independently)
• Shared memory (variable number of flow tables and table sizes)
Progressive track:
• NDM (Negotiable Data-plane Model): current ASIC capabilities mapped into
a number of models (profiling the number of flow tables, exact match
fields, and instructions supported); behind the scenes these are mostly
ACLs (and route, MAC, VLAN, MPLS tables, etc.)
66. Complementary Frameworks
SDN vs NV vs NFV
SDN on NV: Centralized control of virtual networks to enable automation
SDN on NFV: Decoupling FW, SLB, etc. is the first step toward virtualization
(as well as cross-vendor compatibility and easier management)
NV objective: Create a virtual network fabric above the physical network to
relieve constraints (from software vendors)
NFV objective: Reallocate services from appliances to commodity hardware to
save cost (from telecom)
SDN objective: Decouple CP/DP and provide abstraction to foster network
innovation (from research)
67. NV Alternatives
Direct Fabric Programming vs Overlays
Placement options range from the VM down to the TOR switch: a device
driver/agent in the guest, running as a VM instance, modifying or replacing
the vSwitch inside the hypervisor, or direct fabric programming of the TOR
switch.

Approach       | Pros                      | Cons
VM/Driver      | Mobile with portable VMs  | Scalability limit; little QoS
Overlay        | More guest-OS support     | Needs access to hypervisor kernel
Direct Fabric  | Strong QoS/SLA controls   | Vendor must support OpenFlow
68. NV Vendors (Top 4)
VMware (~32%)
NSX Controller Cluster: Semi-distributed control with logical routers (in the
thousands), port groups mapped to hypervisor switches
NSX Edge Gateways: Edge routers as appliances with their own routing tables
Micro-segmentation: Stateful firewall at VM-vNIC granularity
Cisco (~21%)
ACI: Centralized policy management with family of N9K, UCS, APIC, and AVS
Nexus 1000V: Multi-hypervisor overlay with distributed virtual switches and
CSR
Cisco Intercloud to connect over 30 telecom providers
Juniper (~13%)
Contrail: Multi-DC and inter-cloud with distributed control plane and deep
analytics
Northstar: WAN virtualization, integrable with OSS/BSS and NEBS-compliant;
features NFV service-chaining and monetization for service providers
HP (~12%)
VCN: Integrated with Helion OpenStack and FlexFabric, for open-source clouds
VAN: Combined SDN controller with the VMware NSX platform, for large
Policy as in PBR: Video-streaming packets (loss okay but latency-sensitive) to short path, email packets (latency okay but loss-sensitive) to redundant paths.
Open implementation of OSPF, RIP, etc…
Open interfaces: Switch configuration interface, programming interface, and even forwarding behavior! (standardization is okay but open is hard!)
Linux opening up from Unix took years…
Hardware switches include: HP 3500/5400/8200; Brocade NetIron CER, CES; IBM RackSwitch and Flex; NEC ProgrammableFlow 5240/5820; Juniper MX/EX; Arista 7050
OpenFlow primitive classes and interfaces are IOFSwitch, OFMatch, OFAction, etc…
TIF = Topology Independent Forwarding
SDN is very suitable for security apps, because:
Based on policies
Static control preferred (hard to control dynamic)
Network-wide coordination
Problem: backward-compatibility issues (ex. on connection loss to the controller: emergency flow cache (1.0) vs. restart anew (1.2))
OCP: BMS only has a bootloader; any 3rd-party OS can be ported onto the hardware, as well as re-compiled and re-developed (if open-source)
Infra-controlled Roaming uses Odin (LVAP) to achieve the feature
Microsoft is member of both alliances!
ONF: Control plane vs Data plane separated
ODL: Management plane vs Data plane separated (so CP can join DP, or join MP)
Cisco ACI = Application Centric Infrastructure
SDM = Software defined mobility
WaaS = Wifi-as-a-service
Further concerns:
In NV, SDN used to control hypervisors (vSwitches) , no need to change physical switches.
What if ODL drops support for OF-Config?
Can be fixed in software, relatively less effort.
Need resolution with ASIC, tremendous effort and risk!
FAWG=forward abstraction working group
CAB=chipmaker advisory board (https://www.opennetworking.org/technical-communities/groups/chipmakers-advisory-board)
The lookup result at each flow-table stage carries a piece of information telling the next flow table which combination of fields to use for its hash lookup.
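That cascading idea, which is also behind FAWG's "SRAM to replace TCAM" item, can be sketched as follows; table contents and field names are illustrative:

```python
# Sketch of metadata-cascaded hash lookup: each table's result carries
# both an action and the field combination the NEXT table should hash
# on, so plain hash tables in SRAM can chain like a pipeline.

def cascade_lookup(pkt: dict, tables: list) -> str:
    key_fields = ("eth_dst",)          # fields the first table hashes on
    for table in tables:
        key = tuple(pkt[f] for f in key_fields)
        result = table.get(key)        # exact-match hash lookup, no TCAM
        if result is None:
            return "miss"
        action, next_key_fields = result
        if next_key_fields is None:    # end of the pipeline
            return action
        key_fields = next_key_fields   # metadata steers the next hash
    return "miss"

# Table 0 hashes on eth_dst and tells table 1 to hash on (ip_dst,).
t0 = {("aa:bb",): ("goto:1", ("ip_dst",))}
t1 = {("10.0.0.5",): ("output:3", None)}
```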