Efficient Topology Discovery
in Software Defined Networks
Farzaneh Pakzad, Marius Portmann, Wee Lum Tan,
and Jadwiga Indulska
School of ITEE, The University of Queensland
Brisbane, Australia
Presented by Farzaneh Pakzad
8th International Conference on Signal Processing and Communication Systems
15 - 17 December 2014, Gold Coast, Australia
1
Overview
• Background
• SDN Topology Discovery
– State of the art
– Improved version
• Experiments
• Conclusion
2
Logical view of the Software Defined Networking (SDN) Architecture
[Figure: SDN architecture, with the controller's Topology Discovery service highlighted]
3
OpenFlow
• Topology discovery provides and maintains a global view of the network
• Topology discovery underpins the logically centralized network management and configuration
[Figure: Topology Discovery service on the controller, reaching the switches via OpenFlow]
4
SDN Topology Discovery - State of the art
• Controller performs Topology Discovery
• OFDP = OpenFlow Discovery Protocol
– Implemented by most SDN controllers → de facto standard
– OFDP leverages the packet format of LLDP
– but OFDP itself operates quite differently from LLDP
• LLDP = Link Layer Discovery Protocol
– IEEE 802.1AB
– Used in traditional Ethernet network devices
5
SDN Topology Discovery
Topology Discovery frame structure
• LLDP information is sent by devices in the form of an Ethernet frame.
• Each frame contains one LLDP Data Unit (LLDPDU).
• Each LLDPDU is a sequence of type-length-value (TLV) structures.
[Frame structure: Ethernet Frame (LLDP EtherType) carrying one LLDPDU =
 Chassis ID TLV | Port ID TLV | Time To Live TLV | Optional TLVs | End of LLDPDU TLV
 (a construction sketch follows below)]
6
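To make the frame layout concrete, here is a minimal Python sketch that assembles such a frame using the standard IEEE 802.1AB TLV types (1 = Chassis ID, 2 = Port ID, 3 = Time To Live, 0 = End of LLDPDU) and the LLDP EtherType 0x88CC. It illustrates the format only and is not code from the presented implementation; the `build_lldp_frame` helper name is ours.

```python
import struct

LLDP_ETHERTYPE = 0x88CC                           # EtherType that marks an LLDP frame
LLDP_MULTICAST = bytes.fromhex("0180c200000e")    # standard LLDP destination MAC address

def tlv(tlv_type, value):
    """Encode one TLV: 7-bit type and 9-bit length in a 2-byte header, then the value."""
    return struct.pack("!H", (tlv_type << 9) | len(value)) + value

def build_lldp_frame(chassis_id, port_id, src_mac, ttl_seconds=120):
    """Return an Ethernet frame carrying one LLDPDU (Chassis ID, Port ID, TTL, End)."""
    lldpdu = (
        tlv(1, b"\x07" + chassis_id.encode())        # Chassis ID TLV, subtype 7 = locally assigned
        + tlv(2, b"\x07" + port_id.encode())         # Port ID TLV, subtype 7 = locally assigned
        + tlv(3, struct.pack("!H", ttl_seconds))     # Time To Live TLV
        + tlv(0, b"")                                # End of LLDPDU TLV
    )
    header = LLDP_MULTICAST + src_mac + struct.pack("!H", LLDP_ETHERTYPE)
    return header + lldpdu

# Example: the kind of frame OFDP would emit for switch s1, port 1
frame = build_lldp_frame("s1", "1", bytes.fromhex("000000000001"))
```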
OFDP
• Controller creates individual LLDP packets for each port of each switch (sketched below)
[Figure: switch s1 with Ports 1–3 and a link S1-port1 --- S2-port3; the controller hands s1 one
 LLDP packet per port, each with Chassis ID = s1 and Port ID = Port 1, Port 2 or Port 3 respectively]
7
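A rough sketch of this per-port behaviour, reusing the `build_lldp_frame` helper above; `send_packet_out` is a hypothetical stand-in for the controller's OpenFlow Packet-Out call (in POX this would be an ofp_packet_out message), not an actual library function.

```python
def ofdp_discovery_round(switches, send_packet_out):
    """OFDP: one LLDP Packet-Out per port of every switch.

    switches: dict mapping a datapath ID to a list of (port_no, port_mac) tuples,
    where port_mac is the 6-byte MAC address of the port.
    """
    for dpid, ports in switches.items():
        for port_no, port_mac in ports:
            frame = build_lldp_frame(chassis_id=str(dpid),
                                     port_id=str(port_no),
                                     src_mac=port_mac)
            # The Port ID TLV names the egress port, so this frame may only
            # ever leave through that single port.
            send_packet_out(dpid, actions=[("output", port_no)], data=frame)
```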
Idea: reduce the number of Packet-Out messages
• Controller load and performance is critical for the scalability of SDN
• Reduce the number of Packet-Out messages from one per port of every switch to one per switch
• Ask the switch to send the packet out on all its ports (sketched below)
[Figure: the controller sends a single LLDP Packet-Out to switch s1, which emits it on Ports 1–3]
8
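A sketch of the intended saving: one Packet-Out per switch, with an output action on the reserved OpenFlow 1.0 port OFPP_ALL ("send out all ports"). As the next slide shows, this naive version breaks OFDP, because every copy carries the same Port ID TLV; the helper names are again hypothetical.

```python
OFPP_ALL = 0xFFFC   # OpenFlow 1.0 reserved port number: output on all physical ports

def one_packet_out_per_switch(switches, send_packet_out):
    """Naive reduction: a single LLDP Packet-Out per switch (illustrative only)."""
    for dpid, ports in switches.items():
        frame = build_lldp_frame(chassis_id=str(dpid),
                                 port_id="?",       # problem: one frame cannot name every egress port
                                 src_mac=bytes(6))
        send_packet_out(dpid, actions=[("output", OFPP_ALL)], data=frame)
```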
The Problem
• Can we instruct the switch to rewrite the LLDP Port ID TLV on the fly?
• The answer is no: OpenFlow switches do not support accessing or rewriting packet payload data
[Figure: with a single Packet-Out, switch s1 emits identical LLDP packets (Chassis ID = s1,
 Port ID = Port 1) on all three ports, so the egress port can no longer be identified]
9
Our proposed improvement: “OFDPv2”
• What about rewriting the packet headers instead?
• In particular, rewrite the source MAC address of the outgoing Ethernet frame
  (the controller can map MAC addresses back to port numbers; a sketch follows below)
[Figure: switch s1 emits one LLDP packet per port (Ports 1–3), all with Chassis ID = s1, but each
 Ethernet frame carries a different source MAC address identifying its egress port; the link
 S1-port1 --- S2-port3 is discovered as before]
10
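One plausible realisation of the sending side, consistent with the slide: a single Packet-Out per switch whose action list alternates "set Ethernet source address" and "output" actions, while the controller records which MAC address stands for which port. This is a hedged sketch with hypothetical helper names, not the authors' exact code.

```python
def ofdpv2_discovery_round(switches, send_packet_out, mac_to_port):
    """OFDPv2: one LLDP Packet-Out per switch; the egress port is encoded in the source MAC."""
    for dpid, ports in switches.items():
        frame = build_lldp_frame(chassis_id=str(dpid),
                                 port_id="0",        # no longer meaningful; ignored by the receiver
                                 src_mac=bytes(6))   # rewritten per port by the actions below
        actions = []
        for port_no, port_mac in ports:
            mac_to_port[(str(dpid), port_mac)] = port_no     # remember MAC -> port mapping
            actions.append(("set_src_mac", port_mac))        # rewrite the Ethernet source address
            actions.append(("output", port_no))              # then emit this copy on the port
        send_packet_out(dpid, actions=actions, data=frame)
```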
Implementation
• POX as our SDN controller platform
• Implemented OFDPv2 based on POX's OFDP topology discovery module, using the original OFDP
  implementation as a benchmark (the link-inference step is sketched below)
11
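On the receive side, a link is inferred from the LLDP Packet-In: the receiving switch and ingress port come from the Packet-In metadata, the sending switch from the Chassis ID TLV, and (in OFDPv2) the sending port from the source MAC address via the controller's map. A minimal sketch, assuming frames built as in the earlier example:

```python
def handle_lldp_packet_in(dst_dpid, in_port, frame, mac_to_port):
    """Infer one unidirectional link: (src switch, src port) -> (dst switch, dst port)."""
    src_mac = frame[6:12]                              # Ethernet source address
    tlv_header = int.from_bytes(frame[14:16], "big")   # first TLV is the Chassis ID TLV
    tlv_len = tlv_header & 0x1FF
    src_dpid = frame[17:16 + tlv_len].decode()         # skip the 1-byte chassis-ID subtype
    src_port = mac_to_port[(src_dpid, src_mac)]        # OFDPv2: source MAC identifies the port
    return (src_dpid, src_port, dst_dpid, in_port)
```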
Experiments
• Goal:
– Measure the controller overhead reduction of OFDPv2 vs.
OFDP
• In terms of the number of control messages
• In terms of the controller CPU load
• Methodology
– Mininet: Linux-based network emulator
• Allows creation of arbitrary virtual networks
• In-built support for SDN
– Considered 4 different topologies (see the Mininet sketch below)
12
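As an illustration, Topology 1 (tree, d=4, f=4) and Topology 3 (linear, N=100) can be brought up with Mininet's Python API roughly as follows; the slides do not show the authors' exact invocation, and the POX controller is assumed to be listening on the default port 6633 on the local host.

```python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import LinearTopo
from mininet.topolib import TreeTopo

def run(topo):
    # Attach the emulated switches to an external (e.g. POX) controller.
    net = Mininet(topo=topo,
                  controller=lambda name: RemoteController(name, ip="127.0.0.1", port=6633))
    net.start()
    net.pingAll()   # optional sanity check that the emulated network works
    net.stop()

run(TreeTopo(depth=4, fanout=4))   # Topology 1: tree, d=4, f=4 (85 switches)
run(LinearTopo(k=100))             # Topology 3: linear topology with 100 switches
```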
Experiment Results
Number of Packet-Out control messages

Topology      Type              #Switches   #Ports
Topology 1    Tree, d=4, f=4    85          424
Topology 2    Tree, d=7, f=2    127         380
Topology 3    Linear, N=100     100         298
Topology 4    Fat Tree, K=4     20          80

Topology      # control msgs OFDP   # control msgs OFDPv2   Efficiency Gain (G)
Topology 1    424                   85                      80%
Topology 2    380                   127                     67%
Topology 3    298                   100                     67%
Topology 4    80                    20                      75%
13
A great reduction in the number of
Packet-Out control messages
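The gains in the table follow directly from the message counts: OFDP issues one LLDP Packet-Out per port, OFDPv2 one per switch, so with P ports and S switches the relative reduction is

```latex
G \;=\; \frac{P - S}{P},
\qquad \text{e.g. Topology 1: } G = \frac{424 - 85}{424} \approx 80\%,
\qquad \text{Topology 4: } G = \frac{80 - 20}{80} = 75\% .
```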
Controller CPU load
• Controller load is critical for any SDN application and is a
key factor for network scalability and performance
• The second experiment explores the CPU load imposed by OFDP and OFDPv2
• We measure the cumulative CPU time consumed by the POX topology discovery service over a
  period of 300 seconds (one way to take such a measurement is sketched below)
14
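The slides do not say how the CPU time was recorded; one simple Linux-side way to take such a cumulative measurement for a running POX process is to read its utime/stime counters from /proc, as in this sketch (helper names are ours; assumes the process name contains no spaces).

```python
import os
import time

CLOCK_TICKS = os.sysconf("SC_CLK_TCK")   # kernel clock ticks per second

def cpu_seconds(pid):
    """Cumulative user + system CPU time of a process, in seconds."""
    with open("/proc/%d/stat" % pid) as f:
        fields = f.read().split()
    utime, stime = int(fields[13]), int(fields[14])   # fields 14 and 15 of /proc/<pid>/stat
    return (utime + stime) / CLOCK_TICKS

def measure(pid, duration=300, interval=10):
    """Sample the cumulative CPU time of `pid` every `interval` seconds for `duration` seconds."""
    start = cpu_seconds(pid)
    samples = []
    for _ in range(duration // interval):
        time.sleep(interval)
        samples.append(cpu_seconds(pid) - start)
    return samples
```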
Controller CPU load
[Figure: total CPU time used by the topology discovery service over the 300-second experiment,
 OFDP vs. OFDPv2, for the four topologies]
15
Reduction in CPU time and load ranging from a minimum of 35% for Topology 2 up to 45% for
Topology 1
Conclusions
• Topology discovery is a key service provided by all SDN
controllers
• We proposed an improved topology discovery mechanism for OpenFlow-based SDNs
• Reduction in controller overhead, in terms of:
– Number of control messages (packet-Out)
– CPU load on SDN controller (up to 45%)
• Increased controller efficiency
– Increased network performance
– Increased network scalability
16
17

Editor's Notes

  • #4–#5 The key is to have a standardized control interface that speaks directly to hardware. A whole network is like a big machine.
  • #6–#7 In order for an SDN controller to be able to manage the network and to provide services such as routing, it needs up-to-date information about the network state, in particular the network topology. Therefore, a reliable and efficient topology discovery mechanism is critical for any SDN system. OpenFlow switches do not support any dedicated functionality for topology discovery, and it is the sole responsibility of the controller to implement this service. Furthermore, there is no official standard that defines a topology discovery method in OpenFlow-based SDNs. However, most current controller platforms implement topology discovery in the same fashion, derived from an implementation in NOX, the original SDN controller. This makes that mechanism the de facto SDN topology discovery standard; it is referred to as OFDP (OpenFlow Discovery Protocol) for lack of an official term.
  • #11 Sending an LLDP Packet-Out message for each port on each switch seems inefficient. A better alternative would be to only send a single Packet-Out message to each switch, and ask it to send the corresponding LLDP packet out on all its ports. This functionality is supported in OpenFlow.
  • #14 The POX controller collects statistics about the number of Packet-Out messages in each discovery cycle, for each of our four example topologies. We ran each experiment 10 times, with identical results, as expected. The table also reports the relative reduction in the number of control messages, i.e. the efficiency gain G of OFDPv2 over OFDP. The experimental results correspond to the equations, given the relevant parameters for the various topologies. It is evident that OFDPv2 achieves a great reduction in required LLDP Packet-Out control messages, with up to 80% fewer messages for Topology 1, and a minimum reduction of 67% for Topologies 2 and 3.
  • #16 We repeated the same experiment 20 times for each of our four example topologies. The figure shows the total CPU time used by the topology discovery process over the entire duration of the experiment (300 seconds), averaged over the 20 runs with 95% confidence intervals. In summary, we observe a reduction in CPU time and load ranging from a minimum of 35% for Topology 2 up to 45% for Topology 1, and potentially greater for networks with a higher port density.