The Potential Impact of Software Defined Networking (SDN) on Security


A presentation by Brent Salisbury. The video of the presentation is available online. This is the first cut, so a lot is not in the deck yet on the example use case front.



  1. The Potential Impact of Software Defined Networking on Security. Brent Salisbury, Network Architect, University of …
  2. The Problem: Limited Choices/Flexibility
     • We design and build Service Provider, Data Center and Enterprise networks the same way, due to rigid constraints imposed by proprietary hardware, software and APIs.
     • This leads to inflexible network architectures that do not meet business needs.
     • Each market has very different problems, yet we try to solve them all the same way with today's technology.
  3. My Obligatory Rationalizing: Change is Bad
     • We are operating far too close to the hardware.
       o Do systems administrators configure their services in the x86 BIOS? Guess what? We do.
     • Generic components decomposed into resources to consume anywhere, anytime.
     • Abstraction of Forwarding, State and Management:
       o Forwarding: networking gear with flow tables and firmware.
       o State: destruction of the "bag of protocols."
       o Management: orchestration, CMDB, etc. Join the rest of the data center (and world).
  4. More Protocols != the Answer. Doh! (Jumbled protocol picture; source: Nick McKeown, Stanford)
  5. The Problem Has Always Been the Edge
     • Security policy at the edge.
     • Multi-tenancy at the edge.
     • Traffic engineering classification at the edge.
     • Operational complexity at the edge.
     • QoS policy at the edge.
     • Cost at the edge.
  6. Commoditization
     • Commodity hardware: off-the-shelf "merchant silicon." If all vendors are using the same pieces and parts, where is the value? Software becomes the differentiation.
     • "We want to create a dynamic where we have a very good base set of vendor-agnostic instructions. On the other hand, we need to give room for switch/chip vendors to differentiate." -Nick McKeown
     • "You don't have to have an MBA to realize there is a problem. We are still ok but not for very long." -Stuart Selby, Verizon
     • "When you run a large data center it is cheaper per unit to run a large thing rather than a small thing; unfortunately in networking that's not really true." -Urs Hölzle, Google
     • "Work with existing silicon today; tomorrow may bring dedicated OpenFlow silicon." -David Erickson
     • "The path to OpenFlow is not a four lane highway of joy and freedom with a six pack and a girl in the seat next to you; it's a bit more complex and a little hard to say how it will work out, but I'd be backing OpenFlow in my view." -Greg Ferro
  7. Commoditization: A Collage of Disruption (Google's Pluto)
  8. Not New Ideas: VM Farms Today vs. the SDN Network
     • Physical server infrastructure (servers, CPU, memory, disk, NIC, bus) maps to physical network infrastructure (routers, switches, RIB, LIB, TCAM, memory, CPU, ASIC).
     • Hypervisors (VMware, Hyper-V, KVM, Xen) providing multi-tenancy map to FlowVisor.
     • x86 virtualization maps to the OpenFlow controller instruction set.
     • Windows guests of various kinds map to network slices: general purpose, secure, research, etc.
  9. Abstraction: the SDN Stack vs. Operating System Abstraction Layers
     • Applications map to applications/policy, via a northbound API (POSIX, REST, JSON).
     • Kernel/OS/hypervisor maps to controllers/slicing, via a southbound API (x86-'like', or a HAL).
     • Hardware/firmware (CPU, device, memory) maps to vSwitch firmware.
  10. One Flow Table, Many Functions (fields: Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport -> Action)
     • VLAN switching: *, *, 00:1f.., *, vlan1, *, *, *, *, * -> port6
     • Routing: *, *, *, *, *, *, *, *, *, * -> port6
     • Firewall: *, *, *, *, *, *, *, *, *, 22 -> drop
     • Flow switching: port3, 00:20., 00:1f.., 0800, vlan1, …, …, 4, 17264, 80 -> port6, port7, port9
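The single-table idea above can be sketched in a few lines of Python. This is a toy illustration, not the OpenFlow specification: fields absent from a rule's match act as the slide's "*" wildcards, and list order stands in for priority.

```python
# Toy flow table: each rule's "match" dict lists only the fields it pins;
# anything not listed is a wildcard. First match wins.

def matches(rule, packet):
    """True if every field the rule specifies equals the packet's value."""
    return all(packet.get(field) == value for field, value in rule["match"].items())

def lookup(table, packet):
    """Return the action of the first matching rule, or None on a table miss."""
    for rule in table:
        if matches(rule, packet):
            return rule["action"]
    return None

table = [
    {"match": {"tcp_dport": 22}, "action": "drop"},                         # firewall row
    {"match": {"mac_dst": "00:1f..", "vlan": "vlan1"}, "action": "port6"},  # VLAN switching row
    {"match": {}, "action": "port6"},                                       # all-wildcard routing row
]

print(lookup(table, {"tcp_dport": 22}))                                         # drop
print(lookup(table, {"mac_dst": "00:1f..", "vlan": "vlan1", "tcp_dport": 80}))  # port6
```

The same lookup machinery serves as a switch, a router, or a firewall purely depending on which fields the rules pin, which is the slide's point.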
  11. Open vSwitch: Scale, HW vs. SW
     • VM rack density and East-West traffic could be problematic for a general-purpose top-of-rack switch.
     • 100K+ entries in a rack is unrealistic in HW today.
     • (Diagram: a packet-in with a match in the TCAM executes its action bucket, e.g. "send packet to port 0/2"; a packet-in with no match in the TCAM is punted to the controller. Example TCAM entries match fields such as TCP dport 80 or 25.)
  12. What Changed? Why Now? The Data Center
     • "The network is in my way." -James Hamilton, Amazon
     • "Networking is complex because the appropriate abstractions have not yet been defined." -A Case for Expanding OpenFlow/SDN Deployments On University Campuses
     • "If you look at the way things are done today, it makes it impossible to build an efficient cloud. If you think about the physical network, because of things like VLAN placements, you are limited on where you can place workloads. So even without thinking about the application at all, there are limits on where you can place a VM because of capacity issues or because of VLAN placement issues." -Martin Casado
     • The tools we have today for automation: SNMP, NETCONF, subprocess.Popen (Python), Net::Telnet (Perl), #!/bin/bash, autoexpect, etc.
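As a hedged illustration of how crude those tools are: pre-SDN "automation" usually amounts to shelling out and scraping CLI text. In this sketch `echo` stands in for a device session, since there is no switch to telnet to.

```python
# Pre-SDN automation in practice: run a command, scrape its text output.
import subprocess

def run_cli(argv):
    """Run a command and return its stdout, screen-scraping style."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE, text=True)
    out, _ = proc.communicate()
    return out.strip()

print(run_cli(["echo", "interface vlan 10 is up"]))   # interface vlan 10 is up
```

Every device family needs its own scraping logic, which is exactly the fragility the slide is criticizing.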
  13. Evolution or Re-Invention? 3-Tier (North-South 90/10) -> 2-Tier flat, TRILL/SPB/MPLS (North-South 75/25) -> Software Defined (?)
  14. What Changed? #2 The Data Center
     • Public cloud scale.
     • VID limitations: ~4094 tags.
     • ~1/4 of servers are virtualized.
     • Customers want flat networks, but they do not scale.
     • Complexity in the network substrate to support bad application design.
     • Required: flexible and open APIs to consume network resources.
     • East-West policy application.
     • East-West BW consumption.
     • L2 multi-tenancy.
     • Hypervisor agnostic.
     • VM port characteristic mobility.
     • Traffic trombone for policy.
     The edge needs to be smarter but also manageable; the pictured stack (physical network -> physical x86 hardware -> VM farm with VM1-VM4 on Port1-Port4) is neither.
  15. Virtual Switching (Example: Open vSwitch)
     • Security: VLAN Layer 2 isolation, traffic filtering.
     • QoS: traffic queuing and shaping.
     • Monitoring: NetFlow, sFlow, SPAN, RSPAN.
     • Control: OpenFlow or NextGen.
     • Features: bonding, GRE, VXLAN, CAPWAP, STT, IPsec.
     • Hypervisor support: KVM, Xen, XenServer, VirtualBox.
     • Orchestrator support: OpenStack, CloudStack.
     • License: Apache 2 and GPL (upstream kernel module).
     • **Point being, switch SW is a switch minus hardware.** For example:
       % ovs-appctl fdb/show br0
        port  VLAN  MAC                Age
           0     0  00:0f:cc:e3:0d:d8    6
           1     0  00:50:56:25:21:68    1
           2     0  10:40:f3:94:e0:82    1
           3    10  10:40:f3:94:e0:82    1
           4    10  00:0f:cc:e3:0d:d8    1
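A small sketch, assuming only the standard library, that parses the `ovs-appctl fdb/show br0` output shown on the slide into records, e.g. to audit which VLANs a learned MAC address sits on.

```python
# Parse fdb/show text (copied from the slide) into a list of dicts.

SAMPLE = """\
 port  VLAN  MAC                Age
    0     0  00:0f:cc:e3:0d:d8    6
    1     0  00:50:56:25:21:68    1
    2     0  10:40:f3:94:e0:82    1
    3    10  10:40:f3:94:e0:82    1
    4    10  00:0f:cc:e3:0d:d8    1
"""

def parse_fdb(text):
    """Turn fdb/show output into records, skipping the header line."""
    rows = []
    for line in text.splitlines()[1:]:
        port, vlan, mac, age = line.split()
        rows.append({"port": int(port), "vlan": int(vlan), "mac": mac, "age": int(age)})
    return rows

fdb = parse_fdb(SAMPLE)
print(len(fdb))                                                           # 5
print(sorted(e["vlan"] for e in fdb if e["mac"] == "00:0f:cc:e3:0d:d8"))  # [0, 10]
```

The last line shows the same MAC learned on two VLANs, the kind of fact the slide's "switch software is a switch minus hardware" point makes trivially scriptable.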
  16. Open vSwitch Forwarding
     • The first packet in a flow goes to the OVS controller (slow path).
     • Subsequent packets are forwarded by the OVS data path (fast path).
     • When attached to a controller, datapath entries are determined by the OpenFlow controller.
     • Actions: forward, drop, encapsulate (packet-in) and send to controller.
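The slow-path/fast-path split above can be modeled in a few lines. This toy `Datapath` class (all names invented for illustration) punts the first packet of a flow to a decision callback and serves later packets from its cache.

```python
# Toy model: first packet of a flow misses the datapath cache and is punted
# up; once the resulting action is cached, later packets forward directly.

class Datapath:
    def __init__(self, slowpath):
        self.cache = {}            # flow key -> action (the "fastpath" table)
        self.slowpath = slowpath   # stands in for ovs-vswitchd / the controller

    def forward(self, flow_key):
        if flow_key not in self.cache:               # first packet: miss, punt
            self.cache[flow_key] = self.slowpath(flow_key)
        return self.cache[flow_key]                  # later packets: cache hit

punts = []
def decide(flow_key):
    punts.append(flow_key)     # record every trip up the slow path
    return "output:2"

dp = Datapath(decide)
dp.forward(("10.0.0.1", "10.0.0.2", 80))
dp.forward(("10.0.0.1", "10.0.0.2", 80))
print(len(punts))   # 1 -- only the first packet of the flow was punted
```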
  17. Data Center Overlays
     • Early SDN adoptions are happening today in data centers: decouple the virtual from the physical discretely.
     • Where do we terminate tunnel endpoints, and in HW or SW for de/encap? SDN overlays (GRE, STT, VXLAN) carry Tenancy X, Y and Z over the native network.
     • Traditional and SDN network substrates create dynamic network resource pools: resource consumption (storage, network, compute) either local or hybrid private/public cloud, with visibility, OAM, dynamic provisioning, brokerage and analytics.
  18. Does This Make Sense? Tenancies X, Y and Z span cloud-provided elastic compute, a disaster recovery warm/hot site, and data center West and East segments over a Layer 3 network (e.g. carrier MPLS/VPN, Internet, L3 segmented). Leveraging overlays with VXLAN/(NV)GRE/CAPWAP creates one flat network.
  19. East-West Traffic: Within a Security Domain Everything is Great.
  20. Policy Application Breaks East-West Design: Policy Application Begins to Create Bottlenecks.
  21. East-West Traffic
     • Policy applied at the edge removes the hairpin through the network gateway.
     • OpenStack is doing this today.
     • (Diagram: vnet0/vnet1/vnet2 with Tenancy X = Vlan1, Tenancy Y = Vlan2, Tenancy Z = Vlan3; VM-to-VM East-West traffic is filtered by the OVS libvirt plugin.)
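A minimal sketch of this edge-filtering idea, with an invented port-to-tenancy map standing in for the OVS libvirt plugin's VLAN policy: VM-to-VM traffic is permitted only within a tenancy, decided at the vSwitch instead of hairpinning through a gateway.

```python
# Toy edge policy: each virtual port carries a tenancy tag, and East-West
# traffic is allowed only between ports of the same tenancy.

PORT_TENANCY = {          # invented mapping, mirroring the slide's vnetN/VlanN labels
    "vnet0": "vlan1",     # Tenancy X
    "vnet1": "vlan2",     # Tenancy Y
    "vnet2": "vlan3",     # Tenancy Z
    "vnet3": "vlan1",     # a second Tenancy X port
}

def permit(src_port, dst_port):
    """Allow VM-to-VM traffic only within one known tenancy."""
    src, dst = PORT_TENANCY.get(src_port), PORT_TENANCY.get(dst_port)
    return src is not None and src == dst

print(permit("vnet0", "vnet3"))   # True  (both Tenancy X)
print(permit("vnet0", "vnet1"))   # False (X -> Y blocked at the edge)
```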
  22. Pushing Policy to the Edge
     • Self-provisioning of network security, or template-based policy.
     • Auditing becomes significantly easier.
     • East-West traffic policy application constraints are relaxed.
  23. Hybrid Cloud: How Public Cloud Looks vs. How It Really Is
     • A public cloud spoke reached over the Internet.
     • The controller runs dnsmasq and iptables; i.e., it is the router, switch and firewall.
     • VM instances sit on VM networks, with public and private IP addresses on one NIC.
  24. Data Center Orchestration Stacks: A Quick Look
     • Overlays between physical host hypervisors, each with NETWORK_GATEWAY, FIXED_RANGE and eth1=br-int (OVS).
     • vnet0/vnet1/vnet2 with Tenancy X = Vlan1 or GRE1, Tenancy Y = Vlan2 or GRE2, Tenancy Z = Vlan3 or GRE3.
     • VM-to-VM (East-West) traffic is filtered by the Open vSwitch libvirt plugin.
  25. Hybrid Cloud: IMO Not as Bad as It Looks; this exists today in most DCs. A hub gateway on your private cloud network connects spokes (public and private on one NIC) across the Internet.
  26. Tunneling and Hybrid Cloud Creates One Network
     • An x86 node can aggregate the tunnel endpoints: hub and spoke. The alternative would be a full mesh.
     • Policy could be centrally applied at the hub gateway.
     • With encapsulated tunnels, the network is unaware of the underlying substrate; the hub gateway on your private cloud network reaches the public cloud spoke across the Internet.
  27. De-Duping Policy is the Best Reason for Tunnels
     • Leverage existing centralized policy application and orchestration: crypto, IDS/IPS, firewall, etc.
     • However, sending the client directly to a cloud provider outside of a tunnel, via the Internet, is by far the easiest and most scalable solution.
  28. Public Cloud: The Internet Will Be the New LAN
     • Option 1: general Internet (I1) best-effort connectivity through commodity transit, Cogent for example, at roughly 25-50 cents per Mb; capture that service level as a lower-tier SLA, priced significantly lower.
     • Option 2: dedicated peerings to any node, from tenant to colo into super-regional drains; peering and colos globally for a broader net to capture competitively priced resources.
     • Option 3: Internet2; ideally begin leveraging anyone selling their resource pools with open APIs (Rackspace, HP, Dell, Piston Cloud; companies whose end game is not 100% cloud) as the primary option.
     Leverage regional and super-regional statewide networks and open peerings to cloud providers. xAAS driven as a commodities market through emerging open API standards. Programmability should enable efficiency in usage and allow for time sharing via orchestration. OpenStack resources either local, with the ability to leverage hybrid private/public cloud offerings based on the best market price that year, month, maybe even day, depending on the elasticity and flexibility to move workloads; also balancing workloads amongst each other through scheduling, predictive analysis and magic. Tenants would be any community anchor: state, city, education, non-profit, etc.
  29. Enterprise Problems
     • Policy classification and management.
     • Regulatory implications, from cost to OpEx.
     • Identity management (AAA).
     • BYOD, BYOD, BYOD, BYOD.
  30. NAC
     • Why do the overwhelming majority of NAC deployments never make it past a "fail open" policy?
     • Relying on SNMP, DHCP or any distributed "OS" embedded in the network device.
     • (Diagram: AAA/NAC with dirty VLANs/VRFs.)
  31. SDN Wireless: campus controllers apply policy centrally, across the core, distribution layers and access points.
  32. Enterprise Wireless at Larger Scale Today: distributed campus controllers in the same administrative domain, across the core and distribution layers.
  33. Decoupled Control Plane (NOS): SDN/OF x86 campus controllers apply policy centrally, across the core, distribution layers and edge switches.
  34. Policy Application in Wired Networks
     • Decoupling the control plane does not mean distributed systems in networks go away.
     • The problem is a distributed-systems-theory problem, managed in software independent of hardware.
     • We centralize the control plane in traditional hierarchical campus architectures today, in big expensive chassis. (Diagram: distributed SDN/OF controllers.)
  35. The Alternative is More of the Same
     • The alternative way to apply policy is business as usual: un-scalable and cost-prohibitive bumps in the wire at the campus core.
     • NAC and BYOD with low CapEx and OpEx is even more mythical than SDN. (Diagram: distribution layers and edge switches.)
  36. Example Security Use Case #1
     • Monitoring either particular data sets or entire links often requires expensive, purpose-built hardware.
     • (Diagram: today you insert expensive proprietary magic between SDN-enabled switches A and B to feed a security analytics engine.)
  37. Example Security Use Case #1 (continued)
     • The SDN controller installs a flow on SDN-enabled switch A for corporate financials traffic. Match: IP_DST; Instruction: forward to Port-6 (toward switch B) AND Port-7 (toward the security analytics engine).
     • Flow entry (fields: Switch Port, MAC src, MAC dst, Eth type, VLAN ID, IP Src, IP Dst, IP Prot, TCP sport, TCP dport -> Action): *, *, *, *, *, *, …, *, *, * -> port6, port7
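A hedged sketch of this use case in plain Python: the port numbers and destination IP are illustrative, and the dicts merely mimic an OpenFlow match/action pair, but the shape is the same as the flow entry above.

```python
# One flow entry that matches the sensitive destination and outputs to two
# ports -- the normal next hop and the analytics engine -- replacing a tap.

def mirror_flow(ip_dst, primary_port, tap_port):
    """Build a match/action pair mimicking the slide's flow entry."""
    return {"match": {"ip_dst": ip_dst},
            "actions": [("output", primary_port), ("output", tap_port)]}

def apply_flow(flow, packet):
    """Return the action list when the packet hits the match, else None."""
    if packet.get("ip_dst") == flow["match"]["ip_dst"]:
        return flow["actions"]
    return None

fm = mirror_flow("10.1.1.10", 6, 7)
print(apply_flow(fm, {"ip_dst": "10.1.1.10"}))   # [('output', 6), ('output', 7)]
print(apply_flow(fm, {"ip_dst": "10.9.9.9"}))    # None
```

The point of the use case survives the simplification: mirroring becomes one extra output action, not one extra appliance.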
  38. Security Use Case #2: Operational Sanity and Cost Control
     • Ships in the night: we virtualize networks at varying degrees of scale today (PCI, HIPAA, financials).
     • Operational overhead, provisioning and the cost of hardware are barriers in many cases.
  39. Control
     • Centralize operational management by decoupling the virtual from the physical. (Diagram: ERP, DMZ, PCI, backbone.)
  40. Control
     • Extracting applications and features from hardware allows for programmatic operations and proper abstractions to facilitate IT objectives.
     • Exposes data to analytics.
  41. Today's Network Data Resides in /dev/null
     • These are "Big Data" problems.
     • Extract, snapshot, replay a "global view" of the network during attacks.
     • Feed network data into predictive analysis engines.
     • All flow data is exposed.
  42. Easy as Calling a Method: IPv4.fromIPv4Address(match.getNetworkDestination())
     Simple use case: name-based path selection. SDN-enabled switches A and B, with two separate physical or logical paths:
     • (Business Critical)
     • (Compliance Critical)
     • *.*.*.*/* (Everything Else)
     Example scenario: classify traffic based on DNS values and select the path based on policy, proactively or reactively, residing on the SDN controller. It does not have to be just the path; there are many powerful headers between Layer 2-4, for starters.
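The scenario above can be sketched as a simple policy lookup; the hostnames and path labels below are invented for illustration, with the catch-all default playing the role of the *.*.*.*/* bucket.

```python
# Name-based path selection: classify on the DNS name a flow resolved,
# pick a path by policy, fall back to the "everything else" bucket.

POLICY = {
    "erp.example.com": "business-critical-path",
    "pci.example.com": "compliance-path",
}

def select_path(dns_name, default="best-effort-path"):
    """Return the policy path for a name, or the catch-all default."""
    return POLICY.get(dns_name, default)

print(select_path("pci.example.com"))    # compliance-path
print(select_path("cats.example.com"))   # best-effort-path
```

On a real controller the returned label would be translated into a flow-mod steering the flow onto the chosen path.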
  43. Extracting the packet-in payload is as easy as calling a method:
     (Match) ... receive the OFPacketIn:
       OFPacketIn pi = (OFPacketIn) msg;
       OFMatch match = new OFMatch();
       IPv4.fromIPv4Address(match.getNetworkDestination());
     (Action) ... (String name, OFFlowMod fm, String swDpid) ... continue.
     (Diagram: provider core with a Gold Path, priority #1, and a Silver Path, priority #2.)
  44. Easy Deployment Scenario: New Flow Processing (struct ofp_packet_in; POX L2 simple learning algorithm)
     1. Update the source address in the (T)CAM or SW tables.
     2. Is the destination address an Ethertype LLDP or bridge-filtered MAC? Drop, forward to the controller, or even hand off to traditional STP. LLDP may be used to build a topology (important for the future).
     3. Is it multicast? Yes: flood.
     4. Is the destination address in the port/MAC or (M)LAG group address table? If no, flood.
     5. Is the output port the same as the input port? Drop, to prevent loops.
     6. Install the flow and forward buffered and subsequent packets.
     (Diagram: a simple hybrid deployment with Layer 2 path isolation between SDN and native networks leveraging VLANs or SW semantics; POX, Floodlight/Beacon controllers; access ports to the controllers on Vlan 1; port 24 as an 802.1q trunk carrying SDN Vlan 10 and legacy Vlan 20; Layer 3 gateway redistribution.)
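The six steps above can be condensed into a runnable sketch, stripped of POX's event plumbing. `FLOOD` and `DROP` are symbolic actions a real controller would turn into packet-outs and flow-mods, and the LLDP/bridge-filtered case (step 2) is omitted to keep the sketch short.

```python
# Learning-switch logic: mac_table maps MAC -> port; the return value is
# the output port, or a symbolic FLOOD/DROP action.

FLOOD, DROP = "flood", "drop"

def l2_learn(mac_table, src_mac, dst_mac, in_port, is_multicast=False):
    mac_table[src_mac] = in_port          # 1. learn/update the source address
    if is_multicast:                      # 3. multicast: flood
        return FLOOD
    out_port = mac_table.get(dst_mac)     # 4. look up the destination
    if out_port is None:
        return FLOOD                      #    unknown destination: flood
    if out_port == in_port:               # 5. same port as input: drop (loop guard)
        return DROP
    return out_port                       # 6. install the flow and forward

table = {}
print(l2_learn(table, "aa:aa", "bb:bb", 1))   # flood (bb:bb not learned yet)
print(l2_learn(table, "bb:bb", "aa:aa", 2))   # 1 (aa:aa was learned on port 1)
```

In POX the same decisions would be driven by `PacketIn` events and answered with `ofp_packet_out`/`ofp_flow_mod` messages, but the control flow is exactly this short.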
  45. Where to Begin?
  46. Stitching Islands
  47. Not If But When? Applications (directory/AAA services, analytics/topology, firewall policy, DNS, inspection policy, HA/load balancing) sitting on a Network Operating System, over Forwarding.
  48. Brent's Bookmarks
     • (Floodlight OF Controller)
     • (Open vSwitch)
     • (POX)
     • The first 10 minutes of McKeown's presentation, for anyone with "manager" in their title; not to mention it brings tears to my eyes. (McKeown)
     • An attempt to motivate and clarify Software-Defined Networking (SDN) -Scott Shenker
     • (Rackspace OpenStack Private Cloud build)
     • My ramblings
     • #openflow #openvswitch #openstack
  49. Closing: Comments? Questions? Nerd Rage?