2. Agenda
• Data Center Trends
• Data Center challenges & needs
• Data Center Network Architecture
• Data Center Interconnect with VxLAN
• Data Center Network Programmability
• Data Center / CDN Edge use case
3. DC Trend – Cloud growth for several years to come
8. Data Center challenges & needs
• A highly resilient network
– No downtime, planned or unplanned
– High bandwidth
• Automated provisioning, change control and upgrades
– Legacy human "network middleware" can't scale to the demand!
• Supports all use cases and applications
– Client-server
– Modern distributed apps, Big Data
– Storage, virtualization
9. Data Center challenges & needs
• Multi-tenancy
• Integrated security
• Low and predictable latency
• Low power consumption
• Ability to add racks over time
• Mix and match multiple generations of technologies
10. Data Center Network Architecture
[Diagram: a standards-based LACP/LAG/L3, 10/40/100G IP fabric supporting Big Data, IP storage, VM farms, cloud, VDI, legacy applications and Web 2.0 workloads]
11. Data Center Design – L3LS
L3LS ECMP Spine Design
• Spine redundancy and capacity
• Ability to grow/scale as capacity is needed
• Collapsing of fault/broadcast domains (thanks to Layer 3 topologies)
• Deterministic failover and simpler troubleshooting
• Readily available operational expertise, as well as a variety of traffic engineering capabilities
[Diagram: 4-way Layer 3 leaf/spine with ECMP – Spines 1-4; dual-homed leaf MLAG pairs serving hosts in Racks 1-2 over 10GbE, with 40GbE L3 uplinks; an edge/border leaf MLAG pair connecting to MPLS edge routers, the external network, and the Metro A/B core]
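With ECMP, each leaf hashes a flow's 5-tuple to pick one of the equal-cost spine next hops, so all packets of a flow stay on one path while different flows spread across the spines. A minimal sketch of the idea; the hash function and field choice here are illustrative assumptions, not any vendor's actual algorithm:

```python
import zlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    """Pick one equal-cost next hop by hashing the flow 5-tuple.

    Every packet of a flow hashes to the same spine (preserving packet
    order within the flow); distinct flows spread across all spines.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return next_hops[zlib.crc32(key) % len(next_hops)]

spines = ["spine1", "spine2", "spine3", "spine4"]
# The same flow always maps to the same spine:
first = ecmp_next_hop("10.0.1.5", "10.0.2.9", 49152, 443, "tcp", spines)
again = ecmp_next_hop("10.0.1.5", "10.0.2.9", 49152, 443, "tcp", spines)
assert first == again
```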
12. Data Center Design – L3LS with VxLAN
Network Based Overlay
• Virtual Tunnel End Points (VTEPs) reside on physical switches at the leaf, spine or both
• Data plane learning is integrated into the physical hardware/software
• Hardware-accelerated VXLAN encap/decap
• Support for all workload types: bare-metal or virtual machines, IP storage, firewalls, load balancers/application delivery controllers, etc.
[Diagram: Layer 3 leaf/spine with Layer 2 VXLAN overlay – Spines 1-4 over a Layer 3 IP fabric; active/active VTEPs with MLAG on dual-homed compute in Racks 1-4 (10GbE host links, 40GbE L3 uplinks); Layer 2 VXLAN overlays VNI-5013 and VNI-6829 providing VXLAN bridging & routing; CloudVision VXLAN Control Service; edge connectivity to MPLS edge routers, external networks, and the Metro A/B core]
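Each VTEP encapsulates Layer 2 frames in a VXLAN header carrying the 24-bit VNI (such as VNI-5013 above). A sketch of building and parsing the 8-byte VXLAN header as defined in RFC 7348:

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte, 24-bit VNI, reserved bits zero."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VNI << 24, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Extract the VNI, checking that the VNI-valid flag is set."""
    flags_word, vni_word = struct.unpack("!II", header[:8])
    if not (flags_word >> 24) & VXLAN_FLAG_VNI:
        raise ValueError("VNI flag not set")
    return vni_word >> 8

hdr = vxlan_header(5013)
assert vxlan_vni(hdr) == 5013
```

In hardware VTEPs this encap/decap happens at line rate in the switch ASIC; the sketch only shows the wire format.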
13. Data Center Interconnect with VxLAN
• Enterprises are looking to interconnect DCs across geographically dispersed sites
• Layer 2 connectivity between sites provides VM mobility between sites
• Within the DC: server migration between PODs and integration of new infrastructure
[Diagram: DCI provides Layer 2 connectivity between geographically dispersed sites (VTEP to VTEP, per VNI); POD interconnect provides connectivity for server migration between a DC's PODs]
14. Data Center Network Programmability
• Automation of repetitive configuration tasks
– VLAN and interface state
– ACL entries
– Software image management
– Configuration templates
• Choose your level of integration with the network overlay solution
– Full hardware VTEP design
– Mixed VXLAN in hardware or hypervisor
– Fully hypervisor-based with an underlying VXLAN-aware network
– Dynamic provisioning of VLANs
• Network-wide visibility and monitoring
– Congestion management
– Virtual-to-physical connectivity
– Connectivity monitoring
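Repetitive tasks like VLAN provisioning lend themselves to simple templating. A vendor-neutral sketch; the CLI syntax it emits is illustrative, not any specific switch OS:

```python
def render_vlan_config(vlan_id: int, name: str, interfaces: list) -> str:
    """Render a VLAN definition plus access-port assignments from a template."""
    lines = [f"vlan {vlan_id}", f"   name {name}"]
    for intf in interfaces:
        lines += [f"interface {intf}",
                  f"   switchport access vlan {vlan_id}"]
    return "\n".join(lines)

print(render_vlan_config(100, "web-tier", ["Ethernet1", "Ethernet2"]))
```

The same pattern extends to ACL entries and configuration templates: generate the config from structured data, then push it through whatever management API the platform exposes.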
15. Data Center / CDN Edge Use case
[Diagram: a DC edge switch connects cache servers to a transit provider (default route) and multiple private peering partners (Peer A, exchanging IP prefixes over BGP); sFlow from the switch feeds a pmacct traffic-flow analyzer (sFlow collector), whose output drives a BGP SDN controller that peers with the edge switch over BGP]
16. DC / CDN Edge Use case
[Diagram: BGP route-programming pipeline on the edge switch – the BGP controller peers with the switch and advertises IP prefixes; received prefixes go through BGP best-path selection into the RIB; a policy filter (install-map, with various match criteria supported) decides which routes to install in the IP FIB; inactive BGP routes are marked rather than installed]
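The install-map step above filters routes before they reach hardware: only prefixes matching the policy are programmed into the FIB, the rest stay in the RIB marked inactive. A sketch assuming two hypothetical match criteria, maximum prefix length and a required BGP community:

```python
import ipaddress

def apply_install_map(routes, max_prefix_len=24, required_community=None):
    """Filter BGP routes before FIB install.

    Keeps routes no more specific than max_prefix_len and, optionally,
    only those carrying a required community. Filtered routes would stay
    in the RIB as inactive rather than being programmed in hardware.
    """
    installed = []
    for route in routes:
        net = ipaddress.ip_network(route["prefix"])
        if net.prefixlen > max_prefix_len:
            continue  # too specific; leave in RIB, do not install
        if required_community and required_community not in route.get("communities", []):
            continue  # missing the required community tag
        installed.append(route)
    return installed

routes = [
    {"prefix": "203.0.113.0/24", "communities": ["65000:100"]},
    {"prefix": "198.51.100.128/25", "communities": ["65000:100"]},  # /25 is too specific
]
fib = apply_install_map(routes, required_community="65000:100")
```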
17. Data Center / CDN Edge Use case
• Limited requirement for large routing tables at the DC/CDN edge
– 90% of traffic hits fewer than 10% of the routes in the RIB
– Channel higher bandwidth towards the switch and away from expensive internet router ports
– The edge router (a role the DC switch plays here) only needs to program a small subset of prefixes in hardware (FIB)
• BGP Controller
– sFlow information is sent to the BGP Controller
– BGP information is sent to the BGP Controller
– The controller computes the Top 'N' prefixes and instructs the router to install them in the FIB
• Spotify and Netflix are already using this approach in their networks:
https://media.readthedocs.org/pdf/sdn-internet-router-sir/latest/sdn-internet-router-sir.pdf
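The controller's core computation, ranking prefixes by observed traffic and keeping the Top 'N', can be sketched as follows; the per-prefix byte counts are assumed to come from the pmacct/sFlow collector:

```python
from collections import Counter

def top_n_prefixes(flow_bytes, n):
    """Return the n prefixes carrying the most traffic.

    Only these are installed in the edge switch's FIB; everything else
    follows the default route toward the transit provider.
    """
    return [prefix for prefix, _ in Counter(flow_bytes).most_common(n)]

flows = {"203.0.113.0/24": 9_000_000, "198.51.100.0/24": 5_000_000,
         "192.0.2.0/24": 120_000}
top_n_prefixes(flows, 2)  # → ['203.0.113.0/24', '198.51.100.0/24']
```

This is why a switch with a small FIB can stand in for an expensive internet router at the edge: per the 90/10 observation above, a few thousand hot prefixes cover the vast majority of traffic.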