Morphology of Modern Data Center Networks - YaC 2013
Dinesh Dutt presentation about Morphology of Modern Data Center Networks at YaC 2013 in Moscow, Russia.
Transcript

  • 1. Data Center Topologies: Morphology of Modern Data Center Networks. Dinesh G Dutt | Chief Scientist
  • 2. Agenda: Dawn of the Modern Data Center | Network 2.0 | Routing | Taming the Configuration Beast. 4/23/2014, YAC 2013. (Pictures courtesy of Wikimedia, where not stated)
  • 3. Agenda (repeated as a section divider)
  • 4. Evolution of the Data Center Application
    Traditional enterprise applications:
    - L2-centric
    - Sensitive to network failures
    - Mostly static
    - VLANs
    - No server virtualization
    - Mostly North-South traffic
    - Lower capacity: 100s to a few thousand endpoints
    Modern data center applications:
    - IP-centric
    - Work around network failures
    - Dynamic
    - Clouds
    - Server virtualization
    - Mostly East-West traffic
    - High capacity: thousands to millions of endpoints
  • 5. Traditional Enterprise DC Network Design. Challenges:
    - Large failure domain: an agg box failure takes out a lot
    - Unscalability of agg boxes: MAC/ARP tables, VLANs
    - Choke point for East-West traffic
    - Complex: HA, too many protocols, many proprietary enhancements (each vendor has its own version of the same feature)
    [Diagram: Access, Aggregation, and Core tiers; L2 with STP/VTP/GVRP/UDLD and VRRP up through Aggregation, L3 with ECMP in the Core]
  • 6. The Network's Function is to Serve the Application's Needs
  • 7. Agenda (repeated as a section divider)
  • 8. Folded CLOS Network
  • 9. Characteristics of a CLOS Network
    - ECMP
    - Ubiquitous IP fabric
    - Better failure handling
    - Predictable latency
    - Simple feature set
    - Scalable
    - L2/L3 boundary
    - ToR vs. EoR design
    [Diagram: LEAF and SPINE tiers]
  • 10. Calculating Network Size
    - 2-tier fabric: for smaller environments
    - 3-tier fabric: for large-scale environments; pods can be of dissimilar size
    [Diagram: TIER-1, TIER-2, TIER-3; LEAF and SPINE layers]
  • 11. Calculating Network Size
    2-tier fabric (m-port leaves, n-port spines):
    - #ports @ToR = (m * n) / 2
    - Max #ports @ToR = 2K with 64-port 10GE switches at ToR/spine
    - Max #ports @ToR = 4608 with 96-port 10GE switches at ToR/spine
    3-tier fabric (switch port counts m, n, o per tier):
    - #ports @ToR = (m * n * o) / 4
    - Max #ports @ToR = 65K with 64-port 10GE switches at ToR/spine/spine
    - Max #ports @ToR = 221K with 96-port 10GE switches at ToR/spine/spine
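The sizing formulas above can be sketched directly. A minimal Python check (function names are mine, not from the deck), assuming m, n, o are the port counts of the switches at each tier:

```python
def two_tier_ports(m: int, n: int) -> int:
    """Max server-facing (ToR) ports in a 2-tier folded CLOS fabric."""
    return (m * n) // 2

def three_tier_ports(m: int, n: int, o: int) -> int:
    """Max server-facing (ToR) ports in a 3-tier folded CLOS fabric."""
    return (m * n * o) // 4

# The deck's examples:
print(two_tier_ports(64, 64))        # 2048   (~2K, 64-port 10GE switches)
print(two_tier_ports(96, 96))        # 4608
print(three_tier_ports(64, 64, 64))  # 65536  (~65K)
print(three_tier_ports(96, 96, 96))  # 221184 (~221K)
```

The deck's rounded figures (2K, 65K, 221K) match the exact values above.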
  • 12. Oversubscription & Such
    - Oversubscription = number of servers : number of uplinks at the ToR; the fabric is non-blocking after this first layer
    - Using Trident with 40 servers per rack: oversubscription is 2.5:1
    - Using Trident2 in the same config: oversubscription can be 1:1
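The ratio on this slide is just downlink bandwidth divided by uplink bandwidth at the ToR. The switch configurations below are my assumptions, chosen to be consistent with the slide's numbers (the deck does not spell out port counts): 40 servers at 10GE with 4x40GE of uplink gives 2.5:1, and with 10x40GE of uplink gives 1:1.

```python
def oversubscription(n_servers: int, server_gbps: float,
                     n_uplinks: int, uplink_gbps: float) -> float:
    """Downlink bandwidth : uplink bandwidth at the first (ToR) layer."""
    return (n_servers * server_gbps) / (n_uplinks * uplink_gbps)

# Assumed Trident-style ToR: 40 x 10GE servers, 4 x 40GE uplinks
print(oversubscription(40, 10, 4, 40))   # 2.5
# Assumed Trident2-style ToR: enough 40GE uplinks to match the servers
print(oversubscription(40, 10, 10, 40))  # 1.0
```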
  • 13. Paganini Variations
  • 14. Size Does Matter: Failure Analysis
    - Fine-grained failure domain
    - Large boxes vs. small boxes
    - Interconnect links
    - Scheduling downtime
    - Trying on new clothes: multi-vendor
  • 15. Agenda (repeated as a section divider). Picture courtesy Nanoer.com @flickr
  • 16. Questions That Affect Routing Protocol Choice
    - What protocol: link state (OSPF/IS-IS) or BGP?
    - Managing IPv4/IPv6: separate session/protocol or unified?
    - Multi-vendor support
    - Deployment experience
  • 17. OSPF
    - Commonly deployed protocol within enterprises
    - Simplify config: only 2 area IDs, backbone and non-backbone; unnumbered interfaces
    - Run OSPFv3 as well if you have IPv6
    - Route summarization possible, but not desired due to non-optimal routing
    [Diagram: backbone area connecting non-backbone areas that all reuse area ID 0.0.0.1]
  • 18. eBGP
    - Simple up-down routing
    - Use private AS numbers
    - Route summarization not possible
    - Interface addresses only
    - Single BGP session for v4/v6, or separate sessions
    [Diagram: private AS numbers assigned per switch and tier: ASx with leaves ASx1 … ASxn, likewise ASy and ASz]
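As a hedged illustration of the scheme above, a Quagga-style fragment for one leaf (Cumulus Linux shipped Quagga, but this exact config is not from the deck; all AS numbers, router IDs, prefixes, and neighbor addresses are invented for the example):

```
! Leaf switch in its own private AS, peering eBGP to two spines
! over interface addresses (numbering is illustrative only)
router bgp 64601
 bgp router-id 10.0.0.11
 neighbor 10.1.1.0 remote-as 64512   ! spine 1
 neighbor 10.1.2.0 remote-as 64513   ! spine 2
 network 10.10.1.0/24                ! advertise the local server subnet
```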
  • 19. iBGP
    - Simple up-down routing
    - No IGP
    - Eliminates the AS-number distraction
    - Use next-hop-self with route reflectors (RR)
    - Single-hop BGP peering, using interface addresses
    - Single BGP session for v4/v6, or separate sessions
    [Diagram: route reflectors (RR) at the spine tiers]
  • 20. Network Virtualization
    - Great fit for modern data center apps
    - Layer complex applications such as clouds as an overlay
    - L2 as a service
    [Diagram: VMs attached to a logical switch]
  • 21. Agenda (repeated as a section divider)
  • 22. Automate Configuration
    - To err is human, to automate divine
    - But traditional networking gear is a black box:
      - The OS functions more like an embedded OS
      - No programmable way to configure the box
      - Primitive, vendor-specific network management tool chain
  • 23. Linux As The Network OS
    - Turn the black box into a white box and use Linux as the network OS
    - Why Linux?
      - Well-established and open API
      - Vibrant community fueling innovation
      - Sophisticated management tool chain
      - Excellent networking support
  • 24. Common Toolset
    - Use server management tools to manage networks: Puppet, Chef, Ansible, or in-house
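As a sketch of the idea, a minimal Ansible playbook that treats a Linux-based switch like any other server (host group, file paths, template, and handler are illustrative assumptions, not from the deck):

```yaml
# Push interface configuration to leaf switches with the same tooling
# used for servers. "ifreload -a" assumes an ifupdown2-style system.
- hosts: leaf_switches
  become: true
  tasks:
    - name: Install interface configuration
      template:
        src: interfaces.j2
        dest: /etc/network/interfaces
      notify: reload networking
  handlers:
    - name: reload networking
      command: ifreload -a
```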
  • 25. Validating Physical Topology
    - Verify connectivity is as per the operator-specified cabling plan
    - User-defined actions on the topology check result; for example, a routing adjacency is brought up only if the physical connectivity check passes
    - Example cabling plan:
      - T1, port1 is connected to M1, port1
      - T1, port2 is connected to M2, port1
      - …
      - M1, port3 is connected to S1, port1
      - M1, port4 is connected to S2, port1
      - …
    [Diagram: spines S1, S2; middle tier M1 … M4; ToRs T1 … T4]
  • 26. ptmd: Prescriptive Topology Manager (https://github.com/CumulusNetworks/ptm)
    - Graphviz: network topology specified via the DOT language, a well-understood graph modeling language with a wide range of supported tools, open source
    - Central management tool: the network topology is pushed out to all nodes; each node determines its relevant information
    - LLDP: use the discovery protocol to verify connectivity
    Sample topology in DOT:
      graph G {
        S1:p1 -- M1:p3;  S1:p2 -- M2:p3;  S1:p3 -- M3:p3;  S1:p4 -- M4:p3;
        S2:p1 -- M1:p4;  S2:p2 -- M2:p4;  S2:p3 -- M3:p4;  S2:p4 -- M4:p4;
        M1:p1 -- T1:p1;  M1:p2 -- T2:p2;
        …
        M4:p2 -- T4:p2;
      }
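The core of this kind of verification can be shown in a few lines: compare the expected cabling plan (the DOT edges) against neighbors observed via LLDP. This is a toy sketch of the idea, not ptmd's implementation; the observed data is hand-written, where a real tool would query the LLDP daemon.

```python
# Expected edges from the cabling plan: local (node, port) -> peer (node, port)
expected = {
    ("S1", "p1"): ("M1", "p3"),
    ("S1", "p2"): ("M2", "p3"),
    ("M1", "p1"): ("T1", "p1"),
}

# What LLDP reports (hand-written here for illustration)
observed = {
    ("S1", "p1"): ("M1", "p3"),   # matches the plan
    ("S1", "p2"): ("M3", "p3"),   # miswired: should go to M2
    ("M1", "p1"): ("T1", "p1"),
}

def check_topology(expected, observed):
    """Return a list of (local_port, expected_peer, actual_peer) mismatches."""
    errors = []
    for port, want in expected.items():
        got = observed.get(port)
        if got != want:
            errors.append((port, want, got))
    return errors

for port, want, got in check_topology(expected, observed):
    print(f"{port}: expected {want}, saw {got}")
```

A follow-on action, as the previous slide suggests, would be to bring up a routing adjacency on a port only when its check passes.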
  • 27. Conclusion
    - A CLOS fabric as the foundation for modern data center networks
    - Layer complex applications such as clouds on top with overlays
    - Automate configuration and simplify networking
    - Linux as the network OS, to use sophisticated management tools
    - Simplify networking further with tools such as ptmd
  • 28. Spasibo! (Thank you!)
    Web: www.cumulusnetworks.com
    Email: ddutt@cumulusnetworks.com
    Twitter: @cumulusnetworks