Fabric for the Next Generation Data Centre

The transition to the modern virtualised data centre is reshaping the compute environment and placing new
demands on the data centre network. Traditional complex, tiered network designs can no longer meet the real-time
demands of the dynamic, cloud-based data centre. This requires a fundamental rethink of the underlying
network architecture; the solution lies with fabric-based networks. However, not all fabrics are created equal.

Speaker Notes

  • Read http://juniper.net/us/en/local/pdf/whitepapers/2000327-en.pdf in conjunction with this deck. It is also useful collateral to leave with customers following this presentation.
  • Background: The first 4 slides outline how most of the infrastructure in the Data Center has evolved, while the network design has remained unchanged and is now outdated and no longer fit for purpose. Note: the physical infrastructure (servers/storage) and applications will have a much higher focus for customers than networking infrastructure. An underlying message through this presentation should be that having the right type of network will maximise the efficiency of these other areas. Content: Within many DCs (see DC taxonomy info for more detail on which types of customers), servers and storage have been both consolidated and virtualised. (Note: While storage has been consolidated for quite a long time, virtualising that storage resource is still relatively new; see 'thin provisioning', http://en.wikipedia.org/wiki/Thin_provisioning.) This creates pools of resource within the DC, increasing the efficiency (reducing cost/power/space etc.) of that resource. (After 1st build) With SRX, security and other network services can also be consolidated and virtualised, with similar benefits. (After 2nd build) A single high-performance network is now needed to interconnect these new resource pools, which is the focus of this presentation.
  • Content: As well as the physical infrastructure, the applications have also evolved. In the older client–server architecture the complete compute process was contained within a server, and the majority of traffic flowing within a DC passed between client and server (referred to as 'north/south' traffic from a network perspective). Now, to improve both application scalability and reliability, SOA distributes an application across multiple servers, each running dedicated routines within that application. Data passing between these routines needs to cross the DC network ('east/west' traffic); for many customers this accounts for over 75% of the traffic in the DC. Storage traffic, database traffic, virtualisation (VM) traffic and server mash-ups are all additional examples of new east/west traffic.
  • Content: This slide explains how businesses address the issue of managing this complexity by scaling back to keep within reasonable limits (either limited by the ability to fix faults in a reasonable time frame or, for an SP, to stay within SLA). The "choice" is either to scale to take full advantage of virtualisation and the server/app evolution discussed earlier, or to constrain the design into manageable silos. Narrative: Scalability, ideally, is the ability to add capacity without adding complexity. The red line represents adding more capacity to the data center network; ideally the incremental operational complexity associated with that capacity is either zero or very small. What you see here is the ideal environment. [BUILD]: Unfortunately that is not reality. Today's reality is that as you add more resources to today's data center network, it becomes exponentially more complex. [BUILD]: At some point, to limit that complexity, you will limit the size of the network itself. (The first sketch after these notes makes the quadratic growth concrete.) Further reading: http://forums.juniper.net/t5/Architecting-the-Network/The-Mathematics-of-Complexity-You-Can-t-Just-Mask-It-You-Must/ba-p/39132
  • Background: These next 3 slides introduce the concept of a 'FABRIC' as the ideal network architecture for the DC. Importantly, this also defines 'FABRIC', as many of our competitors have started using the term fabric in their marketing while only partly meeting the requirements for a network fabric. (See the Yankee report for more detail on what defines a network fabric.) Content: This slide starts by showing the traditional 3-layer hierarchical DC network (with all the limitations mentioned earlier). Clicking through shows that while this architecture was suited to the client–server networks of the past, for modern server architectures in an ideal world each device would be directly connected to every other device. (While this might be possible for a few devices, it clearly wouldn't be practical at any scale.) Key point: We need a FLAT network infrastructure, where every port (device) is (or appears to be) directly connected to every other port. A true network FABRIC provides this 'any-to-any' connectivity; i.e. claiming an architecture is a 'fabric' requires more than just creating a single point of management.
  • Content: A single Ethernet switch provides flat, any-port-to-any-port connectivity. A true network fabric requires that, in addition to being flat, it also operates and is managed as a single device, with all ports sharing a common state so that any packet only needs to be processed once to reach its final destination (single hop). A single Ethernet switch meets the requirements of a network fabric. This slide continues to build, demonstrating that a single switch has limitations of scale (which is one of the main reasons the hierarchical LAN architecture was adopted in the DC). Key messages: "Fabric" is often used by our competition to describe their control-plane-only solutions. This only simplifies management; it doesn't provide the single-hop, any-to-any performance benefits also required of a DC network fabric. Further reading: http://forums.juniper.net/t5/Architecting-the-Network/Not-All-Fabrics-Are-Created-Equal/ba-p/40121
  • Content: This slide introduces the concept of the network fabric as the 'ideal network' to span the complete DC. This would provide both the 'performance' and 'simplicity' of a fabric (single switch), and encompasses Juniper's vision for the data center network.
  • Content: This slide explains the dramatic levels of simplification that a 2-layer fabric-based architecture can deliver. Earlier slides explained the link between interactions and complexity. Technical messages: The X-axis shows the number of ports, while the Y-axis shows the number of managed devices required to connect those ports. The black line is for a conventional hierarchical DC network, and the blue line is the resulting number of potential interactions. The lower lines are the equivalents for connecting the ports with a two-layer network-fabric-based solution. Business values: Such dramatic simplification will drive both operational cost savings and improved network reliability (plus the application performance improvements previously mentioned). (The second sketch after these notes checks the quoted reductions.)
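
The complexity argument in these notes comes down to pairwise interactions between autonomous devices. Here is a minimal sketch (Python, purely illustrative; the `interactions` helper is our own shorthand, not something from the deck) of how interactions grow with device count:

```python
def interactions(devices: int) -> int:
    """Potential pairwise interactions between N autonomous managed devices."""
    return devices * (devices - 1) // 2  # N*(N-1)/2, the formula on the complexity slide

for n in (7, 50, 100, 400):
    print(f"{n:4} devices -> {interactions(n):6} potential interactions")
# 7 devices -> 21 (the slide's example); 400 devices -> 79800.
```

Devices grow linearly while potential interactions grow quadratically, which is the steep complexity curve the notes describe.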

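Because interactions scale as N*(N-1)/2, cutting the number of managed devices by a fraction r cuts interactions by roughly 1 - (1 - r)^2 for large N. A second minimal sketch (Python, illustrative; only the percentage reductions are taken from the deck, the helper is our own) checks that the reductions quoted on the "Fabrics reduce complexity" slide are self-consistent:

```python
def interaction_reduction(device_reduction: float) -> float:
    """Approximate interaction reduction implied by a device reduction (large-N limit)."""
    return 1.0 - (1.0 - device_reduction) ** 2

for ports, fewer_devices in ((1000, 0.83), (6000, 0.89)):
    print(f"{ports} ports: {fewer_devices:.0%} fewer devices -> "
          f"~{interaction_reduction(fewer_devices):.0%} fewer interactions")
# 1000 ports: 83% fewer devices -> ~97% fewer interactions
# 6000 ports: 89% fewer devices -> ~99% fewer interactions
```
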
Fabric for the Next Generation Data Centre – Presentation Transcript

  • FABRIC FOR THE NEXT GENERATION Data Centre
    Andy Jolliffe – Data Centre Strategy
    Oct 2010
  • Agenda
    Evolution within the Data Centre
    Network Fabrics for Data Centre
    Simplifying the Data Centre Network
    Juniper’s Vision
  • The servers and storage evolved
    Servers were consolidated, standardized and virtualized
    Storage was consolidated and virtualized
    Network services can be consolidated and virtualized
    A single network to integrate the resource pools
  • The Applications evolved
    Client–Server Architecture vs Service-Oriented Architecture
    [Diagram: in the client–server model ~95% of traffic flows north/south between client and server; with SOA, application routines (A, B, C, D, DB) are spread across servers, so up to 75% of traffic flows east/west between servers and only ~25% remains client-facing]
    A fundamental change in data flows
  • The network architecture has not changed
    Complex, inefficient, expensive:
    • High CapEx / OpEx
    • Constrains Virtualisation
    • Appliance sprawl
    • Multiple networks
    • Limited scalability
    • High Latency
    • Spanning Tree
    [Diagram: three tiers of L2/L3 switches with separate security appliances (SSL VPN, firewall, IPsec VPN, IPS) connecting servers, NAS storage, a cluster network and an FC SAN; up to 75% of traffic flows east/west]
  • 3 PROBLEMS WITH DATA CENTRE NETWORKS TODAY
    Today’s challenges:
    • Too slow
    • Too expensive
    • Too complex
    [Diagram: data centre where up to 75% of traffic flows east/west rather than north/south]
  • #1: TOO SLOW
    Every extra “hop” adds latency
    Every hop adds to the potential for congestion – inconsistent performance
    Application behavior is impacted
    Up to 75% of traffic flows east/west
    Solution: a flat, any-to-any network
  • #2: TOO EXPENSIVE – challenges of efficiency
    Up to 50% of the ports interconnect switches, not servers or storage
    Up to 50% of the bandwidth is disabled by spanning tree
    Up to 30% of the network spend can be avoided – eliminating $1B of annual spend worldwide
    Solution: eliminate STP, reduce tiers
  • #3: COMPLEXITY – a function of devices + interactions (operational complexity)
    Number of managed devices: each switch is autonomous (here, 7 managed devices)
    Number of potential interactions: shared protocols create N*(N-1)/2 interactions, where N = number of managed devices (7 devices → 7×6/2 = 21 potential interactions)
  • CHALLENGES OF SCALE
    Today’s data centre network forces a choice
    Scalability: the ability to add capacity without adding complexity
    [Chart: complexity vs capacity – ideally complexity stays near-flat as capacity is added; in today’s reality complexity grows steeply with capacity, imposing limits of scale]
  • DEFINING THE IDEAL NETWORK
    From the typical tree configuration to a FABRIC: flat – any port connected to any port
  • DEFINING THE IDEAL NETWORK
    Fabric – definition:
    • Control: managed as a single device; all ports have shared state
    • Data forwarding: flat, any-to-any; each packet processed only once
    A single switch meets this definition (switch = fabric): FABRIC = PERFORMANCE (flat – any port connected to any port) and FABRIC = SIMPLICITY (the simplicity of a single switch)
    But a single switch does not scale
  • DEFINING THE IDEAL NETWORK – A FABRIC
    A Network Fabric has the…
    • PERFORMANCE of a fabric: flat – any port connected to any port; single lookup for reduced latency
    • SIMPLICITY of a single switch: control – managed as a single device, all ports have shared state; data forwarding – flat, any-to-any, each packet processed only once
    • Scalability of a network
    FABRIC = SIMPLICITY + PERFORMANCE
  • Fabric at the access layer
    [Diagram: the three-tier design with an EX4200 fabric replacing the L2 access switches; the L2/L3 core and aggregation switches and an SRX5800 connect servers, NAS storage and the FC SAN]
  • IMPROVING APPLICATION PERFORMANCE
    Cross-rack traffic through the router/switch tiers: up to 160µs
    Cross-rack traffic through an EX4200 access-layer fabric: ~2.6–17µs
    Benefits for: VMotion, database calls, SOA, server mash-ups
    [Diagram: two racks of servers running VMware hypervisors with multiple VMs (App 1–5 on guest O/S instances, with some unused capacity); east/west traffic takes a single hop through the EX4200s instead of traversing the core routers/switches]
  • Collapse core & aggregation layers
    [Diagram: an EX8216 collapses the separate L2/L3 core and aggregation tiers, connecting the EX4200 access layer, the SRX5800, servers, NAS storage and the FC SAN]
"Juniper's simplified data center approach will allow us to deploy a complete 10 Gigabit Ethernet network with ultra-low latency at a substantial cost savings," said Steve Rubinow, executive vice president and co-global CIO of NYSE Euronext. "Juniper has developed truly unique and innovative technologies that help us to deploy a very high capacity, low latency network that meets the stringent demands of the new data center. With Juniper, we are able to dramatically cut the cost and complexity of managing our data center network today, while continuing to enhance our competitive position with a next-generation data center fabric that will enable us to scale to tens of thousands of 10GbE ports. With such an elastic and efficient infrastructure, we can provide enhanced functionality to our customers at unmatched scale while minimizing total cost of ownership."
    Case study: New York Stock Exchange: New Services Rollout
  • FABRIC AT THE CORE
    2011 – Virtual Chassis in the core:
    • EX8200 and MX-3D
    • Eliminates STP and VRRP
    • Across L2 in the data center
    • Highly resilient architecture
    • Available early next year
    MX Series provides MPLS/VPLS data centre interconnect and eliminates the need to run spanning tree
    [Diagram: EX8216/EX8208 core fabric with SRX5800 security and EX4500/EX4200 access, connecting servers, NAS, FCoE and FC SAN storage]
  • Fabrics reduce complexity
    Interactions = N*(N-1)/2, where N = number of managed devices
    At 1,000 ports: 83% reduction in managed devices, 97% reduction in interactions
    At 6,000 ports: 89% reduction in managed devices, 99% reduction in interactions
    [Chart: managed devices (0–400) and potential interactions (0–10,000) against number of ports (0–6,000), comparing a hierarchical network with a Virtual Chassis fabric]
  • JUNOS Operating System
    One OS: single source code base; consistent implementation of features
    One release: single software release track of feature supersets (frequent releases: 10.0, 10.1, 10.2); stable, predictable development of new features
    One architecture: modular software with resource separation (Module –API– Module); highly available, secure and scalable software
    Runs across switches (EX2200, EX3200, EX4200, EX4500, EX8208, EX8216), routers (J Series, M Series, MX Series, T Series, TX Matrix) and security platforms (SRX100, SRX210, SRX240, SRX650, SRX3600, SRX5600, SRX5800, NSMXpress)
    Video: Why is Junos different?
  • 2-TIER FABRIC NETWORK
    [Diagram: two-tier fabric – EX8216/EX8208 core with MX Series MPLS/VPLS data centre interconnect, SRX5800 security and EX4500/EX4200 access, connecting servers, NAS, FCoE and FC SAN storage]
  • JUNIPER VISION – SIMPLIFY
    DATA CENTER FABRIC – the Stratus Project
    [Diagram: a single data centre fabric with MX Series MPLS/VPLS data centre interconnect and SRX5800 security]
    A very large distributed L2/L3 switch that runs…
  • STRATUS AT A GLANCE
    Scale: 10s to 10s of thousands of ports
    Performance: flat and non-blocking
    Simplicity: N = 1
    Designed for the modern DC: virtualization and convergence
    Juniper Confidential
  • Legacy architecture vs Stratus
    Today’s challenges:
    • Too slow
    • Too expensive
    • Too complex
    The Stratus solution:
    • Flat, any-to-any fabric
    • Inherently lower TCO
    • Massive scale, N = 1
  • How is Stratus different from other vendors’ “fabrics”?
    Project Stratus – building a fabric to behave like a single switch:
    • L2 & L3
    • Network fabric data plane (flat, any-to-any): Yes
    • Control plane (single device, shared state): Yes
    Other vendor fabrics (using TRILL) – making multiple switches try to behave like a fabric:
    • L2 only
    • Network fabric data plane (flat, any-to-any): No
    • Control plane (single device, shared state): No
  • Integrating the Stratus fabric
    [Diagram: Stratus fabric pods (Pod 1, Pod 2) integrated with the MX Series, EX8216, SRX5800 and EX4200]