White Paper




T Series Core Router
Architecture Overview
High-End Architecture for Packet Forwarding
and Switching








                        Table of Contents
                        Executive Summary
                        Nine Years of Industry-Leading Core Routing
                        Forwarding Plane Design Objectives
                        Architectural Framework and Components
                            PIC
                            FPC
                            Switch Fabric
                            Routing Engine
                        T Series Forwarding Path Architecture
                        Data Flow Through the 3D FPC on the T4000 Router
                        Data Flow Through the Enhanced Scaling (ES) FPCs on T Series Routers
                        T Series Switch Fabric Architecture
                        Switch Fabric Operation
                        Multichassis Switch Fabric (TX Matrix and TX Matrix Plus)
                            CLOS Topology
                            Juniper Networks Implementation
                                Multichassis System Fabric
                        Conclusion
                        About Juniper Networks




                        List of Figures
                        Figure 1. T Series scaling by slot and chassis (including the JCS1200)
                        Figure 2. Scalable performance in multiple dimensions
                        Figure 3. T Series routing platform architecture
                        Figure 4. T Series router components
                        Figure 5. Data flow through the T4000 router
                        Figure 6. PFE and switch fabric using the T Series chipset (“to fabric” direction)
                        Figure 7. Packet flow from fabric to egress PFE
                        Figure 8. Five switch fabric planes for the T Series
                        Figure 9. Multicast and unicast each utilizing fabric queues in T Series routers
                        Figure 10. Three-stage CLOS topology
                        Figure 11. T Series multichassis switch fabric planes
                        Figure 12. Multichassis system high-level overview
                        Figure 13. Switch-card chassis to line-card chassis interconnections








                         Executive Summary
                         Juniper Networks® T Series Core Routers have been in production since 2002, with the introduction of the Juniper
                         Networks T640 Core Router. Since that time, T Series routers have evolved to maintain an unequivocal industry lead in
                         capacity (slot, chassis, and system) and operational efficiency in power and usability. Maintaining this lead has been
                         possible in part because of design decisions made in the very first T Series system.

                         Nine Years of Industry-Leading Core Routing
                         In April 2002, Juniper Networks began shipping the first T Series routing platform: the T640 Core Router. The T640 is a
                         carrier-class, multichassis-capable core routing platform that supports high-density 10 Gbps (OC-192c/STM-64 and
                         10-Gigabit Ethernet) to 40 Gbps (OC-768c/STM-256 and 40-Gigabit Ethernet) interfaces.
                         In July 2002, Juniper Networks began shipping the second T Series platform: the Juniper Networks T320 Core Router.
                         The T320 is a carrier-class, single-chassis core router that has a smaller form factor than the T640 and a lower entry
                         price point.
                         In 2004, Juniper offered the first multichassis core routing system with the Juniper Networks TX Matrix, supporting 3.2
                         Tbps in a four-chassis system.
                         Then in 2007, Juniper announced and released the industry’s first 100 Gbps/slot system, the Juniper Networks T1600
                         Core Router, a multichassis-capable routing platform that was designed—taking advantage of the original switch plane
                         architecture of the T640—to be upgradeable in service from the T640.
                         In 2009, Juniper introduced the Juniper Networks TX Matrix Plus, a central switching and routing element that connects
                         up to 4 T1600 routing chassis into a single routing entity: a 6.4 Tbps system.
                         In 2011, Juniper introduced the Juniper Networks T4000 Core Router, a 350 Gbps/slot-capable system that significantly
                         extends T Series system capacity and protects customers’ investment in the T Series by allowing existing T640 and
                         T1600 systems to be upgraded to the T4000.
                         All T Series platforms use the Juniper Networks Junos® operating system and T Series ASICs to provide the ease of use,
                         performance, reliability, and feature richness that service providers have come to expect from all Juniper Networks
                         products. The T Series Core Routers—T320, T640, T1600, T4000, TX Matrix, and TX Matrix Plus—provide the
                         ingredients for high-end and core networks of the future, especially when controlled by the Juniper Networks JCS1200
                         Control System.
                         Figure 1 illustrates the industry-leading scaling characteristics of the T Series on the forwarding and control planes.


                              Platform (year)                        Slot capacity    10 Gbps ports   40 Gbps ports   100 Gbps ports
                              T640, single chassis (2002)            40 Gbps/slot     40              8               -
                              TX Matrix, multichassis (2004)         40 Gbps/slot     160             32              -
                              T1600, single chassis (2007)           100 Gbps/slot    80              16              8
                              TX Matrix Plus, multichassis (2009)    100 Gbps/slot    320             64              32
                              T4000, single chassis (2011)           350 Gbps/slot    208             16              16

                              All of these platforms can be placed under the control of the JCS1200.

                                                           Figure 1. T Series scaling by slot and chassis (including the JCS1200)

                         This paper provides a technical introduction to the architecture of T Series Core Routers, both single-chassis and
                         multichassis systems. It describes the design objectives, system architecture, packet forwarding architecture, single-
                         chassis switch fabric architecture, and multichassis switch fabric architecture.
                         For an explanation of the core virtualization capabilities available with the integration of the JCS1200, see the
                         References section in Appendix A, particularly the paper entitled Virtualization in the Core of the Network at
                         www.juniper.net/us/en/local/pdf/whitepapers/2000299-en.pdf.







                        Forwarding Plane Design Objectives
                        Juniper’s architectural philosophy is based on the premise that multiservice networks can be viewed as three-
                        dimensional systems containing forwarding, control, and service dimensions:
                        •	 Forwarding (“how you move the bits”)
                        •	 Control (“how you direct the bits”)
                        •	 Software/service (“how you monetize the bits”)
                        For a network to be a converged, scalable, packet core infrastructure, it must scale in all these dimensions (Figure 2):




                                  [Figure: three orthogonal scaling axes: Control (JCS1200), Forwarding (T1600), and Service (PSDP)]

                                                           Figure 2. Scalable performance in multiple dimensions

                        In the forwarding plane, traffic growth is the key driver of the core router market. As the global economy becomes
                        increasingly networked and dependent upon the communications infrastructure, traffic rates continue to balloon—
                        growing 70 to 80 percent a year by most estimates—and high-density core routing remains critical.
                        Furthermore, the importance of the control plane cannot be overlooked. A router’s control plane must scale to
                        accommodate ever-growing routing and forwarding tables, service tunnels, virtual networks, and other information
                        related to network configuration and management.
                        Finally, the importance of the service plane becomes clear when considering the requirements of an increasingly
                        disparate and global marketplace. These changing market dynamics create pressure for greater network innovation
                        and a much deeper integration between applications and the network.
                        T Series routing platforms are designed with this philosophy in mind; the main focus of this paper is the forwarding
                        plane of the router. The T Series routing platforms were developed to support eight key design objectives:
                        •	 Packet forwarding performance
                        •	 Bandwidth density
                        •	 IP service delivery
                        •	 Multichassis capability
                        •	 High availability (HA)
                        •	 Single software image
                        •	 Security
                        •	 Power efficiency
                        Juniper Networks leads the industry in all of these categories, as the white papers in the References section illustrate.








                         Architectural Framework and Components
                         Figure 3 illustrates the high-level architecture of a T Series routing platform. A T Series platform uses a distributed
                         architecture with packet buffering at the ingress Packet Forwarding Engine (PFE) before the switch fabric, as well as
                         packet buffering at the egress PFE before the output port.
                         As a packet enters a T Series platform from the network, the ingress PFE segments the packet into cells, the cells are
                         written to ingress memory, a route lookup is performed, and the cells representing the packet are read from ingress
                         memory and sent across the switch fabric to the egress PFE. When the cells arrive at the egress PFE, they are written
                         to egress memory; an egress lookup is performed; and the cells representing the packet are read out of egress
                         memory, reassembled into a packet, and transmitted on the output interface to the network.
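
                         The segmentation and reassembly at the heart of this flow can be sketched in a few lines of Python. The sketch below
                         is purely illustrative (it is not Juniper code), and it borrows the 64-byte cell size cited later for the T4000 data flow.

```python
CELL_SIZE = 64  # bytes; cell size cited in the T4000 data flow section

def segment(packet: bytes) -> list[bytes]:
    """Ingress PFE: slice a packet into fixed-size cells before the fabric."""
    return [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]

def reassemble(cells: list[bytes]) -> bytes:
    """Egress PFE: rebuild the packet from cells read out of egress memory."""
    return b"".join(cells)

# Round-trip check: segmentation and reassembly are lossless.
packet = bytes(1500)  # a 1,500-byte example packet becomes 24 cells
assert len(segment(packet)) == 24
assert reassemble(segment(packet)) == packet
```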


                                             [Figure: rows of packet processing/buffer stages on both the ingress and egress sides
                                             of a central switch]

                                                                  Figure 3. T Series routing platform architecture

                         Drilling more deeply into the components, a T Series system consists of four major components: PICs, Flexible PIC
                         Concentrators (FPCs), the switch fabric, and one or more Routing Engines (Figure 4).


                                                [Figure: two Routing Engines above a row of FPCs, each FPC hosting a PIC,
                                                interconnected by the switching planes]

                                                                       Figure 4. T Series router components

                         These components are discussed in more detail in the following sections.

                         PIC
                         The PICs connect a T Series platform to the network and perform both physical-layer and link-layer packet processing.
                         They perform all of the functions required for the routing platform to receive packets from the network and transmit
                         packets to the network. Many PICs, such as the Juniper IQ PICs, also perform intelligent packet processing.

                         FPC
                         Each FPC can contain one or more PFEs. For instance, a T4000 240 Gbps slot supports two 120 Gbps PFEs. Logically,
                         each PFE can be thought of as a highly integrated packet-processing engine using custom ASICs developed by Juniper
                         Networks. These ASICs enable the router to achieve data forwarding rates that match fiber optic capacity. Such high
                         forwarding rates are achieved by distributing packet-processing tasks across this set of highly integrated ASICs.
                         When a packet arrives from the network, the ingress PFE extracts the packet header, performs a routing table lookup
                         and any packet filtering operations, and determines the egress PFE connected to the egress PIC. The ingress PFE
                         forwards the packet across the switch fabric to the egress PFE. The egress PFE performs a second routing table
                         lookup to determine the output PIC, and it manages any egress class-of-service (CoS) and quality-of-service (QoS)
                         specifications. Finally, the packet is forwarded to the network.
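
                         The division of labor between the two lookups can be modeled as two table consultations. The following is a minimal
                         sketch with hypothetical table contents; the real lookups run in the ASICs and carry filtering and CoS state not
                         shown here.

```python
# Hypothetical tables keyed by destination prefix, for illustration only.
INGRESS_FIB = {"203.0.113.0/24": "pfe-6"}  # prefix -> egress PFE
EGRESS_FIB = {"203.0.113.0/24": ("pic-2/port-0", "expedited-forwarding")}

def ingress_lookup(prefix: str) -> str:
    """First lookup (ingress PFE): choose the egress PFE behind the egress PIC."""
    return INGRESS_FIB[prefix]

def egress_lookup(prefix: str) -> tuple[str, str]:
    """Second lookup (egress PFE): choose the output PIC and the CoS treatment."""
    return EGRESS_FIB[prefix]

assert ingress_lookup("203.0.113.0/24") == "pfe-6"
```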








                        Switch Fabric
                        The switch fabric provides connectivity between the PFEs. In a single-chassis system, the switch fabric provides
                        connectivity among all of the PFEs residing in the same chassis. In a multichassis system, the switch fabric provides
                        connectivity among all of the PFEs in the different chassis of the routing node cluster. In either a single-chassis or a
                        multichassis system, each PFE is logically adjacent to every other PFE connected to the switch fabric.

                        Routing Engine
                        The Routing Engine executes the Junos OS and creates the routing tables that are downloaded into the lookup ASICs
                        of each PFE. An internal Ethernet connects the Routing Engine to the other subsystems of a T Series platform. Each
                        subsystem includes one or more embedded microprocessors for controlling and monitoring the custom ASICs, and
                        these microprocessors are also connected to the internal Ethernet.

                        T Series Forwarding Path Architecture
                        The function of the PFE can be understood by following the flow of a packet through the router: first into a PIC, then
                        through the switching fabric, and finally out another PIC for transmission on a network link. Generally, the data flows
                        through the PFE as follows:
                        •	 Packets enter the router through incoming PIC interfaces, which contain controllers that perform media-specific
                           processing (and optionally intelligent packet processing).
                        •	 PICs pass the packets to the FPCs, where they are divided into cells and are distributed to the router’s buffer memory.
                        •	 The PFE performs a route lookup, reads the cells from buffer memory, and forwards them over the switch fabric to
                           the destination PFE, where they are reassembled into packets and sent to the destination port on the outgoing PIC.
                        •	 The PIC performs encapsulation and other media-specific processing, and it sends the packets out into the network.
                        The use of highly integrated ASICs is critical to the delivery of industry-leading forwarding performance and
                        packet processing. The T Series chipset is specifically designed and developed to represent the state of the art in
                        carrier-class routers.

                        Data Flow Through the 3D FPC on the T4000 Router
                        3D refers to the newer Type 5 FPC introduced with the T4000 system. To move data efficiently through the 3D FPC
                        on the T4000 router, the router is designed so that ASICs on the hardware components handle all data forwarding.
                        Data flows through the 3D FPC on the T4000 router in the following sequence (see Figure 5):



                                             [Figure: packets enter and leave through PICs attached to LAN/WAN Interface and
                                             Buffering ASICs; each interface ASIC pairs with a Lookup and Packet-Processing ASIC
                                             and reaches the Switch Fabric through a Switch Interface ASIC across the midplane]

                                                               Figure 5. Data flow through the T4000 router







                         1.	 Packets arrive at an incoming PIC interface.
                         2.	 The PIC passes the packets to the FPC, where the Interface ASIC performs pre-classification and sends the packet
                             header to the Packet-Processing ASIC, which performs the Layer 2 and Layer 3 lookup. The Packet-Processing
                             ASIC is also capable of Layer 4 through Layer 7 packet processing.
                         3.	 The Interface ASIC receives the modified header from the Packet-Processing ASIC, updates the packet, and
                             divides it into 64-byte cells.
                         4.	 The Interface ASIC sends these 64-byte cells to the Switch Fabric via the fabric-facing Switch Interface ASIC,
                             unless the destination is on the same Packet Forwarding Engine. In that case, the Interface ASIC sends the
                             packets to the outgoing port without passing them through the Switch Fabric.
                         5.	 The Interface ASIC sends bandwidth requests through the Switch Fabric to the destination port.
                         6.	 The destination Interface ASIC sends bandwidth grants through the Switch Fabric to the originating Interface ASIC.
                         7.	Upon receipt of each bandwidth grant, the originating Interface ASIC sends a cell through the Switch Fabric to the
                            destination Packet Forwarding Engine.
                         8.	 The destination Interface ASIC receives cells from the Switch Fabric, reorders the data received as cells and
                             reassembles into packets, and passes the header to the Packet-Processing ASIC.
                         9.	 The Packet-Processing ASIC performs the route lookup, adds the Layer 2 encapsulation, and sends the header
                             back to the Interface ASIC.
                         10.	 The Interface ASIC appends the modified header received from the Packet-Processing ASIC and sends the packets
                              to the outgoing PIC interface.
                         11.	 The outgoing PIC sends the packets out into the network.
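
                         Step 4’s local-switching shortcut is worth making explicit: cells enter the fabric only when the destination port sits
                         behind a different PFE. The following minimal sketch uses stand-in callables for the two output paths (illustrative
                         only, not the ASIC logic):

```python
def forward_cells(cells, src_pfe, dst_pfe, send_local, send_fabric):
    """Step 4 above: bypass the Switch Fabric entirely when the source and
    destination ports share a Packet Forwarding Engine."""
    if src_pfe == dst_pfe:
        send_local(cells)            # same PFE: the fabric is never touched
    else:
        send_fabric(dst_pfe, cells)  # different PFE: request/grant path, steps 5-7

# Example with stand-in output functions:
forward_cells([b"\x00" * 64] * 3, src_pfe=1, dst_pfe=1,
              send_local=lambda c: print(f"locally switched {len(c)} cells"),
              send_fabric=lambda d, c: print(f"{len(c)} cells to PFE {d}"))
```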

                         Data Flow Through the Enhanced Scaling (ES) FPCs on T Series Routers
                         This chipset includes the following ASICs:
                         •	 L2/L3 Packet-Processing ASIC (L2/3)
                         •	 Switch Interface ASIC
                         •	 T Series Lookup Processor ASIC
                         •	 Queuing and Memory Interface ASIC (Q&M)
                         The T Series ASICs leverage the conceptual framework, many of the building blocks, and the extensive operational
                         experience gained from the Juniper Networks M Series Multiservice Edge Router chipset.
                         Figure 6 demonstrates data flow through a T Series routing node by illustrating how the T Series chipset is arranged to
                         implement a single instance of a PFE.

                                  [Figure: on the ingress FPC, WAN traffic enters through the PIC’s Layer 1 ASIC (1), passes through
                                  the L2/L3 Packet Processing ASIC (2) to the network-facing Switch Interface ASIC (3, 4), which
                                  feeds the T Series Lookup Processor (5) and the Queuing and Memory Interface ASIC, and then the
                                  fabric-facing Switch Interface ASIC toward the SIB (6); the callouts match the steps below]

                                               Figure 6. PFE and switch fabric using the T Series chipset (“to fabric” direction)







                        In this example, data flows in the following sequence:
                        1.	 Packets enter through an incoming PIC, which contains the Layer 1 interface chips, and are passed to the PFE on the
                            originating FPC. The PIC is connected to the PFE on the FPC via a high-speed link (HSL).
                        2.	 The Layer 2/Layer 3 packet-processing ASIC parses the packets and divides them into cells. In addition, a behavior
                            aggregate (BA) classifier determines the forwarding treatment for each packet.
                        3.	 The network-facing Switch Interface ASIC places the lookup key in a notification and passes it to the T Series
                            Lookup Processor.
                        4.	 The Switch Interface ASIC also passes the data cells to the Queuing and Memory Interface ASICs for buffering on
                            the FPC.
                        5.	 The T Series Lookup Processor performs the route lookup and forwards the notification to the Queuing and Memory
                            Interface ASIC. In addition—if configured—filtering, policing, sampling, and multifield classification are performed at
                            this time.
                        6.	 The Queuing and Memory Interface ASIC sends the notification to the switch-fabric-facing Switch Interface ASIC,
                            which issues read requests to the Queuing and Memory Interface ASIC to begin reading data cells out of memory.
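
                        Note that in steps 2 through 6 the payload cells and the lookup state travel separately: cells go to buffer memory
                        while a small notification carries the lookup key and cell pointers through the lookup and queuing stages. The sketch
                        below models that split; the data structures are illustrative, as the actual ASIC formats are not public.

```python
from dataclasses import dataclass, field

@dataclass
class Notification:
    """Illustrative stand-in for the notification: the lookup key plus
    pointers to where the packet's cells are buffered."""
    lookup_key: bytes
    cell_addresses: list[int] = field(default_factory=list)

buffer_memory: dict[int, bytes] = {}  # models the Queuing and Memory Interface ASIC

def ingress_split(packet: bytes, base_addr: int) -> Notification:
    """Steps 2-4 above: cells go to buffer memory, the notification toward lookup."""
    note = Notification(lookup_key=packet[:40])  # header fields only
    cells = [packet[i:i + 64] for i in range(0, len(packet), 64)]
    for offset, cell in enumerate(cells):
        buffer_memory[base_addr + offset] = cell
        note.cell_addresses.append(base_addr + offset)
    return note  # the T Series Lookup Processor sees only this, never the cells
```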
                        The destination FPC is shown in the following figure.

                                  [Figure: on the egress FPC, cells arrive from the SIB at the fabric-facing Switch Interface
                                  ASIC (7), which feeds the T Series Lookup Processor (8) and the Queuing and Memory Interface
                                  ASIC; the network-facing Switch Interface ASIC (9) passes cells to the L2/L3 Packet Processing
                                  ASICs (10) and out through the PICs’ Layer 1 ASICs to the WAN (11)]

                                                                   Figure 7. Packet flow from fabric to egress PFE

                        7.	 Cells representing packets are received from the Switch Interface ASIC connected to the fabric. The cells are
                            handed off to the Queuing and Memory Interface ASIC for buffering. Lookup keys and data pointers are sent to the
                            T Series Lookup Processor.
                        8.	 The T Series Lookup Processor performs the route lookup and forwards the notification to the Queuing and Memory
                            Interface ASIC, which forwards it to the network-facing Switch Interface ASIC.
                        9.	 The Switch Interface ASIC sends requests to the Queuing and Memory Interface ASIC to read the data cells out of
                            memory, and it passes the cells to the Layer 2/Layer 3 packet-processing ASIC.
                        10.	 The Layer 2/Layer 3 packet-processing ASIC reassembles the cells into packets, performs the necessary Layer 2
                             encapsulation, and sends the packets to the outgoing PIC. Queuing policy and rewrites occur at this time on the
                             egress router.
                        11.	 The PIC passes the packets into the network.
                        The T Series chipset provides hardware-based forwarding performance and service delivery for IPv4 (unicast and
                        multicast), IPv6 (unicast and multicast), and MPLS, while also having the flexibility to support other protocols in
                        the future. The chipset was designed to provide the functionality needed for the development of single-chassis and
                        multichassis systems.








                         T Series Switch Fabric Architecture
                         The N+1 redundancy of the T Series switch fabric architecture has enabled in-service upgrades and multichassis
                         scaling since the platform’s inception. For example, just by swapping the power entry modules and the switch
                         interface boards (using the N+1 redundancy), service providers can upgrade a T640 Core Router to a T1600 Core
                         Router without disrupting service or changing customer-facing interfaces.
                         Furthermore, connecting a T1600 platform to a TX Matrix Plus system is also a smooth in-service process, involving
                         upgrading the switch fabric boards and the control boards on the T1600 and then connecting the T1600 to the central
                         switching element—the TX Matrix Plus.
                         The five planes of the T Series switch fabric (Figure 8) are implemented using four operationally independent, parallel
                         switch planes (labeled A through D) that are simultaneously active and an identical fifth plane (labeled E) that acts
                         as a hot spare to provide N+1 redundancy.

                                  [Figure: ingress and egress PFEs, each with PICs and fabric-facing switch interfaces (SIs),
                                  connected by five parallel switch planes built from Fabric ASICs: planes A through D active,
                                  plane E as backup]

                                                                  Figure 8. Five switch fabric planes for the T Series

                         The Fabric ASIC provides non-blocking connectivity among the PFEs that populate a single-chassis system. On a
                         single-chassis T Series system, each chassis contains a maximum of 8 FPCs, with each FPC supporting 2 PFEs (up to
                         50 Gbps each), for a total of 16 PFEs that communicate across the switch fabric.
                         The required input and output aggregate bandwidth of the switch fabric exceeds the I/O capabilities of a single Fabric
                         ASIC by a large margin. In a multichassis-capable T Series router, each PFE is connected to four active switch planes,
                         and each switch plane carries a portion of the required bandwidth. To guarantee that cells are evenly load-balanced
                         across the active switch planes, each PFE distributes cells equally across the switch planes on a cell-by-cell basis
                         rather than a packet-by-packet basis.
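
                         This cell-by-cell distribution can be pictured as a simple rotation across the four active planes. The sketch below
                         assumes strict round-robin (a simplification) and holds plane E out as the hot spare:

```python
from itertools import cycle

ACTIVE_PLANES = ["A", "B", "C", "D"]  # plane E is the N+1 hot spare
_plane_picker = cycle(ACTIVE_PLANES)

def spray(cells: list[bytes]) -> list[tuple[str, bytes]]:
    """Assign planes cell by cell, not packet by packet, so every plane
    carries an equal share of each packet's bandwidth."""
    return [(next(_plane_picker), cell) for cell in cells]

# Eight cells from one packet land two per plane:
assignments = spray([b"cell"] * 8)
for plane in ACTIVE_PLANES:
    assert sum(1 for p, _ in assignments if p == plane) == 2
```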
                         For further design considerations of the T Series switch fabric, see Appendix B: Switch Fabric Properties.

                         Switch Fabric Operation
                         The switch fabric implements a “fairness protocol” that, when congestion is occurring, ensures fairness across source
                         PFEs. The process of transmitting a data cell across the switch fabric involves a request and grant protocol. The
                         source PFE transmits a request across the switch fabric to the destination PFE. Each request for a given destination is
                         transmitted across a different switch plane in a round-robin order to distribute the load equally.
                         When the request is received by the destination PFE, the destination PFE transmits a grant to the source PFE across
                         the same switch plane on which the corresponding request was received.
                         When the grant is received by the source PFE, the source PFE transmits the data cell to the destination PFE across the
                         same switch plane on which the corresponding grant was received.
                         This approach provides both a flow control mechanism for transmitting cells into the fabric and a mechanism to detect
                         broken paths across the switch fabric.
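
                         The three-phase exchange, with all phases pinned to one plane, can be sketched as follows. The grant decision is
                         reduced to a single callback standing in for the destination PFE’s buffer and fairness state, and the plane choice is
                         randomized here, whereas the hardware round-robins requests per destination:

```python
import random

PLANES = ["A", "B", "C", "D"]

def transfer_cell(cell: bytes, dst_pfe: int, grant_ok) -> str | None:
    """One request/grant/data exchange across the fabric.

    grant_ok(dst_pfe) stands in for the destination PFE's decision to issue
    a grant. Returns the plane used, or None if no grant arrived."""
    plane = random.choice(PLANES)  # simplification of per-destination round-robin
    # 1. Request crosses to dst_pfe on `plane`.
    if not grant_ok(dst_pfe):
        return None  # withheld grant is flow control; the cell waits at the source
    # 2. Grant returns on the same plane.
    # 3. The data cell follows on the same plane.
    return plane

assert transfer_cell(b"\x00" * 64, 4, lambda pfe: True) in PLANES
```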
                         Appendix B covers switch fabric design properties. In particular, two key features of the T Series switch fabric
                         deserve special attention for their ability to handle modern multiservice applications such as video delivery: QoS
                         and multicast.
                         As shown in Figure 9, any crossbar endpoint (PFE) can theoretically become congested in the egress direction:
                         multiple ingress PFEs can send traffic to one egress PFE, thus creating the potential for congestion. This condition
                         can be handled in any number of ways—backpressure, rate limiting, fabric speedup, or a combination—but the key is
                         that it must not result in the loss of priority traffic.







                        Multicast traffic must always coexist with unicast traffic, which is why a T Series switch fabric treats both traffic types
                        equally: unicast and multicast traffic must each conform to their QoS profiles and are mapped into the fabric with two
                        priority levels (note PFE 0 in Figure 9).


                                 [Figure: PFE 0 receives both multicast and unicast input; regular fabric queues (per destination
                                 PFE) feed a switch fabric connecting PFEs 0 through 3 to PFEs 4 through N, with WAN-facing
                                 ports on both sides, under a single Routing Engine]

                                                    Figure 9. Multicast and unicast each utilizing fabric queues in T Series routers

                        Because the T Series Core Routers use a patented tree replication algorithm, they are able to distribute multicast
                        replication across available PFEs. Thus, in the very rare case of a congested egress PFE, T Series routers can easily
                        apply a backpressure notification from egress to ingress to limit the traffic and avoid drops. Competing designs that
                        replicate in the fabric itself and allow multicast traffic to bypass the fabric queues risk indiscriminate drops under
                        even minimal congestion.
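
                        The patented algorithm itself is not described in this paper, but the general idea of tree replication can be sketched:
                        instead of one PFE sending N copies into the fabric, each PFE that receives a copy replicates to a share of the
                        remaining group. The following is a hypothetical illustration of that load-spreading idea only:

```python
def replicate(sender: int, group: list[int]) -> list[tuple[int, int]]:
    """Return (from_pfe, to_pfe) replication edges. The sender hands the
    far half of the group to the first PFE of the near half, so replication
    work is spread across PFEs rather than concentrated at the source."""
    if not group:
        return []
    mid = (len(group) + 1) // 2
    near, far = group[:mid], group[mid:]
    edges = [(sender, near[0])]            # one copy into the fabric...
    edges += replicate(near[0], near[1:])  # ...and the receiver helps replicate
    edges += replicate(sender, far)
    return edges

# Source PFE 0 reaching seven egress PFEs: no PFE sends more than 3 copies.
print(replicate(0, [1, 2, 3, 4, 5, 6, 7]))
```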

                        Multichassis Switch Fabric (TX Matrix and TX Matrix Plus)
                        An increasingly popular approach to augmenting forwarding performance, boosting bandwidth density, and extending the
                        deployable lifetime of core routers is to use a multichassis system. Multichassis systems are designed with an expandable
                        switch fabric that allows service providers to grow systems in increments required by budget and traffic loads.
                        While most service providers typically follow the technology curve and upgrade as soon as the next generation of
                        routers comes along (mainly because of improved efficiencies such as higher capacity, better footprint, and lower
                        power), Juniper’s multichassis solution allows providers to grow node capacity either to bridge between generations, or
                        to build multi-terabit nodes.
                        Another common reason for a service provider to use a multichassis system is to prevent the proliferation of
                        interconnects. More network elements mean more interconnects that do not support revenue-generating traffic but
                        merely ensure a full mesh; a full mesh of N elements requires N(N-1)/2 interconnect bundles, so eight single-chassis
                        routers would need 28 of them. Interconnects are not a good way to scale—in addition to the obvious cost of the
                        ports, there is also the cost of driving additional chassis to handle overhead connections.
                        Additionally, multichassis systems can be combined with an independent control plane such as the JCS1200 to
                        create a virtualized core. The JCS1200 allows the creation of hardware-virtualized routers that can assume individual
                        networking functions, such as core or aggregation; service functions (such as private VPNs); mobile packet backbones;
                        or peering functions.

                        CLOS Topology
                        Figure 10 illustrates the topology of a typical three-stage CLOS fabric. The squares represent individual single-
                        stage crossbar switches. The topology consists of multiple rows of single-stage crossbar switches arranged into three
                        columns. The benefit of this topology is that it facilitates the construction of large, scalable, and non-blocking switch
                        fabrics using smaller switch fabrics as the fundamental building block.








                                                 [Figure: 16 crossbar switches per stage arranged in three columns (stages 1, 2, and 3);
                                                 each switch has 16 inputs and 16 outputs, with ports P1, P2, and P3 linking each stage
                                                 to the next]

                                                                          Figure 10. Three-stage CLOS topology

                         The topology of a CLOS network has each output port of the first stage connected to one of the crossbars in the
                         second stage. Observe that a two-stage switch fabric would provide any-to-any connectivity (any ingress port to the
                         fabric can communicate with any egress port from the fabric), but the path through the switch is blocking. The addition
                         of the third stage creates a non-blocking topology by creating a significant number of redundant paths. It is the
                         presence of the redundant paths that provides the non-blocking behavior of a CLOS fabric.
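
                          The classical sizing results for a three-stage Clos fabric make this precise. For first-stage switches with n inputs
                          each and m middle-stage switches, the well-known conditions are:

```latex
\[
m \geq n \;\Rightarrow\; \text{rearrangeably non-blocking}, \qquad
m \geq 2n - 1 \;\Rightarrow\; \text{strictly non-blocking}.
\]
```

                          Because cell spraying uses all of the parallel paths simultaneously, the weaker rearrangeably non-blocking condition
                          is the relevant one for cell traffic, as the next section explains.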

                         Juniper Networks Implementation
                         The Juniper Networks implementation of a CLOS fabric for a multichassis routing node is specifically designed to
                         support the following attributes:
                         •	 Non-blocking
                         •	 Fair bandwidth allocation
                          •	 Packet order preservation
                         •	 Low latency for high-priority traffic
                         •	 Distributed control
                         •	 Redundancy and graceful degradation
                          The existence of multiple parallel paths through the switching fabric to any egress port gives the switch fabric its
                          rearrangeably non-blocking behavior. Dividing packets into cells and then distributing them across the CLOS switch
                          fabric achieves the same effect as moving (rearranging) connections in a circuit-switched network, because all of the
                          paths are used simultaneously. Thus, a CLOS fabric proves to be non-blocking for packet or cell traffic. This design
                          allows PFEs to keep sending new traffic into the fabric; as long as there are no inherent conflicts, such as an
                          overcommitted output port, the switch remains non-blocking.
                          Finally, an important property of a CLOS fabric is that each crossbar switch operates completely independently of
                          the other crossbars in the fabric, so the fabric does not require a centralized scheduler to coordinate their actions.
                          This independence makes the switch fabric highly resilient to failures and allows the construction of extremely
                          large fabrics.

                         Multichassis System Fabric
                          The T Series router multichassis fabric is constructed from a crossbar designed by Juniper Networks, the Fabric
                          ASIC, which is the same Fabric ASIC used in single-chassis T Series platforms. Each Fabric ASIC is a general building
                          block that can perform stage 1, stage 2, or stage 3 functionality, depending on its placement in the CLOS topology.
                         The CLOS fabric provides the interconnection among the PFEs in a T Series multichassis routing node (Figure 7). The
                         switch fabric for a multichassis system is implemented using four operationally independent, but identical, switch
                         planes (labeled A through D) that are simultaneously active and an identical fifth plane (labeled E) that acts as a hot
                         spare to provide redundancy. Each plane contains a three-stage CLOS fabric built using Juniper Networks Fabric ASICs.
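                          A minimal sketch of the plane arrangement (assumed names and a deliberately simple list model): four planes carry
                          traffic while plane E stands by, and a failed plane is replaced by the spare.

```python
# Illustrative sketch only (assumed structures, not Juniper code): four
# simultaneously active planes with one identical hot spare that takes
# over when an active plane fails.

active = ["A", "B", "C", "D"]   # active switch planes
spares = ["E"]                  # hot-spare plane

def replace_failed_plane(failed: str) -> list[str]:
    """Substitute the hot spare for a failed active plane, if one exists."""
    if failed in active and spares:
        spare = spares.pop(0)
        return [spare if p == failed else p for p in active]
    return active

print(replace_failed_plane("C"))  # -> ['A', 'B', 'E', 'D']
```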




                                                                Figure 8. T Series multichassis switch fabric planes

                        The required input and output bandwidth of the switch fabric exceeds the I/O capabilities of a single plane. As with the
                        single-chassis system, each PFE is connected to four active switch planes, with each switch plane providing a portion
                        of the required bandwidth. To guarantee that cells are evenly load-balanced across the active switch planes, each PFE
                        distributes cells equally across the four switch planes on a cell-by-cell basis rather than a packet-by-packet basis.
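                          The following sketch (hypothetical cell and plane names; a model, not the Switch Interface ASIC logic) shows why
                          cell-by-cell distribution equalizes plane load: consecutive cells go to consecutive planes regardless of which packet
                          they came from, so no plane can be favored by large packets.

```python
# Illustrative sketch (hypothetical cell and plane names): cell-by-cell
# round-robin across the four active planes. Because distribution is at
# cell granularity, each plane receives the same load regardless of how
# cells group into packets.

from itertools import cycle

def spray_cells(cells, planes):
    """Assign consecutive cells to consecutive planes, round-robin."""
    assignment = {p: [] for p in planes}
    next_plane = cycle(planes)
    for cell in cells:
        assignment[next(next_plane)].append(cell)
    return assignment

cells = [f"cell{i}" for i in range(8)]
loads = spray_cells(cells, ["A", "B", "C", "D"])
print({plane: len(c) for plane, c in loads.items()})
# -> {'A': 2, 'B': 2, 'C': 2, 'D': 2}
```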


                          [Diagram: a TX Matrix (Plus) chassis with primary and secondary Routing Engines connects over matrix control and
                          data paths to multiple T Series line-card chassis (LCCs), each with a local and a redundant Routing Engine driving
                          its FPCs and PICs.]
                                                                 Figure 9. Multichassis system high-level overview

                         Line-card chassis are connected to switch-card chassis with fiber-optic cable. This connectivity uses the latest
                         vertical-cavity surface-emitting laser (VCSEL) technology to provide extremely high throughput, low power
                         consumption, and low bit-error rates.
                        An abstract view of the interconnections for a TX Matrix Plus multichassis system is shown in Figure 10.




                          [Diagram: the TX Matrix Plus switch-card chassis, with redundant Routing Engine/control boards (RE/CB 0 and
                          RE/CB 1), houses F2 SIBs that connect over the optical data plane to F13 SIBs in the T1600 line-card chassis;
                          plane 0 is on standby and planes 1 through 4 are active.]
                                                    Figure 10. Switch-card chassis to line-card chassis interconnections

                          In all T Series systems, a fifth data plane is kept on hot standby in the event that any of the four active data planes
                          fails; the same is true for a TX Matrix Plus system.
                          Redundancy is also provided in the center stage, since the system can function with any two of the center-stage
                          chassis. Because each center-stage chassis carries no more than two data planes, even if one chassis fails,
                          forwarding still proceeds at full throughput for large packets. The architecture supports up to five chassis.

                         Conclusion
                          The T Series demonstrates how Juniper has evolved its router architecture to achieve substantial technology
                          breakthroughs in packet forwarding performance, bandwidth density, IP service delivery, and system reliability. At the
                          same time, it is the integrity of the original design that has made these breakthroughs possible.
                          Not only do T Series platforms deliver industry-leading scalability, they do so while maintaining feature and software
                          continuity across all routing platforms. Whether deploying a single-chassis or multichassis system, service providers
                          can be assured that the T Series satisfies their networking requirements.

                         About Juniper Networks
                         Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud
                         providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics
                         of networking. The company serves customers and partners worldwide. Additional information can be found at
                         www.juniper.net.








                        Appendix A: References

                        Applications for an Independent Control Plane:
                        www.juniper.net/us/en/local/pdf/app-notes/3500134-en.pdf

                        Control Plane Scaling and Router Virtualization:
                        www.juniper.net/us/en/local/pdf/whitepapers/2000261-en.pdf

                        Efficient Scaling for Multiservice Networks:
                        www.juniper.net/us/en/local/pdf/whitepapers/2000207-en.pdf

                        Energy Efficiency for Network Equipment:
                        www.juniper.net/us/en/local/pdf/whitepapers/2000284-en.pdf

                        Network Operating System Evolution:
                        www.juniper.net/us/en/local/pdf/whitepapers/2000264-en.pdf

                        Virtualization in the Core of the Network:
                        www.juniper.net/us/en/local/pdf/whitepapers/2000299-en.pdf








                         Appendix B: Switch Fabric Properties
                          The T Series switch fabric, for both single-chassis and multichassis systems, is specifically designed to provide the
                          following attributes:
                         •	 Non-blocking
                         •	 Fair bandwidth allocation
                         •	 Maintains packet order
                         •	 Low latency for high-priority traffic
                         •	 Distributed control
                         •	 Redundancy and graceful degradation

                         Non-blocking
                         A switch fabric is considered non-blocking if two traffic flows directed to two different output ports never conflict. In
                         other words, the internal connections within the switch allow any ingress PFE to send its fair share of bandwidth to any
                         egress PFE simultaneously.


                                                                    Figure 11. A 4 x 4 non-blocking crossbar switch

                         Figure 11 illustrates the internal topology for a non-blocking, single-stage, 4-port crossbar switch. The challenge
                         when building a crossbar is that it requires n² communication paths internal to the switch. In this example, the
                         4-port crossbar requires a communication path connecting each input port to each output port, for a total of 16
                         communication paths. As the number of ports supported by the crossbar increases, the n² communication path
                         requirement becomes an implementation challenge.
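                          The quadratic growth is easy to see with a few port counts; the small sketch below simply evaluates n² for
                          increasing n.

```python
# The n-squared path requirement, evaluated for a few port counts. The
# 4-port case matches Figure 11; the path count grows quadratically.

for ports in (4, 16, 64, 256):
    print(f"{ports:>4} ports -> {ports * ports:>6} internal paths")
# 4 ports need 16 paths; 256 ports would already need 65,536.
```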

                         Fair Bandwidth Allocation
                         In a production network, it is impossible to control the pattern of ingress traffic so that an egress port of the crossbar
                         switch is never overcommitted. An egress port becomes overcommitted when there is more input traffic destined for
                         the egress port than the egress port can forward. In Figure 12, the aggregate amount of traffic that ingress ports 1, 2, and
                         3 forward to egress port 4 is greater than the capacity of egress port 4. Fair bandwidth allocation is concerned with the
                         techniques that a switch uses to share the bandwidth among competing ingress flows to an overcommitted egress port.

                                                                    Figure 12. An overcommitted egress switch port

                         The T Series router switch fabric provides fairness by ensuring that all ingress PFEs receive an equal amount of
                         bandwidth across the switch fabric when transmitting cells to an oversubscribed egress PFE. Providing this type of
                         fairness across all streams is hard to support because it is difficult to keep track of all users of the bandwidth to the
                          egress PFE. The challenge is that if there are n ports on the switch, then there are n² streams of traffic through the
                          switch. Since most switch architectures are not capable of keeping track of n² individual streams, they are forced
                         to aggregate traffic streams, thus making it impossible to be completely fair to each individual stream. The T Series
                         switch fabric can monitor the n² streams so that each stream receives its fair share of the available fabric bandwidth to
                         an oversubscribed egress PFE.
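                          As an illustration of what per-stream fairness means (a textbook max-min model, not the actual hardware arbitration
                          the T Series uses), the sketch below allocates an egress port's capacity across competing ingress streams: streams
                          demanding less than their fair share are fully satisfied, and the residual capacity is divided among the rest.

```python
# Hedged sketch of per-stream fair sharing to an overcommitted egress
# port (illustrative only). Max-min fairness: streams demanding less
# than the fair share get their full demand; the remainder is split
# equally among the streams that want more.

def max_min_share(demands: dict[str, float], capacity: float) -> dict[str, float]:
    """Allocate 'capacity' across streams by max-min fairness."""
    alloc = {}
    remaining = dict(demands)
    while remaining:
        fair = capacity / len(remaining)
        # Fully satisfy every stream that wants no more than the fair share.
        satisfied = {s: d for s, d in remaining.items() if d <= fair}
        if not satisfied:
            # Everyone wants more: split the remaining capacity equally.
            alloc.update({s: fair for s in remaining})
            break
        for s, d in satisfied.items():
            alloc[s] = d
            capacity -= d
            del remaining[s]
    return alloc

# Three ingress streams compete for a 10 Gbps egress port.
print(max_min_share({"pfe1": 8.0, "pfe2": 8.0, "pfe3": 1.0}, 10.0))
# -> {'pfe3': 1.0, 'pfe1': 4.5, 'pfe2': 4.5}
```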

                         Maintains Packet Order
                         The potential for misordering cells as they are transmitted across parallel switch planes to the egress PFE is eliminated
                         by the use of sequence numbers and a reorder buffer. In this design, the Switch Interface ASIC on the ingress PFE places a
                         sequence number into the cell header of each cell that it forwards into the fabric. On the egress PFE, the Switch Interface
                          ASIC buffers all cells that have sequence numbers greater than the next sequence number it expects to receive.
                         If a cell arrives out of order, the Switch Interface ASIC buffers the cells until the correct in-order cell arrives and the
                         reorder buffer is flushed. The reorder buffer and sequence number space are large enough to ensure that packets (and
                         the cells in a given packet) are not reordered as they traverse the switch fabric.
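                          The reordering logic can be sketched in a few lines (assumed class and field names; the real reordering is done in
                          the Switch Interface ASIC): the egress side releases cells strictly in sequence and holds any cell that arrives ahead
                          of the expected sequence number.

```python
# Minimal sketch (assumed names) of the sequence-number/reorder-buffer
# idea described above: release cells strictly in order, holding any
# cell that arrives early until its predecessors show up.

class ReorderBuffer:
    def __init__(self) -> None:
        self.expected = 0               # next in-order sequence number
        self.held: dict[int, str] = {}  # out-of-order cells awaiting release

    def receive(self, seq: int, cell: str) -> list[str]:
        """Accept one cell; return every cell now releasable in order."""
        self.held[seq] = cell
        released = []
        while self.expected in self.held:
            released.append(self.held.pop(self.expected))
            self.expected += 1
        return released

rb = ReorderBuffer()
print(rb.receive(1, "c1"))  # [] -- cell 1 is held, waiting for cell 0
print(rb.receive(0, "c0"))  # ['c0', 'c1'] -- the buffer flushes in order
```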

                         Low Latency for High-Priority Traffic
                          Some types of traffic, such as voice or video, have both latency and bandwidth requirements. The T Series switch fabric
                          is designed so that blocking in the fabric is extremely rare, because the capacity of the ports from the ingress PFE into
                          the fabric is considerably larger than that of the network ports into the ingress PFE.
                         In the rare case that congestion does occur, each ingress PFE is allocated priority queues into the switch fabric. As
                         previously discussed, the switch fabric fairly allocates bandwidth among all of the ingress PFEs that are competing to
                         transmit cells to an overcommitted egress PFE.
                          The use of priority queues from the ingress PFE into the switch fabric provides two important benefits:
                          •	 The latency for high-priority traffic is always low because the fabric itself rarely becomes congested.
                          •	 The CoS intelligence required to perform admission control into the fabric is implemented in the PFEs. This design allows
                             the switch fabric to remain relatively simple because CoS is implemented not inside the fabric but at its edges. It also
                             scales better than a centralized design because it is distributed and grows with the number of FPCs.

                         Distributed Control
                          A T Series routing platform does not have a centralized controller connected to all of the components in the switch
                          fabric. Hence, if any component within the fabric fails, the components around it continue to operate. Additionally,
                          no centralized control channel needs to be operational for the switch fabric to function.

                         Redundancy and Graceful Degradation
                          Each Switch Interface ASIC monitors a request-grant mechanism. If the ingress PFE is waiting on a grant for an
                          outstanding request but the grant does not return within a reasonable amount of time, or a data cell is lost, then the
                          ingress PFE concludes that the destination PFE is unreachable on the plane that was used to send the request.
                         If a switch plane fails, only the cells that are currently in transit across the switch plane are lost because buffering does
                         not occur within a switch plane and the request-grant mechanism ensures that cells are never transmitted across a
                         failed switch plane.
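                          The bookkeeping behind this behavior can be sketched as follows (the timeout value, data structures, and function
                          names are all assumed for illustration): a request records its send time, and if no grant returns before the timeout,
                          the plane/destination pair is marked unreachable so no further cells are sent across the failed plane.

```python
# Hedged sketch of request-grant bookkeeping (all names and the timeout
# value are assumed): requests record their send time; a grant that does
# not arrive in time marks the destination unreachable on that plane.

import time

GRANT_TIMEOUT = 0.001   # seconds; illustrative value only

pending = {}            # (plane, dest_pfe) -> time the request was sent
unreachable = set()     # (plane, dest_pfe) pairs to avoid

def send_request(plane: str, dest: int) -> None:
    pending[(plane, dest)] = time.monotonic()

def grant_received(plane: str, dest: int) -> None:
    pending.pop((plane, dest), None)

def check_timeouts() -> None:
    now = time.monotonic()
    for key, sent in list(pending.items()):
        if now - sent > GRANT_TIMEOUT:
            unreachable.add(key)   # stop using this plane for this destination
            del pending[key]

send_request("A", 7)
time.sleep(0.002)        # no grant comes back in time...
check_timeouts()
print(unreachable)       # -> {('A', 7)}
```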
                         The request-grant mechanism allows a failed component within a switch plane to be removed from service by diverting
                         traffic around the faulty component, or all traffic using the faulty plane can be switched to a redundant plane. If there
                         are a significant number of faults on a given switch plane or the plane must be swapped out for maintenance, the
                         chassis manager coordinates moving traffic to the redundant switch plane. Each step in the migration of traffic to
                         the redundant plane involves moving only a small fraction of overall traffic. This design allows the system to remain
                         operational with no significant loss in fabric performance.




Corporate and Sales Headquarters: Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA.
Phone: 888.JUNIPER (888.586.4737) or 408.745.2000. Fax: 408.745.2100. www.juniper.net

APAC Headquarters: Juniper Networks (Hong Kong), 26/F, Cityplaza One, 1111 King’s Road, Taikoo Shing, Hong Kong.
Phone: 852.2332.3636. Fax: 852.2574.7803

EMEA Headquarters: Juniper Networks Ireland, Airside Business Park, Swords, County Dublin, Ireland.
Phone: 35.31.8903.600. EMEA Sales: 00800.4586.4737. Fax: 35.31.8903.601

To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or
authorized reseller.

Copyright 2012 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos,
NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other
countries. All other trademarks, service marks, registered marks, or registered service marks are the property of
their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper
Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

2000302-003-EN          Mar 2012                       Printed on recycled paper



16	                                                                                                                                     Copyright © 2012, Juniper Networks, Inc.

More Related Content

What's hot

TechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
TechBook: IMS on z/OS Using EMC Symmetrix Storage SystemsTechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
TechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
EMC
 
Brocade200 E Metro Cluster Design
Brocade200 E Metro Cluster DesignBrocade200 E Metro Cluster Design
Brocade200 E Metro Cluster Designbsdux
 
O&m manual(mux 2200 e)v1.1
O&m manual(mux 2200 e)v1.1O&m manual(mux 2200 e)v1.1
O&m manual(mux 2200 e)v1.1
Van Anh Lizaris
 
Tinyos programming
Tinyos programmingTinyos programming
Tinyos programming
ssuserf04f61
 
Mat power manual
Mat power manualMat power manual
Mat power manual
Chiru Prakash
 
Ibm power vc version 1.2.3 introduction and configuration
Ibm power vc version 1.2.3 introduction and configurationIbm power vc version 1.2.3 introduction and configuration
Ibm power vc version 1.2.3 introduction and configuration
gagbada
 
Db2 virtualization
Db2 virtualizationDb2 virtualization
Db2 virtualization
bupbechanhgmail
 
Cisco routers for the small business a practical guide for it professionals...
Cisco routers for the small business   a practical guide for it professionals...Cisco routers for the small business   a practical guide for it professionals...
Cisco routers for the small business a practical guide for it professionals...Mark Smith
 
Team Omni L2 Requirements Revised
Team Omni L2 Requirements RevisedTeam Omni L2 Requirements Revised
Team Omni L2 Requirements RevisedAndrew Daws
 
TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems
TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems  TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems
TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems
EMC
 
IBM Power 750 and 760 Technical Overview and Introduction
IBM Power 750 and 760 Technical Overview and IntroductionIBM Power 750 and 760 Technical Overview and Introduction
IBM Power 750 and 760 Technical Overview and Introduction
IBM India Smarter Computing
 
IBM Power 770 and 780 Technical Overview and Introduction
IBM Power 770 and 780 Technical Overview and IntroductionIBM Power 770 and 780 Technical Overview and Introduction
IBM Power 770 and 780 Technical Overview and Introduction
IBM India Smarter Computing
 
2 4routing
2 4routing2 4routing
2 4routing
Rupesh Basnet
 
IBM Flex System Interoperability Guide
IBM Flex System Interoperability GuideIBM Flex System Interoperability Guide
IBM Flex System Interoperability Guide
IBM India Smarter Computing
 
Book VMWARE VMware ESXServer Advanced Technical Design Guide
Book VMWARE VMware ESXServer  Advanced Technical Design Guide Book VMWARE VMware ESXServer  Advanced Technical Design Guide
Book VMWARE VMware ESXServer Advanced Technical Design Guide
aktivfinger
 
Electronics en engineering-basic-vocational-knowledge
Electronics en engineering-basic-vocational-knowledgeElectronics en engineering-basic-vocational-knowledge
Electronics en engineering-basic-vocational-knowledgesandeep patil
 
zend framework 2
zend framework 2zend framework 2
zend framework 2
Sridhar Mantha
 
Notes of 8051 Micro Controller for BCA, MCA, MSC (CS), MSC (IT) & AMIE IEI- b...
Notes of 8051 Micro Controller for BCA, MCA, MSC (CS), MSC (IT) & AMIE IEI- b...Notes of 8051 Micro Controller for BCA, MCA, MSC (CS), MSC (IT) & AMIE IEI- b...
Notes of 8051 Micro Controller for BCA, MCA, MSC (CS), MSC (IT) & AMIE IEI- b...
ssuserd6b1fd
 
VNX Snapshots
VNX Snapshots VNX Snapshots
VNX Snapshots
EMC
 

What's hot (20)

TechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
TechBook: IMS on z/OS Using EMC Symmetrix Storage SystemsTechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
TechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
 
Brocade200 E Metro Cluster Design
Brocade200 E Metro Cluster DesignBrocade200 E Metro Cluster Design
Brocade200 E Metro Cluster Design
 
Rhel Tuningand Optimizationfor Oracle V11
Rhel Tuningand Optimizationfor Oracle V11Rhel Tuningand Optimizationfor Oracle V11
Rhel Tuningand Optimizationfor Oracle V11
 
O&m manual(mux 2200 e)v1.1
O&m manual(mux 2200 e)v1.1O&m manual(mux 2200 e)v1.1
O&m manual(mux 2200 e)v1.1
 
Tinyos programming
Tinyos programmingTinyos programming
Tinyos programming
 
Mat power manual
Mat power manualMat power manual
Mat power manual
 
Ibm power vc version 1.2.3 introduction and configuration
Ibm power vc version 1.2.3 introduction and configurationIbm power vc version 1.2.3 introduction and configuration
Ibm power vc version 1.2.3 introduction and configuration
 
Db2 virtualization
Db2 virtualizationDb2 virtualization
Db2 virtualization
 
Cisco routers for the small business a practical guide for it professionals...
Cisco routers for the small business   a practical guide for it professionals...Cisco routers for the small business   a practical guide for it professionals...
Cisco routers for the small business a practical guide for it professionals...
 
Team Omni L2 Requirements Revised
Team Omni L2 Requirements RevisedTeam Omni L2 Requirements Revised
Team Omni L2 Requirements Revised
 
TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems
TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems  TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems
TechBook: DB2 for z/OS Using EMC Symmetrix Storage Systems
 
IBM Power 750 and 760 Technical Overview and Introduction
IBM Power 750 and 760 Technical Overview and IntroductionIBM Power 750 and 760 Technical Overview and Introduction
IBM Power 750 and 760 Technical Overview and Introduction
 
IBM Power 770 and 780 Technical Overview and Introduction
IBM Power 770 and 780 Technical Overview and IntroductionIBM Power 770 and 780 Technical Overview and Introduction
IBM Power 770 and 780 Technical Overview and Introduction
 
2 4routing
2 4routing2 4routing
2 4routing
 
IBM Flex System Interoperability Guide
IBM Flex System Interoperability GuideIBM Flex System Interoperability Guide
IBM Flex System Interoperability Guide
 
Book VMWARE VMware ESXServer Advanced Technical Design Guide
Book VMWARE VMware ESXServer  Advanced Technical Design Guide Book VMWARE VMware ESXServer  Advanced Technical Design Guide
Book VMWARE VMware ESXServer Advanced Technical Design Guide
 
Electronics en engineering-basic-vocational-knowledge
Electronics en engineering-basic-vocational-knowledgeElectronics en engineering-basic-vocational-knowledge
Electronics en engineering-basic-vocational-knowledge
 
zend framework 2
zend framework 2zend framework 2
zend framework 2
 
Notes of 8051 Micro Controller for BCA, MCA, MSC (CS), MSC (IT) & AMIE IEI- b...
Notes of 8051 Micro Controller for BCA, MCA, MSC (CS), MSC (IT) & AMIE IEI- b...Notes of 8051 Micro Controller for BCA, MCA, MSC (CS), MSC (IT) & AMIE IEI- b...
Notes of 8051 Micro Controller for BCA, MCA, MSC (CS), MSC (IT) & AMIE IEI- b...
 
VNX Snapshots
VNX Snapshots VNX Snapshots
VNX Snapshots
 

Similar to T Series Core Router Architecture Review (Whitepaper)

SEAMLESS MPLS
SEAMLESS MPLSSEAMLESS MPLS
SEAMLESS MPLS
Johnson Liu
 
2000402 en juniper good
2000402 en juniper good2000402 en juniper good
2000402 en juniper good
Achint Saraf
 
Emerging Multicast VPN Applications
Emerging  Multicast  VPN  ApplicationsEmerging  Multicast  VPN  Applications
Emerging Multicast VPN Applications
Johnson Liu
 
FCOE Convergence at the Access Layer with Juniper Networks QFX3500 Switch
FCOE Convergence at the Access Layer with Juniper Networks QFX3500 SwitchFCOE Convergence at the Access Layer with Juniper Networks QFX3500 Switch
FCOE Convergence at the Access Layer with Juniper Networks QFX3500 Switch
Juniper Networks
 
Presentation data center deployment guide
Presentation   data center deployment guidePresentation   data center deployment guide
Presentation data center deployment guide
xKinAnx
 
Junipe 1
Junipe 1Junipe 1
Junipe 1Ugursuz
 
IBM eX5 Portfolio Overview IBM System x3850 X5, x3950 X5, x3690 X5, and Blade...
IBM eX5 Portfolio Overview IBM System x3850 X5, x3950 X5, x3690 X5, and Blade...IBM eX5 Portfolio Overview IBM System x3850 X5, x3950 X5, x3690 X5, and Blade...
IBM eX5 Portfolio Overview IBM System x3850 X5, x3950 X5, x3690 X5, and Blade...
IBM India Smarter Computing
 
trex_astf.pdf
trex_astf.pdftrex_astf.pdf
trex_astf.pdf
ssuserf52607
 
Ngen mvpn with pim implementation guide 8010027-002-en
Ngen mvpn with pim implementation guide   8010027-002-enNgen mvpn with pim implementation guide   8010027-002-en
Ngen mvpn with pim implementation guide 8010027-002-en
Ngoc Nguyen Dang
 
Gdfs sg246374
Gdfs sg246374Gdfs sg246374
Gdfs sg246374Accenture
 
Win plc engine-en
Win plc engine-enWin plc engine-en
Win plc engine-en
dreamtech2
 
Integrating SDN into the Data Center
Integrating SDN into the Data CenterIntegrating SDN into the Data Center
Integrating SDN into the Data Center
Juniper Networks
 
Spi research paper
Spi research paperSpi research paper
Spi research paper
QuyenVu47
 
Hypermedia Telular manual-ver5
Hypermedia Telular manual-ver5Hypermedia Telular manual-ver5
Hypermedia Telular manual-ver5
Victor Jaramillo
 
Wireless m-bus-quick-start-guide
Wireless m-bus-quick-start-guideWireless m-bus-quick-start-guide
Wireless m-bus-quick-start-guide
봉조 김
 
Ibm virtualization engine ts7500 planning, implementation, and usage guide sg...
Ibm virtualization engine ts7500 planning, implementation, and usage guide sg...Ibm virtualization engine ts7500 planning, implementation, and usage guide sg...
Ibm virtualization engine ts7500 planning, implementation, and usage guide sg...Banking at Ho Chi Minh city
 
software-eng.pdf
software-eng.pdfsoftware-eng.pdf
software-eng.pdf
fellahi1
 
Machine to Machine White Paper
Machine to Machine White PaperMachine to Machine White Paper
Machine to Machine White Paper
Josep Pocalles
 
IBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data CenterIBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data Center
IBM India Smarter Computing
 

Similar to T Series Core Router Architecture Review (Whitepaper) (20)

SEAMLESS MPLS
SEAMLESS MPLSSEAMLESS MPLS
SEAMLESS MPLS
 
2000402 en juniper good
2000402 en juniper good2000402 en juniper good
2000402 en juniper good
 
Emerging Multicast VPN Applications
Emerging  Multicast  VPN  ApplicationsEmerging  Multicast  VPN  Applications
Emerging Multicast VPN Applications
 
FCOE Convergence at the Access Layer with Juniper Networks QFX3500 Switch
FCOE Convergence at the Access Layer with Juniper Networks QFX3500 SwitchFCOE Convergence at the Access Layer with Juniper Networks QFX3500 Switch
FCOE Convergence at the Access Layer with Juniper Networks QFX3500 Switch
 
Presentation data center deployment guide
Presentation   data center deployment guidePresentation   data center deployment guide
Presentation data center deployment guide
 
Junipe 1
Junipe 1Junipe 1
Junipe 1
 
IBM eX5 Portfolio Overview IBM System x3850 X5, x3950 X5, x3690 X5, and Blade...
IBM eX5 Portfolio Overview IBM System x3850 X5, x3950 X5, x3690 X5, and Blade...IBM eX5 Portfolio Overview IBM System x3850 X5, x3950 X5, x3690 X5, and Blade...
IBM eX5 Portfolio Overview IBM System x3850 X5, x3950 X5, x3690 X5, and Blade...
 
trex_astf.pdf
trex_astf.pdftrex_astf.pdf
trex_astf.pdf
 
Ngen mvpn with pim implementation guide 8010027-002-en
Ngen mvpn with pim implementation guide   8010027-002-enNgen mvpn with pim implementation guide   8010027-002-en
Ngen mvpn with pim implementation guide 8010027-002-en
 
Gdfs sg246374
Gdfs sg246374Gdfs sg246374
Gdfs sg246374
 
Win plc engine-en
Win plc engine-enWin plc engine-en
Win plc engine-en
 
Integrating SDN into the Data Center
Integrating SDN into the Data CenterIntegrating SDN into the Data Center
Integrating SDN into the Data Center
 
Spi research paper
Spi research paperSpi research paper
Spi research paper
 
Air cam ug
Air cam ugAir cam ug
Air cam ug
 
Hypermedia Telular manual-ver5
Hypermedia Telular manual-ver5Hypermedia Telular manual-ver5
Hypermedia Telular manual-ver5
 
Wireless m-bus-quick-start-guide
Wireless m-bus-quick-start-guideWireless m-bus-quick-start-guide
Wireless m-bus-quick-start-guide
 
Ibm virtualization engine ts7500 planning, implementation, and usage guide sg...
Ibm virtualization engine ts7500 planning, implementation, and usage guide sg...Ibm virtualization engine ts7500 planning, implementation, and usage guide sg...
Ibm virtualization engine ts7500 planning, implementation, and usage guide sg...
 
software-eng.pdf
software-eng.pdfsoftware-eng.pdf
software-eng.pdf
 
Machine to Machine White Paper
Machine to Machine White PaperMachine to Machine White Paper
Machine to Machine White Paper
 
IBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data CenterIBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data Center
 

More from Juniper Networks

Why Juniper, Driven by Mist AI, Leads the Market
 Why Juniper, Driven by Mist AI, Leads the Market Why Juniper, Driven by Mist AI, Leads the Market
Why Juniper, Driven by Mist AI, Leads the Market
Juniper Networks
 
Experience the AI-Driven Enterprise
Experience the AI-Driven EnterpriseExperience the AI-Driven Enterprise
Experience the AI-Driven Enterprise
Juniper Networks
 
How AI Simplifies Troubleshooting Your WAN
How AI Simplifies Troubleshooting Your WANHow AI Simplifies Troubleshooting Your WAN
How AI Simplifies Troubleshooting Your WAN
Juniper Networks
 
Real AI. Real Results. Mist AI Customer Testimonials.
Real AI. Real Results. Mist AI Customer Testimonials.Real AI. Real Results. Mist AI Customer Testimonials.
Real AI. Real Results. Mist AI Customer Testimonials.
Juniper Networks
 
SD-WAN, Meet MARVIS.
SD-WAN, Meet MARVIS.SD-WAN, Meet MARVIS.
SD-WAN, Meet MARVIS.
Juniper Networks
 
Are you able to deliver reliable experiences for connected devices
Are you able to deliver reliable experiences for connected devicesAre you able to deliver reliable experiences for connected devices
Are you able to deliver reliable experiences for connected devices
Juniper Networks
 
Stop Doing These 5 Things with Your SD-WAN
Stop Doing These 5 Things with Your SD-WANStop Doing These 5 Things with Your SD-WAN
Stop Doing These 5 Things with Your SD-WAN
Juniper Networks
 
Securing IoT at Scale Requires a Holistic Approach
Securing IoT at Scale Requires a Holistic ApproachSecuring IoT at Scale Requires a Holistic Approach
Securing IoT at Scale Requires a Holistic Approach
Juniper Networks
 
Smart Solutions for Smart Communities: What's Next & Who's Responsible?
Smart Solutions for Smart Communities: What's Next & Who's Responsible?Smart Solutions for Smart Communities: What's Next & Who's Responsible?
Smart Solutions for Smart Communities: What's Next & Who's Responsible?
Juniper Networks
 
What's Your IT Alter Ego?
What's Your IT Alter Ego?What's Your IT Alter Ego?
What's Your IT Alter Ego?
Juniper Networks
 
Are You Ready for Digital Cohesion?
Are You Ready for Digital Cohesion?Are You Ready for Digital Cohesion?
Are You Ready for Digital Cohesion?
Juniper Networks
 
Juniper vSRX - Fast Performance, Low TCO
Juniper vSRX - Fast Performance, Low TCOJuniper vSRX - Fast Performance, Low TCO
Juniper vSRX - Fast Performance, Low TCO
Juniper Networks
 
SDN and NFV: Transforming the Service Provider Organization
SDN and NFV: Transforming the Service Provider OrganizationSDN and NFV: Transforming the Service Provider Organization
SDN and NFV: Transforming the Service Provider Organization
Juniper Networks
 
Navigating the Uncertain World Facing Service Providers - Juniper's Perspective
Navigating the Uncertain World Facing Service Providers - Juniper's PerspectiveNavigating the Uncertain World Facing Service Providers - Juniper's Perspective
Navigating the Uncertain World Facing Service Providers - Juniper's Perspective
Juniper Networks
 
vSRX Buyer’s Guide infographic - Juniper Networks
vSRX Buyer’s Guide infographic - Juniper Networks vSRX Buyer’s Guide infographic - Juniper Networks
vSRX Buyer’s Guide infographic - Juniper Networks
Juniper Networks
 
NFV Solutions for the Telco Cloud
NFV Solutions for the Telco Cloud NFV Solutions for the Telco Cloud
NFV Solutions for the Telco Cloud
Juniper Networks
 
Juniper SRX5800 Infographic
Juniper SRX5800 InfographicJuniper SRX5800 Infographic
Juniper SRX5800 Infographic
Juniper Networks
 
Infographic: 90% MetaFabric Customer Satisfaction
Infographic: 90% MetaFabric Customer SatisfactionInfographic: 90% MetaFabric Customer Satisfaction
Infographic: 90% MetaFabric Customer Satisfaction
Juniper Networks
 
Infographic: Whack Hackers Lightning Fast
Infographic: Whack Hackers Lightning FastInfographic: Whack Hackers Lightning Fast
Infographic: Whack Hackers Lightning Fast
Juniper Networks
 
High performance data center computing using manageable distributed computing
High performance data center computing using manageable distributed computingHigh performance data center computing using manageable distributed computing
High performance data center computing using manageable distributed computing
Juniper Networks
 

More from Juniper Networks (20)

Why Juniper, Driven by Mist AI, Leads the Market
 Why Juniper, Driven by Mist AI, Leads the Market Why Juniper, Driven by Mist AI, Leads the Market
Why Juniper, Driven by Mist AI, Leads the Market
 
Experience the AI-Driven Enterprise
Experience the AI-Driven EnterpriseExperience the AI-Driven Enterprise
Experience the AI-Driven Enterprise
 
How AI Simplifies Troubleshooting Your WAN
How AI Simplifies Troubleshooting Your WANHow AI Simplifies Troubleshooting Your WAN
How AI Simplifies Troubleshooting Your WAN
 
Real AI. Real Results. Mist AI Customer Testimonials.
Real AI. Real Results. Mist AI Customer Testimonials.Real AI. Real Results. Mist AI Customer Testimonials.
Real AI. Real Results. Mist AI Customer Testimonials.
 
SD-WAN, Meet MARVIS.
SD-WAN, Meet MARVIS.SD-WAN, Meet MARVIS.
SD-WAN, Meet MARVIS.
 
Are you able to deliver reliable experiences for connected devices
Are you able to deliver reliable experiences for connected devicesAre you able to deliver reliable experiences for connected devices
Are you able to deliver reliable experiences for connected devices
 
Stop Doing These 5 Things with Your SD-WAN
Stop Doing These 5 Things with Your SD-WANStop Doing These 5 Things with Your SD-WAN
Stop Doing These 5 Things with Your SD-WAN
 
Securing IoT at Scale Requires a Holistic Approach
Securing IoT at Scale Requires a Holistic ApproachSecuring IoT at Scale Requires a Holistic Approach
Securing IoT at Scale Requires a Holistic Approach
 
Smart Solutions for Smart Communities: What's Next & Who's Responsible?
Smart Solutions for Smart Communities: What's Next & Who's Responsible?Smart Solutions for Smart Communities: What's Next & Who's Responsible?
Smart Solutions for Smart Communities: What's Next & Who's Responsible?
 
What's Your IT Alter Ego?
What's Your IT Alter Ego?What's Your IT Alter Ego?
What's Your IT Alter Ego?
 
Are You Ready for Digital Cohesion?
Are You Ready for Digital Cohesion?Are You Ready for Digital Cohesion?
Are You Ready for Digital Cohesion?
 
Juniper vSRX - Fast Performance, Low TCO
Juniper vSRX - Fast Performance, Low TCOJuniper vSRX - Fast Performance, Low TCO
Juniper vSRX - Fast Performance, Low TCO
 
SDN and NFV: Transforming the Service Provider Organization
SDN and NFV: Transforming the Service Provider OrganizationSDN and NFV: Transforming the Service Provider Organization
SDN and NFV: Transforming the Service Provider Organization
 
Navigating the Uncertain World Facing Service Providers - Juniper's Perspective
Navigating the Uncertain World Facing Service Providers - Juniper's PerspectiveNavigating the Uncertain World Facing Service Providers - Juniper's Perspective
Navigating the Uncertain World Facing Service Providers - Juniper's Perspective
 
vSRX Buyer’s Guide infographic - Juniper Networks
vSRX Buyer’s Guide infographic - Juniper Networks vSRX Buyer’s Guide infographic - Juniper Networks
vSRX Buyer’s Guide infographic - Juniper Networks
 
NFV Solutions for the Telco Cloud
NFV Solutions for the Telco Cloud NFV Solutions for the Telco Cloud
NFV Solutions for the Telco Cloud
 
Juniper SRX5800 Infographic
Juniper SRX5800 InfographicJuniper SRX5800 Infographic
Juniper SRX5800 Infographic
 
Infographic: 90% MetaFabric Customer Satisfaction
Infographic: 90% MetaFabric Customer SatisfactionInfographic: 90% MetaFabric Customer Satisfaction
Infographic: 90% MetaFabric Customer Satisfaction
 
Infographic: Whack Hackers Lightning Fast
Infographic: Whack Hackers Lightning FastInfographic: Whack Hackers Lightning Fast
Infographic: Whack Hackers Lightning Fast
 
High performance data center computing using manageable distributed computing
High performance data center computing using manageable distributed computingHigh performance data center computing using manageable distributed computing
High performance data center computing using manageable distributed computing
 

Recently uploaded

FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
CatarinaPereira64715
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Inflectra
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Product School
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Ramesh Iyer
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
 
Search and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical FuturesSearch and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical Futures
Bhaskar Mitra
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
RTTS
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
Product School
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Product School
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 

Recently uploaded (20)

FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
Search and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical FuturesSearch and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical Futures
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 

T Series Core Router Architecture Review (Whitepaper)

  • 1. White Paper T Series Core Router Architecture Overview High-End Architecture for Packet Forwarding and Switching Copyright © 2012, Juniper Networks, Inc. 1
  • 2. White Paper - T Series Core Router Architecture Overview Table of Contents Executive Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Nine Years of Industry-Leading Core Routing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Forwarding Plane Design Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Architectural Framework and Components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 PIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 FPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 Switch Fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Routing Engine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 T Series Forwarding Path Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Data Flow Through the 3D FPC on the T4000 Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Data Flow Through the Enhanced Scaling (ES) FPCs on T Series Routers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 T Series Switch Fabric Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Switch Fabric Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Multichassis Switch Fabric (TX Matrix and TX Matrix Plus) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 CLOS Topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Juniper Networks Implementation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Multichassis System Fabric. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
Executive Summary

Juniper Networks® T Series Core Routers have been in production since 2002, with the introduction of the Juniper Networks T640 Core Router. Since that time, T Series routers have evolved to maintain an unequivocal industry lead in capacity (slot, chassis, and system) and operational efficiencies in power and usability. Maintaining this standard has been possible in part due to design decisions made with the very first T Series system.

Nine Years of Industry-Leading Core Routing

In April 2002, Juniper Networks began shipping the first T Series routing platform: the T640 Core Router. The T640 is a carrier-class, multichassis-capable core routing platform that supports high-density 10 Gbps (OC-192c/STM-64 and 10-Gigabit Ethernet) to 40 Gbps (OC-768c/STM-256 and 40-Gigabit Ethernet) interfaces. In July 2002, Juniper Networks began shipping the second T Series platform: the Juniper Networks T320 Core Router. The T320 is a carrier-class, single-chassis core router that has a smaller form factor than the T640 and a lower entry price point.

In 2004, Juniper offered the first multichassis core routing system with the Juniper Networks TX Matrix, supporting 3.2 Tbps in a four-chassis system. Then in 2007, Juniper announced and released the industry's first 100 Gbps/slot system in the Juniper Networks T1600 Core Router, a multichassis-capable routing platform that was designed, taking advantage of the original switch plane architecture of the T640, to be upgradeable in service from the T640. In 2009, Juniper introduced the Juniper Networks TX Matrix Plus, a central switching and routing element that connects up to four T1600 routing chassis into a single routing entity: a 6.4 Tbps system. In 2011, Juniper introduced the 350 Gbps/slot-capable Juniper Networks T4000 Core Router, which significantly extended T Series system capacity and protected customers' investment in the T Series by allowing existing T640 and T1600 systems to be upgraded to the T4000.

All T Series platforms use the Juniper Networks Junos® operating system and T Series ASICs to provide the ease of use, performance, reliability, and feature richness that service providers have come to expect from all Juniper Networks products. The T Series Core Routers (T320, T640, T1600, T4000, TX Matrix, and TX Matrix Plus) provide the ingredients for high-end and core networks of the future, especially when controlled by the Juniper Networks JCS1200 Control System. Figure 1 illustrates the industry-leading scaling characteristics of the T Series on the forwarding and control planes.

Figure 1. T Series scaling by slot and chassis (including the JCS1200)

This paper provides a technical introduction to the architecture of T Series Core Routers, both single-chassis and multichassis systems. It describes the design objectives, system architecture, packet forwarding architecture, single-chassis switch fabric architecture, and multichassis switch fabric architecture.
For an explanation of the core virtualization capabilities available with the integration of the JCS1200, see the References section in Appendix A, particularly the paper entitled Virtualization in the Core of the Network at www.juniper.net/us/en/local/pdf/whitepapers/2000299-en.pdf.
Forwarding Plane Design Objectives

Juniper's architectural philosophy is based on the premise that multiservice networks can be viewed as three-dimensional systems containing forwarding, control, and service dimensions:

• Forwarding ("how you move the bits")
• Control ("how you direct the bits")
• Software/service ("how you monetize the bits")

For a network to be a converged, scalable, packet core infrastructure, it must scale in all of these dimensions (Figure 2).

Figure 2. Scalable performance in multiple dimensions

In the forwarding plane, traffic growth is the key driver of the core router market. As the global economy becomes increasingly networked and dependent upon the communications infrastructure, traffic rates continue to balloon (growing 70 to 80 percent a year by most estimates), and high-density core routing remains critical. Furthermore, the importance of the control plane cannot be overlooked. A router's control plane must scale to accommodate ever-growing routing and forwarding tables, service tunnels, virtual networks, and other information related to network configuration and management. Finally, the service plane becomes critical when considering the requirements of an increasingly disparate and global marketplace. These changing market dynamics create pressure for greater network innovation and a much deeper integration between applications and the network.

T Series routing platforms are designed with this philosophy in mind at all times; the main focus of this paper is the forwarding plane of the router. The T Series routing platforms were developed to support eight key design objectives:

• Packet forwarding performance
• Bandwidth density
• IP service delivery
• Multichassis capability
• High availability (HA)
• Single software image
• Security
• Power efficiency

Juniper Networks leads the industry in all of these categories, as the white papers in the References section illustrate.
Architectural Framework and Components

Figure 3 illustrates the high-level architecture of a T Series routing platform. A T Series platform uses a distributed architecture with packet buffering at the ingress Packet Forwarding Engine (PFE) before the switch fabric, as well as packet buffering at the egress PFE before the output port. As a packet enters a T Series platform from the network, the ingress PFE segments the packet into cells, the cells are written to ingress memory, a route lookup is performed, and the cells representing the packet are read from ingress memory and sent across the switch fabric to the egress PFE. When the cells arrive at the egress PFE, they are written to egress memory; an egress lookup is performed; and the cells representing the packet are read out of egress memory, reassembled into a packet, and transmitted on the output interface to the network.

Figure 3. T Series routing platform architecture

Drilling more deeply into the components, a T Series system consists of four major components: PICs, Flexible PIC Concentrators (FPCs), the switch fabric, and one or more Routing Engines (Figure 4).

Figure 4. T Series router components

These components are discussed in more detail in the following sections.

PIC

The PICs connect a T Series platform to the network and perform both physical and link-layer packet processing. They perform all of the functions that are required for the routing platform to receive packets from the network and transmit packets to the network. Many PICs, such as the Juniper IQ PICs, also perform packet processing.

FPC

Each FPC can contain one or more PFEs. For instance, a T4000 240 Gbps slot supports two 120 Gbps PFEs. Logically, each PFE can be thought of as a highly integrated packet-processing engine built from custom ASICs developed by Juniper Networks. These ASICs enable the router to achieve data forwarding rates that match fiber-optic capacity. Such high forwarding rates are achieved by distributing packet-processing tasks across this set of highly integrated ASICs.

When a packet arrives from the network, the ingress PFE extracts the packet header, performs a routing table lookup and any packet filtering operations, and determines the egress PFE connected to the egress PIC. The ingress PFE forwards the packet across the switch fabric to the egress PFE. The egress PFE performs a second routing table lookup to determine the output PIC, and it manages any egress class-of-service (CoS) and quality-of-service (QoS) specifications. Finally, the packet is forwarded to the network.
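As a concrete illustration of this segment-and-reassemble flow, the following sketch splits a packet into fixed-size cells and rebuilds it even when the cells arrive out of order. It is a minimal toy model: the function names and cell tuple layout are assumptions, and only the 64-byte cell size (described for the T4000 later in this paper) comes from the source.

```python
# Minimal sketch of the segment/buffer/reassemble flow described above.
# Illustrative only: the cell layout is an assumption, not Juniper's
# actual cell format.

CELL_PAYLOAD = 64  # bytes per cell payload, per the T4000 description

def segment_into_cells(packet: bytes, packet_id: int):
    """Split a packet into fixed-size cells, each tagged with the
    packet ID, a sequence number, and a last-cell flag."""
    cells = []
    for seq, off in enumerate(range(0, len(packet), CELL_PAYLOAD)):
        chunk = packet[off:off + CELL_PAYLOAD]
        last = off + CELL_PAYLOAD >= len(packet)
        cells.append((packet_id, seq, last, chunk))
    return cells

def reassemble(cells):
    """Rebuild the packet from cells that may arrive out of order."""
    ordered = sorted(cells, key=lambda c: c[1])  # order by sequence number
    return b"".join(chunk for _, _, _, chunk in ordered)

pkt = bytes(range(200))
cells = segment_into_cells(pkt, packet_id=1)
assert reassemble(reversed(cells)) == pkt  # survives arbitrary reordering
```

Segmenting into fixed-size cells is what lets the fabric interleave traffic from many packets fairly; the sequence numbers carried by each cell are what make reassembly order-independent, a point revisited in Appendix B.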
Switch Fabric

The switch fabric provides connectivity between the PFEs. In a single-chassis system, the switch fabric provides connectivity among all of the PFEs residing in the same chassis. In a multichassis system, the switch fabric provides connectivity among all of the PFEs in the different chassis of the routing node cluster. In either case, each PFE is considered to be logically adjacent to every other PFE connected to the switch fabric.

Routing Engine

The Routing Engine executes the Junos OS and creates the routing tables that are downloaded into the lookup ASICs of each PFE. An internal Ethernet connects the Routing Engine to the other subsystems of a T Series platform. Each subsystem includes one or more embedded microprocessors for controlling and monitoring the custom ASICs, and these microprocessors are also connected to the internal Ethernet.

T Series Forwarding Path Architecture

The function of the PFE can be understood by following the flow of a packet through the router: first into a PIC, then through the switching fabric, and finally out another PIC for transmission on a network link. Generally, the data flows through the PFE as follows:

• Packets enter the router through incoming PIC interfaces, which contain controllers that perform media-specific processing (and, optionally, intelligent packet processing).
• PICs pass the packets to the FPCs, where they are divided into cells and distributed to the router's buffer memory.
• The PFE performs route lookups, forwards the cells over the switch fabric to the destination PFE, reads the cells from buffer memory, reassembles the cells into packets, and sends them to the destination port on the outgoing PIC.
• The PIC performs encapsulation and other media-specific processing, and it sends the packets out into the network.

The use of highly integrated ASICs is critical to the delivery of industry-leading forwarding performance and packet processing. The T Series chipset is specifically designed and developed to represent the state of the art in carrier-class routers.

Data Flow Through the 3D FPC on the T4000 Router

3D refers to the newer Type 5 FPC introduced along with the T4000 system. To ensure the efficient movement of data through the 3D FPC on the T4000 router, the router is designed so that ASICs on the hardware components handle the forwarding of data. Data flows through the 3D FPC on the T4000 router in the following sequence (see Figure 5):

Figure 5. Data flow through the T4000 router
1. Packets arrive at an incoming PIC interface.
2. The PIC passes the packets to the FPC, where the Interface ASIC performs preclassification and sends the packet header to the Packet-Processing ASIC, which performs the Layer 2 and Layer 3 lookup. The Packet-Processing ASIC is also capable of Layer 4 through Layer 7 packet processing.
3. The Interface ASIC receives the modified header from the Packet-Processing ASIC, updates the packet, and divides it into 64-byte cells.
4. The Interface ASIC sends these 64-byte cells through the fabric-facing Switch Interface ASIC to the Switch Fabric, unless the destination is on the same Packet Forwarding Engine. In that case, the Interface ASIC sends the packets to the outgoing port without passing them through the Switch Fabric.
5. The Interface ASIC sends bandwidth requests through the Switch Fabric to the destination port.
6. The destination Interface ASIC sends bandwidth grants through the Switch Fabric to the originating Interface ASIC.
7. Upon receipt of each bandwidth grant, the originating Interface ASIC sends a cell through the Switch Fabric to the destination Packet Forwarding Engine.
8. The destination Interface ASIC receives cells from the Switch Fabric, reorders the received cells and reassembles them into packets, and passes the header to the Packet-Processing ASIC.
9. The Packet-Processing ASIC performs the route lookup, adds Layer 2 encapsulation, and sends the header back to the Interface ASIC.
10. The Interface ASIC appends the modified header received from the Packet-Processing ASIC and sends the packets to the outgoing PIC interface.
11. The outgoing PIC sends the packets out into the network.

Data Flow Through the Enhanced Scaling (ES) FPCs on T Series Routers

This chipset includes the following ASICs:

• Layer 2/Layer 3 Packet-Processing ASIC (L2/L3)
• Switch Interface ASIC
• T Series Lookup Processor ASIC
• Queuing and Memory Interface ASIC (Q&M)

The T Series ASICs leverage the conceptual framework, many of the building blocks, and the extensive operational experience gained from the Juniper Networks M Series Multiservice Edge Router chipset. Figure 6 demonstrates data flow through a T Series routing node by illustrating how the T Series chipset is arranged to implement a single instance of a PFE.

Figure 6. PFE and switch fabric using the T Series chipset ("to fabric" direction)
In this example, data flows in the following sequence:

1. Packets enter through an incoming PIC, which contains the Layer 1 interface chips, and are passed to the PFE on the originating FPC. The PIC is connected to the PFE on the FPC via a high-speed link (HSL).
2. The Layer 2/Layer 3 Packet-Processing ASIC parses the packets and divides them into cells. In addition, a behavior aggregate (BA) classifier determines the forwarding treatment for each packet.
3. The network-facing Switch Interface ASIC places the lookup key in a notification and passes it to the T Series Lookup Processor.
4. The Switch Interface ASIC also passes the data cells to the Queuing and Memory Interface ASICs for buffering on the FPC.
5. The T Series Lookup Processor performs the route lookup and forwards the notification to the Queuing and Memory Interface ASIC. In addition, if configured, filtering, policing, sampling, and multifield classification are performed at this time.
6. The Queuing and Memory Interface ASIC sends the notification to the switch-fabric-facing Switch Interface ASIC, which issues read requests to the Queuing and Memory Interface ASIC to begin reading data cells out of memory. The destination FPC is shown in the following figure.

Figure 7. Packet flow from fabric to egress PFE

7. Cells representing packets are received from the Switch Interface ASIC connected to the fabric. The cells are handed off to the Queuing and Memory Interface ASIC for buffering. Lookup keys and data pointers are sent to the T Series Lookup Processor.
8. The T Series Lookup Processor performs the route lookup and forwards the notification to the Queuing and Memory Interface ASIC, which forwards it to the network-facing Switch Interface ASIC.
9. The Switch Interface ASIC sends requests to the Queuing and Memory Interface ASIC to read the data cells out of memory, and it passes the cells to the Layer 2/Layer 3 Packet-Processing ASIC.
10. The Layer 2/Layer 3 Packet-Processing ASIC reassembles the cells into packets, performs the necessary Layer 2 encapsulation, and sends the packets to the outgoing PIC. Queuing policy and header rewrites occur at this time on the egress side.
11. The PIC passes the packets into the network.

The T Series chipset provides hardware-based forwarding performance and service delivery for IPv4 (unicast and multicast), IPv6 (unicast and multicast), and MPLS, while also having the flexibility to support other protocols in the future. The chipset was designed to provide the functionality needed for the development of single-chassis and multichassis systems.
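Step 2 above relies on a behavior aggregate (BA) classifier, which selects a forwarding treatment from bits already present in the packet header. The sketch below illustrates the idea with a DSCP-to-forwarding-class lookup; the mapping and the function name are illustrative assumptions, not a Junos default configuration.

```python
# Illustrative behavior aggregate (BA) classification: the forwarding
# treatment is chosen from header bits alone (here, the DSCP field).
# The mapping below is an assumed example, not a Junos default table.

DSCP_TO_FORWARDING_CLASS = {
    0b101110: "expedited-forwarding",  # EF (46): voice, lowest latency
    0b001010: "assured-forwarding",    # AF11 (10): priority data
    0b110000: "network-control",       # CS6 (48): routing protocol traffic
}

def ba_classify(dscp: int) -> str:
    """Return the forwarding class for a packet's DSCP value,
    falling back to best effort for unmapped code points."""
    return DSCP_TO_FORWARDING_CLASS.get(dscp, "best-effort")

assert ba_classify(0b101110) == "expedited-forwarding"
assert ba_classify(0) == "best-effort"
```

Because BA classification reads only fixed header bits, it can run at line rate before any route lookup; the richer multifield classification mentioned in step 5 happens later, in the lookup stage.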
T Series Switch Fabric Architecture

The N+1 redundancy of the T Series switch fabric architecture is what has enabled the in-service upgrade capability and the flexibility of scaling the T Series in the multichassis dimension since its inception. For example, just by swapping the power entry modules and the switch interface boards (using the N+1 redundancy), service providers can upgrade a T640 Core Router to a T1600 Core Router without disrupting service or changing customer-facing interfaces. Furthermore, connecting a T1600 platform to a TX Matrix Plus system is also a smooth in-service process, involving upgrading the switch fabric boards and the control boards on the T1600 and then connecting the T1600 to the central switching element, the TX Matrix Plus.

The five planes of the T Series switch fabric (Figure 8) are implemented using four operationally independent, parallel switch planes (labeled A through D) that are simultaneously active and an identical fifth plane (labeled E) that acts as a hot spare to provide N+1 redundancy.

Figure 8. Five switch fabric planes for the T Series

The Fabric ASIC provides non-blocking connectivity among the PFEs that populate a single-chassis system. On a single-chassis T Series system, each chassis contains a maximum of 8 FPCs, with each FPC supporting 2 PFEs (up to 50 Gbps each), for a total of 16 PFEs that communicate across the switch fabric. The required input and output aggregate bandwidth of the switch fabric exceeds the I/O capabilities of a single Fabric ASIC by a large margin. In a multichassis-capable T Series router, each PFE is therefore connected to four active switch planes, and each switch plane carries a portion of the required bandwidth. To guarantee that cells are evenly load-balanced across the active switch planes, each PFE distributes cells equally across the switch planes on a cell-by-cell basis rather than a packet-by-packet basis. For further design considerations of the T Series switch fabric, see Appendix B: Switch Fabric Properties.

Switch Fabric Operation

The switch fabric implements a "fairness protocol" that ensures fairness across source PFEs when congestion occurs. The process of transmitting a data cell across the switch fabric involves a request and grant protocol. The source PFE transmits a request across the switch fabric to the destination PFE. Each request for a given destination is transmitted across a different switch plane in round-robin order to distribute the load equally. When the request is received by the destination PFE, the destination PFE transmits a grant to the source PFE across the same switch plane on which the corresponding request was received. When the grant is received by the source PFE, the source PFE transmits the data cell to the destination PFE across the same switch plane on which the corresponding grant was received. This approach provides both a flow control mechanism for transmitting cells into the fabric and a mechanism to detect broken paths across the switch fabric.

Appendix B covers switch fabric design properties in more detail. In particular, two key features of the T Series switch fabric deserve special attention in terms of their ability to handle modern multiservice applications such as video delivery: QoS and multicast.
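Before turning to those two features, the request-grant exchange and the cell-by-cell plane selection described above can be summarized in a short sketch. This is a toy model under invented names (IngressPFE, ModelFabric), not Juniper's implementation; in particular, a real destination PFE paces its grants, which is what gives the fabric flow control.

```python
# Minimal model of the request/grant protocol and per-cell round-robin
# plane selection. Illustrative only.

from itertools import count

NUM_ACTIVE_PLANES = 4  # planes A-D active; plane E is the N+1 hot spare

class IngressPFE:
    def __init__(self):
        self._rr = count()  # round-robin plane selector for requests

    def send(self, fabric, dest, cells):
        for cell in cells:
            # Each request for a destination goes out on the next plane
            # in round-robin order, spreading load cell by cell rather
            # than packet by packet.
            plane = next(self._rr) % NUM_ACTIVE_PLANES
            if fabric.request_grant(plane, dest):
                # The grant returns on the same plane the request used,
                # so the data cell follows that same plane.
                fabric.deliver(plane, dest, cell)

class ModelFabric:
    def __init__(self):
        self.delivered = []

    def request_grant(self, plane, dest):
        # Immediate grant here; a congested egress PFE would delay
        # grants, throttling the source without dropping cells.
        return True

    def deliver(self, plane, dest, cell):
        self.delivered.append((plane, dest, cell))

fabric, pfe = ModelFabric(), IngressPFE()
pfe.send(fabric, dest="egress-pfe", cells=range(8))
print(fabric.delivered)  # eight cells spread two per plane, planes 0..3
```

Because the grant retraces the request's plane, a missing grant doubles as a probe of that specific plane, which is the broken-path detection property noted above and revisited in Appendix B.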
As shown in Figure 9, any crossbar endpoint (PFE) can theoretically become congested in the egress direction: multiple ingress PFEs can send traffic to one egress PFE, creating the potential for congestion. This condition can be handled in any number of ways (backpressure, rate limiting, fabric speedup, or a combination), but it must not result in the loss of priority traffic.
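One way to picture the fair share that the fabric aims for at an overcommitted egress PFE is a max-min fair allocation. The sketch below computes such an allocation; it is an analogy only, since the fabric approximates fairness by pacing grants per source, not by running an explicit algorithm.

```python
# Illustrative max-min fair allocation of an overcommitted egress PFE's
# bandwidth among competing ingress PFEs -- the kind of fairness the
# request-grant pacing is designed to approximate.

def max_min_fair(demands, capacity):
    """Sources below the running fair share keep their full demand;
    the leftover capacity is split evenly among the rest."""
    alloc = {src: 0.0 for src in demands}
    remaining, active = float(capacity), dict(demands)
    while active and remaining > 1e-9:
        share = remaining / len(active)
        satisfied = {s: d for s, d in active.items() if d <= share}
        if not satisfied:
            for s in active:          # everyone left gets an equal share
                alloc[s] += share
            break
        for s, d in satisfied.items():  # small demands are fully met
            alloc[s] += d
            remaining -= d
            del active[s]
    return alloc

# Three ingress PFEs demand 40 + 30 + 50 = 120 Gbps toward a 100 Gbps
# egress PFE: the 30 Gbps source is untouched, the rest split evenly.
print(max_min_fair({"pfe0": 40, "pfe1": 30, "pfe2": 50}, capacity=100))
# {'pfe0': 35.0, 'pfe1': 30.0, 'pfe2': 35.0}
```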
Multicast traffic must always coexist with unicast traffic, which is why the T Series switch fabric treats both traffic types equally: unicast and multicast traffic must conform to their QoS profiles, mapped into the fabric with two priority levels (note PFE 0 in Figure 9).

Figure 9. Multicast and unicast each utilizing fabric queues in T Series routers

Because the T Series Core Routers use a patented tree replication algorithm, they are able to distribute multicast replication across the available PFEs. Thus, in the very rare case of a congested egress PFE, T Series routers can easily apply backpressure notification from egress to ingress to limit the traffic and avoid drops. Competing designs that replicate in the fabric itself and allow multicast traffic to bypass the fabric queues risk indiscriminate drops under even minimal congestion.

Multichassis Switch Fabric (TX Matrix and TX Matrix Plus)

An increasingly popular approach to augmenting forwarding performance, boosting bandwidth density, and extending the deployable lifetime of core routers is to use a multichassis system. Multichassis systems are designed with an expandable switch fabric that allows service providers to grow systems in increments required by budget and traffic loads. While most service providers typically follow the technology curve and upgrade as soon as the next generation of routers comes along (mainly because of improved efficiencies such as higher capacity, better footprint, and lower power), Juniper's multichassis solution allows providers to grow node capacity either to bridge between generations or to build multi-terabit nodes.

Another common reason for a service provider to use a multichassis system is to prevent the proliferation of too many interconnects. More network elements mean more interconnects that do not carry revenue-generating traffic but merely ensure a full mesh. Interconnects are not a good way to scale: in addition to the obvious cost of the ports, there is also the cost of driving additional chassis to handle overhead connections.

Additionally, multichassis systems can be combined with an independent control plane such as the JCS1200 to create a virtualized core. The JCS1200 allows the creation of hardware-virtualized routers that can assume individual networking functions, such as core or aggregation; service functions (such as private VPNs); mobile packet backbones; or peering functions.

CLOS Topology

Figure 10 illustrates the topology of a typical three-stage CLOS fabric. The blue squares represent individual single-stage crossbar switches. The topology consists of multiple rows of single-stage crossbar switches arranged into three columns. The benefit of this topology is that it facilitates the construction of large, scalable, and non-blocking switch fabrics using smaller switch fabrics as the fundamental building block.
Figure 10. Three-stage CLOS topology

The topology of a CLOS network has each output port of the first stage connected to one of the crossbars in the second stage. Observe that a two-stage switch fabric would provide any-to-any connectivity (any ingress port to the fabric can communicate with any egress port from the fabric), but the path through the switch is blocking. The addition of the third stage creates a non-blocking topology by creating a significant number of redundant paths. It is the presence of these redundant paths that provides the non-blocking behavior of a CLOS fabric.

Juniper Networks Implementation

The Juniper Networks implementation of a CLOS fabric for a multichassis routing node is specifically designed to support the following attributes:

• Non-blocking
• Fair bandwidth allocation
• Maintains packet order
• Low latency for high-priority traffic
• Distributed control
• Redundancy and graceful degradation

The existence of multiple parallel paths through the switching fabric to any egress port gives the switch fabric its rearrangeably non-blocking behavior. Dividing packets into cells and then distributing them across the CLOS switch fabric achieves the same effect as moving (rearranging) connections in a circuit-switched network because all of the paths are used simultaneously. Thus, a CLOS fabric proves to be non-blocking for packet or cell traffic. This design allows PFEs to continue sending new traffic into the fabric, and as long as there are no inherent conflicts, such as an overcommitted output port, the switch remains non-blocking.

Finally, an important property of a CLOS fabric is that each crossbar switch is totally independent of the other crossbar switches in the fabric. Hence, this fabric does not require a centralized scheduler to coordinate the actions of the individual crossbars. Each crossbar acts independently of the others, which results in a switch fabric that is highly resilient to failures and scales to support the construction of extremely large fabrics.

Multichassis System Fabric

The T Series router multichassis fabric is constructed using a crossbar designed by Juniper Networks called the Fabric ASIC, which is the same Fabric ASIC used in a single-chassis T Series platform. Depending on its placement in the CLOS topology, each Fabric ASIC is a general building block that can perform stage 1, stage 2, or stage 3 functionality within the switch fabric. The CLOS fabric provides the interconnection among the PFEs in a T Series multichassis routing node (Figure 10). The switch fabric for a multichassis system is implemented using four operationally independent, but identical, switch planes (labeled A through D) that are simultaneously active and an identical fifth plane (labeled E) that acts as a hot spare to provide redundancy. Each plane contains a three-stage CLOS fabric built using Juniper Networks Fabric ASICs.
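The path diversity that underlies this design is easy to quantify: in a three-stage CLOS fabric, each middle-stage crossbar contributes one independent path between any ingress/egress pair. The sketch below (with invented helper names) makes this concrete under the assumption of 16 middle-stage switches, matching the 16-row topology drawn in Figure 10.

```python
# Sketch of path diversity in a three-stage CLOS fabric: every
# middle-stage crossbar provides one independent path between a given
# first-stage switch and a given third-stage switch, so a fabric with
# m middle switches offers m parallel paths per ingress/egress pair.

def clos_paths(ingress_sw: int, egress_sw: int, m: int):
    """Enumerate the paths between one first-stage switch and one
    third-stage switch; each path is identified by its middle switch."""
    return [(f"stage1[{ingress_sw}]", f"stage2[{mid}]", f"stage3[{egress_sw}]")
            for mid in range(m)]

paths = clos_paths(ingress_sw=0, egress_sw=15, m=16)
print(len(paths))  # 16 independent paths
print(paths[0])    # ('stage1[0]', 'stage2[0]', 'stage3[15]')

# Spraying cells across all m paths at once is what yields the
# rearrangeably non-blocking behavior without a central scheduler:
# the "rearrangement" happens implicitly, because every path carries
# a share of every flow.
```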
Figure 11. T Series multichassis switch fabric planes

The required input and output bandwidth of the switch fabric exceeds the I/O capabilities of a single plane. As with the single-chassis system, each PFE is connected to four active switch planes, with each switch plane providing a portion of the required bandwidth. To guarantee that cells are evenly load-balanced across the active switch planes, each PFE distributes cells equally across the four switch planes on a cell-by-cell basis rather than a packet-by-packet basis.

Figure 12. Multichassis system high-level overview

Line-card chassis are connected to switch-card chassis with fiber-optic cable. This connectivity uses the latest VCSEL technology to provide extremely high throughput, low power consumption, and low bit-error rates. An abstract view of the interconnections for a TX Matrix Plus multichassis system is shown in Figure 13.
Figure 13. Switch-card chassis to line-card chassis interconnections

In all T Series systems, there is a fifth data plane on hot standby in the event any of the other four data planes fail; the same is true for a TX Matrix Plus system. Redundancy is also provided in the center stage, as the system functions with any two of the center-stage chassis. Each center-stage chassis carries no more than two data planes, so even if one chassis fails, forwarding still proceeds at full throughput for large packets. Five chassis can be supported by the architecture.

Conclusion

The T Series demonstrates how Juniper has evolved its router architecture to achieve substantial technology breakthroughs in packet forwarding performance, bandwidth density, IP service delivery, and system reliability. At the same time, the integrity of the original design has made these breakthroughs possible. Not only do T Series platforms deliver industry-leading scalability, they do so while maintaining feature and software continuity across all routing platforms. Whether deploying a single-chassis or multichassis system, service providers can be assured that the T Series satisfies all networking requirements.

About Juniper Networks

Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon, and systems that transform the experience and economics of networking. The company serves customers and partners worldwide. Additional information can be found at www.juniper.net.
Appendix A: References

Applications for an Independent Control Plane: www.juniper.net/us/en/local/pdf/app-notes/3500134-en.pdf
Control Plane Scaling and Router Virtualization: www.juniper.net/us/en/local/pdf/whitepapers/2000261-en.pdf
Efficient Scaling for Multiservice Networks: www.juniper.net/us/en/local/pdf/whitepapers/2000207-en.pdf
Energy Efficiency for Network Equipment: www.juniper.net/us/en/local/pdf/whitepapers/2000284-en.pdf
Network Operating System Evolution: www.juniper.net/us/en/local/pdf/whitepapers/2000264-en.pdf
Virtualization in the Core of the Network: www.juniper.net/us/en/local/pdf/whitepapers/2000299-en.pdf
Appendix B: Switch Fabric Properties

The T Series switch fabric for both single-chassis and multichassis systems is specifically designed to provide the following attributes:

• Non-blocking
• Fair bandwidth allocation
• Maintains packet order
• Low latency for high-priority traffic
• Distributed control
• Redundancy and graceful degradation

Non-blocking

A switch fabric is considered non-blocking if two traffic flows directed to two different output ports never conflict. In other words, the internal connections within the switch allow any ingress PFE to send its fair share of bandwidth to any egress PFE simultaneously.

Figure 14. A 4 x 4 non-blocking crossbar switch

Figure 14 illustrates the internal topology for a non-blocking, single-stage, 4-port crossbar switch. The challenge when building a crossbar is that it requires n² communication paths internal to the switch. In this example, the 4-port crossbar requires a communication path connecting each input port to each output port, for a total of 16 communication paths. As the number of ports supported by the crossbar increases, the n² communication path requirement becomes an implementation challenge.

Fair Bandwidth Allocation

In a production network, it is impossible to control the pattern of ingress traffic so that an egress port of the crossbar switch is never overcommitted. An egress port becomes overcommitted when there is more input traffic destined for the egress port than the egress port can forward. In Figure 15, the aggregate amount of traffic that ingress ports 1, 2, and 3 forward to egress port 4 is greater than the capacity of egress port 4. Fair bandwidth allocation is concerned with the techniques that a switch uses to share the bandwidth among competing ingress flows to an overcommitted egress port.

Figure 15. An overcommitted egress switch port

The T Series router switch fabric provides fairness by ensuring that all ingress PFEs receive an equal amount of bandwidth across the switch fabric when transmitting cells to an oversubscribed egress PFE. Providing this type of fairness across all streams is hard to support because it is difficult to keep track of all users of the bandwidth to the egress PFE. The challenge is that if there are n ports on the switch, then there are n² streams of traffic through the switch.
Since most switch architectures are not capable of keeping track of n² individual streams, they are forced to aggregate traffic streams, making it impossible to be completely fair to each individual stream. The T Series switch fabric can monitor the n² streams so that each stream receives its fair share of the available fabric bandwidth to an oversubscribed egress PFE.

Maintains Packet Order

The potential for misordering cells as they are transmitted across parallel switch planes to the egress PFE is eliminated by the use of sequence numbers and a reorder buffer. In this design, the Switch Interface ASIC on the ingress PFE places a sequence number into the cell header of each cell that it forwards into the fabric. On the egress PFE, the Switch Interface ASIC buffers all cells that have sequence numbers greater than the next sequence number it expects to receive. If a cell arrives out of order, the Switch Interface ASIC buffers the subsequent cells until the correct in-order cell arrives, and then the reorder buffer is flushed. The reorder buffer and sequence number space are large enough to ensure that packets (and the cells in a given packet) are not reordered as they traverse the switch fabric.

Low Latency for High-Priority Traffic

Some types of traffic, such as voice or video, have both latency and bandwidth requirements. The T Series switch fabric is designed so that blocking in the fabric is extremely rare, because the ports from the ingress PFE into the fabric are considerably larger than the network ports into the ingress PFE. In the rare case that congestion does occur, each ingress PFE is allocated priority queues into the switch fabric. As previously discussed, the switch fabric fairly allocates bandwidth among all of the ingress PFEs that are competing to transmit cells to an overcommitted egress PFE. The use of priority queues from the ingress PFE into the switch fabric provides two important benefits:

• The latency for high-priority traffic is always low because the fabric never becomes congested.
• The CoS intelligence required to perform admission control into the fabric is implemented in the PFEs.

This design allows the switch fabric to remain relatively simple because CoS is implemented not inside the fabric but at its edges. It also scales better than a centralized design because it is distributed and grows with the number of FPCs.

Distributed Control

A T Series routing platform does not have a centralized controller that is connected to all of the components in the switch fabric. Hence, within the fabric, if any component fails, the other components around the failed component continue to operate. Additionally, a centralized control channel does not need to be operational for the switch fabric to function.

Redundancy and Graceful Degradation

Each Switch Interface ASIC monitors the request-grant mechanism. If the ingress PFE expects a grant for an outstanding request but the grant does not return after a reasonable amount of time, or a data cell is lost, the ingress PFE marks the destination PFE as unreachable on the plane that was used to send the request. If a switch plane fails, only the cells that are currently in transit across the switch plane are lost, because buffering does not occur within a switch plane and the request-grant mechanism ensures that cells are never transmitted across a failed switch plane.
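As a rough illustration of this failure-handling behavior, the sketch below models an ingress PFE that stops using a plane for a given destination once a grant times out; the class and method names are hypothetical, and the bookkeeping is deliberately simplified.

```python
# Illustrative sketch of grant-timeout failure detection: a request
# that goes unanswered marks that (plane, destination) pair as
# unreachable, and traffic shifts to the remaining planes.

class PlaneHealth:
    def __init__(self, num_planes=5):
        self.num_planes = num_planes      # four active planes plus spare
        self.unreachable = set()          # (plane, dest) pairs to avoid

    def usable_planes(self, dest):
        """Planes still believed able to reach this destination."""
        return [p for p in range((self.num_planes))
                if (p, dest) not in self.unreachable]

    def record_grant_timeout(self, plane, dest):
        """A request went unanswered within the allowed time: stop using
        this plane for this destination; other destinations are not
        affected, so a partial plane fault degrades gracefully."""
        self.unreachable.add((plane, dest))

health = PlaneHealth()
health.record_grant_timeout(plane=2, dest="pfe3")
print(health.usable_planes("pfe3"))  # [0, 1, 3, 4] -- plane 2 avoided
print(health.usable_planes("pfe9"))  # all five planes still usable
```

Tracking reachability per plane and per destination, rather than per plane alone, is what allows traffic to route around an individual broken path, as the next paragraph describes.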
The request-grant mechanism allows a failed component within a switch plane to be removed from service by diverting traffic around the faulty component, or all traffic using the faulty plane can be switched to a redundant plane. If there are a significant number of faults on a given switch plane, or the plane must be swapped out for maintenance, the chassis manager coordinates moving traffic to the redundant switch plane. Each step in the migration of traffic to the redundant plane involves moving only a small fraction of overall traffic. This design allows the system to remain operational with no significant loss in fabric performance.

Corporate and Sales Headquarters: Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA. Phone: 888.JUNIPER (888.586.4737) or 408.745.2000. Fax: 408.745.2100. www.juniper.net

APAC Headquarters: Juniper Networks (Hong Kong), 26/F, Cityplaza One, 1111 King's Road, Taikoo Shing, Hong Kong. Phone: 852.2332.3636. Fax: 852.2574.7803

EMEA Headquarters: Juniper Networks Ireland, Airside Business Park, Swords, County Dublin, Ireland. Phone: 35.31.8903.600. EMEA Sales: 00800.4586.4737. Fax: 35.31.8903.601

To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or an authorized reseller.

Copyright 2012 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

2000302-003-EN Mar 2012